from Boston Consulting Group; Akash Bhatia, Zia Yusuf, David Ritter and Nicolas Hunke
More than 400 companies offer IoT platforms today. Enterprise software and service companies and IoT startups account for the largest shares (22% and 32%, respectively) of companies that claim to offer IoT platforms. In addition, industrial technology providers (at 18%) are offering IoT platforms in an effort to shift away from a hardware-centric business model. Internet companies and telcos make up the remainder of the IoT platform vendors.
Although some companies are emerging from the pack as possible leaders, choosing the right provider from the plethora of companies vying for the platform market remains a challenge. Buyers are looking for insight into how to choose an IoT platform today. The most useful approach, we believe, includes analyzing the ecosystem from the vendor’s perspective and identifying the key factors that will determine which companies win the IoT platform wars.
from the Remote Sensing in Ecology and Conservation blog, Timothy G. O’Brien
Article 29 of the Nagoya Protocol mandates all signatories to the Convention on Biological Diversity (CBD) to monitor their implementation of CBD obligations and document progress toward Aichi 2020 targets. Given the multi-dimensional character of biodiversity, a single, comprehensive metric is clearly not feasible. Rather, we rely on a range of factors to measure the status and conservation of biodiversity. Pereira et al. (2013) proposed a list of essential biodiversity variables (EBVs) to enable the study, reporting and management of biodiversity change.
Three EBVs of particular interest to wildlife biologists and conservationists are species distribution, population abundance, and taxonomic diversity. Global trend dashboards using such data include the Wild Bird Index, the Living Planet Index and the Wildlife Picture Index. The Wildlife Picture Index uses data from camera trap images to generate trends in species abundance and distribution. Camera trap data can, of course, also be used to assess trends in species composition and taxonomic diversity. Automated camera traps remotely sense the passage of moderate-sized wildlife, are especially useful for monitoring terrestrial and semi-terrestrial mammals and birds, and are, therefore, an excellent method for gathering data on EBVs.
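To make the idea concrete, here is a toy sketch of turning camera trap records into a simple per-species detection-rate trend. The numbers are invented, and this is not the Wildlife Picture Index's actual occupancy-based methodology, just an illustration of the kind of summary such data supports:

```python
from collections import defaultdict

# (year, species, independent detections, camera trap-nights), aggregated per year
records = [
    (2015, "muntjac", 14, 900),
    (2015, "sun bear", 3, 900),
    (2016, "muntjac", 11, 1100),
    (2016, "sun bear", 5, 1100),
]

# Naive trend signal: detections per 100 trap-nights, by species and year
rates = defaultdict(dict)
for year, species, detections, trap_nights in records:
    rates[species][year] = 100 * detections / trap_nights

for species, by_year in sorted(rates.items()):
    print(species, {year: round(rate, 2) for year, rate in sorted(by_year.items())})
```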
There’s no denying that advancements in artificial intelligence and machine learning are occurring faster than most experts anticipated. The era of self-driving cars is just on the horizon, the Jabberwacky chatbot can convince you it’s a human, and Google’s Deep Dream can produce creative works of art. Advancements in networking, information processing algorithms, and data storage technologies are enabling computers to acquire complex skillsets and capabilities. In the meantime, the world is left to wonder exactly how these technologies will be implemented and how they will impact existing markets and industries. The predictive maintenance industry is no exception. There is no question that predictive maintenance is a superior strategy in comparison to common preventive maintenance and especially reactive maintenance. According to the Department of Energy’s operations and maintenance best practices guide, a predictive maintenance strategy can realize savings of 30-40% and 8-12% over reactive and preventive strategies, respectively.
The benefits of a predictive program originate from the fact that it is a strategy driven by data such as vibration, ultrasound, temperature, electrical, and other measurements. These data ultimately drive informed decisions, such as identifying the specific repair a machine requires or timing that repair to minimize the risk of catastrophic failure.
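As a minimal sketch of that idea (invented readings, not a production predictive-maintenance system), one could flag a machine for inspection when its recent vibration readings drift well beyond their historical baseline:

```python
import statistics

baseline = [0.21, 0.22, 0.20, 0.23, 0.21, 0.22, 0.20]  # healthy vibration RMS, mm/s
recent = [0.24, 0.29, 0.35, 0.41]                       # latest readings from the same sensor

mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

for reading in recent:
    z = (reading - mean) / stdev          # how far outside the healthy baseline?
    if z > 3:
        print(f"{reading} mm/s is {z:.1f} sigma above baseline: schedule an inspection")
```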
I’ve been around computing since my older brother got a Commodore 64 for Christmas in 1983. I took my first “business machines” class in high school in 1991, attended my first computer science class in 1994 (learning Pascal), and moved to Silicon Valley in 1997 after Cisco converted my internship into a permanent position. I worked in Cisco’s IT department for several years before moving to their engineering group where I designed networking protocols. I went to grad school at MIT in 2004 where I met the founders of several companies in Y Combinator’s first couple of batches and worked on Hubspot before it was Hubspot. After writing several books for O’Reilly and attending the first O’Reilly Web 2.0 and MIT Sloan Sports Analytics conferences, I started a “Web 2.0 for Sports” company called StatSheet.com in 2007, which in 2010 pivoted into the first Natural Language Generation (NLG) company called Automated Insights. I recently stepped back at Ai to become a Ph.D. student at UNC studying Artificial Intelligence.
All of that to say I’ve had a bird’s-eye view of the incredible innovation that’s occurred in technology over the past 30 years. I’ve been lucky to be in the right place at the right time.
At a demo showcase held at District Hall in South Boston last week, everyone seemed excited about the potential of voice-driven technology. A local Amazon employee, Robert McCauley, talked about how easy it is to build games and other apps for the Echo device. He also plugged a new device, the Echo Show, which includes a screen for conducting video chats with friends or seeing who’s standing at your front door.
The showcase also included entrepreneurs like Scott Cohen of BigR.io, a Concord consulting firm that is creating its own intelligent persona, named Jaxon, to help retailers conduct conversations with prospective customers, answer questions, and — ideally — close the deal.
Boston-based Vesper Technologies was showing low-power microphones that could be built into all sorts of devices — from smartphones to speakers to trash cans — founder Matt Crowley explained, so that you can control them with your voice.
The central idea behind the global Vision Zero movement is that traffic crashes are preventable.
At Microsoft, we believe that data science and complex machine learning can aid cities in their life-saving Vision Zero commitment. That’s why we partnered with Datakind in 2015, and since then, we’ve worked with them to use city-specific data to identify where traffic safety conditions could be improved to ease traffic and protect citizens.
Today, we are releasing this video case study to showcase the project, its learnings, and its future potential.
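The case study itself is in the video, but the flavor of the analysis can be sketched in a few lines. This is an illustration with invented data and column names, not the actual DataKind/Microsoft pipeline: rank intersections by crash counts to flag candidate locations for safety improvements.

```python
import pandas as pd

crashes = pd.DataFrame({
    "intersection": ["5th & Main", "5th & Main", "Oak & 12th", "Elm & 3rd", "Oak & 12th", "5th & Main"],
    "severity": ["injury", "fatal", "injury", "property", "injury", "injury"],
})

ranked = (
    crashes.assign(severe=crashes["severity"].isin(["injury", "fatal"]))
           .groupby("intersection")
           .agg(total_crashes=("severity", "size"), severe_crashes=("severe", "sum"))
           .sort_values(["severe_crashes", "total_crashes"], ascending=False)
)
print(ranked)  # intersections with the most (and most severe) crashes first
```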
The new Computing Research Association (CRA) report “Generation CS: Computer Science Undergraduate Enrollments Surge Since 2006” (http://cra.org/data/Generation-CS/) describes the dramatic increase in enrollments in computer science (CS) over the last 11 years, with an especially rapid increase since 2009. Sixty percent of academic units surveyed more than doubled their enrollment in that time. The report describes a new generation of undergraduate students who realize the importance of computing education.
Amaury Sport Organisation (A.S.O.), organizers of the Tour de France, and Dimension Data, the Official Technology Partner of the Tour de France, announced the introduction of machine learning technologies at this year’s Tour de France to give cycling fans across the globe an unprecedented experience of the event. The race begins in Düsseldorf on Saturday and finishes at the Champs-Elysees in Paris on 23 July.
This year, Dimension Data’s data analytics platform, which was developed in partnership with A.S.O., incorporates machine learning and complex algorithms that combine live and historical race data to provide even deeper levels of insight as the race unfolds. Fans will also benefit from rider profiles to understand more about environments and circumstances in which riders perform best.
As part of a new pilot this year, A.S.O. and Dimension Data are exploring the role of predictive analytics technologies to assess the likelihood of various race scenarios, such as whether the peloton will catch the breakaway riders at certain stages of the race.
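As an illustration of how such a scenario could be framed (a sketch with invented features and data, not A.S.O. and Dimension Data’s actual model), catching the breakaway can be treated as a binary-classification problem over historical stage snapshots:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# One row per historical stage snapshot:
# [gap to breakaway (s), km remaining, breakaway size, average gradient (%)]
X_train = np.array([
    [120, 40, 3, 0.5],
    [300, 10, 2, 1.0],
    [45, 60, 6, -0.2],
    [600, 25, 4, 4.0],
])
y_train = np.array([1, 0, 1, 0])  # 1 = the peloton caught the breakaway

model = LogisticRegression().fit(X_train, y_train)

# Probability of a catch with a 90 s gap, 30 km to go, five riders, flat run-in
print(model.predict_proba([[90, 30, 5, 0.0]])[0, 1])
```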
In five years, a sky-scanning telescope in Chile will begin hunting the heavens for asteroids on a collision course with Earth, and scientists at the University of Washington are at the forefront of work to spot them.
We introduce a deep learning approach for denoising Monte Carlo-rendered images that produces high-quality results suitable for production. We train a convolutional neural network to learn the complex relationship between noisy and reference data across a large set of frames with varying distributed effects from the film Finding Dory. The trained network can then be applied to denoise new images from other films with significantly different style and content, such as Cars 3, with production-quality results.
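For readers who want a feel for the setup, here is a minimal sketch of the general idea: a small residual CNN trained to map noisy render crops to their converged references. It is not the paper’s architecture, and the shapes and hyperparameters are purely illustrative.

```python
import torch
import torch.nn as nn

class TinyDenoiser(nn.Module):
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, kernel_size=3, padding=1),
        )

    def forward(self, noisy):
        # Predict a residual correction and add it back to the noisy input
        return noisy + self.net(noisy)

model = TinyDenoiser()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()

# Stand-ins for (N, 3, H, W) crops of noisy frames and their converged references
noisy = torch.rand(4, 3, 64, 64)
reference = torch.rand(4, 3, 64, 64)

for _ in range(10):  # a few illustrative training steps
    optimizer.zero_grad()
    loss = loss_fn(model(noisy), reference)
    loss.backward()
    optimizer.step()
```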
Much work and many tools are still needed to integrate artificial intelligence into the software engineering workflow, noted Peter Norvig, Google’s director of research, speaking at the O’Reilly Artificial Intelligence conference in New York last week.
Fundamentally, AI software is different from other forms of widely used software, said Norvig, who is also a co-author of perhaps the field’s most popular textbook, Artificial Intelligence: A Modern Approach.
“One way of looking at the traditional model of programming is to look at the programmer as a micro-manager, who tells a computer exactly how to do something step by step,” he said. With AI, we should look at the programmer more as a teacher than as a micro-manager.
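A small contrast (my example, not Norvig’s) makes the distinction concrete: the micro-manager spells out the rule explicitly, while the teacher supplies labeled examples and lets a model infer the rule.

```python
from sklearn.tree import DecisionTreeClassifier

# Micro-manager style: the programmer states the decision logic explicitly
def is_spam_rule(subject: str) -> bool:
    lowered = subject.lower()
    return "free" in lowered or "winner" in lowered

# Teacher style: the programmer provides labeled examples instead
subjects = ["FREE vacation", "Meeting at 3pm", "You are a WINNER", "Quarterly report"]
labels = [1, 0, 1, 0]  # 1 = spam
features = [[s.lower().count("free"), s.lower().count("winner")] for s in subjects]

model = DecisionTreeClassifier().fit(features, labels)

print(is_spam_rule("Free tickets inside"))  # rule-based answer: True
print(model.predict([[1, 0]]))              # learned answer: [1]
```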
We are at a very early stage with these technologies and a group of the world’s leading neuroscientists, including Dr Birbaumer, are calling for ethical guidelines to be implemented now. According to Jens Clausen from the Center for Ethics in the Sciences at the University of Tübingen:
“Technological advances in the BMI field are currently developing at such a rapid rate that it is high time to define a legal and ethical framework.”
Materials scientists from Duke University have demonstrated a shortcut to the traditional trial-and-error process. Using high-throughput computational models that predict magnetism in new materials, the scientists have successfully developed, atom by atom, two new magnetic materials: cobalt, manganese and titanium (Co2MnTi); and manganese, platinum and palladium (Mn2PtPd).
Using the computer model, the researchers focused on Heusler alloys, or materials made with atoms from three different elements arranged in one of three different structures. With 55 elements to choose from (and all potential arrangements), the manual process would have required testing 236,115 combinations. The model permitted the team to test hundreds of thousands of possibilities rapidly, resulting in two magnets that could be fabricated at thermodynamic equilibrium.
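One way to arrive at the 236,115 figure, assuming each candidate alloy X2YZ consists of one doubled element plus an unordered pair drawn from the remaining elements, evaluated in each of the three structure types:

```python
from itertools import combinations

N_ELEMENTS = 55   # candidate elements in the screening
N_STRUCTURES = 3  # Heusler structure types considered

compositions = 0
for x in range(N_ELEMENTS):  # the doubled element X in X2YZ
    others = [e for e in range(N_ELEMENTS) if e != x]
    compositions += sum(1 for _ in combinations(others, 2))  # unordered {Y, Z} pairs

print(compositions)                 # 78,705 distinct X2YZ compositions
print(compositions * N_STRUCTURES)  # 236,115 composition-structure combinations
```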
The world’s leading drug companies are turning to artificial intelligence to improve the hit-and-miss business of finding new medicines, with GlaxoSmithKline unveiling a new $43 million deal in the field on Sunday.
Trento, Italy. The event is organised by the Italian Football Federation (FIGC), in collaboration with the University of Trento and the Autonomous Province of Trento. October 14-15. [registration required]
This package contains a Python interface for Stanford CoreNLP, including a reference implementation for interfacing with the Stanford CoreNLP server. The package also contains a base class to expose a Python-based annotation provider (e.g. your favorite neural NER system) to the CoreNLP pipeline via a lightweight service.
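A minimal usage sketch, assuming the package exposes a client class along the lines of CoreNLPClient and that a CoreNLP server (or the CoreNLP jars) is available locally; the exact names here may differ from the package’s documented API:

```python
import corenlp  # the package's import name is assumed here

text = "Stanford CoreNLP annotates this sentence."

# Client wrapping a locally running CoreNLP server (class and method names assumed)
with corenlp.CoreNLPClient(annotators=["tokenize", "ssplit", "pos", "ner"]) as client:
    ann = client.annotate(text)   # returns an annotated document
    sentence = ann.sentence[0]
    print([token.word for token in sentence.token])
```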
Announcing MIOpen 1.0, our new foundation for deep learning acceleration, which introduces support for convolutional neural network acceleration and is built to run on top of the ROCm software stack!
Recently, one of our founders, Dino Citraro, attended the Rostock Retreat in Germany as a keynote speaker. Prospective participants were asked to visualize one of three demographic datasets as part of their application. In preparation for his talk, we decided to play around with a couple of those datasets as case studies of how we explore and visualize data.