You might say 2017 is the year of Assistant. While Google’s AI chatbot launched last year, it has really come into its own over the past few months, with an array of third-party actions, support for all Marshmallow and Nougat phones, Android Wear integration, and most recently, the ability to access millions of recipes. But now it’s ready to take the biggest leap of all.
Google is opening up Assistant to developers with a new preview SDK, as it aims to greatly expand its presence beyond Android devices and its Home speaker. As product manager Chris Ramsdale writes in a blog post, “With this SDK you can now start building your own hardware prototypes that include the Google Assistant, like a self-built robot or a voice-enabled smart mirror. This allows you to interact with the Google Assistant from any platform, not just Android.”
“On the one hand, it is very exciting that these ideas are being discussed,” says Miguel A.L. Nicolelis, the Duke University neuroscientist whose lab has been at the center of brain-machine interface research since the late 1990s. “But the announcement was more like science fiction than something grounded in physical reality.”
Facebook wants to outrace the competition to the next big computing platform, whether it’s virtual reality, augmented reality, or now brain-machine interfaces. Apple and Google beat Mark Zuckerberg and company to the smartphone, and he doesn’t want to lose again. But as always in Silicon Valley, there are other motivations at work here. Facebook is also a company that wants to be seen as the kind of innovator that will do good for the world, especially at a time when so many people are questioning the company’s impact on public discourse.
Earlier this year, artificial intelligence scientist Sebastian Thrun and colleagues at Stanford University demonstrated that a “deep learning” algorithm was capable of diagnosing potentially cancerous skin lesions as accurately as a board-certified dermatologist.
The cancer finding, reported in Nature, was part of a stream of reports this year offering an early glimpse into what could be a new era of “diagnosis by software,” in which artificial intelligence aids doctors—or even competes with them.
Experts say medical images, like photographs, X-rays, and MRIs, are a nearly perfect match for the strengths of deep-learning software, which has in the past few years led to breakthroughs in recognizing faces and objects in pictures.
This week, Toulon, France hosts the 5th International Conference on Learning Representations (ICLR 2017), a conference focused on how one can learn meaningful and useful representations of data for machine learning. ICLR includes conference and workshop tracks, with invited talks along with oral and poster presentations of some of the latest research on deep learning, metric learning, kernel learning, compositional models, non-linear structured prediction, and issues regarding non-convex optimization.
Researchers from the Georgia Institute of Technology have created a new, more foolproof interface for controlling robots. Instead of having to steer a robot through six separate directional controls, an operator can simply tap on an object on a touchscreen. The robot will figure out the best way to navigate to that object and grab it.
NSF has released the FY2018 Big Data Regional Innovation Hubs: Establishing Spokes to Advance Big Data Applications (BD Spokes) solicitation. The deadline is June 19.