MIT’s Department of Brain and Cognitive Sciences (BCS) recently launched a new post-baccalaureate program, and applications for the 2017-2018 academic year are now open.
A Stanford University team of computer scientists is working to overcome artificial intelligence’s lack of social intelligence. They’ve created the Jackrabbot, a one-meter-high robot that can travel up to five miles per hour and, incidentally, looks adorable in a hat and tie.
Equipped with motion sensors and software that utilizes an algorithm based on hours of aerial video footage of busy sidewalks, Jackrabbot goes on regular expeditions through Stanford’s busy campus making on-the-fly judgments about right-of-way and personal space. Since its March 2015 debut, the robot has made several dozen outings to test its software.
New remote sensing maps of the forest canopy in Peru test the strength of current forest protections and identify new regions for conservation effort, according to a report led by Carnegie’s Greg Asner published in Science.
Asner and his Carnegie Airborne Observatory team used their signature technique, called airborne laser-guided imaging spectroscopy, to identify preservation targets by undertaking a new approach to study global ecology—one that links a forest’s variety of species to the strategies for survival and growth employed by canopy trees and other plants. Or, to put it in scientist-speak, their approach connects biodiversity and functional diversity.
Company Data Science News
xnor.ai spun out from Seattle’s Allen Institute for Artificial Intelligence (AI2) with $2.6m in funding. Oren Etzioni, head of AI2, said the lightweight technology “supports increased battery life, strong privacy (data stays on the device), and disconnected use (the device doesn’t have to connect to the internet),” overcoming previous computational and battery limitations associated with AI processes.
Shelf Engine eliminates food waste by using AI to help prepared-food retailers predict how many of which food supplies to order. If you are interested in FoodTech, there are great meetup groups in New York, LA, and San Francisco.
The fashion + AI space is more challenging than it looks. Rent the Runway, StitchFix, and many others have tried to combine recommender systems and online ordering. Results are lukewarm. Maybe this new Google + H&M partnership that throws geotracking into the model will move the needle. Or maybe it will at least recommend clothes from stores in your area to facilitate the last-mile returns problem.
David Ferrucci, the man who led the team that invented IBM Watson, worked at the hedge fund Bridgewater for several years. He has now launched Elemental Cognition, an AI firm to watch.
Microsoft Cognitive Services offers 25 APIs for AI in five domain areas: vision, speech, language, knowledge, and search.
Minje Kim, an assistant professor of intelligent systems engineering at the School of Informatics and Computing at IU Bloomington, has received a gift from Intel to pursue a method of lowering the power and computing cost of deep learning processes in artificial intelligence. Intel sought a portfolio of research projects focused on compelling new human-computer interaction advancements, an area it sees as being on the precipice of a breakthrough.
As smart devices have become more ubiquitous, advances in deep learning have allowed AI to reach near-human performance on some tasks. Deep learning allows complicated intelligence jobs, such as computer vision, near real-time language translation, and music recognition, to be performed quickly, but such computing comes at a cost. Because neural networks represent each of the millions of parameters of a computation in up to 64-bit form, the computations required are both sizeable and power-hungry.
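One common route to the power and memory savings described above is low-precision arithmetic. The sketch below is a hedged illustration (not Kim's actual method, and all values invented): linear 8-bit quantization stores each weight in one byte instead of eight, with rounding error bounded by one quantization step.

```python
import numpy as np

# Toy linear 8-bit quantization of 64-bit neural-network weights.
rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.5, size=1_000_000)  # float64: 8 bytes/param

scale = np.abs(weights).max() / 127.0           # map range onto int8
q = np.round(weights / scale).astype(np.int8)   # int8: 1 byte/param
dequant = q.astype(np.float64) * scale          # approximate reconstruction

print(weights.nbytes // q.nbytes)               # prints 8 (8x smaller)
print(np.abs(weights - dequant).max() <= scale) # prints True (bounded error)
```

Real low-power inference schemes go further (binary weights, shared lookup tables), but the storage-versus-accuracy trade-off is the same in spirit.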
Switzerland is launching a National Center for Data Science to foster innovation in data science, multidisciplinary research, and open science, with applications ranging from personalized health to environmental issues. The inauguration of the Swiss Data Science Center (SDSC) is taking place today in Bern. The center is a joint venture between EPFL and ETH Zürich with offices in both Lausanne and Zürich, and the initiative will ensure that Switzerland possesses expertise and excellence in data science while striving to be globally competitive.
How much power should the U.S. government have to compel technology companies to help it access their users’ encrypted information? Last year’s dramatic showdown between the FBI and Apple fizzled before the courts could shed light on the answer, but the contentious debate is bound to flare up again before long in Washington. What might the next round have in store?
Traditional GPR data analysis uses a mathematical method that strongly simplifies the presumed path of the wave through the ground, limiting the clarity of perceived underground features. In the newer approach used by the team, a technique called full-waveform inversion more accurately reconstructs the wave’s path, allowing for improved resolution.
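To make the contrast concrete, here is a deliberately tiny sketch (all distances, speeds, and pulse shapes invented, not from the study): a pulse travels a known distance at an unknown speed, and instead of picking a single arrival time we search for the velocity whose simulated trace best matches the entire recorded waveform, which is the basic idea behind full-waveform inversion.

```python
import numpy as np

# A Gaussian pulse travels distance d at unknown speed v; recover v by
# minimizing the misfit over the whole waveform, not one picked arrival.
d = 10.0                       # meters (illustrative)
v_true = 0.12                  # m/ns, a GPR-like velocity (illustrative)
t = np.linspace(0, 200, 2001)  # recording window in ns

def pulse(t, v):
    """Simulated trace: Gaussian pulse arriving at travel time d/v."""
    return np.exp(-0.5 * ((t - d / v) / 3.0) ** 2)

observed = pulse(t, v_true)

# Grid search over candidate velocities for the best full-waveform fit.
v_grid = np.linspace(0.05, 0.25, 2001)
misfit = [np.sum((pulse(t, v) - observed) ** 2) for v in v_grid]
v_est = v_grid[int(np.argmin(misfit))]
print(abs(v_est - v_true) < 1e-3)  # prints True
```

Production full-waveform inversion solves a wave equation and uses gradient-based optimization over millions of parameters, but the objective, matching the full recorded signal, is the same.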
As his fresh salad and sandwich business started taking off a few years ago, Stefan Kalb realized he had a problem. Kalb, co-founder of Seattle-based Molly’s, learned how difficult it was to predict customer order patterns. At the time, Molly’s had 30 different menu items and delivered orders to more than 200 different customers each week.
The root of the problem was how Molly’s made bulk orders for perishable foods: the orders were often imperfect. When orders should have been increased, they were decreased; when they should have been decreased, they were increased. Sometimes, when an order needed to be changed, nothing was adjusted at all.
In response, Kalb channeled his entrepreneurial energy and built software that helped Molly’s purchase the right amount of bulk food to fulfill customer orders. The financial impact was so substantial that Kalb has turned that idea into an entirely new company.
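The ordering problem Kalb faced has a classic textbook form, the newsvendor model: order the demand quantile set by the ratio of shortage cost to total cost. The sketch below is hypothetical (all costs and demand figures invented), not Shelf Engine's actual software.

```python
import numpy as np

# Newsvendor rule: balance spoilage cost against lost-sales margin.
rng = np.random.default_rng(1)
demand_history = rng.poisson(40, size=365)  # daily sandwich demand (toy)

cost_spoiled = 2.0   # loss per unit over-ordered (perishable)
profit_short = 3.0   # margin lost per unit under-ordered
critical_ratio = profit_short / (profit_short + cost_spoiled)  # 0.6

# Optimal order is the critical-ratio quantile of forecast demand.
order_qty = int(np.quantile(demand_history, critical_ratio))
print(order_qty)  # a bit above median demand, since shortages cost more here
```

Flipping the cost ratio (spoilage dearer than lost margin) pushes the order below median demand, which is exactly the kind of directional judgment Molly's was getting wrong by hand.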
One project at Berkeley Center for Cosmological Physics studies the reconstruction of the cosmic initial condition based on observations of the later-time universe.
The cosmic initial condition is the density fluctuation of the universe about 13.7 billion years ago, when the universe’s energy was dominated by the radiation we now observe as the cosmic microwave background (CMB). Due to the finite speed of light, any direct measurement of the CMB, including space-based programs such as Planck, WMAP, and COBE, and ground-based programs such as ACTPol and POLARBEAR, can only observe a thin slice of the cosmic initial condition. For the rest of the universe, we are only able to observe an evolved state: the closer the observed universe is to us physically, the older, and therefore more evolved, it is.
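The finite light travel time in the passage above can be made quantitative. As a hedged sketch, assuming a flat ΛCDM cosmology with roughly Planck-like parameters (values assumed for illustration, and radiation ignored), the lookback time to a redshift z follows from a single integral.

```python
import numpy as np

# Lookback time in flat LambdaCDM (assumed parameters, Planck-like).
H0 = 67.7            # Hubble constant, km/s/Mpc
Om, OL = 0.31, 0.69  # matter and dark-energy density fractions
H0_inv_gyr = 977.8 / H0  # 1/H0 in Gyr (977.8 = Gyr * km/s/Mpc)

def lookback_gyr(z, n=100_000):
    """Trapezoid-rule integral of dz' / ((1+z') E(z')) from 0 to z."""
    zp = np.linspace(0.0, z, n)
    integrand = 1.0 / ((1 + zp) * np.sqrt(Om * (1 + zp) ** 3 + OL))
    dz = zp[1] - zp[0]
    return H0_inv_gyr * float(np.sum(integrand[1:] + integrand[:-1]) * dz / 2)

print(lookback_gyr(0.1))   # nearby galaxies: light left ~1.3 Gyr ago
print(lookback_gyr(1100))  # CMB redshift: light left ~13.8 Gyr ago
```

Nearby regions (z = 0.1) are seen almost as they are now, while CMB photons show the universe as it was nearly 13.8 billion years ago, which is the "thin slice" constraint described above.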
Government Data Science News
Michael Eisen, a professor of genetics at UC Berkeley, may run for the US Senate seat held by Dianne Feinstein. Feinstein is in her 80s and has not yet announced whether she will run again. If you might be interested in running for office, 314 Action is a nascent group helping scientists do so. University life has given many of us experience with small-p politics; we will see how well that translates into electoral success.
Switzerland launched the Swiss Data Science Center in Bern (plus offices in Lausanne and Zurich) as a coalition between the Ecole Polytechnique Federale de Lausanne and ETH Zurich. The center will operate as a cloud-computing provider, champion Open Science practices, and support the wide range of multidisciplinary research with which data science is currently affiliated. This organizational structure is unique in establishing the government as an active player, and it adds infrastructure provision (cloud computing) to the center’s mission, where many university centers rely on Amazon Web Services and other for-profit cloud services.
Sage posted an infographic explaining how NSF, NIH and other research funding bills work their way through Congress.
This is a pedagogically groundbreaking textbook for deep learning. Elon Musk says, “Written by three experts in the field, Deep Learning is the only comprehensive book on the subject.” Yann LeCun says, “…This is the first comprehensive textbook on the subject, written by some of the most innovative and prolific researchers in the field. This will be a reference for years to come.” Needless to say, it’s worth reading.
“In this post we are going to investigate the significance of Word2Vec for NLP research going forward and how it relates and compares to prior art in the field. In particular we are going to examine some desired properties of word embeddings and the shortcomings of other popular approaches centered around the concept of a Bag of Words (henceforth referred to simply as BoW) such as Latent Semantic Analysis.”
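A minimal example makes the BoW shortcoming concrete: count vectors discard word order entirely and assign unrelated dimensions to near-synonyms, which is precisely what dense embeddings like Word2Vec are meant to fix (toy sentences invented for illustration).

```python
from collections import Counter

# Bag-of-words: each document becomes a count vector over a fixed vocabulary.
docs = ["the movie was good", "good was the movie", "the movie was great"]
vocab = sorted({w for d in docs for w in d.split()})

def bow(doc):
    """Count vector over the shared vocabulary, in sorted word order."""
    counts = Counter(doc.split())
    return [counts[w] for w in vocab]

v0, v1, v2 = (bow(d) for d in docs)
print(v0 == v1)  # prints True: reordering the words gives the same vector
print(v0 == v2)  # prints False, though the meaning barely changed
```

In an embedding space, by contrast, "good" and "great" would land near each other, so the first and third sentences would score as similar.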
I have long been a fan of RethinkDB, finding it to hit a sweet spot: developer friendliness (I <3 ReQL!) with Jepsen-grade robustness. This is the same ethos that we aspire to at Joyent, and the fact that RethinkDB was born proprietary but was later made open source made it feel like even more of a kindred spirit: I have leapt the chasm twice (first at Sun with what became illumos and then later at Joyent, where we open sourced our entire stack), so I felt I personally understood some of the arduous path that RethinkDB had walked. And several years into using RethinkDB in production, my affection hasn’t waned: upgrades are the true test of one’s relationship with a body of software, and I found that mine with RethinkDB has only strengthened with time.
There was just one small hiccup with RethinkDB, though it felt forgivable at the time: RethinkDB is open source, but licensed under the AGPL. Whatever your own feelings for the AGPL, it is indisputable that its vagueness coupled with its rarity and its total lack of judicial precedent makes risk-averse lawyers very nervous.
Starting in 2017, Rust is following an open roadmap process for setting our aims for the year. The process is coordinated with the survey and production user outreach, to make sure our goals are aligned with the needs of Rust’s users. It culminates in a community-wide discussion and ultimately an RFC laying out a vision.
From the Google Research Blog, by Esteban Real, Vincent Vanhoucke, Jonathon Shlens, and Stefano Mazzocchi:
“Today, in order to facilitate progress in video understanding research, we are introducing YouTube-BoundingBoxes, a dataset consisting of 5 million bounding boxes spanning 23 object categories, densely labeling segments from 210,000 YouTube videos. To date, this is the largest manually annotated video dataset containing bounding boxes, which track objects in temporally contiguous frames. The dataset is designed to be large enough to train large-scale models, and be representative of videos captured in natural settings.”