Daniel Pink is one of the world’s leading business thinkers and the bestselling author of Drive and To Sell Is Human. He recently joined Derek Thompson, Senior Editor at The Atlantic and author of the new book, Hit Makers, for a Heleo Conversation on the science of popularity. They discuss the necessary balance between the familiar and the new, the underestimated power of dark broadcasters, and the role of luck in shaping the world.
Startups such as Cambridge-based Takeoff Technologies, Alert Innovation of Tewksbury, and FP Robotics of New Hampshire have collectively raised millions of dollars to streamline the process of replenishing your cupboards. And a report last week in the New York Post asserted that e-commerce giant Amazon is also developing a robot-staffed grocery store that could operate with just 10 employees, though the company denied it. Amazon’s robot development team is headquartered in North Reading, the result of a 2012 acquisition.
Cambridge scientists have received two of the biggest funding grants ever awarded by Cancer Research UK, with the charity set to invest £40 million over the next five years in two ground-breaking research projects in the city.
Brown University, Brown Daily Herald, Jackson Wells
from
Ever wonder how much happiness a word contains? Data can be used to study aspects of our lives that people may not have previously thought possible — even the type of emotion that our words convey. Such applications are the focus of the conference sponsored by the University’s new Data Science Initiative.
Chris Danforth and Peter Dodds, professors at the University of Vermont, discussed the research and goals of their work at the University of Vermont’s Computational Story Lab in two of the University’s data science colloquia Feb. 2. The first lecture was given by Danforth and centered on the team’s flagship program, which they call “Hedonometrics.” The team gathered the 10,000 most frequently used words from scans of Google’s books project, tweets, lyrics and the New York Times, Danforth said. Then, using Amazon’s Mechanical Turk, people rated the words on a “happy-to-sad” scale.
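The pipeline described above — crowd-rated words, then frequency-weighted averaging — can be sketched in a few lines. The ratings below are illustrative stand-ins, not the actual Mechanical Turk scores:

```python
# Minimal sketch of the hedonometer idea: score a text by averaging
# crowd-sourced happiness ratings of its words, weighted by frequency.
# The ratings here are hypothetical, on a 1 (sad) to 9 (happy) scale.
from collections import Counter

happiness = {
    "laughter": 8.5, "love": 8.4, "delays": 3.0,
    "terrorist": 1.3, "the": 5.0, "is": 5.0,
}

def text_happiness(text):
    """Average happiness of the rated words in a text, weighted by frequency."""
    counts = Counter(w for w in text.lower().split() if w in happiness)
    total = sum(counts.values())
    if total == 0:
        return None  # no rated words found
    return sum(happiness[w] * n for w, n in counts.items()) / total

print(text_happiness("love is laughter"))      # skews happy
print(text_happiness("the delays the delays"))  # skews sad
```

Unrated words are simply skipped, which is also how the real hedonometer sidesteps the long tail of rare vocabulary.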
Subway riders in New York City have become increasingly angry about the quality of service. Their fury is justified.
After a long period of improvement, the system’s reliability has dropped significantly, with delays more than doubling over the last five years, according to a review of data from the Metropolitan Transportation Authority.
Subway delays have jumped to more than 70,000 each month, from about 28,000 per month in 2012, according to the data. On some lines, trains arrive late to their final destination well over half the time.
Ford Motor Co. is investing $1 billion in a months-old startup founded by two pioneers in the nascent autonomous vehicle sector.
The Pittsburgh-based artificial intelligence company Argo AI will develop the brains — specifically, a virtual driver system — for the fully autonomous vehicles Ford has promised to bring to market in 2021. Founders Bryan Salesky and Peter Rander are former leaders of the self-driving car teams at Uber Technologies Inc. and Alphabet Inc.’s Google.
Like all hardware device makers eager to meet the newest market opportunity, Intel is placing multiple bets on the future of machine learning hardware. The chipmaker has already cast its Xeon Phi and future integrated Nervana Systems chips into the deep learning pool while touting regular Xeons to do the heavy lifting on the inference side.
However, a recent conversation we had with Intel turned up a surprising new addition to the machine learning conversation — an emphasis on neuromorphic devices and what Intel is openly calling “cognitive computing” (a term used primarily, and heavily, for IBM’s Watson-driven AI technologies). This is the first time we’ve heard the company make any definitive claims about where neuromorphic chips might fit into a strategy to capture machine learning, and it marks a bold grab for the term “cognitive computing,” which has been an umbrella term for Big Blue’s AI business.
Most of us probably aren’t nimble mechanics when it comes to diagnosing whatever troubles our cars may be having — we know those weird knocking sounds mean something is up, but we’re not sure what. But what if we could use artificial intelligence to do the prognosticating for us instead?
With the aim of preventing serious breakdowns before they become a costly problem, Israeli startup 3DSignals is using artificial intelligence to ‘listen in’ on a machine’s performance and to warn users when repairs are needed or anticipated.
Open data and open-source analytics allow community stakeholders to mine data for actionable intelligence like never before.
The objective of this research is to take a first step in exploring the feasibility of forecasting neighborhood change using longitudinal census data in 29 Legacy Cities (Figure 2).
This report ranks jobs according to each job’s Glassdoor Job Score, determined by combining three factors: number of job openings, salary, and overall job satisfaction rating.
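Glassdoor’s exact weighting is proprietary, but the combination of the three factors can be sketched as a simple composite. Everything here — the equal weights, the 0–5 scale, and the normalization caps — is an assumption for illustration only:

```python
# Illustrative composite job score from three factors. Glassdoor's
# real formula is not public; this assumes equal weights and scales
# each factor to a 0-5 range before averaging.

def job_score(openings, salary, satisfaction,
              max_openings=10000, max_salary=150000):
    """Equal-weight composite on a 0-5 scale (weights are an assumption)."""
    openings_score = 5 * min(openings / max_openings, 1.0)
    salary_score = 5 * min(salary / max_salary, 1.0)
    # satisfaction is already on Glassdoor's familiar 0-5 rating scale
    return (openings_score + salary_score + satisfaction) / 3

print(round(job_score(4000, 110000, 4.4), 2))
```

The caps keep one outsized factor (say, an enormous number of openings) from dominating the other two.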
In anticipation of the age of voice-controlled electronics, MIT researchers have built a low-power chip specialized for automatic speech recognition. Whereas a cellphone running speech-recognition software might require about 1 watt of power, the new chip requires between 0.2 and 10 milliwatts, depending on the number of words it has to recognize.
Gigaom, the leader in emerging technology research, today announced six finalists for its first GAIN AI start-up competition. The competition coincides with the Gigaom AI conference held in San Francisco, CA, February 15-16, 2017.
It looks like NLP researchers may soon be joining philosophers like Jacques Derrida and Michel Foucault to theorize about the relationship between power and language.
Last Friday, the NLP & Text-As-Data Seminar heard from Vinodkumar Prabhakaran, a postdoctoral fellow at Stanford University who is specializing in computational sociolinguistics.
One of his research projects focuses on the workplace. Today, 96% of all office communication, Prabhakaran explained, occurs through media like email. But although email may be more convenient, it has also led more people to speak online in ways that they would not during face-to-face communication. For example, people may feel comfortable speaking more sharply over email than they would in person, thanks to the detached, quasi-anonymity of a digital screen.
Figshare, Ellery Wulczyn, Nithum Thain, Lucas Dixon
from
“We provide a corpus of discussion comments from English Wikipedia talk pages.” … “See our wiki for documentation of the schema and our research paper for documentation on the data collection and processing methodology.”
CreativeAI; Joseph Chee Chang, Saleema Amershi, Ece Kamar
from
“Revolt eliminates the burden of creating detailed label guidelines by harnessing crowd disagreements to identify ambiguous concepts and create rich structures (groups of semantically related items) for post-hoc label decisions.”
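The core move Revolt describes — treating annotator disagreement as signal rather than noise — can be sketched very simply. The items, labels, and votes below are hypothetical:

```python
# Minimal sketch of the Revolt idea: instead of forcing annotators to
# agree via detailed guidelines, use their disagreement to flag
# ambiguous items for post-hoc label decisions. Votes are hypothetical.
from collections import Counter

crowd_labels = {
    "item_1": ["spam", "spam", "spam"],
    "item_2": ["spam", "ham", "spam"],
    "item_3": ["ham", "spam", "ham"],
}

def partition(crowd_labels):
    """Split items into unanimous (auto-labeled) and ambiguous groups."""
    unanimous, ambiguous = {}, []
    for item, votes in crowd_labels.items():
        if len(Counter(votes)) == 1:
            unanimous[item] = votes[0]
        else:
            ambiguous.append(item)  # needs a post-hoc decision
    return unanimous, ambiguous

unanimous, ambiguous = partition(crowd_labels)
print(unanimous)
print(ambiguous)
```

The ambiguous group is where Revolt’s richer structures (clusters of semantically related items) come in; this sketch only performs the initial split.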
“Yahoo is announcing that it’s open-sourcing TensorFlowOnSpark, a piece of software it has created to make the Google-initiated TensorFlow open-source framework for deep learning compatible with its data sets that sit inside Spark clusters.”
The search for planets beyond our solar system is about to gain some new recruits.
A team that includes MIT and is led by the Carnegie Institution for Science has released the largest collection of observations made with a technique called radial velocity, to be used for hunting exoplanets. The huge dataset, taken over two decades by the W.M. Keck Observatory in Hawaii, is now available to the public, along with an open-source software package to process the data and an online tutorial.
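The radial velocity technique looks for a periodic wobble in a star’s line-of-sight velocity as an orbiting planet tugs on it. A toy version — fitting sinusoids over a grid of trial periods to synthetic data, not the team’s released pipeline — looks like this:

```python
# Toy radial-velocity search: inject a sinusoidal wobble into synthetic
# noisy measurements, then recover its period by least-squares sine
# fitting over a grid of trial periods. All values are made up.
import numpy as np

rng = np.random.default_rng(0)
true_period, amplitude = 12.0, 30.0            # days, m/s (illustrative)
t = np.sort(rng.uniform(0, 200, 80))           # irregular observation times
rv = amplitude * np.sin(2 * np.pi * t / true_period) + rng.normal(0, 3, t.size)

def best_period(t, rv, periods):
    """Return the trial period whose sine fit has the smallest residual."""
    best, best_resid = None, np.inf
    for p in periods:
        # a*sin + b*cos is linear in (a, b), so solve with lstsq
        A = np.column_stack([np.sin(2 * np.pi * t / p),
                             np.cos(2 * np.pi * t / p)])
        coef, *_ = np.linalg.lstsq(A, rv, rcond=None)
        resid = np.sum((rv - A @ coef) ** 2)
        if resid < best_resid:
            best, best_resid = p, resid
    return best

periods = np.linspace(5, 50, 2000)
print(best_period(t, rv, periods))  # close to the injected 12-day period
```

Real pipelines must also handle stellar activity, instrument offsets, and multiple planets, which is where the released software and tutorial come in.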
PsyArXiv Preprints; Alexander Etz, Joachim Vandekerckhove
from
“We introduce the fundamental tenets of Bayesian inference, which derive from two basic laws of probability theory. We cover the interpretation of probabilities, discrete and continuous versions of Bayes’ rule, parameter estimation, and model comparison.”
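The discrete version of Bayes’ rule the tutorial covers has a classic worked instance. The disease-testing numbers below are a standard textbook illustration, not taken from the paper itself:

```python
# Worked instance of discrete Bayes' rule:
#   P(H | D) = P(D | H) * P(H) / P(D)
# using textbook diagnostic-test numbers (illustrative only).
prior = 0.01            # P(disease)
sensitivity = 0.95      # P(positive | disease)
false_positive = 0.05   # P(positive | no disease)

# law of total probability: P(positive)
p_positive = sensitivity * prior + false_positive * (1 - prior)

# posterior probability of disease given a positive test
posterior = sensitivity * prior / p_positive
print(round(posterior, 3))
```

Because the disease is rare, a positive test still leaves the posterior well under 20% — exactly the kind of base-rate reasoning the two basic laws of probability enforce.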