The minute hand on the robot-apocalypse clock just inched a little closer to midnight. DeepMind, the Google sister company responsible for some of the smartest AI on the planet, has taught machines how to figure things out for themselves.
Robots aren’t very good at exploring on their own. AI that exists only to parse data, such as a neural network that decides whether something is a hotdog or not, has relatively little to concentrate on compared to the near-infinite number of things a physical robot has to figure out.
To solve this problem, DeepMind built a new learning paradigm for AI-powered robots called Scheduled Auxiliary Control (SAC-X). This paradigm gives a robot a simple goal, such as ‘clean up this playground,’ and rewards it for completing that goal.
The requirement for airline passengers to remove their shoes at the security gates – in the UK at least – could soon be consigned to history, after the UK government released £1.8m of funding to introduce a “safer and smoother travel experience” at airport terminals.
The Department for Transport (DfT) has announced it will invest the money in eight separate technologies aimed at reducing queues and improving efficiency. The shoe-scanning equipment in question, which has been developed by UK company Security Screening Technologies, uses state-of-the-art imaging to scan shoes for explosive materials. If the unit is successfully introduced, it could bring about big time savings: the system learns with every scan, so its detection improves the more it is used. Wearers of footwear that doesn’t pass the test are then subject to secondary screening.
Woebot, the startup behind an automated psychotherapy bot of the same name, announced today that it has closed an $8 million round of series A funding.
The company’s bot is designed to help people cope with mental illnesses like depression through the application of cognitive behavioral therapy exercises. The bot is currently available on Facebook Messenger and a recently launched iOS app.
This funding shows the potential for artificial intelligence techniques in natural language processing and other arenas to help humans.
Daphne Koller, a leading artificial intelligence scientist, has left her role at Alphabet Inc.’s Calico after less than two years at the medical research lab focused on aging.
Google formed Calico in 2013 with the edict to find ways to extend human life, arguably the search giant’s most audacious endeavor outside of the internet. In August 2016, Calico recruited Koller as its first chief computing officer. She was hired to lead what Calico calls its “computational biology” efforts — applying machine learning data-analysis tools to medical information.
“I have decided to leave Calico to pursue other professional opportunities,” Koller said in an emailed statement. “I very much enjoyed my time at Calico, and have the greatest respect for the Calico team and their important and aspirational mission.”
Apple is continuing its advances into the health care space – look out Amazon, Warren Buffett and JP Morgan Chase – by turning iPhones into medical-record storage devices. This could help solve the known problem of extreme siloing among US hospitals. Whatever happens at one hospital or clinic stays at that hospital or clinic, which is totally counterproductive to providing good care over the life course. Keep in mind, though, that Cellebrite has announced it can bypass the security features on all iPhone models but that it will engage in “vulnerability hoarding” to protect your secrets. I still think it’s a net benefit to let patients carry their comprehensive care records to all the doctors they see, but I would be remiss if I didn’t mention that there is a privacy risk.
Yann LeCun, Peter Norvig, and Eric Horvitz did a Reddit Ask Me Anything about AI. The internet is still fun sometimes.
Swiss pharmaceutical heavyweight Roche is buying New York-based Flatiron Health for $1.9 billion, bringing its total investment to $2.1 billion. Flatiron has been working on precision-medicine approaches built on millions of electronic medical records. Data privacy protections are stronger in Switzerland than in the US, so it will be interesting to see whether Flatiron will have to amend its data pipeline as part of a Swiss company.
Facebook is confronting pressure to reveal what kind of political ads were purchased on its platform during the 2016 election season in the US. The only person who seems happy with the way they’ve addressed this pressure is President Trump. Facebook has announced it will undertake US-based address verification by sending postcards to those wishing to run political campaign ads. Why do I feel like it would be easy to have a sympathetic American or a visitor receive and return such postcards? I mean, the practice shifts the blame to aforementioned sympathetic Americans, but that’s about all it does. Foreign influence is likely to continue in the postcard age. It’s also deeply strange that they aren’t using oh, say, data science to detect ads that just don’t perform like the others. For one thing, it seems like it would be in their best interest to make those ad sales as quickly as possible without the nostalgia-laced snail mail step.
Charles Duhigg did a thorough piece of investigative journalism concluding that Google has been anti-competitive in the search space. The ethical question here is not about justice (whether Google could fairly restrict access) but about progress. Duhigg notes, “Antitrust prosecutions are part of how technology grows. Antitrust laws ultimately aren’t about justice, as if success were something to be condemned; instead, they are a tool that society uses to help start-ups build on a monopolist’s breakthroughs without, in the process, being crushed by the monopolist.”
Duhigg’s journalistic compatriot Robert Wright worries that Google is now set to monopolize AI, which is a much harder argument to make (in my opinion), but it has some steam (e.g. DeepMind, Google Maps, Google Assistant) behind it.
Daphne Koller has left Alphabet’s Calico, where she was chief computing officer on the health project aimed at combating aging. She said all the right things upon her exit, which means I have no idea what went wrong there.
Palantir is one of the most troubling AI companies in the US. It frequently contracts on law-enforcement and international threat-detection projects. This week it was found to be conducting predictive policing in New Orleans without notifying the public or even most police officers. The company avoided public scrutiny by offering its services charitably (while being funded by the CIA’s venture arm). The only silver lining is that this type of egregiously bad behavior should hasten calls for transparency and accountability in the use of data science and artificial intelligence. I hope.
So what does 2018’s SSAC research track tell us about the state of sports analytics? … Let’s start with the sports represented by the research papers and posters this year. With 2 papers and 6 posters (50% of all posters), basketball is the most popular sport for research at 2018 Sloan. Most of the basketball applications focus on the NBA, with one project working with NCAA data (Sailofsky, ‘Drafting Errors’) and another with primary data on amateur players collected from GoPro cameras (Bertasius et al., ‘Learning an Egocentric Basketball Ghosting Model’).
The National Science Foundation (NSF) announces three new Expeditions in Computing awards, each providing $10 million in funding over five years to multi-investigator research teams pursuing large-scale, far-reaching and potentially transformative research in computer and information science and engineering. This year’s awards aim to enable game-changing advances in real-time decision making, quantum computing and non-invasive biomedical imaging.
“The Expeditions projects being awarded today are not only taking on challenging research problems in computer and information science and engineering, but they are also offering the potential to yield tremendous benefits to multiple sectors of our society,” said Jim Kurose, NSF assistant director for Computer and Information Science and Engineering. “We are delighted to be able to fund these projects, which represent the largest single investments in our portfolio.”
There’s a new entrant in the race to provide internet connectivity to the roughly four billion humans on the wrong side of the world’s digital divide.
Launching from stealth today with a fresh $13.5 million investment led by Andreessen Horowitz is Astranis — the developer of a novel satellite technology that aims to transmit data down to specific terrestrial locations with each satellite it launches.
That’s a significant shift from the way that companies like SpaceX and OneWeb are building their satellite networks. Both of those companies are launching satellites into low Earth orbit — which means that their satellites circle the Earth every ninety minutes.
The United States is no laggard on investment and advances in artificial intelligence technologies, Steven Walker, director of the Defense Advanced Research Projects Agency, told reporters on Thursday, disputing assertions by top U.S. technology executives that China was racing ahead.
“I think I’d put our AI, our country’s efforts, up against anybody,” Walker said at an event hosted by the Defense Writers Group. DARPA “helped create the field in the early 1960s” and since then has consistently invested in the three waves of artificial intelligence technologies, Walker said.
DARPA is “investing pretty heavily” in so-called third-wave AI systems, where machines understand the context and the environment in which they operate and are able to explain their reasoning and decision making to human operators, Walker said. “These are very nascent efforts but they’re going to be important if you want the warfighter to trust the machine and help him or her make decisions.”
Microsoft Cognitive Services is home to the company’s hosted artificial intelligence algorithms. Today, the company announced advances to several Cognitive Services tools including Microsoft Custom Vision Service, the Face API and Bing Entity Search.
Joseph Sirosh, who leads Microsoft’s cloud AI efforts, defined Microsoft Cognitive Services in a company blog post announcing the enhancements as “a collection of cloud-hosted APIs that let developers easily add AI capabilities for vision, speech, language, knowledge and search into applications, across devices and platforms such as iOS, Android and Windows.” These are distinct from other Azure AI services, which are designed for developers who are more hands-on, DIY types.
The idea is to put these kinds of advanced artificial intelligence tools within reach of data scientists, developers and any other interested parties, without the heavy lifting normally required to build models, run the myriad testing phases typically involved, and get results.
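To make the “cloud-hosted API” model concrete, here is a minimal sketch of how a client might assemble a call to the Face API’s v1.0 detect endpoint. The helper function, region, subscription key, and image URL below are all placeholders of my own, not from the announcement; the endpoint path and `Ocp-Apim-Subscription-Key` header follow the publicly documented Face v1.0 REST pattern.

```python
import json

def build_face_detect_request(region, subscription_key, image_url):
    """Assemble (but do not send) a Face API v1.0 detect request.

    A real client would POST this payload with urllib.request or a
    similar HTTP library; here we only construct the pieces.
    """
    url = f"https://{region}.api.cognitive.microsoft.com/face/v1.0/detect"
    headers = {
        "Ocp-Apim-Subscription-Key": subscription_key,  # your Azure key
        "Content-Type": "application/json",
    }
    body = json.dumps({"url": image_url})  # image to analyze, by URL
    return url, headers, body

url, headers, body = build_face_detect_request(
    "westus", "YOUR-KEY-HERE", "https://example.com/photo.jpg"
)
print(url)
```

The point of the sketch is how little is involved: authentication is a single header, and the model itself lives entirely on Microsoft’s side.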
It’s undeniable that artificial intelligence and machine learning have captured the public imagination in recent years. Powered by an exponential increase in computer chip processing (predicted by Moore’s Law), applications of AI and machine learning are leading to breakthroughs in almost every field. Self-driving cars, voice-powered personal assistants, product recommendations and credit card fraud detection are just a few examples.
It’s no surprise then that these technologies are being applied to the medical field broadly and to medical diagnosis in particular. In fact, medical diagnosis has long been a target of AI tools. In the 1970s a researcher at Stanford University developed an early AI system called Mycin that attempted to capture the expertise of physicians and automate their decision-making through a computer program that could diagnose infectious diseases and recommend antibiotics. The results of this expert system compared favorably to those of human physicians, but the significant investment of human expertise required, and the narrow applicability of expert systems, eventually led to disappointment and disillusionment.
However, recent advances in machine learning algorithms that loosely mimic the human brain (known as deep neural networks) are demonstrating impressive progress in automated medical image recognition.
Law enforcement — and criminal justice more broadly — must be evaluated on two separate criteria: pragmatic effectiveness and legal justice. On the first criterion, it’s important to note that there isn’t yet any clear evidence that the Palantir-New Orleans partnership works. Palantir would like to take credit for a New Orleans crime dip, but the data and the timing don’t necessarily support that. For now, the efficacy of machine-based crime prediction and prevention must be treated as unproven at best.
Of course, as advocates of big data analysis would surely point out, it takes time for predictive technologies to be refined (or in the case of machine learning, to refine themselves). The more data, the better. Translating prediction into prevention isn’t necessarily simple either. Our conversation could proceed on the assumption that someday, predictive machine learning tools with access to enough data might indeed be able to predict crime better than existing police tools do. After all, crime is a form of human behavior just like any other, and algorithmic AI models are getting better and better at predicting plenty of human behaviors in other realms.
So that brings us to the question of justice: What, if anything, is inherently worrisome about machine predictions of crime? The most obvious worry is that computers could get their predictions wrong and therefore encourage police to target people for investigation and surveillance who aren’t in fact going to commit crimes.
MIT and SenseTime today announced that SenseTime, a leading artificial intelligence (AI) company, is joining MIT’s efforts to define the next frontier of human and machine intelligence.
SenseTime was founded by MIT alumnus Xiao’ou Tang PhD ’96 and specializes in computer vision and deep learning technologies. The MIT-SenseTime Alliance on Artificial Intelligence aims to open up new avenues of discovery across MIT in areas such as computer vision, human-intelligence-inspired algorithms, medical imaging, and robotics; drive technological breakthroughs in AI that have the potential to confront some of the world’s greatest challenges; and empower MIT faculty and students to pursue interdisciplinary projects at the vanguard of intelligence research.
Before last week, the official U.S. map of broadband access had accumulated a fair amount of dust. On February 23, though, the Federal Communications Commission’s cartography of connectivity got a long-awaited upgrade. But while the new broadband map is easier to click around, it still isn’t a reliable tool to gauge what internet options are available to homes or communities around the country.
Just ask one of the FCC’s commissioners what’s wrong with it.
“I looked up my house and can tell you with good authority it lists service that is not available at my location,” Democratic commissioner Jessica Rosenworcel wrote in a dissenting statement.
How and where to store, manage, and share increasingly large data sets is a challenge for scientists across disciplines.
Like a library network for scientific data, the Data Observation Network for Earth (DataONE) links member data repositories to ensure open and secure access to well-described and easily discovered Earth observational data.
The network provides guidelines and tools for researchers to document and preserve their data and make them available for future users to expand studies across time periods and locations.
San Francisco, CA March 7, starting at 6:30 p.m., General Assembly (225 Bush Street, 5th Floor). “In this discussion, we’ll talk about practical ways to incorporate AI to deliver an explosive impact on any company’s bottom line.” [free, registration required]
San Jose, CA March 26-29. “More Than 8,000 Developers, Industry Experts to Discuss Future of Self-Driving Cars, HPC, Robotics, Healthcare, Cloud Computing and More, March 26-29 in Silicon Valley.” [$$$$]
Castiglione della Pescaia, Grosseto, Italy May 29, precedes AVI 2018 conference. “This workshop will bring together researchers with expertise in visualization, interaction design, and natural user interfaces. We aim to build a community of multimodal visualization researchers, explore synergies and challenges in our research, and establish an agenda for research on multimodal interactions for visualization.” Deadline for submissions is March 9.