Today, 95% of the global population has mobile-phone coverage, and the number of people who own a phone is rising fast (see ‘Dialling up’)1. Phones generate troves of personal data on billions of people, including those who live on a few dollars a day. So aid organizations, researchers and private companies are looking at ways in which this ‘data revolution’ could transform international development.
Some businesses are starting to make their data and tools available to those trying to solve humanitarian problems. The Earth-imaging company Planet in San Francisco, California, for example, makes its high-resolution satellite pictures freely available after natural disasters so that researchers and aid organizations can coordinate relief efforts. Meanwhile, organizations such as the World Bank and the United Nations are recruiting teams of data scientists to apply their skills in statistics and machine learning to challenges in international development.
But in the rush to find technological solutions to complex global problems there’s a danger of researchers and others being distracted by the technology and losing track of the key hardships and constraints that are unique to each local context. Designing data-enabled applications that work in the real world will require a slower approach that pays much more attention to the people behind the numbers.
For the past couple of years, building a successful startup has seemed as simple as picking an out-of-favor category like ketchup and turning the most mundane of condiments into a $100M+ exit! Why try to build a robot or AI company when you can just modify and repackage a topping?
But how should founders evaluate the markets for mattresses and men’s health? What heuristics should an investor use to weigh Hims and Hubble, or to compare Allbirds and Away? And what is the right kind of founder for this sort of startup? Do you look for the designer with an unimpeachable aesthetic sense? Or an MBA who’s run the numbers on every facet of the fashion industry?
It’s far from clear at this point, but I think there are a few emerging ground rules.
There is much more to a successful technology product than its code. Companies seeking to exploit artificial intelligence need employees who understand how machine learning works and how it can be applied in business. But people who can do both are hard to find.
Smith School of Business in Toronto is trying to fill that gap with North America’s — and it believes the world’s — first master of management in artificial intelligence (MMAI). This month, 40 students are beginning the programme, studying topics such as how to apply AI in finance and the ethical implications of the technology, intertwined with hands-on training in natural language processing and deep learning (the use of artificial neural networks in advanced pattern recognition).
Jay Rajasekharan is one of the MMAI programme’s first students. The qualified mechanical engineer is keen to work in consulting or product development, but his goal is to be ready for whatever the future may bring. “AI is such a fast-growing industry, and in five years we may have roles that do not even exist today,” he says.
As the machine learning industry grows, Austin continues to be on the cutting edge of this futuristic technology. We’ve rounded up some of the most exciting companies in the machine learning world; learn their names now so you can say you knew them when.
Unity Technologies on Monday released version 0.5 of its ML-Agents toolkit to make its Unity 3D game development platform better suited for developing and training autonomous agent code via machine learning.
Initially rolled out a year ago in beta, version 0.5 comes with a few improvements. There’s a wrapper for Gym (a toolkit for developing and testing reinforcement learning algorithms), support for letting agents make multiple action selections at once and for preventing agents from taking certain actions, and a refurbished set of environments called Marathon Environments.
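The two agent-side features in that list are easy to sketch in miniature. Below is a toy Gym-style environment in Python that shows the reset/step interface a Gym wrapper exposes, plus action masking (forbidding certain actions in certain states). The class and method names are illustrative only, not ML-Agents’ actual API.

```python
import random

class ToyGridEnv:
    """Minimal Gym-style environment: an agent walks along a 1-D track.

    Illustrates the reset/step loop a Gym wrapper exposes, and action
    masking: the environment reports which actions are currently legal.
    """
    def __init__(self, length=5):
        self.length = length
        self.pos = 0

    def reset(self):
        self.pos = 0
        return self.pos

    def action_mask(self):
        # Action 0 = move left, 1 = move right; mask out moves off the track.
        return [self.pos > 0, self.pos < self.length - 1]

    def step(self, action):
        assert self.action_mask()[action], "masked action taken"
        self.pos += 1 if action == 1 else -1
        done = self.pos == self.length - 1
        reward = 1.0 if done else 0.0
        return self.pos, reward, done, {}

# A random policy that samples only from the unmasked actions.
random.seed(0)
env = ToyGridEnv()
obs, done, total = env.reset(), False, 0.0
while not done:
    legal = [a for a, ok in enumerate(env.action_mask()) if ok]
    obs, reward, done, _ = env.step(random.choice(legal))
    total += reward
```

In the real toolkit the mask comes from the Unity scene (e.g. a wall blocking a move); the point is that the learning code never has to waste samples on actions the game would reject anyway.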
According to [Karen] Adolph, the novelty of the project is the medium of data collection — video recordings, as opposed to written reports. Even protocols for experimenters — such as instruction on how home video cameras should be installed unobtrusively — will be recorded on video.
Following the data collection, videos will be annotated systematically, with particular attention paid to the infants’ and mothers’ speech, emotion and movement as a means of determining the impact of parental traits on infant behavior.
“The ability to talk and communicate changes how people interact with you and your opportunities for learning,” Adolph said. “The 12-month-olds — half of them will be crawling and half of them will be walking. The ability to walk allows babies to move much faster, much farther, and to see more of the environment. At 18 months, babies are now talking a little bit.”
Following the study, the annotated videos will be shared securely with other researchers through Databrary, an online video-sharing library used at more than 400 institutions around the world. Researchers will be granted open access to both the videos and the data collection methods through this platform.
Deep in Antarctica, at the southernmost point on our planet, sits a 33-foot telescope designed for a single purpose: to make images of the oldest light in the universe.
This light, known as the cosmic microwave background, or CMB, has journeyed across the cosmos for 14 billion years—from the moments immediately after the Big Bang until now. Because it is brightest in the microwave part of the spectrum, the CMB is impossible to see with our eyes and requires specialized telescopes.
The South Pole Telescope, specially designed to measure the CMB, has recently opened its third-generation camera for a multiyear survey to observe the earliest instants of the universe.
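Why microwaves? The CMB today is blackbody radiation at a measured temperature of about 2.725 K, and Wien’s displacement law puts its peak squarely in the millimeter/microwave band. A quick back-of-envelope check in Python:

```python
# Wien's displacement law for a blackbody: peak wavelength = b / T.
# The CMB is a near-perfect blackbody at T ~ 2.725 K.
WIEN_B = 2.898e-3   # m*K, Wien displacement constant
T_CMB = 2.725       # kelvin, measured CMB temperature

peak_wavelength = WIEN_B / T_CMB   # metres
print(f"peak ~ {peak_wavelength * 1e3:.2f} mm")  # ~1 mm: microwave band
```

That millimeter-scale peak is exactly the wavelength range telescopes like the South Pole Telescope are built to observe.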
New 5G networks are coming and big companies are spending big bucks to roll them out.
Ericsson will provide T-Mobile with its latest 3GPP-compliant 5G New Radio hardware and software for a cool $3.5 billion.
As it moves from LTE Advanced networks to 5G, T-Mobile said it will use the Ericsson portfolio of products to expand its existing LTE capacity while readying the network for the 5G jump.
Switzerland has emerged as a leader in the research and development of drones. Insiders talk about a “Drone Valley” between the Swiss Federal Institutes of Technology in Lausanne and Zurich, which is home to 80 startups in the field. What are the factors driving their success? And how are we going to avoid chaos in the skies?
Let the computers do the legal busy work so attorneys can focus on complex problem solving for their clients. That’s the lucrative idea behind Atrium LTS, Twitch co-founder Justin Kan’s machine learning startup that digitizes legal documents and builds applications on top to speed up fundraising, commercial contracts, equity distribution and employment issues. For example, one of its apps automatically turns startup funding documents into Excel cap tables.
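The cap-table case is a good illustration of why this work automates well: once the documents are parsed, the remaining math is mechanical. Here is a toy version of priced-round cap-table arithmetic in Python; the function and field names are illustrative, not Atrium’s actual application logic.

```python
def cap_table(founders_shares, investments, price_per_share):
    """Toy cap-table math: convert a priced round into ownership
    percentages. Purely illustrative arithmetic, not Atrium's code."""
    holdings = dict(founders_shares)
    for investor, amount in investments.items():
        # Each investor's dollars buy amount / price_per_share new shares.
        holdings[investor] = holdings.get(investor, 0) + amount / price_per_share
    total = sum(holdings.values())
    return {name: shares / total for name, shares in holdings.items()}

table = cap_table(
    founders_shares={"founder_a": 600_000, "founder_b": 400_000},
    investments={"seed_fund": 250_000.0},   # dollars invested
    price_per_share=1.0,
)
# Founders hold 1,000,000 shares; the fund buys 250,000 more,
# so it ends up with 250,000 / 1,250,000 = 20% of the company.
```

The hard part in practice is the extraction step — reliably pulling share counts and prices out of heterogeneous legal documents — which is where the machine learning comes in.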
Automating expensive legal labor has led to a rapid rise to 110 employees and 250 clients for Atrium, including startups like Bird and MessageBird. Atrium came out of stealth only a year ago with a $10.5 million party round before going into Y Combinator last winter. Today it announces it’s raised a $65 million round led by Andreessen Horowitz.
Nowhere are the realities of human-driven climate change more apparent than at Earth’s thawing poles. Arctic sea ice is vanishing, while melting ice sheets in Greenland and Antarctica are driving an acceleration in sea level rise. Yet for nearly a decade, NASA has lacked a dedicated satellite to measure how high the polar ice is piled—and how it is subsiding as ice melts or slides into the oceans.
That gap is set to close with the 15 September launch of the $1 billion Ice, Cloud and land Elevation Satellite (ICESat-2) from Vandenberg Air Force Base in California. ICESat-2 will bounce laser light off Earth’s surface, gauging changes in its elevation as small as the diameter of a pencil. Although the mission is a successor in name to ICESat-1, which ended in 2010, its multibeam laser instrument puts it in a different class, says Ted Scambos, a glaciologist at the National Snow & Ice Data Center in Boulder, Colorado. “Every season we’ll get a better map than ICESat-1 ever made.”
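Pencil-diameter precision is a striking claim, and a little arithmetic shows what it demands of the instrument. A laser altimeter infers height from photon round-trip time, so a height change Δh shifts the round trip by Δt = 2Δh / c. The numbers below are illustrative (taking a pencil to be about 7 mm across):

```python
# Back-of-envelope for laser altimetry: a surface-height change dh
# shifts the photon round-trip time by dt = 2 * dh / c.
C = 299_792_458.0    # speed of light, m/s
dh = 0.007           # ~a pencil's diameter, metres (illustrative)

dt = 2 * dh / C      # seconds
print(f"{dt * 1e12:.0f} ps")   # tens of picoseconds of timing precision
```

In other words, resolving elevation at the centimeter scale means timing photon arrivals to a few tens of picoseconds — which is why the multibeam photon-counting instrument is in a different class from its predecessor.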
How are individual internet-users approaching personal privacy protection in this algorithm-rich environment?
“Such behaviors are poorly accounted for in existing technology design practices,” says Eric P. S. Baumer, assistant professor of computer science and engineering.
Baumer was recently awarded a grant by the National Science Foundation through its Division of Computer and Network Systems to study how people navigate a world in which data collection is a continuous feature of their environment and how internet systems can be better designed to support “data literate” behaviors. The award is a collaborative grant with Andrea Forte, associate professor of information science at Drexel University.
The study seeks to fill a gap in knowledge about the various ways individuals approach personal privacy protection.
If you’ve ever taken out a loan, you’ve probably been asked for copies of documents that show your income and savings, like a recent paystub, a W-2 form, or a bank statement. And if you’ve ever had to file an insurance claim after an accident or fire, you may have been asked to submit receipts or invoices verifying the expenses you’re asking to have reimbursed.
The trouble for insurers and lenders is that it’s not necessarily easy to know if those forms people submit are genuine. There’s always a risk of unscrupulous customers using Photoshop or other editing tools to artificially boost their salaries or the costs of their replacement items.
A Bay Area startup called Inscribe, which recently participated in the Y Combinator accelerator program, is using digital forensics and machine learning techniques to help companies figure out when documents are forged or altered. The company is currently working with tech-savvy lenders to test its technology, and as of September, those early clients should be able to upload documents on their own to have the system highlight areas of potential fraud concern.
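To make “digital forensics” concrete, here is one simple signal of the kind such systems combine with many others: checking a PDF’s embedded metadata for traces of image-editing software. This is an illustrative heuristic only, not Inscribe’s actual pipeline, and the sample bytes are synthetic.

```python
# A single forensic heuristic: PDF metadata fields like /Producer are
# stored as plain text in the file, so an editor's name left behind
# there is a cheap red flag worth surfacing for human review.
EDITOR_MARKERS = [b"Photoshop", b"GIMP", b"Acrobat Pro"]

def suspicious_producers(pdf_bytes):
    """Return which known editor names appear in the raw file bytes."""
    return [m.decode() for m in EDITOR_MARKERS if m in pdf_bytes]

# Synthetic example: a fake "document" whose metadata names Photoshop.
doc = b"%PDF-1.4\n... /Producer (Adobe Photoshop 22.0) ...\n%%EOF"
print(suspicious_producers(doc))  # ['Photoshop']
```

Real fraud detection can’t rely on any one tell — metadata is easy to scrub — which is why it gets paired with machine-learned signals over fonts, layout, and pixel-level artifacts.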
From Princeton CITP’s Freedom to Tinker blog, by Andrew Appel:
In this November’s election, could a computer hacker, foreign or domestic, alter votes (in the voting machine) or prevent people from voting (by altering voter registrations)? What should we do to protect ourselves?
The National Academies of Sciences, Engineering, and Medicine have released a report, Securing the Vote: Protecting American Democracy, about the cybervulnerabilities in U.S. election systems and how to defend them. The committee was chaired by the presidents of Indiana University and Columbia University, and the members included five computer scientists, a mathematician, two social scientists, a law professor, and three state and local election administrators. I served on this committee, and I am confident that the report presents the clear consensus of the scientific community, as represented not only by the members of the committee but also the 14 external reviewers — election officials, computer scientists, experts on elections — who were part of the National Academies’ process.
BrainChip described what it claims will be the first commercial accelerator for spiking neural networks (SNNs). Akida should sample in fall 2019, delivering nearly an order of magnitude more throughput/watt than a Movidius Myriad 2 at about the same price and accuracy.
In imaging applications, the chip is expected to process as many as 1,400 frames/second/watt using an 11-layer SNN. It will consume less than a watt and cost about $10, targeting computer vision as well as financial and cybersecurity analysis.
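For readers new to spiking networks: unlike conventional neural nets, SNN units communicate through discrete spikes over time, which is where the power savings come from. Below is the textbook leaky integrate-and-fire neuron in Python — a generic model of the basic SNN building block, not BrainChip’s hardware design, with illustrative parameter values.

```python
def lif_neuron(inputs, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire neuron, the basic unit of a spiking
    neural network: the membrane potential accumulates input, decays
    by `leak` each timestep, and emits a spike (then resets) when it
    crosses `threshold`."""
    v, spikes = 0.0, []
    for x in inputs:
        v = v * leak + x          # leak, then integrate this step's input
        if v >= threshold:
            spikes.append(1)      # fire...
            v = 0.0               # ...and reset the membrane potential
        else:
            spikes.append(0)
    return spikes

print(lif_neuron([0.5, 0.5, 0.5, 0.0, 0.3]))  # [0, 0, 1, 0, 0]
```

Because neurons only do work when spikes arrive, activity — and therefore energy use — is sparse, which is what makes throughput-per-watt the headline metric for chips like Akida.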
Minneapolis, MN: October 3, starting at 8:30 a.m., Carlson School of Management. “At the intersection of healthcare and data analytics, Convene aims to navigate the tension that emerges when artificial intelligence meets human intelligence.” [registration required]
“The Marron Institute of Urban Management will offer grants of up to $50,000 over two years to Principal Investigators at NYU in order to provide seed funding for innovative scholarship and applied research projects that address pressing problems faced by cities and urban residents. The purpose of the grant program is to stimulate proposals for creative solutions to socio-economic, political, or physical issues that accompany urban development. Marron Institute seed awards are not designed to support already-funded or ongoing research projects.” Deadline for proposals is December 1.
LogDevice is a scalable distributed log storage system that offers durability, high availability, and total order of records under failures. It is designed for a variety of workloads including event streaming, replication pipelines, transaction logs, and deferred work journals.
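The core contract — every record gets a unique, totally ordered sequence number, and readers replay records in exactly that order — fits in a few lines. Here is a single-process toy sketch in Python of that abstraction only; LogDevice itself earns the “under failures” part by replicating records across storage nodes, which this sketch does not attempt.

```python
import threading

class ToyLog:
    """A toy append-only log: appends are assigned dense, strictly
    increasing log sequence numbers (LSNs), and reads return records
    in LSN order. Illustrates the abstraction, not the distributed
    implementation."""
    def __init__(self):
        self._records = []
        self._lock = threading.Lock()

    def append(self, payload):
        with self._lock:                 # one sequencer => a total order
            lsn = len(self._records)     # next log sequence number
            self._records.append((lsn, payload))
            return lsn

    def read(self, from_lsn=0):
        """Replay records starting at from_lsn, in order."""
        return self._records[from_lsn:]

log = ToyLog()
for event in ["user_signup", "payment", "refund"]:
    log.append(event)
lsns = [lsn for lsn, _ in log.read()]   # dense, strictly increasing
```

Every workload in the list above — event streams, replication pipelines, transaction logs — reduces to exactly this append/replay interface, which is why one system can serve them all.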
In this post, I would like to elaborate on how agent-computing projects can be more convincing in the eyes of a policymaker. In my opinion, there are four major aspects that agent-based modelers can exploit for this purpose: (1) causality and detail, (2) scalability and response, (3) unobservability and counterfactuals, and (4) separating policy design from implementation.
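Point (3) is the easiest to demonstrate in code: because an agent-based model is a simulator, a counterfactual is just a re-run with one parameter changed. The toy model below — all names and numbers illustrative — spreads a binary state (say, an economic shock) through randomly meeting agents, then compares a baseline run against an “intervention” run:

```python
import random

def run_abm(n_agents=200, p_transmit=0.1, steps=3000, seed=0):
    """Minimal agent-based model: each step, two random agents meet,
    and if exactly one carries the state, it transmits with
    probability p_transmit. Returns the final count of carriers."""
    rng = random.Random(seed)
    infected = [False] * n_agents
    infected[0] = True
    for _ in range(steps):
        a, b = rng.randrange(n_agents), rng.randrange(n_agents)
        if infected[a] != infected[b] and rng.random() < p_transmit:
            infected[a] = infected[b] = True
    return sum(infected)

baseline = run_abm(p_transmit=0.5)
policy = run_abm(p_transmit=0.05)  # counterfactual: intervention cuts contact risk
```

The same fixed seed makes the two runs differ only in the policy parameter — precisely the kind of clean, explainable counterfactual comparison a policymaker can act on.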
Keystone Stream Processing Platform is Netflix’s data backbone and an essential piece of infrastructure that enables engineering data-driven culture. While Keystone focuses on data analytics, it is worth mentioning there is another Netflix homegrown reactive stream processing platform called Mantis that targets operational use cases.
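The source/transform/sink shape that platforms like Keystone operationalize at massive scale can be shown in miniature with Python generators. This is a toy illustration of the pipeline pattern, not Keystone’s API, and the event schema is made up:

```python
# A stream-processing pipeline in miniature: source -> transform -> sink.
def source(events):
    for e in events:          # in production: a Kafka-style event stream
        yield e

def transform(stream):
    # Keep only playback events and project out the field we need.
    for e in stream:
        if e["type"] == "playback":
            yield e["title"]

def sink(stream):
    # Aggregate: count plays per title.
    counts = {}
    for title in stream:
        counts[title] = counts.get(title, 0) + 1
    return counts

events = [
    {"type": "playback", "title": "A"},
    {"type": "login",    "title": None},
    {"type": "playback", "title": "A"},
    {"type": "playback", "title": "B"},
]
counts = sink(transform(source(events)))  # {'A': 2, 'B': 1}
```

The platform’s job is everything this sketch omits: partitioning the stream across machines, checkpointing, backpressure, and recovery when workers fail.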
One of the major issues for potential new analysts is the difficulty of obtaining high-quality data at scale. The same problem exists for teachers who are responsible for training new data scientists. The World Cup data, along with our continuing release of women’s football data, helps address this problem directly.