Sidewalk Labs, the mysterious urban innovation group of Google parent company Alphabet, has spun out a startup with ambitious plans to rethink health care.
The startup, known as Cityblock and internally at Sidewalk as “CareLab,” is focusing on low-income communities with serious health problems.
The company is working with the USC Center for Body Computing to offer seniors free rides, study their behavior, and learn how to build better services for them.
AI also promises to make human labor smarter and more efficient, even something as traditional as small-scale farming. To that end, researchers have developed a smartphone-based program that can automatically detect diseases in the cassava plant—the most widely grown root crop on Earth—with darn near 100 percent accuracy. It’s a glimpse at a future in which farmers in the developing world trade the expertise of a handful of specialists for increasingly omnipresent and powerful technology.
The most impressive bit about the technology is that the neural network that powers it runs entirely on the smartphone, no cloud computing or hulking processors required, as the researchers detail in a preprint paper to be published in Frontiers in Plant Science. That’s thanks to TensorFlow, Google’s open source machine learning library, which gave rise to Inception v3, a slimmed-down network the researchers deployed. “Some neural networks require hundreds of millions of parameters, and just the file size you would need to store those is beyond what you could include in an app,” says Google’s Pete Warden, tech lead on TensorFlow Mobile. “This network only has around 25 million.”
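Warden’s point about file size is simple arithmetic. A back-of-the-envelope sketch (the float32 and 8-bit quantization figures are our illustrative assumptions, not from the article):

```python
# Why parameter count limits on-device models: weight storage scales
# linearly with parameter count times bytes per parameter.

def model_size_mb(n_params: int, bytes_per_param: int) -> float:
    """Approximate on-disk size of a network's weights in megabytes."""
    return n_params * bytes_per_param / 1e6

inception_v3 = 25_000_000    # ~25 million parameters, per the article
huge_network = 500_000_000   # "hundreds of millions of parameters"

print(model_size_mb(inception_v3, 4))   # ~100 MB as 32-bit floats
print(model_size_mb(inception_v3, 1))   # ~25 MB with 8-bit quantization
print(model_size_mb(huge_network, 4))   # ~2000 MB: impractical in an app
```

Even the 25M-parameter network is only phone-friendly after the kind of slimming and quantization mobile frameworks apply.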
CrowdFlower, the essential human-in-the-loop Artificial Intelligence platform for data science and machine learning teams, today announced a series of AI workshops created to help computer engineers learn the basics of machine learning platforms and build AI models for real-world applications. The move by CrowdFlower addresses the growing shortage of AI skills and expertise by offering day-long, hands-on training focused on areas such as deep learning, computer vision, transfer learning, and data annotation.
from EPJ Data Science; Graham McNeill, Jonathan Bright and Scott A. Hale
The emergence of large stores of transactional data generated by the increasing use of digital devices presents a huge opportunity for policymakers to improve their knowledge of the local environment and thus make more informed and better decisions. A research frontier is hence emerging that involves exploring the types of measures that can be drawn from data stores such as mobile phone logs, Internet searches and contributions to social media platforms, and the extent to which these measures are accurate reflections of the wider population. This paper contributes to this research frontier by exploring the extent to which local commuting patterns can be estimated from data drawn from Twitter. It makes three contributions in particular. First, it shows that heuristics applied to geolocated Twitter data offer a good proxy for local commuting patterns, one that outperforms the current best method for estimating these patterns (the radiation model). This finding is of particular significance because we make use of relatively coarse geolocation data (at the city level) and simple heuristics based on frequency counts. Second, it investigates sources of error in the proxy measure, showing that the model performs better on short trips with higher volumes of commuters; it also looks at demographic biases but finds that, surprisingly, measurements are not significantly affected by the fact that the demographic makeup of Twitter users differs significantly from that of the population as a whole. Finally, it looks at potential ways of going beyond simple frequency heuristics by incorporating temporal information into the models. [full text]
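The paper’s “simple heuristics based on frequency counts” can be illustrated with a toy sketch (our own construction, not the authors’ code): infer each user’s home and work cities from where they tweet most outside and during working hours, then count (home, work) pairs into an origin-destination matrix.

```python
from collections import Counter, defaultdict

# Toy frequency-count commuting heuristic. Each tweet is (user, city, hour).
# A user's workplace is the city they tweet from most during working hours;
# their home is the city they tweet from most at other times. Counting
# (home, work) pairs over users yields an origin-destination matrix.

def commuting_matrix(tweets):
    home_counts = defaultdict(Counter)
    work_counts = defaultdict(Counter)
    for user, city, hour in tweets:
        if 9 <= hour < 17:                  # working hours (an assumption)
            work_counts[user][city] += 1
        else:
            home_counts[user][city] += 1
    od = Counter()
    for user in home_counts.keys() & work_counts.keys():
        home = home_counts[user].most_common(1)[0][0]
        work = work_counts[user].most_common(1)[0][0]
        od[(home, work)] += 1
    return od

tweets = [
    ("alice", "Oxford", 22), ("alice", "Oxford", 7), ("alice", "London", 11),
    ("bob", "Reading", 20), ("bob", "London", 10), ("bob", "London", 14),
]
print(commuting_matrix(tweets))
```

City-level geolocation is all this needs, which is why the paper’s finding that such coarse counts beat the radiation model is notable.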
White House Cyber Coordinator Rob Joyce is seeking agency and department input on potential replacements for what he calls a “flawed” Social Security number system.
“I feel very strongly that the Social Security number has outlived its usefulness,” said Joyce, who spoke at the Washington Post Cyber Summit on Tuesday. “We’ve called for the departments and agencies to bring forward their ideas.”
Today marks the 10-year anniversary of Amazon’s Dynamo whitepaper, a milestone that made me reflect on how much innovation has occurred in the area of databases over the last decade, and a good reminder of why taking a customer-obsessed approach to solving hard problems can have lasting impact beyond your original expectations.
It all started in 2004 when Amazon was running Oracle’s enterprise edition with clustering and replication. We had an advanced team of database administrators and access to top experts within Oracle. We were pushing the limits of what was a leading commercial database at the time and were unable to sustain the availability, scalability and performance needs that our growing Amazon business demanded.
We are excited to announce the release of Scoping the University RDM Service Bundle, the second report in OCLC Research’s four-part series exploring the realities of research data management. This report examines the RDM capacity acquired by four research universities in four different national contexts, highlighting key factors that shaped the contours of this capacity, and offering 13 takeaways that provide useful starting points for institutions as they consider their own RDM services.
The Realities of Research Data Management, an OCLC Research project, explores the context and choices research universities face in building or acquiring RDM capacity. Findings are derived from detailed case studies of four research universities: University of Edinburgh, University of Illinois at Urbana-Champaign, Monash University, and Wageningen University and Research. Future reports will focus on the incentives for acquiring RDM capacity, and sourcing and scaling RDM services.
The amount of information that a population shares is directly proportional to the quality of its democracy. And, as a corollary: the more viewpoints that get exposed, the greater the collective empathy and understanding.
That math has worked out well for Facebook for most of its history, as it convinced its users to share more information in the name of community and openness. It found its ultimate expression in the Arab Spring, when protesters around the Middle East connected over Facebook to have conversations they couldn’t have in public. In retaliation, some of those threatened governments shut down the internet, only proving the point: good guys spread information, and bad guys try to stop it.
But as Facebook has grown, that equation has become less certain.
With the opening of its newest AI research lab in Montreal, Facebook is tapping into Montreal’s thriving AI ecosystem. The fourth Facebook AI Research (FAIR) lab will be led by Dr. Joelle Pineau, an expert in the field of reinforcement learning and co-director of the Reasoning and Learning Lab at McGill University’s School of Computer Science.
Intel CEO Brian Krzanich took to his blog to tout Intel’s collaboration with Waymo on self-driving car technology.
The gist of his message is that Waymo’s new vehicles, self-driving Chrysler Pacifica hybrid minivans, “feature Intel-based technologies for sensor processing, general compute and connectivity, enabling real-time decisions for full autonomy in city conditions.”
Left unsaid in the announcement is what this new partnership entails, and on whose chip Waymo is running the heavy-duty deep-learning algorithms.
On the heels of the federal government’s new guidance for industry and state governments on automated driving systems, emerging mobility companies are coming to the nation’s leading test facility to pilot their connected and automated vehicle technologies.
Five companies from across the country will test and develop mobility solutions with University of Michigan students as a part of U-M’s incubator—TechLab at Mcity.
TechLab is managed by U-M’s Center for Entrepreneurship, in partnership with Mcity, a public-private partnership led by U-M to accelerate advanced mobility vehicles and technologies.
TechCrunch has learned that Amazon has acquired Body Labs, a company with a stated aim of creating true-to-life 3D body models to support various B2B software applications, such as virtually trying on clothes or photorealistic avatars for gaming.
One source suggested the price tag Amazon paid for Body Labs could be $100M+. However, a second well-placed source suggested it’s closer to $70M than $100M, so we’re pegging it at between $50M and $70M.
Measuring sustainability efforts in the business world can be a difficult task. The private sector has become increasingly aware of the need to move towards an inclusive economy, which leaves companies, investors and governments asking for guidance on how to do just that.
To address this quandary, Yale University has launched the Yale Initiative on Sustainable Finance (YISF) in collaboration with the World Business Council for Sustainable Development (WBCSD). The initiative hopes to improve corporate reporting and help develop and disseminate new ideas broadly through research, workshops and roundtables.
Oracle came late to the cloud and it’s been playing catch-up in recent years trying to add a wide range of services that customers are going to be demanding from a cloud vendor. To that end, the company added artificial intelligence as a service to its dance card today at Oracle OpenWorld.
The company has been busy today with a flurry of announcements including a new autonomous database as a service and a shiny new blockchain service. The artificial intelligence service is an extension of these announcements.
When an earthquake hits, it sends a wave of energy through Earth’s crust. The wave’s speed depends on the crystalline structure and orientation of the minerals within the layers of rock; this directionally dependent variation in velocity is called seismic anisotropy.
Measuring seismic anisotropy can help scientists predict how earthquakes will propagate, determine what minerals lie within Earth’s crust, and examine how the crust deformed over geological timescales. But researchers often lack detailed measurements of the anisotropic characteristics of a given region. Now, by comparing anisotropic measurements from geologic regions worldwide, researchers have compiled a data set that could improve these predictions.
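Directional velocity variation of this kind is commonly parameterized to first order (a standard seismology convention, not a detail from this news item) as a 2θ dependence of wave speed on propagation azimuth:

```python
import math

# First-order azimuthal anisotropy sketch:
#   v(theta) = v0 * (1 + A * cos(2 * (theta - phi)))
# where v0 is the mean wave speed, A the anisotropy strength, and phi
# the azimuth of the fast direction. Values below are illustrative.

def wave_speed(v0, A, phi, theta):
    """Wave speed (km/s) as a function of propagation azimuth theta (rad)."""
    return v0 * (1 + A * math.cos(2 * (theta - phi)))

v0, A, phi = 3.5, 0.04, 0.0   # 3.5 km/s mean, 4% anisotropy, fast axis at 0

print(wave_speed(v0, A, phi, 0.0))           # fastest, along the fast axis
print(wave_speed(v0, A, phi, math.pi / 2))   # slowest, perpendicular to it
```

The cos(2θ) term captures the key symmetry: speeds repeat every 180 degrees, since a wave traveling forward or backward along the same axis sees the same rock fabric.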
New York, NY. WTF: What’s the Future and Why It’s Up to Us is Tim O’Reilly’s new book and the title of this presentation. October 11, starting at 3:30 p.m., Data & Society (36 W. 20th St). [rsvp required]
The Google AI Residency Program (formerly known as the Google Brain Residency Program) is a 12-month role designed to advance your career in machine learning research. Residents will work alongside distinguished scientists from various Research teams. The goal of the residency is to help residents become productive and successful AI researchers. Deadline for applications is January 8, 2018.
At last week’s Moore-Sloan Research Lunch Seminar, Djellel Difallah explained his data-driven approach to analyzing platforms like Amazon Mechanical Turk (MTurk) and Wikidata.
Teachable Machine is an experiment that makes it easier for anyone to start exploring how machine learning works. It lets you teach a machine using your camera – live in the browser, no coding required. It’s built with a library called deeplearn.js, which makes it easier for any web developer to get into machine learning, by training and running neural nets right in the browser.
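The teach-by-example idea behind tools like this can be sketched as nearest-neighbor classification over feature vectors. This toy Python version (our illustration, not Teachable Machine’s actual implementation) labels a new input with the label of its closest stored example:

```python
import math

# Minimal nearest-neighbor "teach by example" classifier: each training
# example is an embedding vector plus a label; a query gets the label of
# the stored example nearest to it in Euclidean distance.

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(examples, query):
    """examples: list of (embedding, label) pairs; returns nearest label."""
    return min(examples, key=lambda ex: euclidean(ex[0], query))[1]

# Two taught classes, represented by (hypothetical) 2-D embeddings.
examples = [
    ([0.9, 0.1], "wave"),
    ([0.1, 0.9], "thumbs-up"),
]
print(classify(examples, [0.8, 0.2]))  # → wave
```

No gradient training happens at “teach” time in this sketch; adding an example is just appending to the list, which is what makes the interaction feel instant.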
“We’ve had a number of tickets recently asking about running Jupyter Notebooks on Legion/Grace. Until the architecture of the Jupyter Notebook changes this will never be a good/safe idea. This sparked a discussion which descended into an argument between James and myself on the internal Slack about whether it is appropriate to encourage new researchers to use Jupyter notebooks.”