Replacing the doctor with an intelligent medical robot is a recurring theme in science fiction, but the idea of individualised medical advice from digital assistants like Alexa or Siri, supported by self-surveillance smartphone data, no longer seems implausible. A scenario in which medical information, gathered at the point of care, is analysed using sophisticated machine-learning algorithms to provide real-time actionable analytics seems to be within touching distance. The creation of data-driven predictions underpins personalised medicine and precision public health. Medical practice has so far been largely unchanged by the digital revolution that has disrupted so many other industries, but perhaps artificial intelligence (AI) will provide the improvements in medical care and research promised for so long.
The course of history is often hidden in government archives. Now statisticians have worked out how to extract the most significant events using data-mining techniques.
The year in self-driving car news started with Mayor Bill Peduto hoping for a better relationship with Uber and ended with policy recommendations from Carnegie Mellon University students on using autonomous vehicles to improve access to public transit.
In between, the city’s self-driving scene grew by two. Ford invested $1 billion into Pittsburgh self-driving startup Argo AI and Aurora Innovation launched, bringing the number of autonomous vehicle testers in the city up to five — Aptiv (formerly Delphi), Argo AI, Aurora Innovation, CMU and Uber.
But there’s still a way to go before we’re all multitasking as our cars drive themselves. For one thing, laws need to be created. And the technology needs to get there, too.
Nearly 20 years after file sharing upended the recorded music industry, Canadian musicians are looking to digital technology in a bid to recoup money they’ve long been leaving on the table.
SOCAN and Re:Sound – two Canadian licensing agencies that collect that money for musicians – spent 2017 building world-leading partnerships that will help them better scan audio and video content online and on the radio, ensuring copyright holders are making as much money as possible while keeping an eye out for future stars, too.
Mark Magellan, a writer and designer at IDEO U, puts it this way: “To tell a story that someone will remember, it helps to understand his or her needs. The art of storytelling requires creativity, critical-thinking skills, self-awareness, and empathy.”
All those traits are fundamentally human, but as artificial intelligence (AI) becomes more commonplace, even experts whose jobs depend on possessing those traits — people like Magellan — foresee it playing a bigger role in what they do.
MinneAnalytics is partnering with Hamline University to sponsor analytics competitions at the high school level. Our mission is focused on promoting the data sciences, and what better way than to help build a pipeline of interested students? A pilot competition is being organized for April of next year, with a broader reach coming during the 2018/2019 school year.
As New York Mayor Bill de Blasio’s first term draws to a close, new laws passed in New York City in 2017 have made the metropolis an international trailblazer in open government data and algorithmic transparency.
One bill mandates more transparency for how New York uses algorithms in decision-making, creating a task force to examine the issue. The other bill gives the public more ability to hold the city accountable for the implementation of its landmark open data legislation.
While the algorithmic transparency bill that passed has flaws, it sets a new bar for open government in the 21st century. The New York Civil Liberties Union praised the passage of the final legislation as the “first in the nation” to recognize that algorithmic bias “must be subject to public scrutiny and a mechanism to remedy flaws and biases.”
You might not suspect that the success of the emerging field of precision medicine depends heavily on the couriers who push carts down hospital halls.
But samples taken during surgery may end up in poor shape by the time they get to the pathology lab — and that has serious implications for patients as well as for scientists who want to use that material to develop personalized tests and treatments that are safer and more effective.
Consider the story of a test that’s commonly used to choose the right treatment for breast cancer patients. About a decade ago, pathologists realized that the HER2 test, which looks for a protein that promotes the growth of cancer cells, was wrong about 20 percent of the time. As a result, some women were getting the wrong treatment. The trouble wasn’t with the test itself — problems arose because the samples to be tested weren’t handled carefully and consistently.
The Greenland Ice Sheet, like other ice sheets, is a system in motion, with the melted water playing a key role in how portions of the ice end up sliding off the land and into the ocean. This contributes to sea level rise, a global concern that scientists say is occurring at a more rapid rate than in the past.
But “we don’t really have a very good way of predicting, how quickly does an ice sheet lose mass? How much of the ice sheet will disappear in certain types of temperature conditions?” [Matt] Covington said.
Dartmouth College, Department of Psychological and Brain Sciences, Contextual Dynamics Laboratory
“HyperTools is a library for visualizing and manipulating high-dimensional data in Python. It is built on top of matplotlib (for plotting), seaborn (for plot styling), and scikit-learn (for data manipulation).”
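The core idea behind HyperTools — project high-dimensional observations down to two or three dimensions before plotting — can be sketched directly with scikit-learn, which HyperTools uses under the hood. This is an illustrative sketch on synthetic data, not HyperTools' own API (the library wraps roughly this pipeline in a single `hyp.plot(data)` call):

```python
# Sketch of the reduce-then-plot idea behind HyperTools:
# project 10-dimensional data down to 3 dimensions with PCA,
# the kind of reduction hypertools performs by default before plotting.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
data = rng.normal(size=(100, 10))  # 100 observations, 10 features

# Reduce to 3 components; the result is what would be handed
# to a 3D matplotlib scatter or trajectory plot.
reduced = PCA(n_components=3).fit_transform(data)
print(reduced.shape)  # (100, 3)
```

The reduced array can then be passed to any 3D plotting routine; HyperTools' contribution is bundling the reduction, alignment, and styling steps into one call.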
It’s an important question to answer. The market is so good for credible data scientists that you do have many options available to you. So, pick wisely. I co-authored a blog post on this subject that offers a few points to consider:
arXiv, Computer Science > Computation and Language; Tomas Mikolov, Edouard Grave, Piotr Bojanowski, Christian Puhrsch, Armand Joulin
Many Natural Language Processing applications nowadays rely on pre-trained word representations estimated from large text corpora such as news collections, Wikipedia and Web Crawl. In this paper, we show how to train high-quality word vector representations by using a combination of known tricks that are however rarely used together. The main result of our work is the new set of publicly available pre-trained models that outperform the current state of the art by a large margin on a number of tasks.
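Pre-trained word vectors like these are typically consumed by loading them into a lookup table and comparing words via cosine similarity. A toy sketch of that downstream usage, with made-up 4-dimensional vectors standing in for the paper's published 300-dimensional fastText embeddings:

```python
import numpy as np

# Toy embeddings for illustration only; real fastText vectors are
# 300-dimensional and loaded from the published .vec text files.
vectors = {
    "king":  np.array([0.8, 0.1, 0.6, 0.2]),
    "queen": np.array([0.7, 0.2, 0.6, 0.3]),
    "apple": np.array([0.1, 0.9, 0.0, 0.4]),
}

def cosine(a, b):
    """Cosine similarity: the standard metric for comparing word vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Semantically related words should score higher than unrelated ones.
print(cosine(vectors["king"], vectors["queen"]) >
      cosine(vectors["king"], vectors["apple"]))  # True
```

With real pre-trained vectors, the same similarity computation powers tasks like analogy solving and nearest-neighbour word lookup.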