NYU Data Science Newsletter – June 22, 2015

NYU Data Science Newsletter features journalism, research papers, events, tools/software, and jobs for June 22, 2015.


 
Data Science News



Data from Google Trends

GitHub, Google News Team


from June 17, 2015

Download and play with key datasets from Google Trends, curated by the News Lab at Google team.
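
For readers who want to start exploring right away, here is a minimal sketch of pulling one of the files into pandas. The repository path and file name below are assumptions for illustration; substitute any CSV actually listed in the repo.

```python
import pandas as pd

# Hypothetical path and file name: replace with any CSV from the
# News Lab's Google Trends repository on GitHub.
url = "https://raw.githubusercontent.com/googletrends/data/master/example.csv"

trends = pd.read_csv(url)
print(trends.head())
```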

 

Computers Are Learning How To Treat Illnesses By Playing Poker And Atari | FiveThirtyEight

FiveThirtyEight


from June 20, 2015

You’ve got to know when to hold ’em. Know when to fold ’em. And know when to walk away because the game you’re playing has been rendered trivial by an advanced algorithm.

Poker has been solved. Michael Bowling, a computer science professor at the University of Alberta — along with co-authors Neil Burch, Michael Johanson and Oskari Tammelin — published findings to that effect earlier this month in the journal Science. For a specific poker game — heads-up limit hold ’em — a computer algorithm is now indistinguishable from perfect.

 

Listen to Microsoft’s research EVP explain the value of deep learning, privacy and Bing | S C A L E – Medium

Medium, S C A L E, Derrick Harris


from June 21, 2015

As some readers of this publication might know, I was a writer at Gigaom until March 9, 2015. My former co-worker Barb Darrow and I did a weekly podcast where we discussed the week’s enterprise and webscale technology news and interviewed prominent figures in those spaces. Last week, Barb (who now writes for Fortune) and I kicked off version 2.0 of our podcast, which is now called The Datacenter Show.

Here is the first episode, which is available on SoundCloud and will soon be available on iTunes. In this episode, we discuss Microsoft’s big reorganization and why Apache Spark is so important, and interview Microsoft EVP of Technology and Research Harry Shum.

 

Researchers sequence and assemble first full genome of a living organism using technology the size of a smartphone

Ontario Institute for Cancer Research


from June 15, 2015

Researchers in Canada and the U.K. have for the first time sequenced and assembled de novo the full genome of a living organism, the bacterium Escherichia coli, using Oxford Nanopore’s MinION™ device, a genome sequencer that can fit in the palm of your hand.

The findings, which were published today in the journal Nature Methods, provide proof of concept for the technology, and the methods lay the groundwork for using it to sequence genomes in increasingly complex organisms, eventually including humans, said Dr. Jared Simpson, Principal Investigator at the Ontario Institute for Cancer Research and a lead author on the study.

 

Using big data could alert us to risks in the food supply chain

The Guardian, Guardian Sustainable Business


from June 16, 2015

As shoppers, we’ve become used to the reliable presence of brands in supermarkets. The idea of food scarcity and disruption to supplies doesn’t come into plans for our weekly food shop.

But the reality for many global food manufacturers is uncertainty. Chocolate production is one example. Some 40% of the world’s cocoa comes from the Ivory Coast, grown on farms with only a few hectares of cocoa trees. In China alone, US firm The Hershey Company estimates that sales of chocolate will grow 60% between 2014 and 2019 to a value of $4.3bn. This is partly thanks to a new-found love of chocolate among China’s growing middle classes.

But it’s not just chocolate. The problem is widespread, particularly with ingredients that only grow in specific climates, such as vanilla, tea, coffee and palm oil.

 

Computational Fact Checking from Knowledge Networks

PLOS One


from June 17, 2015

Traditional fact checking by expert journalists cannot keep up with the enormous volume of information that is now generated online. Computational fact checking may significantly enhance our ability to evaluate the veracity of dubious information. Here we show that the complexities of human fact checking can be approximated quite well by finding the shortest path between concept nodes under properly defined semantic proximity metrics on knowledge graphs. Framed as a network problem, this approach is feasible with efficient computational techniques. We evaluate this approach by examining tens of thousands of claims related to history, entertainment, geography, and biographical information using a public knowledge graph extracted from Wikipedia. Statements independently known to be true consistently receive higher support via our method than do false ones. These findings represent a significant step toward scalable computational fact-checking methods that may one day mitigate the spread of harmful misinformation.
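
The approach is concrete enough to sketch. The Python snippet below (using networkx) scores a claim linking two concepts by the cheapest path between them in a toy knowledge graph, penalizing paths that route through generic, high-degree hub nodes. This is a minimal sketch in the spirit of the paper's semantic proximity measure, not its Wikipedia-scale pipeline; the graph and the score mapping here are illustrative assumptions.

```python
import math
import networkx as nx

def truth_score(G, subject, obj):
    """Score a (subject, obj) claim by the cheapest path between the two
    concepts, where entering an intermediate node costs log(degree).
    High-degree hubs are penalized; a direct edge scores 1.0."""
    def cost(u, v, _edge_data):
        # Endpoints are free; intermediate nodes cost log of their degree.
        return 0.0 if v == obj else math.log(G.degree(v))

    try:
        total = nx.dijkstra_path_length(G, subject, obj, weight=cost)
    except nx.NetworkXNoPath:
        return 0.0
    return 1.0 / (1.0 + total)

# Toy knowledge graph; an edge means the two concepts are linked by a fact.
G = nx.Graph()
G.add_edges_from([
    ("Barack Obama", "Honolulu"), ("Honolulu", "Hawaii"),
    ("Barack Obama", "United States"), ("United States", "Hawaii"),
    ("United States", "Washington, D.C."), ("United States", "Canada"),
])

print(truth_score(G, "Barack Obama", "Hawaii"))            # specific path via Honolulu
print(truth_score(G, "Barack Obama", "Washington, D.C."))  # must pass through a hub
```

The log-degree penalty is what keeps the path search from shortcutting through uninformative hub nodes, which is how the method approximates the judgment a human fact checker would make.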

 

Republicans lag behind on voter information – LA Times

Los Angeles Times


from June 19, 2015

… The blueprint called for a nationally synchronized technology platform to collect every piece of information obtained about voters by every Republican running for office, whether for city council or the U.S. Senate. The eventual presidential nominee would be endowed with reams of real-time data that could be used to target voters with unprecedented efficiency and precision.

But that promised innovation has run into the headwinds of contract disputes, suspicions about data firms’ political loyalties and friction with the tea party. Voter information is being collected out in the field by a jumble of firms not always working in concert.

 

Joining The “Big Data Party,” Fred Hutch Hires Broad’s Top Techie

Xconomy


from June 16, 2015

The Fred Hutchinson Cancer Research Center in Seattle said today it has hired a new chief information officer, Matthew Trunnell, as the biomedical institution moves into an era of deep genomic sequencing and other information-heavy approaches to help find new, targeted treatments for cancer and other diseases.

A new president and director, D. Gary Gilliland, has been in place at the Hutch, as it’s known, for less than six months and is bringing big changes. Last week, the institution announced the hire of a new head of licensing and commercialization, Nicole Robinson, a crucial role at a place that hopes to reap financial and not just intellectual and societal rewards from its work in cancer and immunology.

 

Can We Design Trust Between Humans and Artificial Intelligence?

Fast Company, Co. Design


from June 19, 2015

For many years, interacting with artificial intelligence has been the stuff of science fiction and academic projects, but as smart systems take over more and more responsibilities, replace jobs, and become involved in complex, emotionally charged decisions, figuring out how to collaborate with these systems has become a pragmatic problem that needs pragmatic solutions.

Machine learning and cognitive systems are now a major part of many products people interact with every day, but to fully exploit the potential of artificial intelligence, people need much richer ways of communicating with the systems they use. The role of designers is to figure out how to build collaborative relationships between people and machines that help smart systems enhance human creativity and agency rather than simply replace them.

 

Can Phone Data Detect Real-time Unemployment?

iconnect007


from June 15, 2015

If you leave your job, chances are your pattern of cellphone use will also change. Without a commute or workspace, it stands to reason that most people will make a higher proportion of their calls from home, and they might make fewer calls, too.

Now a study co-authored by MIT researchers shows that mobile phone data can provide rapid insight into employment levels, precisely because people’s communications patterns change when they are not working.
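
As a rough illustration of the kind of feature such a study could extract (a sketch under assumed data, not the researchers' actual pipeline), the pandas snippet below takes call records with hypothetical user_id, timestamp, and tower_id columns and computes, per user and week, total call volume and the share of calls placed from a crude "home" tower.

```python
import pandas as pd

# Hypothetical call-detail-record file: one row per outgoing call,
# with columns user_id, timestamp, tower_id.
calls = pd.read_csv("cdr.csv", parse_dates=["timestamp"])

# Crude "home" proxy: the tower a user connects to most often at night.
# Users with no night calls get no home tower, so from_home stays False.
night = calls[calls["timestamp"].dt.hour.isin(range(0, 6))]
home_tower = night.groupby("user_id")["tower_id"].agg(lambda s: s.mode().iloc[0])

calls["week"] = calls["timestamp"].dt.to_period("W")
calls["from_home"] = calls["tower_id"].eq(calls["user_id"].map(home_tower))

# Per user-week features: call volume and share of calls made from "home".
features = calls.groupby(["user_id", "week"]).agg(
    n_calls=("tower_id", "size"),
    home_share=("from_home", "mean"),
)
print(features.head())
```

A week-over-week jump in home_share combined with a drop in n_calls is the sort of behavioral shift the study links to job loss.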

 
