Data Science newsletter – September 22, 2018

Newsletter features journalism, research papers, events, tools/software, and jobs for September 22, 2018

GROUP CURATION: N/A

 
 
Data Science News



Normal Catastrophes: Large-Scale Disasters Make Their Way into Risk Management Consciousness

Global Association of Risk Professionals, Katherine Heires



“Catastrophic risk is definitely on the rise,” says Howard Kunreuther, co-director of the Risk Management and Decision Processes Center at the University of Pennsylvania’s Wharton School. In research conducted with more than 100 S&P 500 companies from 2010 to 2011, with follow-up interviews continuing through 2017, he found that virtually all are now paying attention to the possibility of catastrophic risk and adverse events. Ten or 15 years ago, they did not.

Such events are often related to, but are not limited to, weather and climate change. Their costs can be physical, financial and reputational, impacting individual companies or entire industries.

“Companies and their CEOs are really concerned about these issues,” says Kunreuther, co-author, with Wharton School colleague Michael Useem, of the recently published Mastering Catastrophic Risk: How Companies Are Coping with Disruption.


Ad industry leans into machine learning

Axios, Sara Fischer



A new study from one of the world’s biggest ad firms, its digital ad agency and a programmatic ad tech company suggests that digital ad campaigns optimized by machine learning tools outperformed campaigns managed by humans over the course of one month.


Adobe Will Buy Marketo For $4.75 Billion

AdExchanger, James Hercher



Adobe said Thursday it has agreed to acquire the B2B marketing platform Marketo for $4.75 billion.

It’s the biggest deal in Adobe’s history. And the cloud technology giant may be shelling out so much because Marketo’s B2B category strength could help as Adobe moves into media buying and ecommerce, à la TubeMogul and Magento, where the distinction between marketing silos has faded.


Brown awarded $3.5M to speed up atomic-scale computer simulations

Brown University, News from Brown



With a new grant from the U.S. Department of Energy, a Brown University-led research team will use machine learning to speed up atom-level simulations of chemical reactions and the properties of materials.

“Simulations provide insights into materials and chemical processes that we can’t readily get from experiments,” said Andrew Peterson, an associate professor in Brown’s School of Engineering who will lead the work.
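
The announcement doesn’t include any code, but the general pattern behind this kind of work, training a fast surrogate model on the outputs of expensive reference calculations, can be sketched roughly as follows. The descriptors, regressor, and data here are illustrative assumptions, not the Brown team’s actual method.

```python
# Illustrative sketch only: fit a cheap surrogate that maps structural
# descriptors of an atomic configuration to the energy obtained from an
# expensive reference simulation, then use the surrogate for fast screening.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Hypothetical training data: each row is a descriptor vector for one
# configuration; y holds energies from costly reference calculations.
X_train = rng.normal(size=(200, 8))       # stand-in structural descriptors
y_train = np.sin(X_train).sum(axis=1)     # stand-in "energies"

surrogate = GaussianProcessRegressor(kernel=RBF() + WhiteKernel())
surrogate.fit(X_train, y_train)

# New configurations can now be scored almost instantly, with an uncertainty
# estimate that suggests when to fall back to the expensive simulation.
X_new = rng.normal(size=(5, 8))
energy_pred, energy_std = surrogate.predict(X_new, return_std=True)
print(energy_pred, energy_std)
```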


City of New York Releases Annual Open Data Report, Featuring New York City Data at Work

City of New York, Department of Information Technology & Telecommunications



The de Blasio administration today announced the release of its annual update on Open Data for All and the NYC Open Data Plan, which this year pulls back the curtain on digital government to show how the City collects data, how data powers city operations, and how publishing that data publicly creates value in communities across the five boroughs.

Over the last year, dozens of agencies have released datasets to the NYC Open Data Portal, bringing increased transparency to all aspects of the City’s operations, from the rental market to community projects and more.


How Can Social Science Become More Solutions-Oriented?

Kellogg Insight, Noshir Contractor and Duncan Watts



Noshir Contractor, a professor of behavioral sciences at the McCormick School of Engineering, as well as a professor of management and organizations at the Kellogg School and communication studies in the School of Communication at Northwestern, recently sat down with Duncan Watts, principal researcher at Microsoft and an expert in how social influence spreads on networks. They discussed whether social scientists are doing enough to solve problems in the world around us—and what researchers and businesses can do to push the field forward.


A model to predict Airbnb distribution in cities

SpringerOpen blog, Giovanni Quattrone



The distribution of Airbnb listings has been the topic of much discussion among citizens and policy-makers, particularly in major cities. In an article published in EPJ Data Science, Giovanni Quattrone and colleagues looked into the many factors determining the spatial penetration of Airbnb in urban centers and developed a model that aims to predict this distribution in other cities. Among other factors, the presence of creative communities emerges as an important driver of adoption of the housing platform.
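
The article doesn’t show the model itself, but the general shape of the approach, fitting a regression on neighborhood features in a city where Airbnb adoption is known and predicting it elsewhere, might look something like the sketch below. The column names, files, and choice of a Poisson regressor are assumptions for illustration, not the authors’ specification.

```python
# Rough illustration only: predict per-neighborhood Airbnb listing counts
# from neighborhood features, training on one city and applying to another.
import pandas as pd
from sklearn.linear_model import PoissonRegressor

# Hypothetical feature columns and input files.
FEATURES = ["creative_jobs_share", "median_income", "distance_to_center",
            "hotel_density", "population_density"]

train_city = pd.read_csv("city_a_neighborhoods.csv")
target_city = pd.read_csv("city_b_neighborhoods.csv")

# Listing counts are non-negative, so a Poisson GLM is a natural baseline.
model = PoissonRegressor(alpha=1.0, max_iter=1000)
model.fit(train_city[FEATURES], train_city["airbnb_listings"])

target_city["predicted_listings"] = model.predict(target_city[FEATURES])
print(target_city[["neighborhood", "predicted_listings"]].head())
```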


Compassionate intelligence – Can machine learning bring more humanity to health care?

Stanford Medicine magazine, Kris Newby



[Stephanie] Harman is a co-leader of a Stanford pilot program that aims to change that. Each morning she receives a priority report from an intelligent computer program that, every 24 hours, analyzes which patients under the care of the general medicine physicians would benefit from palliative care. The tool helps her spend more time with patients and less on record reviews and, most importantly, it leads to better endings.

This is one of many Stanford Medicine projects that combine artificial intelligence technologies with medical expertise to help doctors make faster, more informed and more humane decisions. The researchers hope these tools will let physicians spend less time in front of computer screens and more time doing what they love: caring for patients.


Footing The Bill For Climate Change: ‘By The End Of The Day, Someone Has To Pay’

NPR, Morning Edition, Colin Dwyer



[Hurricane] Florence isn’t a fluke. Scientists have predicted that climate change will continue to exacerbate massive storms, and that it will continue to cause conditions ripe for wildfires and other natural disasters.

As the risks of these disasters grow, the insurance industry is adapting with them — and consumer advocates, regulators and insurance researchers alike fear that the brunt of the bills will increasingly fall on the shoulders of low-income homeowners. [audio, 3:40]


Why America Should Embrace Market Surveillance in Sports Betting Before It’s Too Late

The New York Times, Dealbook blog, Scott Shechtman and Tony Sio



The cost to the financial industry of catching up with the bad behavior was significantly higher than if it had invested in defensive technologies at the start. The United States has taken years to start the Consolidated Audit Trail, and it is estimated to have cost the industry over $50 million in its first year.

Many systems for monitoring the integrity of sports betting around the world look at odds or prices across bookmakers. This type of approach was abandoned in financial markets over 15 years ago because it does not provide the granularity needed to properly police trading.
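
For context, the cross-bookmaker comparison described above amounts to a fairly coarse outlier check, something like the toy sketch below (the data layout and alert threshold are invented for illustration); the article’s point is that modern surveillance needs far finer, order-level granularity than this.

```python
# Toy illustration of odds-comparison monitoring: flag any bookmaker whose
# implied probability for an outcome drifts far from the market consensus.
import pandas as pd

# Hypothetical snapshot: one row per (bookmaker, outcome) with decimal odds.
odds = pd.DataFrame({
    "bookmaker": ["A", "B", "C", "D"],
    "outcome": ["home_win"] * 4,
    "decimal_odds": [2.10, 2.05, 2.15, 3.40],
})
odds["implied_prob"] = 1.0 / odds["decimal_odds"]

# Compare each book against the median implied probability for the outcome.
consensus = odds.groupby("outcome")["implied_prob"].transform("median")
odds["deviation"] = (odds["implied_prob"] - consensus).abs() / consensus

ALERT_THRESHOLD = 0.20   # arbitrary: flag >20% divergence from consensus
print(odds[odds["deviation"] > ALERT_THRESHOLD])
```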


Over 2,000 European AI experts join hands to challenge US, China in artificial intelligence

South China Morning Post, Alice Shen



“European artificial intelligence is at a crossroads given the huge investments in the technology in the United States and China,” said the Confederation of Laboratories for Artificial Intelligence Research in Europe (Claire), which met for the first time in Brussels last week.

The alliance urges the European Commission to implement an AI strategy for the EU as a whole along the lines of the US National AI Research and Development Plan that was released in late 2016, and China’s Next Generation AI Development Plan that was issued the following year.


Beijing recruits Hong Kong artificial intelligence start-up SenseTime to lead tech drive

South China Morning Post, Tony Cheung and Su Xinqi



SenseTime, founded by Chinese University professor Sean Tang Xiaoou and other academics, will join a bevy of technology giants including Baidu and Tencent in spurring the development of next-generation artificial intelligence.

On Thursday, China’s Minister of Science and Technology Wang Zhigang announced that his ministry would entrust SenseTime with establishing an “open innovation platform for next-generation AI” for intelligent vision.


AI offers a unique opportunity for social progress

The Economist, Open Voices, Mustafa Suleyman



My message is simple. First, the progress of science and technology is about to go through the greatest acceleration of all time. And second, this is the greatest opportunity we have had for generations to advance the causes of social justice, equality and the reduction of human suffering.


On Contested Ground, SKA Looks to the Heavens

Undark magazine, Thomas Lewton



A vast new radio telescope system promises new astronomical insights. But on the ground in South Africa’s Karoo, it’s stirring up a complex history.


Big data’s crucial role for future of ecology and conservation

University of Oxford, Oxford Science blog



New research published in Nature Ecology & Evolution from the Department of Zoology at Oxford University aims to show how big data can be used as an essential tool in the quest to monitor the planet’s biodiversity.

A research team from 30 institutions across the world, involving Oxford University’s Associate Professor in Ecology, Rob Salguero-Gómez, has developed a framework with practical guidelines for building global, integrated and reusable Essential Biodiversity Variables (EBV) data products.

They identified a ‘void of knowledge due to a historical lack of open-access data and a conceptual framework for their integration and utilisation’. In response, the team of ecologists came together with the common goal of examining whether it is possible to quantify, compile, and provide data on temporal changes in species traits to inform national and international policy goals.

 
Events



Speed Conference / Cornell Tech

James Grimmelmann and Helen Nissenbaum



New York, NY September 28-29, Cornell Tech, Bloomberg Center (2 West Loop Road). [free, registration required]


Girls in Tech Present: Women in Sports & Technology

Girls in Tech Los Angeles



Los Angeles, CA October 4, starting at 7 p.m., University of Southern California. “A discussion of the career opportunities and challenges of being a woman working at the intersection of sports and technology.” [$$]

 
Deadlines



SysML Conference

Stanford, CA March 31-April 2, 2019 at Stanford University. Deadline for paper submissions is September 28.
 
Tools & Resources



GroundAI

Taha Raslan



“We couldn’t find a place where we could share our final project paper, get pinpointed feedback to further develop it, read other students’ work, and raise discussions about other academic papers. Thereafter, we came up with GroundAI.”


Radix for R Markdown

RStudio Blog, JJ Allaire



Radix is “a new R Markdown format optimized for scientific and technical communication”


Technical update — Schema.org and Google Dataset Search

Dryad Data Repository, Dryad news and views



A core part of Dryad’s mission is to make our data available as widely as possible. Although most users find Dryad content through our website or via links from journal articles, many users also find Dryad content through search aggregators and other third-party services. For our content to be available to these external services, we follow the FAIR principle of Interoperability and make metadata available through a number of machine-readable mechanisms, including OAI-PMH, the DataONE API, and RSS.

This year, we added support for a new machine-readable mechanism, the Schema.org metadata format. This format was originally developed by representatives of major search engines, including Google, Bing, and Yahoo. It has recently been endorsed by a number of data repositories, including Dryad. The Schema.org metadata format allows us to embed machine-readable descriptions of data directly into the same web pages that users use to view Dryad content.
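
As a rough illustration of what that embedding looks like in practice, a dataset landing page can carry a Schema.org Dataset description as JSON-LD inside a script tag. The metadata values below are hypothetical, not an actual Dryad record.

```python
# Sketch of embedding a Schema.org "Dataset" description as JSON-LD in an
# HTML landing page; the metadata values below are made up for illustration.
import json

dataset_metadata = {
    "@context": "https://schema.org/",
    "@type": "Dataset",
    "name": "Example phenotype measurements",
    "identifier": "https://doi.org/10.5061/dryad.example",   # hypothetical DOI
    "description": "Data accompanying a published research article.",
    "creator": [{"@type": "Person", "name": "A. Researcher"}],
    "license": "https://creativecommons.org/publicdomain/zero/1.0/",
}

# Search engines read this block from the page's <head>, alongside the
# human-readable landing page that users see.
script_tag = (
    '<script type="application/ld+json">\n'
    + json.dumps(dataset_metadata, indent=2)
    + "\n</script>"
)
print(script_tag)
```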


How to Become a Better Software Developer: A Handbook on Personal Performance

7pace Blog



How exactly do developers go from capable to good? From good to great?

This handbook is meant to be a guide to developer performance: how to understand it, measure it, and improve it.


How Small Data Can Become Big Data’s Secret Weapon

Nielsen, Perspectives, Chris Fosdick and Dimitar Antov



In our experience with multinational brands, many large companies have started to neglect these secondary techniques in the mad dash to exploit big data. That’s why we need to emphasize that both types of data have their place, especially when they work together. For example, the benefit of primary research is that it helps you uncover the “why” behind the “buy.” Through primary research, companies can probe specific areas of interest in order to understand everything from what makes their brand sticky to the weird and wonderful ways consumers are actually using their products.

Companies can then take what they learn from small sample sizes and apply it to the much broader scope of big data. Tying these data sets together makes them superadditive, fancy math-speak for making the whole greater than the sum of its parts. Segmentation is a prime example of this. It’s also been a pillar of marketing strategy for decades. Big data doesn’t replace smart segmentation, but it can certainly make it more powerful.
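
One common way to make the two kinds of data “superadditive” is to learn segments from the small, rich survey sample and then train a classifier that assigns those segments using only variables available at big-data scale. The sketch below is illustrative; the features, file names, and model choices are invented, not Nielsen’s methodology.

```python
# Illustrative sketch: learn segments from a small survey sample, then train
# a classifier on behavioral features that also exist at big-data scale, so
# the survey-derived segments can be assigned to every customer on file.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

survey = pd.read_csv("survey_sample.csv")        # a few thousand respondents (hypothetical)
big = pd.read_csv("customer_behaviors.csv")      # millions of customers (hypothetical)

ATTITUDE_COLS = ["price_sensitivity", "brand_affinity", "novelty_seeking"]
BEHAVIOR_COLS = ["monthly_spend", "visits_per_week", "category_breadth"]  # present in both frames

# 1. Segment the small sample on the rich attitudinal questions ("the why").
survey["segment"] = KMeans(n_clusters=4, random_state=0).fit_predict(survey[ATTITUDE_COLS])

# 2. Learn to predict those segments from behaviors observable at scale.
bridge = RandomForestClassifier(n_estimators=200, random_state=0)
bridge.fit(survey[BEHAVIOR_COLS], survey["segment"])

# 3. Score the big dataset, attaching a segment label to each customer.
big["segment"] = bridge.predict(big[BEHAVIOR_COLS])
```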

 
Careers


Tenured and tenure-track faculty positions

Computational Social Science Professor



New York University Abu Dhabi, Social Science Division; Abu Dhabi, United Arab Emirates

Full-time positions outside academia

Senior / Lead / Principal Data Engineer, ML / Deep Learning (Einstein)



Salesforce; San Francisco, CA

Research Engineer



Monterey Bay Aquarium Research Institute; Moss Landing, CA

Postdocs

Postdoctoral Fellow



Sage Bionetworks; Seattle, WA
