NYU Data Science newsletter – November 11, 2015

NYU Data Science Newsletter features journalism, research papers, events, tools/software, and jobs for November 11, 2015

GROUP CURATION: N/A

 
Data Science News



Visual Tech and the Future of Apparel Shopping

Intervisual


from November 01, 2015

The panel’s topic was provocative: will computer vision exponentially increase clothing commerce? It required the two panelists, including our CEO and founder George Borshukov, to peer into the hard-to-predict future of how technology will reshape the way people shop for apparel and footwear.

Held earlier this year in New York City, the panel was part of a two-day summit organized by LDV Capital to explore how digital imaging and video technologies will empower or disrupt businesses and society.

Joining Borshukov on the panel was Alper Aydemir, co-founder of Volumental, the Swedish company whose technology powers 3D body scanning in stores. TechCrunch editor Jonathan Shieber moderated.

 

The Data Structures for Data Science Workshop: Exploring Common Data Structures Across Programming Languages and Packages

Berkeley Institute for Data Science, Stéfan Van Der Walt and Kyle Barbary


from November 09, 2015

On September 18 and 19, 2015, we brought together 60 community experts for the Data Structures for Data Science (DS4DS) Workshop. Data structures, the way in which (most often numerical) data is represented, are the foundation upon which computational tools, particularly those for data science, are built. Common data structures across programming languages and packages allow for interoperability and code reuse, yet many modern data structures are confined to a single language or computing environment. This hands-on technical workshop sought practical answers to questions about unifying data structures, with specific consideration given to storage formats, libraries and implementations, exchange protocols, and in-core versus out-of-core manipulation.
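
The interoperability point is concrete: a language-neutral storage format such as HDF5 lets an array written in one environment be read in another. A minimal sketch in Python (the file and dataset names are illustrative; assumes the h5py and NumPy packages):

    import h5py
    import numpy as np

    # Write a numerical array to an HDF5 file. HDF5 is a language-neutral
    # container, so the same file can be read from R, Julia, C, or MATLAB.
    data = np.random.rand(1000, 3)
    with h5py.File("measurements.h5", "w") as f:
        f.create_dataset("samples", data=data)

    # Read it back (here in Python, but equally from any HDF5-capable tool).
    with h5py.File("measurements.h5", "r") as f:
        samples = f["samples"][:]
    print(samples.shape)  # (1000, 3)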

 

Bringing Julia from beta to 1.0 to support data-intensive, scientific computing

Gordon and Betty Moore Foundation


from November 10, 2015

As data-driven research becomes more mainstream, the need for efficient and powerful scientific computing tools increases. To help meet this need, the Gordon and Betty Moore Foundation’s Data-Driven Discovery Initiative has granted Julia Computing $600,000 over the next two years to enable the Julia Language team to move its core open-source computing language and libraries into a first production version.

 

People are fed up with taking surveys, and that’s bad for science

Quartz, Aamna Mohdin


from November 10, 2015

You should probably start filling out those annoying government surveys. Science depends on it.

Regular surveys of households are crucial sources of data for measuring unemployment, poverty, health insurance coverage, inflation, and much else besides. These statistics help direct government funds and are also used by social scientists for their own research, which can influence government policy.

But in the past few decades, response rates to crucial household surveys in the US have dropped, a trend researchers call “the problem of unit nonresponse.” A new study—“Household Surveys in Crisis,” published in the Journal of Economic Perspectives—shows a marked increase in unit nonresponse to important surveys over time.
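
A stylized illustration of why unit nonresponse matters: if the people who skip a survey differ systematically from those who answer, the raw estimate drifts away from the true population value. A minimal sketch in Python, with entirely hypothetical numbers:

    # Hypothetical numbers: 60% of households respond, 40% do not.
    response_rate = 0.6
    # Suppose unemployment is 5% among responders but 8% among
    # nonresponders (nonresponse correlates with the outcome).
    rate_responders = 0.05
    rate_nonresponders = 0.08

    survey_estimate = rate_responders  # what the raw survey reports
    true_rate = (response_rate * rate_responders
                 + (1 - response_rate) * rate_nonresponders)

    print(survey_estimate)      # 0.05
    print(round(true_rate, 3))  # 0.062 -- the survey understates by 1.2 points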

 

The United States API, and how Remix uses it

Medium, Dan Getelman


from November 10, 2015

Open data is here, and it is being used to help build businesses and help cities make better decisions. For years, technologists have been clamoring for governments to open up more of their data. It’s now happening. Cities of all sizes, from San Francisco to Asheville, have open data portals filled with machine-readable data. Los Angeles, San Diego, and Philadelphia, among others, have even created executive Chief Data Officer positions responsible for opening data across departments. But now that the data is out there, how is it actually getting used?

I can tell you what we’re up to at Remix. We’re a quickly growing company that uses open data to help cities plan better public transit. Here are a couple of examples of how we take open data and make it available so cities can make better decisions. I’ve also linked to sources and places where you can find all the data we use, so be sure to follow the links!
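
Many of the portals mentioned (San Francisco’s among them) run on Socrata, which exposes datasets as JSON over simple SODA HTTP endpoints. A minimal sketch of pulling records from such a portal; the URL below is a placeholder, not a dataset Remix names:

    import requests

    # Placeholder SODA endpoint; substitute a real dataset ID from a
    # city portal such as data.sfgov.org or data.lacity.org.
    url = "https://data.example.gov/resource/abcd-1234.json"

    # SODA supports simple query parameters such as $limit and $where.
    resp = requests.get(url, params={"$limit": 100})
    resp.raise_for_status()
    records = resp.json()  # a list of dicts, one per row

    for row in records[:5]:
        print(row)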

 

EARL 2015 in Boston: R Conference write-up by an attendee

Mango Solutions, Ben Young


from November 10, 2015

A week ago I flew to Boston, Massachusetts for EARL 2015. This was my first business trip, and as such I was very excited. The conference, speakers, and attendees did not disappoint. Mango Solutions put on a great schedule of workshops and talks. I would recommend attending future EARLs to anyone using R professionally.

 

“Every data scientist in this country must take a data ethics class. Just because we can, doesn’t mean we should” – @DJ44 #FCNY @FastCompany

Twitter, DJ Patil


from November 10, 2015

.@JHanlon @Bill_Shapiro @FastCompany – Exactly! What we’ve called for is every training/edu program should have a data ethics curriculum.

 

Federal Crowdsourcing and Citizen Science Toolkit

USA.GOV


from August 11, 2015

Crowdsourcing and citizen science help federal agencies to innovate, collaborate and discover. In this toolkit, you will learn how to design and maintain projects. You can also read through case studies and access additional resources related to communities that practice crowdsourcing and citizen science.

 

The Future of Work: Empowering the Data-Driven Worker

Pacific Standard, Gina Neff


from November 10, 2015

… In the popular imagination, innovation means new computing technology. We now have an unprecedented ability to bring new types of data to the smallest of workplace decisions. Increases in computing power enable big data from the Internet of Things and powerful industry-specific software like the kind I study. Some warn that these technologies will lead to a new kind of Taylorism, in which employers monitor every keystroke in hopes of ever greater productivity. Such technology threatens to replace workers, dumb down their jobs, and link old forms of discrimination to new kinds of data. These are serious concerns that must be addressed for the future of good jobs. But they miss the point for growth.

 

The future of tech lies in data, privacy and robots

CIO


from November 09, 2015

The future of tech lies in artificial intelligence, data and privacy. That was the message delivered last week at EmTech, MIT Technology Review’s annual fall event held in Cambridge, Mass. … Yann LeCun, director of AI Research at Facebook, expanded on the topic of smart technology. LeCun was hired to head up and build an AI research group in what [Jason] Pontin, the magazine’s editor in chief, described as an “arms race” in Silicon Valley to determine how best to use artificial intelligence in the tech world. LeCun’s presentation, “Teaching Machines to Understand Us,” outlined the growing focus on creating a smarter user experience by helping computers learn to reason and, ultimately, understand us.

 

The Fortune 500 Teller

strategy+business


from November 09, 2015

… [Geoffrey] West’s research on cities, with Bettencourt, Lobo, and others, has been published in the most rigorous of peer-reviewed journals. But that has not prevented criticism from his peers. “What West, Bettencourt, and Lobo have been doing is finding empirical data-driven properties of cities,” says Steven Koonin, himself a physicist and director of the Center for Urban Science and Progress at New York University. “It is interesting, but what it misses in the next step of knowledge, epistemology, is how does this come about, from all the different components that make up a city.”

West concedes that his work with cities lacks the satisfying end points he found in biology, the clear answers to questions that start with Why. “In biology, you have natural selection, minimizing the energy needed to pump blood through your circulatory system,” he says. “In cities, what if anything is being optimized? My own view is that in some way it is optimizing greed. Everybody, in their own way, wants more. I think that [15 percent] is related to the fact that despite our better natures, we all want more.”
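
For context on the bracketed figure, assuming it refers to the superlinear exponent reported in West and Bettencourt’s urban scaling papers: socioeconomic outputs Y (wages, patents, GDP) grow with city population N roughly as

    Y = Y_0 \, N^{\beta}, \qquad \beta \approx 1.15

so per-capita quantities scale like N^{0.15}: larger cities produce disproportionately more per resident, which appears to be the “15 percent” West has in mind.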

 

Fueling Innovation by Connecting Dots Between Health, Medicine and Technology

TEDMED Blog


from November 09, 2015

Increasingly, innovation sparks from creative connections across disciplines. Drawing from deep expertise in several branches of science, some of our TEDMED speakers employ their own interdisciplinary knowledge to create breakthrough technology that is advancing healthcare and our understanding of human potential.

Theoretical neuroscientist-cum-technologist Vivienne Ming is a co-founder of Socos, where machine learning and cognitive neuroscience combine to maximize students’ life outcomes. The company recently introduced Muse, an app that helps parents and caregivers support the development of young children. Drawing from decades of educational research, Muse aims to “give parents a superpower” by pulling rich detail from each child’s life – whether on the playground, in the classroom, or at home – and analyzing that data to track their linguistic and metacognitive development. Based on its findings, Muse then sends parents and caregivers daily, individualized questions and activities chosen to best foster their child’s development.

At TEDMED, Vivienne will speak in our Human Explorations session about how we can best harness and maximize human potential.

 
CDS News



NYU’s Center for Data Science Is Newest GPU Center of Excellence

NVIDIA Blog


from November 10, 2015

New York University’s Center for Data Science is among the universities and institutions helping make deep learning tasks, once considered esoteric, mainstream. It’s doing so by advancing two key areas of computer science: machine learning, and parallel and distributed systems. These enable application programmers to handle massive datasets easily and efficiently.

Founded by deep learning pioneer Yann LeCun, who’s also director of AI Research at Facebook, the Center for Data Science has forged a strong alliance with NVIDIA as we work to advance GPU-based deep learning, parallel computing research and education.

These efforts are a big part of the reason why we’ve just named the center a GPU Center of Excellence.
See more at: http://blogs.nvidia.com/blog/2015/11/10/gpu-center-of-excellence/

 

Profile: Nick Beauchamp

NYU Center for Data Science


from November 10, 2015

What did you study in school? How did you get to what you study now?

Like many data scientists, I took a somewhat winding path to arrive where I am now. Although I started college studying physics and math, I wound up double-majoring in philosophy and literature. I then went to grad school in English at Johns Hopkins, where I worked on a thesis that combined literature, philosophy of mind, and politics.

How did you get from English studies and politics to data science?

I think because of my background in literature, I was immediately interested in computational text analysis: inferring ideology from speech and the dynamics of opinion change. After doing my work in English at Johns Hopkins, I went to the Carter Center to work on election observation and electoral fraud analysis. That led me back to grad school in politics with a focus on quantitative methods at NYU.
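
“Inferring ideology from speech” is typically framed as supervised text classification. A minimal sketch with scikit-learn, using placeholder speeches and labels rather than anything from Beauchamp’s actual work:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    # Hypothetical training data: speech excerpts labeled by party.
    speeches = [
        "we must cut taxes and shrink government",
        "we need to expand access to public healthcare",
        "secure the border and support small business",
        "invest in renewable energy and public schools",
    ]
    labels = ["R", "D", "R", "D"]

    # Bag-of-words features weighted by TF-IDF, then a linear classifier;
    # the fitted coefficients give each term an ideological direction.
    vectorizer = TfidfVectorizer()
    X = vectorizer.fit_transform(speeches)
    model = LogisticRegression()
    model.fit(X, labels)

    new_speech = ["lower taxes will help small business"]
    print(model.predict(vectorizer.transform(new_speech)))  # e.g. ['R']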

 
