NYU Data Science Newsletter – January 28, 2016

NYU Data Science Newsletter features journalism, research papers, events, tools/software, and jobs for January 28, 2016.

GROUP CURATION: N/A

 
Data Science News



9 “Laws” for Data Mining

Forbes, Meta S. Brown


from January 27, 2016

Analytics initiatives are not failure proof. In practice, they fail often. Bernard Marr, Forbes contributor and author of Big Data: Using SMART Big Data, Analytics and Metrics to Make Better Decisions and Improve Performance, predicts that “half of all big data projects will fail to deliver against their expectations.” Michael Schrage, a research fellow at MIT Sloan School’s Center for Digital Business, speaks of frustration that analytics investments are not yielding expected results.

Positive returns on analytics investment require management action. But many managers are reluctant to take action based on analytics, especially when the numbers don’t seem to match their own gut understanding. They don’t know much about analytics and don’t really trust the process. Why trust something you don’t understand?

 

“Why IT Fumbles Analytics Projects” – Statistical Modeling, Causal Inference, and Social Science

Andrew Gelman


from January 27, 2016

Someone pointed me to this Harvard Business Review article by Donald Marchand and Joe Peppard, “Why IT Fumbles Analytics,” which begins as follows:

In their quest to extract insights from the massive amounts of data now available from internal and external sources, many companies are spending heavily on IT tools and hiring data scientists. Yet most are struggling to achieve a worthwhile return. That’s because they treat their big data and analytics projects the same way they treat all IT projects, not realizing that the two are completely different animals.

Interesting! I was expecting something pretty generic, but this seems to be leading in an unusual direction.

 

Harvard awarded £19m to build brain-inspired artificial intelligence

Wired UK


from January 25, 2016

Harvard University has been awarded $28 million (£19m) to investigate why brains are so much better at learning and retaining information than artificial intelligence. The award, from the Intelligence Advanced Research Projects Activity (IARPA), could help make AI systems faster, smarter and more like human brains.

While many computers have a comparable storage capacity, their ability to recognise patterns and learn information does not match the human brain. But a better understanding of how neurons are connected could help develop more complex artificial intelligence.

 

The Value of a Professional Network?

Medium, Dan Tunkelang


from January 26, 2016

In “The Startup of You”, LinkedIn co-founder Reid Hoffman advises to “strengthen your professional network by building powerful alliances and maintaining a diverse mix of relationships” and to “tap your network for information and intelligence that help you make smarter decisions.” It’s great advice: a 21st-century update of “it’s not what you know, it’s who you know.”

But I’m not aware of any scientific analysis that establishes the return on investment for developing your professional network. If you’re an outbound professional, such as a salesperson or recruiter, then you’ve learned from experience that having a broader network helps you do your job. For the rest of us, the value of a professional network may not be quite as obvious.

While I worked at LinkedIn, I advocated for data scientists to try to measure the value of a professional network, especially as part of LinkedIn’s work on the Economic Graph. I’m not aware of any scholarship in this area, from LinkedIn or anyone else, and I feel it’s an area ripe for research.

 

UW computer scientists are working on a way for you to talk to the dead

UW CSE News


from January 27, 2016

Advances in computing have disrupted many industries, from financial services and retail to travel and real estate. Could psychic readings be next?

In a story posted on MyNorthwest.com, KIRO Radio reporter Rachel Belle foresees the day when you will be able to interact with a 3-D model of your dearly departed. And it will all be thanks to members of UW CSE’s GRAIL Group.

Belle is referring to research by CSE graduate student Supasorn Suwajanakorn and professors Ira Kemelmacher-Shlizerman and Steve Seitz in which they construct and animate 3-D models of celebrities from photos and videos.

 

A Google DeepMind Algorithm Uses Deep Learning and More to Master the Game of Go

MIT Technology Review


from January 27, 2016

Google achieves one of the long-standing “grand challenges” of AI by building a computer that can beat expert players at the board game Go.

 

Most of your Facebook friends couldn’t care less about you

Engadget


from January 25, 2016

Even if you have thousands of Facebook friends, you can probably only count on a handful in a pinch, according to a new study. The author, anthropologist Robin Dunbar, should know. He’s the guy who came up with Dunbar’s number, which shows that in the real world, people can only maintain about 150 stable relationships. For his latest research, Dunbar analyzed a UK study of 3,375 Facebook users between the ages of 18 and 65. On average, folks had 150 followers but said that they could only count on 4.1 of them during an “emotional crisis,” and that only 13.6 ever expressed sympathy.

 

A Conversation With Marc Andreessen: AI, Robotics, Jobs and Accelerating The Future

CTOvision


from January 26, 2016

If you are an enterprise technologist, or if you love thinking about the incredible future we can build for ourselves, you no doubt already track the very interesting Marc Andreessen. His open sharing of views and context via his blog, YouTube and his Twitter feed is a great source of information for those seeking insights into the emerging tech scene, and we recommend following him there for continuous context.

We had an opportunity to ask Marc for some clarifying thoughts on his views and present them in this two-part series. This first post is on AI, robotics and the future of jobs; the second will dive into education, training and some concepts that may help us think through future uses of technology in our lives and homes.

 

Picking the brain of IRRI collaborating scientist Michael Purugganan

Rice Today


from January 28, 2016

There are scientists and then there are rock star scientists. What’s the difference? Unlike their counterparts in the entertainment industry, rock star scientists aren’t surrounded by an entourage. They do not make dramatic entrances nor do they strut around the lab wearing flashy gowns. They are actually regular scientists but with a little something extra. Writer Matt Hickman described it best:

It’s all about the charisma, a willingness to communicate and, at times, stir up a bit of controversy… challenging convention and getting people to wake up and acknowledge the world around them, to think.

Michael Purugganan, professor at New York University and collaborator with the International Rice Research Institute (IRRI), certainly fits the description.

 

Mapping regulatory elements

MIT News


from January 27, 2016

All the tissues in the human body are made from proteins, and for every protein, there’s a stretch of DNA in the human genome that “codes” for it, or describes the sequence of amino acids that will produce it.

But these coding regions constitute only about 1 percent of the genome, and scattered throughout the other 99 percent are sequences involved in regulating gene expression, or determining which coding regions will be translated into proteins, and when.

In the latest issue of Nature Biotechnology, researchers at MIT and Harvard Medical School describe a new technique for systematically but efficiently searching long stretches of the genome for regulatory elements. And in their first application of the technique, they find evidence that current thinking about gene regulation is incomplete.

 

The Godfather Of Interactive Computing: J. C. R. ‘Lick’ Licklider

Lifehacker Australia


from January 27, 2016

The American computer pioneer often known simply as “Lick” imagined many of the concepts that are now core to the way we use and interact with technology. He provided both ideas and funding for graphical computing, point-and-click interfaces, digital libraries and banking or shopping online. From IBM to the US military’s advanced research agency (DARPA) and MIT, his vision in the 1960s ultimately inspired the Internet and even parts of Unix. Here’s what you may not know about J.C.R. Licklider, pioneer of cybernetics, psychoacoustics and artificial intelligence.

 

Stephen Wolfram Remembers Marvin Minsky

Medium, Backchannel


from January 27, 2016

I think it was 1979 when I first met Marvin Minsky, while I was still a teenager working on physics at Caltech. It was a weekend, and I’d arranged to see Richard Feynman to discuss some physics. But Feynman had another visitor that day as well, who didn’t just want to talk about physics, but instead enthusiastically brought up one unexpected topic after another.

That afternoon we were driving through Pasadena, California — and with no apparent concern for the actual process of driving, Feynman’s visitor was energetically pointing out all sorts of things an AI would have to figure out if it was to be able to do the driving. I was a bit relieved when we arrived at our destination, but soon the visitor was on to another topic, talking about how brains work, and then saying that as soon as he’d finished his next book he’d be happy to let someone open up his brain and put electrodes inside, if they had a good plan to figure out how it worked.

Feynman often had eccentric visitors, but I was really wondering who this one was. It took a couple more encounters, but then I got to know that eccentric visitor as Marvin Minsky.

 
Deadlines



IEEE VIS 2016

deadline: abstracts due Monday, March 21, 2016; full papers due Thursday, March 31, 2016

All conferences at IEEE VIS allow both single-blind (not anonymized) as well as double-blind (anonymized) submissions. Double-blind submissions are allowed for those authors who want to submit their work anonymously. Therefore, those authors should NOT include their name or institution on the cover page of the initial submission, and should make an effort to ensure that there is no revealing information in the text (such as obvious citations to authors’ previous work, or making acknowledgments to colleagues of long standing). Authors should also avoid posting their submitted manuscript on the web until the final notification date. To reiterate, the choice of complete anonymity (i.e., single or double-blind) is optional. Authors can reveal their names and affiliations in the first round of the review cycle if they choose not to anonymize their work.

Deadline for abstract submission is Monday, March 21. Full papers are due Thursday, March 31.

 
Tools & Resources



Dirt Simple HPC: Making the Case for Julia

The Next Platform


from January 26, 2016

Choosing a programming language for HPC used to be an easy task. Select an MPI version then pick Fortran or C/C++. Today there are more choices and options, but due to the growth of both multi-core processors and accelerators there is no general programming method that provides an “easy” portable way to write code for HPC. … This situation may be tolerable to the hardcore HPC applications developer, but it is a huge impediment to anyone wishing to get started in HPC.
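The traditional route the article describes is easier to picture with a concrete example: pick an MPI implementation, then write explicit message-passing code against it. Below is a minimal, illustrative sketch using Python’s mpi4py bindings rather than Fortran or C/C++ (and not Julia, which the article makes the case for); it assumes an MPI runtime and the mpi4py package are installed, and is only meant to show the flavor of that workflow.

    # Minimal point-to-point MPI example; run with: mpiexec -n 4 python hello_mpi.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD      # communicator spanning all launched processes
    rank = comm.Get_rank()     # this process's id within the communicator
    size = comm.Get_size()     # total number of processes

    if rank == 0:
        # rank 0 sends a small message to every other rank
        for dest in range(1, size):
            comm.send({"greeting": "hello", "from": rank}, dest=dest, tag=0)
        print(f"rank 0 sent greetings to {size - 1} workers")
    else:
        data = comm.recv(source=0, tag=0)   # blocking receive from rank 0
        print(f"rank {rank} received: {data}")

Even this toy example requires choosing an MPI runtime, launching processes with mpiexec, and reasoning about ranks explicitly — the kind of boilerplate the article argues a higher-level language like Julia could reduce.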

 

NIH Author Manuscripts Available for Text Mining

maillist


from January 27, 2016

NIH-supported scientists have made over 300,000 author manuscripts available in PMC. Now NIH is making these papers accessible to the public in a format that will allow robust text analyses.

You can download the PMC collection of NIH-supported author manuscripts as a package in either XML or plain-text format at ftp://ftp.ncbi.nlm.nih.gov/pub/pmc/manuscript/.
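For readers who want to pull the collection programmatically, here is a minimal sketch using Python’s standard ftplib. It assumes anonymous FTP access to the directory given above; the commented-out package name is purely hypothetical, since the announcement does not spell out the file layout.

    # Sketch: list the NIH author manuscript packages on the PMC FTP site.
    from ftplib import FTP

    HOST = "ftp.ncbi.nlm.nih.gov"
    PATH = "/pub/pmc/manuscript/"      # path from the announcement above

    ftp = FTP(HOST)
    ftp.login()                        # anonymous login
    ftp.cwd(PATH)

    for name in ftp.nlst():            # print whatever packages are listed there
        print(name)

    # To download one package (hypothetical file name, for illustration only):
    # with open("manuscripts.tar.gz", "wb") as fh:
    #     ftp.retrbinary("RETR manuscripts.tar.gz", fh.write)

    ftp.quit()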

 
