NYU Data Science newsletter – December 15, 2015

NYU Data Science Newsletter features journalism, research papers, events, tools/software, and jobs for December 15, 2015


 
Data Science News



Microsoft beats Google, Intel, Tencent, and Qualcomm in image recognition competition

Jordan Novet, VentureBeat


from December 10, 2015

Microsoft Research has taken first place in several categories at the sixth annual ImageNet image recognition competition. Technology from Microsoft outperformed systems from Google, Intel, Qualcomm, and Tencent, as well as entries from startups and academic labs, according to the results.

The winning system, from Microsoft researchers Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun, is described in the just-published paper “Deep Residual Learning for Image Recognition.”
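The paper's central idea is that stacked layers learn a residual function F(x) that is added back to the layer input x, rather than learning the target mapping directly. A minimal sketch of such a block, written in PyTorch purely as our own illustration (not the authors' code; the layer sizes are arbitrary):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    """One residual block: output = relu(F(x) + x), where F is two conv layers."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return F.relu(out + x)  # identity "shortcut" around the learned residual

# Stacking many such blocks lets very deep networks train without the
# degradation that plain networks of the same depth exhibit.
block = ResidualBlock(64)
print(block(torch.randn(1, 64, 32, 32)).shape)  # torch.Size([1, 64, 32, 32])
```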

 

Scientists Tend to Superspecialize – But There Are Ways They Can Change

Elsevier SciTech Connect


from December 10, 2015

… Amidst the calls for boundary-spanning collaboration, the fact is that most scientists work within institutional and professional contexts that overwhelmingly favor and reward deep specialization. Consider the names of departments and journals, how communications flow within rather than across unit boundaries, and how pay and grant monies are allocated. For some, the word “generalist” is pejorative, but collaborating across disciplines does not need to be a bad thing. In fact, in one survey of faculty, 70% agreed with the value of cross-disciplinary work.

Beyond structural determinants, what are the personal drivers that shape the depth versus breadth of researchers’ professional output? While investigating this question, Andrew Hess and I defined deep research as that which adds to our knowledge in highly specialized ways. We defined broad research as that which spans a greater variety of topics.

 

Stephen Wolfram Aims to Democratize His Software

The New York Times, Bits blog


from December 14, 2015

… Stephen Wolfram wants to make his technology and his software philosophy available to far more people, including newcomers to computing, like students and children. So he has decided to make a version of the Wolfram Language and development tools available as a free cloud service. To help, he also has published a book, “An Elementary Introduction to the Wolfram Language,” which is free to read online and can be ordered in a print version from the Wolfram website ($14.96) or Amazon ($16.70).

 

“Why do people contribute to R?” – conclusions from a new PNAS article

Tal Galili, R-statistics blog


from December 14, 2015

tl;dr: People contribute to R for various reasons, which evolve over time. The main reasons appear to be: “fun coding”, personal commitment to the community, interaction with like-minded and/or important people (leading to higher self-esteem), future job opportunities, a chance to express oneself, and enjoyable social inclusion.

 

Q&A with Cerebri, a Watson Ecosystem Partner

IBM, Watson Dev blog


from December 09, 2015

Last year, as part of Watson’s education initiative, 10 U.S. universities taught Watson technology classes. Throughout the semester, classes built a live prototype and business case that culminated in a pitch competition at the Watson New York City headquarters in Silicon Alley. The winner, Cerebri, captivated the audience with a social services application that helps citizens quickly gather answers to their most pressing questions. For example, someone could use Cerebri to quickly find the closest store that takes food stamps. As part of the win, the team received $100,000 in startup funds as well as entry into the IBM Watson Ecosystem Partner Program.

We caught up with Cerebri CEO Ryan Lund and CTO Thejas Prasad to discuss how they are bringing their product to market.

 

My takeaways from NIPS 2015

Dan Vanderkam, danvk.org


from December 11, 2015

I’ve just wrapped up my trip to NIPS 2015 in Montreal and thought I’d jot down a few things that struck me this year:

  • Saddle Points vs Local Minima. I heard this point repeated in a talk almost every day. In low-dimensional spaces (i.e. the ones we can visualize), local minima are the major impediment to optimizers reaching the global minimum. But this doesn’t generalize: in high-dimensional spaces, local minima are almost non-existent. Instead, there are saddle points: points which are a minimum in some directions but a maximum in others. (A small numerical sketch follows below.)
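A small numerical illustration of the distinction (our example, not from the post): the function f(x, y) = x^2 - y^2 has a vanishing gradient at the origin, yet that point is a minimum along x and a maximum along y.

```python
import numpy as np

def f(x, y):
    return x**2 - y**2

def grad(x, y):
    return np.array([2 * x, -2 * y])

print(grad(0.0, 0.0))            # [0. -0.]: the gradient vanishes at the origin
print(f(0.1, 0.0), f(0.0, 0.1))  # 0.01 vs -0.01: higher along x, lower along y

# The Hessian is diag(2, -2): one positive and one negative eigenvalue, the
# signature of a saddle point. For a critical point in n dimensions to be a
# local minimum, all n eigenvalues must be positive, which becomes ever less
# likely as n grows, so saddle points dominate in high-dimensional spaces.
```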

     

    Cruz campaign credits psychological data and analytics for its rising success

    The Washington Post


    from December 13, 2015

    The outreach to [64-year-old Birdie] Harms and others like her is part of a months-long effort by the Cruz campaign to profile and target potential supporters, an approach that campaign officials believe has helped propel the senator from Texas to the top tier among Republican presidential candidates in many states, including Iowa, where he is in first place, according to two recent polls. It’s also a multimillion-dollar bet that such efforts still matter in an age of pop-culture personalities and social-media messaging.

     

    UW Announces Two New Positions in Support of Data-Intensive Discovery Initiative

    UW eScience Institute


    from December 14, 2015

    Shortly after being named University of Washington Provost, Anne Marie Cauce (now the UW’s president) asked eScience Founding Director Ed Lazowska and Steering Committee member Werner Stuetzle to advise on additional steps to ensure that the University of Washington was a leader in data-intensive discovery. Under the Provost’s Initiative in Data-Intensive Discovery, the Provost will consider providing 50% support on an ongoing basis for faculty hires who excel both in advancing data science methodology and in data-intensive discovery in some field.

     

    Recap: Workshop on the Future of Open Science and Publishing

    Berkeley Institute for Data Science


    from December 14, 2015

    On November 2, we hosted a workshop at BIDS to discuss the future of open science and publishing. Our goal was to explore how increasing calls for openness are likely to affect the landscape of academic research and scholarly communication in the coming decades. [videos]

     

    Looking for a postdoc in software / data curation

    Berkeley Library, madLibbing blog


    from December 11, 2015

    The UC Berkeley Library is offering a two-year post-doc for a promising applied scholar to work on software and data curation issues, possibly with a focus on social network data. We are especially interested in advancing our understanding of, and support for, organizing, preparing, and preserving — for re-use — data from open source software projects, including the code with all of its gnarly revision and forking history, the documentation, check-in remarks and other metadata, and the communications of the social network that is often layered on top of the software development platform. Think of Github as a canonical example.

     

    Why OpenAI Matters

    Miles Brundage


    from December 12, 2015

    The announcement of OpenAI has justifiably gotten a lot of attention, both in the media and in the relevant expert community. I am currently flying home from NIPS 2015, a major machine learning conference, and right after OpenAI was announced, I noticed several people around me checking out the website on their phones and computers to learn more, and I expect that OpenAI’s information/recruiting session at NIPS later today will be extremely popular. Here, I will summarize some preliminary, sleep-deprived thoughts on why this is getting so much attention and why that is appropriate.

     

    Ten Deep Learning Trends at NIPS 2015

    Brad Neuberg, coding in paradise blog


    from December 13, 2015

    I attended the Neural Information Processing Systems (NIPS) 2015 conference this week in Montreal. It was an incredible experience, like drinking from a firehose of information. Special thanks to my employer Dropbox for sending me to the show (we’re hiring!).

    Here are some of the trends I noticed this week; note that they are biased towards deep and reinforcement learning, as those are the tracks I attended at the conference:

    1) Neural network architectures are getting more complex and sophisticated


    More reports from NIPS 2015:

  • My takeaways from NIPS 2015, Dan Vanderkam
  • Interesting things at NIPS 2015, John Langford

    Interesting things at NIPS 2015

    John Langford, Machine Learning (Theory) blog


    from December 14, 2015

    NIPS is getting big. If you think of each day as a conference crammed into a day, you get a good flavor of things. Here are some of the interesting things I saw.

  • Grammar as a foreign language. Essentially, attention model + LSTM + a standard dataset = good parser. (A rough sketch of this recipe follows below.)
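To make that recipe concrete, here is a rough sketch written purely as illustration (loosely in the spirit of the "Grammar as a Foreign Language" setup; the module names and sizes below are invented): an LSTM encoder reads the sentence, and an LSTM decoder with dot-product attention over the encoder states emits a linearized parse tree token by token, trained with cross-entropy against the gold linearization.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Seq2SeqParser(nn.Module):
    """Toy encoder-decoder with dot-product attention (illustrative only)."""

    def __init__(self, src_vocab, tgt_vocab, emb=128, hid=256):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb)
        self.tgt_emb = nn.Embedding(tgt_vocab, emb)
        self.encoder = nn.LSTM(emb, hid, batch_first=True)
        self.decoder = nn.LSTM(emb + hid, hid, batch_first=True)
        self.out = nn.Linear(hid, tgt_vocab)

    def forward(self, src, tgt):
        # Encode the input sentence.
        enc_out, state = self.encoder(self.src_emb(src))          # (B, S, H)
        context = enc_out.mean(dim=1, keepdim=True)               # initial context (B, 1, H)
        logits = []
        # Decode the linearized parse one token at a time (teacher forcing).
        for t in range(tgt.size(1)):
            step = self.tgt_emb(tgt[:, t:t + 1])                  # (B, 1, E)
            dec_out, state = self.decoder(torch.cat([step, context], dim=-1), state)
            # Dot-product attention over the encoder states.
            scores = torch.bmm(dec_out, enc_out.transpose(1, 2))  # (B, 1, S)
            context = torch.bmm(F.softmax(scores, dim=-1), enc_out)
            logits.append(self.out(dec_out + context))            # (B, 1, V)
        return torch.cat(logits, dim=1)                           # (B, T, V)

model = Seq2SeqParser(src_vocab=1000, tgt_vocab=200)
src = torch.randint(0, 1000, (2, 7))   # a batch of 2 sentences, 7 tokens each
tgt = torch.randint(0, 200, (2, 12))   # gold linearized parse, e.g. "(S (NP ... ) ... )"
print(model(src, tgt).shape)           # torch.Size([2, 12, 200])
```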

    Old economic models couldn’t predict the recession. Time for new ones.

    Santa Fe Institute, Christian Science Monitor


    from December 14, 2015

    … With the Eurozone Crisis still unfolding and financial panics now a regular occurrence on Wall Street, the next trillion-dollar meltdown might not be that far away. We justifiably spend billions trying to understand the weather, the climate, the oceans, and the polar regions. Why is the budget for basic research in economics, something that touches us all directly and daily, so paltry?

    We think this has something to do with the way economics research is traditionally done, and we have a better way: loading millions of artificial households, firms, and people into a computer and watching what happens when they are allowed to interact.
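To make the idea concrete, here is a toy sketch of an agent-based simulation in that spirit (a purely hypothetical model, not the authors'): households spend at randomly chosen firms, firms pay the revenue back out as wages, and an unequal wealth distribution emerges from nothing but these micro-level rules.

```python
import random

random.seed(0)
N_HOUSEHOLDS, N_FIRMS, STEPS = 1000, 50, 100
SPEND_RATE = 0.9                       # fraction of wealth spent each step

households = [100.0] * N_HOUSEHOLDS    # every household starts with 100 units
firms = [0.0] * N_FIRMS

for _ in range(STEPS):
    # Each household spends part of its wealth at a randomly chosen firm.
    for i in range(N_HOUSEHOLDS):
        spend = SPEND_RATE * households[i]
        households[i] -= spend
        firms[random.randrange(N_FIRMS)] += spend
    # Each firm pays all of its revenue out as wages to ten random households.
    for j in range(N_FIRMS):
        wage_pool, firms[j] = firms[j], 0.0
        for _ in range(10):
            households[random.randrange(N_HOUSEHOLDS)] += wage_pool / 10

print(f"total wealth (conserved): {sum(households) + sum(firms):.0f}")
print(f"richest household: {max(households):.1f}, poorest: {min(households):.2f}")
# Even this trivial exchange economy ends up with a highly skewed wealth
# distribution: macro-level structure emerging from micro-level interactions,
# which is the point of the agent-based approach.
```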

     
    Events



    Say hello to the Enigma conference



    USENIX Enigma is a new conference focused on security, privacy and electronic crime through the lens of emerging threats and novel attacks. The goal of this conference is to help industry, academic, and public-sector practitioners better understand the threat landscape. Enigma will have a single track of 30-minute talks that are curated by a panel of experts, featuring strong technical content with practical applications to current and emerging threats.

    Monday-Wednesday, January 25-27, 2016

     

    JupyterDay Chicago 2016



    We are pleased to announce our second JupyterDay Workshop, in Chicago on February 20th, 2016 from 8:30am-6:30pm.

    Saturday, February 20, at Civis Analytics in Chicago

    Also: Jupyter Days Boston on Thursday-Friday, March 17-18, at Harvard

     
