Data Science newsletter – August 29, 2019

Newsletter features journalism, research papers, events, tools/software, and jobs for August 29, 2019

GROUP CURATION: N/A

 
 
Data Science News



Health informatics and health equity: improving our reach and impact

Journal of the American Medical Informatics Association; Tiffany C Veinot, Jessica S Ancker, Suzanne Bakken


from

Health informatics studies the use of information technology to improve human health. As informaticists, we seek to reduce the gaps between current healthcare practices and our societal goals for better health and healthcare quality, safety, or cost. It is time to recognize health equity as one of these societal goals—a point underscored by this Journal of the American Medical Informatics Association Special Focus Issue, “Health Informatics and Health Equity: Improving our Reach and Impact.” This Special Issue highlights health informatics research that focuses on marginalized and underserved groups, health disparities, and health equity. In particular, this Special Issue intentionally showcases high-quality research and professional experiences that encompass a broad range of subdisciplines, methods, marginalized populations, and approaches to disparities. Building on this variety of submissions and other recent developments, we highlight contents of the Special Issue and offer an assessment of the state of research at the intersection of health informatics and health equity.


New ecosystem platform launched to empower big data, AI researchers

Xinhua


from

The Global Association for Research Methods and Data Science (GRMDS) has launched a new ecosystem platform this week to empower big data and artificial intelligence (AI) researchers worldwide.

The platform aims to help researchers solve problems in data science through the application of methodologies developed by the GRMDS, a leading non-profit organization in data science.

Such methods include the Research Methods Four Elements (RM4Es), namely Equation, Estimation, Evaluation of Models, and Execution/Explanation, as well as ResearchMap, which maps out the necessary procedures for research projects.


Inside the World’s Biggest Hurricane Simulation Tank

Digital Trends, Dylan Furness


from

It’s a clear summer day in South Florida, but a storm rages inside the SUSTAIN Laboratory at the University of Miami’s (UM) Rosenstiel School of Marine and Atmospheric Science, where the world’s biggest hurricane simulation tank is in full swing. Category 5-strength wind and waves wallop a makeshift stilt house, pounding at its foundation, as a suite of sensors collects data on the structure’s stability. Brian Haus, an ocean scientist and director of the facility, leans his whole body against the 3-inch-thick acrylic as if he wants to be closer to the action. I’ve lived through a handful of hurricanes as a kid growing up in South Florida, but never one of this magnitude or from this point of view—never seen sawtooth waves consume a shoreline or heard the terrifying strength of 157-mph winds.


IBM gives artificial intelligence computing at MIT a lift

MIT News, MIT Quest for Intelligence


from

Nearly $12 million machine will let MIT researchers run more ambitious AI models.


Artificial intelligence’s potential, limits explored at Livermore lab event

San Francisco Chronicle, Erin Allday


from

Artificial intelligence, in the form of rapid collection and analysis of massive sets of biologic data, already has revolutionized medicine — by some estimates, the world’s fastest machines have processed more data in the past two years than in all of human history. But the technology needs to be much more powerful if it’s going to have real-world consequences for people with everything from traumatic brain injuries to cancer, kidney disease and Alzheimer’s.

That was the message Monday at a meeting of the country’s top scientists in brain and computational research at Lawrence Livermore National Laboratory, after which U.S. Energy Secretary Rick Perry signed an agreement with a major philanthropic foundation to promote formal partnerships between the Department of Energy and public and private institutions around the country.

Exactly what types of ventures the memorandum of understanding between the Energy Department and the Weill Family Foundation, based in New York, will support are not yet known. But the agreement could bolster a relatively new partnership between UCSF and scientists from the national labs in Livermore and Berkeley that is focused on processing enormous amounts of data from people with traumatic brain injuries, in hopes of developing better diagnostic equipment and even treatments.


The Amazon rainforest’s worst-case scenario is uncomfortably near

Vox, Umair Irfan


from

Wildfires and deforestation are pushing the Amazon rainforest toward a dieback scenario: an irreversible cycle of collapse.


What is “good enough” for automated fact-checking?

Medium, Emma Lurie


from

While fact-checkers are always busy, election season inevitably increases their workloads. As a result, many in the fact-checking community dream of the day when automated fact-checking systems display live fact-checks throughout important events like presidential debates.

But what is automated fact-checking? For the purposes of this post, automated fact-checking systems rely on computational methods to 1) decrease the lag time between a problematic statement (a claim) and a correction (a fact-check article) and to 2) increase the number of claims that are associated with fact-checks.
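
To make those two goals concrete, here is a minimal, hypothetical sketch of one piece of such a system: matching an incoming statement against a store of existing fact-check articles so a correction can be surfaced quickly. The example claims, the TF-IDF approach, and the similarity threshold are all illustrative assumptions, not a description of how any production fact-checking system works.

```python
# Minimal sketch: match a new claim to the closest existing fact-check article.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

fact_checks = [
    "Fact-check: the unemployment rate did not fall to a 50-year low last month.",
    "Fact-check: the bill does not eliminate funding for rural hospitals.",
]
incoming_claim = "The unemployment rate just hit a 50-year low."

vectorizer = TfidfVectorizer().fit(fact_checks + [incoming_claim])
scores = cosine_similarity(
    vectorizer.transform([incoming_claim]),
    vectorizer.transform(fact_checks),
)[0]

best = scores.argmax()
if scores[best] > 0.3:  # threshold is arbitrary for this sketch
    print("Possible existing fact-check:", fact_checks[best])
else:
    print("No close match; route the claim to human fact-checkers.")
```

Real systems layer claim detection, retrieval, and verification models on top of this kind of matching, but the sketch shows how automation can shrink the lag between a claim and a correction.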


Morgan Stanley also has its very own proprietary programming language

eFinancialCareers, Sarah Butcher


from

Investment banks are nothing if not innovative. Decades ago, long before the likes of Java or Python were used widely, banks spotted gaps in the market and stepped in with their own programming languages devised to suit their particular needs. Goldman Sachs famously concocted ‘Slang’. Less famously, Morgan Stanley concocted A+. And, like Slang, it’s still in use.

For those not familiar with A+, the language was developed at Morgan Stanley over 30 years ago by Arthur Whitney, a Canadian computer scientist who worked for the bank in the late 1980s. Whitney left Morgan Stanley in 1993 to co-found Kx Systems, the data analysis company now owned by First Derivatives, which runs the kdb+ databases that underpin many algorithmic trading systems. A+ is seen as a precursor to K, the language behind kdb+/q, which was pioneered by Whitney after he left banking. However, while K and its successor Q are now used in thousands of applications globally, A+ doesn’t seem to have caught on much outside Morgan Stanley.


Top quants are leaving Deutsche Bank’s remaining equities strats team

eFinancialCareers, Sarah Butcher


from

If you’re a strat at Deutsche Bank and you work on the bank’s “Jaguar” equities strats platform, you might be wondering what’s coming next. After all, the bank is exiting the equities business and your services will ultimately be surplus to requirements.

This might be why two of Deutsche’s top equities strats are understood to have left the bank in New York City. Insiders say Charlie Che resigned and that another exit is imminent.


Congress Plays Catch-Up on Artificial Intelligence at Work

Bloomberg Law, Jaclyn Diaz


from

How artificial intelligence is changing the workplace is starting to get the attention of congressional lawmakers at a time when some employment attorneys are sounding alarms about the need for legislation.

The House Education and Labor Committee plans to hold hearings on machine learning’s impact on workers and their jobs after Congress returns from recess in September.

However, while a hearing is usually a precursor to legislation, employers using AI-based tools and tech companies developing those programs probably don’t need to worry about new bills anytime soon. The focus on Capitol Hill remains on trying to understand what the effect of artificial intelligence on workers could be.


How a growth mindset is vital to academic success

World Economic Forum, Big Think, Paul Ratner


from

What single factor is responsible for getting good grades? Of course, there’s hard work, an engaging and qualified teacher, and a well-designed curriculum. But a large new National Study of Learning Mindsets finds another approach that can make a difference: instilling a growth mindset, the scientists discovered, can produce a quick and lasting improvement.

The national study looked at 12,000 ninth graders from 65 public high schools around the United States. The researchers saw that a helpful intervention can be made to encourage a growth mindset, the belief that intellectual abilities are not fixed solely by genetics but can be developed. You can essentially become smart. Such a mindset can lead to success not just in high school, which 20% of American students don’t finish on time, but also in college and in all aspects of life.


Deconstructing Google’s excuses on tracking protection

Princeton Center for Information Technology Policy, Freedom to Tinker blog, Jonathan Mayer and Arvind Narayanan


from

1) Cookie blocking does not undermine web privacy. Google’s claim to the contrary is privacy gaslighting.

2) There is little trustworthy evidence on the comparative value of tracking-based advertising.

3) Google has not devised an innovative way to balance privacy and advertising; it is latching onto prior approaches that it previously disclaimed as impractical.

4) Google is attempting a punt to the web standardization process, which will at best result in years of delay.

 
Events



2020 Behavioral Decision Research and Management (BDRM) conference

ESADE Business School


from

Barcelona, Spain June 16-18, 2020, at ESADE Business School. [save the date]


ACM CSCW 2019

ACM CSCW


from

Austin, TX November 9-13. [$$$]

 
Deadlines



4th Consumer Financial Protection Bureau research conference on consumer finance

Washington, DC December 12-13. Deadline for submissions is September 3.

Propose a Program

“IPAM seeks program proposals from the mathematical, statistical, and scientific communities for long programs, workshops, and summer schools. Most program proposals are reviewed at IPAM’s Science Advisory Board meeting, held in November each year. If you would like to discuss your program ideas and prepare a proposal for IPAM’s consideration, you are encouraged to contact the IPAM Director. For all proposals, please include women and members of underrepresented minorities as organizers, speakers, and participants.”
 
Tools & Resources



Setup of an Eyetracking Study

Nielsen Norman Group, Kate Moran


from

Eyetracking equipment can track and show where a person is looking. To do so, it uses a special light to create a reflection in the person’s eyes. Cameras in the tracker capture those reflections and use them to estimate the position and movement of the eyes. That data is then projected onto the UI, resulting in a visualization of where the participant looked.
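
As a rough illustration of that final “projected onto the UI” step, here is a minimal sketch that turns gaze samples into a fixation heatmap. The coordinates are synthetic and the binning and smoothing choices are arbitrary assumptions; a real study would use the eyetracker vendor’s export and analysis tools.

```python
# Minimal sketch: bin gaze samples over screen coordinates and smooth into a heatmap.
import numpy as np
import matplotlib.pyplot as plt
from scipy.ndimage import gaussian_filter

screen_w, screen_h = 1280, 800
rng = np.random.default_rng(0)
# synthetic gaze samples clustered around two areas of interest
gaze_x = np.concatenate([rng.normal(400, 40, 500), rng.normal(900, 60, 300)])
gaze_y = np.concatenate([rng.normal(300, 30, 500), rng.normal(500, 50, 300)])

heatmap, _, _ = np.histogram2d(
    gaze_y, gaze_x,
    bins=(screen_h // 10, screen_w // 10),
    range=[[0, screen_h], [0, screen_w]],
)
heatmap = gaussian_filter(heatmap, sigma=2)  # smooth raw counts into a heatmap

plt.imshow(heatmap, extent=[0, screen_w, screen_h, 0], cmap="hot")
plt.title("Gaze heatmap (synthetic data)")
plt.savefig("gaze_heatmap.png")
```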


As More Apps Demand Data, Enterprises Need to Simplify Data Management and Protection

The New Stack, Rawlinson Rivera


from

New enterprise applications are hungry for data, and the industry’s current approach to providing that data has created security and management problems. The traditional way that companies feed these data-hungry applications involves making a lot of data copies. For example, when an enterprise deploys a new application, the IT team must identify the relevant data, make a copy of it, and store that copy in an environment that supports the application. That environment may be on-premises or in the cloud.

This seems like a simple and straightforward solution, but it’s contributing to an underlying problem that is growing worse as the number of applications and data copies increases. The greatest challenge is protecting and managing all of those copies and the compute and storage resources that each application requires.


Making Transformer networks simpler and more efficient

Facebook Artificial Intelligence, Sainbayar Sukhbaatar and Armand Joulin


from

To enable wider use of this powerful deep learning architecture, we propose two new methods. The first, adaptive attention span, is a way to make Transformer networks more efficient for longer sentences. With this method, we were able to increase the attention span of a Transformer to over 8,000 tokens without significantly increasing computation time or memory footprint. The second, the all-attention layer, is a way to simplify the model architecture of Transformer networks. Even with a much simpler architecture, our all-attention network matched the state-of-the-art performance of Transformer networks. We believe that this work to improve the efficiency of Transformer networks is an important step toward wider adoption.
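
To picture the adaptive-span idea, here is a minimal, hypothetical PyTorch sketch of the soft masking trick it rests on: each attention head learns a span parameter, and attention weights beyond that span are smoothly ramped to zero before renormalization. This is written from the blog post’s description, not Facebook AI’s released code; the module, parameter names, and ramp width are made-up assumptions.

```python
# Minimal sketch of a learnable soft attention-span mask (not the official implementation).
import torch
import torch.nn as nn

class AdaptiveSpanMask(nn.Module):
    def __init__(self, max_span: int, ramp: int = 32):
        super().__init__()
        self.max_span = max_span
        self.ramp = ramp                       # width of the soft ramp from 1 to 0
        self.z = nn.Parameter(torch.zeros(1))  # learnable span fraction, trained with the model

    def forward(self, attn_scores: torch.Tensor) -> torch.Tensor:
        # attn_scores: (..., span) raw scores over the last `span` past positions
        span = attn_scores.size(-1)
        # distance of each position from the current token (most recent position = 0)
        distance = torch.arange(span - 1, -1, -1,
                                device=attn_scores.device, dtype=attn_scores.dtype)
        # 1 inside the learned span, linear ramp of width `ramp`, 0 beyond it
        mask = torch.clamp((self.z * self.max_span + self.ramp - distance) / self.ramp, 0, 1)
        weights = torch.softmax(attn_scores, dim=-1) * mask
        return weights / weights.sum(dim=-1, keepdim=True).clamp(min=1e-8)

# toy usage: one sequence attending over the last 64 positions
scores = torch.randn(1, 64)
masked = AdaptiveSpanMask(max_span=64)(scores)
print(masked.shape, masked.sum(-1))  # weights still sum to 1 over the soft-truncated span
```

Because the mask is differentiable, the span can shrink on heads that only need local context, which is what lets the overall model reach very long spans without paying for them everywhere.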


The NOAO Data Lab version of the DESI imaging Legacy Survey DR8 went live today!

Twitter, NOAO Data Lab


from

“The eighth data release of the @desisurvey imaging Legacy Surveys contains approximately 1.6 billion unique sources […] covering a third of the sky”
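
For readers who want to poke at the release, a minimal sketch of querying it through the Data Lab Python client (installable as astro-datalab) might look like the following. The table name ls_dr8.tractor and the column names are assumptions based on earlier Legacy Surveys releases; check the Data Lab schema browser for the authoritative schema.

```python
# Minimal sketch: pull a handful of DR8 sources via the Data Lab query client.
from dl import queryClient as qc

sql = """
SELECT ra, dec, flux_g, flux_r, flux_z
FROM ls_dr8.tractor
LIMIT 10
"""
csv_rows = qc.query(sql=sql, fmt="csv")  # result set returned as a CSV string
print(csv_rows)
```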

 
Careers


Full-time positions outside academia

Research Positions



Space Telescope Science Institute; Baltimore, MD

Research Associate – Governance



Sage Bionetworks; Seattle, WA

Research Analyst, CUSE



The Brookings Institution; Washington, DC
