Data Science newsletter – January 23, 2021

Newsletter features journalism, research papers and tools/software for January 23, 2021

GROUP CURATION: N/A

 

The science of mob thinking

Axios, Sara Fischer and Alison Snyder



The Capitol siege last week came as a shock to many Americans who had no idea how intensely election denialism, and to an extent white supremacy, had been brewing in American society.

Why it matters: Research shows that this type of mob thinking has become stronger and more frequent as more news and information has moved online. Experts also suggest President Trump played a key role in weaponizing human tendencies to distrust people who look or act different.

The way people determine what’s true and what’s false, especially online, relies heavily on people trusting sources of information over substance, according to experts.


Intel launches RealSense ID for on-device facial recognition

VentureBeat, Kyle Wiggers



Intel today launched the newest addition to RealSense, its product range of depth and tracking technologies designed to give machines depth perception capabilities. Called RealSense ID, it’s an on-device solution that combines an active depth sensor with a machine learning model to perform facial authentication.

Intel claims RealSense ID adapts to users as physical features like facial hair and glasses change over time and works in various lighting conditions for people “with a wide range of heights or complexions.”


Drones over the Amazon

National Science Foundation



The Amazon basin, home to the largest rainforest in the world, plays a crucial role in maintaining the planet’s carbon budget, absorbing and storing billions of tons of carbon dioxide annually. But a tipping point looms — one that may turn this vital carbon sink into one of the largest sources of carbon dioxide on the planet.

By “smelling the forest,” a Harvard-led team of researchers funded by the U.S. National Science Foundation is attempting to measure how and when that change could happen. The scientists report their results in the journal Environmental Science: Atmospheres.


How the famed Arecibo telescope fell—and how it might rise again

Science, Daniel Clery



[Sravani Vadd] woke up to a full inbox. At 2:45 a.m., toward the end of her slot, an 8-centimeter-thick steel cable, one of 18 suspending a 900-ton instrument platform high above the dish, had pulled out of its socket at one end and fallen, slicing into the dish. “I was totally shocked. How could a cable break?” she says. Although she didn’t know it at the time, the photons she gathered from NGC 7469 would be the last ones Arecibo would ever scoop up.

The rest of the story is now well known. A second support cable snapped 3 months later, on 6 November, and the National Science Foundation (NSF), which owns the observatory, said attempting repairs was too dangerous: Arecibo would be dismantled. On 1 December, fate took control as more cables snapped and the platform, as heavy as 2000 grand pianos, came crashing down into the dish.


Americans lose trust in leaders and information

Axios, Sara Fischer



Americans are losing trust in leaders across every area of their lives — and the information coming from every source of their news, according to the 21st annual Edelman Trust Barometer, out Wednesday, which measures trust in institutions globally.

Why it matters: The sobering report shows that people crave facts more than ever, but most have bad habits and a growing distrust of everything from journalists to vaccines and contact tracing.

Details: Across every type of institution — media, government, business and NGOs — trust has fallen to historic lows, according to the report.


Reducing embodied carbon through AI and machine learning

pbc today (UK), Energy News



Computer scientists at the University of the West of England are developing software that uses AI and machine learning to help construction companies reduce the amount of embodied carbon in their building and infrastructure projects. Dr Lukman Akanbi explains how the project could help the UK achieve its net zero targets.


COVIDU app helps model spread of COVID-19 on campuses

Harvard Gazette



A team of Harvard researchers that includes Gary King, the director of the Institute for Quantitative Social Science (IQSS), and Rochelle Walensky, the incoming director of the Centers for Disease Control and Prevention, has launched a new disease-modeling app that simulates what different transmission and mitigation scenarios can look like in university settings.

Called COVIDU, the app is an interactive tool that factors in several important conditions — community transmission, external infection, testing cadence, student population, and other social settings unique to campus communities — in modeling the spread of COVID-19 on a hypothetical campus and estimating the likelihood of different potential outcomes.
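The COVIDU app itself isn't open for inspection here, but the kind of model it implements can be sketched in a few lines. The toy simulation below is purely illustrative and is not COVIDU: every parameter, including the 90% test-sensitivity figure, is an assumption. It shows how testing cadence, community importation, and population size interact to shape total infections on a hypothetical campus.

```python
import random

def simulate_semester(pop=5000, days=100, beta=0.12, external_rate=2.0,
                      test_every=7, infectious_days=10, seed=0):
    """Toy campus outbreak model (illustrative only, not COVIDU).

    beta: per-day transmission rate per infectious person
    external_rate: expected infections/day imported from the community
    test_every: screening cadence in days (0 = no surveillance testing)
    """
    rng = random.Random(seed)
    susceptible, infectious = pop - 5, 5
    isolated, total_infected = 0, 5
    for day in range(days):
        # New on-campus infections (frequency-dependent transmission)
        p_infect = 1 - (1 - beta / pop) ** infectious
        new_campus = sum(rng.random() < p_infect for _ in range(susceptible))
        # Infections imported from the surrounding community
        new_external = min(susceptible - new_campus,
                           sum(rng.random() < external_rate / pop
                               for _ in range(susceptible)))
        new = new_campus + new_external
        susceptible -= new
        infectious += new
        total_infected += new
        # Roughly 1/infectious_days of active cases resolve each day
        infectious -= round(infectious / infectious_days)
        # Surveillance testing moves detected cases into isolation
        if test_every and day % test_every == 0:
            detected = round(infectious * 0.9)  # assumed 90% sensitivity
            infectious -= detected
            isolated += detected
    return total_infected
```

With a fixed seed, tightening the testing cadence (smaller `test_every`) reduces total infections, which is the qualitative behavior tools like COVIDU let administrators explore.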


NTU Singapore start-up commercialises AI that can detect leaks instantly in gas pipelines

EurekAlert! Science News, Nanyang Technological University



A sensor network powered by an artificial intelligence (AI) algorithm developed by scientists from Nanyang Technological University, Singapore (NTU Singapore) can accurately detect, in real-time, gas leaks and unwanted water seepage into gas pipeline networks.

Successful in field trials conducted on Singapore’s gas pipeline networks, the algorithm has been patented and spun off into a start-up named Vigti, which is now commercialising the technology. It has recently raised early start-up funding from Artesian Capital and Brinc, Hong Kong.
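Vigti's patented algorithm is not public. As a generic illustration of the underlying idea (real-time anomaly detection on a stream of pipeline sensor readings), here is a minimal rolling z-score detector; the window size, warm-up length, and threshold are arbitrary assumptions, not Vigti's design.

```python
from collections import deque
from statistics import mean, stdev

class LeakDetector:
    """Illustrative rolling z-score detector for pipeline sensor readings.

    Flags a reading that deviates sharply from the recent baseline.
    NOT the NTU/Vigti algorithm, just a sketch of the general idea.
    """

    def __init__(self, window=50, warmup=10, threshold=4.0):
        self.window = deque(maxlen=window)
        self.warmup = warmup
        self.threshold = threshold

    def update(self, reading):
        """Return True if `reading` looks anomalous vs. recent history."""
        anomalous = False
        if len(self.window) >= self.warmup:
            mu, sigma = mean(self.window), stdev(self.window)
            anomalous = sigma > 0 and abs(reading - mu) / sigma > self.threshold
        if not anomalous:
            # Only normal readings update the baseline, so a sustained
            # leak keeps triggering rather than being absorbed.
            self.window.append(reading)
        return anomalous
```

A real deployment would fuse many sensors and distinguish leak signatures from water seepage, but the stream-in, flag-out shape is the same.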


This Chinese Lab Is Aiming for Big AI Breakthroughs

WIRED, Business, Will Knight



In a low-rise building overlooking a busy intersection in Beijing, Ji Rong Wen, a middle-aged scientist with thin-rimmed glasses and a mop of black hair, excitedly describes a project that could advance one of the hottest areas of artificial intelligence.

Wen leads a team at the Beijing Academy of Artificial Intelligence (BAAI), a government-sponsored research lab that’s testing a powerful new language algorithm—something similar to GPT-3, a program revealed in June by researchers at OpenAI that digests large amounts of text and can generate remarkably coherent, free-flowing language. “This is a big project,” Wen says with a big grin. “It takes a lot of computing infrastructure and money.”

Wen, a professor at Renmin University in Beijing recruited to work part-time at BAAI, hopes to create an algorithm that is even cleverer than GPT-3.


Google is investigating another top AI ethicist

The Verge, Jon Porter



Google is investigating artificial intelligence researcher Margaret Mitchell, who co-leads the company’s Ethical AI team, and has locked her corporate account, Axios reports. The news comes a little over a month after another prominent AI ethicist, Timnit Gebru, said she was fired by the company. Mitchell’s account has now reportedly been locked for “at least a few days” but she hasn’t been fired, according to a tweet from Gebru. Mitchell did not immediately respond to a request for comment.

In a statement given to Axios, Google said it was investigating Mitchell after its systems detected an account had “exfiltrated thousands of files and shared them with multiple external accounts.” According to an Axios source, Mitchell had been using a script to go through her messages, finding examples of discriminatory treatment of Gebru. Last week, Mitchell tweeted to say she was documenting “current critical issues from [Gebru’s] firing, point by point, inside and outside work.”


California Company Settles FTC Allegations It Deceived Consumers about use of Facial Recognition in Photo Storage App

U.S. Federal Trade Commission, Press Releases



A California-based developer of a photo app has settled Federal Trade Commission allegations that it deceived consumers about its use of facial recognition technology and its retention of the photos and videos of users who deactivated their accounts.

As part of the proposed settlement, Everalbum, Inc. must obtain consumers’ express consent before using facial recognition technology on their photos and videos. The proposed order also requires the company to delete models and algorithms it developed by using the photos and videos uploaded by its users.

“Using facial recognition, companies can turn photos of your loved ones into sensitive biometric data,” Andrew Smith, Director of the FTC’s Bureau of Consumer Protection, said. “Ensuring that companies keep their promises to customers about how they use and handle biometric data will continue to be a high priority for the FTC.”


New Intel CEO Making Waves: Rehiring Retired CPU Architects

AnandTech, Ian Cutress



We’re following the state of play with Intel’s new CEO, Pat Gelsinger, very closely. An Intel employee for 30 years who rose to the rank of CTO before spending 12 years away from the company, Gelsinger has been met with praise across the spectrum given his background and previous successes. He isn’t even set to take his new role until February 15th, yet his return is already causing a stir within Intel’s current R&D teams.

News in the last 24 hours, based on public statements, is that former Intel Senior Fellow Glenn Hinton, who lists being the lead architect of Intel’s Nehalem CPU core among his achievements, is coming out of retirement to re-join the company. (The other lead architects of Nehalem were Ronak Singhal and Per Hammerlund – Ronak is still at Intel, working on next-gen processors, while Per has been at Apple for five years.)


Technologists Use Facial Recognition on Parler Videos

VICE, Joseph Cox



Technologists have used facial recognition techniques on the large archive of Parler videos filmed during the January 6 Capitol riot, Motherboard has learned. In some cases they have been able to track individual faces across different videos, pinpointing where a person was at specific points in time, potentially even if they did not use Parler themselves.

The news signals how archivists, hackers, and hobbyists continue to mine the Parler data for what they believe may be useful insights to provide law enforcement. It also highlights the fraught issue of facial recognition, which can often be inaccurate and require manual analysis to review, and more generally demonstrates the democratization of facial recognition.


Elephant populations surveyed from space using artificial intelligence

BBC Science Focus Magazine, Jason Goodyer



In this study the team used an automated artificial intelligence system created by Dr Olga Isupova, a computer scientist at the University of Bath, to analyse high-resolution images, captured by the commercially run Worldview-3 observation satellite, of the elephants as they moved through forests and grasslands. They found that their system was able to pick out the animals with the same accuracy as human analysts.

Though the combination of satellite imagery and deep learning has been previously used to identify marine animals, the study marks the first time the technique has been used to monitor animals moving through a diverse, heterogeneous landscape that includes areas of open grassland, woodland and scrub.


Connecting users to quality journalism with AI-powered summaries

London School of Economics, Polis blog, Journalism Ai Collab



In the massive flood of information that meets the modern media consumer on digital platforms, it can often be hard to spot the real editorial gems. News is ubiquitous and quality journalism runs the risk of getting lost in the abundance of content. At the same time, the thirst for outstanding reporting is greater than ever. All respectable news outlets put great effort into producing well-crafted pieces of particularly high journalistic value. Those are the unforgettable stories that include unique voices or perspectives, that add deeper analysis and context or that excel in truly captivating storytelling. To put it simply: the very best of our journalism.

The hypothesis for this study is that AI can play a role in increasing the visibility and use of these high-value stories. We will explore how AI-powered, automated summaries can be used in the modern newsroom. Our main approach is rooted in the idea of ‘structured journalism’ that aims at atomising existing content and using automated repackaging to create new journalistic products.
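To make "AI-powered summaries" concrete at the simplest end of the spectrum, here is a minimal frequency-based extractive summariser. It is an illustrative baseline, not the collab's system, and the stopword list is an ad-hoc assumption: each sentence is scored by how often its words occur across the whole document, and the top sentences are returned in their original order.

```python
import re
from collections import Counter

def extractive_summary(text, n_sentences=2):
    """Minimal frequency-based extractive summariser (illustrative baseline).

    Scores each sentence by the document-wide frequency of its words,
    normalised by sentence length, and keeps the top n in source order.
    """
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    words = re.findall(r'[a-z]+', text.lower())
    # Ad-hoc stopword list; a real system would use a proper one.
    stop = {'the', 'a', 'an', 'of', 'to', 'in', 'and', 'is', 'it', 'that'}
    freq = Counter(w for w in words if w not in stop)

    def score(sentence):
        tokens = re.findall(r'[a-z]+', sentence.lower())
        return sum(freq[t] for t in tokens) / (len(tokens) or 1)

    top = sorted(sentences, key=score, reverse=True)[:n_sentences]
    return ' '.join(s for s in sentences if s in top)
```

Modern newsroom systems use neural abstractive models rather than word counts, but the extractive baseline above is the traditional starting point they are measured against.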

SPONSORED CONTENT





The eScience Institute’s Data Science for Social Good program is now accepting applications for student fellows and project leads for the 2021 summer session. Fellows will work with academic researchers, data scientists and public stakeholder groups on data-intensive research projects that will leverage data science approaches to address societal challenges in areas such as public policy, environmental impacts and more. Student applications due 2/15 – learn more and apply here. DSSG is also soliciting project proposals from academic researchers, public agencies, nonprofit entities and industry who are looking for an opportunity to work closely with data science professionals and students on focused, collaborative projects to make better use of their data. Proposal submissions are due 2/22.

 


Tools & Resources



Managing the Unintended Consequences of Your Innovations

Harvard Business Review, Nitin Nohria and Hemant Taneja



Venture capitalists love disruptive startups that can scale quickly and keep regulators at bay. But as society becomes more aware of the unintended consequences of new enterprises, especially ones that create technology that becomes a part of our daily life, the people who launch these ventures must become more attentive to and proactive about identifying potential unintended consequences, in order to mitigate them early.


Featured Dataset: “NYC COVID Test Sites”

Twitter, Qri and chriswhong



Testing site locations, hours of operation & contact info scraped from NYC Health+Hospitals. Would benefit from geocoding & mapping!
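Once the scraped rows are geocoded, the "mapping" half of that suggestion can start with a nearest-site lookup in standard-library Python. The coordinates below are made-up placeholders, not real site locations:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))  # 6371 km = mean Earth radius

# Hypothetical geocoded rows; placeholder coordinates, not real sites.
sites = [
    {"name": "Site A", "lat": 40.75, "lon": -73.99},
    {"name": "Site B", "lat": 40.65, "lon": -73.95},
    {"name": "Site C", "lat": 40.85, "lon": -73.88},
]

def nearest_site(lat, lon, sites=sites):
    """Return the site record closest to the given point."""
    return min(sites, key=lambda s: haversine_km(lat, lon, s["lat"], s["lon"]))
```

From there, plotting the geocoded rows on a web map (e.g. with a mapping library) is a small additional step.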


Robotic Process Automation with JupyterLab

Jupyter Blog, Martin Renou



Most typically, RPA developers use a mix of textual programming and manually performed actions. The resulting programs are typically called software robots. Interactive computing tools like Jupyter are therefore a natural environment for RPA, as Jupyter's interactive nature allows for quick iteration and trial and error when developing such robots.

While many RPA tools are commercial software, Robot Framework and the tooling developed by Robocorp provide an open-source RPA programming language, with a high-level syntax, extensible with Python plugins. It has a rich ecosystem of libraries and tools that are developed as separate projects.


Careers


Full-time, non-tenured academic positions

Application for Professional Specialist – Research on Policing Reform and Accountability (RoPRA)



Princeton University, School of Public and International Affairs; Princeton, NJ

Community Coordinator



London School of Economics, Journalism Ai Collab; London, England
