Kate Starbird, a University of Washington professor, discussed the distribution of misinformation and “fake news” in the media at a Shorenstein Center event on Thursday.
Starbird, an assistant professor of human centered design and engineering, spent nine months probing Twitter and Reddit to study the prevalence of and incentives behind propagating misinformation, as well as its effects on politics and society. She said misinformation is far more prevalent and influential than many people would expect.
Over the course of nine months, Starbird and her team conducted research on fake news using her so-called “Grounded, Interpretive, Mixed Method.” She began on Twitter, searching for phrases commonly associated with crisis events. She then filtered the millions of resulting tweets by fine-tuning her search with specific fake-news terms, such as “false flag” and “crisis actor.”
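The two-stage filtering described above can be sketched in a few lines of Python. This is only an illustration of the general approach; the tweet texts and term lists below are hypothetical, not drawn from Starbird's actual dataset or query terms.

```python
# Hypothetical sketch of two-stage keyword filtering:
# first keep tweets about crisis events, then narrow to
# those using fake-news framing terms.
CRISIS_TERMS = {"shooting", "explosion", "attack"}
FAKE_NEWS_TERMS = {"false flag", "crisis actor", "hoax"}

def matches_any(text, terms):
    """Return True if any term appears in the text (case-insensitive)."""
    lowered = text.lower()
    return any(term in lowered for term in terms)

# Stand-in sample; the real study filtered millions of tweets.
tweets = [
    "Reports of an explosion downtown",
    "That shooting was a false flag, wake up",
    "Witnesses say the attack involved crisis actors",
]

# First pass: tweets mentioning a crisis event.
crisis_tweets = [t for t in tweets if matches_any(t, CRISIS_TERMS)]
# Second pass: narrow to tweets using fake-news terms.
flagged = [t for t in crisis_tweets if matches_any(t, FAKE_NEWS_TERMS)]
print(flagged)
```

In practice a study like this would combine such keyword filters with manual, interpretive coding of the results, which is what the "grounded, interpretive" part of the method name refers to.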
… I worry, however, that enthusiasm for A.I. is preventing us from reckoning with its looming effects on society. Despite its name, there is nothing “artificial” about this technology — it is made by humans, intended to behave like humans and affects humans. So if we want it to play a positive role in tomorrow’s world, it must be guided by human concerns.
I call this approach “human-centered A.I.” It consists of three goals that can help responsibly guide the development of intelligent machines.
First, A.I. needs to reflect more of the depth that characterizes our own intelligence. Consider the richness of human visual perception. It’s complex and deeply contextual, and naturally balances our awareness of the obvious with a sensitivity to nuance. By comparison, machine perception remains strikingly narrow.
The rise of fake news highlights the erosion of long-standing institutional bulwarks against misinformation in the internet age. Concern over the problem is global. However, much remains unknown regarding the vulnerabilities of individuals, institutions, and society to manipulations by malicious actors. A new system of safeguards is needed. Below, we discuss extant social and computer science research regarding belief in fake news and the mechanisms by which it spreads. Fake news has a long history, but we focus on unanswered scientific questions raised by the proliferation of its most recent, politically oriented incarnation. Beyond selected references in the text, suggested further reading can be found in the supplementary materials.
Over the last year, “fake news” has gone from a niche concern that charlatans exploited for profit to a code-red existential threat to the fabric of society, or something in between. But our scientific understanding of how and why false stories spread is still limited. Researchers at MIT’s Media Lab are diving in to correct that blind spot, and for anyone looking to point a finger, we have some bad news.
A new paper published on Thursday is the largest-ever longitudinal study of the spread of false news online. Much of the scientific work done to assess fake news and its spread through social networks has focused on individual rumors. Little research comprehensively evaluates the differences in the spread of true and false news across a variety of topics, or examines why false news may spread differently than the truth. Soroush Vosoughi and his colleagues examined 126,000 rumor cascades, tweeted by three million people more than 4.5 million times, to better understand the qualities that make a news story effectively viral.
Lawmakers worry declining federal research and development dollars could stunt the development of new artificial intelligence tools and set the country back in the race for global dominance in the emerging technology.
“The future of U.S. innovation is at stake—this should be a cause of concern for everyone,” said Rep. Robin Kelly, D-Ill. “This administration’s science, immigration and education policies are all working together to reduce the U.S. lead in AI technologies.”
Kelly and other lawmakers voiced their concerns about shrinking government R&D funds Wednesday in the second of three House Oversight IT subcommittee hearings on government’s role in developing and implementing artificial intelligence. In addition to supporting basic R&D, the government must also work to strengthen America’s STEM workforce and demystify the public perception of artificial intelligence, federal technology experts told the panel.
University of California-San Diego, UC San Diego Health
“It seems like every time you turn around, someone is talking about the importance of artificial intelligence and machine learning,” said Trey Ideker, PhD, a professor at the University of California San Diego School of Medicine and Moores Cancer Center. “But all of these systems are so-called ‘black boxes.’ They can be very predictive, but we don’t actually know all that much about how they work.”
Ideker gives an example: machine learning systems can analyze the online behaviors of millions of people to flag an individual as a potential “terrorist” or “suicide risk.” “Yet we have no idea how the machine reached that conclusion,” he said.
For machine learning to be useful and trustworthy in health care, Ideker said, practitioners need to open up the black box and understand how a system arrives at a decision.
Yet here she is, inside a sun-filled classroom at Lindblom Math & Science Academy on the city’s South Side, throwing around tech-industry terms like “ideation” and working with friends to design her first mobile app.
It’s all part of the introductory computer-science course that every student in Chicago must now take in order to graduate.
“I’m still not really that into technology,” said Klyce, 15. “But this is actually my favorite class now.”
These days, fast radio bursts (FRBs) are one of the hottest mysteries in astronomy.
FRBs are intense extragalactic point sources of radiation at radio wavelengths, lasting only milliseconds and of unknown origin; only a few dozen have ever been detected. An FRB might result from a cataclysmic event on a neutron star, or something else entirely, but each FRB releases in milliseconds as much energy as the sun puts out over several years. As there is no pattern to where they flare up in the sky, detecting them is a matter of having a wide-angle ‘lens’ on your radio telescope, a very short ‘shutter time,’ and luck.
Very few telescopes have those capabilities, but the recent upgrade of the Westerbork Synthesis Radio Telescope (WSRT) took care of this. Each of the system’s 12 active large parabolic dishes got a new radio receiver that acts as the equivalent of the CCD-chip in an optical camera.
‘We were lucky that the design of the Westerbork telescope allowed us to put these receivers in,’ says Joeri van Leeuwen, astrophysicist at ASTRON, the Netherlands Institute for Radio Astronomy.
A new study by three MIT scholars has found that false news spreads more rapidly on the social network Twitter than real news does — and by a substantial margin.
“We found that falsehood diffuses significantly farther, faster, deeper, and more broadly than the truth, in all categories of information, and in many cases by an order of magnitude,” says Sinan Aral, a professor at the MIT Sloan School of Management and co-author of a new paper detailing the findings.
“These findings shed new light on fundamental aspects of our online communication ecosystem,” says Deb Roy, an associate professor of media arts and sciences at the MIT Media Lab and director of the Media Lab’s Laboratory for Social Machines (LSM), who is also a co-author of the study. Roy adds that the researchers were “somewhere between surprised and stunned” at the different trajectories of true and false news on Twitter.
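The measures quoted above (how "far," "deep," and "broad" a story travels) come from treating each rumor cascade as a tree of retweets. A minimal sketch of how such metrics can be computed, using a hypothetical five-tweet cascade rather than any data from the study:

```python
# Hypothetical retweet cascade: each retweet records the tweet
# it reshared, forming a tree rooted at the original post.
from collections import defaultdict

# parent[tweet_id] = id of the tweet it retweeted (None for the original)
parent = {"a": None, "b": "a", "c": "a", "d": "b", "e": "d"}

def cascade_depth(node):
    """Number of retweet hops from this tweet back to the original."""
    depth = 0
    while parent[node] is not None:
        node = parent[node]
        depth += 1
    return depth

size = len(parent)                             # total tweets in the cascade
depth = max(cascade_depth(n) for n in parent)  # longest retweet chain
children = defaultdict(int)
for node, p in parent.items():
    if p is not None:
        children[p] += 1
breadth = max(children.values())               # widest fan-out at any node
print(size, depth, breadth)  # 5 3 2
```

Roughly, "farther" and "deeper" correspond to size and depth, and "more broadly" to breadth; the published analysis uses more refined definitions, but the tree structure is the same.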
“Becoming active in the R community can have many benefits, and submitting an abstract to one of the Enterprise Applications of the R Language (EARL) Conferences is the perfect platform to get started!” … The deadline for abstract submissions to the US Roadshow is April 30.