Facial recognition. What do you think of when you hear that term? How do these systems know your name? How accurate are they? And what else can they tell you about someone whose image is in the system?
These questions and others led the Partnership on AI (PAI) to begin the facial recognition systems project. During a series of workshops with our partners, we discovered it was first necessary to grasp how these systems work. The result was PAI’s paper “Understanding Facial Recognition Systems,” which defines the technology used in systems that attempt to verify who someone says they are or identify who someone is.
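The paper's core distinction, verification (confirming someone is who they claim to be, a 1:1 check) versus identification (finding who someone is among enrolled faces, a 1:N search), can be sketched with toy embedding vectors. The vectors, similarity threshold, and gallery below are illustrative placeholders, not values from any real system:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def verify(probe, claimed, threshold=0.8):
    """1:1 verification: does the probe match the claimed identity's template?"""
    return cosine_similarity(probe, claimed) >= threshold

def identify(probe, gallery, threshold=0.8):
    """1:N identification: best-matching enrolled identity, or None."""
    name, best = max(gallery.items(), key=lambda kv: cosine_similarity(probe, kv[1]))
    return name if cosine_similarity(probe, best) >= threshold else None

# Toy gallery of enrolled face embeddings (real systems use learned,
# high-dimensional vectors produced by a neural network).
gallery = {"alice": [0.9, 0.1, 0.2], "bob": [0.1, 0.9, 0.3]}
probe = [0.85, 0.15, 0.25]
```

Verification answers "is this Alice?" with a single comparison; identification searches the whole gallery, which is why its error rates grow with gallery size.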
Researchers were able to trick a Tesla Inc. vehicle into speeding by putting a strip of electrical tape over a speed limit sign, spotlighting the kinds of potential vulnerabilities facing automated driving systems.
Technicians at McAfee Inc. placed the piece of tape horizontally across the middle of the “3” on a 35 mile-per-hour speed limit sign. The change caused the vehicle to read the limit as 85 miles per hour, and its cruise control system automatically accelerated, according to research released by McAfee on Wednesday.
More than 40% of U.S. federal agencies and departments have experimented with AI tools, but only 15% currently use highly sophisticated AI, according to an analysis by Stanford University computer scientists published today in “Government by Algorithm,” a joint report from Stanford and New York University.
“This is concerning because agencies will find it harder to realize gains in accuracy and efficiency with less sophisticated tools. This result also underscores AI’s potential to widen, not narrow, the public-private technology gap,” the report reads.
University of California, Office of Scholarly Communication
The Public Library of Science (PLOS) and the University of California (UC) today announced a two-year agreement that will make it easier and more affordable for UC researchers to publish in the nonprofit open access publisher’s suite of journals. By bringing together PLOS, one of the world’s leading native open access publishers, and UC, which accounts for nearly 10 percent of all U.S. publishing output, the pilot breaks new ground in the global movement to advance open access publishing and empower more authors to share their research with the world.
“Scientific research is increasingly an international endeavor, often at its best when it crosses conceptual, disciplinary, and technological boundaries,” said Keith Yamamoto, Vice Chancellor for Science Policy and Strategy and Professor of Cellular and Molecular Pharmacology at UC San Francisco, and a member of the PLOS Board of Directors. “Building that global continuum of discovery demands open, efficient, and rapid distribution of information. This agreement shows that key institutional stakeholders — universities and publishers — can work cooperatively to develop sustainable models that serve science, scientists, and trainees.”
Los Angeles moved this week to dismiss nearly 66,000 marijuana convictions, years after the state voted to legalize the drug.
The county is working with a not-for-profit technology organization, Code for America, to use algorithms to identify eligible cases within decades-old court documents.
“The dismissal of tens of thousands of old cannabis-related convictions in Los Angeles county will bring much-needed relief to communities of color that disproportionately suffered the unjust consequences of our nation’s drug laws,” said Jackie Lacey, the LA district attorney, in a statement on Thursday.
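A rough illustration of how such automated eligibility screening over court records might work; the charge codes, cutoff date, and record fields below are hypothetical simplifications for the sketch, not Code for America's actual criteria or California's legal standards:

```python
from datetime import date

def is_eligible(record):
    """Flag a conviction record as potentially eligible for dismissal.

    Hypothetical rules: a cannabis-related charge code, convicted before
    the 2016 legalization vote, and no pending case on file.
    """
    cannabis_codes = {"HS11357", "HS11358", "HS11359", "HS11360"}
    return (
        record["charge_code"] in cannabis_codes
        and record["conviction_date"] < date(2016, 11, 9)
        and not record["pending_case"]
    )

# Toy records standing in for decades-old court documents.
records = [
    {"charge_code": "HS11357", "conviction_date": date(1999, 5, 2), "pending_case": False},
    {"charge_code": "PC459",   "conviction_date": date(2004, 8, 1), "pending_case": False},
]
eligible = [r for r in records if is_eligible(r)]
```

The hard part in practice is not the filter itself but parsing inconsistent, decades-old records into structured fields like these.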
The Wi-Fi Alliance a few weeks ago said it will launch the Wi-Fi 6E brand to classify Wi-Fi devices operating in the 6-gigahertz spectrum, which the Federal Communications Commission plans to release soon. Because traditional Wi-Fi devices operate in the 2.4GHz and 5GHz airwaves, I initially hated the idea of 6E: I felt it would further confuse consumers, forcing them to figure out whether their Wi-Fi 6 products would talk to their 6E products or not.
But in fact, consumers buying 6E devices are likely to see an improvement over those using Wi-Fi 6 devices. And with Broadcom announcing it will have a Wi-Fi 6E-capable chip available by the end of the year, they will soon get the chance to experience the difference.
Last year, Facebook CEO Mark Zuckerberg called for governments to work with online platforms to create and adopt new regulation for online content, noting, “It’s impossible to remove all harmful content from the Internet, but when people use dozens of different sharing services — all with their own policies and processes — we need a more standardized approach.”
Today, we’re publishing a white paper setting out some questions that regulation of online content might address.
“The battle for industrial data starts now,” Thierry Breton, the EU Commissioner for the Internal Market told Reuters less than a week ago. Breton, the French ex-CEO of IT multinational Atos and telecoms giant Orange, is under no illusion: the Silicon Valley giants have won hands down the race to colonise the internet with platforms fuelled and augmented by their users’ personal data. China has also built on that model, harnessing its national tech firms to power through a state-run version of surveillance capitalism with little or no regard for citizens’ privacy.
The EU’s claim to relevance amounts to having styled itself as the world’s tech referee. It has been forcing companies to comply with stringent data protection regulations (GDPR), merrily fining big tech for antitrust violations, and periodically scolding various honchos – read: Mark Zuckerberg – for not doing enough about privacy, disinformation, and terrorism. Problem is: being a thorn in the side cannot be a long-term strategy. As Guntram Wolff, an economist from Brussels think-tank Bruegel pithily put it: “referees don’t win.”
Achieving meaningful machine learning with microcontroller-level devices is not an easy task. Memory, a key requirement for AI calculations, is often severely limited, for example. But data science is advancing quickly to reduce model size, and device and IP vendors are responding by developing tools and incorporating features tailored to the demands of modern machine learning.
As developers realize IoT systems need more intelligence deployed to the edge to overcome latency, performance, data privacy/security, and bandwidth challenges, we explore the pursuit of that smarter edge: the what, why and where.
TinyML Takes Off
As a sign of this sector’s rapid growth, the TinyML Summit (a new industry event held earlier this month in Silicon Valley) is going from strength to strength. The first summit, held last year, had 11 sponsoring companies whereas this year’s event had 27, with slots selling out much earlier, according to the organizers, who also said that membership for their global monthly meetups for designers has grown dramatically.
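One common way to shrink models for memory-limited microcontrollers is post-training quantization: storing weights as 8-bit integers instead of 32-bit floats, roughly a 4x size reduction. Below is a minimal sketch of the idea, assuming a simple per-tensor affine scheme; real toolchains automate this and add many refinements:

```python
import struct

def quantize_int8(weights):
    """Affine-quantize float weights to int8; returns (q, scale, zero_point)."""
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255 or 1.0  # guard against all-equal weights
    zero_point = round(-lo / scale) - 128
    q = [max(-128, min(127, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float weights from the int8 representation."""
    return [(v - zero_point) * scale for v in q]

weights = [0.0, -1.5, 0.75, 2.0, -0.25]
q, scale, zp = quantize_int8(weights)

# int8 storage is one byte per weight vs. four for float32.
fp32_bytes = len(weights) * struct.calcsize("f")
int8_bytes = len(q)
```

The trade-off is a small, bounded rounding error per weight (at most about one quantization step), which for many models costs little accuracy while cutting memory fourfold.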
If you feel like a button isn’t doing anything, there’s a pretty good chance it’s been permanently deactivated. As congestion has increased and the systems to manage it have become more advanced over the years, cities have moved away from using crosswalk buttons at all. In 2018, for example, CNN reported that only around 100 of New York City’s 1,000 buttons were still functioning. Since actually removing the buttons from crosswalks would be a costly endeavor, cities have opted to leave them intact, just waiting to be pummeled by impatient pedestrians who don’t know any better.
X was once seen as a punchline in Silicon Valley (and on Silicon Valley). Today, its self-driving cars have logged 10 million miles on public roads and power an autonomous ride-sharing service in Arizona. Loon’s balloons provide internet access to communities in rural Peru and Kenya. Wing, X’s drone delivery effort, is carrying food and medicines to customers in Australia. Still, as Alphabet continues to be buffeted by employee protests and leadership changes – in December 2019, founders Larry Page and Sergey Brin stepped down, handing the company to Google CEO Sundar Pichai – X is facing renewed scrutiny to prove that its moonshots are more than just an indulgence, or expensive PR stunts. X celebrates its tenth anniversary in 2020. When will its bets pay off?
The social aspect of music listening is something one could be forgiven for forgetting even exists within contemporary music streaming. The neoliberal, endless personalization of Spotify’s Discover Weekly playlists and YouTube’s recommendations algorithm can make it feel like all music listeners are happily trapped within their own filter bubbles. This isn’t exactly true, but the perception, in this case, isn’t too far off from reality. The mobile experiences of Apple Music, Spotify, Tidal and YouTube Music (not regular YouTube) are all built in a way where, unless users seek out playlists created by friends, it’d be easy to remain completely oblivious to those friends’ musical tastes.
What little fragments of leftover social features exist are primarily afterthoughts at this point.
The rapid development of artificial intelligence technologies around the globe has led to increasing calls for robust AI policy: laws that let innovation flourish while protecting people from privacy violations, exploitive surveillance, biased algorithms, and more.
But the drafting and passing of such laws has been anything but easy.
“This is a very complex problem,” Luis Videgaray PhD ’98, director of MIT’s AI Policy for the World Project, said in a lecture on Wednesday afternoon. “This is not something that will be solved in a single report. This has got to be a collective conversation, and it will take a while. It will be years in the making.”
Twitter is experimenting with adding brightly colored labels directly beneath lies and misinformation posted by politicians and other public figures, according to a leaked demo of new features sent to NBC News.
Twitter confirmed that the leaked demo, which was accessible on a publicly available site, is one possible iteration of a new policy to target misinformation. The company does not have a date to roll out any new misinformation features.
In this version, disinformation or misleading information posted by public figures would be corrected directly beneath a tweet by fact-checkers and journalists who are verified on the platform and possibly by other users who would participate in a new “community reports” feature, which the demo claims is “like Wikipedia.”
Berkeley, CA, March 3, starting at 8:30 a.m., Zellerbach Hall. “In addition to a main stage that includes the likes of Amazon’s Tye Brady, UC Berkeley’s Stuart Russell, Anca Dragan of Waymo, Claire Delaunay of Nvidia, James Kuffner of Toyota’s TRI-AD and a surprise interview with Disney Imagineers, we’ll also be offering a more intimate Q&A stage featuring speakers from SoftBank Robotics, Samsung, Sony’s Innovation Fund, Qualcomm, Nvidia and more.” [$$$]
The National Science Foundation and the Simons Foundation Division of Mathematics and Physical Sciences “will jointly sponsor up to two new research collaborations consisting of mathematicians, statisticians, electrical engineers, and theoretical computer scientists. Research activities will be focused on explicit topics involving some of the most challenging questions in the general area of Mathematical and Scientific Foundations of Deep Learning.” Deadline for letter of intent is March 20.
Ottawa, ON, Canada, November 1-4. The conference “grew out of the increasing work around games and play emerging from the ACM annual conference on Human Factors in Computing Systems (CHI) as well as smaller conferences such as Fun and Games and Gamification.” Deadline for submissions is April 7.
This post was stimulated by Al Downie’s recent article, Research Data Management as a national service. Thanks to Al for this stimulating thought piece, and also to Andy Turner for bringing JISC’s Open Research Hub into the conversation in his comment on Al’s post. I’ll come back to the Open Research Hub later, and start by picking up on Al’s thread.
“When I’m working with Jupyter notebooks, I often want to work with them from within a virtual environment. The general best practice is that you should always use either virtual environments or Docker containers for working with Python, for reasons outlined in this post, or you’re gonna have a bad time. I know I have.”
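A minimal sketch of that workflow, assuming a Unix-like shell; the project and kernel names (`myproject`) are placeholders:

```shell
# Create and activate a project-local virtual environment.
python3 -m venv .venv
. .venv/bin/activate

# Install Jupyter and the kernel package inside the environment.
pip install jupyter ipykernel

# Register the environment as a selectable kernel in Jupyter.
python -m ipykernel install --user --name myproject --display-name "Python (myproject)"

# Launch and pick the "Python (myproject)" kernel in the notebook UI.
jupyter notebook
```

Registering the kernel this way means packages you `pip install` into `.venv` are what the notebook imports, rather than whatever happens to be in the system Python.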