Since its beginning in the 1950s, the field of artificial intelligence has cycled several times between periods of optimistic predictions and massive investment (“AI spring”) and periods of disappointment, loss of confidence, and reduced funding (“AI winter”). Even with today’s seemingly fast pace of AI breakthroughs, the development of long-promised technologies such as self-driving cars, housekeeping robots, and conversational companions has turned out to be much harder than many people expected. One reason for these repeating cycles is our limited understanding of the nature and complexity of intelligence itself. In this paper I describe four fallacies in common assumptions made by AI researchers, which can lead to overconfident predictions about the field. I conclude by discussing the open questions spurred by these fallacies, including the age-old challenge of imbuing machines with humanlike common sense.
from Science Advances; Christopher W. Tessum, David A. Paolella, Sarah E. Chambliss, Joshua S. Apte, Jason D. Hill and Julian D. Marshall
Racial-ethnic minorities in the United States are exposed to disproportionately high levels of ambient fine particulate air pollution (PM2.5), the largest environmental cause of human mortality. However, it is unknown which emission sources drive this disparity and whether differences exist by emission sector, geography, or demographics. Quantifying the PM2.5 exposure caused by each emitter type, we show that nearly all major emission categories—consistently across states, urban and rural areas, income levels, and exposure levels—contribute to the systemic PM2.5 exposure disparity experienced by people of color. We identify the most inequitable emission source types by state and city, thereby highlighting potential opportunities for addressing this persistent environmental inequity.
A multimillion dollar gift from Robert “Bobby” Kotick, CEO of Activision Blizzard, will establish a multidisciplinary esports program at the University of Michigan School of Information.
Esports are organized video game competitions played for spectators. The contribution lays the groundwork for an esports minor at U-M by 2022 to help prepare students for careers in the burgeoning esports industry.
Kotick’s $4 million gift will fund a professor to lead the development of the program, combining best-in-class research and instruction in computer science, sports management and user experience, among other disciplines. Under Kotick’s 30-year leadership, Activision Blizzard has become one of the top global developers and publishers of interactive entertainment, best known for iconic franchises including Call of Duty, Candy Crush and World of Warcraft.
Researchers at CMU and McGill University have adapted an algorithm to identify similarities across escort ads, making it easier for law enforcement to identify human traffickers.
It turns out there are still more mysteries to uncover about the Dead Sea Scrolls.
The latest discovery, made with the help of artificial intelligence, is that the artifacts were likely transcribed by two different writers, despite the fact that all the handwriting looks similar.
“We will never know their names. But after 70 years of study, this feels as if we can finally shake hands with them through their handwriting,” Mladen Popović, a Bible studies professor and a member of the three-person team from the University of Groningen in the Netherlands behind the study, said in a statement. “This opens a new window on the ancient world that can reveal much more intricate connections between the scribes that produced the scrolls.”
Yahoo Answers is not what most people would call a good source of information. On Monday morning, the top questions on its homepage, as decided by its users, included whether the Democratic Party would eventually initiate some kind of genocide, whether Prince Harry and Meghan Markle were really in love, why small dogs were “the most aggressive seeming,” and “What’s the last thing that entered your nose by mistake?”
Still, when Yahoo made the unceremonious announcement earlier this month that the site would be wiped from the face of the web on May 4, with little explanation beyond the fact that “it has become less popular,” there was a general outcry and a wave of nostalgia. The Verge gathered up “the best” material from Yahoo Answers’ 16 years of operation, including such classics as “Is it illegal to kill an ant????????!?” and “Is there a spell to become a mermaid that actually works?” BuzzFeed eulogized a website that “died as it lived, needlessly and stupidly.” Twitter was crowded with screenshots; one popular email newsletter started a series of commemorative illustrations. “Yahoo’s still out there doing what they do best: deleting an unimaginable amount of internet history with 30 days’ notice,” tweeted Andy Baio, a web developer who worked at the company from 2005 to 2007.
In light of the recent escalation of attacks and hate crimes against minorities, BRIDGE (Building Research on Inequality and Diversity to Grow Equity) at Rice University has initiated the Systemic Racism & Racial Inequality Seed Grant to explore racial inequality and racism.
Rice D2K Lab’s teaching professors, Arko Barman and Su Chen, and assistant professor Brielle Bryan (Department of Sociology), have been awarded $50,000 for their project titled “Systemic Racial Biases in Traffic Stops & Their Financial Impact on Persons of Color.”
If you haven’t had time to read the AI Index Report for 2021, which clocks in at 222 pages, don’t worry—we’ve got you covered. The massive document, produced by the Stanford Institute for Human-Centered Artificial Intelligence, is packed full of data and graphs, and we’ve plucked out 15 that provide a snapshot of the current state of AI.
Deeply interested readers can dive into the report to learn more; it contains chapters on R&D, technical performance, the economy, AI education, ethical challenges of AI applications, diversity in AI, and AI policy and national strategies.
After a search engine finds a page, the next step is to read and understand it. How well does this work in practice? Again, relatively few websites expect Google to manage this on its own. Instead, they provide copious metadata to help Google understand what a page is about and how it relates to other pages.
At some point Google gave up trying to work out which of two similar pages is the original. Instead, there is now a piece of metadata you add to tell Google which page is the “canonical” version, so that it knows which one to put in the search results and doesn’t wrongly divvy up one page’s “link juice” into multiple buckets.
Google also gave up trying to divine who the author is. While Google+ was a goer, they tried to encourage webmasters to attach metadata referring to the author’s Google+ profile. Now that Google+ has been abandoned they instead read metadata from Facebook’s OpenGraph specification, particularly for things other than the main set of Google search results (for example in the news stories they show to Android users). For other data they parse JSON-LD metadata tags, “microformats” and probably much more.
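The canonical link, OpenGraph tags, and JSON-LD blocks described above are all ordinary HTML markup, so a crawler can pull them out with a standard parser. As a minimal sketch of that extraction (the sample page and its field values are invented for illustration), using only Python’s standard library:

```python
import json
from html.parser import HTMLParser

class MetadataParser(HTMLParser):
    """Collects the canonical link, OpenGraph tags, and JSON-LD blocks."""

    def __init__(self):
        super().__init__()
        self.canonical = None   # from <link rel="canonical">
        self.opengraph = {}     # from <meta property="og:...">
        self.json_ld = []       # from <script type="application/ld+json">
        self._in_json_ld = False
        self._buf = ""

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "link" and attrs.get("rel") == "canonical":
            self.canonical = attrs.get("href")
        elif tag == "meta" and attrs.get("property", "").startswith("og:"):
            self.opengraph[attrs["property"]] = attrs.get("content")
        elif tag == "script" and attrs.get("type") == "application/ld+json":
            self._in_json_ld = True

    def handle_data(self, data):
        if self._in_json_ld:
            self._buf += data

    def handle_endtag(self, tag):
        if tag == "script" and self._in_json_ld:
            self.json_ld.append(json.loads(self._buf))
            self._buf = ""
            self._in_json_ld = False

# A hypothetical page carrying all three kinds of metadata.
page = """
<html><head>
  <link rel="canonical" href="https://example.com/article">
  <meta property="og:title" content="An Example Article">
  <script type="application/ld+json">
    {"@type": "NewsArticle", "author": {"name": "A. Writer"}}
  </script>
</head><body>...</body></html>
"""

parser = MetadataParser()
parser.feed(page)
print(parser.canonical)                     # https://example.com/article
print(parser.opengraph["og:title"])         # An Example Article
print(parser.json_ld[0]["author"]["name"])  # A. Writer
```

A real crawler would of course be far more defensive (malformed JSON, multiple canonical links, relative URLs), but the point stands: the signals Google relies on are sitting right there in the page head.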
from The Globe and Mail; Fenwick McKelvey and Jonathan Roberge
Last week’s federal budget committed $443.8-million over the next 10 years to renew the Pan-Canadian Artificial Intelligence Strategy. By chance, the budget coincided with the European Union’s release of proposed AI regulation. Comparing the two shows that Canada is playing a risky game by avoiding a robust, rights-based approach to AI governance.
AI is now firmly part of how society is governed, and the EU approach is a clear interventionist legal framework meant to address AI’s complexity, unpredictability and autonomous behaviour. Their approach bans certain applications of AI, notably most uses of facial recognition in public space, stipulates high-risk activities and then calls for better codes of conduct and assessment tools for low- or moderate-risk uses.
The prohibitions on AI are welcome. For many Canadians worried about AI after watching the hit docudrama The Social Dilemma, the EU’s ban on AI intended to manipulate people’s behaviour or exploit their vulnerabilities will sound eminently sensible.
AI has long been quietly embedding itself into higher education in ways like these, often to save money — a need that’s been heightened by pandemic-related budget squeezes.
Now, simple AI-driven tools like these chatbots, plagiarism-detecting software and apps to check spelling and grammar are being joined by new, more powerful – and controversial – applications that answer academic questions, grade assignments, recommend classes and even teach.
The newest can evaluate and score applicants’ personality traits and perceived motivation, and colleges are increasingly using these tools to make admissions and financial aid decisions.
As the presence of this technology on campus grows, so do concerns about it. In at least one case, a seemingly promising use of AI in admissions decisions was halted because, by using algorithms to score applicants based on historical precedent, it perpetuated bias.
A first-of-its-kind agreement between Oak Ridge National Laboratory and General Motors could speed up the car manufacturer’s building of autonomous vehicles and increase onboard computing capacity.
The Department of Energy’s lab has licensed its artificial intelligence software system, the Multinode Evolutionary Neural Networks for Deep Learning, to GM for use in vehicle technology and design.
The AI system, known as MENNDL, uses evolution to design optimal convolutional neural networks – algorithms used by computers to recognize patterns in datasets of text, images or sounds. General Motors will assess MENNDL’s potential to accelerate advanced driver assistance systems technology and design.
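MENNDL’s own code is not shown here, but the underlying idea of evolutionary neural architecture search can be sketched generically. In this toy Python example (the search space, parameter names, and scoring function are all hypothetical stand-ins; a real system like MENNDL trains each candidate network and uses validation accuracy as fitness), a population of CNN configurations is mutated and selected over many generations:

```python
import random

random.seed(0)  # make the toy run reproducible

# Search space: each "genome" is one candidate CNN configuration.
SEARCH_SPACE = {
    "num_layers": [2, 3, 4, 5],
    "filters": [16, 32, 64, 128],
    "kernel_size": [3, 5, 7],
}

def random_genome():
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def mutate(genome):
    # Copy the parent and resample one randomly chosen gene.
    child = dict(genome)
    gene = random.choice(list(SEARCH_SPACE))
    child[gene] = random.choice(SEARCH_SPACE[gene])
    return child

def fitness(genome):
    # Stand-in for training the network and measuring validation
    # accuracy; here we simply reward closeness to one fixed target.
    target = {"num_layers": 4, "filters": 64, "kernel_size": 5}
    return sum(genome[k] == v for k, v in target.items())

def evolve(generations=30, population_size=10, elite=3):
    population = [random_genome() for _ in range(population_size)]
    for _ in range(generations):
        # Keep the best few, refill the rest with mutated elites.
        population.sort(key=fitness, reverse=True)
        parents = population[:elite]
        population = parents + [
            mutate(random.choice(parents))
            for _ in range(population_size - elite)
        ]
    return max(population, key=fitness)

best = evolve()
print(best, fitness(best))
```

The expensive part in practice is the fitness evaluation, which is why MENNDL was built to run on a supercomputer: each candidate architecture must actually be trained before it can be scored.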
In recent years, the company has started incorporating machine learning into various products, from drug discovery tools to its laboratory information management systems, which labs use to boost productivity by tracking data associated with samples, experiments, lab workflows and instruments.
As Thermo Fisher Scientific’s VP of global data science, analytics and financial solutions, Larry Kushnir oversees the company’s development and integration of machine learning and other data science technology into its products. Because his team is a microcosm of a sprawling multinational organization with a dizzying variety of complex products, we asked Kushnir how they maintain their focus on the company’s core values in their day-to-day work.
Part of that focus, he said, comes from the real-world applications of his team’s work with machine learning. As he put it, the company “has a vision that definitively makes the world better.”
SPONSORED CONTENT
The eScience Institute’s Data Science for Social Good program is now accepting applications for student fellows and project leads for the 2021 summer session. Fellows will work with academic researchers, data scientists and public stakeholder groups on data-intensive research projects that will leverage data science approaches to address societal challenges in areas such as public policy, environmental impacts and more. Student applications due 2/15 – learn more and apply here. DSSG is also soliciting project proposals from academic researchers, public agencies, nonprofit entities and industry who are looking for an opportunity to work closely with data science professionals and students on focused, collaborative projects to make better use of their data. Proposal submissions are due 2/22.
from Twitter, The Institute for Ethical AI & Machine Learning, and Gradient Flow
Ben Lorica and Assaf Araki offer some thoughts on terminology in the machine learning and data ecosystem, specifically on defining the trending industry concepts of DataOps and MLOps.
While JavaScript is not a replacement for the rich Python machine learning landscape (yet), there are several good reasons to have JavaScript machine learning skills. Here are four.