from Knowledge@Wharton, Peter Cappelli and Prasanna Tambe
In their paper, “Artificial Intelligence in Human Resources Management: Challenges and a Path Forward,” the professors show how limited data, the complexity of HR tasks, fairness and accountability pose problems for digital HR. The study, which was co-authored by Valery Yakubovich, professor at ESSEC Business School and senior fellow at the Wharton Center for Human Resources, also looks at how to remedy those problems. Cappelli and Tambe spoke about their research with Knowledge@Wharton. [audio, 27:56]
Around the world, countries and corporations are rethinking their relationship with encryption. In the wake of terrorist attacks, legislation in India and Australia has sought to give law enforcement access to encrypted communications, in moves that could threaten the security of encryption around the world. In the United States, Apple has staked its reputation on protecting encrypted communications even when they belong to terrorists — while Facebook pledged this year to shift the company to private messaging.
The moves have exposed obvious tensions between free speech and safety. In an effort to move the discussion forward, the Stanford Internet Observatory today held a conference in which tech platforms, government agencies, nongovernmental organizations, civil rights activists, and academics met to hash it out. I was among a handful of journalists who attended the event, and I came away mostly encouraged that all sides are determined to find a workable balance — even though it seemed clear that each group would strike that balance somewhat differently.
The increasing value of developers has meant that, like traditional SaaS buyers before them, they have come to better intuit the value of their time and increasingly prefer businesses that can help alleviate the hassles of procurement, integration, management, and operations. Developers' needs around those hassles are specialized.
They are looking to deeply integrate products into their own applications, and to do so they need access to an Application Programming Interface, or API. Best practices for API onboarding include technical documentation, examples, and sandbox environments for testing.
API products also tend to offer metered billing upfront. For these and other reasons, APIs are a distinct subset of SaaS.
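Metered billing means charging per unit of consumption (API calls, compute, storage) rather than a flat per-seat price. A minimal sketch of the idea, with entirely hypothetical tier names and rates not drawn from the article:

```python
def metered_charge(units_used: int, included_units: int, rate_per_unit: float) -> float:
    """Charge only for usage beyond an included free tier (hypothetical pricing)."""
    billable = max(0, units_used - included_units)
    return billable * rate_per_unit

# Example: 12,000 API calls in a month, first 10,000 included, $0.001 per extra call
print(metered_charge(12_000, 10_000, 0.001))  # 2.0
```

The design point is that the customer's bill tracks actual usage, which lowers the barrier to trying an API before committing to a contract.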
Former Stanford math professor and Google researcher Jack Poulson is leading a group of activists, Google employees and academics in opposing former Alphabet chairman and Google CEO Eric Schmidt’s invitation as one of several keynote speakers at an artificial intelligence ethics conference hosted by the Stanford Institute for Human-Centered Artificial Intelligence (HAI).
Citing Schmidt’s questionable “ethical conduct” as grounds for his disinvitation, Poulson has also called into question HAI leadership’s close relationships with large tech companies. But HAI has upheld Schmidt’s invitation and pushed back on Poulson’s critiques about the prevalence of powerful Silicon Valley figures at the Institute.
Ever since Russia was accused of hacking and meddling in the 2016 U.S. presidential election, online disinformation campaigns have become a staple of democratic elections around the world.
A team of Canadian academics and data scientists, however, is now working to prevent Canada from joining the list of countries that have fallen prey to these deliberate attempts at democratic distortion.
“We want to bring more transparency to how information is spread and consumed during the election, and the impact it’s having on voter behaviour,” explains McGill University professor Taylor Owen.
A planned new data science bachelor’s degree program at the University of Arkansas, Fayetteville has the support of Gov. Asa Hutchinson and business leaders.
The University of Arkansas board of trustees approved the degree program Friday for a start date of fall 2020, but still pending is a review by the state Division of Higher Education.
The Gulf of Mexico’s tiny, overworked fleet of research vessels is finally getting a flagship.
The National Science Foundation announced last week that it would pay for a $106 million boat to help scientists explore some of the critical issues affecting the Gulf, including climate change, hurricanes, fisheries’ health and oil pollution. At 199 feet long, it would be the largest vessel devoted to scientific research in the Gulf region.
And though we tend to see machines and algorithms as “race neutral,” Ruha Benjamin, a professor of African-American Studies at Princeton University, says they are programmed by humans and can end up reinforcing bias rather than removing it from policing and criminal justice. At the same time, Sharad Goel, a professor of Management Science and Engineering at Stanford University, is developing a risk assessment tool that accounts for different sources of bias. He thinks there is a way to use AI as a tool for more equal outcomes in criminal justice.
These guests join Ira to talk about how AI is guiding the decisions of police departments and courtrooms across the country—and whether we should be concerned. [audio, 34:45]
For the last year, the flagship magazine and website of the Institute of Electrical and Electronics Engineers — IEEE Spectrum — has been cataloging humanoids, drones, exoskeletons, quadrupeds, and other kinds of robots. It’s a fun collection, and like in any other catalog, each robot profile includes photos, videos, curious facts and technical specifications. Based on users’ votes, IEEE Spectrum also compiles rankings of today’s Top Rated, Most Wanted, and Creepiest Robots.
There’s a much-read copy of the Mueller Report on Sinan Aral’s kitchen table, and the part that’s especially dog-eared is the part about the Russian government’s “sweeping and systematic” interference in the 2016 U.S. presidential election. Aral is a professor of marketing and analytics at MIT, and on the principle that “you cannot manage what you do not measure,” he and colleague Dean Eckles have presented, in Science magazine, a kind of “send me in, coach!” plan for social scientists to help protect elections from being gamed and fiddled with.
But while Facebook says it will demand more openness from political advertisers, and the nation’s election officials are on notice that the attacks on their systems will only get worse, it will take concerted political and public will, muscle, and cooperation for social scientists to get what they need to figure out how bad the meddling in democracy is, and therefore how democracy can try to counter it.
As synthetic biology looks more like computer technology, the risks of the latter become the risks of the former. Code is code, but because we’re dealing with molecules — and sometimes actual forms of life — the risks can be much greater.
Imagine a biological engineer trying to increase the expression of a gene that maintains normal gene function in blood cells. Even though it’s a relatively simple operation by today’s standards, it’ll almost certainly take multiple tries to get it right. Were this computer code, the only damage those failed tries would do is to crash the computer they’re running on. With a biological system, the code could instead increase the likelihood of multiple types of leukemias and wipe out cells important to the patient’s immune system.
How are you enjoying the International Year of the Periodic Tables so far? Yes, tables – we should probably have been using the plural all along. Since Dmitri Mendeleev (and others) first sketched out the periodic relationships between the elements in the 1860s, it has been estimated that around a thousand different tables have appeared in print – and that’s before considering all those on the internet. Even the T-shirts handed out at the opening ceremony in January (I grabbed one, naturally) offered a new version, courtesy of the European Chemical Society, with the elements colour-coded and given different-sized boxes according to their abundance and availability.
Mostly these tables embody careful deliberation about what to put where, which information to prioritise, which message to convey. But two recent papers have shown that it is now possible to rediscover the table empirically, from the way it is implicitly embedded within the milieu of chemistry.
Both methods use machine learning (ML): the standard form of most artificial-intelligence algorithms at present, in which relationships and correlations between variables are deduced by combing through data. These schemes can often identify connections invisible to humans, because we can’t generally process that much data and because the correlations may exist in high-dimensional spaces that we cannot visualise.
For generations, it was a basic tenet of donating sperm: Clinics could forever protect their clients’ identities.
But, increasingly, donor anonymity is dead.
The rise of consumer genetic tests — which allow people to connect with relatives they never knew they had, including some who never intended to be found in the first place — is forcing sperm donation clinics to confront the fact that it is now virtually impossible to guarantee anonymity to their clients. Instead, sites like 23andMe and Ancestry.com are giving customers the genetic clues they need to identify biological parents on their own.
San Francisco, CA November 20. “EGG SF puts the spotlight on real-world use cases, hands-on experiences, and hot topics like machine learning interpretability, bias, and fairness from the humans at top Bay Area organizations who are making AI happen.” [$$]
Copenhagen, Denmark September 23, starting at 3 p.m. “The early mentor to Mark Zuckerberg & Facebook investor, Roger McNamee, will highlight the serious damage Facebook has inflicted to society across virtual and physical space and what can be done to try and stop it.” [tickets required]
San Francisco, CA October 15, starting at 6 p.m. “We’re excited to kick off the W&B Deep Learning Salon! The idea behind the DL salon is simple – we’re building some wicked cool projects with some wicked cool people.” [rsvp required]
Vancouver, BC, Canada December 14. “In this workshop we aim to bring together researchers from machine learning, NLP, and neuroscience to explore and discuss how computational models should effectively capture the multi-timescale, context-dependent effects that seem essential for processes such as language understanding.” Deadline for paper submissions is September 18.