For years, the people developing artificial intelligence drew inspiration from what was known about the human brain, and the field has enjoyed a lot of success as a result. Now, AI is starting to return the favor.
Although not explicitly designed to do so, certain artificial intelligence systems seem to mimic our brains’ inner workings more closely than previously thought, suggesting that both AI and our minds have converged on the same approach to solving problems. If so, simply watching AI at work could help researchers unlock some of the deepest mysteries of the brain.
“There’s a real connection there,” said Daniel Yamins, assistant professor of psychology. Now, Yamins, who is also a faculty scholar of the Stanford Neurosciences Institute and a member of Stanford Bio-X, and his lab are building on that connection to produce better theories of the brain – how it perceives the world, how it shifts efficiently from one task to the next and perhaps, one day, how it thinks.
In July 2014, a team of four Swedish and Polish researchers began using an automated program to better understand what people posted on Facebook.
The program, known as a “scraper,” let the researchers log every comment and interaction from 160 public Facebook pages for nearly two years. By May 2016, they had amassed enough information to track how 368 million Facebook members behaved on the social network. It is one of the largest known sets of user data ever assembled from Facebook.
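The core of a scraper like the one described is a paging loop: fetch one batch of posts or comments, follow the API's "next" link, and repeat until no link remains. Here is a minimal, hypothetical sketch of that logic (not the researchers' actual code); the JSON shape follows the common `data`/`paging.next` convention, and the fetch function is injected so the loop itself stays network-free.

```python
# Hypothetical paging loop, illustrating how a scraper can accumulate
# every comment or interaction from a public page, batch by batch.

def collect_feed(fetch, url, max_pages=1000):
    """Accumulate items from a paginated JSON API by following 'next' links."""
    items = []
    pages = 0
    while url and pages < max_pages:
        payload = fetch(url)                          # parsed JSON for one page
        items.extend(payload.get("data", []))         # the page's items
        url = payload.get("paging", {}).get("next")   # None ends the loop
        pages += 1
    return items

# Stub fetcher standing in for real HTTP calls, for illustration only:
pages = {
    "p1": {"data": [{"comment": "a"}], "paging": {"next": "p2"}},
    "p2": {"data": [{"comment": "b"}], "paging": {}},
}
print(len(collect_feed(pages.get, "p1")))  # 2
```

Run continuously for two years over 160 pages, a loop this simple is enough to assemble a dataset at the scale the researchers describe, which is exactly why they found the ease of collection concerning.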
“We’re concerned about how easy it was to collect this,” said Fredrik Erlandsson, one of the researchers and a lecturer at the Blekinge Institute of Technology in Sweden. Last December, he and his colleagues published a research paper in the journal Entropy detailing how their methods of trawling social media sites could be replicated.
from EurekAlert! Science News, Hong Kong Polytechnic University
The Hong Kong Polytechnic University (PolyU) today establishes the University Research Facility in Big Data Analytics (UBDA), the first university-wide research facility in big data analytics among universities in Hong Kong. Equipped with PolyU's big data expertise and the most advanced computing infrastructure and tools available today, UBDA is expected to foster cross-disciplinary research collaboration within PolyU, establish strong partnerships with industry on big data analytics applications, and promote big data education in Hong Kong.
An approach to artificial intelligence that embraces uncertainty and ambiguity could paradoxically help make future virtual assistants less confused.
Gamalon, an AI startup based in Cambridge, Massachusetts, developed the new technique for teaching machines to handle language, and several businesses are now testing a chatbot platform that uses it.
The approach lets a computer hold a more meaningful and coherent conversation by providing a way to deal with the multiple meanings that an utterance might convey. If a person says or types something ambiguous, the system makes a judgment about what was most likely meant.
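One simple way to picture this kind of disambiguation is as choosing the most probable interpretation of an utterance given context. The sketch below is purely illustrative (it is not Gamalon's system, and the candidate meanings and probabilities are made up), but it shows the core move: enumerate possible meanings, score them, and commit to the likeliest one.

```python
# Illustrative disambiguation sketch: given candidate (meaning, probability)
# pairs for an utterance, return the most likely interpretation.

def disambiguate(utterance, candidates):
    """Pick the candidate meaning with the highest probability."""
    best_meaning, best_p = max(candidates, key=lambda pair: pair[1])
    return best_meaning, best_p

# "book a table" could mean reserving a restaurant table or buying furniture;
# in a dining chatbot, context makes the first reading far more probable.
candidates = [
    ("reserve_restaurant_table", 0.92),
    ("purchase_furniture", 0.08),
]
meaning, p = disambiguate("book a table", candidates)
print(meaning)  # reserve_restaurant_table
```

A real system would of course derive those probabilities from context rather than hard-code them, but the judgment call itself, committing to the most likely meaning instead of failing on ambiguity, is what lets the conversation stay coherent.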
from Harvard Business Review, Andrew Burt and Samuel Volchenboum
Imagine that the next time you see your doctor, she says you have a life-threatening disease. The catch? A computer has performed your diagnosis, which is too complex for humans to understand entirely. What your doctor can explain, however, is that the computer is almost always right.
If this sounds like science fiction, it’s not. It’s what health care might seem like to doctors, patients, and regulators around the world as new methods in machine learning offer more insights from ever-growing amounts of data.
Complex algorithms will soon help clinicians make incredibly accurate determinations about our health from large amounts of information, premised on largely unexplainable correlations in that data.
This future is alarming, no doubt, due to the power that doctors and patients will start handing off to machines. But it’s also a future that we must prepare for — and embrace — because of the impact these new methods will have and the lives we can potentially save.
Loyola University Maryland will offer a Bachelor of Science in data science for undergraduates starting in the fall of 2018.
The major is an interdisciplinary program that consists of 15 courses across various departments, including mathematics, statistics, business, and computer science. Students in the program will gain analytic knowledge that will prepare them for careers as business analysts, domain-specific managers, data mining analysts, and business intelligence specialists.
Company Data Science News
Oculus Research will now be called Facebook Reality Labs which…flat out cracks me up. Reality is invented at Facebook where it can only be viewed through special goggles also invented at Facebook? Wouldn’t that be…actually, that would just be an enhanced version of Facebook where peripheral distractions have to fight harder for users’ attention. I am sure that is not the desired interpretation, Yann LeCun. I should note that I am always here as a one-person focus group should you ever need such a thing.
Speaking of LeCun, he wasn’t thrilled with last week’s article in the NYTimes (covered in last week’s newsletter) about the potential academia-to-industry brain drain problem. The article was kicked off by Facebook’s announcement that they were opening AI labs in Pittsburgh and Seattle, which LeCun pointed out was to allow top researchers to keep their ties to academe. He notes that Facebook has a history of allowing their top AI talent to go 80/20 or 50/50 or 20/80 (or 90/10…) with academia. Sharing top talent is better than outright losing it.
Google Research shall henceforth be known as Google AI. Because when your business model is being the bigliest data, apparently all of your research emphasizes “implementing machine learning techniques in nearly everything.” See, I would have thought there might be space for research that doesn’t involve machine learning? Or that a smart Googler with a healthy sense of precision might have pointed out that AI is not synonymous with machine learning? But I do not work for the big G so all I can do is report that they are rebranding from “research” to “AI.”
Speaking of Google’s foray into AI, Microsoft is promoting hardware (chips) called FPGAs instead of Google’s silicon product designed specifically for AI, according to Wired. Two tech giants fighting to define the future is interesting, but not surprising. What surprised me is that Microsoft’s AI collaborator Nestle has a health division. The two companies will use computer vision techniques to determine the severity of acne. Um, wait what?
Michael Correll of Tableau went to the CHI conference and was terrified by seven of the talks, starting with the keynote by Christian Rudder of OkCupid. But as much as we all hate the idea that a vast matchmaking apparatus is handled by a company that seems rather insensitive to current cultural and social trends, the part of his review I liked the most came from alt.CHI. Artists Kieran Brown and Ben Swift held a data seance, powering a Ouija board with a neural network: “People held their hands over the Ouija board, the lights flickered, the works.” OMG yes.
Amazon has a fraud problem, as unveiled by BuzzFeed. People are getting paid to write fake reviews for products that they do *receive*, but may not have wanted or tested. They make single digit dollars per review, are reimbursed by the company for the cost of the product, and can potentially resell the items. Humans. Humans are the problem, not data science.
Securus Technologies is a company you’ve never heard of that is buying data from the four major mobile phone operators in the US and selling it to law enforcement who can then track anyone for any reason. True, law makers are fighting over that “for any reason” clause, but the point is that tracking individuals does not require mastery of cybertechnologies. It requires a badge from just about any local police department. [Definitely click through to see the picture of Sheriff Cory Hutcheson, posing with a beer, a slimy smirk, and a peacock’s tail-shaped display of US currency.]
Palantir is still reigning king of creepy surveillance contracts with police departments. The Intercept wrote up a new survey of Angelenos that found, “2 percent of residents who responded to the survey reported being stopped by police between 11 and 30 times a week or more, while 76 percent of respondents reported never being stopped at all.” Can you imagine being stopped 11-30 times a week? Given the amount of time that would entail and the number of police who shoot the wrong people, being stopped that often is not a minor inconvenience.
from Bloomberg Technology, Marie Mawad and Ania Nussbaum
NASA and Amazon.com Inc. are tapping experts in France to figure out how to coordinate drone traffic, bolstering the country’s role as a hub for evolving regulation of unmanned aircraft.
While Amazon hired a team in a Paris suburb, NASA headed closer to plane-maker Airbus SE’s home in Toulouse, calling on drone designer Delair-Tech to test prototypes for air traffic management software. It’s a key part of convincing regulators unmanned vehicles are safe to fly higher and further out of sight from their operators, such as while delivering goods.
My most recent paper submission (preprint available) is about improving the verifiability of computer-aided research, and contains many references to the related subject of reproducibility. A reviewer asked the same question about all these references: isn’t this the same as for experiments done with lab equipment? Is software worse? I think the answers are of general interest, so here they are.
Koray Kavukcuoglu is the Director of Research at DeepMind, where previously he was a research scientist and led the deep learning team. Before joining DeepMind, he was a research staff member at NEC Labs America in the machine learning department.
One morning in late January, Jake picked up the box on his desk, tore through the packing tape, unearthed the iPhone case inside, snapped a picture, and uploaded it to an Amazon review he’d been writing. The review included a sentence about the case’s sleek design and cool, clear volume buttons. He finished off the blurb with a glowing title (“The perfect case!!”) and rated the product a perfect five stars. Click. Submitted.
Jake never tried the case. He doesn’t even have an iPhone.
Jake then copied the link to his review and pasted it into an invite-only Slack channel for paid Amazon reviewers. A day later, he received a notification from PayPal, alerting him to a new credit in his account: a $10 refund for the phone case he’ll never use, along with $3 for his trouble — potentially more, if he can resell the iPhone case.
Jake is not his real name. He — along with the four other reviewers who spoke to BuzzFeed News for this story — wanted to remain anonymous for fear Amazon would ban their accounts.
Am I a data mine, or am I a data factory? Is data extracted from me, or is data produced by me? Both metaphors are ugly, but the distinction between them is crucial. The metaphor we choose informs our sense of the power wielded by so-called platform companies like Facebook, Google, and Amazon, and it shapes the way we, as individuals and as a society, respond to that power.
If I am a data mine, then I am essentially a chunk of real estate, and control over my data becomes a matter of ownership. Who owns me (as a site of valuable data), and what happens to the economic value of the data extracted from me? Should I be my own owner — the sole proprietor of my data mine and its wealth? Should I be nationalized, my little mine becoming part of some sort of public collective? Or should ownership rights be transferred to a set of corporations that can efficiently aggregate the raw material from my mine (and everyone else’s) and transform it into products and services that are useful to me? The questions raised here are questions of politics and economics.
St. Petersburg, FL June 3-7. “The workshop will focus on modern research methods — recent developments in methodology and statistics, and new techniques made possible by modern technology — that are more conducive to reproducibility. In particular, we will discuss how such methods can and cannot be applied in behavioral and cognitive science and how their use can be facilitated or encouraged.” [application required]
“Machine learning and other AI technologies are changing business, but how do we know what’s marketing hype and what’s real? New businesses offering commercial applications face questions about what works in marketing, product, and management of data.”
“We [Boston University School of Law, Technology & Policy Research Initiative] have designed a study to answer some of these questions by finding out what tools startup firms are using and what kind of applications they are developing, what benefits they are delivering to customers, and how they handle data, privacy, and customer training.”
Jennifer Jacquet, Assistant Professor of Environmental Studies, analyzes the language used by science journalists to characterize the state of the ocean
In our third and final installment examining how artificial intelligence (AI) can transform marketing automation and audience engagement, we’ll discuss the power that machine learning offers marketers in driving highly adaptive, personalized messaging experiences when there are hundreds or thousands of different options.
“We announce Google Duplex, a new technology for conducting natural conversations to carry out “real world” tasks over the phone. The technology is directed towards completing specific tasks, such as scheduling certain types of appointments. For such tasks, the system makes the conversational experience as natural as possible, allowing people to speak normally, like they would to another person, without having to adapt to a machine.”