Neuberger Berman has announced the formation of a research partnership with the University of Waterloo to study and develop data-driven techniques for investment management.
The research partnership brings together Neuberger Berman’s Toronto-based quantitative investment professionals from Neuberger Berman Breton Hill with researchers at Waterloo led by Professors George Labahn, Yuying Li, and Peter Forsyth from the David R. Cheriton School of Computer Science in the Faculty of Mathematics.
The Chronicle of Higher Education, Goldie Blumenstyk
Now, with the help of a company called Degree Analytics, a few colleges are beginning to use location data collected from students’ cellphones and laptops as they move around campus. Some colleges are using it to improve the kind of advice they might send to students, like a text-message reminder to go to class if they’ve been absent.
Others see it as a tool for making decisions on how to use their facilities. St. Edward’s University, in Austin, Tex., used the data to better understand how students were using its computer-equipped spaces. It found that a renovated lounge, with relatively few computers but with Wi-Fi access and several comfy couches, was one of the most popular such sites on campus. Now the university knows it may not need to buy as many computers as it once thought.
A National Bureau of Economic Research study released this month shows US filings related to machine learning, the technology driving the current AI boom, increasing rapidly. “We’ve seen a huge explosion of patenting activity in AI and machine learning, and I see this exponential growth continuing,” says Michael Webb, a Stanford researcher and coauthor of the study.
In 2010, there were 145 US patent filings that mentioned machine learning, the study says. In 2016, there were 594—a figure that’s incomplete, since the US Patent and Trademark Office only makes filings public 18 months after they have been registered. (Webb and his colleagues gathered their data in February.) Patent filings mentioning neural networks, a machine-learning technique, climbed to 485 in 2016, from 94 in 2010.
The University at Buffalo Center for Computational Research (CCR) is expanding its supercomputing capability, thanks to two grants totaling $2 million.
The center, which conducts high-performance computing on the Buffalo Niagara Medical Campus, was awarded a $1 million grant from the National Science Foundation (NSF) and a $1 million Regional Economic Development Council grant from Empire State Development.
The center will use the awards to purchase advanced computing equipment that will more than triple its computing power, enabling it to better support new and existing businesses in advanced manufacturing, the life sciences and other industries, as well as UB’s research and educational programs.
University of North Carolina, Eshelman School of Pharmacy
An artificial-intelligence approach created at the University of North Carolina at Chapel Hill can teach itself to design new drug molecules from scratch.
ReLeaSE is an algorithm and computer program developed at the UNC Eshelman School of Pharmacy. It comprises two neural networks: a teacher and a student. The teacher knows the syntax and linguistic rules behind the vocabulary of chemical structures for about 1.7 million known biologically active molecules, said K. H. Lee Distinguished Professor Alexander Tropsha, Ph.D., one of the creators of the new AI system.
“After learning the molecular alphabet and the rules of the language, the student starts creating new ‘words’, or molecules,” Tropsha said. “If the new word-molecule is realistic and has the desired meaning, the teacher approves. If not, the teacher disapproves, forcing the student to avoid bad words and create good ones.”
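The teacher/student feedback loop Tropsha describes can be illustrated with a toy sketch. Everything below is an assumption made for illustration: the "alphabet", the approval rule, and the multiplicative weight update stand in for ReLeaSE's actual neural networks, which generate and score real chemical structures.

```python
import random

random.seed(0)

# Toy "molecular alphabet": the teacher approves only words built
# entirely from the "realistic" characters. Purely illustrative --
# the real ReLeaSE teacher is a neural network trained on roughly
# 1.7 million known biologically active molecules.
GOOD = set("CNOS")
ALPHABET = list("CNOSXZ")

# The student samples characters in proportion to learned weights.
weights = {ch: 1.0 for ch in ALPHABET}

def sample_word(length=5):
    chars = list(weights)
    w = [weights[c] for c in chars]
    return "".join(random.choices(chars, weights=w, k=length))

def teacher_approves(word):
    return set(word) <= GOOD

# Reinforcement loop: approved "word-molecules" boost the weights of
# the characters they use; disapproved words leave weights unchanged,
# so the student gradually avoids "bad words".
for _ in range(500):
    word = sample_word()
    if teacher_approves(word):
        for ch in word:
            weights[ch] *= 1.05

# After training, the student strongly prefers the realistic characters.
```

The key property the sketch preserves is that disapproval steers generation away from unrealistic structures while approval reinforces them.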
On July 9th Professor Yoshua Bengio spoke on Computing Hardware for Emerging Intelligent Sensory Applications at the University of Toronto’s Bahen Center, as part of the Natural Sciences and Engineering Research Council of Canada (NSERC) Strategic Network talk series. Professor Bengio is a world-renowned AI researcher and head of the Montreal Institute for Learning Algorithms.
Bengio introduced three conceptual approaches to hardware-friendly deep learning:
Machine Learning is (or should be) a core component of any marketing program now, especially in digital marketing campaigns. The following insightful quote by Dan Olley (EVP of Product Development and CTO at Elsevier) sums up the urgency and criticality of the situation: “If CIOs invested in machine learning three years ago, they would have wasted their money. But if they wait another three years, they will never catch up.” This statement also applies to CMOs.
Data scientists must overcome several challenges before deep learning can find widespread adoption.
First, they need to find and process massive datasets for training. While this is not a problem for consumer applications, where large amounts of data are readily available, copious training data is seldom available in industrial applications.
The Washington Post, Elizabeth Dwoskin and Tony Romm
Facebook has shut down a sophisticated disinformation operation on its platform that engaged in divisive messaging ahead of the U.S. midterm elections, the company said Tuesday, an escalation of what a top executive described as an “arms race” to manipulate the public using its tools.
Facebook said it discovered 32 false pages and profiles that were created between March 2017 and this May, which lured 290,000 people with ads, events and regular posts on topics such as race, fascism and feminism — and sought to stir opposition to President Trump. The company informed law enforcement before it deleted the profiles Tuesday morning. It also notified lawmakers of the activity this week, and said it would notify the real Facebook users who were swept up in the operation.
One of the most popular pages had links to the Internet Research Agency (IRA), the Kremlin-backed organization of Russian operatives that flooded Facebook with disinformation around the 2016 election, Facebook said.
Bloomberg, Hyperdrive, Tom Randall and Mark Bergen
For the past year, Kyla Jackson has been one of the only teenagers in the world who gets a ride to high school from a robot.
When she’s ready to start her day, Kyla summons a self-driving car using the Waymo app on her phone. Five minutes later a Chrysler Pacifica run by the autonomous vehicle arm of Google’s parent company, Alphabet Inc., stops at her home in Chandler, Arizona. She slides open the door, fastens her seat belt, and hits a blue button above her head to set the car in motion. It’s a minivan covered in goofy-looking sensors, but it’s the coolest ride at her school.
The network theory of attitudes is a simple idea that has profound implications, if true. This is the theory that when two people hold the same attitude, they are linked together as a result, like two people holding strings to the same balloon. When different people are holding the same bunch of attitudes, they are grouped together and different groups are distinguishable by the different sets of attitudes they are holding. If you take two groups and get them to agree on something (like opposition to immigration), you can sew the groups together. When you agitate disagreement, you can create schisms that rip them apart. So when cleverly crafted social media campaigns get us to agree or disagree with posts on social media, they are subtly shifting our group allegiances.
This is a radical way to think about attitudes for two reasons. Firstly, we usually think of attitudes as things that are uniquely ours. We think they’re personal and that we can decide for ourselves what they’re going to be. But if attitudes are things that bind us together or tell our groups apart, we must accept that they ripple through networks. And when they do, “our” attitudes are expressions of our group membership in the same way that the movements of starlings in a murmuration are individual expressions of the motion of the flock.
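The "balloon string" picture above can be made concrete: treat each shared attitude as a link between the people who hold it, and groups become the connected components of the resulting network. The following is a minimal illustrative sketch (the names, attitudes, and grouping rule are all assumptions, not part of the theory's formal machinery):

```python
from itertools import combinations

def groups(attitudes):
    """attitudes: dict mapping person -> set of attitudes.
    Returns the sets of people connected by at least one shared
    attitude, computed with a small union-find."""
    people = list(attitudes)
    parent = {p: p for p in people}

    def find(p):
        while parent[p] != p:
            parent[p] = parent[parent[p]]  # path halving
            p = parent[p]
        return p

    for a, b in combinations(people, 2):
        if attitudes[a] & attitudes[b]:   # any shared attitude links them
            parent[find(a)] = find(b)

    out = {}
    for p in people:
        out.setdefault(find(p), set()).add(p)
    return {frozenset(g) for g in out.values()}

# Two groups with disjoint attitude sets...
views = {
    "Ann": {"pro-cycling"},
    "Bo":  {"pro-cycling"},
    "Cat": {"pro-parks"},
    "Dev": {"pro-parks"},
}
print(groups(views))  # two separate groups

# ...are "sewn together" once one member of each agrees on something new.
views["Bo"].add("anti-littering")
views["Cat"].add("anti-littering")
print(groups(views))  # one merged group
```

The second call shows the "sewing" effect: a single newly shared attitude is enough to merge two previously distinct groups.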
The act of paying for stuff is undergoing a great transformation. The networks of machines and code that let you move your imaginary money from your bank account to a merchant are changing—the gadget that takes your card, the computer that tracks a restaurant or store’s inventory, the cards themselves (or their dematerialized abstractions inside your phone). But all this newness must remain compatible with systems that were designed 50 years ago, at the dawn of the credit-card age. This combination of old and new systems, janky and hacky and functional, is the standard state of affairs for technology, despite the many myths about how the world changes in vast leaps and revolutions.
For millions who can’t hear, lip reading offers a window into conversations that would be lost without it. But the practice is hard—and the results are often inaccurate (as you can see in these Bad Lip Reading videos). Now, researchers are reporting a new artificial intelligence (AI) program that outperformed professional lip readers and the best AI to date, with just half the error rate of the previous best algorithm. If perfected and integrated into smart devices, the approach could put lip reading in the palm of everyone’s hands.
“It’s a fantastic piece of work,” says Helen Bear, a computer scientist at Queen Mary University of London who was not involved with the project.
Bloomberg, Graphics, Dave Merrill and Lauren Leatherby
There are many statistical measures that show how productive the U.S. is. Its economy is the largest in the world and grew at a rate of 4.1 percent last quarter, its fastest pace since 2014. The unemployment rate is near the lowest mark in a half century.
What can be harder to decipher is how Americans use their land to create wealth. The 48 contiguous states alone are a 1.9 billion-acre jigsaw puzzle of cities, farms, forests and pastures that Americans use to feed themselves, power their economy and extract value for business and pleasure.
New York University has announced Smart Sparrow as the winner of its first-ever Algorithm for Change competition, designed to surface new applications for artificial intelligence and machine learning in education. In partnership with ACTNext by ACT and Arizona State University, Smart Sparrow was awarded the top prize for Foundations of Science, a set of instructor-facing courseware tools that will leverage machine learning and artificial intelligence to support students struggling in college-level science courses.
NYU’s Center for Social Entrepreneurship solicited submissions for artificial intelligence (AI), machine learning (ML), and augmented reality (AR) solutions that help low-income, underrepresented-minority, and first-generation students get to and through college. Smart Sparrow was selected from a group of 70 submissions, and then joined nine finalists to pitch their ideas before a panel of judges at New York University.
“The Discovery Research Program matches hundreds of undergraduates every year with research partners from non-profits, start-ups, academic institutions, and more. We encourage partners from all backgrounds to apply.”
Washington, DC October 28-31. “The National Press Foundation is offering a four-day training program for journalists to explore all facets of artificial intelligence. During the training, journalists will learn the basics of how AI works, its impact on jobs and the economy, the ethics of using AI, how it is advancing worldwide, and what’s coming next in this rapidly changing science.” All expenses paid. Deadline to apply is September 9.
“In order to support academic work that addresses our challenges and opportunities while producing generalizable knowledge, Facebook is pleased to offer three $50K research grants.” Deadline for applications is October 30.
The manuscript includes a lot of material not in the blog. The last seven chapters are all new, covering combinatorial (semi-)bandits, non-stationary bandits, ranking, pure exploration, Bayesian methods, Thompson sampling, partial monitoring and an introduction to learning in Markov decision processes. Those chapters that are based on blog posts have been cleaned up and often we have added significant depth. There is a lot of literature that we have not covered. Some of these missing topics are discussed in extreme brevity in the introduction to Part VII. It really is amazing how large the bandit literature has become and we’re sorry not to have found space for everything.
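Of the topics the manuscript lists, Thompson sampling is perhaps the quickest to sketch. The following toy two-armed Bernoulli bandit is an illustration only (the arm means, horizon, and uniform priors are assumptions; it is not code from the book):

```python
import random

random.seed(1)

# Minimal Thompson sampling for a two-armed Bernoulli bandit.
true_means = [0.3, 0.7]   # unknown to the learner
successes = [0, 0]        # each arm's posterior is Beta(1 + s, 1 + f)
failures = [0, 0]
pulls = [0, 0]

for _ in range(2000):
    # Draw one mean estimate from each arm's posterior and play the best.
    samples = [random.betavariate(1 + successes[i], 1 + failures[i])
               for i in range(2)]
    arm = samples.index(max(samples))
    pulls[arm] += 1
    if random.random() < true_means[arm]:
        successes[arm] += 1
    else:
        failures[arm] += 1

# The learner's pulls concentrate on the better arm over time.
```

Because posterior sampling naturally balances exploration and exploitation, no explicit exploration schedule is needed.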
In this post, we will examine the case of temperature monitoring with sensors. For example, we have several sensors (1, 2, 3, 4, …) in our device. Their state is defined by the following parameters: date (dd/mm/year), sensor number, state (1 = stable, 0 = critical), and temperature (degrees Celsius). The sensor-state data arrives as a stream, and we want to analyze it. Streaming data can be loaded from different sources. Since we don't have a real streaming data source, we must simulate one. For this purpose we could use Kafka, Flume, or Kinesis, but the simplest streaming data simulator is Netcat.
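A small generator for records in the format just described might look like the sketch below. The 80-degree "critical" threshold, the temperature range, and the script name are assumptions made for illustration; the output can be piped into Netcat (e.g. `python simulate.py | nc -lk 9999`) to mimic a network stream.

```python
import random
from datetime import date

random.seed(42)

def sensor_record():
    """One simulated reading: date (dd/mm/yyyy), sensor number,
    state (1 = stable, 0 = critical), temperature in Celsius.
    The 80-degree critical threshold is assumed for this sketch."""
    sensor = random.randint(1, 4)
    temp = round(random.uniform(15.0, 95.0), 1)
    state = 1 if temp < 80.0 else 0
    return "{},{},{},{}".format(date.today().strftime("%d/%m/%Y"),
                                sensor, state, temp)

if __name__ == "__main__":
    # Emit a batch of records to stdout; pipe to `nc -lk 9999`
    # to serve them as a stream on port 9999.
    for _ in range(10):
        print(sensor_record())
```

Each line is a comma-separated record, so whatever streaming consumer reads the socket only needs a simple CSV split to recover the four fields.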
Microsoft, Cortana Intelligence and Machine Learning Blog, Wilson Lee
“In this blog post, we wish to introduce Conference Buddy, an end-to-end intelligent application that is infused with both conversational and pre-built AI capabilities. Conference Buddy showcases one simple example of how AI can help facilitate more effective Q&A experiences at conferences and presentations with large audiences.”