Today, Congresswoman Jacky Rosen (NV-03) released the following statement after introducing H.R. 3316, the “Code Like A Girl Act,” a bipartisan bill to encourage young girls under the age of 10 to explore careers in computer science.
“When I started my career as a computer programmer, I was one of very few women in a male-dominated industry,” said Congresswoman Rosen. “Despite the progress we’ve made, fewer than 1 in 5 computer science graduates are women. This disparity is depriving our country of talented minds that could be working on our most challenging problems. Given the ever increasing importance of computer science in today’s economy, it’s critical we find ways to break down barriers and level the playing field for women everywhere.
China has outlined plans to become a world leader in artificial intelligence by 2025, laying down a challenge to U.S. dominance in the sector amid heightened international tensions over military applications of the technology.
China released a national AI development plan late on Thursday, aiming to grow the country’s core AI industries to over 150 billion yuan ($22.15 billion) by 2020 and 400 billion yuan ($59.07 billion) by 2025, the State Council said.
Big companies and cutting-edge start-ups often dominate the driverless car headlines. But the rapid acceleration of the technology would not be possible without the passionate work of academics around the world. We bring you seven universities that stand out from the rest.
The Open Philanthropy Project awarded a grant of $2.4 million over four years to the Montreal Institute for Learning Algorithms (MILA) to support technical research on potential risks from advanced artificial intelligence (AI).
Within Facebook’s cavernous Building 20, about halfway between the lobby (panoramic views of the Ravenswood Slough) and the kitchen (hot breakfast, smoothies, gourmet coffee), in a small conference room called Lollapalooza, Joaquin Candela is trying to explain artificial intelligence to a layperson.
Engineers have created a new deep-learning software capable of assessing complex audience reactions to movies using the viewer’s facial expressions. Developed by Disney Research in collaboration with Yisong Yue of Caltech and colleagues at Simon Fraser University, the software relies on a new algorithm known as factorized variational autoencoders (FVAEs).
After decades in the wilderness, AI has swaggered back onto center stage. Cheap computer power and massive datasets have given researchers alchemical powers to turn algorithms into gold, and the deep pockets (and marketing prowess) of Silicon Valley’s tech giants haven’t hurt either.
But despite warnings from some that the creation of super-intelligent AI is just around the corner, those working in the computational coal mines are more realistic. They point out that contemporary AI programs are extremely narrow in their abilities, that they’re easily tricked, and that they simply don’t possess those hard-to-define but easy-to-spot skills we usually sum up as “common sense.” They are, in short, not that intelligent.
The question is: how do we get to the next level? For Demis Hassabis, founder of Google’s AI powerhouse DeepMind, the answer lies within us. Literally.
The Simons Foundation has assembled an international group of theoretical physicists to tackle one of the biggest unsolved mysteries in science: What exactly went down at the dawn of the universe around 13.8 billion years ago.
Called the Origins of the Universe, the initiative will focus in part on yielding theoretical predictions that will inform future studies of the early cosmos and provide a deeper understanding of the fundamental physics governing the universe.
from National Observer, The Canadian Press, Michelle McQuigge
A team of Canadian researchers and robotics experts say they’ve developed cost-effective technology that would allow power wheelchairs to drive themselves.
Toronto-based Cyberworks Robotics and the University of Toronto have applied the same principles at work in self-driving cars, saying that similar sensors mounted on motorized wheelchairs can allow the mobility aids to dodge obstacles and travel routes without assistance from the user.
from Red Hat, Open Source Stories, Brent Simoneaux and Casey Stegman
Shortly after the dinner, OpenAI was born, with Brockman and Sutskever at the helm.
Brockman would focus on building the team and getting the culture right. Sutskever would focus on their research agenda. In a short period, they would raise more than $1 billion in funding.
Over the past 20 years, technologies based on university research have launched entire new industries, cured fatal diseases and even put new foods on your grocery store shelves. Since 1996, these technologies have contributed an estimated $1.3 trillion and 4.2 million jobs to the American economy. In 2015, Florida’s state universities spun out 48 start-ups and achieved a multitude of scientific breakthroughs in health, engineering, agriculture and basic sciences.
The partnership between America’s research universities, industry and the federal government is the envy of the world. But a proposal by the federal Office of Management and Budget to severely cut the reimbursement government agencies make to universities for shared research costs threatens to destroy it.
As artificial intelligence gains exposure in media and public discourse, so too does the demand for spaces focused on studying its systems and their ramifications. Last weekend, one such space was brought to life by AI Now, a research initiative (and soon-to-be New York-based research center!) co-founded by Kate Crawford and Meredith Whittaker and dedicated to addressing the social implications of machine learning and artificial intelligence. This year, the symposium was hosted at the equally forward-thinking MIT Media Lab. Attendance alone wasn’t sufficient; each guest came with the instruction to think about the proposed prompt: “What issue does this community most need to address within the next 12 months?”
The digital-advertising industry is looking to stamp out bogus ad inventory, like websites that claim to be premium brands but are actually sites the average person hardly ever visits.
Google, with help from some media giants, is taking the lead. The company is pushing an industry initiative called ads.txt, aimed at wiping out a form of fraud the industry dubs “spoofing.”
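For context on how the mechanism works: under the IAB’s ads.txt specification, a publisher posts a plain-text file at the root of its domain listing which ad systems are authorized to sell its inventory, so buyers can reject spoofed inventory claiming to be that publisher. The sketch below is illustrative only; the publisher domain and account IDs are hypothetical placeholders, while the comma-separated field layout and the DIRECT/RESELLER keywords come from the spec.

```text
# https://example-publisher.com/ads.txt (illustrative entries only)
# Format: <ad system domain>, <publisher account ID>, <DIRECT|RESELLER>[, <cert authority ID>]
google.com, pub-1234567890123456, DIRECT, f08c47fec0942fa0
appnexus.com, 12345, RESELLER
```

A buyer’s systems can then cross-check the seller account on any bid request against this file before purchasing.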
Nancy Kanwisher, a cognitive neuroscientist, has spent her career pinning down how the human brain responds to visual inputs such as faces. As part of that work, Kanwisher asks volunteers—usually college students at the Massachusetts Institute of Technology (MIT) in Cambridge, where she works—to lie in an MRI machine that records their brain activity while they do a task, such as viewing a photo. Although such studies reveal information that can be relevant to diseases such as autism, they do not test treatments.
But a few weeks ago, Kanwisher and colleagues in related behavioral research fields—from cognitive psychology to vision science—were dismayed to learn that the U.S. National Institutes of Health (NIH) in Bethesda, Maryland, could soon deem their studies to be clinical trials. That designation would impose a raft of new requirements on studies that have already passed ethics review, such as following different standards for funding applications, and reporting results on clinicaltrials.gov, a public database.
While alpha-male, alpha-seeking stock pickers in actively managed funds struggle to justify their fees by outperforming a steadily growing market, low-fee, index-tracking passive funds have become all the rage, along with algorithmically driven robo-advisers (most of which assign users to a passive fund). As a result, $326 billion poured out of actively managed funds in 2016 — more than the outflow during the 2008 crisis — while $429 billion flowed into passively managed assets, according to a recent report from investment research firm Morningstar.
Physicists are capitalizing on a direct connection between the largest cosmic structures and the smallest known objects to use the universe as a “cosmological collider” and investigate new physics.
The 3-D map of galaxies throughout the cosmos and the leftover radiation from the Big Bang — called the cosmic microwave background (CMB) — are the largest structures in the universe that astrophysicists observe using telescopes. Subatomic elementary particles, on the other hand, are the smallest known objects in the universe that particle physicists study using particle colliders.
Aliens could be hiding on almost any of the Milky Way’s roughly 100 billion planets, but so far, we haven’t been able to find them (dubious claims to the contrary notwithstanding). Part of the problem is that astronomers don’t know exactly where to look or what to look for. To have a chance of locating alien life-forms — which is like searching for a needle that may not exist in an infinitely large haystack — they’ll have to narrow the search.
Astronomers hoping to find extraterrestrial life are looking largely for exoplanets (planets outside Earth’s solar system) in the so-called “Goldilocks zone” around each star: a distance range in which a planet is not too hot and not too cold, making it possible for liquid water to exist on the surface. But after studying our own world and many other planetary systems, scientists have come to believe that many factors other than distance are key to the development of life. These include the mix of gases in the atmosphere, the age of the planet and host star, whether the host star often puts out harmful radiation, and how fast the planet rotates — some planets rotate at a rate that leaves the same side always facing their star, so one hemisphere is stuck in perpetual night while the other is locked into scorching day. This makes it a complex problem that scientists can start to tackle with powerful computers, data and statistics. These tools — and new telescope technology — could make the discovery of life beyond Earth more likely.
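As a rough sketch of the distance criterion described above: because a star’s flux falls off with the square of distance, the habitable zone scales approximately with the square root of stellar luminosity. The ~0.95 and ~1.37 AU boundary values for a Sun-like star used below are commonly cited textbook estimates, not figures from this article, and real habitability models layer on all the other factors the passage lists.

```python
import math

def habitable_zone_au(luminosity_solar):
    """Rough inner/outer habitable-zone bounds in AU for a star of the
    given luminosity (in solar units). Scales the Sun's approximate
    zone (~0.95 to ~1.37 AU) by the square root of luminosity; real
    models add atmospheric, rotational, and stellar-activity corrections."""
    scale = math.sqrt(luminosity_solar)
    return 0.95 * scale, 1.37 * scale

inner, outer = habitable_zone_au(1.0)  # a Sun-like star
print(f"{inner:.2f}-{outer:.2f} AU")   # 0.95-1.37 AU
```

A star four times as luminous as the Sun would have its zone pushed out to roughly 1.9–2.74 AU by the same scaling.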
Claude Shannon wasn’t just a brilliant theoretical mind — he was a remarkably fertile, fun, practical, and inventive one as well. There are plenty of mathematicians and engineers who write great papers. There are fewer of them who, like Shannon, are also jugglers, unicyclists, gadgeteers, first-rate chess players, codebreakers, expert stock-pickers, and amateur poets.
He worked on the top-secret transatlantic phone line connecting FDR and Winston Churchill during World War II and co-built what was arguably the world’s first wearable computer. He learned to fly airplanes and played the jazz clarinet. He rigged up a false wall in his house that could rotate with the press of a button, and he once built a gadget whose only purpose when it was turned on was to open up, release a mechanical hand, and turn itself off. Oh, and he once had a photo spread in Vogue magazine.
Think of him as a cross between Albert Einstein and the Dos Equis guy.
London, England, September 21. This workshop will bring together researchers from three broad AI areas, namely natural language processing (NLP), machine learning (ML) and logic-based symbolic AI, to discuss and explore opportunities for cross-fertilisation centered around NLP. Deadline for submissions is September 10.
from Medium, Airbnb Engineering & Data Science, Robert Chang
“In this post, I will describe how these tools worked together to expedite the modeling process and hence lower the overall development costs for a specific use case of LTV modeling — predicting the value of homes on Airbnb.”
Last year we introduced Open Images, a collaborative release of ~9 million images annotated with labels spanning over 6000 object categories, designed to be a useful dataset for machine learning research. The initial release featured image-level labels automatically produced by a computer vision model similar to Google Cloud Vision API, for all 9M images in the training set, and a validation set of 167K images with 1.2M human-verified image-level labels.
Today, we introduce an update to Open Images that adds ~2M bounding boxes to the existing dataset, along with several million additional image-level labels.
The dataset contains 11 hand gestures from 29 subjects under 3 illumination conditions and is released under a Creative Commons Attribution 4.0 license.
“GPyTorch is a Gaussian Process library, implemented using PyTorch. It is designed for creating flexible and modular Gaussian Process models with ease, so that you don’t have to be an expert to use GPs.”
Tree-based learning algorithms are quite common in data science competitions. These algorithms empower predictive models with high accuracy, stability and ease of interpretation. Unlike linear models, they map non-linear relationships quite well. Common examples of tree-based models are: decision trees, random forest, and boosted trees.
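To make the splitting idea concrete, here is a minimal sketch, in pure Python rather than any real library, of a depth-1 decision tree (a “stump”): it picks the threshold that minimizes squared error and predicts each side’s mean. It shows why trees handle non-linear relationships that a single straight line cannot; production models (random forests, boosted trees) stack many deeper trees with far more machinery.

```python
def fit_stump(xs, ys):
    """Fit a depth-1 regression tree on 1-D data: choose the split
    threshold minimizing total squared error, predicting the mean
    of each side. Returns a prediction function."""
    best = None
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        if not left or not right:
            continue  # a split must leave data on both sides
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        err = (sum((y - lm) ** 2 for y in left)
               + sum((y - rm) ** 2 for y in right))
        if best is None or err < best[0]:
            best = (err, t, lm, rm)
    _, t, lm, rm = best
    return lambda x: lm if x <= t else rm

# A step function is non-linear; a single stump captures it exactly.
xs = [1, 2, 3, 10, 11, 12]
ys = [0, 0, 0, 5, 5, 5]
predict = fit_stump(xs, ys)
print(predict(2), predict(11))  # 0.0 5.0
```

A boosted ensemble repeats this fit on the residuals of the previous trees, which is how gradient-boosted models build up accuracy from many weak stumps.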
There is a long-standing debate among text and data miners: whether sifting through full research papers, rather than the much shorter and simpler research summaries known as abstracts, is worth the extra effort. Though it may seem obvious that full papers would give better results, some researchers say that a lot of the information they contain is redundant, and that abstracts contain all that’s needed. Given the challenges of obtaining and formatting full papers for mining, stick with abstracts, they say.
In an attempt to settle the debate, Søren Brunak, a bioinformatician at the Technical University of Denmark in Kongens Lyngby, and colleagues analyzed more than 15 million scientific articles published in English from 1823 to 2016. After creating two databases of those articles—one of full-text and one of abstracts—the researchers directly compared the results of mining either.
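As a toy illustration of that kind of comparison, and emphatically not the researchers’ actual pipeline, the snippet below counts word occurrences in an invented abstract versus an invented full text and reports the terms only the full text yields. Real biomedical mining uses curated vocabularies and far more careful normalization.

```python
import re
from collections import Counter

def term_counts(text):
    """Lowercased word counts: a toy stand-in for real text mining."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

# Invented example documents (not from the study).
abstract = "Deep learning improves protein structure prediction."
full_text = ("Deep learning improves protein structure prediction. "
             "We trained on many structures; attention layers "
             "capture residue contacts that simpler models miss.")

ab, ft = term_counts(abstract), term_counts(full_text)
novel = set(ft) - set(ab)  # terms only the full text surfaces
print(sorted(novel)[:5])   # ['attention', 'capture', 'contacts', 'layers', 'many']
```

Scaling a comparison like this to 15 million articles is exactly where the formatting and access challenges mentioned above come in.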
In this post, we discuss the challenges of preparing the Fragile Families data for modeling, as well as the rationales for the methods we chose to address them. Our code is open source, and we hope other Challenge participants find it a helpful starting point.