Systems today can add a new appointment to your calendar but not engage in a back-and-forth dialogue with you about how to juggle a high-priority meeting request. They are also unable to use contextual information from one skill to assist you in making decisions from another, such as checking the weather before scheduling an afternoon meeting on the patio of a nearby coffee shop.
The next generation of intelligent assistant technologies from Microsoft will be able to do this by leveraging breakthroughs in conversational artificial intelligence and machine learning pioneered by Semantic Machines.
The team unveiled its vision for the next leap in natural language interface technology today at Microsoft Build, the company's annual developer conference in Seattle, and announced plans to incorporate this technology into all of its conversational AI products and tools, including Cortana.
In January, Facebook invited university faculty to respond to a call for research proposals on AI System Hardware/Software Co-Design. Co-design implies simultaneous design and optimization of several aspects of the system, including hardware and software, to achieve a set target for a given system metric, such as throughput, latency, power, size, or any combination thereof. Deep learning has been particularly amenable to such co-design processes across various parts of the software and hardware stack, leading to a variety of novel algorithms, numerical optimizations, and AI hardware.
U.S. Senators Rob Portman (R-OH), Brian Schatz (D-HI), Cory Gardner (R-CO), and Kamala Harris (D-CA) reintroduced the Artificial Intelligence (AI) in Government Act, legislation that would improve the use of AI across the federal government by providing access to technical expertise and streamlining hiring within the agencies. Federal agencies are also directed to develop governance plans to promote government uses of AI that benefit the public while establishing best practices for identifying and mitigating bias and other negative unintended consequences.
Abani Patra, founding director of the Institute for Computing and Data Sciences at the University at Buffalo, has been appointed the inaugural director of the Data Intensive Studies Center (DISC) at Tufts.
Since 2014, Patra has guided and shaped the growth of the University at Buffalo institute into a multifaceted, high-performance computing, data, and visualization resource.
Wyatt Cole is one of six team members building the impairment-detecting headset.
He said current methods of assessing impairment, such as blood or urine tests, are controversial because they cannot show a person's level of impairment, since drugs and alcohol affect people differently.
“What our device seeks to do is to actually create an objective measurement through the use of brain waves,” he said.
The project is based on a previous study that showed a specific brain signal differed when people were intoxicated by cannabis.
MIT researchers have devised a method for assessing the robustness of machine-learning models known as neural networks across various tasks, by detecting when the models make mistakes they shouldn’t.
Convolutional neural networks (CNNs) are designed to process and classify images for computer vision and many other tasks. But slight modifications that are imperceptible to the human eye — say, a few darker pixels within an image — may cause a CNN to produce a drastically different classification. Such modifications are known as “adversarial examples.” Studying the effects of adversarial examples on neural networks can help researchers determine how their models could be vulnerable to unexpected inputs in the real world.
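As a toy illustration of this idea (a hypothetical linear classifier, not the MIT team's method or a real CNN), the sketch below shows how a small, targeted change to each input value can flip a model's decision even though the input barely changes:

```python
# A hypothetical linear "model": classify an input by the sign of a dot
# product with a fixed weight vector (a stand-in for a trained network).
w = [2.0, -1.0, 1.5]

def score(x):
    return sum(wi * xi for wi, xi in zip(w, x))

def classify(x):
    return "cat" if score(x) > 0 else "dog"

def sign(v):
    return 1.0 if v > 0 else -1.0

x = [0.3, 0.4, 0.1]                      # score(x) = 0.35 -> "cat"

# Adversarial step: for a linear model the gradient of the score with
# respect to the input is just w, so subtracting eps * sign(w) from each
# coordinate lowers the score as fast as possible for a per-value budget
# of eps (the analogue of "a few darker pixels").
eps = 0.2
x_adv = [xi - eps * sign(wi) for xi, wi in zip(x, w)]

print(classify(x))       # cat
print(classify(x_adv))   # dog: a tiny change flips the label
```

Real attacks on CNNs work the same way in spirit, but compute the gradient through the whole network and keep the perturbation small enough to be invisible to a human.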
Smart speakers, like Amazon Echo, Google Home and Apple HomePod, are spreading rapidly, and it is now common to hear people asking such assistants to provide weather forecasts or traffic updates, or to play audiobooks or music from streaming services. But because a smart speaker can act only on what it hears, it has little understanding of objects and people in its vicinity, or what those people might be up to. Having such awareness might improve its performance—and might also let users communicate with these digital servants by deed as well as word. Several groups of researchers are therefore working on ways to extend smart speakers’ sensory ranges.
One such effort is led by Chris Harrison and Gierad Laput of Carnegie Mellon University, in Pittsburgh, Pennsylvania. On May 6th, at a conference in Glasgow, Dr Harrison and Mr Laput unveiled their proposal, which they call SurfaceSight, to give smart speakers vision as well as hearing. Their chosen tool is lidar, a system that works, like radar, by bouncing a beam of electromagnetic waves off its surroundings and measuring how quickly those waves return. That information, run through appropriate software, builds up an image of what the beam is pointing at. If, as many radars do, a lidar then revolves, it can sweep the beam around to create a 360° picture of its surroundings.
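The geometry behind that sweep is simple: each lidar return is an angle and a distance, and converting those polar samples to Cartesian coordinates yields the 2-D floor plan the recognition software reasons over. A minimal sketch, using made-up readings rather than SurfaceSight's actual data format:

```python
import math

# Hypothetical (angle_in_degrees, distance_in_metres) samples from one
# full revolution of a tabletop lidar.
readings = [(0, 1.0), (90, 2.0), (180, 1.5), (270, 0.5)]

def to_xy(angle_deg, dist):
    """Convert one polar lidar return to a point on the 2-D floor plan."""
    theta = math.radians(angle_deg)
    return (dist * math.cos(theta), dist * math.sin(theta))

points = [to_xy(a, d) for a, d in readings]
for px, py in points:
    print("(%.2f, %.2f)" % (px, py))
```

A real device produces hundreds of such returns per revolution; clustering and classifying the resulting point cloud is where the interesting software work lies.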
Impact requires collaboration and partnerships with like-minded individuals and organizations, each bringing a unique set of skills and assets. The challenges Vulcan’s Impact Team takes on are wicked problems. These are systemic problems requiring an interdisciplinary approach to effect change. They demand collaboration, partnership and perseverance, not silver bullets, to create lasting change.
The range of skills of our staff and the multifaceted approach we take to our mission areas, spanning policy, science, technology and more, are hallmarks of the Vulcan way of moving the needle on the grand challenges we seek to tackle. Likewise, the types of collaborations and partnerships we seek vary according to the problem being addressed, the outcomes sought, and the solutions being applied.
Wildlife conservation is one of the wicked problems Vulcan is working to move in the right direction.
No, the “ban” is only for SF police and government agency surveillance applications. You can still do face recognition in private companies.
The users posting photos to Ever, a mobile and desktop app similar to Flickr and Photobucket, had a choice. If they opted into facial recognition, the app’s software could analyze photo subjects’ faces, which meant it could group photos, let users search photos by the people in them, suggest tags, and make it easier to find friends and family using the app.
For users, this is tidy and convenient. For Ever, it’s lucrative: NBC News reported last week that Ever licenses its facial-recognition system, trained on user photos, to law-enforcement agencies and the U.S. military. As more people opt in to facial recognition, the system grows more advanced. Ever did not respond to requests for comment from The Atlantic, but privacy advocates are outraged.
Users are “effectively being conscripted to help build military and law enforcement weapons and surveillance systems,” says Jake Laperruque, the senior counsel at the Project on Government Oversight. Had users been explicitly informed about the military connection, he says, they might have chosen not to enable facial recognition.
China’s largest funder of basic science is piloting an artificial intelligence tool that selects researchers to review grant applications, in an attempt to make the process more efficient, faster and fairer. Some researchers say the approach by the National Natural Science Foundation of China is world-leading, but others are sceptical about whether AI can improve the process.
A hacked message in a streamed song makes Alexa send money to a foreign entity. A self-driving car crashes after a prankster strategically places stickers on a stop sign so the car misinterprets it as a speed limit sign. Fortunately these haven’t happened yet, but hacks like this, sometimes called adversarial attacks, could become commonplace—unless artificial intelligence (AI) finds a way to outsmart them. Now, researchers have found a new way to give AI a defensive edge, they reported here last week at the International Conference on Learning Representations.
The work could not only protect the public. It also helps reveal why AI, notoriously difficult to understand, falls victim to such attacks in the first place, says Zico Kolter, a computer scientist at Carnegie Mellon University, in Pittsburgh, Pennsylvania, who was not involved in the research. Because some AIs are too smart for their own good, spotting patterns in images that humans can’t, they are vulnerable to those patterns and need to be trained with that in mind, the research suggests.
Executives lured by the siren-song of AI need to understand both the possibilities and risks endemic in AI and data. Even at the dawn of humans interacting with AI through mediums like voice and chat, there are many documented failures of AI attempting to speak and understand human language. Here, we’ll highlight three recent, high-profile examples from Microsoft, Google, and Amazon, and show how AI leaders can learn from these mistakes to implement programs that safeguard their AI initiatives.
Like many other experts, I grow increasingly alarmed by the slow progress in the development, implementation, and acceptance of data standards for clinical trials. As the clock ticks, the capital costs and the human costs of inaction are mounting. No matter where my work takes me, I always return to the urgent need for data standards.
I know that “data standards” may not sound as enticing as “precision medicine.” Such standards aren’t the obvious fulcrum upon which the modernization of our health system pivots. But doctors, researchers, and other health care professionals in the U.S. and around the world struggle daily with non-standard, messy data, wasting their time, causing delays, and compromising patient safety.
San Jose, CA September 9-12. “Focuses on how your company can identify opportunities and develop strategies to implement AI systems now. It’s the best place to focus on the intersection of technical content and industry application.” [$$$$]
Brooklyn, NY June 8, at NYU Magnet (2 Metrotech Center). “CC Fest is an opportunity for students and teachers to engage in creative coding. Come spend a day making interactive and engaging digital art, animation, games.” [rsvp required]
Davis, CA August 22-23 at University of California-Davis. “The Fourth Annual Water Data Summit will feature leaders from California’s most innovative public agencies, companies and academic institutions who are using data to better manage water resources throughout the state.” Deadline for abstract submissions is May 31.
Ecologist Thomas Crowther knew that scientists had already collected a vast amount of field data on forests worldwide. But almost all of those data were sequestered in researchers’ notebooks or personal computers, making them unavailable to the wider scientific community. In 2012, Crowther, then a postdoctoral researcher at Yale University in New Haven, Connecticut, began to e-mail and cold-call researchers to request their data. He started to assemble an inventory, now hosted by the Global Forest Biodiversity Initiative, an international research collaboration, that contains data on more than 1 million locations. Data are stored in CSV files (plain-text files of comma-separated values) on servers at Crowther’s present laboratory at the Swiss Federal Institute of Technology in Zurich and on those of a collaborator at Purdue University in West Lafayette, Indiana; he hopes to outsource database storage to a third-party organization with expertise in archiving and access.
After years of courting and cajoling, Crowther has persuaded about half of the data owners to make their data public. The other half, he laments, say that they support open data in principle, but have specific reasons for keeping their data sets private. Mainly, he explains, they want to use their data to conduct and publish their own studies.
Crowther’s database challenges reflect the current state of science: partly open, partly closed, and with unclear and inconsistent policies and expectations on data sharing that are still in flux.
In this blog, we introduce the advanced HyperLogLog functionality of the open-source library spark-alchemy and explore how it addresses data aggregation challenges at scale. But first, let’s explore some of the challenges.
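For readers unfamiliar with HyperLogLog, the core idea is that hashed items are routed to a set of registers, each register remembers the longest run of leading zero bits it has seen, and a harmonic mean over the registers estimates the number of distinct items in bounded memory. A minimal pure-Python sketch of the standard estimator follows; it is not spark-alchemy's implementation, which operates on Spark DataFrames and serialized sketches:

```python
import hashlib

P = 10                # register-index bits
M = 1 << P            # 1024 registers, a few KB total
registers = [0] * M
ALPHA = 0.7213 / (1 + 1.079 / M)   # standard bias-correction constant

def add(item):
    """Observe one item: hash it, route it to a register, and record the
    position of the first 1-bit in the remaining hash bits."""
    h = int.from_bytes(hashlib.md5(item.encode()).digest()[:8], "big")
    idx = h >> (64 - P)                       # top P bits pick the register
    rest = h & ((1 << (64 - P)) - 1)          # remaining 54 bits
    rank = (64 - P) - rest.bit_length() + 1   # leading zeros + 1
    registers[idx] = max(registers[idx], rank)

def estimate():
    """Combine the registers with a bias-corrected harmonic mean."""
    return ALPHA * M * M / sum(2.0 ** -r for r in registers)

for i in range(100_000):          # 100,000 distinct items
    add("user-%d" % i)

print(round(estimate()))          # close to 100000, typically within a few %
```

Duplicates change nothing (a register only keeps its maximum), which is exactly what makes sketches like this mergeable across partitions of a distributed dataset.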