The National Security Agency hopes that its GenCyber camps inspire young people to pursue work in cybersecurity, a field in which three hundred thousand jobs remain unfilled in the U.S. alone.
The Department of Defense is poised to spend nearly $1 billion on artificial intelligence in the next year.
The Pentagon’s proposed budget for fiscal 2020 includes some $927 million for AI, as well as machine learning, according to Ainikki Riikonen, a research assistant for the Technology and National Security Program at the Center for a New American Security.
This includes $208 million earmarked for the Joint Artificial Intelligence Center, which was created in 2018. The Center’s initial efforts have delivered “a very mature, insightful high-level view” of issues surrounding AI, said Ian McCulloh, chief data scientist at Accenture Federal Services.
AI encompasses hardware, software, people and processes. With a bankroll of nearly $1 billion, Defense Department leaders and the intelligence community are now looking for the most effective ways to leverage this emerging capability.
When embedded systems get cheap and capable enough to run machine learning, I’m convinced we’re going to end up with trillions of devices doing useful jobs in the world around us. Those benefits are only half the story though, so I’ve been doing my best to think through what the unintended consequences of a change this big might be. Unlike Lehrer’s Wernher von Braun, I do think it is our job to care where the rockets come down. One challenge will be preserving privacy when almost everything is capable of recording us, and I discussed some ideas on tackling that in a previous post, but another will be the sheer amount of litter having that many devices in the environment will generate.
Right now my best guess is that there are 250 billion embedded systems active in the world, with 40 billion more being shipped every year. Most of them are truly embedded in larger systems like washing machines, toasters, or cars. If smart sensors using machine learning become ubiquitous, we could easily end up with ten times as many chips, and they’re more likely to be standalone packages that are “peeled and stuck” in our built and natural environments. They will run on either batteries or energy harvesting and have lifetimes measured in years, but inevitably they will reach their end of life at some point. What happens then?
Earlier this summer, Vaughn Cooper, an evolutionary biologist at the University of Pittsburgh, was busy promoting to scientists and educators a new secondary school curriculum for teaching evolution. He and his colleagues had published the program in Evolution: Education and Outreach in April, and they were eager to spread the word before the start of the upcoming school year. So when Cooper received an email from a colleague who couldn’t access his manuscript’s supplementary files because of broken hyperlinks, he was frustrated by the news.
The supplementary documents contained important information, such as the experimental protocols for students that his team had tested. This was not the first time that he’d come across issues with these types of files. “I’ve had multiple instances from multiple publishers where the supplementary material has gone missing,” he says, adding that this has occurred with both his papers and others’.
Cooper went to Twitter to vent his frustration. In response, other scientists noted that they, too, had experienced similar problems. “I am afraid this is not uncommon,” tweeted Peter Murray-Rust, a chemist at the University of Cambridge. “Many (not all) journals generally regard supplementary data as a pain in the neck.”
“We provide the equivalent of a data team for the price of an analyst,” explains Narrator co-founder and director of engineering Star. “Within the first month, our clients get an infinitely scalable data system.”
Led by chief executive officer Elsamadisi, a former senior data engineer at WeWork, the Narrator founding team is made up entirely of alums of the co-working giant. The building blocks of Narrator’s subscription-based data modeling tool were developed during Elsamadisi’s WeWork tenure, where he was tasked with making sense of the company’s disorganized trove of data.
There are powerful forces changing the quantitative communities around the world. Thanks to a combination of heavy competition in asset management, a desperate need to stand out in an overcrowded field, and readily available datasets and coding libraries, the new class on the block feels familiar, but very different at the same time.
We start with the ‘AI’ start-ups taking the world by storm. The recipe is becoming standard: we take a team of nimble, fresh-faced technologists, add a splash of marketing polish and one snappy, fresh name (usually unrelated to the business), and bake well with plentiful cheap VC capital. We hope that a unicorn rises. The pitches ring similar: here is the market (it’s big), here is what we do to disrupt it (it’s amazing), this is how we will change the world (boom!).
But what was meant to be the beginning of the end for legacy businesses (‘a fintech threat’) morphed into a ‘fintech alliance’ as these companies pivoted to enablement rather than competition, and ended up selling their technologies to incumbents. It turns out that getting paying customers was harder than building technology. For asset management, this meant intelligent automation and alternative data groups rather than better investment decisions. For example, robo-advisors didn’t necessarily promise to be more insightful investors, but to make access to capital markets quicker and cheaper for investors.
This is a marked change from the previous wave of quantitative sciences that began in the 1980s, and it may signal things to come.
The University of Toronto Mississauga is forging a robotics research and teaching cluster that – as part of U of T’s new university-wide Robotics Institute initiative – will help “take robotics to the next level.”
That’s how Jessica Burgner-Kahrs describes what’s in store for robotics innovation at U of T Mississauga in the years ahead.
“With the great resources and the support we have, great research can flow from here,” says Burgner-Kahrs, an associate professor at U of T Mississauga’s department of mathematical and computational sciences and an expert in continuum robotics who has emerged as the de facto leader of the U of T Mississauga robotics cluster.
Mathematicians, computer engineers and scientists in related fields should take a Hippocratic oath to protect the public from powerful new technologies under development in laboratories and tech firms, a leading researcher has said.
The ethical pledge would commit scientists to think deeply about the possible applications of their work and compel them to pursue only those that, at the least, do no harm to society.
Hannah Fry, an associate professor in the mathematics of cities at University College London, said an equivalent of the doctor’s oath was crucial given that mathematicians and computer engineers were building the tech that would shape society’s future.
Another technology used in education is facial recognition, which is widely deployed in China: not only on the streets and to ration the toilet paper people receive in public toilets, but increasingly also in schools and classrooms. First of all, students need to scan their faces to access their campus. They no longer need wallets or ID passes, as they can identify themselves or pay in the canteen with their face.
Once they are in the classroom, a camera tracks whether students are paying attention. Recently, a Chinese high school in Hangzhou started experimenting with facial-recognition technology that scans students every 30 seconds. The information is shared with the teacher, parents, and school leaders so that everyone instantly knows who is paying attention and who is not. However, whether the students like it remains to be seen.
As businesses use an increasing variety of marketing software solutions, the goal of collecting all of that data is to improve the customer experience. Simon Data announced a $30 million Series C round today to help.
The round was led by Polaris Partners. Previous investors .406 Ventures and F-Prime Capital also participated. Today’s investment brings the total raised to $59 million, according to the company.
Jason Davis, co-founder and CEO, says his company is trying to pull together a lot of complex data from a variety of sources, while driving actions to improve customer experience. “It’s about taking the data, and then building complex triggers that target the right customer at the right time,” Davis told TechCrunch. He added, “This can be in the context of any sort of customer transaction, or any sort of interaction with the business.”
Princeton University, Princeton Institute for Computational Science and Engineering
For the third consecutive summer, high energy physics graduate students, postdocs and instructors from across the United States, as well as from India, Italy and Switzerland, gathered at Princeton University to attend the school on Tools, Techniques and Methods for Computational and Data Science for High Energy Physics (CoDaS-HEP), held this year July 22-26.
If you want to know what climate change will look like, you need to know what Earth’s climate looked like in the past — what air temperatures were like, for example, and what ocean currents and sea levels were doing. You need to know what polar ice caps and glaciers were up to and, crucially, how hot the oceans were.
“Most of the Earth is water,” explains Peter Huybers, a climate scientist at Harvard University. “If you want to understand what global temperatures have been doing, you better understand, in detail, the rates that different parts of the ocean are warming.”
… Researchers at the University of Washington have used machine learning to develop a new system that can monitor factory and warehouse workers and tell them how risky their behaviors are in real time. The algorithm divides up a series of activities — such as lifting a box off a high shelf, carrying it to a table and setting it down — into individual actions and then calculates a risk score associated with each action.
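The segment-and-score idea can be sketched in a few lines. Everything below is invented for illustration (action names, feature fields, and weights); the UW system's actual model and scoring rubric are not described in this excerpt.

```typescript
// Hypothetical sketch of segmenting a task into actions and scoring each.
// Feature fields and weights are invented for illustration only.

interface ActionSegment {
  name: string;         // e.g. "lift", "carry", "set down"
  backAngleDeg: number; // trunk flexion during the action
  loadKg: number;       // estimated load being handled
}

// Toy risk function: more trunk flexion and heavier loads score higher.
function actionRisk(a: ActionSegment): number {
  const postureTerm = a.backAngleDeg / 30; // ~1 point per 30 degrees of flexion
  const loadTerm = a.loadKg / 10;          // ~1 point per 10 kg
  return postureTerm + loadTerm;
}

// Score every action in a task and flag the riskiest one in real time.
function riskiestAction(actions: ActionSegment[]): { name: string; risk: number } {
  let worst = { name: "", risk: -Infinity };
  for (const a of actions) {
    const risk = actionRisk(a);
    if (risk > worst.risk) worst = { name: a.name, risk };
  }
  return worst;
}
```

With a lift-carry-set-down sequence, `riskiestAction` would single out the deep-flexion lift as the step to warn the worker about, which is the kind of per-action feedback the article describes.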
UC Berkeley neuroscientists have created interactive maps that can predict where different categories of words activate the brain. Their latest map is focused on what happens in the brain when you read stories.
Herndon, VA October 27, starting at 7:30 a.m. “This will be a great opportunity to network among various specializations within the GEOINT Community and to learn about the important initiatives being conducted by fellow Working Groups. Learn more about USGIF Working Groups and gain fresh insight on your respective projects.” [$$]
Minneapolis, MN November 13, precedes National Communication Association conference. “Those interested in participating in the “speed poster” session should submit a one-page abstract describing (a) their research, (b) its relevance to the preconference theme, and (c) what they hope to contribute to and/or gain from the dialogue among scholars.” Deadline for submissions is September 1.
“Scientists who are interested in conducting experiments at the Advanced Imaging Center will be required to submit a brief application. Prior to proposal submission, applicants are strongly encouraged to contact the AIC for technical consultation.” Deadline for submissions is November 1.
“The xView2 Challenge focuses on automating the process of assessing building damage after a natural disaster, an analytical bottleneck in the post-disaster workflow. To enable the Challenge and stimulate applied research in the computer vision community, we are releasing one of the largest and highest-quality publicly available datasets of high-resolution satellite imagery annotated with building locations and damage scores before and after natural disasters.” Deadline for submissions is November 22.
As developers, we are often faced with decisions that will affect the entire architecture of our applications. One of the core decisions web developers must make is where to implement logic and rendering in their application. This can be difficult, since there are a number of different ways to build a website.
Our understanding of this space is informed by our work in Chrome talking to large sites over the past few years. Broadly speaking, we would encourage developers to consider server rendering or static rendering over a full rehydration approach.
In order to better understand the architectures we’re choosing from when we make this decision, we need to have a solid understanding of each approach and consistent terminology to use when speaking about them. The differences between these approaches help illustrate the trade-offs of rendering on the web through the lens of performance.
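The approaches can be caricatured with the same page produced three ways. This is a toy sketch with invented names, not Chrome's or any framework's API: server rendering generates full HTML per request, static rendering produces the same HTML once at build time, and a full-rehydration approach ships an empty shell that client-side JavaScript fills in.

```typescript
// Toy illustration of three rendering approaches for the same page.
// All function names are invented for this sketch; no framework API is implied.

const items = ["alpha", "beta", "gamma"];

// Server rendering: HTML is generated on each request, so the first
// response the browser receives already contains the content.
function serverRender(data: string[]): string {
  const list = data.map((d) => `<li>${d}</li>`).join("");
  return `<html><body><ul>${list}</ul></body></html>`;
}

// Static rendering: the same HTML, but produced once at build time
// and served as a plain file thereafter.
const staticPage: string = serverRender(items); // built ahead of time

// Client-side rendering with full rehydration: the server ships a
// shell plus a script bundle, and content appears only after JS runs.
function clientShell(): string {
  return `<html><body><div id="root"></div><script src="app.js"></script></body></html>`;
}
```

The shapes alone hint at the performance trade-off the article discusses: with server or static rendering the meaningful markup is in the initial response, while the rehydration shell defers all content to script download and execution.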
“Sadly, we can’t have face-to-face conversations with everyone who is interested in AI. So, to help us bridge that gap, we’re now launching DeepMind: The Podcast, a new series that we hope will answer these questions and more, while also giving listeners an inside look at how AI research is done at an organisation like DeepMind.”