from Austin Business Journal, David Allison and Colin Pope
Austin and Atlanta share the best odds of landing Amazon’s HQ2, says Irish betting site PaddyPower.
Back in October, PaddyPower put Atlanta’s odds of landing Amazon’s $5 billion second headquarters at 2-to-1, beating Austin (3-to-1) and Boston (6-to-1).
We have some sad news: Mapzen will cease operations at the end of January 2018. Our hosted APIs and all related support and services will turn off on February 1, 2018.
Looking to the immediate future in 2018, we see machine-to-machine communication affecting data centers in three ways:
Laying the foundation for 5G: Yes, it will happen in data centers, too. All the devices that must communicate with each other and humans will drive a massive amount of fiber, especially as we look to 5G’s arrival in the next 5 to 10 years.
Low latency: Machines can process information nearly as fast as they receive it. Humans can’t. In the data center especially, decision making is almost instantaneous, and it needs a strong network backbone.
Higher density and speed: Deploying copious amounts of fiber is a best-case solution. But it’s not always feasible. The most efficient scenario is to deploy high-density fiber from the get-go to allow fast machine-to-machine conversations. A modular high-speed platform that can support multiple equipment generations is the best option.
The Canadian government will soon hire an Ottawa-based company specializing in social media monitoring and artificial intelligence to forecast potential spikes in suicide risk.
A contract with Advanced Symbolics Inc., an AI and market research firm, is set to be finalized next month.
Working with the company to develop its strategy, the federal government will define “suicide-related behaviour” on social media and “use that classifier to conduct market research on the general population of Canada,” according to a document published to the Public Works website.
The image is one of the faux celebrity photos generated by software under development at Nvidia, the big-name computer chip maker that is investing heavily in research involving artificial intelligence.
At a lab in Finland, a small team of Nvidia researchers recently built a system that can analyze thousands of (real) celebrity snapshots, recognize common patterns, and create new images that look much the same — but are still a little different. The system can also generate realistic images of horses, buses, bicycles, plants and many other common objects.
Adrienne Fairhall, a computational neuroscientist at the University of Washington, reflects on how the field has evolved and on the biggest issues yet to be solved.
Consultants who partner with data scientists can use the power of big data to help sell their expert advice. Find out which pitfalls to avoid in this partnership.
… Those of you who have been reading my series of blog posts on the future of Robotics and Artificial Intelligence know that I am more sanguine about how fast things will deploy at scale in the real world than many cheerleaders and fear mongers might believe. My predictions here are tempered by that sanguinity.
Some of these predictions are about the public perception of AI (that has been the single biggest thing that has changed in the field in the last three years), some are about technical ideas, and some are about deployments.
“At the time, everybody was saying, ‘Well, you can’t attribute a single event to climate change,'” [Myles Allen] said in an interview with E&E News. “And this prompted me to ask, ‘Why not?'”
So he drafted his commentary as the floodwaters inched closer to his kitchen door. He wrote that it might not always be impossible to attribute extreme weather events to climate change — just “simply impossible at present, given our current state of understanding of the climate system.” And if researchers were ever able to make that breakthrough, he mused, the science could potentially influence the public’s ability to blame greenhouse gas emitters for the damages caused by climate-related events.
His hunch held true. Nearly 15 years later, extreme event attribution not only is possible, but is one of the most rapidly expanding subfields of climate science.
Child protective agencies are haunted when they fail to save kids. Pittsburgh officials believe a new data analysis program is helping them make better judgment calls.
When it comes to artificial intelligence and jobs, the prognostications are grim. The conventional wisdom is that A.I. might soon put millions of people out of work — that it stands poised to do to clerical and white collar workers over the next two decades what mechanization did to factory workers over the past two. And that is to say nothing of the truckers and taxi drivers who will find themselves unemployed or underemployed as self-driving cars take over our roads.
But it’s time we start thinking about A.I.’s potential benefits for society as well as its drawbacks. The big-data and A.I. revolutions could also help fight poverty and promote economic stability.
Poverty, of course, is a multifaceted phenomenon. But the condition of poverty often entails one or more of these realities: a lack of income (joblessness); a lack of preparedness (education); and a dependency on government services (welfare). A.I. can address all three.
First, even as A.I. threatens to put people out of work, it can simultaneously be used to match them to good middle-class jobs that are going unfilled. Today there are millions of such jobs in the United States. This is precisely the kind of matching problem at which A.I. excels. Likewise, A.I. can predict where the job openings of tomorrow will lie, and which skills and training will be needed for them.
It is claimed that the network, a so-called reservoir computing system, could predict words before they are said in a conversation and could help predict future outcomes based on the present.
The research team that created the reservoir computing system, led by Wei Lu, professor of electrical engineering and computer science at the University of Michigan, recently published their work in Nature Communications.
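Reservoir computing trains only a linear readout on top of a fixed, randomly connected recurrent "reservoir" — which is why it suits hardware like memristor arrays, where the reservoir's weights need never be updated. As a minimal software sketch only (an echo state network on a toy sine-prediction task; the reservoir size, signal, and ridge penalty are illustrative assumptions, not details of the Michigan group's system):

```python
import numpy as np

rng = np.random.default_rng(0)
n_res = 100                                    # reservoir size (illustrative)
W_in = rng.uniform(-0.5, 0.5, (n_res, 1))      # fixed, untrained input weights
W = rng.uniform(-0.5, 0.5, (n_res, n_res))     # fixed, untrained recurrent weights
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # keep spectral radius < 1

u = np.sin(0.1 * np.arange(1000))              # toy signal to forecast
states = np.zeros((len(u), n_res))
x = np.zeros(n_res)
for i, ui in enumerate(u):                     # drive the reservoir with the input
    x = np.tanh(W_in[:, 0] * ui + W @ x)
    states[i] = x

# only the linear readout is trained (ridge regression) to predict u[i+1]
X, y = states[100:-1], u[101:]                 # discard a 100-step washout
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)
pred = states[-1] @ W_out                      # one-step-ahead forecast
```

The appeal is that all learning collapses into one linear solve; the expensive recurrent part is fixed, so it can be an analog physical device.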
from Environmental Research Letters; Ethan D Coffel, Radley M Horton and Alex de Sherbinin
As a result of global increases in both temperature and specific humidity, heat stress is projected to intensify throughout the 21st century. Some of the regions most susceptible to dangerous heat and humidity combinations are also among the most densely populated. Consequently, there is the potential for widespread exposure to wet bulb temperatures that approach and in some cases exceed postulated theoretical limits of human tolerance by mid- to late-century. We project that by 2080 the relative frequency of present-day extreme wet bulb temperature events could rise by a factor of 100–250 (approximately double the frequency change projected for temperature alone) in the tropics and parts of the mid-latitudes, areas which are projected to contain approximately half the world’s population. In addition, population exposure to wet bulb temperatures that exceed recent deadly heat waves may increase by a factor of five to ten, with 150–750 million person-days of exposure to wet bulb temperatures above those seen in today’s most severe heat waves by 2070–2080. Under RCP 8.5, exposure to wet bulb temperatures above 35 °C—the theoretical limit for human tolerance—could exceed a million person-days per year by 2080. Limiting emissions to follow RCP 4.5 entirely eliminates exposure to that extreme threshold. Some of the most affected regions, especially Northeast India and coastal West Africa, currently have scarce cooling infrastructure, relatively low adaptive capacity, and rapidly growing populations. In the coming decades heat stress may prove to be one of the most widely experienced and directly dangerous aspects of climate change, posing a severe threat to human health, energy infrastructure, and outdoor activities ranging from agricultural production to military training.
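Wet bulb temperature folds heat and humidity into a single heat-stress number; near a 35 °C wet bulb, sweating can no longer shed metabolic heat, which is why the abstract treats it as a tolerance limit. As a rough illustration only — this is Stull's (2011) single-equation empirical fit, not the paper's climate-model methodology, and the example inputs are made up:

```python
import math

def wet_bulb_stull(t_c, rh_pct):
    """Stull (2011) empirical wet-bulb fit; roughly valid for
    RH 5-99% and T -20 to 50 C at sea-level pressure."""
    return (t_c * math.atan(0.151977 * math.sqrt(rh_pct + 8.313659))
            + math.atan(t_c + rh_pct)
            - math.atan(rh_pct - 1.676331)
            + 0.00391838 * rh_pct ** 1.5 * math.atan(0.023101 * rh_pct)
            - 4.686035)

tw = wet_bulb_stull(30.0, 80.0)   # a hot, humid day: wet bulb around 27 C
```

Note how a merely warm 30 °C day at 80% humidity already sits within striking distance of dangerous wet bulb values, which is why modest warming multiplies the frequency of extreme events so sharply.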
It was a typical November day in New York City. The year: 1959. Robert Dunlop, 50 years old and photographed later as clean-shaven, hair carefully parted, his earnest face donning horn-rimmed glasses, passed under the Ionian columns of Columbia University’s iconic Low Library. He was a guest of honor for a grand occasion: the centennial of the American oil industry.
Over 300 government officials, economists, historians, scientists, and industry executives were present for the Energy and Man symposium – organized by the American Petroleum Institute and the Columbia Graduate School of Business – and Dunlop was to address the entire congregation on the “prime mover” of the last century – energy – and its major source: oil. As President of the Sun Oil Company, he knew the business well, and as a director of the American Petroleum Institute – the industry’s largest and oldest trade association in the land of Uncle Sam – he was responsible for representing the interests of all those many oilmen gathered around him.
Four others joined Dunlop at the podium that day, one of whom had made the journey from California – and Hungary before that. The nuclear weapons physicist Edward Teller had, by 1959, become ostracized by the scientific community for betraying his colleague J. Robert Oppenheimer, but he retained the embrace of industry and government. Teller’s task that November fourth was to address the crowd on “energy patterns of the future,” and his words carried an unexpected warning.
Johns Hopkins All Children’s Hospital of St. Petersburg has received national recognition from Hospitals & Health Networks magazine and has been named one of the Most Wired hospitals in the U.S. by the publication.
The magazine’s list reflects a survey of 698 participants representing over 2,000 hospitals in the country. The survey examines how a hospital leverages information technology to improve the performance of infrastructure, business and administrative management, quality and safety, and clinical integration. It is well known in the health care field as a benchmark study and a leading barometer of IT standards among the nation’s hospitals.
Chuck Ganapathi, CEO at Tact, an AI-driven sales tool that uses voice, type and touch, says with our devices changing, voice makes a lot of sense. “There is no mouse on your phone. You don’t want to use a keyboard on your phone. With a smart watch, there is no keyboard. With Alexa, there is no screen. You have to think of more natural ways to interact with the device.”
“This course is focused on the question: How do we do matrix computations with acceptable speed and acceptable accuracy?” … “The course is taught in Python with Jupyter Notebooks, using libraries such as Scikit-Learn and Numpy for most lessons, as well as Numba (a library that compiles Python to C for faster performance) and PyTorch (an alternative to Numpy for the GPU) in a few lessons.”
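One speed-versus-accuracy technique the course covers is randomized SVD: sketch a big matrix with a random projection, then do the expensive decomposition on the much smaller sketch. A minimal NumPy sketch (the matrix sizes and rank are illustrative, and this is not the course's own code):

```python
import numpy as np

def rsvd(A, k, oversample=10, seed=0):
    """Randomized truncated SVD via a random range finder."""
    rng = np.random.default_rng(seed)
    # 1. sketch the column space of A with a random projection, orthonormalize
    Q, _ = np.linalg.qr(A @ rng.normal(size=(A.shape[1], k + oversample)))
    # 2. exact SVD of the much smaller projected matrix, then lift back
    U_small, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ U_small)[:, :k], s[:k], Vt[:k]

rng = np.random.default_rng(1)
A = rng.normal(size=(500, 50)) @ rng.normal(size=(50, 300))  # rank-50 matrix
U, s, Vt = rsvd(A, 50)
err = np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A)   # tiny: rank is captured
```

On a matrix whose rank is at most `k`, the random sketch captures the column space almost surely, so the reconstruction error is near machine precision; on full-rank data you trade a controlled amount of accuracy for a large speedup.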
from arXiv, Computer Science > Learning; Benoit Jacob, Skirmantas Kligys, Bo Chen, Menglong Zhu, Matthew Tang, Andrew Howard, Hartwig Adam, Dmitry Kalenichenko
The rising popularity of intelligent mobile devices and the daunting computational cost of deep learning-based models call for efficient and accurate on-device inference schemes. We propose a quantization scheme that allows inference to be carried out using integer-only arithmetic, which can be implemented more efficiently than floating point inference on commonly available integer-only hardware. We also co-design a training procedure to preserve end-to-end model accuracy post quantization. As a result, the proposed quantization scheme improves the tradeoff between accuracy and on-device latency. The improvements are significant even on MobileNets, a model family known for run-time efficiency, and are demonstrated in ImageNet classification and COCO detection on popular CPUs.
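The core of schemes like this is an affine map from floats to small integers, q = round(x / scale) + zero_point, so that inference can stay in integer arithmetic. A minimal sketch of per-tensor affine quantization and dequantization (illustrative only; it omits the paper's integer-only matrix multiply and the quantization-aware training co-design):

```python
import numpy as np

def quantize(x, num_bits=8):
    """Per-tensor affine quantization: q = round(x / scale) + zero_point."""
    qmin, qmax = 0, 2 ** num_bits - 1
    scale = (x.max() - x.min()) / (qmax - qmin)       # float step per integer step
    zero_point = int(round(qmin - x.min() / scale))   # integer that represents 0.0
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax)
    return q.astype(np.uint8), scale, zero_point

def dequantize(q, scale, zero_point):
    return scale * (q.astype(np.float32) - zero_point)

x = np.random.default_rng(0).normal(size=(4, 4)).astype(np.float32)
q, s, z = quantize(x)
err = np.abs(dequantize(q, s, z) - x).max()   # at most about one quantization step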
Ever since the neural style transfer algorithm was published by Gatys et al., we have seen plenty of pictures turned into artwork.
The algorithm uses a convolutional network to apply the ‘style’ of a painting to a given picture. We also saw an impressive approach to non-artistic neural style transfer, where “non-paintings” or everyday objects can be tiled as the style image to create art. Later, Johnson et al. developed a fast, feed-forward neural style transfer approach. This paved the way for many mobile applications, the most notable being Prisma, which lets users create an artwork within seconds from a picture taken with their phone.
However, most of the artwork generated by these applications has used photographs as the content image.
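In the Gatys formulation, "style" is captured by Gram matrices of convolutional feature maps — channel-by-channel correlations that discard spatial layout — and the style loss compares those Gram matrices between the generated and style images. A minimal sketch (the feature shapes and the normalization constant are illustrative assumptions):

```python
import numpy as np

def gram_matrix(features):
    """features: (channels, height, width) activations from one conv layer."""
    c, h, w = features.shape
    F = features.reshape(c, h * w)
    return (F @ F.T) / (c * h * w)   # channel-by-channel correlations

def style_loss(gen_feats, style_feats):
    """Mean squared difference between Gram matrices at one layer."""
    return np.mean((gram_matrix(gen_feats) - gram_matrix(style_feats)) ** 2)

feats = np.random.default_rng(0).normal(size=(64, 32, 32))
loss_same = style_loss(feats, feats)   # identical features give zero loss
```

Because the Gram matrix throws away where features occur and keeps only which features co-occur, minimizing this loss transfers texture and palette without copying the painting's composition.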
The project I volunteered for was with UNICEF, which wanted to tell the story of the situation of refugee and migrant children worldwide. When I first explored the data, I was shocked to learn that nearly 31,000,000 children had migrated across borders or been forcibly displaced by violence or insecurity. Reflecting on why the scale of the issue so surprised me shaped the approach I took to communicating the story.
from International Journal of Digital Curation; Chung-Yi Hou, Heather Soyka, Vivian Hutchison, Isis Sema, Chris Allen, Amber Budden
In the case of Data Observation Network for Earth (DataONE), DataONE’s extensive collaboration with individuals and organizations has informed the development of multiple educational resources. Through these interactions, DataONE understands that the process of creating and maintaining educational materials that remain responsive to community needs is reliant on careful evaluations. Therefore, the impetus for a comprehensive, customizable Education EVAluation instrument (EEVA) is grounded in the need for tools to assess and improve current and future training and educational resources for research data management.
In this paper, the authors outline and provide context for the background and motivations that led to creating EEVA for evaluating the effectiveness of data management educational resources.
In theory, your skin—your largest organ—serves to keep water sealed within your body. But even if your skin is usually dewy and perfect, it probably flakes on the job a bit during the winter months.
from International Journal of Digital Curation; Peter Darch
Online citizen science projects involve recruitment of volunteers to assist researchers with the creation, curation, and analysis of large datasets. Enhancing the quality of these data products is a fundamental concern for teams running citizen science projects. Decisions about a project’s design and operations have a critical effect both on whether the project recruits and retains enough volunteers, and on the quality of volunteers’ work. The processes by which the team running a project learn about their volunteers play a critical role in these decisions. Improving these processes will enhance decision-making, resulting in better quality datasets, and more successful outcomes for citizen science projects. This paper presents a qualitative case study, involving interviews and long-term observation, of how the team running Galaxy Zoo, a major citizen science project in astronomy, came to know their volunteers and how this knowledge shaped their decision-making processes. This paper presents three instances that played significant roles in shaping Galaxy Zoo team members’ understandings of volunteers.