On Monday, millions of Americans woke up to a startling surprise: data that was set free would once again be locked away. According to a pop-up that appeared on the EPA Open Data website, its days are numbered—the message stated that the site would cease to exist as of April 28, 2017. It turns out this isn’t quite true, but the site isn’t exactly safe, either.
from Georgia Institute of Technology, College of Computing
When neuroimaging took gigantic leaps forward in the 1970s and ’80s with the introduction of magnetic resonance imaging (MRI) and computed tomography (CT), it was a sign of just how closely advances in medicine and diagnostics track technological advances within the field.
Suddenly, researchers were able to observe and document the brain far more safely in living subjects, opening up a world of study that was previously unattainable. Understanding of medical conditions, and of the effects of alcohol and drugs on the brain, increased massively.
That kind of technological advancement, one that drastically moves the needle of study in the field forward, hasn’t been as prominent in the field of behavioral psychology. It’s a challenge that many researchers in the Georgia Institute of Technology’s School of Interactive Computing (IC) are trying to overcome.
The burden of substance abuse disorders can fall heavily on the families and friends of those who battle addictions. But society also pays a great deal through increased crime. Treatment programs can reduce those costs.
For at least two decades, we’ve known substance use and crime go hand in hand. More than half of violent offenders and one-third of property offenders say they committed crimes while under the influence of alcohol or drugs.
Researchers with the Centers for Disease Control and Prevention recently estimated that prescription opioid abuse, dependence and overdoses cost the public sector $23 billion a year, with a third of that attributable to crime. An additional $55 billion per year reflects private-sector costs attributable to productivity losses and health care expenses.
A team of Rice engineering students recently took top honors and a $5,000 prize for its development of a potential digital cure for epilepsy.
Epilepsy is a neurological disorder characterized by unpredictable, recurrent seizures that can pose a risk to a patient’s safety. During a seizure, the brain is considered to be in an “ictal” state. Team Ictal Inhibitors’ goal was to develop a neurostimulator that stimulates the brain to prevent the onset of seizures.
Building on the key tenets of precision medicine, Stanford Medical School believes algorithms will transform healthcare into an industry that is more predictive than reactive.
In fact, the school is so invested in the promise of data analytics and artificial intelligence, it launched a new department 18 months ago that focuses specifically on biomedical data, Lloyd Minor, M.D., dean of the Stanford University School of Medicine, told the Wall Street Journal, adding that there is a “huge demand for data scientists” in the healthcare industry.
It’s all part of a transition within the last decade to find ways that data can predict illnesses and prevent disease—an approach Minor refers to as “precision health,” a twist on genomics-based precision medicine initiatives.
A major upgrade to Hyak, the UW’s on-site shared cluster supercomputer, is helping to address the growing needs of the UW research community.
A new computer modeling study from Los Alamos National Laboratory is aimed at making epidemiological models more accessible and useful for public-health collaborators and improving disease-related decision making.
“In a real-world outbreak, the time is often too short and the data too limited to build a really accurate model to map disease progression or guide public-health decisions,” said Ashlynn R. Daughton, a graduate research assistant at Los Alamos and doctoral student at University of Colorado, Boulder. She is lead author on a paper out last week in Scientific Reports, a Nature journal. “Our aim is to use existing models with low computational requirements first to explore disease-control measures and second to develop a platform for public-health collaborators to use and provide feedback on models,” she said.
Scientists at the Centre for Genomic Regulation (CRG) in Barcelona, Spain, have developed a workflow management system that prevents irreproducibility when analyzing large genomics datasets with computers.
Nextflow contributes to establishing good scientific practices and provides an important framework for research projects in which the analysis of large datasets is used to make decisions, for example in precision medicine.
from arXiv, Physics > Physics and Society; Anthony J. Webster, Richard H. Clarke
Climate change is widely expected to increase weather related damage and the insurance claims that result from it. This will increase insurance premiums, in a way that is independent of a customer’s contribution to the causes of climate change. Insurance provides a financial mechanism that mitigates some of the consequences of climate change, allowing damage from increasingly frequent events to be repaired. We observe that the insurance industry could reclaim any increase in claims due to climate change, by increasing the insurance premiums on energy producers for example, without needing government intervention or a new tax. We argue that this insurance-led levy must acknowledge both present carbon emissions and a modern industry’s carbon inheritance, that is, to recognise that fossil-fuel driven industrial growth has provided the innovations and conditions needed for modern civilisation to exist and develop. A tax or levy on energy production is one mechanism that would recognise carbon inheritance through the increased (energy) costs for manufacturing and using modern technology, and can also provide an incentive to minimise carbon emissions, through higher costs for the most polluting industries. The necessary increases in insurance premiums would initially be small, and will require an event attribution (EA) methodology to determine their size. We propose that the levies can be phased in as the science of event attribution becomes sufficiently robust for each claim type, to ultimately provide a global insurance-led response to climate change.
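The proposed mechanism can be sketched in a few lines (a toy illustration only, not the authors’ model: the claim types, dollar figures, and attribution fractions below are invented for this example). An event-attribution method estimates, per claim type, the fraction of losses attributable to climate change, and the levy recovers that amount; claim types without a robust EA estimate are simply excluded, matching the paper’s phased-in proposal.

```python
# Toy illustration of an insurance-led climate levy (hypothetical numbers).
# For each claim type, an event-attribution (EA) method estimates the
# fraction of losses attributable to climate change; the levy recovers
# the attributable amount from energy producers.

def climate_levy(claims_by_type, attribution_fraction):
    """Sum of losses attributable to climate change across claim types.

    claims_by_type: {claim_type: total claims in dollars}
    attribution_fraction: {claim_type: EA-estimated fraction in [0, 1]}
    Claim types without a robust EA estimate are excluded (no fraction),
    mirroring the proposal to phase levies in per claim type.
    """
    return sum(
        losses * attribution_fraction[ctype]
        for ctype, losses in claims_by_type.items()
        if ctype in attribution_fraction
    )

claims = {"flood": 2.0e9, "windstorm": 1.5e9, "fire": 0.5e9}
fractions = {"flood": 0.10, "windstorm": 0.05}  # no robust EA for fire yet
print(climate_levy(claims, fractions))  # 0.10*2e9 + 0.05*1.5e9 = 2.75e8
```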
As part of our mission to support and promote better science through support of the open source scientific software community, NumFOCUS champions technical progress through diversity. NumFOCUS recognizes that the open source data science community is currently highly homogenous. We believe that diverse contributors and community members produce better science and better projects. NumFOCUS strives to help create a more diverse community through initiatives and programming devoted to increasing participation by and inclusion of underrepresented people.
To support this effort, we are excited to announce that NumFOCUS has received a generous grant from the Moore Foundation.
Banks are highly regulated and usually unable to offer fast, customer-oriented service. But banks have been the backbone of modern economies and have large sales and customer-service forces. Fintechs, on the other hand, are flexible and generally successful at focusing on specific segments with unmet needs. Most banks see AI and machine learning as a way to reduce costs. They are targeting promising fintech companies as a means to expand globally.
Microsoft will build computers even more sleek and beautiful than Apple’s. Robots will 3-D-print cool shoes that are personalized just for you. (And you’ll get them in just a few short days.) Neural networks will take over medical diagnostics, and Snapchat will try to take over the entire world. The women and men in these pages are the technical, creative, idealistic visionaries who are bringing the future to your doorstep. You might not recognize their names—they’re too busy working to court the spotlight—but you’ll soon hear about them a lot. They represent the best of what’s next.
I’ve heard that in the future computerized AIs will become so much smarter than us that they will take all our jobs and resources, and humans will go extinct. Is this true?
That’s the most common question I get whenever I give a talk about AI. The questioners are earnest; their worry stems in part from some experts who are asking themselves the same thing. These folks are some of the smartest people alive today, such as Stephen Hawking, Elon Musk, Max Tegmark, Sam Harris, and Bill Gates, and they believe this scenario very likely could be true. Recently at a conference convened to discuss these AI issues, a panel of nine of the most informed gurus on AI all agreed this superhuman intelligence was inevitable and not far away.
Yet buried in this scenario of a takeover of superhuman artificial intelligence are five assumptions which, when examined closely, are not based on any evidence. These claims might be true in the future, but there is no evidence to date to support them.
What is the potential of machine learning over the next 5-10 years? And how can we develop this technology in a way that benefits everyone?
The Royal Society’s machine learning project has been investigating these questions, and has today launched a report setting out the action needed to maintain the UK’s role in advancing this technology while ensuring careful stewardship of its development.
For decades, scientists have worked toward the ‘holy grail’ of finding a cure for cancer. While significant progress has been made, researchers have often worked as individual entities. Now, as organizations of all kinds seek to put the massive amounts of data they take in to good use, so, too, are the health care industry and the U.S. federal government.
The National Cancer Institute (NCI) and the U.S. Department of Energy (DOE) are collaborating on three pilot projects that use high-performance computing at the exascale: a billion billion calculations per second, also known as an exaFLOPS (a quintillion, or 10^18, floating point operations per second), roughly 50 times faster than today’s supercomputers. The goal is to take years of data and crunch it to come up with better, more effective cancer treatments.
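The scale involved is easy to sanity-check with back-of-the-envelope arithmetic (the 50x figure is the article’s; the ~20-petaFLOPS baseline it implies is derived here, not stated in the article):

```python
# Sanity-checking the exascale arithmetic from the article.
exaflops = 1e18          # one exaFLOPS: a quintillion operations per second
billion = 1e9

assert billion * billion == exaflops  # "a billion billion" is 10^18

# If exascale is ~50x today's machines, today's implied baseline is
# about 20 petaFLOPS (1 petaFLOPS = 10^15 operations per second).
baseline = exaflops / 50
petaflops = 1e15
print(baseline / petaflops)  # 20.0
```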
Mark Zuckerberg now acknowledges the dangerous side of the social revolution he helped start. But is the most powerful tool for connection in human history capable of adapting to the world it created?
Cambridge, MA, Saturday, April 30. Join us as experts from both the public and private sectors explore the issues surrounding environmental data, discuss the consequences and impact of its uses, and examine who we trust to work with our data and why. [RSVP required]
New York, NY, Friday, May 5. The final 13 teams will pitch off to see which ventures win the grand prizes. Starts at 1 p.m., NYU Stern Tisch Hall [free, please register]
Details on how to submit comments and feedback regarding Version 1 of the Ethically Aligned Design (EAD) can be found via the Submission Guidelines. Individuals interested in specific content within the EAD can submit feedback on the sections relevant to their particular interests. Deadline for submissions is May 15.
We’re interested in seeing and celebrating the fun and innovative ways your library is helping students excel. Now through May 17, send an email to SAGE.Contest@sagepub.com with a photo that shows how your library supports success in a creative way and you could win a travel grant to the 2017 Charleston Conference, or one of five $50 Amazon gift cards.
If you have a unique vision of the future, NASA wants to help you share it with the world through its CineSpace Short Film Competition. Filmmakers of all stripes are asked to utilize NASA’s incredible library of space imagery to craft imaginative, celebratory films up to ten minutes long. A grand prize of $10,000 is on the line, so get those internal combustion engines rumbling. Deadline for submissions is July 31.
An advertiser looking to boost response rates for a mobile offer can employ a simple solution: Target commuters on crowded trains. And if there’s a delay or disruption, response rates rise further.
The reason? The more crowded the subway car, the more consumers immerse themselves in their phones to avoid contact with the strangers next to them, according to research by Anindya Ghose, a professor at New York University’s Stern School of Business.
Today, we’re excited to share a tool we built to help bridge the gap between designers and engineers working on design systems at scale. React-sketchapp is an open-source library that allows you to write React components that render to Sketch documents.
If you’re a designer or an engineer familiar with React, you should feel right at home with the new library, and you can play with it right now.
There is no real middle ground when it comes to TensorFlow use cases. Most implementations run either on a single node or at drastic Google scale, with few scalability stories in between.
This is starting to change, however, as more users find a growing array of open source tools based on MPI and other approaches for reaching multi-GPU scalability in training, but it is still not simple to scale Google’s own framework across larger machines. Code modifications get hairy beyond a single node, and for the MPI-uninitiated there is a steep curve to scalable deep learning.
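The core operation these MPI-based tools add is an allreduce that averages gradients across workers after each step, so every replica applies an identical update. A minimal pure-Python sketch of that averaging step (illustrative only; real tools perform it with MPI_Allreduce over GPU buffers, not Python lists):

```python
# Toy sketch of the gradient averaging at the heart of synchronous
# data-parallel training. Each worker computes gradients on its own
# shard of data; an allreduce averages them elementwise so that every
# worker applies the same update.

def allreduce_mean(per_worker_grads):
    """Average a list of gradient vectors (one per worker) elementwise."""
    n_workers = len(per_worker_grads)
    n_params = len(per_worker_grads[0])
    return [
        sum(worker[i] for worker in per_worker_grads) / n_workers
        for i in range(n_params)
    ]

# Three workers, each holding a 2-parameter gradient from its data shard.
grads = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
print(allreduce_mean(grads))  # [3.0, 4.0]
```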
Handwritten digits classification from MNIST on Android with TensorFlow.
If you want to make your own version of this app, or want to know how to save your model and export it for Android or other devices, check the very simple tutorial below.
Considerable attention has been devoted over the last two decades to the use of persistent identifiers for assets of interest to scientific and other communities. Among persistent identifiers, Digital Object Identifiers (DOIs) stand out quite prominently, with approximately 133 million DOIs assigned to various objects as of February 2017. While the assignment of DOIs to objects such as scientific publications has been in place for many years, their assignment to Earth science data sets is more recent. Applying persistent identifiers to data sets enables improved tracking of their use and reuse, facilitates the crediting of data producers, and aids reproducibility by associating research with the exact data set(s) used. Maintaining provenance – i.e., tracing the lineage of significant scientific conclusions back to the entities (data sets, algorithms, instruments, satellites, etc.) that led to them – would be prohibitive without persistent identifiers. This paper provides a brief background on the use of persistent identifiers in general within the US, and DOIs more specifically. We examine their recent use for Earth science data sets, and outline successes and some remaining challenges. Among the challenges, for example, is the ability to conveniently and consistently obtain data citation statistics using the DOIs assigned by organizations that manage data sets. [full text]
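For readers unfamiliar with the mechanics behind the citation-statistics challenge: a DOI is resolved by appending it to the doi.org resolver, and DOI names are case-insensitive, so counting citations to a data set typically requires normalizing DOI strings before matching. A small illustrative sketch (the DOI below is made up, not a real data set identifier):

```python
# Sketch of DOI handling relevant to citation counting. The DOI used
# here, 10.1234/Example-Dataset, is a made-up example.

RESOLVER = "https://doi.org/"

def doi_to_url(doi):
    """A DOI resolves by appending it to the doi.org resolver."""
    return RESOLVER + doi

def normalize_doi(ref):
    """Normalize a DOI reference for matching.

    DOI names are case-insensitive, and citations may embed the DOI in
    a resolver URL or a "doi:" prefix, so tools typically strip any
    prefix and lowercase the name before comparing.
    """
    ref = ref.strip()
    for prefix in ("https://doi.org/", "http://doi.org/", "doi:"):
        if ref.lower().startswith(prefix):
            ref = ref[len(prefix):]
    return ref.lower()

doi = "10.1234/Example-Dataset"
print(doi_to_url(doi))  # https://doi.org/10.1234/Example-Dataset
print(normalize_doi("DOI:10.1234/EXAMPLE-DATASET") == normalize_doi(doi))  # True
```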
Fall 1996. A young Chris Fralic is selling software for Oracle. He’s not sure what he wants to do next, but he’s always been curious about venture capital. And then some unusual magic happens — a friend offers to introduce him to Kevin Compton, a vaunted name in VC. To his surprise, they talk on the phone for over an hour, and Fralic not only walks away with a comprehensive download on the industry, but a thesis on networking he’s adhered to ever since: The best way to be highly influential is to be human to everyone you meet.
Fast forward to today: Fralic is a successful VC himself, responsible for First Round’s investments in Warby Parker, Roblox, HotelTonight, and Adaptly, among others. When asked what’s made his career possible, he’ll tell you outright it’s the relationships, built deliberately over many years.