from Stanford University, Stanford Medicine, News Center
In a health care sector now awash with data and digital technologies, physicians are actively preparing for the transformation of patient care, according to the 2020 Health Trends Report published today by Stanford Medicine.
Stanford Medicine’s 2020 Health Trends Report once again documents key trends steering the industry’s future, including a maturing digital health market, new health laws opening patient access to data, and artificial intelligence gaining regulatory traction for medical use.
To understand how these trends will reach the doctor’s office and ultimately shape patient care, Stanford Medicine commissioned a national survey of more than 700 physicians, residents, and medical students.
Animals are born with innate abilities and predispositions. Horses can walk within hours of birth, ducks can swim soon after hatching, and human infants are automatically attracted to faces. Brains have evolved to take on the world with little or no experience, and many researchers would like to recreate such natural abilities in artificial intelligence.
New research finds that artificial neural networks can evolve to perform tasks without learning. The technique could lead to AI that is much more adept at a wide variety of tasks such as labeling photos or driving a car.
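The article stays high-level, but the core idea is that a network's topology, not its trained weights, can do the work. As a toy illustration only (the architecture, thresholds, XOR task, and use of exhaustive search in place of real evolutionary search are all invented for this sketch), the code below looks for a connection pattern that solves XOR using a single shared weight magnitude and no weight training:

```python
# Sketch (not from the article): find a network *topology* that solves XOR
# with one shared weight magnitude and no learning. Each connection is drawn
# from {-1, 0, +1}, so presence and sign are part of the topology.
from itertools import product

CASES = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def step(z):
    return 1 if z > 0 else 0

def forward(genome, x, w=1.0):
    a11, a12, a21, a22, b1, b2 = genome
    x1, x2 = x
    h1 = step(w * (a11 * x1 + a12 * x2) - 0.5)   # low-threshold hidden unit
    h2 = step(w * (a21 * x1 + a22 * x2) - 1.5)   # high-threshold hidden unit
    return step(w * (b1 * h1 + b2 * h2) - 0.5)

def fitness(genome):
    return sum(forward(genome, x) == y for x, y in CASES)

# The genome space is tiny (3^6 = 729), so exhaustive search stands in here
# for the evolutionary search a real system would use.
best = max(product((-1, 0, 1), repeat=6), key=fitness)
print(best, fitness(best))  # a topology that gets all 4 XOR cases right
```

One such winner connects both inputs to both hidden units and subtracts the high-threshold (AND-like) unit from the low-threshold (OR-like) one, which is XOR by construction.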
Why are healthcare providers being targeted more than other organizations?
In the same way that experienced robbers target banks rather than newspaper stands, cybercriminals are on the lookout for ‘big ticket’ targets that offer the most value.
Medical providers handle enormous amounts of high-value data daily. Healthcare has the highest cost per breach of any industry, with each breach costing nearly $6.5 million on average, according to IBM’s annual Cost of a Data Breach Report.
Data stored on healthcare systems is detailed in nature, ranging from social security numbers, insurance information, and addresses to more personal information such as health conditions, the hospitals patients visit, and the medicines they take. This offers hackers a wealth of information that can be used for further social-engineering attacks on medical users. As a result, individual medical records can sell for as much as $1,000 on the dark web.
In a paper published this month in IEEE Transactions on Visualization and Computer Graphics, researchers described an artificial intelligence (AI) system that analyzes students’ emotions based on video recordings of the students’ facial expressions.
The system “provides teachers with a quick and convenient measure of the students’ engagement level in a class,” says Huamin Qu, a computer scientist at the Hong Kong University of Science and Technology, who co-authored the paper. “Knowing whether the lectures are too hard and when students get bored can help improve teaching.”
There is widespread apprehension that introducing submission fees will deter authors from submitting to fee-charging journals in favor of those that don’t have such charges. Editorial rejections are a big driver of this concern: it is frustrating to spend several hundred dollars to submit your article only to have it back in your inbox within ninety minutes. This situation can be alleviated by further separating the fees into a small submission fee, a fee for peer review, and a fee to cover the costs of publishing the accepted article (see an upcoming Scholarly Kitchen post for a proposal about this). Nonetheless, there is a world of difference between free and every other amount, and even a very small mandatory submission fee may deter authors.
An alternative is to give authors a choice of how they pay for their article
from Bloomberg, Future Finance, Katie Linsell and Lananh Nguyen
Chris Purves has been at the cutting edge of markets for more than a decade – from algorithmic trading to machine learning.
Now the head of UBS Group AG’s Strategic Development Lab is turning his focus to the human survivors of the tech invasion, persuading them to understand things will never be the same. They’re going to have to — in the jargon of Silicon Valley’s missionaries — “unlearn” how they’ve always operated.
It’s not just that their software may know their next move before they do. The extinction of an entire way of life is looming, as Purves sees it: the end of the bonus culture. Compensation will be a last frontier in the onslaught of technology on finance.
The ball has dropped on a new year and a new decade, as we move from the 2010s into the 2020s. The last 10 years have seen incredible advances in science and technology, including a dramatic reduction in the cost of genetic sequencing, the first successful uses of gene therapy in humans, and the first direct detection of gravitational waves. But what about the next decade? What previously impossible things will humans achieve? The Wyss Institute for Biologically Inspired Engineering at Harvard University asked its faculty members across a wide range of scientific disciplines what they predict will be the most impactful developments in their fields between now and the year 2030.
You may have heard from a lot of businesses that they’ve updated their privacy policies because of a new law called the California Consumer Privacy Act. But what’s actually changed for you?
EFF has spent the past year defending this law in the California legislature, but we realize that not everyone has been following it as closely as we have. So here are answers to ten frequently asked questions we’ve heard about the CCPA.
A new MIT study finds “health knowledge graphs,” which show relationships between symptoms and diseases and are intended to help with clinical diagnosis, can fall short for certain conditions and patient populations. The results also suggest ways to boost their performance.
Health knowledge graphs have typically been compiled manually by expert clinicians, but that can be a laborious process. Recently, researchers have experimented with automatically generating these knowledge graphs from patient data. The MIT team has been studying how well such graphs hold up across different diseases and patient populations.
In a paper presented at the Pacific Symposium on Biocomputing 2020, the researchers evaluated automatically generated health knowledge graphs based on real datasets comprising more than 270,000 patients with nearly 200 diseases and more than 770 symptoms.
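The paper's actual pipeline is not reproduced here, but one simple way to auto-generate a health knowledge graph from patient data is to keep disease-to-symptom edges whose conditional probability beats the symptom's overall base rate by some margin. The records, diseases, symptoms, and lift threshold below are all made up for illustration:

```python
# Sketch (assumptions, not the paper's method): build a toy health knowledge
# graph by keeping disease->symptom edges where P(symptom | disease) exceeds
# the symptom's base rate by a "lift" factor.
from collections import Counter

# Hypothetical patient records: (diseases, symptoms) per visit.
records = [
    ({"flu"}, {"fever", "cough"}),
    ({"flu"}, {"fever", "fatigue"}),
    ({"migraine"}, {"headache", "nausea"}),
    ({"migraine"}, {"headache"}),
    ({"flu", "migraine"}, {"fever", "headache"}),
]

symptom_counts = Counter(s for _, syms in records for s in syms)
n = len(records)

def edges(min_lift=1.5):
    graph = {}
    for disease in {d for ds, _ in records for d in ds}:
        with_d = [syms for ds, syms in records if disease in ds]
        for symptom, count in symptom_counts.items():
            p_given_d = sum(symptom in syms for syms in with_d) / len(with_d)
            base_rate = count / n
            if p_given_d / base_rate >= min_lift:
                graph.setdefault(disease, []).append(symptom)
    return graph

print(edges())  # e.g. flu -> fever, migraine -> headache
```

The study's finding that such graphs can fall short for certain conditions shows up even in a sketch like this: a rare disease with few records gives noisy conditional probabilities, so its edges are unreliable.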
I’ve worked for over a decade to help reduce civilian casualties in conflict, an effort sorely needed given that most of those killed in war are civilians. I’ve looked, in great detail, at the possibility that automation in weapons systems could in fact protect civilians. Analyzing over 1,000 real-world incidents in which civilians were killed, I found that humans make mistakes (no surprise there) and that there are specific ways AI could be used to help avoid them. The mistakes fell into two general kinds: either military personnel missed indicators that civilians were present, or civilians were mistaken for combatants and attacked in that belief. Based on these patterns of harm from real-world incidents, artificial intelligence could be used to help avert both kinds of mistake.
Though the debate often focuses on autonomous weapons, there are in fact three kinds of possible applications for artificial intelligence in the military: optimization of automated processing (e.g., improving the signal-to-noise ratio in detection), decision aids (e.g., helping humans to make sense of complex or vast sets of data), and autonomy (e.g., a system taking actions when certain conditions are met). While those calling for killer robots to be banned focus on autonomy, there are risks in all of these applications that should be understood and discussed.
Lockheed Martin Corp. didn’t have to look far for its next chief technology officer — just over to Arlington and the Department of Defense.
The Bethesda defense contractor (NYSE: LMT) tapped former Defense Advanced Research Projects Agency director Steven Walker as its CTO Thursday, with current CTO Keoki Jackson shifting to a new role as chief engineer and vice president of Engineering & Program Operations.
from Carnegie Mellon University, School of Computer Science
“Even if there’s lots of hateful content, we can still find positive comments,” said Ashiqur R. KhudaBukhsh, a post-doctoral researcher in the LTI who conducted the research with alumnus Shriphani Palakodety. Finding and highlighting these positive comments, they suggest, might do as much to make the internet a safer, healthier place as would detecting and eliminating hostile content or banning the trolls responsible.
We come back from our field seasons increasingly broken. You can either think: I can’t do this, I’m going to have to change the science I do; or you might try to internalise all of that pain that you feel. Lots of scientists do the latter – they feel we should be objective and robust, not at the mercy of our emotions.
Increasingly, we’re realising that we can use that emotional response to form new questions. Working on the bleached and dying coral reefs is enormously important to understanding how those environments are changing. There is a real urge to want to do something about it, rather than just chart the demise. And that’s where our research is heading now. We’re trying to restore some coral reef communities, or a fishery, or replant a mangrove forest. We’re just trying to find ways of protecting pockets of really diverse, vibrant life, which might reseed much larger areas when we tackle the big issues.
On Nov. 25, an article headlined “Spot the deepfake. (It’s getting harder.)” appeared on the front page of The New York Times business section.[1] The editors would not have placed this piece on the front page a year ago. If they had, few would have understood what its headline meant. Today, most do. This technology, one of the most worrying fruits of rapid advances in artificial intelligence (AI), allows those who wield it to create audio and video representations of real people saying and doing made-up things. As this technology develops, it becomes increasingly difficult to distinguish real audio and video recordings from fraudulent misrepresentations created by manipulating real sounds and images. “In the short term, detection will be reasonably effective,” says Subbarao Kambhampati, a professor of computer science at Arizona State University. “In the longer run, I think it will be impossible to distinguish between the real pictures and the fake pictures.”
from Political Communication University of Michigan Working Group, Department of Communication and Media
Ann Arbor, MI. February 21 at the University of Michigan. “The topic of the conference is Online and Engaged: Political News in a Digital Media Environment.” [save the date]
from Cooper Hewitt Smithsonian Design Museum, curated by Ellen Lupton
New York, NY Until May 17. “Presented in Cooper Hewitt’s Process Lab, Face Values: Exploring Artificial Intelligence is an immersive installation that explores the pervasive but often hidden role of facial-detection technology in contemporary society. This high-tech, provocative response investigates the human face as a living data source used by governments and businesses to track, measure, and monetize emotions.” [$$]
“The Women in Data Science (WiDS) initiative today announced that its 2020 datathon will focus on Intensive Care Unit (ICU) data to help predict patient mortality. The WiDS datathon is part of the WiDS global initiative, which reaches more than 100,000 people worldwide each year through a technical conference at Stanford and at 150+ locations around the world, plus online through live streaming and a podcast series. WiDS is part of the Stanford Institute for Computational and Mathematical Engineering (ICME) and aims to inspire and educate data scientists worldwide, regardless of gender, and to support women in the field.” Deadline for submissions is February 24.
“WiDS brings together some of the best and most creative data scientists in the world,” said Karen Matthys, Stanford ICME Executive Director, External Partners, and Co-Director of the WiDS Conference. “This year the datathon participants are seeking patterns and insights in data to find ways to reduce ICU deaths. There are approximately 500,000 ICU deaths annually in the U.S. alone. Our data scientists will race each other and the clock to find insights for addressing ICU mortality.”
from Medium, Pinterest Engineering, Song Cui and Dhananjay Shrouty
We built Pin2Interest (P2I), a scalable machine learning system for content classification, to map our corpus of 200B+ Pins to our interest taxonomy. The results from P2I are used to generate personalized recommendations and create ranking features for other machine learning models. P2I is in production and has many consumers such as home feed ranking and Ads targeting.
P2I leverages both text and visual inputs such as annotations, visual embeddings, and board names. It uses Natural Language Processing (NLP) techniques such as lexical expansion and embedding similarities to map the inputs of every single image to a list of taxonomy nodes as prediction candidates. Then, a search relevance model is used to predict and rank the matching score between the image and every single taxonomy node. A sample P2I output is shown below, including the most relevant interest prediction with a score for the image.
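The two-stage shape described above, candidate generation by embedding similarity followed by relevance scoring, can be sketched in a few lines. Everything here is a stand-in: the taxonomy, the embeddings, and the reuse of cosine similarity in place of P2I's trained relevance model are invented for the example:

```python
# Sketch of a two-stage interest classifier: generate candidate taxonomy
# nodes by embedding similarity, then score and rank them. Not P2I's actual
# models or data.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Hypothetical taxonomy-node embeddings.
taxonomy = {
    "home_decor": [0.9, 0.1, 0.0],
    "recipes":    [0.1, 0.9, 0.2],
    "diy_crafts": [0.7, 0.2, 0.3],
}

def predict_interests(pin_embedding, k=2):
    # Stage 1: candidate generation by similarity to taxonomy nodes.
    candidates = sorted(taxonomy,
                        key=lambda node: cosine(pin_embedding, taxonomy[node]),
                        reverse=True)[:k]
    # Stage 2: a learned relevance model would re-score the candidates;
    # cosine similarity is reused here as a stand-in for that model.
    return [(node, round(cosine(pin_embedding, taxonomy[node]), 3))
            for node in candidates]

print(predict_interests([0.8, 0.15, 0.1]))
```

Splitting the problem this way keeps the expensive scoring model off the full taxonomy: it only sees the handful of candidates the cheap similarity pass surfaces.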
Lyft is excited to announce the open sourcing of Flyte, a structured programming and distributed processing platform for highly concurrent, scalable, and maintainable workflows. Flyte has been serving production model training and data processing at Lyft for over three years now, becoming the de-facto platform for teams like Pricing, Locations, Estimated Time of Arrivals (ETA), Mapping, Self-Driving (L5), and more. In fact, Flyte manages over 7,000 unique workflows at Lyft, totaling over 100,000 executions every month, 1 million tasks, and 10 million containers. In this post, we’ll introduce you to Flyte, give an overview of the types of problems it solves, and provide examples of how to leverage it for your machine learning and data processing needs.
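The "structured programming" framing means workflows are written as code in which dataflow between tasks defines the DAG. The dependency-free sketch below illustrates that idea only; the `task` decorator here is a stand-in, not Flyte's SDK (flytekit expresses this with its own `@task` and `@workflow` decorators and runs each task in its own container, none of which is modeled here):

```python
# Sketch of workflow-as-structured-code: tasks are typed functions, and the
# workflow wires their outputs together, which implicitly defines a DAG.
def task(fn):
    # Stand-in decorator; a real platform would attach resource, retry,
    # and caching metadata at registration time.
    fn.is_task = True
    return fn

@task
def extract(n: int) -> list:
    return list(range(n))

@task
def transform(rows: list) -> list:
    return [r * r for r in rows]

@task
def load(rows: list) -> int:
    return sum(rows)

def pipeline(n: int) -> int:
    # Dependencies are implied by dataflow: transform needs extract's
    # output, load needs transform's, so the engine can schedule them
    # (and any independent branches concurrently).
    return load(transform(extract(n)))

print(pipeline(4))  # 0 + 1 + 4 + 9 = 14
```

Because each task is a pure, typed function, a platform like Flyte can version it, cache its outputs, and retry it in isolation, which is what makes workflows at the scale the post describes maintainable.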