The field of imaging science — marked by rapidly changing and improving technology — plays a critical role in applications ranging from cancer diagnosis to virtual reality. With the aim of training the next leaders in imaging, the School of Engineering & Applied Science is collaborating with other Washington University in St. Louis schools to offer an interdisciplinary doctoral program in imaging sciences, beginning in the 2018-19 academic year.
Designed to prepare students for careers in academic research or in industry, the interdisciplinary doctoral program will incorporate the latest imaging technologies, including biomedical, satellite, seismic, sonic and light detection and ranging (LiDAR).
Company Data Science News
Andrew Ng, AI rockstar, has announced the AI Fund which will incubate machine learning start-ups and make $175m in funding available. Professor Ng, please hire me to be your chief of human and social impact.
Jeff Bezos, Warren Buffett, and Jamie Dimon have agreed to start a company to fix health care for their employees. Somehow. What can I say? It is unlikely there will be any sector Amazon does not enter.
Amazon hired Candace Thille away from Stanford. Her title will be “Director of learning science and engineering” and her background suggests she may be heading up a new employee education program within the company. This seems like a return to a more classic liberalism in which companies had a stake in their employees over the long term.
Ford Motor Company acquired two data-centric companies and announced it will open Ford X to figure out how to be part of the future, which will undoubtedly be data-driven transportation.
Censys, a startup launched at the University of Michigan “continuously scans the internet, analyzing every publicly visible server and device.” The intent is to use Censys to detect cybersecurity threats by identifying unsecured devices as soon as they connect to the network. Censys data will remain free to researchers.
Chronicle, a new company emerging from X, Google’s moonshot incubator, “will focus on helping companies comprehend their own security data.” It’s not entirely clear what the product is, but I’m optimistic that it is right-sized to corporate needs.
The Royal Bank of Canada is making a modest commitment to cybersecurity research, contributing $1.78m to the University of Waterloo. Elsewhere, the AI team at RBC was able to predict a fall in share price at Chipotle by using social media data about Chipotle’s ‘queso’ combined with the price of avocados. So now you see what they’re doing in-house.
Trifacta, a big-data startup that is building tools to help businesses structure and analyse data that gets generated in their networks through customer interactions and other actions, is today announcing that it has closed out its Series D round at $48 million, with a notable list of strategic and financial investors that includes Google, Ericsson, the Deutsche Börse, Accel and more.
The funding brings the total raised by Trifacta to $124 million, and while the company is not disclosing its valuation, figures from PitchBook put it at $258 million. The company did not dispute that figure when I asked about it.
The company plans to use the funding to expand globally as well as add more firepower to its platform. It has tripled its customer base in the last year, although it’s not releasing hard numbers about how many customers it has, or its revenues.
It’s thus natural for us to think about what principles we want to give our AI-based machines, and to puzzle through how they might be applied in particular cases. If you’d like to engage in these thought experiments, spend some time at MoralMachine.mit.edu where you’ll be asked to make the sort of decision familiar from the Trolley Problem: if you had to choose, would you program AVs to run over three nuns or two joggers? Four old people or two sickly middle-aged people? The creators of the site hope to use the crowd’s decisions to provide guidance to AV programmers, but the exercise can also lead to a different conclusion: We cannot settle moral problems — at least not at the level of detail the thinking behind Moral Machine demands of us — by applying principles to cases. The principles are too vague and the cases are too complex. If we instead take a utilitarian, consequentialist approach, trying to assess the aggregated pains and pleasures of taking these various lives, the problem turns out to be still too hard and too uncertain.
So perhaps we should take a different approach to how we’ll settle these issues. Perhaps the “we” should not be the commercial entities that build the AI but the systems we already have in place for making decisions that affect public welfare. Perhaps the decisions should start with broad goals and be refined for exceptions and exemptions the way we refine social policies and laws. Perhaps we should accept that AI systems are going to make decisions based on what they’ve been optimized for, because that’s how and why we build them.[4] Perhaps we should be governing their optimizations.
Amazon could then go in one of two directions. First, Amazon could integrate backward into its suppliers’ businesses; there are hints the company is already exploring pharmaceutical sales, and the Wall Street Journal says the idea was broached. That said, I actually think this is less likely; insurance operates best at more scale, not less: first and foremost, the larger the pool, the more risk can be spread, and there are obvious efficiency gains in administration. More scale also gives more bargaining power over other parts of the healthcare chain. Three companies, large though they may be, aren’t going to be as effective as large insurers, no matter how well-managed they are.
What would make more sense to me is that Amazon, having first built an interface for its employees and then a standardized infrastructure for its health care suppliers, converts the latter into a marketplace where PBMs, insurance administrators, distributors, and pharmacies have to compete to serve employees.
“It’s an exciting new time in cancer research. Investigation of the human genome and individual tumor genetics is producing mammoth amounts of data that need to be interpreted in order to deliver the best possible cancer care,” said Phillip A. Sharp, PhD, chairman of the SU2C Scientific Advisory Committee and institute professor at the Koch Institute for Integrative Cancer Research at the Massachusetts Institute of Technology.
“The development of computer vision over the past couple years along with the advent of deep learning has opened up dramatic opportunities to build new vision-related products that can solve very practical, real world problems,” said Bryton Shang, Founder and CEO of Aquabyte. “The same computer-vision models I worked on for tissue cancer diagnosis are applicable in a way that can transform the fish farming industry and the future of protein consumption around the world.”
Aquabyte’s technology will augment the visual IQ of human-operated systems by collecting data with underwater 3D cameras. By installing these cameras in fish farm pens, the technology watches the fish and determines the size of fish/biomass in order to determine the optimal feed quantity.
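To make the size-to-feed pipeline concrete, here is a minimal sketch using the standard length-weight relationship from fisheries science, W = a·L^b. The coefficients, the 1% feed rate, and the function names are illustrative assumptions, not Aquabyte’s actual model.

```python
# Sketch: estimating pen biomass from camera-measured fish lengths.
# The length-weight relationship W = a * L^b is standard in fisheries
# science; the coefficients below are illustrative, not Aquabyte's.

def estimate_biomass_kg(lengths_cm, a=0.0089, b=3.0):
    """Total biomass (kg) from per-fish length estimates (cm).

    W = a * L^b gives weight in grams for roughly salmon-like fish
    when a ~ 0.009 and b ~ 3.0; divide by 1000 for kg.
    """
    return sum(a * L**b for L in lengths_cm) / 1000.0

def feed_ration_kg(biomass_kg, rate=0.01):
    """Daily feed as a fraction of biomass (1% is a common rule of thumb)."""
    return biomass_kg * rate
```

In practice the camera system would also need to correct for sampling bias (which fish swim past the camera) before scaling per-fish estimates up to the whole pen.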
Comedians dine on sarcasm — the ironic, mocking remarks that say one thing on the surface but cut much deeper.
Could a computer learn to detect this nuanced form of expression? Pushpak Bhattacharyya says it can — and he’s got the algorithms to prove it.
Bhattacharyya — director of the Indian Institute of Technology (IIT), Patna, and a professor at IIT, Bombay — has dedicated the past few years to using GPU-powered deep learning to spot sarcasm online.
Milwaukee School of Engineering announced today it will begin offering a bachelor of science in computer science in the fall.
The new program follows the school’s announcement in October of plans to build a $34 million, 64,000-square-foot computational science facility in the center of campus, funded by a donation from MSOE regent Dwight Diercks and his wife Dian.
Our key findings are mainly driven by the nature of a skills-short market and the subsequent challenges in the supply vs demand for data science professionals. … Our survey showed that the majority of respondents had spent 1 year or less with their current company. Whilst this is not indicative of experience levels, it signals the heightened demand for data scientists and the volume of new professionals entering the field.
National Bureau of Economic Research, Robert Seamans and Manav Raj
We summarize existing empirical findings regarding the adoption of robotics and AI and its effects on aggregated labor and productivity, and argue for more systematic collection of the use of these technologies at the firm level. Existing empirical work primarily uses statistics aggregated by industry or country, which precludes in-depth studies regarding the conditions under which robotics and AI complement or are substituting for labor. Further, firm-level data would also allow for studies of effects on firms of different sizes, the role of market structure in technology adoption, the impact on entrepreneurs and innovators, and the effect on regional economies amongst others. We highlight several ways that such firm-level data could be collected and used by academics, policymakers and other researchers.
Kuala Lumpur in Malaysia has announced it will be introducing the City Brain, which will be able to cut down congestion, catch people parking illegally, detect accidents and even catch speeding drivers.
The City Brain has been developed by Alibaba Cloud and the city will become the first in the world outside China to roll out the system.
Artificial intelligence will be able to use video and image recognition and data mining to keep tabs on what is going on throughout the city.
The Massachusetts Institute of Technology is launching an initiative called MIT Intelligence Quest in an effort to combine multiple disciplines to reverse engineer human intelligence, create new algorithms for machine learning and artificial intelligence and foster collaboration.
Perhaps the biggest takeaway from the structure of MIT IQ is that artificial intelligence needs to be a team sport to develop breakthroughs. MIT IQ is an effort to break down multiple research silos across the institute to rally around human and machine intelligence.
In this podcast we turn to five leaders in their respective fields who’ve been intimately involved with this emerging technology. We ask them not only to contrast what precision medicine is and may become, but also to help us clarify what holds promise and what’s just hype. [audio, 16:05]
Peter Singer, an expert on future warfare at the New America think-tank, is in no doubt. “What we have is a series of technologies that change the game. They’re not science fiction. They raise new questions. What’s possible? What’s proper?” Mr Singer is talking about artificial intelligence, machine learning, robotics and big-data analytics. Together they will produce systems and weapons with varying degrees of autonomy, from being able to work under human supervision to “thinking” for themselves. The most decisive factor on the battlefield of the future may be the quality of each side’s algorithms. Combat may speed up so much that humans can no longer keep up.
Frank Hoffman, a fellow of the National Defence University who coined the term “hybrid warfare”, believes that these new technologies have the potential not just to change the character of war but even possibly its supposedly immutable nature as a contest of wills. For the first time, the human factors that have defined success in war, “will, fear, decision-making and even the human spark of genius, may be less evident,” he says.
5G consists of a host of technologies that include mmWave frequencies and multiple antennas. Because mmWave signals must overcome losses not encountered at lower frequencies, the industry is moving to multiple antennas and phased-array technology that direct signals to their destination with higher power than today’s omnidirectional signals. Testing such systems is difficult, but a startup out of NYU Tandon School of Engineering may just make channel emulation practical and affordable.
Started by post-doctoral research fellow Aditya Dhananjay and NYU faculty members Sundeep Rangan and Dennis Shasha, Millilabs has developed a system that uses off-the-shelf hardware that emulates both the transmission channel and the phased-array antennas needed to produce MIMO signals.
New York, NY February 21, starting at 6 p.m. “NYU’s Institute for Public Knowledge invites you to join for a book talk on Matthew Salganik’s new book Bit by Bit: Social Research in the Digital Age, featuring the author in conversation with Duncan Watts and Beth Noveck.”
“We’re releasing a new batch of seven unsolved problems which have come up in the course of our research at OpenAI. Like our original Requests for Research (which resulted in several papers), we expect these problems to be a fun and meaningful way for new people to enter the field, as well as for practitioners to hone their skills (it’s also a great way to get a job at OpenAI). Many will require inventing new ideas.”
“The designation of ASA Fellow has been a significant honor for nearly 100 years. Under American Statistical Association bylaws, the Committee on Fellows can elect up to one-third of one percent of the total association membership as fellows each year.” … “Individuals are nominated by their ASA-member peers.”
HIMSS is now accepting speaking proposals for the Precision Medicine Summit, May 17-18 in Washington, DC. The deadline for proposal submissions is February 9.
Faceted search is a topic broad enough to deserve its own book. It has become a standard feature of all modern search engines, including open-source platforms like Solr and Elasticsearch.
In this post, I’ll quickly explain how faceted classification and faceted search work. I’ll then outline how faceted search interacts with some of the query understanding approaches discussed in previous posts.
“Observable is a better way to code.
Discover insights faster and communicate more effectively with interactive notebooks for data analysis, visualization, and exploration.”
arXiv, Astrophysics > Instrumentation and Methods for Astrophysics; Daniel Foreman-Mackey
This research note presents a derivation and implementation of efficient and scalable gradient computations using the celerite algorithm for Gaussian Process (GP) modeling. The algorithms are derived in a “reverse accumulation” or “backpropagation” framework and they can be easily integrated into existing automatic differentiation frameworks to provide a scalable method for evaluating the gradients of the GP likelihood with respect to all input parameters. The algorithm derived in this note uses less memory and is more efficient than versions using automatic differentiation and the computational cost scales linearly with the number of data points.
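For readers who want the underlying identity, the gradient of the GP log-likelihood with respect to any kernel parameter θ is d logL/dθ = ½ tr((ααᵀ − K⁻¹) ∂K/∂θ), with α = K⁻¹y. Below is a dense O(N³) illustration of that identity — not the celerite algorithm itself, which evaluates the same quantity in linear time for semiseparable kernels. The squared-exponential kernel and all parameter values are my own choices for the sketch.

```python
import numpy as np

# Dense illustration of the GP gradient identity:
#   d logL / d theta = 0.5 * tr((alpha alpha^T - K^{-1}) dK/dtheta),
# with alpha = K^{-1} y. celerite computes this in O(N) for
# semiseparable kernels; this O(N^3) version just shows the math.

def gp_loglike_and_grad(y, t, amp, ell, noise=1e-2):
    """Log-likelihood and its gradient w.r.t. the kernel amplitude `amp`
    for a squared-exponential kernel k(r) = amp * exp(-0.5 r^2 / ell^2)."""
    r2 = (t[:, None] - t[None, :]) ** 2
    se = np.exp(-0.5 * r2 / ell**2)
    K = amp * se + noise * np.eye(len(t))
    Kinv = np.linalg.inv(K)
    alpha = Kinv @ y
    _, logdet = np.linalg.slogdet(K)
    ll = -0.5 * y @ alpha - 0.5 * logdet - 0.5 * len(t) * np.log(2 * np.pi)
    # dK/d(amp) is just the noiseless squared-exponential matrix.
    grad_amp = 0.5 * np.trace((np.outer(alpha, alpha) - Kinv) @ se)
    return ll, grad_amp
```

The “reverse accumulation” framing in the note generalizes this: rather than forming K⁻¹ explicitly, each step of the celerite factorization is differentiated and replayed backward, which is what keeps both time and memory linear in N.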