Substack, The Gradient newsletter, Vincent Cano Gil
How is it accelerating chemical discovery?
First, machine learning has improved existing methods of simulating chemical environments. We have already mentioned that computational chemistry allows us to partly bypass lab experiments. However, computational chemistry calculations that simulate quantum mechanical processes scale poorly: their cost grows steeply with system size, and cheaper methods trade away accuracy. The underlying core problem in computational chemistry is solving the electronic Schrödinger equation for complex molecules – that is, given the positions of a collection of atomic nuclei and the total number of electrons, calculating the properties of interest. An exact solution is possible only for one-electron systems; for everything else, we must rely on “good enough” approximations. Moreover, many popular methods for approximating solutions to the Schrödinger equation scale exponentially with system size, making a brute-force solution intractable. Over the last century, many approaches have been developed to speed up these calculations without sacrificing too much accuracy; even so, some of the “cheaper” methods remain computational bottlenecks.
One way that AI has accelerated these calculations is by blending physics-based simulation with machine learning. Another approach bypasses modelling the physical process altogether by directly mapping molecular representations to the desired property. Both approaches have allowed chemists to screen chemical databases more efficiently for properties such as atomic charges and ionization energies.
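To make the second, "direct mapping" idea concrete, here is a deliberately tiny sketch: instead of solving the Schrödinger equation, a surrogate model predicts a property from a simple descriptor vector. All molecule names, descriptors, and property values below are invented for illustration; real systems use far richer representations and learned models.

```python
import math

# Hypothetical descriptors: (num C atoms, num H atoms, num O atoms),
# paired with made-up "ionization energy" labels (eV).
training_set = {
    "methane":  ((1, 4, 0), 12.6),
    "ethane":   ((2, 6, 0), 11.5),
    "methanol": ((1, 4, 1), 10.8),
    "ethanol":  ((2, 6, 1), 10.5),
}

def predict(descriptor, k=2):
    """k-nearest-neighbour surrogate: average the labels of the k
    closest training molecules in descriptor space."""
    dists = sorted(
        (math.dist(descriptor, d), y) for d, y in training_set.values()
    )
    nearest = [y for _, y in dists[:k]]
    return sum(nearest) / len(nearest)

print(predict((2, 6, 1)))  # → 11.0 (average of the two closest molecules)
```

Screening a database then reduces to calling a cheap function like `predict` over millions of candidates, rather than running a quantum chemistry calculation for each one.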
The estate of Audrey Steele Burnand has gifted $57.75 million to the University of California, Irvine to fund the creation of a new campuswide center that will pursue research into the causes and treatment of depression and also support the UCI-managed Steele/Burnand Anza-Borrego Desert Research Center.
More than $55 million of the gift is earmarked for advancing depression research at UCI. It’s believed to be the largest philanthropic donation to a U.S. university to support research focused solely on depression, which is the most prevalent mental health disorder in the U.S.
Scientists would ensure, he said, that “everything we do is grounded in science, facts and truth”. His hires made good on his promise to put science first — perhaps to a fault. Senior leaders in government science policy require a trifecta of skills: research expertise, proficiency in the art of interagency policy coordination, and a deep knowledge of the legislative budgeting process. These last two are not intuitive for academic scientists, no matter how intelligent, and they are hard to learn in a high-profile leadership position.
This month, the first cabinet-level director of the White House Office of Science and Technology Policy (OSTP) resigned after acknowledging he had mistreated his staff. The vacancy his departure leaves comes on top of others — one at the head of the National Institutes of Health, another at the Food and Drug Administration. In my view, Biden should consider appointing only those with demonstrated policy chops and a history of working well with others.
Mid-career, I learnt the hard way how essential policy expertise is to these roles.
As municipalities clamor for a slice of President Biden’s $1.2 trillion infrastructure spending bill, one Johns Hopkins scientist is re-examining one of the basic elements of road-building: the width of road lanes. Determining the width that provides the highest level of safety, access, and comfort for every road user—drivers, cyclists, and pedestrians—is complex, says Shima Hamidi, an assistant professor in Johns Hopkins’ Department of Environmental Health and Engineering. It’s a data problem, she says, and she wants to help cities solve it.
Hamidi is undertaking a massive collection of data on urban streets across the United States to answer one question: How low can cities go on street width to make room for bike lanes and wider sidewalks?
For the study, which is funded by Bloomberg Philanthropies, Hamidi and her research team will examine a national sample of urban street segments from 11 major cities in 10 states. They will collect road-feature, traffic, and safety data for about 5,000 roads and road segments, accounting for a range of geometric, urban design, and traffic factors such as lighting, crosswalks, traffic-calming measures, medians, existing bike and pedestrian infrastructure, and volume of cars. The data will then be compiled into a model to determine the “tipping point” for the safest lane width for each class of road.
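The article doesn't describe the model itself, so here is only a hypothetical sketch of what a "tipping point" analysis might look like in its simplest form: given crash rates observed at different lane widths (all numbers invented), find the narrowest lane whose safety outcome is close to the best observed.

```python
# Invented crash rates (per million vehicle-miles) by lane width in feet.
crash_rate_by_width_ft = {
    9.0: 4.1, 9.5: 3.2, 10.0: 2.6, 10.5: 2.5, 11.0: 2.5, 12.0: 2.7,
}

def tipping_point(rates, tolerance=0.1):
    """Narrowest lane width whose crash rate is within `tolerance`
    of the best (lowest) observed rate."""
    best = min(rates.values())
    return min(w for w, r in rates.items() if r <= best + tolerance)

print(tipping_point(crash_rate_by_width_ft))  # → 10.0
```

The real study's model would control for the geometric, design, and traffic confounders listed above rather than reading a threshold off raw rates; this sketch only illustrates the "how low can you go" framing.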
Northwestern Kellogg School of Management, Kellogg Insight; Robert Korajczyk, Dermot Murphy, and coauthors
If you handed the same data to 164 teams of economists and asked them to answer the same questions, would they reach a consensus? Or would they offer 164 different answers?
A new study put this exact proposition to the test. One hundred sixty-four teams of researchers analyzed the same financial-market dataset separately and wrote up their conclusions in 164 short papers. Teams were then given several rounds of feedback, mimicking the kind of informal peer-review process that economists engage in before they submit to an academic journal. All the researchers involved wanted to know how much variation would exist among their different papers.
From Veritasium, a video about an experiment in evolution that’s been running continuously for more than 33 years.
The LTEE (the E. coli long-term evolution experiment) was started with 12 identical bacterial populations in 1988, and as of early 2020 they had reached 73,500 generations (the equivalent of 1.5 to 2 million years in human generational terms). When you can fast-forward evolution while also preserving past generations (the bacteria can easily be frozen and reanimated), you can discover some surprising things about it.
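The human-years comparison checks out as simple arithmetic, assuming a human generation time of roughly 20–25 years (my assumption; the article doesn't state the figure it used):

```python
# 73,500 E. coli generations scaled by an assumed human generation time.
generations = 73_500
for years_per_generation in (20, 25):
    print(generations * years_per_generation)  # 1,470,000 and 1,837,500 years
```

Both endpoints land in the "1.5 to 2 million years" range quoted above.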
Harvard University, John A. Paulson School of Engineering and Applied Sciences
For Alex Wulff, it started as a cool idea for a senior thesis. Wulff, a senior concentrating in electrical engineering at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), wanted to know the combined capabilities of low-cost, commercially available radio frequency monitors when set up as a network.
Wulff’s plans changed during the pandemic. He, along with fellow seniors Isaac Struhl and Ben Harpe, both studying computer science at SEAS, decided to take a year off and try to build a company around Wulff’s low-cost radio spectrum monitoring technology.
Their goal: develop sensors and a software package that can identify whether a radio signal is disrupting communications in a localized area, classify whether the signal should or shouldn’t be there, and then localize the source of the signal.
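The company's actual methods aren't described, but the first of those three steps — detecting that something unexpected is on the air — can be sketched as a simple baseline-deviation test. The readings, thresholds, and function names below are all invented for illustration:

```python
import statistics

def is_disruptive(history_dbm, reading_dbm, n_sigmas=3.0):
    """Flag `reading_dbm` as anomalous if it exceeds the channel's
    historical mean power by more than `n_sigmas` standard deviations."""
    mean = statistics.fmean(history_dbm)
    sigma = statistics.pstdev(history_dbm)
    return reading_dbm > mean + n_sigmas * sigma

baseline = [-92.0, -91.5, -92.3, -91.8, -92.1]  # quiet channel (dBm)
print(is_disruptive(baseline, -60.0))   # strong unexpected carrier → True
print(is_disruptive(baseline, -91.9))   # ordinary noise floor → False
```

Classification (should this signal be here?) and localization (where is it?) would sit on top of a detector like this, the latter typically by comparing received power or timing across several networked sensors.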
Andrew Ng has serious street cred in artificial intelligence. He pioneered the use of graphics processing units (GPUs) to train deep learning models in the late 2000s with his students at Stanford University, cofounded Google Brain in 2011, and then served for three years as chief scientist for Baidu, where he helped build the Chinese tech giant’s AI group. So when he says he has identified the next big shift in artificial intelligence, people listen. And that’s what he told IEEE Spectrum in an exclusive Q&A.
A professional sports bettor based in Los Angeles, who uses the pseudonym Joey Isaks, said he uses around 100 different bookmakers, most of whom are local and deal in cash.
“Some of them say when [legalization] comes, they’re getting out,” the professional bettor said in a phone interview this week. “But these are like 65-year-old guys who’ve never done anything else, and I don’t think they’ll actually stop.”
Bengals fans in Ohio are in the same boat as Californians when it comes to betting on Sunday’s Super Bowl. Ohio, however, has passed legislation authorizing sports betting and is aiming to launch toward the end of the year. Meanwhile, the high-stakes battle rages on in California.
“It’s definitely a big prize,” Rick Arpin, a Las Vegas-based managing partner for accounting firm KPMG, said of California’s sports betting potential.
There’s also no guarantee that the next mutation — and there will be more — won’t be an offshoot of a more dangerous variant such as delta. And your risk of catching Covid more than once is real.
“The virus keeps raising that bar for us every few months,” said Akiko Iwasaki, a professor of immunobiology at Yale School of Medicine. “When we were celebrating the amazing effectiveness of booster shots against the delta variant, the bar was already being raised by omicron.”
The Conversation, Research Brief, Marcio Resende and Harry J. Klee
Breeding for flavor is a difficult task for many different reasons. For one, fruit and vegetable plant breeding programs need to improve several different traits that appeal to both producers and consumers. Creating the optimal genetic combination that covers all these traits is difficult, so breeding programs often deprioritize flavor to focus on improving disease resistance and increasing yield. Plant breeders must also evaluate hundreds to thousands of potential varieties.
To streamline this process, we developed an algorithm to predict how consumers will rank flavor in tomatoes and blueberries.
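The research brief doesn't spell out the algorithm, so the following is only a hypothetical sketch of the general idea: predict a consumer flavor score for each candidate variety from measured chemistry (sugars, acids, aroma volatiles), then rank varieties by predicted score so breeders can evaluate thousands of lines without a taste panel for each. The weights and measurements are invented.

```python
# Invented model weights relating chemistry to consumer liking.
weights = {"sugar": 0.6, "acid": -0.2, "volatiles": 0.5}

# Invented chemistry measurements for three candidate tomato varieties.
varieties = {
    "heirloom_A":   {"sugar": 4.1, "acid": 0.35, "volatiles": 2.0},
    "commercial_B": {"sugar": 2.8, "acid": 0.40, "volatiles": 0.9},
    "hybrid_C":     {"sugar": 3.6, "acid": 0.30, "volatiles": 1.5},
}

def predicted_score(chem):
    """Weighted sum of chemistry features, standing in for a trained model."""
    return sum(weights[k] * v for k, v in chem.items())

ranking = sorted(varieties,
                 key=lambda name: predicted_score(varieties[name]),
                 reverse=True)
print(ranking)  # best-predicted flavor first
```

In practice the weights would be learned from consumer-panel ratings paired with chemical assays, which is what lets the model generalize to varieties the panel never tasted.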
“AI can serve as a key catalyst in wildlife research and environmental protection more broadly,” says Prof. Devis Tuia, the head of EPFL’s Environmental Computational Science and Earth Observation Laboratory and the study’s lead author. If computer scientists want to reduce the margin of error of an AI program that’s been trained to recognize a given species, for example, they need to be able to draw on the knowledge of animal ecologists. These experts can specify which characteristics should be factored into the program, such as whether a species can survive at a given latitude, whether it’s crucial for the survival of another species (such as through a predator-prey relationship) or whether the species’ physiology changes over its lifetime.

“We used this approach to improve a bear-recognition program a few years ago,” says Prof. Mackenzie Mathis, a neuroscientist at EPFL and co-author of the study. “A researcher studying bear DNA had installed automatic cameras in bear habitats in order to recognize individual animals. But bears shed half of their body fat when they hibernate, meaning the generic programs she used were no longer able to recognize the bears once the season changed. We therefore added criteria to the program that can not only look at whether an animal has a given characteristic but also be tweaked manually to allow for possible deviations.”
Pew Research Center; Janna Anderson and Lee Rainie
Asked to ‘imagine a better world online,’ experts hope for a ubiquitous – even immersive – digital environment that promotes fact-based knowledge, offers better defense of individuals’ rights, empowers diverse voices and provides tools for technology breakthroughs and collaborations to solve the world’s wicked problems
So far, 18 states have legalized online sports betting. The industry is now valued at $59 billion in the U.S., a figure Statista expects to grow to nearly $93 billion in 2023. Not surprisingly, more and more startups are trying to cash in. And many larger sports betting companies are looking to acquire early-stage startups for their technology. “There is an innovation gap that the market leaders — the companies most reliant on the future of this industry — are generally unable or ill-equipped to solve themselves,” says Lloyd Danzig, managing partner at New York City-based venture capital firm Sharp Alpha. The firm announced a $10 million fund for early-stage sports betting technology startups in November 2021.
Danzig notes that for these trailblazing technology startups, the biggest challenge will be converting enough users to their platforms to convince investors and potential acquirers that their product is legitimate. This is why product launches are often positioned around February and March for the Super Bowl and March Madness, events where market trends can pick up steam.
A multi-institutional group in New York City has united to address health disparities in multiple chronic diseases through a new collaborative center.
The vision of the Center to Improve Chronic Disease Outcomes through Multi-level and Multi-generational Approaches Unifying Novel Interventions and Training for Health Equity (COMMUNITY Center) is rooted in public health tenets, recognizing that medical advances alone can only partially reduce the disproportionate burden of disease on racial and ethnic minorities. Reducing health disparities in chronic diseases requires multi-faceted approaches that intervene on structural-, community-, family- and individual-level determinants of health and well-being.
“In establishing this new collaborative center, we aim to reduce multiple chronic diseases in the communities that we serve across the New York City region, particularly in the Black and LatinX communities that face a much higher burden of chronic diseases such as cancer and heart disease,” said Dr. Mary Beth Terry, the contact principal investigator for the center and a professor of epidemiology at Columbia’s Mailman School of Public Health (Mailman).
One million dollars to support brand building and strategic marketing efforts.
… $100,000 each to 9 colleges to support brand building and strategic marketing efforts. Deadline for applications is March 15.
The eScience Institute’s Data Science for Social Good program is now accepting applications for student fellows and project leads for the 2021 summer session. Fellows will work with academic researchers, data scientists and public stakeholder groups on data-intensive research projects that will leverage data science approaches to address societal challenges in areas such as public policy, environmental impacts and more. Student applications due 2/15 – learn more and apply here. DSSG is also soliciting project proposals from academic researchers, public agencies, nonprofit entities and industry who are looking for an opportunity to work closely with data science professionals and students on focused, collaborative projects to make better use of their data. Proposal submissions are due 2/22.
We’re excited to announce the public beta of the Stately Editor! The Stately Editor is a tool for creating and editing state diagrams. We’ve received a lot of great feedback from the private beta testers, and now we’re delighted to share it with everyone.
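Under the hood, a state diagram like those the Stately Editor produces boils down to a transition table: a mapping from (state, event) pairs to next states. As a language-neutral sketch (Stately itself targets XState machines in JavaScript; the states and events here are invented), a minimal "fetch" statechart looks like this:

```python
# Transition table for a toy fetch machine: (state, event) -> next state.
transitions = {
    ("idle", "FETCH"): "loading",
    ("loading", "RESOLVE"): "success",
    ("loading", "REJECT"): "failure",
    ("failure", "RETRY"): "loading",
}

def step(state, event):
    """Apply one event; events with no matching transition are ignored,
    as in a statechart."""
    return transitions.get((state, event), state)

state = "idle"
for event in ["FETCH", "REJECT", "RETRY", "RESOLVE"]:
    state = step(state, event)
print(state)  # → "success"
```

A visual editor's value is that this table, plus nesting, guards, and actions, is drawn and edited as a diagram instead of maintained by hand.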
Artificial intelligence (AI) and machine learning (ML) seem to have piqued the interest of automated data collection providers. While web scraping has been around for some time, AI/ML has only recently come into view for these providers.
Aleksandras Šulženko, Product Owner at Oxylabs.io, who has been working with these solutions for several years, shares his insights on the importance of artificial intelligence, machine learning, and web scraping.