A research team at Harvard’s Wyss Institute for Biologically Inspired Engineering, led by the Institute’s Founding Director Donald Ingber, has developed a solution to this problem using ‘organ-on-a-chip’ (Organ Chip) microfluidic culture technology. His team can now culture a stable, complex human microbiome in direct contact with a vascularized human intestinal epithelium for at least five days in a human Intestine Chip. An oxygen gradient established across the chip supplies high oxygen levels to the endothelium and epithelium while maintaining hypoxic conditions in the intestinal lumen inhabited by the commensal bacteria. This “anaerobic Intestine Chip” stably maintained a microbial diversity similar to that of human feces over days, along with the protective physiological barrier formed by the human intestinal tissue. The study is published in Nature Biomedical Engineering.
“The major paradigm shift in medicine over the past decade has been the recognition of the huge role that the microbiome plays in health and disease. This new anaerobic Intestine Chip technology now provides a way to study clinically relevant human host-microbiome interactions at the cellular and molecular levels under highly controlled conditions in vitro,” said Ingber, M.D., Ph.D.
“What stands out the most about this round of grantees is how so many of them are taking standard AI capabilities, like a chatbot or data collection, and truly revolutionizing the value of technology in typical scenarios for a person with a disability like finding a job, being able to use a computer mouse or anticipating a seizure,” says Mary Bellard, Microsoft senior accessibility architect.
The one-year grants provide use of the Azure AI platform through Azure compute credits and can also include Azure compute credits plus engineering-related costs. AI for Accessibility has three focus areas: communication and connection; employment; and daily life.
from Bloomberg Business, Justina Lee and Ksenia Galouchko
Maybe machines can figure out this crazy stock market.
At least that’s what quantitative traders who have struggled to beat the market for years will be hoping as a band of their peers roll out computer-driven strategies that learn from their own mistakes.
Lynx Asset Management, for one, is planning a new fund in October that executes strategies thought up by a machine—an approach that helped the $5 billion Swedish hedge fund beat most of its trend-following rivals in 2018.
It won’t take long for funds managed entirely by robots to be everywhere.
Facebook has heralded artificial intelligence as a solution to its toxic content problems. Mike Schroepfer, its chief technology officer, says it won’t solve everything.
In the quest to build AI that goes beyond today’s single-purpose machines, scientists are developing new tools to help AI remember the right things — and forget the rest.
Why it matters: Getting that balance right is the difference between a machine that can trade stocks like a pro but can’t make head or tail of a crossword puzzle, and one that learns all that plus a variety of other skills, and continually improves them — an important step toward human-like intelligence.
“AI is entirely about memory and forgetting,” says Dileep George, founder of the AI company Vicarious.
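The tension between remembering and forgetting has a well-known concrete form: a model trained sequentially on two tasks tends to overwrite what it learned first ("catastrophic forgetting"). The following is a minimal, hypothetical demonstration using a plain logistic regression in NumPy — the tasks, data, and training setup are invented for illustration and are not from any system mentioned above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two toy "tasks" with conflicting rules: task A labels points by the sign
# of feature 0; task B uses the exact opposite rule.
X = rng.normal(size=(200, 2))
y_a = (X[:, 0] > 0).astype(float)   # task A labels
y_b = 1.0 - y_a                     # task B flips every label

def train(w, X, y, steps=500, lr=0.5):
    """Plain gradient descent on the logistic loss."""
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))   # sigmoid predictions
        w -= lr * X.T @ (p - y) / len(y)   # logistic-loss gradient
    return w

def accuracy(w, X, y):
    return np.mean(((X @ w) > 0).astype(float) == y)

w = train(np.zeros(2), X, y_a)
acc_a_before = accuracy(w, X, y_a)   # high: the model fits task A

w = train(w, X, y_b)                 # naive sequential training on task B
acc_a_after = accuracy(w, X, y_a)    # collapses: task A is "forgotten"

print(f"task A accuracy before: {acc_a_before:.2f}, after: {acc_a_after:.2f}")
```

Continual-learning methods of the kind the article alludes to aim to protect the weights that matter for task A while still learning task B, rather than letting the second task overwrite them wholesale.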
A page called “Purchases” shows an accurate list of many — though not all — of the things I’ve bought dating back to at least 2012. I made these purchases using online services or apps such as Amazon, DoorDash or Seamless, or in stores such as Macy’s, but never directly through Google.
But because the digital receipts went to my Gmail account, Google has a list of info about my buying habits.
The mass media is one of the social forces with the most active transformative power. However, news reaches people unequally. In this blog, Erick Elejalde, from the L3S Research Centre in Hannover, Germany, explains how to go about exploring the many biases in the distribution patterns of news outlets.
Civil liberties activists trying to inspire alarm about the authoritarian potential of facial recognition technology often point to China, where some police departments use systems that can spot suspects who show their faces in public. A report from Georgetown researchers on Thursday suggests Americans should also focus their concern closer to home.
The report says agencies in Chicago and Detroit have bought real-time facial recognition systems. Chicago claims it has not used its system; Detroit says it is not using its system currently. But no federal or state law would prevent use of the technology.
According to contracts obtained by the Georgetown researchers, the two cities purchased software from a South Carolina company, DataWorks Plus, that equips police with the ability to identify faces from surveillance footage in real time.
from The British Psychological Society, Research Digest, Matthew Warren
It’s well known that science has a diversity problem, with women and members of minority groups being underrepresented. A new study suggests a solution aimed at children: reframing science as something that people do, rather than something that defines their identity, can reduce the potentially off-putting impact of the “white male” scientist stereotype.
According to the paper, published recently in Developmental Science, thoughtful use of language encourages greater interest in science among young children – and makes them less likely to lose confidence in their scientific abilities as they grow up.
Earlier this year, founder-investor Sam Altman left his high-profile role as the president of Y Combinator to become the CEO of OpenAI, an AI research outfit that was founded by some of the most prominent people in the tech industry in late 2015. The idea: to ensure that artificial intelligence is “developed in a way that is safe and is beneficial to humanity,” as one of those founders, Elon Musk, said back then to the New York Times.
The move is intriguing for many reasons, including that artificial general intelligence — or the ability for machines to be as smart as humans — does not yet exist, with even AI’s top researchers far from clear about when it might. Under the leadership of Altman, OpenAI, which was originally a non-profit, has also restructured as a for-profit company with some caveats, saying it will “need to invest billions of dollars in upcoming years into large-scale cloud compute, attracting and retaining talented people, and building AI supercomputers.”
Whether OpenAI is able to attract so much funding is an open question, but our guess is that it will, if for no reason other than Altman himself — a force of nature who easily charmed a crowd during an extended stage interview with this editor Thursday night, in a talk that covered everything from YC’s evolution to Altman’s current work at OpenAI.
The Ford government has axed provincial funding for two institutes credited with positioning Ontario and Canada at the forefront of artificial intelligence research — a field the government’s own prosperity think tank says must be supported if the province wants to remain competitive and create jobs in a booming technology sector.
The Ministry of Economic Development, Job Creation and Trade cut $20 million from the Vector Institute for Artificial Intelligence and $4 million annually from the Canadian Institute for Advanced Research (CIFAR), which supports a hub of AI-focused computer scientists. Both draw funding from the federal government and other sources and say they will adjust programming or operations.
Sen. Ron Wyden (D-OR) is one of the co-authors of a law often credited with creating the internet as we know it — and he’s got a few things he’d like to clear up about it. Among them: It doesn’t mean private companies have to take a neutral stance about what is and isn’t allowed on their platforms.
“You can have a liberal platform. You can have conservative platforms. And the way this is going to come about is not through government but through the marketplace, citizens making choices, people choosing to invest,” he told Recode in a recent interview. “This is not about neutrality.”
The law in question is Section 230 of the Communications Decency Act of 1996. Written by Wyden and former Rep. Chris Cox (R-CA), the law declares that “no provider or user of an interactive computer service shall be treated as a publisher or speaker of any information provided by another information content provider.” In other words, it protects internet companies from being held liable for the content posted by their users and says they’re platforms, not publishers.
from Chronicle of Higher Education; Cyril Oberlander, Benjamin Miller, Eric Mott and Kris Anderson
For the last several years, we have walked around to gather precise location information on where students study at the library, and then analyzed the data to determine use patterns so we can adjust library spaces to better serve the needs of the campus. That seating analysis has proved invaluable in helping us see trends. Seating-use analysis includes the average number of students studying in a seating group, capacity, average use, and peak use.
During the first year of our analysis, it was clear that some seating areas were not very well used, such as microform readers, cafe couches, and study carrels in certain locations. We targeted those low-use areas for redesign, and ran tests by looking at preferences for higher furniture use, and recognizing peak use and capacity.
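The metrics the authors describe — average use, peak use, and capacity per seating group — are straightforward to compute once head-counts are recorded. A minimal sketch, with entirely hypothetical group names and counts (the article does not publish its data):

```python
# Hypothetical hourly head-counts per seating group; names, capacities,
# and counts are invented for illustration.
observations = {
    "cafe couches":  {"capacity": 12, "counts": [2, 1, 3, 2, 4, 1]},
    "study carrels": {"capacity": 20, "counts": [15, 18, 20, 17, 19, 16]},
}

for group, data in observations.items():
    counts, cap = data["counts"], data["capacity"]
    avg_use = sum(counts) / len(counts)   # average students observed
    peak = max(counts)                    # busiest observation
    print(f"{group}: avg {avg_use:.1f}/{cap} "
          f"({avg_use / cap:.0%} of capacity), peak {peak}")
```

In this sketch, a group whose average use sits far below capacity (like the couches above) would be the kind of candidate the authors targeted for redesign.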
MIT and KTH Royal Institute of Technology, Sweden’s leading technological and engineering university, have announced a research collaboration focused on urban planning and development in Stockholm, Sweden.
The KTH-MIT Senseable Stockholm Lab will use artificial intelligence, big data, and new sensor technologies to help the city evolve into a more livable and sustainable metropolis. The City of Stockholm is part of the collaboration, which will commence work this spring and is planned to span five years.
The announcement was made during the recent 2019 Forum on Future Cities at MIT, a conference produced in association with the World Economic Forum’s Council on Cities and Urbanization.
We have already begun to offer undergraduate courses for those looking to minor. You can check out our course selection on the @NyuAlbert “Public Course Search.”
from Washington D.C. Artificial Intelligence & Deep Learning meetup
Washington, DC, May 22, starting at 6 p.m. “We are thrilled to host Nick Schmidt and Dr. Bryce Stephens of BLDS partners for an informed discussion about machine learning for high-impact and highly-regulated real-world applications.” [RSVP required]
Gaithersburg, MD, May 30, starting at 9:30 a.m., National Institute of Standards and Technology, Red Auditorium (100 Bureau Drive). “Please join the Center for Data Innovation, in partnership with NIST, for a conversation about the state of play in developing standards and oversight for AI, and the importance of these initiatives for AI innovation, adoption, and governance.” [registration required]
Stirling, Scotland October 9-10 at Stirling University. “The theme for this year is ‘Bridging Worlds’, with a focus on connecting communities, cultures and topics that stretch beyond altmetrics alone.” Deadline for paper submissions is June 14.
“Can you characterize the ionosphere with selected digitized radio-frequency (RF) spectrum recordings from sounder receiver data?” Deadline for submissions is June 28.
“Geopolitical Forecasting Challenge 2 encourages novel approaches that embrace non-traditional methods and harnesses the collective community, while offering Solvers the chance to win a share of $250,000 in prize money.” Milestone period one ends on July 18.
“The Chan Zuckerberg Initiative will soon invite applications for open source software projects that are essential to biomedical research. Applicants can request funding between $50k and $250k for one year.” Deadline for applications is August 1.
The potential costs involved when investing in the required hardware for an ambitious ML development project can seem daunting, especially for a small- to medium-sized enterprise without the in-house resources to build the necessary infrastructure on-premise. But once again, cloud-based alternatives can come to the rescue, and more specifically, Kubernetes platforms can often — but not always, as explained below — serve as the perfect conduit for at-scale ML software deployment and creation.
“Because Kubernetes can be viewed as the great equalizer offering systematic scheduling and resource management across multiple pieces of infrastructure, ML workloads will gravitate towards Kubernetes,” said Ravi Lachhman, technical evangelist for AppDynamics.
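In practice, handing an ML workload to Kubernetes for scheduling looks something like the manifest below — a sketch of a one-off training run as a Kubernetes Job. The image name, command, and resource numbers are placeholders, not from the article; GPU scheduling via the `nvidia.com/gpu` resource assumes the cluster runs the NVIDIA device plugin.

```yaml
# Illustrative Kubernetes Job for a containerized training run.
apiVersion: batch/v1
kind: Job
metadata:
  name: ml-training-job
spec:
  backoffLimit: 2                 # retry a failed run up to twice
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: trainer
          image: registry.example.com/team/train:latest   # placeholder image
          command: ["python", "train.py"]                 # placeholder entrypoint
          resources:
            requests:
              cpu: "4"
              memory: 16Gi
            limits:
              nvidia.com/gpu: 1   # schedule onto a node with a free GPU
```

This is the "equalizer" point in the quote above: the scheduler, not the team, decides which machine has the CPU, memory, and GPU to run the job.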
from arXiv, Computer Science > Machine Learning; Ben Poole, Sherjil Ozair, Aaron van den Oord, Alexander A. Alemi, George Tucker
Estimating and optimizing mutual information (MI) is core to many problems in machine learning; however, bounding MI in high dimensions is challenging. To establish tractable and scalable objectives, recent work has turned to variational bounds parameterized by neural networks, but the relationships and tradeoffs between these bounds remain unclear. In this work, we unify these recent developments in a single framework. We find that the existing variational lower bounds degrade when the MI is large, exhibiting either high bias or high variance. To address this problem, we introduce a continuum of lower bounds that encompasses previous bounds and flexibly trades off bias and variance. On high-dimensional, controlled problems, we empirically characterize the bias and variance of the bounds and their gradients and demonstrate the effectiveness of our new bounds for estimation and representation learning.
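One of the variational lower bounds the paper analyzes, InfoNCE, is simple enough to sketch in a few lines. Below is a minimal NumPy illustration on correlated Gaussians; the fixed bilinear critic `f(x, y) = a * x * y` is an illustrative simplification standing in for the learned neural-network critic the paper uses, and the constants are arbitrary. Note the estimate is capped at `log(K)` for batch size `K` — one source of the bias the paper discusses when MI is large.

```python
import numpy as np

rng = np.random.default_rng(0)
K = 128      # batch size; the InfoNCE estimate cannot exceed log(K) nats

# Correlated Gaussian pairs; for rho = 0.9 the true MI is
# -0.5 * log(1 - rho**2) ≈ 0.83 nats.
rho = 0.9
x = rng.normal(size=K)
y = rho * x + np.sqrt(1 - rho**2) * rng.normal(size=K)

# Fixed bilinear critic (illustrative stand-in for a learned network).
a = 2.0
scores = a * np.outer(x, y)          # scores[i, j] = f(x_i, y_j)

def logsumexp(s, axis):
    """Numerically stable log-sum-exp along an axis."""
    m = s.max(axis=axis, keepdims=True)
    return np.squeeze(m, axis) + np.log(np.exp(s - m).sum(axis=axis))

# InfoNCE lower bound on I(X; Y): average log-ratio of each paired score
# to the mean score against "negatives" drawn from the same batch.
bound = np.mean(np.diag(scores) - (logsumexp(scores, axis=1) - np.log(K)))
print(f"InfoNCE estimate: {bound:.3f} nats (cap: log K = {np.log(K):.3f})")
```

The paper's continuum of bounds interpolates between estimators like this one (low variance, biased when MI is large) and unbounded estimators with high variance.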
“With the release of Python 3.8 coming soon, the core development team has asked me to summarize our latest discussions on the new features planned for Python 4.0, codename “ouroboros: the snake will eat itself”. This will be an exciting release and a significant milestone, many thanks to the hard work of over 100 contributors.”