I have skin in this game. I am a professor of data science at NYU and a social-science consultant for companies, where I conduct quantitative research to help them understand and improve diversity. I make my living from data, yet I consistently find that whether I’m talking to students or clients, I have to remind them that data is not a perfect representation of reality: It’s a fundamentally human construct, and therefore subject to biases, limitations, and other meaningful and consequential imperfections.
The clearest expression of this misunderstanding is the question heard from boardrooms to classrooms whenever well-meaning people try to get to the bottom of tricky issues.
The Murray State University Center for Telecommunications Systems Management is transforming into the Center for Computer and Information Technology — its award-winning Program of Distinction — to reflect its modern approach to computer education.
“As technology has evolved and the boundaries between specific CIT industries have started to blur, now is the right time to transform computer education at Murray State University,” said Dr. Michael Ramage, the center’s director. “We’re doing this by improving alignment among existing computer and information technology related degrees, combining recruitment and retention efforts, increasing overall CIT efforts, considering new CIT academic pathways and increasing academic–business partnerships.”
CRISPR-Cas9, a powerful gene-editing tool, stands out among DNA editors for its efficiency and potential. Still, there is growing controversy and interest surrounding the accuracy and safety of this DNA-altering technology.
The basic idea behind CRISPR, which stands for “clustered regularly interspaced short palindromic repeats,” is to alter a sequence of DNA to achieve a goal — like fixing a harmful mutation.
But even in a system that prioritizes precision, CRISPR can still yield mistakes. Now, using the power of machine learning, James Zou, PhD, assistant professor of biomedical data science, and collaborators have created an algorithm that predicts what types of mistakes are likely to occur during CRISPR editing.
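The general idea of outcome prediction can be illustrated with a toy model. The sketch below is not the Zou group's algorithm: it uses a synthetic labelling rule (guides ending in G tagged "insertion-prone") and a hand-rolled logistic regression on one-hot-encoded guide sequences, purely to show how sequence features can be mapped to a predicted edit outcome. Real predictors are trained on thousands of measured repair outcomes.

```python
import math
import random

BASES = "ACGT"

def one_hot(seq):
    """Flatten a DNA sequence into a one-hot feature vector."""
    vec = []
    for base in seq:
        vec.extend(1.0 if base == b else 0.0 for b in BASES)
    return vec

def predict(weights, features):
    """Logistic-regression probability that an edit is insertion-prone."""
    z = sum(w * x for w, x in zip(weights, features))
    z = max(-30.0, min(30.0, z))  # clamp to avoid math.exp overflow
    return 1.0 / (1.0 + math.exp(-z))

def train(data, epochs=50, lr=0.5):
    """Plain stochastic gradient descent on the logistic loss."""
    weights = [0.0] * len(one_hot(data[0][0]))
    for _ in range(epochs):
        for seq, label in data:
            x = one_hot(seq)
            err = predict(weights, x) - label
            weights = [w - lr * err * xi for w, xi in zip(weights, x)]
    return weights

random.seed(0)

def random_guide(length=8):
    return "".join(random.choice(BASES) for _ in range(length))

# Synthetic rule, purely for illustration: guides ending in G are
# labelled insertion-prone (1.0), everything else deletion-prone (0.0).
train_data = [(s, 1.0 if s[-1] == "G" else 0.0)
              for s in (random_guide() for _ in range(300))]
weights = train(train_data)

test_data = [(s, 1.0 if s[-1] == "G" else 0.0)
             for s in (random_guide() for _ in range(100))]
accuracy = sum((predict(weights, one_hot(s)) > 0.5) == (y == 1.0)
               for s, y in test_data) / len(test_data)
print(f"held-out accuracy: {accuracy:.2f}")
```

Because the synthetic rule depends only on the final base, the model quickly learns large weights on the last position's features and classifies held-out guides almost perfectly; real repair-outcome data is far noisier and motivates deeper models.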
from The Fulcrum, by Barbara Risman and Christine Percheski
With the first presidential debates behind us and the next ones rapidly approaching, the fight for the Democratic nomination is in full swing. Though the presidential hopefuls disagree on some of the specifics, the story of the 2020 election — at least at this point — is shaping up to be one that includes lots of smart policy informed by some of the best research in the field. Just look at Elizabeth Warren’s relentless policy proposals that are backed by academic research and may have driven her recent surge in the polls.
As the co-leaders of the Chicagoland chapter of the Scholars Strategy Network, we applaud the use of research and evidence in campaigns, and later in policymaking. This is exactly what our organization has been working towards for years. But it doesn’t just need to happen in presidential election years. Local and state candidates can and should work with academics to inform their policy proposals too. Our experiences in Chicago show a couple of ways this can work.
Facial recognition technology. Algorithms that decide who is a good candidate for a loan or medical procedure. Interactive robots in workplaces and seniors’ homes.
These are just a few examples of the many new and emerging technologies that promise to reshape society in profound and, perhaps, unexpected ways – often raising thorny ethical questions in the process.
As the inaugural director of the University of Toronto’s new Schwartz Reisman Institute for Technology and Society and the inaugural Schwartz Reisman Chair in Technology and Society, Gillian Hadfield will draw on her varied background – in economics and law, humanities, business and high tech – to help ensure technological innovation is implemented fairly and equitably in societies around the world.
“Technologies are a means to an end,” says Hadfield, who is a U of T professor in the Faculty of Law and the Rotman School of Management.
In April this year a special collection examining social media and politics was published in SAGE Open. Guest edited by Joshua A. Tucker and Pablo Barberá, the articles grew out of a series of conferences held by NYU’s Social Media and Political Participation lab (SMaPP) and the NYU Global Institute for Advanced Study (GIAS), known as SMaPP-Global. Upon publication Joshua Tucker said ‘the collection of articles also shows the value of exposing researchers from a variety of disciplines with similar substantive interests to each other’s work at regular intervals’. Interdisciplinary collaborative research projects are a cornerstone of what makes computational social science such an interesting field. We were intrigued to know more, so we caught up with Josh and Pablo.
This brief explores how AI and related applications can address some of the most pressing challenges facing cities and metropolitan areas. As with every form of technology that has preceded it, society must be intentional about the exact challenges we want AI to solve and considerate of the social groups and industries that stand to benefit from the applications we deliver. While AI is still in its early stages of development, now is the ideal time to bring that intentionality to urban applications.
Researchers can now discover and explore research from a broad range of Springer Nature publications from chemistry to public health through the free, AI-based search engine Semantic Scholar. The expanded collaboration between Semantic Scholar’s creators, AI2, and Springer Nature builds on a pilot project started in 2016 that originally included publications in computer science and biomedicine. By incorporating articles and book chapters in a wider range of research areas, scientists and scholars can now more easily find and assess relevant Springer Nature content to pursue and advance scientific discovery.
At least eight EU member states expect they need more time than originally envisioned to put in place a national strategy on artificial intelligence (AI), EUobserver has learned from diplomatic sources.
The original goal was to have these plans by mid-2019.
Now Croatia, Cyprus, Hungary, Slovenia, and Spain expect their final strategies will only be published by the end of 2019.
The Netherlands expected its strategy in September or October, Austria referred to “autumn”, while Ireland said it would be done in the final quarter of 2019.
“From day one we knew, when we had just a small amount of people at the company, we had a very clear focus,” co-founder and chairman Ori Allon said in an interview. “We wanted to bring more tech and data and transparency to real estate, and I think it’s paid off.”
Based in New York, Compass earlier this year established an engineering hub in Seattle run by Microsoft’s former CTO of AI, Joseph Sirosh. It’s continuing to hire there and elsewhere (alongside making acqui-hires for talent).
The Series G funding — which brings the total raised by Compass to $1.5 billion — is coming in at a $6.4 billion valuation, a huge uptick for the company compared to its $4.4 billion valuation less than a year ago. Part of the reason for that has been the company’s massive growth: in the last quarter, its revenues were up 250% compared to Q2 2018.
A new breed of data scientist — professionals focused on protecting data — is emerging. Here’s what the role entails and why the position may be a good addition to your IT team.
“18 [#deeplearning] algorithms … presented at top-level research conferences … Only 7 of them could be reproduced w/ reasonable effort … 6 of them can often be outperformed w/ comparably simple heuristic methods.”
Automated cameras and other sensors deployed in the wild are transforming the way biologists monitor natural ecosystems and animal populations. These technologies can collect huge amounts of data, however, and conservation biologists are increasingly turning to the tools of artificial intelligence (AI) to sort through it all.
In particular, a machine learning method called “deep learning,” already widely used in face recognition and other image- and speech-recognition applications, is now being applied by conservation biologists to analyze images, videos, and sound recordings of everything from African elephants to aquatic insects.
Biologists Donald Croll and Bernie Tershy, who run the Conservation Action Lab at UC Santa Cruz, have been using deep learning in their work on seabird conservation. They began developing acoustic technologies for monitoring seabird populations as a research project and eventually started a company, Conservation Metrics, to provide wildlife monitoring services.
from Horizon: the EU Research & Innovation magazine, by Richard Gray
‘It’s very difficult to be an AI researcher now and not be aware of the ethical implications these algorithms have,’ said Professor Bernd Stahl, director of the Centre for Computing and Social Responsibility at De Montfort University in Leicester, UK.
‘We have to come to a better understanding of not just what these technologies can do, but how they will play out in society and the world at large.’
He leads a project called SHERPA, which is attempting to wrestle with some of the ethical issues surrounding smart information systems that use machine learning, a form of AI, and other algorithms to analyse big data sets.
UC San Francisco scientists recently showed that brain activity recorded as research participants spoke could be used to create remarkably realistic synthetic versions of that speech, raising hope that one day such brain recordings could be used to restore voices to people who have lost the ability to speak. However, it took the researchers weeks or months to translate brain activity into speech, a far cry from the instant results that would be needed for such a technology to be clinically useful.
Now, in a complementary new study, again working with volunteer study subjects, the scientists have for the first time decoded spoken words and phrases in real time from the brain signals that control speech, aided by a novel approach that involves identifying the context in which participants were speaking.
Boston, MA August 13, starting at 6 p.m., Ned Devine’s (1 Faneuil Hall Marketplace). “Join us to raise a glass for this summer’s recent healthtech IPO successes. 2019 has been a milestone year for the healthtech sector, and we look forward to this trend continuing.” [free, registration required]
Medford, MA August 5-9 at Tufts University. “The international conference on Modern Challenges in Imaging will honor the achievements of Tufts’ only Nobel Laureate and keep his legacy thriving by gathering top international researchers in mathematics, engineering, science, and medicine. A broad range of tomographic modalities, mathematics, and applications will be presented to provide an overview of the different aspects and foster new collaborations.” [registration required]
Huntington Beach, CA March 5-7. “The Society for Consumer Psychology conference provides opportunities for a high level of interaction among participants interested in consumer research and in advancing the discipline of consumer psychology in a global society.” Deadline for submissions is August 30.
San Jose, CA March 15-18, 2020. “Discover how to turn your raw data into competitive advantage at Strata in San Jose. Be a part of the program—apply to speak by September 5.”
Palermo, Italy June 3-5. “Since its inception in 1985, AISTATS has been an interdisciplinary gathering of researchers at the intersection of artificial intelligence, machine learning, statistics, and related areas.” Deadline for submissions is October 8.
We hope this list of APIs, bulk downloads, and tutorials will help you begin exploring the many ways the Library of Congress provides machine-readable access to its digital collections.
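One of those machine-readable access points is the loc.gov JSON API, which returns structured search results when `fo=json` is appended to an ordinary search URL. The sketch below builds such a request with the standard library; treat the exact endpoint and parameter names (`q`, `sp` for page, `fo`) as conventions to verify against the Library's own documentation.

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

def build_search_url(query, page=1):
    """Build a loc.gov search URL that asks for JSON-formatted results."""
    params = urlencode({"q": query, "sp": page, "fo": "json"})
    return f"https://www.loc.gov/search/?{params}"

def fetch_titles(query):
    """Fetch one page of results and return item titles (live network call)."""
    with urlopen(build_search_url(query)) as resp:
        data = json.load(resp)
    return [item.get("title") for item in data.get("results", [])]

url = build_search_url("baseball cards")
print(url)
```

Calling `fetch_titles` makes a live request against loc.gov, so in practice you would add error handling and respect any rate limits the Library documents.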
We use R Shiny a lot and we love it. That’s why our open-source work revolves mostly around improving the experience with Shiny: it’s a great tool, but on its own it sometimes lacks functionality – that’s where we come in.
I want to introduce you to our open-source packages by giving an overview of each along with resources for exploring them in more detail. This way you can quickly decide if there is something useful to you among them (and I bet there is).
EGG is a new toolkit that allows researchers and developers to quickly create game simulations in which two neural network agents devise their own discrete communication system in order to solve a task together. For example, in one of the implemented games, one agent sees a handwritten digit and has to invent a communication code to tell the other agent which number it represents.
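The game structure EGG automates can be illustrated with a toy Lewis signaling game. To be clear, EGG itself wraps PyTorch neural agents; the tabular sketch below is not EGG's API, just a minimal illustration of two agents inventing a discrete code: a sender maps an observed "digit" to one of a few symbols, a receiver maps the symbol back to a digit, and both are reinforced when the receiver is right.

```python
import random

random.seed(1)
N_INPUTS = 3     # distinct "digits" the sender can observe
N_MESSAGES = 3   # size of the discrete vocabulary the agents invent
EPSILON = 0.2    # exploration rate

# Tabular "policies": a preference score per (input -> message) and
# (message -> guess) pair, updated by a simple reinforcement rule.
sender = [[0.0] * N_MESSAGES for _ in range(N_INPUTS)]
receiver = [[0.0] * N_INPUTS for _ in range(N_MESSAGES)]

def choose(prefs):
    """Epsilon-greedy choice over preference scores."""
    if random.random() < EPSILON:
        return random.randrange(len(prefs))
    return max(range(len(prefs)), key=lambda i: prefs[i])

for _ in range(6000):
    digit = random.randrange(N_INPUTS)
    msg = choose(sender[digit])      # sender emits a discrete symbol
    guess = choose(receiver[msg])    # receiver decodes it into a digit
    reward = 1.0 if guess == digit else -0.1
    sender[digit][msg] += reward     # both agents share the same reward
    receiver[msg][guess] += reward

# Read off the emergent protocol greedily (no exploration).
correct = 0
for digit in range(N_INPUTS):
    msg = max(range(N_MESSAGES), key=lambda m: sender[digit][m])
    guess = max(range(N_INPUTS), key=lambda d: receiver[msg][d])
    correct += guess == digit
print(f"protocol accuracy: {correct}/{N_INPUTS}")
```

The shared reward is what pushes the pair toward a consistent mapping: failed symbols are gradually abandoned, successful ones reinforced. EGG replaces these lookup tables with neural networks and handles the optimization (e.g. via REINFORCE or Gumbel-Softmax) for you.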