Traditionally, the problem for rice farmers has been the need to diagnose the disease by visual up‑close inspection, made problematic by the difficult working conditions in the rice paddy. The areas are vast and the fields often wet, to the extent that the crop is partially submerged. In addition, once the crop has grown too tall, it is impossible to view the centre of the fields from the edges. Travelling into the fields can damage the plants, severely impacting the farmers’ ability to notice any problems. This means that disease can go unnoticed until the crop is harvested, affecting yields and profits.
By flying a drone over the fields, images can be collected and used to identify disease, if present. The system can even map out the disease's location for a targeted control response. However, most smallholder farmers do not have the money or skill to own and fly a drone in such a way, nor do they possess the expertise to correctly identify disease in this way. AGRIONE has been developed with these limitations in mind. It is capable of diagnosing patches of diseased plants over a much larger area from the air, such as a whole rice paddy. This is a far quicker, easier and more reliable method of diagnosing infected arable plants than traditional ground-level assessments by a human surveyor or agronomist.

The AGRIONE system consists of three components: a drone survey mobile application, a farmer mobile application, and a cloud-based service platform. These three components work together to provide automatic disease detection and mapping to individual farmers.
Positive news about a potential Covid-19 treatment — a drug that blocks the receptor for the inflammatory protein interleukin-6 (IL-6) — highlights the hazards of sharing research findings via Twitter and other social media.
Researchers with the large REMAP-CAP clinical trial reported through a variety of channels, most notably Twitter, that the use of the IL-6 receptor antagonists tocilizumab or sarilumab significantly reduced deaths among critically ill patients with Covid-19.
In response to what should have been good news, some experts essentially shrugged it off on social media, partially because several earlier studies had yielded disappointing results.
A few years ago, a number of A.I.-research organizations began to develop systems for addressing ethical impact. The Association for Computing Machinery’s Special Interest Group on Computer-Human Interaction (SIGCHI) is, by virtue of its focus, already committed to thinking about the role that technology plays in people’s lives; in 2016, it launched a small working group that grew into a research-ethics committee. The committee offers to review papers submitted to SIGCHI conferences, at the request of program chairs. In 2019, it received ten inquiries, mostly addressing research methods: How much should crowd-workers be paid? Is it O.K. to use data sets that are released when Web sites are hacked? By the next year, though, it was hearing from researchers with broader concerns. “Increasingly, we do see, especially in the A.I. space, more and more questions of, Should this kind of research even be a thing?” Katie Shilton, an information scientist at the University of Maryland and the chair of the committee, told me.
Shilton explained that questions about possible impacts tend to fall into one of four categories. First, she said, “there are the kinds of A.I. that could easily be weaponized against populations”—facial recognition, location tracking, surveillance, and so on. Second, there are technologies, such as Speech2Face, that may “harden people into categories that don’t fit well,” such as gender or sexual orientation. Third, there is automated-weapons research. And fourth, there are tools “to create alternate sets of reality”—fake news, voices, or images.
When the SIGCHI ethics committee began its work, Shilton said, conference reviewers—ordinary computer scientists deciding whether to accept or reject papers based on intellectual merit—“were really serving as the one and only source for pushing back on a lot of practices which are considered controversial in research.” This had pluses and minuses. “Reviewers are well placed to be ethical gatekeepers in some respects, because they’re close to this research. They have good technical knowledge,” Shilton said. “But lots and lots of folks in computer science have not been trained in research ethics.” Knowing when to raise questions about a paper may, in itself, require a level of ethical education that many researchers lack. Furthermore, deciding whether research methods are ethical is relatively simple compared with questioning the ethical aspects of a technology’s potential downstream effects. It’s one thing to point out when a researcher’s methods are wrong. “It is much harder to say, ‘This line of research shouldn’t exist,’ ” Shilton said. The committee’s decisions are nonbinding.
The gold standard to learn if schools can open safely is fairly simple: Open schools, measure Covid-19 incidence, and see what happens. Many US school districts have now done this, and we have the data.
First, researchers in North Carolina published results from 11 school districts and more than 100,000 students and staff. Schools in those districts employed mandatory masking and 6-foot distancing where feasible, but no major capital improvement to HVAC systems or buildings. In the first quarter of this school year, they found the rate of transmission of Covid-19 in schools was dramatically lower (roughly 1/25) than the level of transmission in the community. Among all of the Covid-19 infections observed in schools, the state health department’s tracers found 96 percent were acquired in the community, and there were no documented cases of the virus passing from child to adult in schools — zero.
Second, a similar study followed 17 schools in Wisconsin. Like North Carolina, those schools required masks indoors, 3-foot distancing with an effort to distance farther whenever feasible, and no major capital improvements. Between August 31 and November 29, with more than 4,500 students and 650 staff, they found seven cases of coronavirus transmission to children and also found no cases of coronavirus transmission to educators in the buildings. Further, these schools eliminated transmission at the same time that the surrounding community saw a rapid rise in Covid-19 cases.
It has become trivial to point out that algorithmic systems increasingly pervade the social sphere. Improved efficiency—the hallmark of these systems—drives their mass integration into day-to-day life. However, as a robust body of research in the area of algorithmic injustice shows, algorithmic systems, especially when used to sort and predict social outcomes, are not only inadequate but also perpetuate harm. In particular, a persistent and recurrent trend within the literature indicates that society’s most vulnerable are disproportionately impacted. When algorithmic injustice and harm are brought to the fore, most of the solutions on offer (1) revolve around technical solutions and (2) do not center disproportionately impacted communities. This paper proposes a fundamental shift—from rational to relational—in thinking about personhood, data, justice, and everything in between, and places ethics as something that goes above and beyond technical solutions. Outlining the idea of ethics built on the foundations of relationality, this paper calls for a rethinking of justice and ethics as a set of broad, contingent, and fluid concepts and down-to-earth practices that are best viewed as a habit and not a mere methodology for data science. As such, this paper mainly offers critical examinations and reflection and not “solutions.”
OK, to elaborate: I’m of course thrilled when AI ethics work gets a high-profile write-up, and thrilled to see scholars like Alex Hanna, Katie Shilton, and Brent Hecht quoted. And every long-form writer has their own style and own vibe, so fine. But: 1/n
notes, the framing of the piece is dismissive of CS/HCI critiques of binary gender classification as themselves sound science — it doesn’t dig into the nuanced work in the space by folks like @morganklauss, and worse… 2/n
Colby College is a private liberal arts school located in Waterville, Maine. You can take classes in art history, chemistry, music, all the staples, and now the school is adding artificial intelligence to the list. Colby is among the first liberal arts colleges to create an artificial intelligence institute to teach students about AI and machine learning through the lenses of subjects like history, gender studies and biology. The college received a $30 million gift from a former student to set up its new institute.
This, of course, comes as the world is grappling with ethics and AI and how to build a moral foundation into algorithms. I spoke with David Greene, the president of Colby College. He said that eventually, he’d like every student to study artificial intelligence to graduate. The following is an edited transcript of our conversation.
[David] Edwards and his graduate student, Oscar Morton, as well as a number of colleagues, assembled 31 papers that examined wildlife populations in areas where hunting and trapping occurred, as well as in areas where there was none. Overall, these papers chronicled the fates of individuals from 133 species: 452 mammals belonging to 99 species, 36 birds from 24 species, and 18 reptiles from 10 species.
The researchers then built models that helped them assess the impact that a variety of factors might be having on populations of these 133 species. The factors included how much trade there was in a species; whether it was desired for food, medicine, or some other purpose; and how far the species lived from human settlements and potential markets. They also looked at whether the species lived in a protected or unprotected area.
Virginia is set to become the second state after California to pass data privacy legislation. The bill could become law as soon as April, when Gov. Ralph Northam is expected to sign a measure that has passed both chambers of the state legislature but is awaiting a few last-minute tweaks.
Known as the Consumer Data Protection Act, the law would go into effect Jan. 1, 2023, and would apply to all businesses that control or process data for at least 100,000 Virginians, or those commercial entities that derive at least 50 percent of their revenues from the sale and processing of consumer data of at least 25,000 customers.
When it comes to defending the intellectual property (IP) rights of Linux and open-source software, leading global banks aren’t the first businesses that come to mind. Things have changed. Barclays, the London-based global corporate and investment bank, and the TD Bank Group, with its 26 million global customers, have joined the leading open-source IP defense group, the Open Invention Network (OIN).
For years, the OIN, the largest patent non-aggression consortium, has protected Linux from patent attacks and patent trolls. Recently, it expanded its scope beyond core Linux programs and adjacent open-source code by broadening its Linux System Definition. In particular, that means patents relating to the Android Open Source Project (AOSP) 10 and the Extended File Allocation Table (exFAT) file system are now protected.
As important as this is, why would banks, no matter how big, care? It’s because even banks care about opposing the abuse of IP rights by patent assertion entities (PAEs), better known to most of us as “patent trolls.” Even banks are subject to patent troll attacks these days.
Luckily, there is no scarcity of recipes on the internet, or on Kaggle. We found over 800 recipes for American style fluffy pancakes. However, we needed recipes with enough positive and negative reviews to train the model to predict what amount of ingredients would give us the fluffiest pancakes.
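The approach described here — keeping only recipes with enough reviews to trust, then relating ingredient amounts to a fluffiness score distilled from those reviews — can be sketched roughly as follows. This is a minimal illustration, not the authors' actual pipeline: the data is synthetic, and the column names, review threshold, and plain least-squares model are all assumptions.

```python
import numpy as np

# Illustrative stand-in data: each row is a recipe's per-serving
# ingredient amounts (flour, milk, egg, baking powder); the target is a
# fluffiness score distilled from that recipe's positive/negative reviews.
rng = np.random.default_rng(0)
X = rng.uniform(0.1, 2.0, size=(200, 4))
fluffiness = 1.5 * X[:, 3] - 0.4 * X[:, 1] + rng.normal(0, 0.1, size=200)

# Keep only recipes with enough reviews for the score to be meaningful
n_reviews = rng.integers(1, 50, size=200)
enough = n_reviews >= 10
X_train, y_train = X[enough], fluffiness[enough]

# Ordinary least squares: which ingredient amounts predict fluffiness?
design = np.column_stack([np.ones(len(X_train)), X_train])  # add intercept
coefs, *_ = np.linalg.lstsq(design, y_train, rcond=None)

for name, c in zip(["intercept", "flour", "milk", "egg", "baking_powder"], coefs):
    print(f"{name}: {c:+.2f}")
```

With a fitted model like this, "fluffiest pancake" prediction reduces to searching the ingredient space for amounts that maximize the predicted score — which is the spirit of the Kaggle exercise described above.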
Researchers at Yale University have been studying the brain for generations. Now, a new and historic philanthropic gift is launching an ambitious research enterprise devoted to the study of human cognition that will supercharge Yale’s neuroscience initiative and position the university to reveal the brain in its full, dynamic complexity.
The gift, made by Yale alumnus Joseph C. Tsai ’86, ’90 J.D., and his wife, Clara Wu Tsai, will establish the Wu Tsai Institute, a new kind of research organization that bridges the psychological, biological, and computational sciences. The Institute will pursue a mission to understand human cognition and explore human potential by sparking interdisciplinary inquiry. It will harness and amplify Yale’s strengths in neuroscience broadly defined, joining hundreds of researchers in a university-wide effort to understand the brain and mind at all levels — from molecules and cells to circuits, systems, and behavior.
UConn on Tuesday announced the opening of a data science technology incubator in Stamford.
The 5,685-square-foot facility at 9 West Broad Street is initially hosting five start-up companies. The incubator includes office space and shared work areas.
The Stamford incubator is one of three Technology Incubation Programs the university operates. The other two are at the school’s main campus in Storrs and at UConn Health in Farmington.
Radenka Maric, UConn’s vice president for research, innovation and entrepreneurship, said the combination of Stamford’s vibrant business community and UConn’s strengths in data science and innovation will allow the new technology incubator to “become an engine for new company and job creation.”
Online March 3, starting at 1:30 p.m. “Our hope is attendees will be better positioned to make measurable progress in bringing about continual, significant, and sustained change that shall enable gainful strides in further valuing diversity, equity and inclusion within our computing community.” [registration required]
The eScience Institute’s Data Science for Social Good program is now accepting applications for student fellows and project leads for the 2021 summer session. Fellows will work with academic researchers, data scientists and public stakeholder groups on data-intensive research projects that will leverage data science approaches to address societal challenges in areas such as public policy, environmental impacts and more. Student applications due 2/15 – learn more and apply here. DSSG is also soliciting project proposals from academic researchers, public agencies, nonprofit entities and industry who are looking for an opportunity to work closely with data science professionals and students on focused, collaborative projects to make better use of their data. Proposal submissions are due 2/22.
“Fairness is becoming a paramount consideration for data scientists. Mounting evidence indicates that the widespread deployment of machine learning and AI in business and government is reproducing the same biases we’re trying to fight in the real world. But what does fairness mean when it comes to code? This practical book covers basic concerns related to data security and privacy to help data and AI professionals use code that’s fair and free of bias.”
Got a big day at work? Exercise can strengthen your brainpower and bolster your emotional fortitude, says Panteleimon Ekkekakis, a professor of exercise psychology at Iowa State University, who studies how working out affects the mind and mood.
What kind of exercise should I do?
Although any type of exercise can improve health, cardio triggers optimal cognitive and emotional states, Ekkekakis says. Other pursuits, such as yoga and weightlifting, seem promising but haven’t been as thoroughly researched.