As a software engineer and computational linguist who spends most of her work and even leisure hours in front of a computer screen, I am concerned about what I read online. In the age of social media, many of us consume unreliable news sources. We’re exposed to a wild flow of information in our social networks — especially if we spend a lot of time scanning our friends’ random posts on Twitter and Facebook.
My colleagues and I at the Discourse Processing Lab at Simon Fraser University have conducted research on the linguistic characteristics of fake news.
Chess computers, also known as chess engines, have different personalities on the board. Stockfish, an open-source engine freely available and maintained by a community of programmers, has a clean, positional style, Pete Cilento, executive editor at Chess.com, told me by email. Leela Chess Zero, also known as Lc0, another open-source engine, “plays a more intuitive and hazy game,” he said. “It has gained so many fans because it plays superhuman chess in a human way.” Houdini, developed by programmer Robert Houdart, has a more aggressive and sacrificial style, which is why it has been compared to the great players of the Romantic Era. In a way, they are human: Behind every engine is a team of programmers, engineers, and chess experts.
Decades before self-driving cars or Siri, chess was an obsession of AI researchers, and getting a computer to beat a human master their holy grail. Today, twenty-two years after IBM’s Deep Blue shocked the world by beating then–world champion Garry Kasparov, chess computers have left humans in the dust. The latest generation of programs, such as AlphaZero, developed by the Alphabet-owned company DeepMind, does things that even its human creators don’t understand. Demis Hassabis, co-founder and CEO of DeepMind, has described aspects of AlphaZero’s decision-making as a “black box.” How it assesses the overall value of a rook compared with a knight, for example: “We don’t actually know.”
Google’s Chrome team is feeling pressure from competitors over ad tracking. Apple has long offered industry-leading protection against tracking cookies, while Mozilla recently announced that Firefox will begin blocking tracking cookies by default. Microsoft has been experimenting with tracking protection features in Edge, too.
But Google has a problem: it makes most of its money selling ads. Adopting the same aggressive cookie blocking techniques as its rivals could prevent Google’s customers from targeting ads—potentially hurting Google’s bottom line.
So in a blog post last week, Google outlined an alternative privacy vision—one that restricts some forms of user tracking without blocking the use of tracking cookies any time soon.
We know that even when people have a larger device available, they sometimes prefer to use a mobile phone instead — simply because the mobile phone is always with them and it may be more convenient to use it instead of switching devices (a phenomenon we call device inertia).
But does this mean that mobile will displace computers? Will we eventually discard big-screen devices in favor of smaller, portable ones for tasks as complex as filing taxes or writing research reports?
In this article we don’t aim to answer that question: instead we assess the current state of device preferences. We look at the importance that people assign to activities done on different devices. Has mobile caught up with computers yet?
Industrial IoT is killing the traditional narrative of startups disrupting older players out of the market. Instead, leaders in industrial equipment are parlaying both their expertise and their customers’ hesitation over new products into new business models — and in the process, facilitating a gradual transformation of their business and their customers’ operations.
Along the way, they are buying startups or investing in them as a way to keep their fingers on the pulse. But conservative sectors such as manufacturing and oil extraction have too much at stake to build their future business on an unknown platform. All of this, plus the new business models, is clearly laid out in a new report from Boston Consulting Group.
Swamped with a surge of incoming freshmen it hadn’t foreseen, Virginia Tech this summer tried all kinds of dorm-maximizing tricks to squeeze in its largest class ever. Singles morphed into doubles, doubles into triples. Lounges became pop-up bedrooms for three or more.
And it shunted more than 500 students into a pair of hotels, including a Holiday Inn Express, here in the hills of Southwest Virginia.
For AI accelerators racing to achieve optimum accuracy at minimum latency, especially in autonomous vehicles (AVs), teraflops have become the key metric in many so-called brain chips. The contenders include Nvidia’s Xavier SoC, Mobileye’s EyeQ5, Tesla’s Full Self-Driving computer chip and NXP-Kalray chips.
In an exclusive interview with EE Times last week, Forrest Iandola, CEO of DeepScale, explained why this sort of brute-force processing approach is unsustainable, and said many of the assumptions common among AI hardware designers are outdated. As AI vendors gain more experience with more AI applications, it’s becoming evident to him that different AI tasks are starting to require different technological approaches. If that’s true, the way that AI users buy AI technology is going to change, and vendors are going to have to respond.
from The Star (Vancouver, BC), Jen St. Denis and Ainslie Cruickshank
Fisheries Minister Jonathan Wilkinson announced a further $2.7 million investment in salmon conservation projects, after government officials confirmed Thursday morning that salmon stocks across British Columbia are returning in concerningly low numbers.
Fisheries and Oceans Canada, also known as DFO, had previously forecast that 4,795,000 sockeye salmon would return to the Fraser River this year.
As the run starts, that number has been adjusted to 628,000 — just 13 per cent of that original forecast. The state of sockeye salmon is now so dire that some populations “face an imminent threat of extinction,” according to DFO.
from The Lancet; Marzyeh Ghassemi, Tristan Naumann, Peter Schulam, Andrew L Beam, Irene Y Chen, Rajesh Ranganath
Advances in machine learning and artificial intelligence (AI) offer the potential to provide personalised care that equals or exceeds human performance on several health-care tasks.1
AI models are often powered by clinical data that are generated and managed via the medical system, for which the primary purpose of data collection is to support care, rather than facilitate subsequent analysis. Thus, the direct application of AI approaches to health care is associated with both challenges and opportunities.
Many AI approaches use electronic health record (EHR) data, which document health-care delivery and operational needs, and which can be relevant to understanding patient health. EHR data are heterogeneous and are collected during treatment to improve each patient’s health. Almost exclusively, EHR data are documented without consideration of the development of algorithms. Data can be collected from a wide range of sources, from high-frequency signals sampled every 0·001 seconds, to vital signs noted hourly, imaging or laboratory tests recorded when needed, notes written at care transitions, and static demographic data. Longitudinal clinical data, such as hospital records for all patients with a particular disease, are similarly heterogeneous in data type, time scale, sampling rate, and reason for collection. Each data type is associated with its own challenges.
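To make the heterogeneity concrete, here is a minimal sketch (hypothetical field names and values, not from the paper) of how such irregularly sampled, mixed-type EHR data might be represented as timestamped observations before any modelling:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Observation:
    """One timestamped clinical data point; 'kind' distinguishes the source."""
    patient_id: str
    time: datetime
    kind: str      # e.g. "vital", "lab", "note"
    name: str      # e.g. "heart_rate"
    value: object  # float for vitals/labs, str for free-text notes

# A single patient record can mix hourly vitals, on-demand labs, and
# free-text notes, each with its own time scale and reason for collection.
record = [
    Observation("p1", datetime(2019, 8, 1, 9, 0), "vital", "heart_rate", 72.0),
    Observation("p1", datetime(2019, 8, 1, 9, 42), "lab", "lactate", 1.1),
    Observation("p1", datetime(2019, 8, 1, 10, 0), "note", "transfer",
                "Stable, moved to ward."),
]

# Downstream models typically need to align or filter these streams,
# e.g. pulling out just the numeric vital signs:
vitals = [o for o in record if o.kind == "vital"]
```

The point of the sketch is that no single tabular schema fits all of these sources: each `kind` carries its own sampling rate and value type, which is exactly the challenge the excerpt describes.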
For many early-career scientists, the National Science Foundation’s (NSF’s) graduate fellowship is the award to win. Officially called the Graduate Research Fellowship Program (GRFP), it provides its 2000 or so annual awardees with 3 years of funding. It can give students a leg up when applying to graduate programs or trying to earn a spot in a coveted lab. It also confers a certain cachet among colleagues and can jump-start a strong CV for future applications. “We’re trying to support the next generation of STEM leaders,” says Nirmala Kannankutty, acting director of NSF’s Division of Graduate Education, which manages the GRFP.
But those “STEM leaders” overwhelmingly come from the same few institutions, highlighting how inequality propagates through the academic system. “It’s another way the rich get richer,” says Matthew Cover, a professor of ecology at California State University (CSU) Stanislaus in Turlock, who has analyzed how recipients are distributed across institutions. As this year’s eligible students begin to labor over their applications, it’s worth asking why certain schools come away with dozens of winners each year—and what that means about the academy at large.
Following the success of its master’s and PhD programs, this fall Worcester Polytechnic Institute (WPI) will begin rolling out a bachelor’s degree program in data science. The addition of the new major makes the university one of the few schools in the nation to offer undergraduate, graduate, and doctoral degrees in data science.
“As the availability of vast amounts of digital data increasingly impacts all facets of our daily lives, from health to business to entertainment, it is critical that we build a pipeline of programs to equip more students with the necessary skills for these 21st-century jobs,” said Elke Rundensteiner, Data Science Program director. “With the addition of the bachelor’s degree, WPI is preparing students for immediate job opportunities at every stage in this fast-growing career.”
In the next 10 years, it is projected, 1 trillion devices will be connected to the internet. With the growing number of connected electronics comes a colossal amount of data generated by every click on those devices.
Data science is critical to finding relevant information that will help advance and improve society. At Purdue University, the importance of instilling data literacy in students has become a key goal. Purdue has started an interdisciplinary collaboration to become a national leader in applying data science to solve large and pressing problems from food insecurity to disease. Purdue’s data science program will cover big data from understanding the fundamentals of data to applying findings for immediate use in society.
The story of the data initiative is featured in a four-minute video as part of Purdue’s Boiler Bytes series. The episode, titled “Integrative Data Science Initiative,” will be featured on the Big Ten Network in the fall.
Cambridge, MA October 24-25. “As technology continues to develop at an unprecedented rate, those involved with AI often lack the tools and knowledge to expertly navigate ethical challenges. In response, MIT Professional Education is pleased to introduce an exciting new course, Ethics of AI: Safeguarding Humanity.” [$$$$]
DevOps Research and Assessment (DORA), a pioneer in helping organizations achieve high DevOps and organizational performance with data-driven insights, and Google Cloud are excited to announce the launch of the 2019 Accelerate State of DevOps Report. The report provides a comprehensive view of the DevOps industry, providing actionable guidance for organizations of all sizes and in all industries to improve their software delivery performance to ultimately become an elite DevOps performer. With six years of research and data from more than 31,000 professionals worldwide, the 2019 Accelerate State of DevOps Report is the largest and longest-running research of its kind.