Over the last few years, the political information environment has been transformed, most significantly by the rise of social media. Social media increasingly pervades everyday life, a trend reinforced by mobile technologies that enable constant connectivity. The purpose of this study is to contribute to our understanding of this changing information environment by uncovering the distinct effects Facebook has on young people’s political behavior compared to more traditional, professionally operated media platforms. It examines the role and impact of Facebook as a central political information source within today’s high-choice information environment among university students.
Given what we know of how deep nets work, of their limitations, and of the current state of the research landscape, can we predict where things are headed in the medium term? Here are some purely personal thoughts. Note that I don’t have a crystal ball, so a lot of what I anticipate might fail to become reality. This is a completely speculative post. I am sharing these predictions not because I expect them to be proven completely right in the future, but because they are interesting and actionable in the present.
By analyzing viewer data – think 30 million “plays”, 4 million ratings, 3 million searches – the company was able to determine that fans of the original House of Cards, which aired in the UK, were also watching movies that starred Kevin Spacey and were directed by David Fincher, who’s one of the show’s executive producers. Netflix engineered my addiction (and thank goodness they did!)
The company’s still at it, too, analyzing everything from when you watch a show to when you pause it or turn it off. Last year, Netflix grew its US subscriber base by 10 percent and added nearly 20 million subscribers from around the globe.
Twitter may have started out as a way to connect to other people and share news quickly, but the social media platform is also a powerful tool, with the data generated representing the largest publicly accessible archive of human behavior in existence.
Guangqing Chi, associate professor of rural sociology and demography and public health sciences in Penn State’s Department of Agricultural Economics, Sociology, and Education and director of the Computational and Spatial Analysis (CSA) Core in the Social Science Research Institute, and his team have collected over 30 terabytes of geo-tagged tweets over the last four years.
from The Berkeley Artificial Intelligence Research Blog, by Chelsea Finn
Current AI systems can master a complex skill from scratch, given a large amount of time and experience. But if we want our agents to be able to acquire many skills and adapt to many environments, we cannot afford to train each skill in each setting from scratch. Instead, we need our agents to learn how to learn new tasks faster by reusing previous experience, rather than considering each new task in isolation. This approach of learning to learn, or meta-learning, is a key stepping stone towards versatile agents that can continually learn a wide variety of tasks throughout their lifetimes.
So, what is learning to learn, and what has it been used for?
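The "learning to learn" loop can be made concrete with a deliberately tiny sketch. The toy below is a Reptile-style meta-learner (a simpler cousin of the MAML algorithm from Finn's group, not her implementation): each hypothetical "task" asks a one-parameter model to match a target value, the inner loop adapts with a few gradient steps, and the outer loop nudges a shared initialization toward the adapted weights.

```python
def adapt(w, target, lr=0.1, steps=5):
    # inner loop: a few gradient steps on the task loss (w - target)^2
    for _ in range(steps):
        w -= lr * 2 * (w - target)
    return w

def reptile(task_targets, meta_lr=0.5, epochs=50):
    # outer loop: move the shared initialization toward each task's
    # adapted weights (the Reptile meta-update)
    w0 = 0.0
    for _ in range(epochs):
        for target in task_targets:
            w_adapted = adapt(w0, target)
            w0 += meta_lr * (w_adapted - w0)
    return w0

# three training tasks; each asks the model to match a value near 5
w0 = reptile([4.0, 5.0, 6.0])

# adapting to a new, related task from the learned init beats
# adapting from scratch with the same budget of gradient steps
loss_meta = (adapt(w0, 5.5) - 5.5) ** 2
loss_scratch = (adapt(0.0, 5.5) - 5.5) ** 2
```

The point of the sketch: the meta-learned initialization encodes what the tasks have in common, so a handful of inner-loop steps suffices on a new task where training from scratch would not.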
Text. Video. Pictures. Audio. We’re used to searching the web for different kinds of content. Now, one startup is striving to add a very different kind of search category: emotion.
The first public pilot of U.K.-based Emotions.Tech’s artificial emotional intelligence, launched in May, allows users to search according to how they want the results to make them feel.
The FBI yesterday released a public service announcement (PSA) alerting parents to the dangers potentially posed by smart toys.
The document warns that connected toys with microphones, GPS tracking, Wi-Fi, and/or Bluetooth connectivity could be giving criminals access to private information about children and their families. This could lead to identity theft or worse.
One night three months ago, Rosa Castro finished her dinner, opened her laptop, and uncovered a novel object that was neither planet nor star. Therapist by day and amateur astronomer by night, Castro joined the NASA-funded Backyard Worlds: Planet 9 citizen science project when it began in February — not knowing she would become one of four volunteers to help identify the project’s first brown dwarf, formally known as WISEA J110125.95+540052.8.
Siraj Raval, the AI education YouTube star, moved from San Francisco to Amsterdam for its progressive attitude, which he believes is beneficial for people wanting to pioneer entirely new ideas. True enough, the Dutch capital, renowned for thundering electronic music, freedom of expression, and a get-shit-done mentality, has been working toward positioning itself as a thriving AI ecosystem too. Its unique culture of innovation, inclusiveness, and diversity culminated in the city receiving the European Capital of Innovation Award last year.
Moreover, Amsterdam is fantastically well connected, both on a local level and internationally. Schiphol Airport, the 3rd busiest airport in Europe, is only 15 minutes from the city centre, and everything else is but a short bike ride away… So, the city of rusty bikes is an interesting place to be, especially when computer vision is your thing. Let’s take a closer look!
Company Data Science News
WeChat and AliPay have made cash more or less obsolete for the average urban Chinese citizen. The Chinese adoption pattern is one reason why there is so much attention to FinTech in the US. There is anticipation that the entire financial industry will unite with social media. You may be approved for your next loan because you are linked via social media to people who are financially reliable…so choose your friends wisely.
Ashley Madison has finally settled its data-driven lawsuit for $11.2m. The company had created fake accounts using real people’s contact info, built a pay-to-delete feature, and then failed to actually delete accounts upon receipt of payment. This would have been bad enough, but their databases were hacked and former, fake, and real users’ data were leaked.
BP’s head of litigation left the company to consult on data analytics in lawtech (new coinage, not mine, let’s see if it sticks). Digital discovery is taking longer and longer. The specificity of legal text should make it ripe for text analysis and data analytics. Expect to see more technology driven churn in products aimed at the legal space and anticipate some legal culture pushback.
IBM opened four new data centers: two in London, one in San Jose, and one in Sydney. They are in addition to 55 other data centers.
Apple will be publishing a Machine Learning Journal going forward. It appears that the goal is to give Apple data scientists an in-house publishing outlet. What is wrong with arxiv.org for Apple employees?
Mayo Clinic and nference have launched a company called Qrativ that will combine Mayo’s patient data and nference’s AI expertise to discover treatments for “diseases with unmet needs.” This focus encompasses rare diseases AND those diseases in “highly targeted” patient populations. With all the medical expertise involved – even nference’s CEO is an MD – expect the trifecta of domain expertise, AI expertise, and excellent, clean data from Mayo’s best-in-class operations. They’ve already closed a Series A.
Also in Mayo’s startup scene, AliveCor secured a third round of funding to develop tools to screen for Long QT syndrome. AliveCor, the article notes, is run by Vic Gundotra, a former Google exec. Google is such a strong brand that being former-Google is a key that can unlock everything from seed/venture funding to job offers to meeting new significant others. That last point is based solely on anecdotal observation.
Goldman Sachs is trying to be the “Google of Wall Street.” That’s another way to measure the cultural dominance of the Google brand: even storied, established, phenomenally wealthy, stupendously powerful brands like Goldman want to emulate Google.
Generating new 3D shapes is challenging. “The time consuming process of 3D content creation prevents computer graphics from being as ubiquitous as we had hoped,” said Xu, an associate professor of computer science at the National University of Defense Technology (NUDT), China, and soon-to-be visiting professor at Princeton University.
“Our work is a data-driven automatic shape generation computational method. Given a set of example 3D shapes, our task is to generate multiple shapes of one object class, automatically,” Xu said.
The US mathematician and electrical engineer Claude Shannon, whose life spanned the tumultuous, technologically explosive twentieth century, is often called the father of information theory. This is no exaggeration: Shannon crafted the idea that information can be quantified independently of its meaning and content. Working mainly in a world of analog technology, he laid the foundations of our digitized universe.
In A Mind at Play, journalist Jimmy Soni and political theorist Rob Goodman tell Shannon’s story engagingly, from the perspective of a lay reader wrestling with the sophisticated ideas that Shannon explored with dedication and panache. The book is a boon for those eager to know more about his incredibly influential life — whimsical, independent and curiosity-driven.
In recent days, I’ve gotten to know beekeepers in Rhode Island, dental hygienists in New Jersey and Wiccans in Tennessee. I’ve seen gardeners swapping fertilizer advice, flight attendants complaining about annoying passengers and fishermen arguing about which lures are best for catching muskies. I now know that there are hundreds of people who love creating memes about “The Sopranos,” and thousands who believe, with total conviction, that the Earth is flat.
All of this has been revealed to me because, for the better part of a month, I have immersed myself in the fascinating, enlightening and sometimes scary world of private Facebook groups. I’ve gotten access to scores of private groups — more than 100 in all — ranging in size from a handful of members to millions. I’ve joined Facebook groups that represent my real-life interests (Home Cooks, Pitbull Fans) and groups that have nothing to do with me (Lyme Disease Group, Quilting for Beginners, Cannabis Growers Helping Cannabis Growers). For weeks, I lurked silently in these forums and, when possible, tried to interview their moderators and members.
This wasn’t just a stunt. Facebook recently changed its corporate mission to emphasize the role of private groups, and I wanted to see what diving headfirst into the new Facebook could tell me about the company’s future.
Are you looking forward to a future filled with smart cognitive systems? Does artificial intelligence sound too much like Big Brother? For many of us, these technologies promise more freedom, not less.
One of the distinctive features of cognitive systems is the ability to engage with us, and the world, in more human-like ways. Through advances in machine learning, cognitive systems are rapidly improving their ability to see, to hear, and to interact with humans using natural language and gesture. In the process, they also become more able to support people with disabilities and the growing aging population.
[Paul] Meehl recognized that the human brain could be acutely sensitive to the unusual, and proposed what he called the “broken leg” scenario. Imagine you are trying to predict whether someone will go to the movies on Friday night. Your model gives them a 90 percent chance of going to the movies, but at the last minute you discover they have a broken leg and are in an immobilization cast in hospital. Since there is no variable for a broken leg in your model, blindly sticking to its prediction will lead to certain failure.
So, what do we do? We place the algorithm or machine in the hands of the human. When there is a broken leg, the human can recognize it and intervene.
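The broken-leg override amounts to a thin wrapper around the model: the algorithm supplies the default prediction, and a human who observes a decisive factor the model has no variable for can replace it outright. A minimal sketch, with a purely hypothetical model and probabilities:

```python
def model_predict(features):
    # hypothetical statistical model: Friday-night moviegoing base rate
    return 0.9

def predict_with_override(features, human_override=None):
    # "broken leg" rule: a human who spots a factor outside the model
    # (a broken leg, a hospital stay) replaces the model's output
    if human_override is not None:
        return human_override
    return model_predict(features)

p_normal = predict_with_override({"friday": True})           # model says 90%
p_broken_leg = predict_with_override({"friday": True}, 0.0)  # human intervenes
```

The design choice worth noting is that the override is total rather than a blend: Meehl's point is that when the unmodeled variable is decisive, averaging it with the model's output would still be wrong.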
In chess, this strategy has been fairly successful. As told in Tyler Cowen’s Average is Over or Andrew McAfee and Erik Brynjolfsson’s The Second Machine Age, computers are not actually the best chess players in the world. In freestyle chess—a style where players can consult multiple computers and programs in real time—the strongest players are human-computer combinations.
Jay Edelson, partner and founder of plaintiff-side class action law firm Edelson PC in Chicago, told Bloomberg BNA that the settlement is a strong one for the plaintiffs and signals that “the cost of data breach settlements are likely to rise over the next few years.”
Scott Blackmer, information technology law partner at InfoLawGroup LLP, told Bloomberg BNA that to “avoid costly liability like this,” companies must maintain reasonable security measures and “say what you do, and do what you say.”
AshleyMadison’s parent company charged a fee for a “full” delete feature and then didn’t delete users’ data, Blackmer said. “Getting hacked because of poor security is bad, but coupling that with deceptive practices is what really makes judges, juries, or regulators want to hit you with a stick,” he said.
It’s a trend that Jim Neath noticed before he departed his position as an associate general counsel and global head of litigation at British Petroleum.
“The electronic portion of complex discovery today is taking up a higher and higher percentage of the overall cost of defending litigation,” Neath said in an interview about why he took his new position at a consulting firm that applies data analytics to legal work.
He said a gap exists: Corporations want to run their legal departments more efficiently, but broadly speaking, lawyers, both in-house and at law firms, have been wary of embracing technology when it comes to data review. He boiled this down to a still tepid faith in technology and the conservatism of the profession.
Two US tech companies announced Monday that they are jointly developing technology that functions like the human brain to help bring facial recognition technology to police body cameras.
Picture this: you’re sitting in a police interrogation room, struggling to describe the face of a criminal to a sketch artist. You pause, wrinkling your brow, trying to remember the distance between his eyes and the shape of his nose.
Suddenly, the detective offers you an easier way: would you like to have your brain scanned instead, so that machines can automatically reconstruct the face in your mind’s eye from reading your brain waves?
Sound fantastical? It’s not. After decades of work, scientists at Caltech may have finally cracked our brain’s facial recognition code. Using brain scans and direct neuron recording from macaque monkeys, the team found specialized “face patches” that respond to specific combinations of facial features.
Hassabis said at a talk in London that the true impact of AI on jobs “isn’t clear yet.” Speaking to an audience of entrepreneurs at Google Campus at the end of last month, Hassabis said:
“Any time a major new technology comes in, it creates a big change. We’ve known that from the industrial revolution, the internet did that, mobile did that. So you could view it [AI] as another really big disruption event in that lineage. That’s one reasonable view. In which case, society will just adapt like it’s done with all the other things and some jobs will go, but newer, hopefully better, higher quality jobs will become possible, facilitated by those new technologies. I think that’s definitely going to happen in the shorter term.
“The question then is: is this kind of a one-time epochal event that’s beyond the level of even those big things? I’m not sure.”
“Competitors can submit their own system to compete in a quiz bowl competition between computers and humans. Entrants create systems that receive questions one word at a time and decide when to answer. This then provides a framework for the system to compete against a top human team of quiz bowl players in a final game that will be part of NIPS 2017.” Deadline for submitting machine entries is October 15.
A special issue of the Journal of the Association for Information Science and Technology is calling for papers that advance the concepts, methods, and theories that support the social informatics perspective. Deadline for submissions is January 15, 2018.
Yandex “announced the launch of CatBoost, an open source machine learning library based on gradient boosting — the branch of ML that is specifically designed to help “teach” systems when you have a very sparse amount of data, and especially when the data may not all be sensorial (such as audio, text or imagery), but includes transactional or historical data, too.”
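For readers unfamiliar with gradient boosting, the idea CatBoost builds on can be shown with a deliberately tiny sketch (plain Python, not the CatBoost API): each round fits a weak learner, here a one-split "stump", to the residuals of the ensemble so far, which for squared loss are the negative gradient. The data below are made up.

```python
def fit_stump(xs, residuals):
    # pick the threshold split that best reduces squared error
    best = None
    for t in sorted(set(xs)):
        left = [r for x, r in zip(xs, residuals) if x <= t]
        right = [r for x, r in zip(xs, residuals) if x > t]
        if not left or not right:
            continue
        lmean, rmean = sum(left) / len(left), sum(right) / len(right)
        err = (sum((r - lmean) ** 2 for r in left)
               + sum((r - rmean) ** 2 for r in right))
        if best is None or err < best[0]:
            best = (err, t, lmean, rmean)
    _, t, lmean, rmean = best
    return lambda x: lmean if x <= t else rmean

def gradient_boost(xs, ys, rounds=20, lr=0.5):
    # each round fits a stump to the current residuals and adds it,
    # scaled by the learning rate, to the ensemble
    pred = [0.0] * len(xs)
    stumps = []
    for _ in range(rounds):
        residuals = [y - p for y, p in zip(ys, pred)]
        stump = fit_stump(xs, residuals)
        stumps.append(stump)
        pred = [p + lr * stump(x) for p, x in zip(pred, xs)]
    return lambda x: sum(lr * s(x) for s in stumps)

xs = [1, 2, 3, 4, 5, 6]
ys = [1.0, 1.2, 0.9, 7.8, 8.1, 8.0]  # a noisy step function
model = gradient_boost(xs, ys)
```

Real libraries like CatBoost use full decision trees, regularization, and (in CatBoost's case) special handling of categorical features, but the loop structure is the same: fit the residuals, shrink, repeat.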