Every shot in Piper is composed of millions of grains of sand, each made up of around 5,000 polygons. No bump or displacement maps were used on the grains, just procedurally generated Houdini models. Displacement was the first option considered for creating the sand, but after many tests, the displacement detail required to convey the sand close-ups was approximately 36 times the resolution of a normal production displacement map. That made displacement inefficient and made instancing the sand a viable solution instead.
In the not-so-distant future, artificial intelligence will be smarter than humans. But as the technology develops, absorbing cultural norms from its creators and the internet, it will also be more racist, sexist, and unfriendly to women.
It’s no secret that the research ecosystem has been experiencing rapid change in recent years, driven by complex political, technological, and network influences. One component of this complicated environment is the adoption of research information management (RIM) practices by research institutions, and particularly the increasing involvement of libraries in this development.
Research information management is the aggregation, curation, and utilization of information about research. Research universities, research funders, and individual researchers are increasingly looking for aggregated, interconnected research information to better understand the relationships, outputs, and impact of research efforts, as well as to increase research visibility.
Algorithms, the set of instructions computers employ to carry out a task, influence almost every aspect of society. The explosive growth of data collection, coupled with increasingly sophisticated algorithms, has resulted in a significant increase in automated decision-making, as well as a greater reliance on algorithms in human decision-making. Industry forecasters believe software programs incorporating automated decision-making will only increase in the coming years as artificial intelligence becomes more mainstream. One of the major challenges of this emerging reality is to ensure that algorithms do not reinforce harmful and/or unfair biases.
The current search engine for programming language syntax is Google. Knowing how to search for information is a key part of knowing how to program at all. You can know all of the algorithms and a half-dozen programming languages inside out, but you will nonetheless find yourself searching for how to do something at some point, whether it involves some brand-new or super-obscure functionality or translating a feature from one programming language to another.
In other words, knowing how to program has a lot to do with knowing how to access information—an acute awareness of how and when to learn.
This learning might occur in hyperdrive if you’re the sort of programmer who either obsessively learns new things just for the sake of it (which describes a whole lot of programmers) and/or has to learn new things to apply them to a new project or task. For a recent project, for example, I needed to use a machine learning framework that’s implemented in a kind of obscure language called Lua, which is like a super-lightweight version of Python. I watched a couple of videos, but mostly I was inferring syntax from other Lua code and Googling things like “Lua for loop break.”
Next time you go out for sushi in Los Angeles, don’t bother ordering halibut. Chances are it’s not halibut at all.
A new study from researchers at UCLA and Loyola Marymount University checked the DNA of fish ordered at 26 Los Angeles sushi restaurants from 2012 through 2015, and found that 47 percent of sushi was mislabeled. The good news is that sushi represented as tuna was almost always tuna. Salmon was mislabeled only about one in 10 times. But out of 43 orders of halibut and 32 orders of red snapper, DNA tests showed the researchers were always served a different kind of fish. A one-year sampling of high-end grocery stores found similar mislabeling rates, suggesting the bait-and-switch may occur earlier in the supply chain than the point of sale to consumers.
from The San Diego Union-Tribune, Deborah Sullivan Brennan
The tool allows researchers to post online maps showing likely “hot spots” for blue whales that will help ship captains avoid collisions with the animals.
“We can both see where they go and when they go,” said Elliott Hazen, a research ecologist with the National Marine Fisheries Service, who developed the forecasting program. “We can take their movements and combine that with remotely sensed oceanographic data, to find out not only where they go, but also some of the oceanographic conditions that trigger that.”
Using data and analytics to help improve the bottom line is now common practice in the technology and business worlds. But how can city governments use those same techniques, and the new information they unlock, to improve the lives of their constituents?
That is the theme of a two-day workshop hosted by the City of Seattle, the University of Washington, and MetroLab, a Washington, D.C.-based city-university collaboration that launched as part of the White House’s Smart Cities Initiative in September 2015.
The event, called “Big Data and Human Services” and sponsored by Microsoft, Amazon, and Comcast, gathered folks in Seattle from the public and private sectors.
Toronto-based startup Atomic Reach thinks that it’s got the answer to this problem with its Atomic AI platform, which launches today. Here’s how it works.
At the core, you’ve got an artificial neural network that’s been painstakingly built to understand 23 distinct measures of language and structure. This artificial neural network is continuously growing, and as it consumes more data, it becomes more precise.
Its database now holds over three million articles; it is growing constantly and is analyzed on a regular basis.
The current state of public and political discourse is in disarray. Politicians lie with impunity. Traditional news organizations amplify fact-free assertions, while outright fake news stories circulate on social media. Public trust in the media, science, and expert opinion has fallen, while segregation into like-minded communities has risen. Millions of citizens blame their economic circumstances on caricatures like “elites” rather than on specific economic forces and policies. Enormously complex subjects like mitigating climate change, or balancing economic growth and inequality, are reduced to slogans. Magical thinking (e.g., that millions of manufacturing jobs can be created by imposing trade restrictions; that everyone can have affordable, high quality, healthcare without individual mandates or subsidies; that vaccines cause autism) has proliferated.
The result has been called a post-truth age, in which evidence, scientific understanding, or even just logical consistency have become increasingly irrelevant to political argumentation. Indeed, a flagrant disregard for consistency and evidence may even be interpreted as a demonstration of power: the power to create one’s own reality.
from University of Southern California, Dornsife College of Letters, Arts and Sciences
A computer algorithm for analyzing time-lapse biological images could make it easier for scientists and clinicians to find and track multiple molecules in living organisms. The technique is faster, less expensive and more accurate than current methods — and it even works with cell phone images.
Given the vast number of variables and uncertainties in trying to predict how much sea levels may rise, how and where to best deploy marshes in coastal defenses, and how best to help those marshes thrive, some of the world’s leading climate researchers are turning to flexible computer modeling to help them prioritize where marshes fit into the larger picture.
“No one is suggesting we rely exclusively on salt marshes to mitigate the effects of climate change,” said Mark Schuerch, a post-doctoral researcher and coastal geographer at the University of Cambridge, U.K., specializing in salt marsh modeling. “It has to be seen as a complementary measure, which reduces the pressure on the hard infrastructure.”
The inaugural artificial intelligence (AI) in games survey aimed to assess knowledge of AI in games across the industry. Results are to be published in a month’s time via @brainybeard and www.brainybeard.co.uk.
from arXiv, Statistics > Machine Learning; Graham Neubig et al.
We describe DyNet, a toolkit for implementing neural network models based on dynamic declaration of network structure. In the static declaration strategy that is used in toolkits like Theano, CNTK, and TensorFlow, the user first defines a computation graph (a symbolic representation of the computation), and then examples are fed into an engine that executes this computation and computes its derivatives. In DyNet’s dynamic declaration strategy, computation graph construction is mostly transparent, being implicitly constructed by executing procedural code that computes the network outputs, and the user is free to use different network structures for each input. Dynamic declaration thus facilitates the implementation of more complicated network architectures, and DyNet is specifically designed to allow users to implement their models in a way that is idiomatic in their preferred programming language (C++ or Python). One challenge with dynamic declaration is that because the symbolic computation graph is defined anew for every training example, its construction must have low overhead. To achieve this, DyNet has an optimized C++ backend and lightweight graph representation. Experiments show that DyNet’s speeds are faster than or comparable with static declaration toolkits, and significantly faster than Chainer, another dynamic declaration toolkit.
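The difference between static and dynamic declaration can be sketched with a toy computation-graph class in plain Python. This is a hypothetical illustration, not the actual DyNet API: the point is that under dynamic declaration the graph is rebuilt by ordinary procedural code for every example, so its structure can differ per input.

```python
# Toy sketch of dynamic declaration (hypothetical classes, not DyNet's API):
# the computation graph is constructed anew for each training example.

class Node:
    """One node in a computation graph; records value and inputs."""
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents

def add(a, b):
    return Node(a.value + b.value, (a, b))

def mul(a, b):
    return Node(a.value * b.value, (a, b))

def forward(sequence, weight):
    """Build a chain whose depth matches the input's length --
    the kind of per-example structure that static declaration
    makes awkward to express."""
    w = Node(weight)
    acc = Node(0.0)
    for x in sequence:  # graph shape depends on len(sequence)
        acc = add(mul(Node(x), w), acc)
    return acc

# Each call constructs a different graph:
short = forward([1.0, 2.0], 0.5)            # 2-step chain
long_ = forward([1.0, 2.0, 3.0, 4.0], 0.5)  # 4-step chain
print(short.value)  # 1.5
print(long_.value)  # 5.0
```

Because graph construction happens once per example, its overhead matters, which is exactly the motivation the abstract gives for DyNet’s optimized C++ backend and lightweight graph representation.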