Physiological signals have been shown to be reliable indicators of stress in laboratory studies, yet large-scale ambulatory validation is lacking. We present a large-scale cross-sectional study of ambulatory stress detection in 1002 subjects, comprising subjects’ demographics, baseline psychological information, and five consecutive days of free-living physiological and contextual measurements collected through wearable devices and smartphones. This dataset represents a healthy population and shows associations between wearable physiological signals and self-reported daily-life stress. Using a data-driven approach, we identified digital phenotypes, characterized by self-reported poor health indicators and high depression, anxiety, and stress scores, that are associated with blunted physiological responses to stress. These results emphasize the need for large-scale collections of multi-sensor data to build personalized stress models for precision medicine.
It started during yoga class. She felt a strange pull on her neck, a sensation completely foreign to her. Her friend suggested she rush to the emergency room. It turned out that she was having a heart attack.
She didn’t fit the stereotype of someone likely to have a heart attack. She exercised, did not smoke, and watched her diet. But on reviewing her medical history, I found that her cholesterol level was sky high. She had been prescribed a cholesterol-lowering statin medication, but she never picked up the prescription because of the scary things she had read about statins on the internet. She was the victim of a malady fast gearing up to be a modern pandemic — fake medical news.
While misinformation has been the object of great attention in politics, medical misinformation might have an even greater body count. As is true with fake news in general, medical lies tend to spread further than truths on the internet — and they have very real repercussions.
It seems as if hardly a day passes without some new tech ethics controversy. Earlier this year, the Cambridge Analytica scandal dominated news cycles, in part because it raised a host of ethical issues: microtargeting and manipulation of social media users, research ethics, consumer privacy, data misuse, and the responsibility of platforms like Facebook.
As an academic who studies both technology ethics and research ethics, I was active in the public discourse around the controversy (which mostly meant tweeting and talking to journalists). This was international news, and the world seemed blown away by tales of privacy violations, research misconduct, and mass manipulation of voters, all facilitated by a social media platform that in many ways has become embedded in the fabric of society. The level of surprise varied, but I heard a common refrain when it came to the root of the problem: all these software developers, tech designers, data scientists, and computer engineers are just so darn unethical.
Robin is an economist at George Mason University in Virginia, USA. I had an argument with him because, all the way back in 1990, Robin proposed that “gambling” would save science. He wanted scientists to bet on the outcomes of their colleagues’ predictions and claimed this would fix the broken incentive structure of academia.
Researchers from the University of Basel have reported a new method that allows the physical state of just a few atoms or molecules within a network to be controlled. It is based on the spontaneous self-organization of molecules into extensive networks with pores about one nanometer in size. In the journal Small, the physicists reported on their investigations, which could be of particular importance for the development of new storage devices.
Language is the core of the problem. Papers are ostensibly written for other scientists to read and understand, but the sheer volume of information means the scientists are in serious need of help.
The answer, some think, is simply to do a better job of sorting, cataloging and assessing papers as they are published.
The surprise arrest of Meng Wanzhou, chief financial officer of Huawei Technologies Co., has thrust the company into a political firestorm and deepened a core threat: that more and more countries will blacklist its switches, routers and phones out of growing concern that they could be hijacked by foreign spies.
Yet inside Huawei’s Shenzhen headquarters, a secretive group of engineers toils away, heedless of such risks. They are working on what’s next — a raft of artificial intelligence, cloud-computing and chip technology crucial to China’s national priorities and Huawei’s future. As the trade war drags on, China’s government has pushed to create an industry that is less dependent on cutting-edge U.S. semiconductors and software.
The number of international students enrolling in US graduate programmes is falling, according to reports from the US Council of Graduate Schools in Washington DC and the Institute of International Education in New York City.
In a survey of 619 institutions, the council found that 339,038 international students enrolled in US graduate studies for the first time in autumn 2017, down 3.7% from the previous year.
Google’s New York office is already its largest outside the San Francisco Bay Area, but on Monday the company announced plans to double the size of its New York workforce to more than 14,000. The company is building a new campus in the Hudson Square neighborhood, about a mile south of its current New York headquarters in the Chelsea neighborhood.
It’s been a big couple of months for technology companies expanding beyond the West Coast. Last month, Amazon announced it would add a total of 50,000 jobs in two new campuses—one in New York’s Long Island City neighborhood, and the other in Crystal City in the Virginia suburbs of Washington, DC. Last week, Apple announced it would expand its 6,000-person Austin campus by another 5,000 workers, with the potential to add an additional 10,000 people later on.
Now it’s Google’s turn. The search giant is planning to add at least 7,000 more New York City jobs over the next decade.
“If you talk to a man in a language he understands, that goes to his head. If you talk to him in his language, that goes to his heart,” former South African President Nelson Mandela said.
But how does language affect your experience of the internet?
from The British Psychological Society, Research Digest, Jesse Singal
For a long time, some psychologists have understood that their field has an issue with WEIRDness. That is, psychology experiments disproportionately involve participants who are Western, Educated, and hail from Industrialised, Rich Democracies, which means many findings may not generalise to other populations, such as, say, rural Samoan villagers.
In a new paper in PNAS, a team of researchers led by Mostafa Salari Rad decided to zoom in on a leading psychology journal to better understand the field’s WEIRD problem, evaluate whether things are improving, and come up with some possible changes in practice that could help spur things along.
from PR Newswire, Artificial Intelligence Finance Institute
Columbia University Adjunct Professors, Michael Oliver Weinberg, CFA, and Miquel Noguer i Alonso, PhD, are pleased to announce their co-founding of the Artificial Intelligence Finance Institute (AIFI). AIFI’s mission is to be the world’s leading educator in the application of artificial intelligence to investment management, capital markets and risk.
Taught by a diverse staff of leading academics and practitioners, AIFI’s course will teach the theory and practical implementation of artificial intelligence and machine learning tools in investment management. The course, which will include 75 hours of interactive coding and lectures, will award individuals the Artificial Intelligence in Investment Management Certificate.