Delaware Health Information Network

Health Tech

Facebook Increasingly Reliant on A.I. To Predict Suicide Risk

11/27/2018

(NPR) – A year ago, Facebook started using artificial intelligence to scan people’s accounts for danger signs of imminent self-harm.

Facebook Global Head of Safety Antigone Davis is pleased with the results so far.

“In the very first month when we started it, we had about 100 imminent-response cases,” which resulted in Facebook contacting local emergency responders to check on someone. But that rate quickly increased.

“To just give you a sense of how well the technology is working and rapidly improving … in the last year we’ve had 3,500 reports,” she says. That means AI monitoring is causing Facebook to contact emergency responders an average of about 10 times a day to check on someone — and that doesn’t include Europe, where the system hasn’t been deployed. (That number also doesn’t include wellness checks that originate from people who report suspected suicidal behavior online.)

Davis says the AI works by monitoring not just what a person writes online, but also how his or her friends respond. For instance, if someone starts streaming a live video, the AI might pick up on the tone of people’s replies.

“Maybe like, ‘Please don’t do this,’ ‘We really care about you.’ There are different types of signals like that that will give us a strong sense that someone may be posting self-harm content,” Davis says.
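To make the idea concrete, here is a minimal, purely illustrative sketch (in Python) of how a post's own wording and the tone of friends' replies could be blended into a single risk score. It is not Facebook's system; the phrase lists, weights, and threshold are hypothetical stand-ins for what a trained model would learn from data.

# Illustrative sketch only -- not Facebook's actual system. Shows the general
# idea of combining signals from a post and from friends' replies.

from dataclasses import dataclass
from typing import List

# Hypothetical phrases standing in for patterns a trained model would learn.
CONCERN_PHRASES = ["please don't do this", "we really care about you", "are you ok"]
RISK_PHRASES = ["goodbye", "can't go on", "end it"]

@dataclass
class Post:
    text: str
    comments: List[str]

def phrase_score(text: str, phrases: List[str]) -> float:
    """Fraction of watch phrases found in the text (crude stand-in for a classifier)."""
    text = text.lower()
    return sum(1 for p in phrases if p in text) / len(phrases)

def risk_score(post: Post) -> float:
    """Blend the author's own words with the tone of friends' replies."""
    own_signal = phrase_score(post.text, RISK_PHRASES)
    reply_signal = max((phrase_score(c, CONCERN_PHRASES) for c in post.comments), default=0.0)
    # Weights are illustrative; a real system would learn them from labeled data.
    return 0.6 * own_signal + 0.4 * reply_signal

if __name__ == "__main__":
    example = Post(
        text="I just can't go on anymore, goodbye everyone",
        comments=["Please don't do this", "We really care about you"],
    )
    score = risk_score(example)
    print(f"risk score: {score:.2f}")
    if score > 0.5:
        print("flag for human review")  # a person, not the model, decides what happens next

In practice, as the article goes on to describe, a flag like this only triggers review by Facebook staff, who then decide whether to contact emergency responders.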

When the software flags someone, Facebook staffers decide whether to call the local police, and AI comes into play there, too.

“We also are able to use AI to coordinate a bunch of information on location to try to identify the location of that individual so that we can reach out to the right emergency response team,” she says.

In the U.S., Facebook’s call usually goes to a local 911 center, as illustrated in its promotional video.

Mason Marks isn’t surprised that Facebook is employing AI this way. He’s a medical doctor and research fellow at Yale and NYU law schools, and recently wrote about Facebook’s system.

“Ever since they’ve introduced livestreaming on their platform, they’ve had a real problem with people livestreaming suicides,” Marks says. “Facebook has a real interest in stopping that.”

He isn’t sure this AI system is the right solution, in part because Facebook has refused to share key data, such as the AI’s accuracy rate. How many of those 3,500 “wellness checks” turned out to be actual emergencies? The company isn’t saying.

He says scrutiny of the system is especially important because this “black box of algorithms,” as he calls it, has the power to trigger a visit from the police.

“It needs to be done very methodically, very cautiously, transparently, and really looking at the evidence,” Marks says.

Read the full story at npr.org

