Anti-Defamation League Report Says Online Anti-Semitism Is A 'Daily Occurrence'

ARI SHAPIRO, HOST:

A day before the shooting in Pittsburgh, the ADL published a study about online harassment and propaganda against Jews in the U.S. The ADL works to fight anti-Semitism. And its report says online attacks against Jews were rare a few years ago and now, quote, "anti-Semitism has become normalized, and harassment is a daily occurrence." Sam Woolley is an author of the report and joins us now.

Welcome.

SAM WOOLLEY: Thanks for having me.

SHAPIRO: This report is based on an analysis of more than 7 1/2 million tweets over a couple of weeks this past September. And it finds that 30 percent of Twitter accounts using derogatory terms were highly automated. What do the words "highly automated" mean?

WOOLLEY: Highly automated is shorthand for a social media bot. And so a bot is a profile on a site like Twitter that appears to be real and appears to be run by a person but is actually using automation to amplify posts on a particular topic.

SHAPIRO: Who's running these anti-Semitic bots?

WOOLLEY: That's a really good question. And part of the problem is that we don't actually know who is running these bots. It appears that a lot of the content related to these accounts is coming from white nationalists and the "alt-right." That said, it's very hard for researchers like me to determine the provenance of these bots. And past experience shows that while a lot of attacks can come from domestic actors, they can also come from foreign entities as well, much like we saw in 2016 with the Russian interference in the election.

SHAPIRO: People who experience online harassment are often told - report the offensive users; block the offensive content. Is that enough?

WOOLLEY: I don't think it's enough. I think that it's a shame that we put the burden of proof and the burden of reporting upon the very people who are experiencing harassment. I think the social media firms need to do more to protect people who experience extreme trolling and extreme harassment online. At the moment, the mechanisms that exist for reporting things like doxxing, or the release of personal details online, fall really short of what they should be.

SHAPIRO: Do the people engaging in this kind of harassment have a goal beyond just harassing people?

WOOLLEY: You know, the goal in a lot of cases is to prevent people from speaking out, to prevent their voices from being heard in American politics. Bots generally get used to do two things, to amplify particular voices and suppress others. So if you have one account that can tweet about politics, that's one thing. But if you have 10,000 automated accounts that can tweet about something else, then you can imagine what effect they have on someone's ability to actually get things done.

These accounts generate a lot of noise, and they generate a lot of fear amongst the people who are the recipients of attacks from them. Some of the content here points people away from voting, suggests they can vote via text and all sorts of things like that. So they're actually undermining democracy in a big way.

SHAPIRO: Can technology be used to counteract these bots on a large scale?

WOOLLEY: Well, I think that there's a perception that AI is the big solution to this problem because of questions of scale. The social media companies have grown so quickly and expanded so fast that now we come to accept the idea of scale in our day-to-day life. But I think that it's really crucial that we have people on the other end making decisions about what hate speech and propaganda looks like because technology can't suss out details. It can't feel. It can't understand humor or sentiment. And so we have to also have human-based ways of mitigating the problems of automation and computational propaganda.

SHAPIRO: Sam Woolley is director of the Digital Intelligence Lab at the Institute for the Future and one of the authors of the ADL report on anti-Semitic propaganda online.

Thanks for joining us today.

WOOLLEY: Thanks for having me.

(SOUNDBITE OF THE XX'S "INTRO (THE XX SONG)")

Transcript provided by NPR, Copyright NPR.

NPR transcripts are created on a rush deadline by an NPR contractor. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of NPR’s programming is the audio record.