Don’t you hate it when you comment on an article to engage in a discussion and you’re attacked by online trolls? Or you want to read about others’ opinions yet all you seem to be able to find are toxic comments? Online hate speech is a real thing, and today Google is launching Perspective to identify and moderate these nasty comments.
Google explains that 72 percent of American Internet users have witnessed harassment online, and almost half have experienced it first-hand. That’s a huge problem, not only because it’s a source of fear that ultimately undermines freedom of speech, but also because it can cause serious psychological harm – especially with 10-year-olds on social media.
Unfortunately, this kind of behavior also affects websites – in particular news organizations that want to “encourage engagement and discussion around their content.” The problem is that with so many trolls out there, sorting through millions of comments to get rid of abusive ones could be very costly in terms of money, labor, and time. That’s why some sites don’t even have comment sections, sadly.
That’s where Perspective comes in: Google and Jigsaw describe it as “an early-stage technology that uses machine learning to help identify toxic comments.” Like other Google services, it will be available as an API that publishers can integrate into their sites. The idea is relatively simple: Perspective was trained on hundreds of thousands of comments labeled abusive by human reviewers, and it compares new comments against those examples to score their toxicity. The more comments it evaluates, the better it gets at accurately scoring future ones.
The cool thing about Perspective is the flexibility publishers have in how they incorporate it into their websites. The first method is straightforward: Perspective flags potentially abusive comments, and human moderators have the final say. Alternatively, a publisher could show commenters the toxicity level of their comments as they write them. Or – and I, for one, think this would be particularly useful – publishers could let readers sort comments by toxicity, making it easier to find real content.
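To make the sorting and flagging ideas concrete, here is a minimal sketch of what a publisher’s comment section might do once each comment has a toxicity score. The scores below are made-up stand-ins for what a Perspective-style API would return (roughly a probability between 0.0 and 1.0); the function name and threshold are my own illustration, not part of Google’s API.

```python
# Hypothetical post-scoring step: rank comments least-toxic first,
# and separately flag the ones a human moderator should review.
# Scores here are invented examples, not real Perspective output.

def sort_by_toxicity(scored_comments, flag_threshold=0.8):
    """Return (comments ranked least-toxic first,
    texts at or above flag_threshold for moderator review)."""
    ranked = sorted(scored_comments, key=lambda item: item[1])
    flagged = [text for text, score in scored_comments
               if score >= flag_threshold]
    return ranked, flagged

comments = [
    ("Great analysis, thanks for sharing.", 0.03),
    ("You are an idiot.", 0.92),
    ("I disagree, and here is why...", 0.10),
]

ranked, flagged = sort_by_toxicity(comments)
```

Under this sketch, readers see the constructive comments first, while only the one comment above the threshold lands in the moderation queue – the same division of labor the article describes between the machine score and the human reviewer.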
Google is currently working with The New York Times and says that although Perspective is still in development, it will become more accurate and sophisticated over time – sophisticated enough to identify not just toxic comments but also off-topic ones, even in other languages.
Do you think Perspective will help improve online conversations? Let us know your thoughts by leaving a comment below (and please, no trolling).