By: Matthew Marcone
Google has decided to use artificial intelligence (AI) to take on a new role – as Internet comment police.
I’m constantly warning about the movement towards censorship and its implications for Western values, namely that without the right to openly debate, criticize and even offend others, people end up pointlessly killing one another over politics.
History has shown this time and time again.
The Internet – and more broadly unregulated information – has done more to collectively improve humanity’s knowledge and understanding of the world than perhaps anything else ever invented.
In doing so, it has also become a major threat to traditional institutions of power, such as governments and mainstream news organizations, which is why we are seeing such a pushback against it. Part of this story I’ve already covered in this column two weeks ago – the so-called “fake news” epidemic.
Over the past few years, organizations including CNN, Reuters and Popular Science have been moving away from allowing readers to comment on their stories. Many have cited online harassment and bullying as their primary reason.
As if pulled straight out of George Orwell’s 1984, a group called Jigsaw and its partners on Google’s Counter Abuse Technology Team have released “Perspective,” which aims to use technology to help improve conversations online.
Jigsaw argues that this technology actually brings “a more open atmosphere” to the Internet.
“Discussing things you care about can be difficult,” the group states on its website, www.perspectiveapi.com, whose partners already include major media outlets like The New York Times, Wikipedia, The Economist and The Guardian. “The threat of abuse and harassment online means that many people stop expressing themselves and give up on seeking different opinions.”
Perspective uses a machine learning model to automatically detect insults, harassment and abusive speech online. Enter a sentence into its software and it tells you, as a percentage, how “toxic” the comment is.
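For readers curious about the mechanics, the scoring happens through a web API: you send a comment to Google’s servers and get back a toxicity score between 0 and 1. The sketch below, in Python, follows the publicly documented “Comment Analyzer” request and response shapes; the endpoint URL and field names are assumptions based on that documentation, and you would need your own API key to actually send anything.

```python
import json

# Hypothetical sketch of talking to Google's Perspective "Comment Analyzer"
# endpoint. The URL and field names follow the public documentation and are
# assumptions here, not a guaranteed contract.
ANALYZE_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

def build_request(text):
    """Build the JSON body asking Perspective to score one comment for TOXICITY."""
    return {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }

def toxicity_percent(response):
    """Convert the 0-1 summary score in a response into a whole percentage."""
    score = response["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
    return round(score * 100)

# To actually send the request, you would POST build_request(text) as JSON to
# ANALYZE_URL + "?key=YOUR_API_KEY" (e.g. with urllib or the requests library).

if __name__ == "__main__":
    body = build_request("Murder is a horrible crime.")
    print(json.dumps(body))

    # A response shaped the way the documentation describes:
    fake_response = {
        "attributeScores": {"TOXICITY": {"summaryScore": {"value": 0.76}}}
    }
    print(toxicity_percent(fake_response))  # 76
```

Note that the score is a probability that readers will *perceive* the comment as toxic, not a judgment about the comment’s content, which is part of why the results below are so strange.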
I’ve been testing this out over the past couple of days and, despite its horrendous implications for free speech… it is actually a lot of fun!
The first thing I noticed is its overall ineffectiveness, at least so far. Because it uses a model based on language rather than ideas, certain words are automatically “flagged” regardless of context.
For instance, when I write, “Gays are friendly people,” Perspective states that it is 75 per cent likely to be perceived as toxic.
“That’s the dumbest thing I’ve ever heard,” comes in at 73 per cent likely to be viewed as toxic.
Even, “Murder is a horrible crime,” comes in at 76 per cent.
And despite my very best efforts, I’ve been unable to crack the 99 per cent mark. Apparently nothing is 100 per cent toxic.
“I think that Google is wonderful,” comes in at one per cent toxic.
That’s nice to know, Google.
I’d like to conclude with two points. One is that offensive language plays a useful role in our society, and by allowing particular words or phrases to be flagged as “toxic,” we are accepting the principle that certain words are harmful regardless of their context.
The second is that it is simply impossible for a third party to determine whether a comment is toxic without censoring viable viewpoints, either deliberately or accidentally.
I personally love unfiltered comments sections, even when you have to sift through idiotic opinions. They allow readers an opportunity to criticize, correct or add to a journalist’s story, and herein lies what Perspective is really all about.
It is yet another in a growing list of tools being used to silence facts and opinions that challenge the existing power structure.
“Perspective is a piece of trash.”
That’s 78 per cent likely to be viewed as toxic, which is 100 per cent idiotic.