Google Software Detects Rude People in Comments

Google's robot will hunt trolls in Internet comments

The software read 17 million comments left on news articles and learned to identify rude commenters with 92% accuracy. If you have ever commented on a text on the Internet, you have probably run into trolls: people whose sole purpose is to provoke, pick fights, and spread hatred. Arguing with trolls is usually pointless, which leads many people to simply ignore them (the "don't feed the trolls" policy, common on the Internet)…

The problem is that this doesn't work: trolls keep multiplying, and their bitterness ends up contaminating everyone else – in a study by the US government, 28% of respondents admitted to having acted as trolls. But Google thinks it has a solution: artificial intelligence (AI). It created an intelligent piece of software, Conversation AI, and set it to read 17 million comments left on New York Times news stories, as well as 13,000 discussions on Wikipedia pages. The software learned to identify offensive messages and now, according to Google, it can do so with over 92% accuracy.
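The article does not describe how Conversation AI works internally, so the sketch below is only an illustration of the general idea: training a text classifier on comments labeled as offensive or acceptable, then using it to score new comments. The example data and the scikit-learn pipeline are assumptions made for illustration, not Google's actual method.

```python
# Minimal sketch, assuming a generic supervised text classifier
# (TF-IDF features + logistic regression) as a stand-in for
# "learning to identify offensive messages" from labeled comments.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples: 1 = offensive/troll comment, 0 = acceptable.
comments = [
    "You are an idiot and your article is garbage",
    "Great reporting, thanks for the detailed sources",
    "Nobody wants to read this trash, quit writing",
    "I disagree with the conclusion, but it's well argued",
]
labels = [1, 0, 1, 0]

# Fit a classifier that estimates how likely a comment is to be offensive.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(comments, labels)

# Score an unseen comment; a real system would train on millions of examples.
probability_offensive = model.predict_proba(["what a stupid take"])[0][1]
print(f"Estimated probability of being offensive: {probability_offensive:.2f}")
```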

According to Wired magazine, which had exclusive access to the robot, both the New York Times and Wikipedia are already planning to put Conversation AI to work day to day, monitoring and approving comments before they are posted. It will also be released as open source, so any site can use it. But it still has two drawbacks: it only works with comments in English, and it is vulnerable to so-called false positives – using profanity or slang, even in a non-offensive way, can make the robot flag the comment.
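As an illustration of the "approve before posting" workflow mentioned above, here is a hypothetical sketch of how a site might gate comments on an automated toxicity score. The score_toxicity() placeholder and the thresholds are invented for the example; a real deployment would call whatever scoring service it actually uses.

```python
# Hypothetical moderation gate: hold or reject a comment when an automated
# toxicity score is high, publish it otherwise. Everything here is assumed.
def score_toxicity(text: str) -> float:
    """Placeholder for a call to a toxicity-scoring service or local model."""
    flagged_terms = {"idiot", "trash", "stupid"}
    words = set(text.lower().split())
    return min(1.0, 0.4 * len(words & flagged_terms))

def submit_comment(text: str) -> str:
    score = score_toxicity(text)
    if score >= 0.8:
        return "rejected"         # very likely offensive
    if score >= 0.4:
        return "held for review"  # borderline: a human moderator decides
    return "published"

print(submit_comment("This article is trash and you are an idiot"))  # rejected
print(submit_comment("Thanks, this was really informative"))         # published
```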
