Today, there will be 500 million tweets posted, 1 million snaps created, and 430,000 hours of video uploaded to YouTube. This month, almost 5 billion comments will be posted on Facebook pages. In 2019, it is estimated that there will be around 2.77 billion social network users around the globe, up from 2.46 billion in 2017. Social network penetration is only going in one direction and users love posting content.
But hosting so much user-generated content (UGC) comes with a serious health warning. The risk of offensive or illegal material being posted grows ever higher as the volume of content online reaches new heights. Content that involves child exploitation, incitement to terrorism, certain types of hate speech, or intellectual property and copyright infringement offends viewers, damages the brand, and breaks trust between the company and its community. Increasingly, it can also put the platform in breach of the law and expose it to fines.
Social networks now face increasing pressure to detect and remove illegal, or legal-but-objectionable, content from their platforms, but it is a battle that will be hard won. A situation where thousands of humans moderate trillions of pieces of content brings to mind David versus Goliath.
As with many issues of scale, AI and machine learning play a central role. As we have outlined before, bots can handle basic, repeatable customer experience tasks at a scale unrivalled by human agents. So it is with content moderation. Google revealed that since it started using machine learning to flag violent and extremist content in June 2017, the technology has reviewed and flagged content that would have taken 180,000 people working 40 hours a week to assess.
Platforms are already using AI tools to identify inappropriate content, including troll accounts, election interference, fake news, terrorist content, hate speech, and pornographic material. In these cases, text analytics, natural language processing (NLP), and machine learning techniques are used to develop and train the algorithms that support pre-publication moderation, along the lines of the sketch below.
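To make that pipeline concrete, here is a minimal, illustrative sketch of a pre-publication text filter. It is a toy built on scikit-learn; the training examples, labels, model choice, and review threshold are all hypothetical placeholders, not a description of any platform's production system.

```python
# Illustrative only: a toy pre-publication moderation filter using
# TF-IDF text features and logistic regression.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples: 1 = flag for human review, 0 = allow.
posts = [
    "Have a great day everyone!",
    "Check out my new video",
    "I will hurt you if you post that again",
    "People like you deserve to be wiped out",
]
labels = [0, 0, 1, 1]

# Learn word and word-pair features, then fit a simple linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

def moderate(post: str, threshold: float = 0.5) -> str:
    """Gate a post before publication: publish it, or hold it for a human."""
    risk = model.predict_proba([post])[0][1]  # probability of the 'flag' class
    return "hold_for_review" if risk >= threshold else "publish"

print(moderate("you deserve to be hurt"))   # likely: hold_for_review
print(moderate("lovely weather today"))     # likely: publish
```

Note that the sketch routes risky posts to human review rather than deleting them outright, which reflects the human-plus-machine partnership discussed later in this piece.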
But these tools have their flaws. Detection algorithms are only as good as their input data, and content moderation data varies enormously across regions and contexts. For example, what one country or culture will label as ‘hate speech’, another will not. Within the same culture, what is offensive to one person may be perfectly acceptable to another; it is highly subjective. Add to this the fact that social mores and language evolve over time, so what is hateful or offensive this year may be far less so the next. The list goes on.
So, while effective, AI is not perfect. And not-perfect at scale is still a big problem. With so much user-generated video and text content being produced every minute, even a tiny margin of error results in a huge amount of offensive material slipping through: 1% of almost 3 trillion posts is still roughly 30 billion posts. This is why humans are becoming an increasingly critical cog in the content moderation wheel.
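As a back-of-the-envelope illustration of that problem, the short calculation below reuses the daily tweet volume quoted at the top of this piece; the 1% error rate is purely an assumption for the sake of the arithmetic.

```python
# Back-of-the-envelope arithmetic: a small error rate is still huge at scale.
# Both numbers below are illustrative assumptions, not measured figures.
daily_posts = 500_000_000   # roughly the daily tweet volume cited above
error_rate = 0.01           # a hypothetical 1% of posts mis-classified

missed_per_day = daily_posts * error_rate
missed_per_year = missed_per_day * 365

print(f"{missed_per_day:,.0f} problem posts slip through per day")   # 5,000,000
print(f"{missed_per_year:,.0f} per year")                            # 1,825,000,000
```

Even at 99% accuracy on a single platform's daily volume, millions of items a day still need a human decision.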
Facebook, for example, is committed to having a total of over 20,000 people working on security and content review by the end of 2018. The same company currently employs over 60 people dedicated solely to crafting the policies for its content moderators. Susan Wojcicki, CEO of YouTube, confirmed in a blog post in late 2017 that Google would bring the total number of people working to address content that might violate its policies to over 10,000 in 2018. Wojcicki also pointed out that machine learning is helping Google’s human reviewers remove nearly five times as many videos as they were removing previously.
It is clear that platforms see humans working in partnership with technology as the solution to this problem – something that Voxpro, powered by TELUS International, refers to as The Age of Engagement. Our experience in protecting the communities of some of the world’s largest platforms has taught us a lot about the type of person who can help build trust between a brand and its customers: native speakers with deep cultural knowledge and equal parts EQ (emotional intelligence) and PQ (technical proficiency). We also have extensive experience in hiring the right profile of agent at incredible scale. Most importantly, we know how difficult such work can be, and we have become expert at providing the kind of environment and support necessary to protect the agents who are protecting online communities.