Building the right support network for content reviewers can help deliver an efficient and effective content moderation service, writes Aidan O’Shea.
Today is a tricky time to be in the business of content moderation. Indeed, it has been the subject of much controversy of late. Sharing platforms have been criticised both for removing content, such as the 'Napalm Girl' photo that became iconic of the Vietnam War, and for being too slow to remove other content, particularly when videos of terrorist attacks go viral.
Companies have been in the news, too, for the impact content moderation can have on workers in the field. Let's not sugar-coat it: content moderation is not easy work. Trawling through graphic and potentially disturbing content on a daily basis poses real risks. How could it not? Reviewers are expected to make important decisions very quickly, enforcing ever-changing company policy whilst bringing cultural awareness to the content they're viewing. That is no small ask when you consider the global context in which these platforms operate.
And we, the global population, are creating a lot of content. There are currently 4.1 billion internet users in the world. Facebook reports 2.7 billion users (for context, that is more than the number of followers of Christianity), while Twitter's 330 million users send out 500 million tweets every single day. Ninety-five million photos are uploaded to Instagram per day, and 400 hours of video are uploaded to YouTube per minute. Worldwide, people want to share, and they want to do so instantaneously.
With that volume, there will always be troubling content making its way online, and almost anyone you ask will agree that there is material out there they do not want to see. It is reviewers who are responsible for removing this offensive or disturbing content, and it is difficult but important work. For that reason, these people need to be supported in providing a good and necessary service. We believe there are tangible measures a company can take to deliver a best-in-class content moderation service and to support the wellbeing of those engaged in this field of work.
At Voxpro – powered by TELUS International, we take the business of content moderation seriously. We understand that delivering an efficient and effective service means putting the right supports in place for these digital first responders operating on the front line.
Be Transparent In Recruitment
First and foremost, content moderation is not an easy field to hire for, and a transparent and robust hiring process is essential. At the pre-screening stage, applicants should be given descriptions of the content they might be required to view, so that they can decide at that point whether they wish to continue. Recruiters and operations managers involved in the hiring process then need to ask the right questions, assess how an applicant will deal with certain situations, and monitor them when they are exposed to content. The testing phase is crucial, and needs to be designed to show whether applicants can maintain objectivity while understanding and implementing policy. Throughout, it's important that applicants can withdraw easily if they feel the work is not right for them.
Once employees begin this work, regular sessions with counsellors or therapists should be mandatory for everyone. Because the sessions are universal, no one has to single themselves out by asking for help, which removes any stigma – cultural or personal – that employees may associate with counselling. Employers also need to keep reiterating the message that this support is available to moderators.
Create A Culture Of Wellness
It's important to have a culture of wellness permeating the organisation. Management needs to encourage participation in the available resources and communicate their benefits to the team. Creating an environment of safety is also important, so that workers aren't afraid to say they need help or to take a break.
Management also has a duty to look out for signs of stress and concern in their team during regular one-to-one meetings, and should be trained to ask the appropriate questions and assess what each employee needs. While managers won't be able to eliminate exposure to content entirely, they should be able to manage workflow effectively, setting aside pockets of less challenging work to create rotation within the team. For example, if someone has worked on X-type content for three days, they should then have a day in a less challenging area.
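As a rough illustration of that rotation rule, here is a minimal Python sketch. The queue names, severity tiers and Reviewer structure are hypothetical, and the three-day limit simply mirrors the example above; a real scheduler would be driven by a company's own policy taxonomy and staffing data.

```python
from dataclasses import dataclass

# Hypothetical queue labels; real names and severity tiers would come
# from the platform's own policy taxonomy.
HIGH_SEVERITY = {"graphic_violence", "child_safety"}

MAX_CONSECUTIVE_DAYS = 3  # the three-day example from the text


@dataclass
class Reviewer:
    name: str
    queue: str
    consecutive_days_on_queue: int = 0


def next_assignment(reviewer: Reviewer, light_queue: str) -> str:
    """Return tomorrow's queue, rotating the reviewer out of
    high-severity work once they hit the consecutive-day limit."""
    if (reviewer.queue in HIGH_SEVERITY
            and reviewer.consecutive_days_on_queue >= MAX_CONSECUTIVE_DAYS):
        return light_queue  # a mandatory day on less challenging content
    return reviewer.queue


# After three straight days on graphic violence, the reviewer
# is moved to a lighter queue for a day.
r = Reviewer("reviewer_a", "graphic_violence", consecutive_days_on_queue=3)
print(next_assignment(r, light_queue="spam"))  # -> "spam"
```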
Additionally, it's important to create a comfortable and pleasant working environment, with space to relax, take breaks or get outside, and with set times for people to take part in extracurricular activities. While there can be spikes in content generation, companies have the data to plan workflows that allow for dedicated wellness time, in addition to lunch and breaks.
Balance Automation With Human Review
Artificial intelligence is part of the present and future of content moderation, and automation already monitors large swathes of content. It has been particularly successful in identifying nudity and child-exploitation content, removing much of the initial human review in these areas. However, automation is not without its limitations. Violence, for instance, is harder for AI to assess. Instead, companies can limit the amount of violence workers see by breaking potentially violent videos into still frames, from which a moderator can judge whether something violates company policy.
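To illustrate the frame-based approach, here is a minimal Python sketch using OpenCV. The two-second sampling interval and the function name are assumptions for illustration, not a description of any particular platform's pipeline.

```python
import cv2  # OpenCV; pip install opencv-python


def sample_frames(video_path: str, every_n_seconds: float = 2.0) -> list:
    """Extract one still frame every N seconds so a moderator can
    review images rather than watch the footage play out.

    The interval is an assumed tunable, not a published standard."""
    capture = cv2.VideoCapture(video_path)
    fps = capture.get(cv2.CAP_PROP_FPS) or 30  # fall back if FPS is unknown
    step = max(1, int(fps * every_n_seconds))
    frames, index = [], 0
    while True:
        success, frame = capture.read()
        if not success:  # end of video or read error
            break
        if index % step == 0:
            frames.append(frame)  # queue this still for human review
        index += 1
    capture.release()
    return frames
```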
The written and spoken word is far more difficult to moderate through automation, as language can be highly nuanced and subjective. For example, profanity can be used jokingly in areas like the UK, while the same language might be considered a slur in the US. Added to that, language is constantly evolving. There is plenty of room to build automation in this area, and while it will improve, human review will still be needed for a number of years yet.
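A toy sketch helps show why keyword matching alone falls short here. The region lists and placeholder terms below are entirely hypothetical; the point is that a word's acceptability depends on locale and context, which a static list cannot capture, so borderline matches still need a human decision.

```python
# Hypothetical, oversimplified term lists for illustration only.
# The same word can be benign banter in one locale and a serious
# slur in another, which keyword matching alone cannot resolve.
FLAGGED_TERMS_BY_REGION = {
    "UK": {"example_slur"},
    "US": {"example_slur", "example_banter_word"},
}


def needs_human_review(text: str, region: str) -> bool:
    """Flag text containing region-sensitive terms for a human
    moderator; tone and intent are beyond a simple word list."""
    words = set(text.lower().split())
    return bool(words & FLAGGED_TERMS_BY_REGION.get(region, set()))


print(needs_human_review("that's example_banter_word mate", "UK"))  # False
print(needs_human_review("that's example_banter_word mate", "US"))  # True
```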
The content moderation field has developed incredibly quickly, and employee attrition can sometimes be a challenge. That in itself shows the need for an innovative approach to supporting the people who do this type of work.
It's important to remember that content moderation is a very new area of work. With advances in AI, it is likely to become a much more niche role, but there will almost certainly always be a need for some human involvement to make the more nuanced decisions. The companies that invest enough – in technology, in creating robust policy and in doing the right thing by their employees – will perform best in the future of this space.
As part of its content moderation service, Voxpro – powered by TELUS International aims to hire and retain well-educated, adaptable and resilient people with sound judgement, who keep abreast of current events and who can repeatedly make objective, informed decisions on challenging content. If your company needs the right support to conduct content moderation in a safe and secure environment, get in touch.
Found this post valuable? If so, you might be interested in downloading our e-book on building trust through content moderation.