What can we learn about the GIPHY content moderation slip-up?

James Glenn Gomez
March 16, 2018


When it comes to content moderation, consider the butterfly effect: one mistake can ripple outward and affect everything. And sometimes, a single mistake can cost a business relationship.

Earlier this year, the image-sharing social media apps Instagram and Snapchat integrated the animated GIF aggregator GIPHY into their apps, letting users add GIF stickers to snaps and stories. Just months later, both are cutting ties with the GIF database after a content moderation mishap: a racist GIF, a "death counter" of sorts for black people, wasn't scrubbed from GIPHY's library and was therefore available as a sticker in both apps. Hence the parting of ways with GIPHY.
While the GIF aggregator blamed a glitch in its code for making the tasteless moving image available as a sticker, that doesn't change the fact that the incident has strained its online reputation. Such things happen when content moderation goes awry. In retrospect, this mishap offers several lessons, including the following.

1. Automation doesn't always work


It's not every day that a game-changing bug like this one hits a website. However, if you're hosting user-generated content—images, videos, audio, or what have you—it's important to note that artificial intelligence won't catch objectionable content 100% of the time. GIPHY has a gigantic library of GIFs, and an offensive one slipped past its radar. Content moderation still needs a human moderator's subjective eye to discern what is and what isn't acceptable.


2. Implement a foolproof content moderation strategy


Sometimes, it takes a reality check like this one to realize that something's wrong with your moderation policies. Your content moderation strategy must be stringent enough to prevent objectionable content from spilling into anyone's dashboard, yet accommodating enough to encourage the creation of user-generated content (UGC). Whether you use AI, human moderators, or both, your content moderation strategy must be reliable.
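One common way to combine AI and human moderators is a confidence-threshold pipeline: the automated classifier handles the clear-cut cases, and anything it is unsure about goes to a human reviewer. The sketch below illustrates the idea; the keyword-based `classify` function and the threshold values are hypothetical placeholders, not any real moderation API.

```python
# Minimal sketch of a hybrid (AI + human) moderation pipeline.
# classify() is a hypothetical stand-in for a real AI model; it returns a
# score in [0, 1] estimating how likely the content is to be objectionable.

def classify(content: str) -> float:
    """Toy scorer: 0.5 per banned keyword found, capped at 1.0."""
    banned = {"gore", "slur"}
    hits = len(set(content.lower().split()) & banned)
    return min(1.0, 0.5 * hits)

def moderate(content: str,
             reject_above: float = 0.9,
             approve_below: float = 0.1) -> str:
    score = classify(content)
    if score >= reject_above:
        return "rejected"       # confident enough to auto-remove
    if score <= approve_below:
        return "approved"       # confident enough to publish
    return "human_review"       # uncertain: queue for a moderator

print(moderate("funny cat gif"))   # approved
print(moderate("gore clip"))       # human_review
print(moderate("gore slur"))       # rejected
```

Tuning the two thresholds is the policy decision: tightening them sends more content to humans (safer, more expensive), while loosening them leans harder on the AI.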


3. Be thorough and proactive


Your content won't clean itself. You must take a thorough and proactive approach to preventing the spread of objectionable content on your website. Take, for instance, the GIF database alternative Gfycat. To combat "deepfakes" (AI-assisted face-swap pornography), it uses two kinds of AI to spot fakery. And since AI isn't foolproof, it also relies on human moderators and reviews each GIF's metadata—all to combat deepfakes. This preemptive strike against malicious content lets Gfycat keep its database safe.


4. Consider outsourcing your content moderation services


If you feel that your company's reviewing efforts are lacking, it may be time to consider moving your content moderation to a service provider. There's nothing wrong with an in-house team of reviewers, but outsourcing offers cost and scaling advantages without a drop in quality, letting you maintain a pristine website at a fraction of the cost.

In content moderation, catching a single mistake can come too late—as it did for GIPHY's integration with Instagram and Snapchat. Prevent such mistakes by taking a holistic and proactive approach to reviewing content.
