AWS for Games Blog

Spectrum Labs AI does the heavy lifting of content moderation on gaming platforms

Who likes having their payment processor drop them with no notice? Or having their app removed from the App Store?

Exactly. Nobody likes that.

That’s why Spectrum Labs’ content moderation AI is used by developers like Riot Games and Wildlife Studios to guard against toxic and illegal behavior. Spectrum Labs helps scale content moderation teams’ coverage and provides fully automated solutions that utilize the most advanced natural language AI moderation available.

In other words: Spectrum Labs helps you grow your gaming platform without needing to hire an army of Trust & Safety moderators each step of the way.

What’s the difference between content moderation tools?

Typical content moderation tools are built using only static keyword lists or regular expression rules. These basic solutions can detect specific words and phrases in a small group of languages – but miss the most egregious toxic behaviors, like child grooming by predators and radicalization by hate groups. They may catch a blacklisted word or phrase in a single message, but are incapable of analyzing an entire conversation where toxic intent is clear to a human but never captured by a keyword filter.

An effective AI needs to look at context, not just content.

Spectrum Labs is a pioneer and innovator in Contextual AI. Their Guardian suite is the only content moderation solution that looks at metadata about a conversation, prior history of interactions, and user attributes that signal toxic or illegal behaviors which even human moderators often miss.

The other key issue that Trust & Safety teams face is scale. There are just too many messages for human moderators to review accurately (and reviewing them has been shown to cause mental health issues for human moderators). Spectrum Labs uniquely offers user-level moderation, which allows moderation teams to action users who repeatedly post toxic content instead of only being able to action individual messages. Since roughly 60% of toxic content is produced by just 3% of users (across all types of platforms), user-level moderation allows Trust & Safety teams to quickly take action on a majority of harmful content without having to moderate each message individually.
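
To make the idea concrete, here’s a minimal sketch of what user-level moderation could look like, assuming a per-message toxicity score and a simple escalation threshold (illustrative only, not Spectrum Labs’ actual API):

```python
# Illustrative sketch of user-level moderation (not Spectrum Labs' API):
# aggregate per-message toxicity by user, then escalate repeat offenders.
from collections import defaultdict

TOXIC_SCORE_CUTOFF = 0.8   # assumed per-message toxicity threshold
FLAG_AFTER_N_MESSAGES = 5  # assumed count before actioning the user

toxic_counts = defaultdict(int)

def record_message(user_id: str, toxicity_score: float) -> bool:
    """Return True when the user (not just the message) should be actioned."""
    if toxicity_score >= TOXIC_SCORE_CUTOFF:
        toxic_counts[user_id] += 1
    return toxic_counts[user_id] >= FLAG_AFTER_N_MESSAGES
```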

Guardian by Spectrum Labs is the only AI moderation tool with patented multilingual support that works globally across any language with localizations for slang, social norms, and country-specific regulations. The net result allows for smaller Trust & Safety teams who can manage much more content with confidence. For global platforms, each market can receive the same high-quality content moderation across all languages without the cost of hiring new teams or finding local language solutions.

How does Contextual AI work?

What exactly does “Contextual AI” mean? Well, it’s not just marketing jargon.

Take this example – Guardian and most other content moderation tools can detect keywords for profanity or hate speech that may violate community policies:

  • Example phrase: “You’re a piece of sh*t”

However, only Guardian’s Contextual AI can detect messages with no blacklisted keywords and flag them for a human moderator to investigate:

  • Example phrase: “Is your mom home?”
  • Example context: Male, 29 years old, with a profile less than 30 days old, messaging a female, 11 years old, in a private chat at 3pm on a weekday; no prior chat history between the users.

By factoring in context – each user’s individual history, their chat history together, and their profile attributes – Guardian can moderate conversations, not just keywords.
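
As a rough illustration of how context can outweigh content, here’s a hedged sketch of a context-aware escalation check. The field names and risk weights are assumptions built around the example above, not Spectrum’s actual schema or model:

```python
# Illustrative sketch of context-aware escalation; field names and weights
# are assumptions, not Spectrum Labs' schema or model.
from dataclasses import dataclass

@dataclass
class Context:
    sender_age: int
    recipient_age: int
    sender_account_age_days: int
    prior_messages_between: int
    is_private_chat: bool

def should_escalate(message: str, ctx: Context) -> bool:
    """Flag for human review even when the message has no banned keywords."""
    risk = 0
    if ctx.recipient_age < 13 and ctx.sender_age >= 18:
        risk += 2  # adult contacting a minor
    if ctx.sender_account_age_days < 30:
        risk += 1  # new, unestablished account
    if ctx.is_private_chat and ctx.prior_messages_between == 0:
        risk += 1  # private first contact with no shared history
    return risk >= 3

# "Is your mom home?" sent by a 29-year-old with a 10-day-old profile to an
# 11-year-old in a private chat with no history -> risk 4 -> escalate.
```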

Don’t forget about healthy behavior

Trust & Safety shouldn’t end at removing harmful behavior. Concurrently boosting healthy behavior makes your community a more welcoming space that can fuel your platform’s user retention and overall growth.

Examples of healthy behavior can include:

  • Teaching and inviting new players.
  • Building rapport through healthy conversations.
  • Helping, acknowledging or encouraging other players.

Spectrum Labs’ Amplify solution uses the same Contextual AI to identify and reward a platform’s most positive users, helping shape better user experiences. For instance, product teams can use Amplify to find your platform’s most welcoming and helpful players and pair them with new visitors to make their first visit as positive as possible.
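
As a back-of-the-napkin illustration, here’s what a simple positivity-based pairing could look like. The scores and the greedy pairing policy are assumptions, not Amplify’s actual mechanics:

```python
# Illustrative sketch of positivity-based pairing; scores and the greedy
# policy are assumptions, not Amplify's actual mechanics.
def pair_newcomers(positivity: dict[str, float], newcomers: list[str]) -> dict[str, str]:
    """Match each newcomer with the next-highest-scoring welcoming veteran."""
    mentors = sorted(positivity, key=positivity.get, reverse=True)
    return dict(zip(newcomers, mentors))

# pair_newcomers({"ana": 0.92, "bo": 0.75}, ["new_player_1"])
# -> {"new_player_1": "ana"}
```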

Those first-time interactions are especially important – users with a positive first experience are 6x more likely to return to a platform.

Along with making your platform a more enjoyable place for users, Amplify also works wonders for your platform’s key performance indicators (KPIs). By incentivizing positive interactions, Amplify helps boost KPIs like user retention, engagement, revenue, and overall platform growth. In fact, Amplify’s pilot use case for a metaverse gaming platform saw a +12% increase in average revenue per user (ARPU) and new user retention within 4 weeks of implementation.

Simply put: Healthy behavior and happy users are good for business.

Get a 360° view of your platform and player behaviors

Guardian and Amplify not only extend the coverage of your Trust & Safety team, they also provide powerful insights into your community. Spectrum Labs’ clients can access industry-leading 360° analytic reports showing the full range of user behavior on their platform.

Typically, Spectrum clients learn that most of the toxic content on their platform originates from a small fraction of users – in one multiplayer online RPG, just 1% of users were responsible for all hate speech posted in the game. Knowing and visualizing this allows gaming platforms to moderate a handful of users rather than a firehose of content.
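
For illustration, here’s a small sketch of how such a concentration metric could be computed from per-user toxic-message counts (an assumption-laden example, not Spectrum’s analytics pipeline):

```python
# Illustrative sketch of a toxicity-concentration metric; inputs are
# assumed per-user toxic-message counts, not Spectrum's analytics.
def toxic_concentration(toxic_counts: dict[str, int], top_fraction: float = 0.03) -> float:
    """Share of all toxic messages produced by the top `top_fraction` of users."""
    counts = sorted(toxic_counts.values(), reverse=True)
    top_n = max(1, int(len(counts) * top_fraction))
    total = sum(counts)
    return sum(counts[:top_n]) / total if total else 0.0

# Per the stat earlier in this post, a typical platform would return roughly
# 0.60 here: 3% of users producing about 60% of toxic content.
```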

Understanding how toxic content spreads through gaming platforms and who is responsible for the bulk of it inverts the Whack-A-Mole nature of content moderation. With Spectrum Labs, smaller Trust & Safety teams can detect problematic users early for intervention or removal before those individuals can damage the user experience for thousands of gamers.

Spectrum Labs gives you a comprehensive view of how user behavior (both bad and good!) impacts your platform’s KPIs for engagement, retention, and overall experience. With 360° analytics, you can quantify your platform’s entire user experience and pinpoint where improvements are needed.

How Spectrum uses AWS to power its AI solutions

Spectrum Labs relies on AWS to power its solutions – and when Spectrum joins forces with AWS, the results are unmatched.

Online platforms can experience rapid usage spikes, yet Spectrum’s globally distributed AWS infrastructure delivers extremely low latency to a worldwide customer base. Spectrum Labs handles 5 billion data requests per day, with an average response time for AWS-based clients of 17 milliseconds – that’s 5x faster than the blink of an eye. Such lightning-fast speed gives Spectrum clients access to unprecedented community features like real-time redaction, which removes toxic content before it can be posted.
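
To show where that latency budget matters, here’s a hypothetical sketch of a real-time redaction flow. The moderation_client, its score() call, and the verdict object are all assumptions for the example:

```python
# Illustrative sketch of real-time redaction; moderation_client, score(),
# and the verdict object are assumptions for the example.
import time

LATENCY_BUDGET_MS = 50  # assumed budget; Spectrum reports ~17 ms averages

def post_message(channel, text: str, moderation_client) -> bool:
    """Score the message before it is published; redact toxic content."""
    start = time.perf_counter()
    verdict = moderation_client.score(text)  # synchronous moderation call
    elapsed_ms = (time.perf_counter() - start) * 1000
    if elapsed_ms > LATENCY_BUDGET_MS:
        print(f"moderation took {elapsed_ms:.1f} ms, over budget")
    if verdict.is_toxic:
        return False       # message never appears in the channel
    channel.publish(text)  # assumed publish API on the channel object
    return True
```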

Spectrum Labs uses AWS to process 6,650 messages per second (399,000 messages per minute) and achieves 99.9% uptime with AWS.

Some specific Amazon services utilized by Spectrum Labs include:

  • Amazon Elastic Kubernetes Service (EKS): Simplifies deployment of containers to integrate new technology more rapidly.
  • Amazon Aurora: Globally replicates Spectrum Labs’ databases to provide additional failover.
  • Amazon Managed Streaming for Apache Kafka (MSK): Frees up Spectrum Labs’ internal teams to focus on key areas of infrastructure.

In addition to AWS’ infrastructural benefits, Spectrum Labs’ solutions pair easily with a number of AWS tools, like the AWS for Games Cohort Modeler. When used with Amplify’s behavior detection and custom user-grouping features, Cohort Modeler creates visualizations that map out your user base by behavioral, financial, and gameplay patterns.

Seeing your game’s user relationships in a Cohort Modeler diagram gives you more insight into your community so you can better leverage Amplify to initiate user pairings that make more players want to stick around. Spectrum Labs plays well with AWS, so you can use whatever you need to help your platform grow and thrive.

Is Spectrum Labs right for your gaming platform?

  • Spectrum Labs is ideal for apps, games, and platforms that need to scale content moderation without increasing their Trust & Safety team size. It’s also a strong fit for platforms with global user bases that need consistent-quality content moderation across multiple languages.
  • Spectrum Labs’ Guardian is the most advanced natural-language AI content moderation solution available. It can be used to extend the reach of Trust & Safety teams, or be used to automate moderation.
  • Guardian uses Contextual AI instead of simple keyword-based detection, letting it find hard-to-detect but extremely harmful behaviors – like child grooming and online radicalization – that pose significant legal and business risks to companies unable to remove such content from their platforms.
  • In addition to toxic behavior, Spectrum Labs’ Amplify solution is the only AI that can detect healthy behaviors and give you a full 360° picture of user behavior on your platform.
  • Amplify allows platforms to score users based on their behavior and build cohorts of their most helpful and encouraging players, which can be leveraged to facilitate positive interactions that optimize retention and engagement.
  • Spectrum Labs’ patented multilingual support means you can deploy quickly in any language with the same level of quality along with the ability to configure for local norms and regulations.
  • Guardian identifies the users who generate most of a platform’s toxic content, so Trust & Safety teams can moderate users instead of individual pieces of content. This lets content moderators dramatically extend their coverage instead of reacting to each post.
  • Guardian can be used to help Trust & Safety teams cover more content or as a fully automated AI content moderation solution, depending on the needs of the game or platform.

What types of content can Spectrum Labs moderate?

Spectrum Labs uses Contextual AI to accurately detect toxic content ranging from simple banned keywords to more complex behaviors that often cross over into illegal conduct.

Additionally, Spectrum Labs can detect healthy behaviors that platforms want to promote and proliferate within their community. Although customers can designate which specific behaviors they wish to encourage, common examples of healthy behaviors include:

  • Building rapport with other users
  • Teaching or coaching new players
  • Encouraging players to keep trying at a game
  • Inviting new users to join games or conversations
  • Acknowledging others’ feelings and helping them feel included

Spectrum Labs’ solutions work with all language-based content like text and voice/audio. To learn more, check out our partner page or reach out for a demo!