
Content moderation is hard, but there's a new approach... and it's fueled by Spectrum Labs


San Francisco, CA, Jan. 27, 2020 (GLOBE NEWSWIRE) -- Yes, the internet has become the most transformative invention of the modern age: it has forever changed technology, communication, gaming, marketing, banking, dating and more. But along with that change comes a dark side: the internet has also become a cesspool of toxic human behavior, poisoning the experience both for users and for the content moderators charged with safeguarding online platforms.

But, real talk: Faced with harassment or a disgusting experience online, many of us never report it. Instead, up to 30% of users decide to close their account or stop using certain social networks altogether. They just... leave. All that focus on growth? Wasted.

Which raises a couple of questions: With all the transformation and dizzying innovation brought by technology, why do we still see daily headlines about online harassment, radicalization, human trafficking, child sex abuse, and more? And can online platforms manage growth while still keeping their communities safe?

Many companies think of "Trust and Safety" as just a compliance play, a box to check, rather than seeing the connection to their platform's health and growth.

But Spectrum Labs, a San Francisco-based Contextual AI platform, thinks that's a mistake. Growth is directly tied to user experience.

Platforms like Facebook have faced backlash for outsourcing their content moderation services (traumatizing lower-paid contractors with images and videos of shootings, violence and hate) and for removing only a fraction of the toxic content on their platforms.

Content moderation tools have seen some improvement over the last decade, but they are still flawed and need to get drastically better. That's where Spectrum Labs comes in.

Spectrum Labs has developed an astonishingly accurate Contextual AI system that identifies toxic behaviors like hate speech, radicalization, threats, and other ugly behaviors that drive users away from online communities. They've also made it dead-simple, so that even people who don't understand code or datasets can know what's happening on their platforms at any time. Spectrum Labs' approach is gaining traction with giant names in social networks, dating, marketplaces and gaming communities.

Legacy content moderation technologies typically use some form of keyword and simple message recognition (classification), which works best for interactions that occur at a single point in time. But most toxic behavior builds gradually, and Spectrum Labs' superpower is spotting those larger patterns of toxic behavior, in context. Some customers have already seen a reduction of 75% or more in violent speech, with toxic messages headed off before they ever reach users, while the trickier, ambiguous cases are flagged to human moderators on the Trust and Safety team.
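To make the distinction concrete, here is a minimal, hypothetical sketch in Python of the difference between single-message keyword matching and a contextual check that scores a pattern across a conversation. The phrase lists, scoring, and thresholds below are illustrative assumptions, not Spectrum Labs' actual models.

```python
from collections import deque

TOXIC_PHRASES = {"kill yourself", "you're worthless"}  # hypothetical blocklist


def keyword_flag(message: str) -> bool:
    """Legacy-style check: looks at one message in isolation."""
    text = message.lower()
    return any(phrase in text for phrase in TOXIC_PHRASES)


class ContextualScorer:
    """Toy stand-in for contextual detection: tracks a rolling window of a
    user's recent messages and flags escalating hostility that no single
    message would trigger on its own."""

    HOSTILE_TERMS = ("hate", "stupid", "shut up", "nobody likes you")

    def __init__(self, window: int = 5, threshold: int = 2):
        self.scores = deque(maxlen=window)
        self.threshold = threshold

    def flag(self, message: str) -> bool:
        text = message.lower()
        # Hypothetical per-message signal; a real system would use an ML model.
        self.scores.append(sum(term in text for term in self.HOSTILE_TERMS))
        # The decision is made on the pattern across the window, not one message.
        return sum(self.scores) >= self.threshold


if __name__ == "__main__":
    conversation = ["you're so stupid", "just shut up", "nobody likes you here"]
    scorer = ContextualScorer()
    for msg in conversation:
        print(f"{msg!r:28} keyword={keyword_flag(msg)} contextual={scorer.flag(msg)}")
```

In this toy example, no single message trips the keyword filter, but the accumulated signal across the last few messages does, which is exactly the kind of pattern a purely message-by-message classifier misses.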

 "Our customers put the safety of their community first ? and are seeing better retention rates and satisfaction. Our technology gives them the visibility and power to easily know what's happening on their platforms, any time, and in real time."

"In 16 years of working in tech, this is the first company I've been with where we are actually saving and improving lives ? users, players, kids, and moderators. We never forget that online experiences can have offline impact, so we're excited to continue helping companies make the Internet safer and healthier for their users," Davis added.

Spectrum Labs has built a library of large labeled datasets for over 40 unique models of toxic behavior, such as self-harm, child abuse/sexual grooming, terrorism, human trafficking, cyberbullying, radicalization and more, across multiple languages. Spectrum Labs centralizes its library of models across languages and then democratizes access so that each client can tune the service to its own specific platform and policies. There is no one-size-fits-all model, because a) it doesn't exist and b) it doesn't work (see the daily headlines of one-size-fits-all keyword recognition failing, with disastrous consequences).

This collaborative approach solves the "cold start" problem of launching new models without training data and brings together a fractured, siloed data landscape. It gives online platforms the ability to automate their moderation needs at scale, while allowing human judgment to remain the final arbiter of what to allow on their platform.
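The last two paragraphs describe two moving parts: per-client tuning of shared behavior models, and an automation pipeline with humans as the final arbiter. Below is a minimal sketch of how a platform might wire those together; the behavior names, confidence thresholds, and routing actions are illustrative assumptions, not Spectrum Labs' actual API.

```python
from dataclasses import dataclass, field


@dataclass
class PlatformPolicy:
    """Each client tunes shared behavior models to its own rules (hypothetical)."""
    auto_action: dict = field(default_factory=lambda: {
        "hate_speech": 0.95,      # act automatically above this confidence
        "sexual_grooming": 0.90,
    })
    human_review: dict = field(default_factory=lambda: {
        "hate_speech": 0.60,      # ambiguous cases go to the Trust & Safety team
        "sexual_grooming": 0.50,
    })


def route(behavior: str, confidence: float, policy: PlatformPolicy) -> str:
    """Route a model detection according to the client's own policy."""
    if confidence >= policy.auto_action.get(behavior, 1.01):
        return "auto_remove"           # high confidence: handled before users see it
    if confidence >= policy.human_review.get(behavior, 1.01):
        return "queue_for_moderator"   # human judgment is the final arbiter
    return "allow"


if __name__ == "__main__":
    policy = PlatformPolicy()
    print(route("hate_speech", 0.97, policy))  # -> auto_remove
    print(route("hate_speech", 0.70, policy))  # -> queue_for_moderator
    print(route("hate_speech", 0.20, policy))  # -> allow
```

The design point is that the thresholds live in the client's policy, so two platforms can share the same underlying behavior models while enforcing different rules.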

Additionally, the ethical use of AI, a strong commitment to diversity and inclusion, and transparent data sets are just a few of the critical elements needed to operationalize automated AI systems that can recognize and respond to toxic human behavior and content on social platforms at scale, without causing harm to employees, contractors and users.

Tiffany Xingyu Wang, Chief Strategy Officer of Spectrum Labs, said, "Whether it's the content children are watching, the dating apps adults are on, the gaming done by both children and adults, enjoying the experience safely is the priority." Wang added, "Internet safety is no longer just a nice-to-have. We're getting closer to a world where investments in trust and safety are differentiators that drive topline revenue."

Contact:

Tiffany Wang
[email protected]

