
Startup Trust Lab is gearing up to be the internet's trust and safety department


Tom Siegel, CEO of Trust Lab (Image: Trust Lab)

Thanks in part to mass layoffs, trust and safety departments that oversee content moderation and monitor malicious actors are being scaled back at major tech companies. Twitter, for one, has gutted its own department.

As the ranks of permanent positions at these departments grow thinner, startup Trust Lab is attempting to capitalize on the retrenchment by offering third-party services that act as an outsourced trust and safety team.

The Palo Alto firm uses machine learning to help tech companies identify and moderate harmful content and stay in compliance with the various laws governing moderation around the world. It also works with governments to identify bad actors and to distill threats and misinformation into readable data.

"Because of some of the staffing trends, tech companies are relying on third parties to augment what they're doing," said Trust Lab CEO Tom Siegel. "But we're most excited about the new crop of future companies — the future YouTubes and Tik Toks — which I don't think necessarily want to build all this out in house."

Trust Lab just raised $15 million in a Series A round led by U.S. Venture Partners (USVP) and Foundation Capital, and plans to use the funding to scale up its operations ahead of major elections happening around the world, as well as European content moderation regulations coming into effect.

Siegel founded Google's trust and safety team and has assembled a founding group with trust and safety experience — its chief product officer, Benji Loney, was formerly director of trust and safety at ByteDance (the owner of TikTok).

Its clients are mostly confidential, but the company says it works with five of the top 10 social media platforms. It also works with the European Commission to study terrorist speech online, examining whether social media algorithms feed more terrorist content to users who engage with it, the same way they do for more banal topics.

Siegel also sees both opportunity and cause for concern in the recent proliferation of generative AI. He says generative AI models can be used to identify threats and misinformation and can serve as the basis for new trust and safety technologies, but they can also be used to generate massive amounts of misinformation and harmful content.

"Governments are having a lot of conversations on the topic — in the US they had hearings in the Senate and in Europe, they have draft regulation on how they actually deal with generative AI from from regulatory perspective," Siegel said. "Depending on how that goes, that could also have a huge boost to trust and safety or it could make the space a lot more challenging as well."

