Artist interpretation of the creatures talking about your mom on Xbox Live last night. (Aurich Lawson / Thinkstock)
Anyone who’s worked in community moderation knows that finding and removing bad content becomes exponentially tougher as a communications platform reaches into the millions of daily users. To help with that problem, Microsoft says it’s turning to AI tools to help “accelerate” its Xbox moderation efforts, letting these systems automatically flag content for human review without needing a player report.
Microsoft’s latest Xbox transparency report—the company’s third public look at enforcement of its community standards—is the first to include a section on
→ Continue reading at Ars Technica