DAY 1 14:20-14:35 JST Main Room B
Ja / En / Ko
Onsite

Protecting Service Safety with Humans and AI: Three Key Lessons from Developing a Content-Moderation AI

Bridging the gap between AI development and practical implementation is a common challenge in AI projects. In this session, we'll share how we tackled this challenge and built "AI that works in practice," using our content-moderation AI implementation as a case study.

We’ll focus on three practical approaches:

  1. Domain Specialization: Developing AutoML mechanisms specialized for moderation and models optimized for specific service characteristics
  2. Operational Perspective: Improving accuracy by incorporating operational staff knowledge into training data
  3. Holistic View: Comprehensive improvements including automated suggestion of decision rationales and optimization of surrounding processes

We'll concisely share the outcomes and key learnings from these approaches, illustrated with practical examples.

We welcome anyone facing challenges with content moderation or AI implementation in business operations.

Speaker

Nishimura Tomohiro / LY Corporation

Infrastructure Group / Security Platform Division / Content Abuse Prevention Team

Born in 1992. I entered university intending to study user interfaces, but somehow found myself growing tomatoes in a hardware security laboratory. I joined Yahoo Japan Corporation as an engineer in the security department. Currently, as the product owner of an internal AutoML tool specialized in content moderation, I aim to bridge AI technology with practical challenges in the field.
