GLiGuard: Schema-Conditioned Classification for LLM Content Moderation

ABSTRACT

Ensuring safe, policy-compliant outputs from large language models requires real-time content moderation that can scale across multiple safety dimensions. However, state-of-the-art guardrail models rely on autoregressive decoders with 7B–27B parameters, reformulating what is fundamentally a classification problem as sequential text generation, a design choice that incurs high latency and scales poorly to multi-aspect evaluation. In this work, we introduce GLiGuard, a 0.3B-parameter schema-conditioned bidirectional encoder that reframes LLM content moderation as multi-aspect classification. The key idea is to encode task definitions and label semantics directly into the input sequence as structured token schemas, enabling the model to simultaneously evaluate prompt safety, response safety, refusal detection, 14 fine-grained harm categories, and 11 jailbreak strategies in a single non-autoregressive forward pass. Crucially, this schema-conditioned design allows any subset of the supported task and label blocks to be composed in the input at inference time. Across nine established safety benchmarks, GLiGuard achieves F1 scores competitive with 7B–27B decoder-based guards despite being 23–90× smaller, while delivering up to 16× higher throughput and 17× lower latency. These results suggest that compact bidirectional encoders can approach the accuracy of much larger guard models while drastically reducing inference cost.
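To make the schema-conditioned setup concrete, the sketch below assembles task and label blocks into a single input sequence and reads off one prediction per task, mimicking a single non-autoregressive pass. The bracket markup, task names, label names, and the scoring function are illustrative assumptions, not GLiGuard's actual schema format or model.

```python
# Minimal sketch of schema-conditioned multi-aspect classification.
# All token markup below ([TASK], [LABELS], <label>, [SEP]) is a
# hypothetical format for illustration only.

def build_schema(tasks):
    """Compose task and label blocks into a single schema prefix."""
    blocks = []
    for task, labels in tasks.items():
        labels_str = " ".join(f"<{label}>" for label in labels)
        blocks.append(f"[TASK] {task} [LABELS] {labels_str}")
    return " ".join(blocks)

def classify(sequence, tasks, score_fn):
    """One 'forward pass': score every label token for every task,
    then take the argmax per task. In the real model, these scores
    would come from the bidirectional encoder's logits at the label
    token positions; here score_fn is a stand-in."""
    results = {}
    for task, labels in tasks.items():
        scores = {label: score_fn(task, label, sequence) for label in labels}
        results[task] = max(scores, key=scores.get)
    return results

# Any subset of supported tasks can be composed at inference time
# simply by including or omitting its block in the schema.
tasks = {
    "prompt_safety": ["safe", "unsafe"],
    "refusal": ["refusal", "compliance"],
}
schema = build_schema(tasks)
sequence = f"{schema} [SEP] User: how do I bake bread?"

# Dummy deterministic scorer standing in for encoder logits.
dummy_score = lambda task, label, seq: float(len(label))
results = classify(sequence, tasks, dummy_score)
```

Because every task block is evaluated in the same pass, adding or dropping an aspect changes only the input schema, not the model or the number of decoding steps.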
