A lightweight, fast, and easy-to-use service for detecting LLM prompt injection attempts before they reach your model. No extra LLM calls and minimal added latency — just a simple API that returns true or false.
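A call to the service might look like the following minimal sketch. The endpoint URL and the request/response field names (`prompt`, `injection`) are assumptions for illustration, not the service's actual schema — check the API reference for the real contract.

```python
import json
from urllib import request

# Assumed endpoint for a local deployment; adjust to where the service runs.
API_URL = "http://localhost:8000/detect"

def is_injection(prompt: str) -> bool:
    """POST the prompt to the detection service and return its boolean verdict.

    The field names "prompt" and "injection" are illustrative assumptions.
    """
    body = json.dumps({"prompt": prompt}).encode("utf-8")
    req = request.Request(
        API_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with request.urlopen(req) as resp:
        return bool(json.load(resp)["injection"])
```

A caller would then gate input on the result, e.g. reject the request (or log it) when `is_injection(user_input)` is true, and forward it to the model otherwise.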