One line of code to score, analyze, and protect every AI API call. 1006 attack patterns. Self-learning ML. Near-zero latency overhead.
No SDKs to install. No config files. Just change one URL.
Sign up and get your API key instantly. No credit card required.
Change one URL in your existing code. We auto-detect your LLM provider.
Every request is scored in real time. Risk headers on every response.
See how PromptCI scores prompts in real time
Enterprise-grade protection with zero complexity.
13 categories covering injection, manipulation, extraction, encoding, social engineering, and more. Pattern-based scoring in under 1ms.
Your firewall gets smarter with every request. Online SGD updates per-customer models. No raw data stored — only 50KB of weights.
X-PromptCI-Score and X-PromptCI-Verdict on every response. Your application reads the headers and decides the action.
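As a minimal sketch of what "reads the headers and decides the action" could look like: the header names below come from this page, but the verdict values ("allow", "flag", "block"), the score scale, and the thresholds are illustrative assumptions, not documented PromptCI behavior.

```python
# Hypothetical sketch: branch on PromptCI risk headers from a proxied response.
# Header names are from the docs above; verdict values and thresholds are assumptions.

def decide(headers: dict) -> str:
    """Return an application-side action based on PromptCI risk headers."""
    score = float(headers.get("X-PromptCI-Score", "0"))
    verdict = headers.get("X-PromptCI-Verdict", "allow")
    if verdict == "block" or score >= 0.9:
        return "reject"   # refuse to return the completion
    if verdict == "flag" or score >= 0.5:
        return "review"   # serve it, but log for human review
    return "serve"

# Example with mocked response headers:
print(decide({"X-PromptCI-Score": "0.97", "X-PromptCI-Verdict": "block"}))  # reject
print(decide({"X-PromptCI-Score": "0.12", "X-PromptCI-Verdict": "allow"}))  # serve
```

Because the decision lives in your code, you choose what "reject" or "review" means for your product.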
Auto-detects OpenAI, Claude, Gemini, Azure, and Ollama formats. Native protocol forwarding — no translation overhead.
Monitor → Flag → Enforce. Start passive to build confidence, tighten controls when you're ready. Your rules, your thresholds.
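The Monitor → Flag → Enforce rollout might be sketched in application code like this; the mode names, threshold, and return values are assumptions for illustration, not PromptCI configuration options.

```python
# Hypothetical sketch of a Monitor -> Flag -> Enforce rollout.
# Mode names and the threshold are application-side assumptions.

def handle(score: float, mode: str, threshold: float = 0.8) -> str:
    risky = score >= threshold
    if mode == "monitor":   # passive: always serve, just record the score
        return "serve"
    if mode == "flag":      # serve, but mark risky requests for review
        return "flagged" if risky else "serve"
    if mode == "enforce":   # block risky requests outright
        return "blocked" if risky else "serve"
    raise ValueError(f"unknown mode: {mode}")

print(handle(0.95, "monitor"))  # serve
print(handle(0.95, "flag"))     # flagged
print(handle(0.95, "enforce"))  # blocked
```

Starting in a passive mode lets you observe score distributions on real traffic before any request is ever blocked.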
We never store your prompts. Customer API keys encrypted with AES-256-GCM. ML models retain only mathematical weights.
Three scoring engines in series. Every request analyzed in under 3ms total.
Change one URL. That's the entire integration.
from openai import OpenAI

client = OpenAI(
    base_url="https://proxy.promptci.dev/v1",
    default_headers={"X-PromptCI-Key": "your_api_key"}
)

# That's it. Every request is now protected.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}]
)
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://proxy.promptci.dev/v1",
  defaultHeaders: { "X-PromptCI-Key": "your_api_key" },
});

const response = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Hello!" }],
});
curl https://proxy.promptci.dev/v1/chat/completions \
  -H "X-PromptCI-Key: your_api_key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
import openai "github.com/sashabaranov/go-openai"

config := openai.DefaultConfig("your_openai_key")
config.BaseURL = "https://proxy.promptci.dev/v1"
// Add the X-PromptCI-Key header via a custom http.RoundTripper on config.HTTPClient
client := openai.NewClientWithConfig(config)
Start free. Scale when you're ready.
No credit card required. 15 free API requests.