Guardrails, DLP, and observability for GenAI

Your models aren’t the risk—your runtime is. Treeline is a drop-in sidecar that stops secrets, redacts PII/PHI, governs tool-use, and exports ground-truth metrics—without changing your app.

No code changes · OpenAI · Anthropic · Azure · Vertex · Bedrock · Sidecar or Gateway · Prometheus + OTLP/OTel

Why put guardrails on the wire?

LLM leaks don’t look like queries; they look like runtime behavior: pasted secrets, hidden PII, jailbreak prompts, tool calls that reach into prod. Policies in code reviews don’t help if the risk is in the conversation. Treeline enforces where it matters—on the wire.

Enforce

Block bearer tokens & API keys, redact PII/JWTs, gate tools & file I/O; fail-open or fail-closed.
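To make the enforcement step concrete, here is a minimal sketch of pattern-based redaction. The rule names and regexes are simplified illustrations, not Treeline's shipped rule set; real detectors cover far more formats and validate matches before acting.

```python
import re

# Illustrative patterns only -- stand-ins for a production rule set.
RULES = {
    "bearer_token": re.compile(r"Bearer\s+[A-Za-z0-9._~+/-]+=*"),
    "jwt": re.compile(r"\beyJ[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+\b"),
    "aws_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
}

def redact(text: str) -> tuple[str, list[str]]:
    """Replace each match with [REDACTED]; return cleaned text and fired rules."""
    fired = []
    for name, pattern in RULES.items():
        text, n = pattern.subn("[REDACTED]", text)
        if n:
            fired.append(name)
    return text, fired
```

On the wire, the same logic runs against request and response bodies before they leave your network, so a pasted key never reaches the provider.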

Observe

Prometheus counters + OTLP traces show which rules fired and why—no black boxes.
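A sketch of what "which rules fired and why" looks like as a counter in Prometheus text exposition format. The metric name `guardrail_rule_fired_total` is hypothetical; Treeline's actual metric names may differ.

```python
from collections import Counter

# In-memory counter keyed by (rule, action) -- a stand-in for a real
# Prometheus client registry.
rule_hits = Counter()

def record(rule: str, action: str) -> None:
    rule_hits[(rule, action)] += 1

def render_prometheus() -> str:
    """Render the counters in Prometheus text exposition format."""
    lines = ["# TYPE guardrail_rule_fired_total counter"]
    for (rule, action), n in sorted(rule_hits.items()):
        lines.append(
            f'guardrail_rule_fired_total{{rule="{rule}",action="{action}"}} {n}'
        )
    return "\n".join(lines)
```

Because every verdict increments a labeled counter, a scrape of the metrics endpoint answers "what got blocked, by which rule, how often" without touching application logs.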

Fit

Sidecar or egress gateway; OpenAI/Anthropic/Azure/Vertex/Bedrock/local; any SDK.

How it works

  1. Intercept — transparent HTTP(S) proxy.
  2. Inspect — secrets, PII/PHI, JWTs, jailbreak/exfil; optional semantic checks.
  3. Decide — allow / redact / block; headers off in prod.
  4. Observe — export metrics/logs/traces; wire into SIEM.
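The decide step above can be sketched as a small policy function. The rule-to-verdict mapping and the default of fail-closed are illustrative assumptions, not Treeline's actual configuration.

```python
from enum import Enum

class Verdict(Enum):
    ALLOW = "allow"
    REDACT = "redact"
    BLOCK = "block"

# Hypothetical policy: which fired rules force which verdict.
BLOCK_RULES = {"bearer_token", "api_key"}
REDACT_RULES = {"pii_email", "jwt"}

def decide(fired: set[str], *, fail_closed: bool = True,
           inspector_ok: bool = True) -> Verdict:
    """Map inspection results to a verdict, honoring fail-open vs fail-closed."""
    if not inspector_ok:
        # Inspector error: fail-closed blocks the request, fail-open lets it pass.
        return Verdict.BLOCK if fail_closed else Verdict.ALLOW
    if fired & BLOCK_RULES:
        return Verdict.BLOCK
    if fired & REDACT_RULES:
        return Verdict.REDACT
    return Verdict.ALLOW
```

Blocking outranks redaction: if any block-level rule fires, the request is rejected outright rather than cleaned and forwarded.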

Quickstart (copy/paste)

curl -fsSL https://treelineproxy.io/downloads/compose.quickstart.yml -o compose.yml
docker compose -f compose.yml up -d
curl -fsSL https://treelineproxy.io/downloads/smoke.sh -o smoke.sh && chmod +x smoke.sh && ./smoke.sh

Expect: /ready → LIVE · header/key → 403 · bodies → [REDACTED] · metrics at 127.0.0.1:9096/metrics

Prompt Scanner

Local-first checks—no data leaves your browser.

Prompt Workshop

Compose robust prompts with structure, constraints, and guardrails.