Treeline Proxy
Architecture & Pricing
Low-friction pilot • Signed policy packs • Proof-grade evidence

Pricing that lets teams start small and gives procurement something they can sign.

Treeline slots into your existing gateway stack (Envoy / API Gateway / CloudFront + WAF) while producing the artifacts security and compliance demand: signed policies, test vectors, and audit-grade telemetry.

Gateway-native
fits your stack
No raw retention
privacy-safe default
Evidence artifacts
audit narrative
Versioned policies
roll back safely
What buyers usually want first
A pilot that produces one signed policy pack + one evidence artifact they can review — before committing to annual spend.

Deploy where you already are

Treeline is a deployment pattern plus a commercial policy system. Keep your platform boring.

🧱

Envoy / Service Mesh

Ideal for internal platforms. Inline enforcement as part of existing L7 routing.

  • Per-route policy
  • Low latency
  • Standard SLOs
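As a rough illustration of per-route policy, here is a hypothetical sketch of attaching policy metadata to an Envoy route. The `treeline` metadata key, pack name, and field names are assumptions for illustration, not Treeline's actual integration contract:

```yaml
# Illustrative only: per-route policy selection via Envoy route metadata.
routes:
  - match: { prefix: "/v1/chat" }
    route: { cluster: llm_backend }
    metadata:
      filter_metadata:
        treeline:
          policy_pack: "pii-secrets-baseline"   # hypothetical pack name
          policy_version: "1.2.0"
```

The point of the pattern: enforcement stays inside the L7 routing you already operate, and each route pins an explicit policy version.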
🚪

API Gateway

Best for “AI gateway” teams. Central control for multiple applications and agents.

  • Tenant aware
  • Central governance
  • Easy onboarding
🌐

CloudFront + WAF

Great for internet-facing flows. Combine edge controls with runtime policy enforcement.

  • Edge throttles
  • Abuse protection
  • Defense-in-depth
Core design principle
Treeline should reduce operational burden, not increase it. Keep enforcement deterministic, policies versioned, and telemetry proof-grade — and let your existing gateway stack handle scale.

Pricing tiers

A simple ladder: pilot → production → regulated.

Pilot
$7.5k one-time
Best for evaluation (30 days)
  • Reference deployment guidance (choose 1: Envoy / API GW / CloudFront)
  • 1 signed policy pack (PII + secrets baseline)
  • Sample test vectors + evidence artifact
  • Telemetry schema + starter dashboard template
  • Email support
Start pilot
Production Starter
$25k/year
For internal apps moving to production
  • Base signed policy packs
  • Reference deployment pattern
  • Telemetry schema (metrics contract)
  • Quarterly policy updates
  • Email support
Get started
Enterprise
from $150k/year
For production GenAI platforms
  • Everything in Production Starter
  • Industry policy packs (PII, secrets, regulated)
  • Test vectors + CI evidence artifacts
  • Dashboard templates
  • Slack support
Talk enterprise
Regulated
from $400k/year
Gov / healthcare / finance
  • Everything in Enterprise
  • Custom policy packs + approvals workflow
  • Audit-ready evidence reports
  • Advisory sessions
  • Priority support
Talk regulated
Pricing shown is a starting point. Final pricing depends on deployment model, policy scope, support level, and compliance requirements.

Not sure where you fit?

If you can describe your deployment and risk profile, we’ll recommend a tier in one email.

What you receive

Concrete artifacts you can review, test, and hand to auditors.

Policy packs

rules.yml + thresholds.yml
manifest.json (sha256, issuer, version)
signature.sig (ed25519)

Policies ship like code. Rollouts are measurable. Rollbacks are safe.
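A manifest along these lines is what makes "policies ship like code" concrete. The field names and values below are illustrative assumptions, not Treeline's actual schema:

```json
{
  "pack": "pii-secrets-baseline",
  "version": "1.2.0",
  "issuer": "treeline",
  "signature_algorithm": "ed25519",
  "files": {
    "rules.yml": "sha256:9f2c…",
    "thresholds.yml": "sha256:4b11…"
  }
}
```

Because every file is pinned by digest and the manifest itself is signed, a rollout is a version bump and a rollback is a checkout of the previous pack.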

Evidence artifacts

test_vectors/
expected_results.json
CI output logs
evidence-vX.Y.Z.zip

Machine-verifiable proof that policy vX did what it claims.
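"Machine-verifiable" can be as simple as re-hashing each shipped file against the digests recorded in the manifest. This is a minimal sketch assuming a manifest with a `sha256` map; the file names, contents, and manifest fields are illustrative, and signature verification of the manifest itself (ed25519) would be a separate step:

```python
# Sketch: check that policy files match the sha256 digests in a manifest.
# Names and schema are illustrative assumptions, not Treeline's actual format.
import hashlib


def verify_pack(files: dict, manifest: dict) -> bool:
    """Return True iff every file listed in the manifest hashes as recorded."""
    for name, expected in manifest["sha256"].items():
        actual = hashlib.sha256(files[name]).hexdigest()
        if actual != expected:
            return False
    return True


rules = b"deny: [pii, secrets]\n"
manifest = {
    "version": "1.2.0",
    "issuer": "treeline",
    "sha256": {"rules.yml": hashlib.sha256(rules).hexdigest()},
}
print(verify_pack({"rules.yml": rules}, manifest))  # True
```

The same check run in CI against `test_vectors/` and `expected_results.json` is what turns a policy release into an evidence artifact.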

The outcome
Deterministic runtime governance with a clean audit narrative — without retaining sensitive prompt content.

Privacy & compliance posture

Designed to support governance workflows without creating new data-retention risk.

🧊

Data minimization

Default: export derived signals (decision, rule hits, counters, latency). Keep raw text out of logs.
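A derived-signal record might look like the sketch below: the enforcement decision, which rules fired, and timing, with no field that could carry prompt text. Field names are illustrative assumptions, not Treeline's telemetry schema:

```python
# Sketch of the data-minimization default: emit derived signals only,
# never raw request or prompt content. Field names are illustrative.
import time


def derived_signal(decision: str, rule_hits: list, latency_ms: float) -> dict:
    """Build a telemetry record containing no raw text, only derived fields."""
    unique_hits = sorted(set(rule_hits))
    return {
        "ts": int(time.time()),
        "decision": decision,            # e.g. "allow" / "block"
        "rule_hits": unique_hits,        # rule identifiers, not matched text
        "hit_count": len(unique_hits),
        "latency_ms": round(latency_ms, 2),
    }


record = derived_signal("block", ["pii.email", "secrets.aws_key"], 3.417)
print(record["decision"], record["hit_count"])  # block 2
```

Everything downstream (dashboards, alerting, audit queries) works from records like this, so logs never become a second copy of sensitive data.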

🔎

Traceability

Every decision maps to a signed policy version. Evidence artifacts provide a verifiable record.

🛡️

Regulated options

For regulated environments, add approvals, reporting, and framework-aligned evidence outputs.

FAQ

Straight answers to the questions your security and platform teams will ask.

Do we have to route all traffic through Treeline?

No. Start with high-risk routes (external-facing, regulated, agent tool calling) and expand. Per-route policies are part of the model.

How do you handle privacy?

Default: do not retain raw prompts. Export derived signals (decision, hit types, counters, latency). If you retain samples, they must be redacted and time-bounded.

Why “deterministic” instead of LLM-based moderation?

Determinism is explainable and auditable. You can layer ML classifiers later, but enforcement must remain bounded and defensible in regulated environments.

Ready to talk?

Email val@tirman.com. We’ll align on deployment model, compliance needs, and a policy pack plan.