A drop-in OpenAI proxy that gives your team prompt-level cost visibility, version tracking, and usage analytics — without touching your data.
Your team is shipping AI features fast. But nobody knows which prompts are expensive, which teams are overspending, or whether that "improved" prompt actually reduced costs.
PromptLens fixes that in 5 minutes.
See exactly which prompts are driving your AI spend, ranked by cost and request volume.
Break down costs by team so you know who's spending what — perfect for chargeback reporting.
Track configuration changes over time and compare cost impact across versions side-by-side.
Tag requests by feature to understand costs per product area with a single header.
Only metadata is logged. Your prompt content and responses never leave your infrastructure.
One URL change. No SDK required. Works with any OpenAI-compatible client you already use.
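Feature tagging can be sketched as a tiny header helper. Note the header name `X-Feature` below is a placeholder assumption — the copy above only says "a single header" without naming it, so check the dashboard docs for the exact header PromptLens expects.

```javascript
// Sketch: attach a per-feature tag to an outgoing request's headers.
// 'X-Feature' is a hypothetical header name, not confirmed by this page.
function withFeatureTag(headers, feature) {
  return { ...headers, 'X-Feature': feature };
}

const headers = withFeatureTag(
  {
    Authorization: 'Bearer pl_your_key',
    'X-Prompt-Key': 'document-summarizer',
  },
  'search-autocomplete' // illustrative feature label
);
```

Because it only merges plain objects, a helper like this can wrap any fetch-based client without touching the request body.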
PromptLens sits between your app and OpenAI. Every request passes through unchanged.
No prompt content stored · No response logging · Async metadata only
Change your base URL and add a header. That's it.
const response = await fetch(
  'https://your-app.vercel.app/api/proxy/chat/completions',
  {
    method: 'POST',
    headers: {
      'Authorization': 'Bearer pl_your_key',
      'X-Prompt-Key': 'document-summarizer', // ← add this
    },
    body: JSON.stringify({ model: 'gpt-4o', messages }),
  }
);
// Response is identical to OpenAI's — nothing changes

Bring your own OpenAI API key and get full access to cost analytics, prompt management, and team dashboards at no cost. We track usage metrics internally so you don't have to — none of your data is ever exposed or shared.
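For clients that build OpenAI URLs by hand, the "one URL change" amounts to swapping the base. A minimal sketch, reusing the example proxy address from the snippet above; official OpenAI SDKs already expose a base-URL override (e.g. the `baseURL` client option in openai-node), so a helper like this is only needed for hand-rolled clients.

```javascript
// Sketch: rewrite an OpenAI API URL to go through the proxy instead.
const OPENAI_BASE = 'https://api.openai.com/v1';
const PROXY_BASE = 'https://your-app.vercel.app/api/proxy'; // example address

function toProxyUrl(url) {
  // Only rewrite URLs that actually target the OpenAI API.
  return url.startsWith(OPENAI_BASE)
    ? PROXY_BASE + url.slice(OPENAI_BASE.length)
    : url;
}

toProxyUrl('https://api.openai.com/v1/chat/completions');
// → 'https://your-app.vercel.app/api/proxy/chat/completions'
```

Because the path and request body are untouched, everything downstream of the URL (auth headers, streaming, response parsing) keeps working as before.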
“We had no idea our summarization prompt was 3× more expensive than everything else combined. PromptLens made that obvious in the first hour.”
Early user · Backend engineering team
Set up in 5 minutes. No credit card required.