A key concern when building a web or mobile application that uses OpenAI's language models is protecting your private API key. If you want to keep complexity low and release features quickly, setting up a fully fledged backend just to safeguard an API key can be time-consuming. In this article, you'll learn how to avoid the most common security mistakes when integrating the OpenAI API into a client-side app, without sacrificing simplicity or safety. Overlooking these pitfalls leaves the door open to OpenAI API misuse, which can drain your usage quota and run up your bill.
Putting your API key directly in client-side code (using any type of web or mobile framework) as shown in the sample frontend code below is tempting, but also very risky.
import OpenAI from "openai";

const client = new OpenAI({
  dangerouslyAllowBrowser: true,
  apiKey: 'sk-proj-YOURSECRETKEY',
});

const completion = await client.chat.completions.create(...)
Anyone can inspect your code (even if it is minified or obfuscated), extract your key, and abuse it. Don't let this happen to you. This is exactly why the OpenAI SDK requires the dangerouslyAllowBrowser flag: to make you pause before shipping a key to the browser and to push you toward a layer of protection that keeps the secret key away from the end user's browser or device.
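To see why minification is no defense, consider this sketch (the bundle contents and function name are hypothetical) that scans a built bundle for key-shaped strings, exactly the way an attacker's script would:

```javascript
// Hypothetical leak check: grep a built bundle for anything shaped like an
// OpenAI secret key. Minification cannot help, because the literal key string
// must survive in the shipped JavaScript for the SDK to use it.
const OPENAI_KEY_PATTERN = /sk-(?:proj-)?[A-Za-z0-9_-]{20,}/g;

function findLeakedKeys(bundleSource) {
  return bundleSource.match(OPENAI_KEY_PATTERN) ?? [];
}

// Even a minified bundle still contains the key verbatim:
const minifiedBundle =
  'const c=new e({dangerouslyAllowBrowser:!0,apiKey:"sk-proj-YOURSECRETKEY0000000000"});';
console.log(findLeakedKeys(minifiedBundle)); // the key, in plain sight
```

An attacker doesn't even need to read your code: a one-line regex over the bundle is enough.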
Putting your OpenAI API key as a secret in a hosted server or cloud function and calling that endpoint from your client is much better than exposing it directly, but it is still unsafe. Let's take the following client-side application code:
const opair = await fetch("https://functionroute.yourcloud.com/v1/chat/completions", {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({...})
});
Alongside the cloud function OpenAI API proxy it is calling:
import route from "yourcloud";

route.post('/', async (req, res) => {
  const opair = await fetch(`https://api.openai.com${req.path}`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${env.OPENAI_SECRET_KEY}`
    },
    body: JSON.stringify(req.body),
  });
  return res.send(await opair.json());
});
Since this proxy is publicly accessible at https://functionroute.yourcloud.com/, an attacker can bypass your app entirely and spend your OpenAI API quota without ever needing your private key.
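The attack is trivially scriptable. Here is a sketch of the attacker's side (the endpoint is the article's placeholder URL; the model and prompt are illustrative), split into a pure request builder so you can see there is no secret anywhere in it:

```javascript
// Sketch of abusing an unauthenticated OpenAI proxy: no key, no app,
// just a direct POST to the public endpoint.
function buildAbuseRequest(prompt) {
  return {
    url: "https://functionroute.yourcloud.com/v1/chat/completions",
    options: {
      method: "POST",
      headers: { "Content-Type": "application/json" }, // no credentials needed
      body: JSON.stringify({
        model: "gpt-4o",
        messages: [{ role: "user", content: prompt }],
      }),
    },
  };
}

// An attacker can loop this from any machine, billing you for every call:
const { url, options } = buildAbuseRequest("write my homework");
// await fetch(url, options);
```

Note that the request carries no Authorization header at all; your cloud function attaches your secret key on the attacker's behalf.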
Putting your OpenAI API key as a secret in a hosted server or cloud function and adding authentication so only your users can call the proxy endpoint is quite common and still much better than the setups laid out thus far. However, it is still unsafe. Let's take the following frontend code:
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(SUPABASE_URL, SUPABASE_ANON_KEY);
const { data: { session } } = await supabase.auth.getSession();
const jwt = session.access_token;

const opair = await fetch("https://functionroute.yourcloud.com/v1/chat/completions", {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Authorization': `Bearer ${jwt}`
  },
  body: JSON.stringify({...}),
});
Alongside the cloud function OpenAI API authenticated proxy it is calling:
import route from "yourcloud";
import { createClient } from '@supabase/supabase-js'

const supabase = createClient(...);
route.post('/', async (req, res) => {
  try {
    // 1. Get the token from the Authorization header
    const authHeader = req.headers['authorization'] || '';
    const jwt = authHeader.split(' ')[1]; // Bearer <token>
    if (!jwt) {
      return res.status(401).json({ error: 'No token provided' });
    }
    // 2. Verify the token with Supabase
    const { data: { user }, error } = await supabase.auth.getUser(jwt);
    if (error || !user) {
      return res.status(401).json({ error: 'Unauthorized' });
    }
    // 3. The token is valid, so forward the request to OpenAI
    const opair = await fetch(`https://api.openai.com${req.path}`, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'Authorization': `Bearer ${env.OPENAI_SECRET_KEY}`
      },
      body: JSON.stringify(req.body),
    });
    return res.send(await opair.json());
  } catch (error) {
    console.error(error);
    return res.status(401).json({ error: 'Invalid token' });
  }
});
This backend integration for the OpenAI API looks fine at first glance and provides some safety. However, there are three problems with it:

1. The proxy forwards any path, so an authenticated user can reach every OpenAI endpoint under https://functionroute.yourcloud.com/v1/, not just the ones your app actually uses.
2. JWTs, or authentication tokens, typically expire within an hour. That means someone can generate a token from your client-side code and (mis)use it until it expires. Then rinse and repeat.
3. Nothing limits how often each user can call the proxy, so even a legitimate account can rack up your OpenAI bill.

For example, the first problem lets any authenticated user list the files you have uploaded to OpenAI:

import { createClient } from "@supabase/supabase-js";

const supabase = createClient(SUPABASE_URL, SUPABASE_ANON_KEY);
const { data: { session } } = await supabase.auth.getSession();
const jwt = session.access_token;

const opair = await fetch("https://functionroute.yourcloud.com/v1/files", {
  method: 'GET',
  headers: {
    'Content-Type': 'application/json',
    'Authorization': `Bearer ${jwt}`
  },
});
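The expiry window is worse than it sounds because JWTs are readable by anyone who holds them. This sketch (the token is fabricated for illustration; real tokens are signed by your auth provider) shows how easily a lifted token's remaining lifetime can be inspected:

```javascript
// A JWT is three base64url segments: header.payload.signature.
// The payload is NOT encrypted, so anyone holding the token can read it.
function decodeJwtPayload(jwt) {
  const payloadB64 = jwt.split(".")[1];
  return JSON.parse(Buffer.from(payloadB64, "base64url").toString("utf8"));
}

// Fabricated token with a one-hour expiry, mimicking a typical auth session:
const header = Buffer.from(JSON.stringify({ alg: "HS256" })).toString("base64url");
const payload = Buffer.from(
  JSON.stringify({ sub: "user-123", exp: Math.floor(Date.now() / 1000) + 3600 })
).toString("base64url");
const fakeJwt = `${header}.${payload}.signature`;

const { exp } = decodeJwtPayload(fakeJwt);
const secondsLeft = exp - Math.floor(Date.now() / 1000);
console.log(secondsLeft); // roughly 3600: a full hour of replayable access
```

Anyone who extracts a token from your shipped client knows exactly how long they can replay it against your proxy.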
To prevent the vulnerabilities laid out thus far, you should implement the following safeguards in your backend:

1. Verify an authentication token on every request so only your users can reach the proxy.
2. Only forward the specific OpenAI API endpoints your app actually needs.
3. Enforce per-user rate limits so that no single token, stolen or legitimate, can make unlimited calls.
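Beyond token verification, the remaining safeguards boil down to an endpoint allowlist plus a per-user rate limit. Here is a minimal sketch (all names, limits, and the in-memory store are illustrative; production code would use shared storage like Redis):

```javascript
// Hypothetical safeguards for an OpenAI proxy: only listed endpoints may be
// forwarded, and each user gets a small per-minute request budget.
const ALLOWED_PATHS = new Set(["/v1/chat/completions"]);

const WINDOW_MS = 60_000;
const MAX_REQUESTS_PER_WINDOW = 20;
const usage = new Map(); // userId -> { windowStart, count }

function isPathAllowed(path) {
  return ALLOWED_PATHS.has(path);
}

function allowRequest(userId, now = Date.now()) {
  const entry = usage.get(userId);
  if (!entry || now - entry.windowStart >= WINDOW_MS) {
    usage.set(userId, { windowStart: now, count: 1 }); // new window
    return true;
  }
  entry.count += 1;
  return entry.count <= MAX_REQUESTS_PER_WINDOW;
}

// In the proxy handler, after verifying the JWT and before forwarding:
//   if (!isPathAllowed(req.path)) return res.status(403).json({ error: 'Forbidden' });
//   if (!allowRequest(user.id))   return res.status(429).json({ error: 'Rate limited' });
```

With these two checks in place, the /v1/files attack above returns 403, and a replayed token can make at most a bounded number of calls before its window closes.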
Implementing a proper integration with the OpenAI API in a backend can be time-consuming. A Backend-as-a-Service (BaaS) like Backmesh abstracts this complexity, handling authentication, rate limits, and permission enforcement for you, without requiring you to build or maintain a backend. The idea is straightforward: your client authenticates its users as usual, and Backmesh proxies their requests to the OpenAI API using your secret key, which never leaves its servers.
The resulting client-side code is described below and is taken from one of our tutorials:
import OpenAI from "openai";
import { createClient } from "@supabase/supabase-js";

const BACKMESH_PROXY_URL =
  "https://edge.backmesh.com/v1/proxy/gbBbHCDBxqb8zwMk6dCio63jhOP2/wjlwRswvSXp4FBXwYLZ1/v1";

const supabase = createClient(SUPABASE_URL, SUPABASE_ANON_KEY);
const { data: { session } } = await supabase.auth.getSession();

const client = new OpenAI({
  baseURL: BACKMESH_PROXY_URL,
  dangerouslyAllowBrowser: true,
  apiKey: session.access_token,
});
From your perspective, there's "no backend" to manage: no servers, no DevOps, no complicated configurations. But under the hood, Backmesh is a secure middleman between your frontend and the OpenAI API. Read more about Backmesh's security considerations here or check out the open source code on GitHub.