OpenAI Agents Streaming Server (Express)

Latest: 6.8.1 Last verified: 2025-11
```typescript
import express from "express";
import OpenAI from "openai";

const app = express();
const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY! });

app.get("/stream", async (req, res) => {
  // SSE response headers: keep the connection open and uncached.
  res.setHeader("Content-Type", "text/event-stream");
  res.setHeader("Cache-Control", "no-cache");

  const q = String(req.query.q || "");
  const stream = await client.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [{ role: "user", content: q }],
    stream: true,
  });

  // Forward each streamed chunk as an SSE "data:" frame.
  for await (const chunk of stream) {
    res.write(`data: ${JSON.stringify(chunk)}\n\n`);
  }
  res.end();
});

app.listen(8080, () => console.log("listening on 8080"));
```

Deployment
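Before containerizing, the server can be smoke-tested with a small client that consumes the SSE stream. The sketch below assumes Node 18+ (global `fetch`) and the server above running on localhost:8080; `parseSSE` and `consume` are illustrative names, and for simplicity the parser assumes each network chunk contains whole `data:` frames (a production client would buffer partial frames across chunk boundaries).

```typescript
// Extract assistant text deltas from raw SSE frames emitted by the
// /stream endpoint above. Each frame looks like: data: {json}\n\n
function parseSSE(raw: string): string {
  let text = "";
  for (const line of raw.split("\n")) {
    if (!line.startsWith("data: ")) continue;
    const payload = JSON.parse(line.slice("data: ".length));
    // Chat-completion stream chunks carry text in choices[0].delta.content.
    text += payload.choices?.[0]?.delta?.content ?? "";
  }
  return text;
}

// Minimal client: fetch the endpoint and feed decoded bytes to parseSSE.
async function consume(q: string): Promise<string> {
  const res = await fetch(`http://localhost:8080/stream?q=${encodeURIComponent(q)}`);
  if (!res.ok || !res.body) throw new Error(`request failed: ${res.status}`);

  const decoder = new TextDecoder();
  let out = "";
  // Node's web ReadableStream is async iterable; the cast satisfies DOM typings.
  for await (const chunk of res.body as unknown as AsyncIterable<Uint8Array>) {
    out += parseSSE(decoder.decode(chunk, { stream: true }));
  }
  return out;
}
```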
Dockerfile

```dockerfile
FROM node:20-slim
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
EXPOSE 8080
CMD ["node", "server.js"]
```

Kubernetes (deployment.yaml)
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: openai-agents-ts-stream
spec:
  replicas: 2
  selector:
    matchLabels:
      app: openai-agents-ts-stream
  template:
    metadata:
      labels:
        app: openai-agents-ts-stream
    spec:
      containers:
        - name: app
          image: ghcr.io/yourorg/openai-agents-ts-stream:latest
          env:
            - name: OPENAI_API_KEY
              valueFrom:
                secretKeyRef:
                  name: openai-secrets
                  key: apiKey
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: openai-agents-ts-stream
spec:
  selector:
    app: openai-agents-ts-stream
  ports:
    - port: 80
      targetPort: 8080
```

Security Best Practices
- Require auth tokens on the streaming route; set CORS and no-cache headers
- Implement rate limiting and timeouts at both the proxy and the app level
- Store OPENAI_API_KEY in secrets; never log request bodies or raw streamed chunks
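The app-level rate-limiting point above can be sketched as a dependency-free token bucket kept per client IP. `TokenBucket` and `limiterFor` are illustrative names, not from any library, and the Express wiring shown in the comment is one possible placement rather than a prescribed API.

```typescript
// Token bucket: allows bursts up to `capacity`, refills at `refillPerSec`.
class TokenBucket {
  private tokens: number;
  private last: number;

  constructor(
    private capacity: number,     // max burst size
    private refillPerSec: number, // sustained requests per second
    now: number = Date.now(),
  ) {
    this.tokens = capacity;
    this.last = now;
  }

  // Returns true if a request may proceed, consuming one token.
  allow(now: number = Date.now()): boolean {
    const elapsedSec = (now - this.last) / 1000;
    this.tokens = Math.min(this.capacity, this.tokens + elapsedSec * this.refillPerSec);
    this.last = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}

// Per-IP buckets; wired into Express as, e.g.:
//   app.use((req, res, next) =>
//     limiterFor(req.ip ?? "unknown").allow() ? next() : res.sendStatus(429));
const buckets = new Map<string, TokenBucket>();
function limiterFor(ip: string): TokenBucket {
  let bucket = buckets.get(ip);
  if (!bucket) {
    bucket = new TokenBucket(10, 1); // 10-request burst, 1 req/sec sustained
    buckets.set(ip, bucket);
  }
  return bucket;
}
```

Long-lived SSE connections also need timeouts: even with this limiter in front, the proxy (and the Express server itself) should cap connection duration so a stalled stream cannot hold resources indefinitely.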