ChatGPT security audit
Built with ChatGPT? These security mistakes are extremely common.
ChatGPT is the default pair-programmer for solo builders. It is also the source of the same three or four mistakes we keep finding in production. Here is the short list — every item below has shipped to a real codebase we audited.
Common problems
What we keep finding in ChatGPT codebases.
Over-trusting middleware
Generated code assumes middleware protects everything. In reality, route exceptions, stale handlers, and edge cases stay completely exposed.
Example: a matcher in middleware skips /api/* “for performance” — and now nothing checks auth on the API.
Insecure password reset flows
Predictable tokens, long-lived reset links, no rate limits on the reset endpoint, and no session invalidation after the password actually changes.
Open CORS and public APIs
“Just make it work” becomes Access-Control-Allow-Origin: * — and now any site on the internet can call your authenticated endpoints from a victim’s browser.
Prompt fixes
What to tell the model.
Paste these into your next conversation. They steer generation toward safer defaults — but they aren't a substitute for review.
For: Over-trusting middleware
“Never rely on middleware alone for authorization. Re-check identity and permissions inside every protected route handler and server action.”
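That prompt translates to a simple pattern: the handler verifies the session itself instead of assuming middleware already did. A minimal sketch; `getSessionFromToken` and the token strings are hypothetical stand-ins for your real auth lookup.

```typescript
// Hypothetical session shape and lookup; swap in your auth library's equivalents.
type Session = { userId: string; role: "admin" | "user" } | null;

function getSessionFromToken(token: string | undefined): Session {
  // Placeholder: a real implementation validates a signed session token.
  if (token === "valid-admin-token") return { userId: "u1", role: "admin" };
  if (token === "valid-user-token") return { userId: "u2", role: "user" };
  return null;
}

// Every protected handler does its own check. Middleware is a convenience,
// not the gate: a matcher exception or stale handler must still hit this wall.
function handleDeleteUser(authHeader: string | undefined, _targetId: string): number {
  const session = getSessionFromToken(authHeader?.replace(/^Bearer /, ""));
  if (!session) return 401;                 // not logged in
  if (session.role !== "admin") return 403; // logged in, not allowed
  // ... perform the deletion ...
  return 204;
}
```

The check is cheap; the habit is the point.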
For: Insecure password reset flows
“Generate reset tokens with a CSPRNG, keep them short-lived and single-use, rate-limit the reset endpoint, and invalidate every session after credential changes.”
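Here is what that prompt should produce, sketched with Node's crypto module. The in-memory `resetStore` stands in for your database, and the 15-minute TTL is an assumption (well under the 30-minute ceiling in the checklist below).

```typescript
import { randomBytes, createHash, timingSafeEqual } from "node:crypto";

const RESET_TTL_MS = 15 * 60 * 1000; // short-lived: 15 minutes (assumed policy)

// Store only a hash of the token, so a database leak doesn't leak usable links.
interface StoredReset { tokenHash: string; expiresAt: number; }
const resetStore = new Map<string, StoredReset>(); // keyed by user id; in-memory stand-in

function issueResetToken(userId: string, now = Date.now()): string {
  const token = randomBytes(32).toString("hex"); // 256 bits from a CSPRNG, not guessable
  const tokenHash = createHash("sha256").update(token).digest("hex");
  resetStore.set(userId, { tokenHash, expiresAt: now + RESET_TTL_MS });
  return token; // emailed to the user, never stored in plaintext
}

function consumeResetToken(userId: string, token: string, now = Date.now()): boolean {
  const entry = resetStore.get(userId);
  if (!entry || now > entry.expiresAt) return false; // expired or never issued
  const candidate = createHash("sha256").update(token).digest("hex");
  const ok = timingSafeEqual(Buffer.from(candidate), Buffer.from(entry.tokenHash));
  if (ok) resetStore.delete(userId); // single-use: a second submit fails
  return ok;
}
```

After a successful `consumeResetToken`, the same code path should also revoke every active session for that user.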
For: Open CORS and public APIs
“Allow-list specific origins, deny credentials on wildcard origins, and require an explicit auth header on every cross-origin route.”
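The allow-list part of that prompt fits in one pure function. A sketch; the origins are placeholders for your real domains.

```typescript
// Explicit origin allow-list instead of "*". Example origins only.
const ALLOWED_ORIGINS = new Set([
  "https://app.example.com",
  "https://admin.example.com",
]);

// Returns the CORS headers for a response, or {} when the origin isn't trusted.
function corsHeaders(requestOrigin: string | undefined): Record<string, string> {
  if (!requestOrigin || !ALLOWED_ORIGINS.has(requestOrigin)) return {}; // no CORS grant
  return {
    "Access-Control-Allow-Origin": requestOrigin, // echo the specific origin, never "*"
    "Access-Control-Allow-Credentials": "true",   // only safe behind an allow-list
    "Vary": "Origin",                             // shared caches must key on Origin
  };
}
```

Note the asymmetry: untrusted origins get no CORS headers at all, not a wildcard with credentials bolted on (browsers reject that combination anyway).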
Manual verification
The ChatGPT checklist.
Run through each item by hand before you ship. If anything is unclear, treat it as a red flag, not a green light.
- Hit every /api/* route from an unauthenticated session and confirm a 401.
- Generate 50 password reset links and confirm tokens look uniformly random.
- Time how long a reset link is valid — anything over 30 minutes is too long.
- Inspect CORS response headers on a logged-in request from a non-allowed origin.
- Verify sessions are revoked after password change and after logout.
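The token-randomness item above can be partially automated. This is a rough sanity check, not a statistical randomness test: 50 tokens should all be distinct and should collectively use the full hex alphabet. `sampleToken` is a placeholder for requesting a real reset link and extracting its token.

```typescript
import { randomBytes } from "node:crypto";

// Placeholder: in practice, trigger a real password reset and parse the token
// out of the emailed link.
function sampleToken(): string {
  return randomBytes(32).toString("hex");
}

function looksUniform(tokens: string[]): boolean {
  // Any collision across 50 samples of a 256-bit token is a red flag.
  if (new Set(tokens).size !== tokens.length) return false;
  // Across 50 x 64 hex chars, all 16 characters should appear. Sequential or
  // timestamp-derived tokens typically fail one of these two checks.
  const alphabet = new Set(tokens.join(""));
  return alphabet.size >= 16;
}
```

Passing this check doesn't prove the tokens are secure; failing it proves they aren't.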
If issues are already live
Damage control, in order.
If you suspect any of the above already shipped to real users, work the list top-to-bottom. Don’t skip rotation.
1. Rotate any signing keys used for reset tokens and JWTs.
2. Force a global session reset for all users.
3. Tighten CORS to an explicit allow-list.
4. Add rate limits on auth, reset, and registration endpoints.
5. Re-test every endpoint with an unauthenticated session.
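The rate-limiting step can start as a fixed-window counter; a production deployment would back this with Redis or your gateway. The limits and key format here are illustrative.

```typescript
// Minimal fixed-window rate limiter for auth, reset, and registration routes.
interface Window { count: number; windowStart: number; }
const windows = new Map<string, Window>();

// key is typically "<ip>:<route>"; limit/windowMs are example numbers.
function allowRequest(key: string, limit = 5, windowMs = 60_000, now = Date.now()): boolean {
  const w = windows.get(key);
  if (!w || now - w.windowStart >= windowMs) {
    windows.set(key, { count: 1, windowStart: now }); // new window opens
    return true;
  }
  w.count += 1;
  return w.count <= limit; // reject once the window's budget is spent
}
```

Fixed windows allow short bursts at window boundaries; if that matters for your threat model, move to a sliding-window or token-bucket variant.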
Why AI-generated fixes still fail
The model that wrote the bug rarely sees it.
Same blind spots
The patterns that produced the vulnerability are baked into the model's training. Asking it to audit itself reproduces the same assumptions.
Context windows lie
The model sees the file you paste, not your auth middleware, your RLS policies, or the route you forgot to protect. It can't review what it can't see.
Confidence ≠ correctness
AI fixes look polished and read well. That's a signal of fluency, not of safety. Real verification needs a human who can hit the endpoint.
Don’t ask AI to audit AI
You wouldn’t let an intern grade their own homework.
Most builders type “is this secure?” back into the same chat that wrote the code. You need independent verification — someone whose context isn’t poisoned by what just got generated. That’s Kleared.
Shipping fast is good. Shipping exposed is expensive.
Kleared verifies what generated code missed before users — or attackers — find it. Self-serve, no demos, fix-PRs in your repo within minutes.
Before you launch
Run a real human security review.
Not another prompt. Kleared checks the boring stuff that breaks production:
- auth
- permissions
- secrets
- uploads
- database exposure
- API security
- payment flows
- production configs
So you can ship without guessing.