Gemini Code Assist security audit
Using Gemini Code Assist? Check these vulnerabilities before shipping.
Gemini Code Assist favors convenience-first generation — code that runs end-to-end on the first try, even if some routes never picked up auth. The patterns below are what we keep finding when we audit Gemini-assisted codebases.
Common problems
What we keep finding in Gemini Code Assist codebases.
Middleware overconfidence
Generated code assumes middleware protects every path. In practice, exceptions and edge routes remain exposed.
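This failure mode is easy to reproduce. Below is a minimal, framework-agnostic sketch (the names `PROTECTED`, `middlewareAllows`, and `internalHandler` are hypothetical): a matcher excludes internal paths from auth, so the only real protection is a check inside the handler itself.

```typescript
// Hypothetical sketch: middleware matcher that excludes /api/internal/*
// "for service-to-service calls". Nothing else guards those routes.
const PROTECTED = [/^\/api\/(?!internal\/)/]; // internal routes skip auth

function middlewareAllows(path: string, authed: boolean): boolean {
  const needsAuth = PROTECTED.some((re) => re.test(path));
  return needsAuth ? authed : true; // excluded paths pass straight through
}

// Defense: re-check authorization inside the route handler itself,
// independently of whatever the middleware did or didn't cover.
function internalHandler(authed: boolean): number {
  if (!authed) return 401; // route-level check catches the middleware gap
  return 200;
}
```

The point of the sketch: `middlewareAllows` happily lets an unauthenticated request reach `/api/internal/metrics`, and only the in-handler check stops it.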
Example: `/api/internal/*` is excluded from middleware “for service-to-service calls”, yet it is reachable from the public internet.
Unsafe form handling
Generated forms trust user input too much. Hidden fields get accepted, dangerous payloads get persisted, and injection paths stay open.
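A safer default is an allowlist validator that rejects unknown fields outright, so a tampered hidden field (say, `role: "admin"`) never reaches the database. This is a hand-rolled sketch with hypothetical names (`validateSignup`, `ALLOWED`), not any particular framework's API:

```typescript
// Hypothetical sketch: server-side validation for a signup form.
// Unknown fields (e.g. an injected hidden "role" field) are rejected
// instead of being trusted and persisted.
type Result =
  | { ok: true; data: { email: string; name: string } }
  | { ok: false; error: string };

const ALLOWED = new Set(["email", "name"]);

function validateSignup(body: Record<string, unknown>): Result {
  for (const key of Object.keys(body)) {
    if (!ALLOWED.has(key)) return { ok: false, error: `unexpected field: ${key}` };
  }
  const { email, name } = body;
  if (typeof email !== "string" || !/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(email))
    return { ok: false, error: "invalid email" };
  if (typeof name !== "string" || name.length > 100 || /[<>]/.test(name))
    return { ok: false, error: "invalid name" }; // crude markup guard
  return { ok: true, data: { email, name } };
}
```

In production you would typically reach for a schema library in strict mode, but the principle is the same: validate on the server, reject extras, never trust the client.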
Public API exposure
Convenience-first generation leaves sensitive routes unintentionally public — especially internal-only ones added late in development.
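One way to prevent late-added routes from defaulting to public is an explicit, deny-by-default access policy. A sketch (the `ROUTES` table and `authorize` function are hypothetical, not generated code):

```typescript
// Hypothetical sketch: every route declares its required access level.
// A route that isn't listed, or was added late, is simply not served,
// so nothing is public by accident.
type Access = "public" | "user" | "admin";

const ROUTES: Record<string, Access> = {
  "/api/health": "public",
  "/api/profile": "user",
  "/api/internal/metrics": "admin",
};

function authorize(path: string, role: Access | null): number {
  const required = ROUTES[path];
  if (required === undefined) return 404;        // undeclared routes are not served
  if (required === "public") return 200;
  if (role === null) return 401;                 // unauthenticated
  if (required === "admin" && role !== "admin") return 403;
  return 200;
}
```

The inversion matters: instead of asking "which routes need protection?", every route must argue its way into being public.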
Prompt fixes
What to tell the model.
Paste these into your next conversation. They steer generation toward safer defaults — but they aren't a substitute for review.
For: Middleware overconfidence
“Verify authorization inside protected routes and business logic, not only in middleware.”
For: Unsafe form handling
“Validate all form input server-side, sanitize dangerous inputs, and protect against injection and abuse.”
For: Public API exposure
“Require explicit authentication and permission checks for every non-public API route.”
Manual verification
The Gemini Code Assist checklist.
Run through each item by hand before you ship. If anything is unclear, treat it as a red flag, not a green light.
- Map every API route and confirm whether middleware actually applies.
- Hit each “internal” route from an unauthenticated session.
- Submit forms with extra and tampered fields and confirm rejection.
- Audit form handlers for SQL injection, XSS, and SSRF surfaces.
- Verify every public route is intentionally public.
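The second checklist item can be scripted. This sketch classifies unauthenticated probe results; `INTERNAL_ROUTES` is a placeholder for your own app's routes, and `getStatus` stands in for an HTTP client making requests with no cookies or tokens:

```typescript
// Hypothetical sketch: flag "internal" routes that answer an
// unauthenticated request with anything other than 401/403.
const INTERNAL_ROUTES = ["/api/internal/metrics", "/api/internal/jobs"];

function findExposedRoutes(getStatus: (route: string) => number): string[] {
  const findings: string[] = [];
  for (const route of INTERNAL_ROUTES) {
    const status = getStatus(route); // request carries no session or token
    if (status !== 401 && status !== 403) {
      findings.push(`${route} returned ${status}`); // exposed: should be rejected
    }
  }
  return findings;
}
```

Wire `getStatus` to a real HTTP call against staging and an empty result means the middleware gaps above aren't biting you, at least on the routes you listed.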
If issues are already live
Damage control, in order.
If you suspect any of the above already shipped to real users, work the list top to bottom, and rotate any credentials or secrets that may have been exposed along the way. Don’t skip rotation.
1. Review middleware coverage and route exceptions.
2. Test exception routes from an unauthenticated session.
3. Validate API exposure paths against the intended access policy.
4. Audit form submission endpoints for unsafe input handling.
5. Patch and re-test before announcing the fix publicly.
Why AI-generated fixes still fail
The model that wrote the bug rarely sees it.
Same blind spots
The patterns that produced the vulnerability are baked into the model's training. Asking it to audit itself reproduces the same assumptions.
Context windows lie
The model sees the file you paste, not your auth middleware, your RLS policies, or the route you forgot to protect. It can't review what it can't see.
Confidence ≠ correctness
AI fixes look polished and read well. That's a signal of fluency, not of safety. Real verification needs a human who can hit the endpoint.
Don’t ask AI to audit AI
You wouldn’t let an intern grade their own homework.
Most builders type “is this secure?” back into the same chat that wrote the code. You need independent verification — someone whose context isn’t poisoned by what just got generated. That’s Kleared.
AI-generated code looks polished. Attackers test what polish hides.
Kleared verifies what users can’t see — middleware coverage, server-side validation, route-level auth — and opens fix-PRs back to your repo.
Before you launch
Run a real human security review.
Not another prompt. Kleared checks the boring stuff that breaks production:
- auth
- permissions
- secrets
- uploads
- database exposure
- API security
- payment flows
- production configs
So you can ship without guessing.