Kleared

Supabase RLS analyzer

The Supabase mistake we catch in 87% of audits.

Row-Level Security on Supabase has one failure mode that no other scanner reliably catches. Here's what it is, why it slips past everyone, and how Kleared finds it before someone with a Wireshark tab does.

The failure mode

When you enable RLS on a table without adding a policy, Supabase denies all reads — looks safe. The trap is the inverse: enabling RLS and adding a policy that resolves to true is functionally identical to having RLS disabled, but the dashboard renders it green.

AI codegen tools love “using (true)”. They also love writing policies that gate on auth.uid() is not null when what you actually wanted was auth.uid() = author_id.

-- looks safe
alter table public.posts enable row level security;

-- isn't safe
create policy "anyone can read posts"
  on public.posts for select
  using (true);  -- <- readable by anyone with your project URL and anon key
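The fix is to scope the policy to the requesting user. A minimal corrected version, assuming posts stores the owner's auth UID in an author_id column (as in the auth.uid() = author_id pattern above):

-- safe: each user can only read their own rows
create policy "authors read own posts"
  on public.posts for select
  using (auth.uid() = author_id);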

How we find it

Two passes. The first is static — we parse every migration in your repo, build a model of which tables have RLS, which policies are attached, and what each policy's USING / WITH CHECK clause actually evaluates to.

The second is dynamic — when you give us a read-only Supabase Management API token, we query pg_policies directly and compare it to the static model. Drift between the two is its own finding (“your migrations say one thing, your live DB says another”).
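The live side of that comparison is one catalog query — pg_policies exposes each policy's USING expression as qual and its WITH CHECK expression as with_check. A sketch of what we pull:

select schemaname, tablename, policyname, cmd, qual, with_check
from pg_policies
where schemaname = 'public'
order by tablename, policyname;
-- any policy whose qual prints as 'true' is the allow-all trap above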

Each finding ships with a candidate policy. Claude generates the fix; we validate the SQL parses cleanly before opening the PR.

Why no other scanner has this

  • Generic SAST tools don't model Supabase's auth semantics. Their SQL linters flag syntax errors, not auth-model errors.
  • Cloud-native scanners (Wiz, Orca) audit the AWS / GCP control plane. Supabase isn't a cloud control plane; it's Postgres running on top of one.
  • Supabase's own advisor catches a subset, but only what Supabase decides to surface in their dashboard. We pull the raw pg_policies table and reason about it ourselves.

What you'll get on the first scan

  • Every table with RLS disabled entirely, or enabled with no usable policy attached.
  • Every policy that resolves to true or otherwise short-circuits.
  • Drift between migrations and live pg_policies.
  • A fix-PR for each one, if the auto-remediator can write a safe policy.
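The first finding in that list reduces to a catalog query. A minimal sketch, joining pg_class (whose relrowsecurity flag records whether RLS is on) against pg_policies, that flags public tables where RLS is off or on with nothing attached:

select c.relname as table_name,
       c.relrowsecurity as rls_enabled,
       count(p.policyname) as policy_count
from pg_class c
join pg_namespace n on n.oid = c.relnamespace
left join pg_policies p
  on p.schemaname = n.nspname and p.tablename = c.relname
where n.nspname = 'public' and c.relkind = 'r'
group by c.relname, c.relrowsecurity
having not c.relrowsecurity or count(p.policyname) = 0;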

On the roadmap

Supabase RLS is where we started — it's the failure mode we see most often. The same analyzer pattern (parse policy DSL, model its evaluation, compare against live state) generalises to every modern backend that pushes a rules engine, and to every code host where a fix-PR has to land somewhere. Coming next:

  • Firebase Firestore Security Rules

    Rules engine · Phase 2

    Direct analog to RLS — same failure modes (allow-all reads, missing field-level checks, request.auth != null traps). Parses .rules files and cross-checks against live emulator output.

  • PocketBase API Rules

    Rules engine · Phase 2

    Collection-level rule expressions like @request.auth.id != "" that look safe but let any signed-in user read everyone's records. Static-analyses the rule DSL plus the schema export from the live instance.

  • Convex

    Rules engine · Phase 2

    TypeScript-first ACL functions. Static-analyse query/mutation handlers for missing ctx.auth checks; flag handlers that read viewer identity but never branch on it.

  • Neon

    Rules engine · Phase 2

    Postgres, so the existing RLS analyzer applies. The novel piece: per-branch scanning. We diff policies between your main branch and a feature branch before you merge a deploy preview.

  • Appwrite

    Rules engine · Phase 3

    Per-collection and per-document permissions (read("any"), write("user:abc")). Catches collections left at any-readable, document-level overrides that contradict collection rules, and exposed API keys.

  • Hasura

    Rules engine · Phase 3

    Permission rules per role per table. Catches misconfigured row/column permissions, the classic _eq: X-Hasura-User-Id with no operator restrictions.

  • PlanetScale

    Rules engine · Phase 3

    MySQL doesn't have RLS, so the analyzer shifts: deploy-request review, role grants, schema changes that widen access. Branch-aware just like Neon.

  • MongoDB Atlas App Services

    Rules engine · Phase 3

    Atlas Rules + Functions are increasingly how MongoDB apps gate access. Same failure modes as Firestore: catch-all read rules, function-based gates that forget to check the caller.

  • GitLab

    Code host · Phase 3

    Same scan + remediation pipeline; merge requests instead of pull requests, GitLab CI instead of GitHub Actions. Targets EU customers and self-hosted GitLab installs where GitHub isn't an option.

  • Bitbucket

    Code host · Phase 3

    Atlassian-shop fit. Pull requests via Bitbucket Cloud API. Lower priority for our solo-founder ICP — most of those teams are on GitHub — but tractable when an Atlassian customer surfaces.

Want one of these prioritised? Email hello@kleared.app with the stack you ship on. We weight the queue by customer demand, not vendor logo size.