
Benchmarks

Real performance measurements and honest comparisons: @gentleduck/iam benchmarked against every major JS authorization library.

Benchmarked against six libraries: @casl/ability, casbin, accesscontrol, role-acl, @rbac/rbac, and easy-rbac. Numbers come from vitest bench runs on identical authorization scenarios. Sizes verified via bundlephobia on 2026-03-30.

Run `bun run bench` in `packages/duck-iam` to reproduce the numbers here.


The honest verdict

CASL is faster than us. On simple RBAC checks, CASL is ~2x faster in production mode — it pre-compiles rules into a hash-map index at build time. duck-iam can't match that while keeping runtime-updatable policies.

We are faster than everyone else. In production mode, duck-iam beats easy-rbac, @rbac/rbac, accesscontrol, casbin, and role-acl.

We ship more features. Scoped roles, explain/debug traces, lifecycle hooks, batch permissions, 18 condition operators, 5 server middlewares, 3 client libraries. No competitor bundles all of them.

We are larger than CASL. ~21 KB vs ~6 KB. duck-iam includes a full policy engine, RBAC-to-ABAC converter, explain tracer, builder, config validator, and LRU cache. CASL ships none of that.


Library Overview

| | @gentleduck/iam | @casl/ability | casbin | accesscontrol | role-acl | @rbac/rbac | easy-rbac |
|---|---|---|---|---|---|---|---|
| Model | Policy engine | Ability-based | PERM DSL | Fluent grants | Role + conditions | Hierarchical | Hierarchical |
| ABAC | Yes (18 ops) | Yes | Yes | No | Yes | No | No |
| RBAC | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| Runtime deps | 0 | 0 | 5 | 1 | 3 | 0 | 0 |
| TypeScript | Full generics | Full | String-based | Partial | Partial | Yes | No |
| Maintained | Active | Active | Active | No (2020) | Active | Active | No (2021) |

Runtime Performance

All numbers are ops/sec (higher is faster). Each library solves the same authorization problem. CASL condition checks use subject() so conditions actually run (bare string checks skip them). duck-iam has two modes: [DEV] returns rich Decision objects with timing and reasons, [PROD] returns plain booleans with zero overhead.

Simple RBAC: "can viewer read post?"

| # | Library | ops/sec | vs CASL |
|---|---|---|---|
| 1 | @casl/ability | 16,857,000 | -- |
| 2 | @gentleduck/iam evaluatePolicyFast() [PROD] | 8,233,000 | 2x slower |
| 3 | @gentleduck/iam evaluateFast() [PROD] | 7,737,000 | 2.2x slower |
| 4 | easy-rbac | 5,003,000 | 3.4x slower |
| 5 | @rbac/rbac | 2,884,000 | 5.8x slower |
| 6 | @gentleduck/iam evaluatePolicy() [DEV] | 1,355,000 | 12.4x slower |
| 7 | @gentleduck/iam evaluate() [DEV] | 1,049,000 | 16x slower |
| 8 | accesscontrol | 674,000 | 25x slower |
| 9 | casbin | 143,000 | 118x slower |
| 10 | role-acl | 140,000 | 120x slower |

ABAC condition check: "can owner update own draft?"

Only libraries with real ABAC condition support. CASL uses subject() so conditions run.

| # | Library | ops/sec | vs CASL |
|---|---|---|---|
| 1 | @casl/ability (with subject()) | 3,910,000 | -- |
| 2 | @gentleduck/iam evaluateFast() [PROD] | 1,177,000 | 3.3x slower |
| 3 | @gentleduck/iam evaluate() [DEV] | 648,000 | 6x slower |

Others excluded — no attribute-based condition support.
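For intuition, an "owner updates own draft" check reduces to evaluating condition operators against request attributes. Below is a minimal sketch with just two operators (`eq`, `in`); the shapes and names are hypothetical simplifications, not duck-iam's actual 18-operator engine:

```typescript
type Operator = 'eq' | 'in'
type Condition = { field: string; op: Operator; value: unknown }

// Resolve a dotted path like "resource.ownerId" against the request object.
function getPath(obj: unknown, path: string): unknown {
  let cur: unknown = obj
  for (const key of path.split('.')) {
    if (cur === null || typeof cur !== 'object') return undefined
    cur = (cur as Record<string, unknown>)[key]
  }
  return cur
}

function matches(request: unknown, c: Condition): boolean {
  const actual = getPath(request, c.field)
  switch (c.op) {
    case 'eq':
      return actual === c.value
    case 'in':
      return Array.isArray(c.value) && c.value.includes(actual)
    default:
      return false
  }
}

// "Can the owner update their own draft?" -- all conditions must hold.
const request = {
  subject: { id: 'user-1' },
  action: 'update',
  resource: { type: 'post', ownerId: 'user-1', status: 'draft' },
}
const conditions: Condition[] = [
  { field: 'resource.ownerId', op: 'eq', value: 'user-1' },
  { field: 'resource.status', op: 'in', value: ['draft', 'review'] },
]
const allowed = conditions.every((c) => matches(request, c))
```

Real ABAC engines also support comparing one request field to another (e.g. `resource.ownerId` against `subject.id`) rather than only to literals; this sketch keeps the values literal for brevity.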

Role + condition: "can admin delete post?"

| # | Library | ops/sec | vs CASL |
|---|---|---|---|
| 1 | @casl/ability (with subject()) | 5,677,000 | -- |
| 2 | easy-rbac | 4,504,000 | 1.3x slower |
| 3 | @rbac/rbac | 2,780,000 | 2x slower |
| 4 | @gentleduck/iam [DEV] | 786,000 | 7.2x slower |
| 5 | accesscontrol | 388,000 | 14.6x slower |
| 6 | casbin | 55,000 | 103x slower |
| 7 | role-acl | 55,000 | 103x slower |

Deny path: "viewer cannot delete"

| # | Library | ops/sec | vs fastest |
|---|---|---|---|
| 1 | easy-rbac | 3,114,000 | -- |
| 2 | @casl/ability | 1,664,000 | 1.9x slower |
| 3 | @gentleduck/iam [DEV] | 803,000 | 3.9x slower |
| 4 | role-acl | 141,000 | 22x slower |
| 5 | @rbac/rbac | 68,000 | 46x slower |
| 6 | casbin | 51,000 | 61x slower |

Batch: 20 permission checks

| # | Library | ops/sec | vs CASL |
|---|---|---|---|
| 1 | @casl/ability | 3,481,000 | -- |
| 2 | easy-rbac | 497,000 | 7x slower |
| 3 | @gentleduck/iam evaluateFast() [PROD] | 462,000 | 7.5x slower |
| 4 | @gentleduck/iam evaluate() [DEV] | 137,000 | 25.4x slower |
| 5 | accesscontrol | 68,000 | 51x slower |
| 6 | role-acl | 22,000 | 158x slower |
| 7 | @rbac/rbac | 14,200 | 245x slower |
| 8 | casbin | 9,800 | 354x slower |

Cold start: build everything + first check

| # | Library | ops/sec | vs CASL |
|---|---|---|---|
| 1 | @casl/ability | 3,284,000 | -- |
| 2 | easy-rbac | 3,118,000 | 1.1x slower |
| 3 | accesscontrol | 830,000 | 4x slower |
| 4 | @gentleduck/iam | 311,000 | 10.6x slower |
| 5 | role-acl | 306,000 | 10.7x slower |
| 6 | @rbac/rbac | 183,000 | 17.9x slower |
| 7 | casbin | 62,000 | 53x slower |

Why CASL is faster, and why it rarely matters

The architectural difference

CASL and duck-iam solve authorization at different engine levels:

CASL: pre-compiled lookup table. build() iterates every rule once and produces an index keyed by [action, subjectType]. Every can() call is a single hash-map lookup — O(1), ~0.012 us. Rules are frozen after build() and can't change at runtime.

duck-iam: dynamic policy engine. Policies load from databases, update at runtime through adapters, and invalidate via the LRU cache. Each evaluation does: WeakMap index lookup, Map.get by action:resource, condition evaluation, combining algorithm. Even with rule indexing, each check costs ~0.12 us — about 2x a single hash lookup.
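The contrast can be made concrete in a self-contained sketch. The types and function names below are hypothetical simplifications, not the internals of either library:

```typescript
type Rule = { effect: 'allow' | 'deny'; action: string; subject: string }

// CASL-style: compile rules once into a frozen index; every check is one lookup.
function buildFrozenChecker(rules: Rule[]): (action: string, subject: string) => boolean {
  const index = new Map<string, Rule>()
  for (const r of rules) index.set(`${r.action}\0${r.subject}`, r)
  // From here on the rule set cannot change: O(1) hash-map lookup per check.
  return (action, subject) => index.get(`${action}\0${subject}`)?.effect === 'allow'
}

// Policy-engine style: rules stay live and mutable, so each check walks them.
function evaluateDynamic(rules: Rule[], action: string, subject: string): boolean {
  let allowed = false
  for (const r of rules) {
    if (r.action !== action || r.subject !== subject) continue
    if (r.effect === 'deny') return false // deny-overrides
    allowed = true
  }
  return allowed
}

const rules: Rule[] = [{ effect: 'allow', action: 'read', subject: 'post' }]
const canRead = buildFrozenChecker(rules)
```

The frozen checker wins on per-check cost; the dynamic loop wins on mutability, because pushing a new rule into `rules` takes effect on the very next `evaluateDynamic` call while the frozen index never sees it.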

Where the ~2x gap comes from (profiled)

Profiled operations in the production fast path:

| Operation | Cost | What it does |
|---|---|---|
| WeakMap index lookup | ~0.004 us | Retrieve cached rule index for the policy |
| String key concat | ~0.001 us | Build "read\0post" lookup key |
| Map.get | ~0.014 us | Find rules matching this action+resource |
| for loop (1 rule) | ~0.003 us | Iterate matched rules |
| Condition check | ~0.003 us | Skip (empty conditions) or evaluate |
| policyApplies | ~0.003 us | Check policy targets |
| Precomputed cache hit | ~0.080 us | Two nested Map.get calls (action -> resource) |
| Total | ~0.120 us | |
| CASL total | ~0.060 us | Single hash lookup + return |

The gap is not one big bottleneck. It's the sum of small costs a policy engine requires. CASL sidesteps them by freezing rules at build time.

What we optimized (and what we can't)

Every optimization that keeps the dynamic policy model is applied:

  1. Rule indexing: pre-built Map<action:resource, Rule[]> per policy, cached via WeakMap. Removes the linear scan over all rules.
  2. Unconditional rule flag: rules with empty conditions skip evalConditionGroup().
  3. Inlined combiners: deny-overrides and allow-overrides inline into the evaluation loop — no array allocation, no function calls.
  4. Path cache: condition field paths like subject.attributes.role split once and cache forever.
  5. Production mode: no performance.now(), no Date.now(), no Decision allocation, no reason strings.
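Optimizations 1-3 can be sketched together in one self-contained fast path. The shapes and names below are hypothetical, not duck-iam's actual source:

```typescript
type Rule = {
  effect: 'allow' | 'deny'
  actions: string[]
  resources: string[]
  conditions: Record<string, unknown>
  unconditional?: boolean // flag precomputed when the index is built
}
type Policy = { rules: Rule[] }

// WeakMap lets an index be garbage-collected with its policy object and
// rebuilt from scratch when a policy is replaced at runtime.
const indexCache = new WeakMap<Policy, Map<string, Rule[]>>()

function getIndex(policy: Policy): Map<string, Rule[]> {
  let index = indexCache.get(policy)
  if (index) return index
  index = new Map()
  for (const rule of policy.rules) {
    rule.unconditional = Object.keys(rule.conditions).length === 0
    for (const a of rule.actions)
      for (const r of rule.resources) {
        const key = `${a}\0${r}` // "read\0post"-style lookup key
        const bucket = index.get(key)
        if (bucket) bucket.push(rule)
        else index.set(key, [rule])
      }
  }
  indexCache.set(policy, index)
  return index
}

function evaluateFastSketch(policy: Policy, action: string, resource: string): boolean {
  const rules = getIndex(policy).get(`${action}\0${resource}`)
  if (!rules) return false
  let allowed = false
  for (const rule of rules) {
    // Unconditional rules skip condition evaluation entirely.
    if (!rule.unconditional && !checkConditions(rule.conditions)) continue
    if (rule.effect === 'deny') return false // deny-overrides, inlined: no allocation
    allowed = true
  }
  return allowed
}

function checkConditions(_c: Record<string, unknown>): boolean {
  return true // the condition engine is elided in this sketch
}
```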

Closing the last ~2x gap means dropping dynamic policies and pre-compiling at init like CASL. That breaks adapters, runtime policy updates, and the LRU cache — the features that make duck-iam a policy engine instead of a lookup table.

Why it doesn't matter in practice

Authorization isn't the bottleneck. A typical API request:

| Step | Time |
|---|---|
| Network round trip | 5,000--50,000 us |
| Database query | 500--5,000 us |
| JSON serialization | 50--500 us |
| duck-iam check (prod) | 0.12 us |
| CASL check | 0.06 us |

The gap is 60 nanoseconds per check. At 100 checks per request, that's 6 us, or 0.012% of a 50 ms request.
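Spelled out, with the values taken from the table above:

```typescript
const duckCheckUs = 0.12 // duck-iam production check
const caslCheckUs = 0.06 // CASL check
const checksPerRequest = 100
const requestUs = 50_000 // a 50 ms request

// Extra work per request from choosing duck-iam over CASL.
const extraUs = checksPerRequest * (duckCheckUs - caslCheckUs)
// Share of total request time spent on that extra work.
const sharePct = (extraUs / requestUs) * 100

console.log(extraUs.toFixed(1)) // "6.0" us
console.log(sharePct.toFixed(3)) // "0.012" percent
```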


Dev vs Prod Mode

duck-iam has two execution modes. They change runtime behavior and return types:

```ts
// Development (default) -- rich Decision with timing, reasons, rule refs
const engine = new Engine({ adapter, mode: 'development' })
const decision = await engine.check('user-1', 'read', post)
// decision: Decision { allowed: true, effect: 'allow', reason: '...', duration: 0.5, timestamp: ... }
// engine.explain() is available
// Hooks (afterEvaluate, onDeny, onError) fire on every check

// Production -- plain boolean, maximum throughput
const prodEngine = new Engine({ adapter, mode: 'production' })
const allowed = await prodEngine.check('user-1', 'read', post)
// allowed: true (boolean)
// No performance.now(), no Date.now(), no object allocation, no reason strings
// engine.explain() throws -- not available in production
// Hooks (afterEvaluate, onDeny, onError) are skipped for maximum speed
```

engine.can() always returns boolean in both modes (for middleware compatibility).
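That boolean contract is what makes generic middleware possible. As a sketch, here is a framework-agnostic guard written against a minimal engine interface; the interface and the `requirePermission` helper are assumptions for illustration, and the real middleware packages may expose a different API:

```typescript
// Minimal engine surface assumed by the guard; mirrors the calls shown above.
interface CanEngine {
  can(subjectId: string, action: string, resource: unknown): Promise<boolean>
}

// Wraps a permission check into a handler: 403 before the route ever runs.
function requirePermission(engine: CanEngine, action: string, resource: unknown) {
  return async (subjectId: string): Promise<{ status: number }> => {
    const allowed = await engine.can(subjectId, action, resource)
    return allowed ? { status: 200 } : { status: 403 }
  }
}

// Stub engine standing in for a real one, for demonstration only.
const stubEngine: CanEngine = {
  async can(subjectId, action) {
    return subjectId === 'user-1' && action === 'read'
  },
}

const guard = requirePermission(stubEngine, 'read', { type: 'post', id: 'post-1' })
```

Because the guard only depends on a `can()` that resolves to a boolean, the same wrapper works unchanged in development and production mode.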

Does production mode reduce bundle size?

The mode flag alone does not reduce bundle size. It's a runtime check. Import patterns do — the package is tree-shakeable, so bundlers drop unused code:

```ts
// Smallest production bundle -- import only the fast evaluator
// Tree-shakes away: Engine, explain, builder, config, validate, dev evaluate
import { evaluateFast } from '@gentleduck/iam'
const allowed = evaluateFast(policies, request) // boolean

// Full engine -- includes everything (dev + prod paths)
import { Engine } from '@gentleduck/iam'
```

evaluateFast + evaluatePolicyFast give the smallest bundle when you manage policies yourself. The Engine, explain system, builder, and config validator only ship if imported.

engine.explain() is development-only.


Internal Performance

Pure evaluation timing, average of 2,000 iterations after 200 warmup rounds.

| Operation | Time |
|---|---|
| evaluatePolicyFast() -- simple rule | ~0.87 us |
| evaluatePolicyFast() -- with conditions | ~1.61 us |
| evaluatePolicy() [DEV] -- target match | ~0.59 us |
| evaluatePolicy() -- target skip | ~0.37 us |
| evaluate() -- 2 policies | ~0.70 us |
| evaluate() -- deny path | ~0.96 us |
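The warmup-then-average pattern behind these timings looks roughly like this. It is a generic harness, not the project's actual benchmark code:

```typescript
// Average a function's cost over many iterations, discarding warmup rounds
// so the JIT has settled before measurement starts.
function benchUs(fn: () => void, iterations = 2000, warmup = 200): number {
  for (let i = 0; i < warmup; i++) fn() // warmup: let the JIT optimize
  const start = performance.now()
  for (let i = 0; i < iterations; i++) fn()
  const totalMs = performance.now() - start
  return (totalMs * 1000) / iterations // average microseconds per call
}

const avg = benchUs(() => JSON.stringify({ a: 1, b: [2, 3] }))
console.log(`~${avg.toFixed(2)} us per call`)
```

Without the warmup rounds, the first few hundred calls run in the interpreter and skew the average upward, which is why the numbers above discard them.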

Engine Performance (with LRU caching)

| Operation | Time |
|---|---|
| engine.can() -- cached | ~5.5 us |
| engine.check() -- cached | ~4.2 us |
| engine.permissions() -- 20 checks | ~21 us |
| engine.explain() -- full trace | ~5.7 us |

Times vary by machine. Run `bun run benchmark` to get numbers for your hardware.
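The LRU caching behind the cached `engine.can()`/`engine.check()` timings can be approximated with a plain `Map`, whose iteration order is insertion order. This is a sketch of the technique, not duck-iam's implementation:

```typescript
// Re-inserting a key on every read keeps the first Map key as the least
// recently used entry, so eviction is just "delete the first key".
class LruCache<K, V> {
  private map = new Map<K, V>()
  constructor(private capacity: number) {}

  get(key: K): V | undefined {
    const value = this.map.get(key)
    if (value === undefined) return undefined
    this.map.delete(key) // move to most-recently-used position
    this.map.set(key, value)
    return value
  }

  set(key: K, value: V): void {
    if (this.map.has(key)) this.map.delete(key)
    else if (this.map.size >= this.capacity) {
      const oldest = this.map.keys().next().value as K
      this.map.delete(oldest) // evict the least recently used entry
    }
    this.map.set(key, value)
  }
}

// Decisions keyed by "subject:action:resource", capped at 2 entries here.
const cache = new LruCache<string, boolean>(2)
cache.set('user-1:read:post-1', true)
cache.set('user-1:read:post-2', true)
cache.get('user-1:read:post-1') // touch: post-1 becomes most recent
cache.set('user-1:read:post-3', true) // evicts post-2, the least recent
```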


Bundle Size

| Library | Size (gzip) | Runtime deps | Tree-shakeable |
|---|---|---|---|
| easy-rbac | ~2 KB | 0 | No |
| @rbac/rbac | ~4 KB | 0 | No |
| @casl/ability | ~6 KB | 0 | Yes |
| accesscontrol | ~8.2 KB | 1 | No |
| role-acl | ~12 KB | 3 | No |
| @gentleduck/iam (full) | ~21 KB | 0 | Yes |
| casbin (node-casbin) | ~30 KB | 5 | No |

We are not the smallest. At ~21 KB, duck-iam is 3.5x larger than CASL. The full package bundles: evaluation engine, RBAC-to-ABAC converter, conditions engine (18 operators), explain/debug tracer, type-safe builder, config validator, and LRU cache. CASL ships none of that.

The package is tree-shakeable. Import only evaluateFast and skip the engine, explain, and builder for a much smaller bundle. Each adapter and server middleware adds ~0.8-1.7 KB.

Module Sizes

| Module | Size (gzip) |
|---|---|
| Core (full entry) | 21.9 KB |
| Adapter: Memory | 1.1 KB |
| Adapter: Prisma | 1.4 KB |
| Adapter: Drizzle | 1.7 KB |
| Adapter: HTTP | 1.2 KB |
| Adapter: Redis | 1.4 KB |
| Server: Express | 1.1 KB |
| Server: Next.js | 1.0 KB |
| Server: Hono | 0.9 KB |
| Server: NestJS | 1.3 KB |
| Server: Generic | 0.8 KB |
| Client: React | 1.1 KB |
| Client: Vue | 1.0 KB |
| Client: Vanilla | 1.4 KB |

Feature Comparison

| Feature | gentleduck | CASL | Casbin | accesscontrol | role-acl | @rbac/rbac | easy-rbac |
|---|---|---|---|---|---|---|---|
| RBAC | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| ABAC (conditions) | 18 operators | Yes | Yes | No | Yes | No | No |
| Policy engine | Yes | No | Yes | No | No | No | No |
| Dev/Prod mode | Yes | No | No | No | No | No | No |
| Deny-overrides | Yes | No | Yes | No | No | No | No |
| Combining algorithms | 4 | 1 | Custom | 1 | 1 | 1 | 1 |
| Scoped roles | Yes | No | No | No | No | No | No |
| Explain / debug | Yes | No | No | No | No | No | No |
| Lifecycle hooks | Yes | No | No | No | No | No | No |
| LRU caching | Built-in | No | No | No | No | No | No |
| Rule indexing | Yes | Yes | No | No | No | No | No |
| DB adapters | 5 | 3 | 20+ | 0 | 0 | 3 | 0 |
| Server middleware | 5 | 0 | 2 | 0 | 0 | 3 | 0 |
| React integration | Yes | Yes | No | No | No | No | No |
| Vue integration | Yes | Yes | No | No | No | No | No |
| Type-safe config | Yes | Yes | No | Yes | No | Yes | No |
| Zero runtime deps | Yes | Yes | No | No | No | Yes | Yes |
| Batch permissions | Yes | No | No | No | No | No | No |

Where each library wins

@gentleduck/iam wins on

  • Feature density: only library with scoped roles + explain/debug + lifecycle hooks + batch permissions + 18 condition operators + dev/prod mode in one package
  • Faster than casbin, role-acl, accesscontrol: 3-50x faster in production mode
  • Server integration: 5 framework middlewares (Express, Next.js, Hono, NestJS, generic)
  • Client libraries: React, Vue, and vanilla JS with hooks and reactive state
  • Type safety: full generic type parameters for actions, resources, roles, and scopes
  • Explain API: the only library that tells you exactly why a permission was granted or denied
  • Dev/Prod mode: rich debug objects in development, fast booleans in production

@casl/ability wins on

  • Raw speed: 2x faster than duck-iam in production mode from the pre-compiled ability index
  • Bundle size: ~6 KB, 3.5x smaller
  • Maturity: production since 2017
  • Ecosystem: ~900K downloads/week, extensive docs and community
  • Isomorphic: proven frontend + backend sharing pattern

easy-rbac wins on

  • Fastest deny path: 2x faster than CASL on deny checks
  • Tiny bundle: ~2 KB, the smallest
  • Zero config: hierarchical RBAC, nothing to set up

casbin wins on

  • Adapter ecosystem: 20+ database adapters across 15+ languages
  • Admin UI: web-based policy management panel
  • Academic backing: formal PERM metamodel

@rbac/rbac wins on

  • Fast simple checks: ~2.9M ops/sec for basic RBAC
  • Built-in middleware: Express, NestJS, Fastify
  • Runtime role updates: add or change roles without restart

Smallest possible bundle

createAccessConfig() sets up the whole authorization system in one call, but it pulls in the full config system, validator, and builder. If all you need is policy evaluation, skip the config layer and import the building blocks directly.

Build a typed policy and evaluate it without createAccessConfig:

```ts
import type { Policy, AccessRequest } from '@gentleduck/iam'
import { evaluatePolicyFast } from '@gentleduck/iam'

// Define your action/resource types for type safety
type Action = 'read' | 'update' | 'delete'
type Resource = 'post' | 'comment'

const policy: Policy<Action, Resource> = {
  id: 'blog-policy',
  algorithm: 'deny-overrides',
  rules: [
    { id: 'allow-read', effect: 'allow', actions: ['read'], resources: ['post', 'comment'], conditions: {}, priority: 0 },
  ],
}

const request: AccessRequest<Action, Resource> = {
  subject: { id: 'user-1', roles: ['viewer'] },
  action: 'read',
  resource: { type: 'post', id: 'post-1' },
}

const allowed = evaluatePolicyFast(policy, request) // boolean
```

The package is fully tree-shakeable. Anything you don't import drops out: Engine, explain, builder, config, validate, adapters. From the module sizes above, evaluateFast alone is tiny next to the 21.9 KB core entry — pay only for what you use.

Other low-level pieces to import directly: PolicyBuilder, RuleBuilder, evaluateFast, evaluatePolicy, and the condition operators. Mix and match for the exact surface area you need.


Methodology

  • @gentleduck/iam: bundle sizes from dist/ via gzip -c | wc -c. Performance via vitest bench with N=3 inner loops. Production mode uses evaluateFast() with rule indexing (WeakMap-cached per policy, Map lookup by action:resource).
  • @casl/ability: condition benchmarks use subject() for real condition evaluation. Bare string checks (can('read', 'Post')) skip conditions and would give misleading numbers — we don't do that.
  • casbin: real RBAC model (newModel() + StringAdapter) with role inheritance via grouping rules.
  • accesscontrol, @rbac/rbac, easy-rbac: excluded from ABAC benchmarks (no condition support).
  • Competitor sizes from bundlephobia.com, verified 2026-03-30.
  • Sizes are minified + gzipped.
  • All benchmarks run on the same machine in the same vitest session.

Reproduce:

```sh
cd packages/duck-iam
bun run bench       # vitest bench -- competitive comparison
bun run benchmark   # JSON data output + console summary
```