DevOps · 2026-05-13 · 79 min read

6 architectures I considered for a privacy-first personal SaaS β€” and why I built two of them

By Deeshan Sharma

Current Situation Analysis

Modern SaaS development faces a structural contradiction: users demand offline-first reliability and strict data sovereignty, while business models require subscription enforcement, invite controls, and predictable infrastructure costs. The industry standard response is to centralize everything in a managed cloud database. This approach satisfies access control and sync requirements but immediately violates privacy guarantees and introduces fixed operational overhead before the first dollar of revenue arrives.

The core misunderstanding lies in treating all application data as a single storage problem. Developers routinely place sensitive user records, authentication state, and subscription metadata in the same database. This creates a false equivalence between data that belongs to the user and data that governs the user's relationship with the product. When privacy is a hard constraint, this monolithic approach fails on two fronts:

  1. Data Custody Risk: Centralized databases make user records subpoena-able, breach-exposable, and subject to provider policy changes.
  2. Security Bypass: Client-side feature flags or local database columns can be trivially modified by end-users, rendering paywalls and access controls meaningless.

The operational reality is equally unforgiving. Traditional backend stacks require paid compute, managed databases, and continuous maintenance (migrations, backups, patching) regardless of subscriber count. Pure client-side alternatives eliminate infrastructure costs but sacrifice cross-device portability, crash recovery, and cryptographic enforcement. The result is a product that either compromises user privacy, leaks revenue through bypassed paywalls, or accumulates debt before achieving product-market fit.

WOW Moment: Key Findings

Evaluating six distinct architectural patterns against four non-negotiable constraints reveals a clear inflection point. The table below measures each approach against data sovereignty, zero-cost scalability, access control enforcement, and offline capability.

| Approach | Data Sovereignty | Zero-Cost at Launch | Access Control Enforcement | Offline Capability |
| --- | --- | --- | --- | --- |
| Traditional Backend + Postgres | ❌ Centralized | ❌ Fixed infra cost | ✅ Server-side | ❌ Requires network |
| Supabase (Full Stack) | ❌ Provider-hosted | ✅ Generous free tier | ✅ RLS + Auth | ❌ Requires network |
| Pure IndexedDB | ✅ Browser-local | ✅ Zero infra | ❌ No cryptographic gates | ✅ Native |
| Electron + SQLite | ✅ Local file | ✅ Zero infra | ❌ Easily bypassed | ✅ Native |
| Client-Side SPA + Drive Sync | ✅ User-owned file | ✅ Zero infra | ❌ Local flag bypass | ✅ Native |
| Hybrid Split-Storage | ✅ User-owned + Provider-managed | ✅ Zero infra | ✅ Cryptographic tokens | ✅ Native |

The critical insight is that data sovereignty and access enforcement are orthogonal problems. Work logs, financial records, and personal metadata require user-controlled storage with offline resilience. Identity, subscription status, and invite tokens require server-side cryptographic enforcement. Attempting to solve both with a single storage layer forces a compromise. Decoupling them unlocks an architecture that satisfies all constraints simultaneously.

This pattern enables solo developers and small teams to ship privacy-first products without sacrificing monetization security or incurring pre-revenue infrastructure costs.

Core Solution

The hybrid split-storage architecture separates application data into two distinct domains: User-Owned Data and Access Control Metadata. Each domain uses storage optimized for its specific security, privacy, and operational requirements.

Step 1: Define the Data Boundary

  • User-Owned Data: Work logs, hourly rates, project names, earnings calculations. Stored locally, synced to user-controlled cloud storage (e.g., Google Drive, Dropbox). Never touches provider infrastructure.
  • Access Control Metadata: User identity, subscription status, invite tokens, waitlist records. Stored in a managed backend with strict authentication and cryptographic enforcement.
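
To make the boundary concrete, here is a minimal TypeScript sketch. The type names (UserOwnedEntry, AccessControlRecord) and the toBackendPayload guard are illustrative, not part of the article's schema; the point is that only access-control fields ever cross to the managed backend.

```typescript
// Illustrative shapes for the two domains (names are assumptions).
interface UserOwnedEntry {
  id: string;
  clientName: string;
  hoursWorked: number;
  ratePerHour: number;
  entryDate: string; // ISO date string
}

interface AccessControlRecord {
  userId: string;
  email: string;
  plan: 'free' | 'pro';
  inviteToken?: string;
}

// Guard that strips everything outside the access-control domain before it
// is sent to the managed backend. Work data never crosses this boundary.
function toBackendPayload(
  record: AccessControlRecord & Record<string, unknown>
): AccessControlRecord {
  const { userId, email, plan, inviteToken } = record;
  return inviteToken !== undefined
    ? { userId, email, plan, inviteToken }
    : { userId, email, plan };
}
```

Even if a bug accidentally hands this function a merged object, the explicit field pick guarantees work data stays out of the request body.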

Step 2: Implement Local-First Storage with WASM SQLite

Browser-native storage (IndexedDB) lacks portability and explicit backup paths. Running SQLite via WebAssembly (sql.js) provides a full relational engine in-memory, serialized to a binary file that can be exported, backed up, or synced.

// hooks/useLocalDatabase.ts
import initSqlJs, { Database } from 'sql.js';
import { useState, useEffect, useCallback } from 'react';

export function useLocalDatabase() {
  const [db, setDb] = useState<Database | null>(null);
  const [isReady, setIsReady] = useState(false);

  useEffect(() => {
    let mounted = true;
    initSqlJs({ locateFile: f => `https://sql.js.org/dist/${f}` }).then(SQL => {
      if (!mounted) return;
      const existing = localStorage.getItem('workvault_db');
      const instance = existing 
        ? new SQL.Database(new Uint8Array(JSON.parse(existing)))
        : new SQL.Database();
      
      instance.run(`
        CREATE TABLE IF NOT EXISTS time_entries (
          id TEXT PRIMARY KEY,
          client_name TEXT NOT NULL,
          hours_worked REAL NOT NULL,
          rate_per_hour REAL NOT NULL,
          entry_date TEXT NOT NULL
        )
      `);
      
      setDb(instance);
      setIsReady(true);
    });
    return () => { mounted = false; };
  }, []);

  const persist = useCallback(() => {
    if (!db) return;
    // Serialize the full DB to bytes. A JSON array of bytes is simple but
    // bulky; adequate for a small single-user database.
    const data = db.export();
    localStorage.setItem('workvault_db', JSON.stringify(Array.from(data)));
  }, [db]);

  return { db, isReady, persist };
}

Step 3: Build the Sync Layer

Sync operates on a last-write-wins model with timestamp comparison. Since this is a single-user application, conflict resolution is simplified to modification time validation. Writes are debounced to batch rapid edits and reduce API calls.

// lib/syncEngine.ts
export class SyncEngine {
  private lastSync: number = 0;

  async pullFromCloud(fileId: string): Promise<Uint8Array> {
    const response = await fetch(`/api/sync/pull?fileId=${fileId}`);
    if (!response.ok) throw new Error(`Pull failed: ${response.status}`);
    const payload = await response.json();

    // Last-write-wins: only accept the remote copy if it is newer.
    if (payload.modifiedTime > this.lastSync) {
      this.lastSync = payload.modifiedTime;
      return new Uint8Array(payload.binaryData);
    }
    return new Uint8Array(0);
  }

  async pushToCloud(fileId: string, binaryData: Uint8Array): Promise<void> {
    const response = await fetch('/api/sync/push', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ fileId, binaryData: Array.from(binaryData) }),
    });
    if (!response.ok) throw new Error(`Push failed: ${response.status}`);
  }
}
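
The debounced writes mentioned above can be sketched as a small helper. DebouncedPush and its delay value are assumptions, not code from the article; in practice you would call schedule() from every local write and point push at SyncEngine.pushToCloud.

```typescript
// Minimal debounce for batching rapid edits before a cloud push.
// The delay is an assumption (the article suggests 5-10 s in production).
class DebouncedPush {
  private timer: ReturnType<typeof setTimeout> | null = null;
  pushCount = 0; // number of pushes actually fired

  constructor(
    private push: () => void,
    private delayMs: number,
  ) {}

  // Each edit resets the timer; only the last call in a burst fires a push.
  schedule(): void {
    if (this.timer) clearTimeout(this.timer);
    this.timer = setTimeout(() => {
      this.timer = null;
      this.pushCount++;
      this.push();
    }, this.delayMs);
  }

  // Force any pending push immediately (useful on page unload).
  flushNow(): void {
    if (this.timer) {
      clearTimeout(this.timer);
      this.timer = null;
      this.pushCount++;
      this.push();
    }
  }
}
```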

Step 4: Provision Lightweight Identity Backend

Supabase or equivalent BaaS handles authentication, waitlist management, and subscription records. Crucially, no work data is stored here. The backend only manages the relationship between the user and the product.
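
As a sketch of how the backend side might turn a subscriptions row into a plan: this assumes a hypothetical row shape with a currentPeriodEnd column (the article does not specify the table's columns), and keeps the policy in a pure server-side function so the client never decides its own plan.

```typescript
// Hypothetical shape of a row in the backend's subscriptions table.
interface SubscriptionRow {
  userId: string;
  status: 'active' | 'canceled' | 'past_due';
  currentPeriodEnd: number; // Unix ms; assumed column, not from the article
}

// Pure policy function: the backend decides the plan, the client never does.
function derivePlan(row: SubscriptionRow | null, now: number): 'free' | 'pro' {
  if (!row) return 'free';
  return row.status === 'active' && row.currentPeriodEnd > now ? 'pro' : 'free';
}
```

The token-minting route would call something like derivePlan with the row fetched for the authenticated user, then embed the result in the signed JWT.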

Step 5: Implement Cryptographic Entitlement Tokens

Client-side feature flags are insecure. The solution uses ES256 (ECDSA over the P-256 curve) signed JWTs, minted server-side and verified client-side via the WebCrypto API. The token carries only subscription status and expires after a short window (e.g., 72 hours), forcing periodic online revalidation while keeping user data off the server entirely.

// lib/entitlement.ts
import * as jose from 'jose';

const PUBLIC_KEY = `-----BEGIN PUBLIC KEY-----
MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAE...
-----END PUBLIC KEY-----`;

export interface EntitlementPayload {
  sub: string;
  plan: 'free' | 'pro';
  exp: number;
  iat: number;
}

export async function verifyEntitlement(token: string): Promise<EntitlementPayload | null> {
  try {
    const key = await jose.importSPKI(PUBLIC_KEY, 'ES256');
    const { payload } = await jose.jwtVerify(token, key, {
      algorithms: ['ES256'],
      issuer: 'workvault-auth',
    });
    return payload as unknown as EntitlementPayload;
  } catch {
    return null;
  }
}

Architecture Rationale

  • WASM SQLite over IndexedDB: Provides explicit file export, crash recovery, and relational querying without browser storage quotas.
  • ECDSA ES256 over RSA: Smaller token size reduces payload overhead; native WebCrypto support ensures fast verification.
  • Short-lived JWTs: Prevents indefinite offline access to paid features while maintaining offline usability. Server re-issuance enforces subscription continuity.
  • Split Storage: Isolates attack surfaces. A breach in the identity provider exposes only auth metadata. A compromised local file contains only user-owned data.

Pitfall Guide

1. Storing Paywall Flags in Local Database

Explanation: Developers often add an is_pro = 1 column to their local SQLite schema. Users can open the file with any SQLite GUI tool, change the value, and bypass payment gates. Fix: Never trust local state for entitlements. Use server-minted cryptographic tokens verified at runtime. The local DB should only contain user data.

2. Ignoring Browser Storage Eviction Policies

Explanation: Browsers may silently purge IndexedDB or localStorage under storage pressure or after periods of inactivity. Relying solely on these APIs risks data loss. Fix: Implement explicit export/import flows. Provide a "Download Backup" button that saves the SQLite binary to the user's filesystem. Use navigator.storage.persist() where supported, but never assume persistence.

3. Over-Engineering Sync Conflict Resolution

Explanation: Multi-user collaborative apps require CRDTs or operational transforms. Single-user productivity tools do not. Implementing complex merge logic adds unnecessary complexity. Fix: Use last-write-wins with modifiedTime comparison. For single-user apps, the latest timestamp is authoritative. Add a manual conflict resolution UI only if cross-device edits occur simultaneously.
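
A last-write-wins resolver really is only a few lines. This sketch assumes snapshots carry a modifiedTime in Unix milliseconds; on a tie the local copy wins, which is a reasonable default for a single-user app.

```typescript
// A snapshot of the serialized SQLite file plus its modification time.
interface DbSnapshot {
  binaryData: Uint8Array;
  modifiedTime: number; // Unix ms
}

// Last-write-wins: whichever copy has the newer modifiedTime is authoritative.
// Ties favor the local copy, avoiding a needless download.
function resolveLastWriteWins(local: DbSnapshot, remote: DbSnapshot): DbSnapshot {
  return remote.modifiedTime > local.modifiedTime ? remote : local;
}
```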

4. Mixing Authentication and Data Sync Lifecycles

Explanation: Tying Google Drive sync directly to Google OAuth tokens creates coupling. If the auth provider changes, the sync layer breaks. Token refresh failures also interrupt data access. Fix: Decouple identity from storage. Use a dedicated sync service account or user-granted OAuth scope for Drive/Dropbox. Store refresh tokens securely in the backend, not the frontend.

5. Assuming Client-Side Validation is Secure

Explanation: Checking if (user.plan === 'pro') in JavaScript provides zero security. The code is visible, modifiable, and bypassable via dev tools or network interception. Fix: Perform all entitlement checks against verified JWT payloads. Cache the verification result in memory, not in persistent storage. Re-verify on app load and after token expiration.
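
The in-memory cache suggested in the fix can be sketched as follows. EntitlementCache is a hypothetical helper keyed to the JWT's exp claim; nothing is ever written to persistent storage, so closing the app discards the cache and forces re-verification.

```typescript
// The subset of the verified JWT payload worth caching.
interface CachedEntitlement {
  plan: 'free' | 'pro';
  exp: number; // Unix seconds, taken from the verified token
}

// In-memory only: lives for the page session, never touches localStorage.
class EntitlementCache {
  private cached: CachedEntitlement | null = null;

  store(e: CachedEntitlement): void {
    this.cached = e;
  }

  // Returns the plan only while the token is still valid; an expired or
  // absent entry returns null, forcing a fresh verification round-trip.
  planAt(nowSeconds: number): 'free' | 'pro' | null {
    if (!this.cached || this.cached.exp <= nowSeconds) return null;
    return this.cached.plan;
  }
}
```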

6. Hardcoding Private Keys in Frontend Bundles

Explanation: Some developers attempt to sign tokens client-side to avoid backend costs. This exposes the private key, allowing anyone to forge unlimited pro tokens. Fix: Keep the private key strictly server-side. Distribute only the public key. Use environment variables and secret management tools (e.g., AWS Secrets Manager, Vercel Env Vars) for key storage.

7. Neglecting Offline Queue Management

Explanation: Users make edits offline. If the app doesn't queue changes, data is lost when sync resumes. Blindly overwriting the cloud file causes data loss. Fix: Maintain a local transaction log. On reconnect, apply pending operations to the local DB, then push the updated binary to cloud storage. Use debounced writes (e.g., 5-10 seconds) to batch rapid edits.
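
A minimal transaction log along these lines can look like the sketch below. OfflineQueue is a hypothetical helper, not code from the article; exec would typically be the sql.js run() method, and the push of the updated binary happens after replay returns.

```typescript
// One queued write, recorded while the network is down.
interface PendingOp {
  sql: string;
  params: unknown[];
  queuedAt: number; // Unix ms, for debugging/ordering
}

class OfflineQueue {
  private ops: PendingOp[] = [];

  enqueue(sql: string, params: unknown[] = []): void {
    this.ops.push({ sql, params, queuedAt: Date.now() });
  }

  // On reconnect: apply every pending statement through the supplied
  // executor, clear the log, and return how many ops were applied so the
  // caller knows whether a cloud push is needed.
  replay(exec: (sql: string, params: unknown[]) => void): number {
    const applied = this.ops.length;
    for (const op of this.ops) exec(op.sql, op.params);
    this.ops = [];
    return applied;
  }
}
```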

Production Bundle

Action Checklist

  • Define data boundary: Separate user-owned records from access control metadata before writing schema.
  • Provision identity backend: Set up Supabase/Auth0 for auth, invites, and subscription records only.
  • Initialize WASM SQLite: Configure sql.js with explicit schema creation and localStorage persistence fallback.
  • Implement sync engine: Build timestamp-based pull/push logic with debounced write batching.
  • Generate ECDSA keypair: Create ES256 keys, store private key in backend secrets, embed public key in frontend.
  • Build token minting route: Create Next.js/Express endpoint that issues short-lived JWTs on authenticated requests.
  • Add verification layer: Integrate WebCrypto JWT verification into app initialization and feature gate checks.
  • Test offline resilience: Simulate network failure, verify local writes persist, confirm sync resumes correctly.

Decision Matrix

| Scenario | Recommended Approach | Why | Cost Impact |
| --- | --- | --- | --- |
| Solo dev shipping privacy-first tool | Hybrid split-storage | Zero infra cost, cryptographic paywalls, user data sovereignty | $0 until scale |
| Enterprise compliance (HIPAA/GDPR) | Fully encrypted backend + client-side keys | Audit trails, centralized access control, legal data residency | High infra + compliance |
| Multi-user collaborative workspace | Cloud-native DB + CRDT sync | Real-time sync, conflict resolution, shared state management | Moderate to high |
| High-volume consumer app | Traditional SaaS stack | Predictable scaling, analytics, centralized user management | Linear with MAU |
| Offline-first field tool | Local SQLite + manual export | No network dependency, explicit backup, minimal attack surface | Near zero |

Configuration Template

// pages/api/entitlement/issue.ts
import { NextApiRequest, NextApiResponse } from 'next';
import { SignJWT, importPKCS8 } from 'jose';
import { getServerSession } from 'next-auth/next';
import { authOptions } from '../auth/[...nextauth]';
// Assumed helper that reads the subscriptions table; path is illustrative.
import { checkSubscriptionStatus } from '@/lib/subscriptions';

// Never hardcode the private key in source (see Pitfall 6); load the
// PKCS#8 PEM from the deployment environment instead.
const PRIVATE_KEY_PEM = process.env.ENTITLEMENT_PRIVATE_KEY!;

export default async function handler(req: NextApiRequest, res: NextApiResponse) {
  if (req.method !== 'POST') return res.status(405).end();

  const session = await getServerSession(req, res, authOptions);
  if (!session?.user?.email) return res.status(401).json({ error: 'Unauthorized' });

  const hasActiveSubscription = await checkSubscriptionStatus(session.user.email);

  // ES256 needs a real EC key object; raw PEM bytes via TextEncoder only
  // work for HMAC secrets, not asymmetric signing.
  const privateKey = await importPKCS8(PRIVATE_KEY_PEM, 'ES256');

  const token = await new SignJWT({
    sub: session.user.email,
    plan: hasActiveSubscription ? 'pro' : 'free',
  })
    .setProtectedHeader({ alg: 'ES256' })
    .setIssuedAt()
    .setIssuer('workvault-auth')
    .setExpirationTime('72h')
    .sign(privateKey);

  res.status(200).json({ token });
}

Quick Start Guide

  1. Initialize the project: Run npx create-next-app@latest workvault --typescript --tailwind --app. Add sql.js, jose, and @supabase/supabase-js to dependencies.
  2. Set up the identity layer: Create a Supabase project. Configure Google OAuth provider. Add a subscriptions table with user_id, status, and plan_type.
  3. Generate cryptographic keys: Run openssl genpkey -algorithm EC -pkeyopt ec_paramgen_curve:P-256 -out private.pem (this emits a PKCS#8 key, the format jose expects), then openssl pkey -in private.pem -pubout -out public.pem to extract the public key. Store the private key in your deployment environment variables.
  4. Implement the sync hook: Copy the useLocalDatabase and SyncEngine examples. Wire them to a Next.js API route that handles Drive/Dropbox file operations using a service account or user OAuth scope.
  5. Deploy and verify: Push to Vercel/Netlify. Test offline mode by disabling network in DevTools. Confirm that local edits persist, sync resumes on reconnect, and feature gates correctly validate the JWT payload.