Documentation
Comprehensive MovaLab documentation covering every aspect of the platform.
For AI Tools: Give your AI this URL to scrape all documentation at once: https://movalab.dev/docs
Overview
MovaLab is an open-source agency operations platform for capacity planning, project management, and workflow automation. Built with Next.js 15, Supabase, and TypeScript. Designed for agencies managing multiple client accounts with complex workflows.
Tech Stack
- Next.js 15 - App Router with React Server Components
- TypeScript - Full type safety, strict mode
- Supabase - PostgreSQL + Auth + Realtime
- Tailwind CSS - Utility-first styling
- shadcn/ui - Radix-based component library
- @xyflow/react - Visual workflow builder
- Recharts - Analytics visualizations
- @dnd-kit - Drag-and-drop for Kanban
- Zod - Runtime type validation
- SWR - Client-side data fetching
Key Features
- ~40 Permissions - Granular RBAC system
- Row Level Security - Database-enforced access
- 33 Database Tables - Comprehensive schema
- 83+ API Endpoints - Full REST/tRPC coverage
- Visual Workflow Builder - Drag-and-drop automation
- Capacity Planning - Proportional allocation
- Time Tracking - Clock in/out with auto-protection
- Client Portal - Separate client access
- Multi-tenancy - Account-based isolation
- Org Chart - Visual hierarchy editor
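The "proportional allocation" behind capacity planning can be illustrated with a small, self-contained sketch. This is not MovaLab's actual implementation (the allocateProportionally function and its shape are hypothetical); it only shows the arithmetic: each project receives a share of a user's available hours equal to its share of total estimated demand.

```typescript
// Hypothetical sketch of proportional capacity allocation: split a user's
// available weekly hours across projects in proportion to each project's
// estimated effort. Not MovaLab's actual implementation.
function allocateProportionally(
  availableHours: number,
  estimates: Record<string, number> // projectId -> estimated hours of demand
): Record<string, number> {
  const total = Object.values(estimates).reduce((sum, h) => sum + h, 0);
  const result: Record<string, number> = {};
  for (const [projectId, hours] of Object.entries(estimates)) {
    // Each project's share of capacity equals its share of total demand.
    // Guard against zero demand; round to 2 decimals for display.
    result[projectId] =
      total === 0 ? 0 : Math.round((hours / total) * availableHours * 100) / 100;
  }
  return result;
}

// Example: 20 available hours, projects estimated at 30h and 10h of demand
// -> { designSite: 15, brandRefresh: 5 }
const split = allocateProportionally(20, { designSite: 30, brandRefresh: 10 });
```

Note the rounded per-project shares may not sum exactly to the available hours; a real allocator would decide how to distribute the remainder.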
Quick Start
# Clone and setup
git clone https://github.com/itigges22/movalab.git
cd movalab

# Run the setup script (or scripts\first-time-setup.bat on Windows)
./scripts/first-time-setup.sh

# Start the dev server, then open http://localhost:3000
npm run dev

# Login: superadmin@test.local / Test1234!
Platform Statistics
| Metric | Value |
|---|---|
| Database Tables | 33 |
| Permissions | ~40 |
| API Endpoints | 83+ |
| RLS Policies | 100+ |
| Components | 107 |
| Services | 43 |
| Lines of Code | 90k+ |
| Test Coverage | E2E + Unit |
Getting Started
This guide walks you through setting up MovaLab for local development. The process takes approximately 10-15 minutes on a fresh system, with most time spent downloading Docker images. By the end, you'll have a fully functional development environment with 8 test users spanning all role levels.
Prerequisites
Install these dependencies before running the setup script. Each is required for different aspects of the development workflow.
| Requirement | Version | Purpose | Installation |
|---|---|---|---|
| Node.js | 18.17+ | JavaScript runtime for Next.js | brew install node or nvm install 20 |
| Docker Desktop | 4.0+ | Runs Supabase locally | docker.com/products/docker-desktop |
| Git | 2.30+ | Version control | brew install git or git-scm.com |
| npm | 9.0+ | Package manager (comes with Node) | Included with Node.js |
| Supabase CLI | 1.0+ | Local Supabase development | npm install -g supabase (or auto-installed via npx) |
| Docker Hub Account | Optional | Avoid image pull rate limits | hub.docker.com/signup (free) |
Verify Prerequisites
# Check Node.js version (must be 18.17+)
node -v

# Check npm version
npm -v

# Check Docker is running (should list version, not error)
docker --version
docker ps          # Should show empty table, not "Cannot connect"

# Check Git version
git --version

# Optional: Check Supabase CLI (auto-installed via npx if not present)
npx supabase --version

# Optional: Login to Docker Hub (avoids rate limits)
docker login
Docker Desktop Setup
1. Install Docker Desktop - Download from docker.com; macOS users can also run brew install --cask docker.
2. Start Docker Desktop - Launch the app and wait for the whale icon to stop animating (this indicates the engine is ready).
3. Allocate resources - Under Settings > Resources, allocate at least 4GB RAM and 2 CPUs for Supabase.
Quick Start Installation
# Step 1: Clone the repository
git clone https://github.com/itigges22/movalab.git
cd movalab

# Step 2: Run the setup script (macOS / Linux / Git Bash)
./scripts/first-time-setup.sh
# OR for Windows CMD / PowerShell:
scripts\first-time-setup.bat
What the Setup Script Does
1. Install dependencies - Runs npm install to download all packages (~2 min).
2. Start Docker containers - Pulls Supabase images and starts services (~3-5 min the first time).
3. Wait for services - Ensures PostgreSQL, Auth, and Storage are healthy.
4. Run migrations - Applies all database migrations in order (~30 sec).
5. Seed test data - Creates 8 test users, sample accounts, projects, and workflows (~10 sec).
6. Generate types - Creates TypeScript types from the database schema.
7. Create .env.local - Copies example environment variables with correct local values.
8. Start dev server - Launches the Next.js development server on port 3000.
Manual Installation (Step by Step)
If the setup script fails or you prefer manual control, follow these steps:
# 1. Clone and enter directory
git clone https://github.com/itigges22/movalab.git
cd movalab

# 2. Install npm packages
npm install

# 3. Copy environment template
cp .env.example .env.local

# 4. Start Supabase (first run downloads images)
npx supabase start

# 5. Get the anon key (copy output to .env.local)
npx supabase status

# 6. Update .env.local with values from step 5:
#    NEXT_PUBLIC_SUPABASE_URL=http://127.0.0.1:54321
#    NEXT_PUBLIC_SUPABASE_PUBLISHABLE_DEFAULT_KEY=<anon key from status>

# 7. Run database migrations
npx supabase db push

# 8. Seed test data
npx supabase db seed

# 9. Generate TypeScript types
npm run gen:types

# 10. Start development server
npm run dev
Environment Variables
The setup script creates .env.local automatically. For manual setup or production, configure these variables:
| Variable | Required | Local Dev Value | Description |
|---|---|---|---|
| NEXT_PUBLIC_SUPABASE_URL | Yes | http://127.0.0.1:54321 | Supabase API endpoint |
| NEXT_PUBLIC_SUPABASE_PUBLISHABLE_DEFAULT_KEY | Yes | (from npx supabase status) | Supabase anon/public key |
| UPSTASH_REDIS_REST_URL | No | | Redis URL for rate limiting |
| UPSTASH_REDIS_REST_TOKEN | No | | Redis authentication token |
| ENABLE_RATE_LIMIT | No | false | Enable API rate limiting |
| EXPOSE_ERROR_DETAILS | No | true | Show detailed errors (dev only) |
| LOG_LEVEL | No | debug | Logging verbosity: debug, info, warn, or error |
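As an illustration of how these variables might be read and defaulted at startup, here is a hedged sketch of a config loader. MovaLab's real lib/config.ts may look different; the loadConfig helper and AppConfig shape are hypothetical.

```typescript
// Hypothetical config loader for the variables in the table above.
// Optional values fall back to the local-dev defaults shown in the table.
type LogLevel = "debug" | "info" | "warn" | "error";

interface AppConfig {
  supabaseUrl: string;
  enableRateLimit: boolean;
  exposeErrorDetails: boolean;
  logLevel: LogLevel;
}

function loadConfig(env: Record<string, string | undefined>): AppConfig {
  const url = env.NEXT_PUBLIC_SUPABASE_URL;
  if (!url) throw new Error("NEXT_PUBLIC_SUPABASE_URL is required");

  const level = env.LOG_LEVEL ?? "debug";
  const validLevels: LogLevel[] = ["debug", "info", "warn", "error"];
  return {
    supabaseUrl: url,
    // Booleans arrive from the environment as strings.
    enableRateLimit: env.ENABLE_RATE_LIMIT === "true",
    // Defaults to true for local dev; production should set it to "false".
    exposeErrorDetails: env.EXPOSE_ERROR_DETAILS !== "false",
    logLevel: validLevels.includes(level as LogLevel) ? (level as LogLevel) : "debug",
  };
}
```

Failing fast on the required Supabase URL surfaces misconfiguration at boot rather than as a cryptic fetch error later.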
Setup Files Reference
| File | Purpose |
|---|---|
| .env.example | Base environment template - copy this to .env.local |
| .env.local.template | Complete local dev template with all variables pre-filled |
| scripts/first-time-setup.sh | Automated setup script for macOS/Linux/Git Bash |
| scripts/first-time-setup.bat | Automated setup script for Windows CMD/PowerShell |
| scripts/create-seed-users.ts | Creates test user accounts with all role levels |
Security Warning: NEVER expose the service_role key in client code or commit it to the repository. The service_role key bypasses all Row Level Security policies, so leaking it is a critical vulnerability. Client code should only ever use the publishable key (NEXT_PUBLIC_SUPABASE_PUBLISHABLE_DEFAULT_KEY), which respects RLS.
Test User Accounts
The seed script creates 8 test users covering all permission levels. All use password: Test1234!
| Email | Role | Access Level | Use For Testing |
|---|---|---|---|
| superadmin@test.local | Superadmin | Full access, bypasses all checks | Admin features, system config |
| exec@test.local | Executive Director | Org-wide access, analytics | Dashboards, reports |
| manager@test.local | Account Manager | Multi-account management | Account CRUD, assignments |
| pm@test.local | Project Manager | Project-level management | Projects, tasks, workflows |
| designer@test.local | Senior Designer | Design dept, full project access | Design-specific features |
| dev@test.local | Senior Developer | Engineering dept | Dev-specific features |
| contributor@test.local | Contributor | Limited access, 20 hrs/week | Basic user experience |
| client@test.local | Client | Portal only, view deliverables | Client portal testing |
First Login Walkthrough
1. Open the application - Navigate to http://localhost:3000 in your browser.
2. Click Sign In - Use the Sign In button in the top navigation.
3. Enter credentials - Try superadmin@test.local / Test1234! for full access.
4. Explore the dashboard - View projects, tasks, and accounts to explore the full system.
Docker Commands
Note: npm scripts wrap Supabase CLI commands for convenience. npm run docker:start runs npx supabase start internally.
# First-time setup
npm run docker:init      # Initialize Supabase project (runs supabase init)

# Daily development
npm run docker:start     # Start Supabase services (wraps: npx supabase start)
npm run docker:stop      # Stop containers (preserves data)
npm run docker:status    # Check service health (wraps: npx supabase status)

# Database management
npm run docker:reset     # Reset DB, re-run all migrations
npm run docker:seed      # Reset DB + seed test users
npm run docker:studio    # Open Supabase Studio at localhost:54323

# Troubleshooting
npm run docker:health    # Verify Docker setup is healthy
npm run docker:logs      # View container logs
npm run docker:clean     # Remove all containers and volumes (fresh start)
Development Commands
# Development server
npm run dev              # Start on localhost:3000
npm run dev:local        # Start with local Supabase (docker:start + dev)
npm run dev:cloud        # Start with cloud Supabase (uses .env.local cloud keys)
npm run dev:clean        # Clean .next cache and start
npm run dev:fresh        # Kill port 3000, clean cache, start fresh

# Build & production
npm run build            # Production build (catches type errors)
npm run start            # Start production server
npm run lint             # Run ESLint

# Types & code generation
npm run gen:types        # Regenerate Supabase TypeScript types

# Cleanup
npm run clean            # Clean .next and cache directories
Supabase CLI Commands
Direct Supabase CLI commands for database and schema management. Use npx supabase or install globally with npm install -g supabase.
# Schema & migrations
npx supabase db pull     # Pull remote schema to local migrations
npx supabase db push     # Push local migrations to remote database
npx supabase db reset    # Reset local database and run all migrations
npx supabase db diff     # Show differences between local and remote schema

# Type generation (generates TypeScript types from your database schema)
npx supabase gen types typescript --local > src/types/supabase.ts

# Status & info
npx supabase status      # Show local Supabase service status and keys
npx supabase start       # Start local Supabase (same as docker:start)
npx supabase stop        # Stop local Supabase (same as docker:stop)

# Remote connection (for cloud Supabase)
npx supabase link --project-ref <project-id>   # Link to cloud project
npx supabase db pull --schema public           # Pull cloud schema locally
Service URLs
| Service | URL | Purpose |
|---|---|---|
| Application | http://localhost:3000 | Next.js development server |
| Supabase Studio | http://localhost:54323 | Database admin, table editor, SQL runner |
| Supabase API | http://localhost:54321 | REST and GraphQL endpoints |
| PostgreSQL | localhost:54322 | Direct database connection (user: postgres) |
| Inbucket (Email) | http://localhost:54324 | Catches all emails (password reset, etc.) |
| Kong Gateway | http://localhost:54321 | API gateway with rate limiting |
Post-Installation Verification
After installation, verify everything is working correctly:
# 1. Check all Docker containers are running
docker ps                # Should show 6+ Supabase containers (db, auth, storage, etc.)

# 2. Verify Supabase is healthy
npm run docker:health    # Should report all services as healthy

# 3. Test database connection
npx supabase status      # Should show connection details and API keys

# 4. Verify app is running
curl http://localhost:3000   # Should return HTML (or open in browser)

# 5. Run build to catch any TypeScript errors
npm run build            # Should complete without errors
Common First-Time Issues
- Docker images download slowly - The first run pulls ~2GB of images. Wait it out, or log in to Docker Hub to avoid rate limits.
- Port 3000 already in use - Run npm run dev:fresh to kill the port and restart.
- Supabase services not starting - Ensure Docker Desktop is running, check docker ps, then try npm run docker:reset.
- Login fails with 401 - Verify .env.local has the correct keys from npx supabase status, then restart the dev server.
- TypeScript errors in IDE - Run npm run gen:types to regenerate types, then restart the TypeScript server.
Architecture
MovaLab is built on Next.js 15 App Router with Supabase as the backend. The architecture follows a layered approach with clear separation of concerns: React components for UI, API routes for HTTP endpoints, service layer for business logic, and PostgreSQL with Row Level Security for data protection.
High-Level Architecture
MOVALAB ARCHITECTURE

  PRESENTATION LAYER
    Server Components (data fetching) · Client Components (interactivity) · shadcn/ui (UI library)
                    │
                    ▼
  API LAYER
    API Routes (83 endpoints) · Server Actions (form handling) · Middleware (auth check)
                    │
                    ▼
  SERVICE LAYER
    Permission Checker · Validation (Zod) · Business Logic (43 services)
                    │
                    ▼
  DATA LAYER
    Supabase Clients (3) · PostgreSQL (33 tables) · Row Level Security
Project Structure
movalab/
├── app/ # Next.js App Router
│ ├── (auth)/ # Auth routes (unprotected)
│ │ ├── login/ # Login page
│ │ ├── register/ # Registration page
│ │ └── forgot-password/ # Password reset
│ ├── (dashboard)/ # Protected dashboard routes
│ │ ├── projects/ # Project management
│ │ │ ├── [id]/ # Project detail (dynamic route)
│ │ │ │ ├── page.tsx # Project overview
│ │ │ │ ├── tasks/ # Task management
│ │ │ │ ├── timeline/ # Project timeline
│ │ │ │ └── settings/ # Project settings
│ │ │ └── new/ # Create project
│ │ ├── accounts/ # Client accounts
│ │ │ ├── [id]/ # Account detail
│ │ │ │ ├── projects/ # Account projects
│ │ │ │ ├── members/ # Account members
│ │ │ │ └── kanban/ # Kanban configuration
│ │ ├── admin/ # Admin panel
│ │ │ ├── roles/ # Role management + org chart
│ │ │ ├── users/ # User management
│ │ │ ├── departments/ # Department management
│ │ │ └── workflows/ # Workflow templates
│ │ ├── capacity/ # Capacity planning
│ │ ├── time-entries/ # Time tracking
│ │ ├── workflows/ # Workflow builder (React Flow)
│ │ └── pipeline/ # Project pipeline view
│ └── api/ # API routes (83 endpoints)
│ ├── projects/ # Project endpoints
│ ├── tasks/ # Task endpoints
│ ├── accounts/ # Account endpoints
│ ├── workflows/ # Workflow endpoints
│ ├── time-entries/ # Time tracking endpoints
│ ├── capacity/ # Capacity endpoints
│ └── admin/ # Admin endpoints
├── components/ # React components (107 total)
│ ├── ui/ # shadcn/ui base components (27)
│ ├── workflow-editor/ # React Flow workflow components
│ ├── org-chart/ # Org chart components
│ ├── time-entries/ # Time tracking components
│ ├── capacity/ # Capacity components
│ └── [feature]/ # Feature-specific components
├── lib/ # Core libraries and services
│ ├── supabase.ts # Client-side Supabase (singleton)
│ ├── supabase-server.ts # Server-side Supabase clients
│ ├── permissions.ts # Permission enum (40 permissions)
│ ├── permission-checker.ts # Hybrid permission engine
│ ├── rbac.ts # RBAC helper functions
│ ├── rbac-types.ts # TypeScript types
│ ├── auth.ts # Authentication utilities
│ ├── validation-schemas.ts # Zod validation schemas
│ ├── debug-logger.ts # Logging utilities
│ ├── config.ts # App configuration
│ ├── rate-limit.ts # Rate limiting
│ └── services/ # Business logic services
│ ├── capacity-service.ts # Capacity calculations
│ ├── time-entry-service.ts # Time entry logic
│ └── availability-service.ts # Availability management
├── lib/ # Additional service files (43 total)
│ ├── account-service.ts # Account CRUD + members
│ ├── assignment-service.ts # Project/task assignments
│ ├── client-portal-service.ts # Client portal features
│ ├── department-service.ts # Department management
│ ├── form-service.ts # Form templates + responses
│ ├── newsletter-service.ts # Newsletter management
│ ├── organization-service.ts # Org chart service
│ ├── project-issues-service.ts # Project issue tracking
│ ├── role-management-service.ts # Role CRUD
│ ├── workflow-service.ts # Workflow template management
│ └── workflow-execution-service.ts # Workflow execution engine
├── supabase/
│ └── migrations/ # Database migrations (5 files)
│ ├── 20250123000000_schema_base.sql # Table definitions
│ ├── 20250123010000_functions_fixed.sql # Database functions
│ ├── 20250123020000_views.sql # Views
│ ├── 20250123030000_rls_policies_fixed.sql # RLS policies
│ └── 20250123040000_triggers.sql # Triggers
├── scripts/ # Setup and utility scripts
│ ├── first-time-setup.sh # Mac/Linux setup
│ ├── first-time-setup.bat # Windows setup
│ └── seed-data.sql # Demo data
└── tests/ # Test files
    └── e2e/                      # Playwright E2E tests

Supabase Client Patterns (Critical)
MovaLab uses three distinct Supabase client types. Using the wrong client type is a common source of bugs, especially with RLS policies.
┌─────────────────────────────────────────────────────────────────────────────┐
│ SUPABASE CLIENT PATTERNS │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ 1. CLIENT COMPONENT (createClientSupabase) │
│ ───────────────────────────────────────── │
│ File: lib/supabase.ts │
│ Use in: Browser-side code, Client Components │
│ Auth: Uses browser cookies automatically │
│ │
│ const supabase = createClientSupabase(); │
│ // Returns singleton instance in browser │
│ // Returns null on server (SSR) - must use createServerSupabase instead │
│ │
│ Key Features: │
│ - Singleton pattern (avoids multiple GoTrueClient instances) │
│ - Auto session refresh on token expiry │
│ - Auth state change listener for logout/refresh │
│ │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ 2. SERVER COMPONENT (createServerSupabase) │
│ ───────────────────────────────────────── │
│ File: lib/supabase-server.ts │
│ Use in: React Server Components (RSC), page.tsx data fetching │
│ Auth: Uses cookies() from next/headers │
│ │
│ const supabase = await createServerSupabase(); │
│ // Async function - must await │
│ // Uses cookies() which must be awaited in Next.js 15 │
│ │
│ Key Features: │
│ - Reads auth cookies from request │
│ - 30-second fetch timeout to prevent hanging │
│ - Safe cookie handling (catches errors in API routes) │
│ │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ 3. API ROUTE (createApiSupabaseClient) │
│ ────────────────────────────────────── │
│ File: lib/supabase-server.ts │
│ Use in: API routes (route.ts files), Route Handlers │
│ Auth: Parses cookies from request headers │
│ │
│ export async function POST(request: NextRequest) { │
│ const supabase = createApiSupabaseClient(request); │
│ // REQUIRED: Pass request object for cookie parsing │
│ // Must pass this client to hasPermission() for proper RLS │
│ } │
│ │
│ CRITICAL: cookies() from next/headers CANNOT be used in Route Handlers │
│ We must parse cookies from request.headers.get('cookie') instead │
│ │
│ Key Features: │
│ - Parses cookies from request.headers and request.cookies │
│ - URL decodes cookie values │
│ - 30-second fetch timeout │
│ - ALWAYS pass to hasPermission() for correct RLS context │
│ │
└─────────────────────────────────────────────────────────────────────────────┘

Common Supabase Client Mistakes
- ❌ Using createClientSupabase() in API routes → RLS sees unauthenticated user
- ❌ Not passing supabase to hasPermission() → Permission checks fail silently
- ❌ Calling createServerSupabase() without await → Returns Promise, not client
- ❌ Using cookies() in Route Handlers → Next.js 15 throws error
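The API route pattern above notes that Route Handlers must parse cookies from request.headers.get('cookie') manually. Here is a self-contained sketch of just that parsing step; the real createApiSupabaseClient additionally wires the parsed cookies into a Supabase client, so the parseCookieHeader helper below is illustrative.

```typescript
// Illustrative cookie-header parser, conceptually what an API-route client
// factory must do: split the raw Cookie header into name/value pairs and
// URL-decode each value.
function parseCookieHeader(header: string | null): Record<string, string> {
  const cookies: Record<string, string> = {};
  if (!header) return cookies;
  for (const pair of header.split(";")) {
    const eq = pair.indexOf("=");
    if (eq === -1) continue; // skip malformed fragments
    const name = pair.slice(0, eq).trim();
    const value = pair.slice(eq + 1).trim();
    try {
      cookies[name] = decodeURIComponent(value);
    } catch {
      cookies[name] = value; // keep the raw value if decoding fails
    }
  }
  return cookies;
}

// Example: parseCookieHeader("sb-access-token=abc%20def; theme=dark")
// -> { "sb-access-token": "abc def", theme: "dark" }
```

Keeping the raw value on a decode failure avoids dropping an auth cookie just because it contains an unexpected percent sequence.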
Core Architectural Patterns
1. Service Layer Pattern
All business logic is centralized in service files (43 services). API routes are thin wrappers that delegate to services. This provides reusability, testability, and consistent error handling.
// lib/account-service.ts - Service encapsulates business logic
export async function createAccount(
supabase: SupabaseClient,
data: CreateAccountData,
userId: string
): Promise<{ success: boolean; account?: Account; error?: string }> {
// 1. Validation
const validation = accountSchema.safeParse(data);
if (!validation.success) {
return { success: false, error: validation.error.message };
}
// 2. Business logic (check for duplicates, etc.)
const { data: existing } = await supabase
.from('accounts')
.select('id')
.eq('name', data.name)
.single();
if (existing) {
return { success: false, error: 'Account with this name already exists' };
}
// 3. Database operation
const { data: account, error } = await supabase
.from('accounts')
.insert({ ...data, created_by: userId })
.select()
.single();
if (error) {
return { success: false, error: error.message };
}
// 4. Side effects (add creator as member)
await supabase.from('account_members').insert({
account_id: account.id,
user_id: userId,
role: 'owner'
});
return { success: true, account };
}
// app/api/accounts/route.ts - API route is thin wrapper
export async function POST(request: NextRequest) {
  const supabase = createApiSupabaseClient(request);
  const { data: { user } } = await supabase.auth.getUser();
  if (!user) {
    return NextResponse.json({ error: 'Unauthorized' }, { status: 401 });
  }
  const body = await request.json();
  const result = await createAccount(supabase, body, user.id);
  if (!result.success) {
    return NextResponse.json({ error: result.error }, { status: 400 });
  }
  return NextResponse.json({ account: result.account }, { status: 201 });
}

Key services: account-service.ts, assignment-service.ts, workflow-execution-service.ts, capacity-service.ts, time-entry-service.ts, role-management-service.ts
2. Zod Schema Validation
All API inputs are validated using Zod schemas. Schemas are defined in lib/validation-schemas.ts and reused across client and server.
// lib/validation-schemas.ts
import { z } from 'zod';
export const createProjectSchema = z.object({
name: z.string().min(1, 'Project name is required').max(255),
description: z.string().optional(),
accountId: z.string().uuid('Invalid account ID'),
status: z.enum(['planning', 'in_progress', 'review', 'complete']).default('planning'),
start_date: z.string().datetime().optional(),
end_date: z.string().datetime().optional(),
budget: z.number().positive().optional(),
assigned_user_id: z.string().uuid().optional(),
});
export type CreateProjectData = z.infer<typeof createProjectSchema>;
// Helper for API routes
export function validateRequestBody<T>(schema: z.Schema<T>, data: unknown) {
const result = schema.safeParse(data);
if (!result.success) {
return { success: false, error: result.error.format() };
}
return { success: true, data: result.data };
}

3. Consistent Error Handling
All API routes follow a consistent error response format. The debug logger captures errors for debugging while respecting production safety.
// Standard error response format
{
"error": "Human-readable error message",
"details": "Technical details (only in development)" // if config.errors.exposeDetails
}
// lib/debug-logger.ts usage
import { logger } from '@/lib/debug-logger';
// API route pattern
export async function POST(request: NextRequest) {
try {
// ... route logic
} catch (error: unknown) {
logger.error('Error in POST /api/projects', { action: 'create_project' }, error as Error);
return NextResponse.json({
error: 'Internal server error',
...(config.errors.exposeDetails && { details: (error as Error).message })
}, { status: 500 });
}
}

4. Authentication Flow
Supabase Auth handles authentication. JWTs are stored in HTTP-only cookies. Middleware protects dashboard routes.
// Authentication flow
1. User submits login form
↓
2. supabase.auth.signInWithPassword({ email, password })
↓
3. Supabase validates credentials, returns JWT + refresh token
↓
4. Tokens stored in HTTP-only cookies automatically
↓
5. Subsequent requests include cookies
↓
6. Supabase client reads cookies, sets auth.uid() in RLS context
↓
7. RLS policies use auth.uid() to filter data
// middleware.ts - Protects dashboard routes
export const config = {
matcher: [
'/((?!_next/static|_next/image|favicon.ico|public|login|register|api/auth).*)',
],
};
export async function middleware(request: NextRequest) {
const supabase = createMiddlewareClient(request);
const { data: { session } } = await supabase.auth.getSession();
if (!session && !request.nextUrl.pathname.startsWith('/login')) {
return NextResponse.redirect(new URL('/login', request.url));
}
return NextResponse.next();
}

Complete Request Lifecycle
┌─────────────────────────────────────────────────────────────────────────────┐
│ REQUEST LIFECYCLE (API ROUTE) │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ 1. USER ACTION │
│ └── Click "Create Project" button │
│ │
│ 2. CLIENT COMPONENT │
│ └── fetch('/api/projects', { method: 'POST', body: JSON.stringify(...) })│
│ │
│ 3. MIDDLEWARE (middleware.ts) │
│ ├── Check auth cookies │
│ └── Redirect to /login if not authenticated │
│ │
│ 4. API ROUTE (app/api/projects/route.ts) │
│ ├── Create Supabase client: createApiSupabaseClient(request) │
│ ├── Get authenticated user: supabase.auth.getUser() │
│ └── Parse request body: await request.json() │
│ │
│ 5. VALIDATION (lib/validation-schemas.ts) │
│ ├── validateRequestBody(createProjectSchema, body) │
│ └── Return 400 if validation fails │
│ │
│ 6. PERMISSION CHECK (lib/permission-checker.ts) │
│ ├── hasPermission(userProfile, Permission.MANAGE_PROJECTS, { accountId })│
│ ├── Layer 1: Superadmin bypass? │
│ ├── Layer 2: Base permission check │
│ ├── Layer 3: Override or context check │
│ └── Return 403 if permission denied │
│ │
│ 7. SERVICE LAYER (lib/account-service.ts, etc.) │
│ ├── Execute business logic │
│ ├── Call supabase.from('projects').insert(...) │
│ └── Handle side effects (notifications, audit log) │
│ │
│ 8. DATABASE (Supabase PostgreSQL) │
│ ├── RLS policy evaluation: auth.uid() checked │
│ ├── INSERT executed if RLS allows │
│ └── Triggers fire (updated_at, etc.) │
│ │
│ 9. RESPONSE │
│ ├── Service returns { success: true, project } │
│ ├── API route returns NextResponse.json({ project }, { status: 201 }) │
│ └── Client receives response, updates UI │
│ │
└─────────────────────────────────────────────────────────────────────────────┘

Dynamic Department Membership
Unlike static org charts, MovaLab departments derive from active project work. This enables accurate capacity tracking.
Dynamic Department Membership Flow:
1. User is assigned to a project
└── project_assignments.insert({ user_id, project_id, role_in_project })
2. User has a role (e.g., "Graphic Designer")
└── user_roles.role_id → roles.id
3. Role belongs to a department (e.g., "Graphics")
└── roles.department_id → departments.id
4. User is now "in" Graphics department
└── Calculated at query time, not stored statically
└── getUserDepartments(userProfile) returns ["Graphics"]
5. User removed from all Graphics projects
└── No longer appears in Graphics department
└── Capacity calculations update automatically
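The derivation above can be mirrored in application code. As a hedged sketch (in-memory rows instead of a live Supabase query, with simplified row shapes), a getUserDepartments helper walks user_roles → roles → departments and deduplicates:

```typescript
// Illustrative in-memory version of getUserDepartments: walk
// user_roles -> roles -> departments and return distinct department names.
// The real service runs the equivalent join against Supabase.
interface UserRoleRow { user_id: string; role_id: string }
interface RoleRow { id: string; department_id: string | null }
interface DepartmentRow { id: string; name: string }

function getUserDepartments(
  userId: string,
  userRoles: UserRoleRow[],
  roles: RoleRow[],
  departments: DepartmentRow[]
): string[] {
  const roleById = new Map(roles.map((r) => [r.id, r]));
  const deptById = new Map(departments.map((d) => [d.id, d]));
  const names = new Set<string>();
  for (const ur of userRoles) {
    if (ur.user_id !== userId) continue;
    const role = roleById.get(ur.role_id);
    const dept = role?.department_id ? deptById.get(role.department_id) : undefined;
    if (dept) names.add(dept.name); // Set mirrors SELECT DISTINCT
  }
  return [...names];
}
```

Because membership is computed at lookup time, removing a user's last role in a department removes them from it with no cleanup step.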
// Query to get user's current departments
SELECT DISTINCT d.name
FROM user_roles ur
JOIN roles r ON ur.role_id = r.id
JOIN departments d ON r.department_id = d.id
WHERE ur.user_id = :userId;
// This is why departments are "dynamic": no explicit assignment table is needed

File Naming Conventions
Next.js App Router
- page.tsx - Page component (route entry)
- layout.tsx - Shared layout wrapper
- loading.tsx - Loading state UI
- error.tsx - Error boundary UI
- route.ts - API route handler
- [id]/ - Dynamic route segment
- (group)/ - Route group (no URL impact)
Library Files
- *-service.ts - Business logic service
- *-types.ts - TypeScript type definitions
- *-client-service.ts - Client-side only service
- use*.ts - React hook
- validation-schemas.ts - Zod schemas
Database Schema
MovaLab uses PostgreSQL via Supabase with 33 tables, 100+ Row Level Security policies, 7 database functions, 1 view, and automated triggers. All schema files are in /supabase/migrations/.
Schema at a Glance
| Metric | Count |
|---|---|
| Tables | 33 |
| RLS Policies | 100+ |
| Functions | 7 |
| Triggers | 17 |
Entity Relationships
MOVALAB DATABASE SCHEMA (core relationships)

  auth.users ──► user_profiles ◄──► user_roles ──► roles ──► departments
                      │
                      ▼
               account_members ──► accounts ◄── projects ──► departments
                                                   │
                         ┌─────────────────────────┼─────────────────────┐
                         ▼                         ▼                     ▼
                       tasks               project_assignments     time_entries
                         │
                         ▼
                task_week_allocations

  Related: clock_sessions (time tracking) · workflow_instances (workflow engine)
Complete Table Reference
User Management Tables (5 tables)
user_profiles
Extends Supabase auth.users with application-specific profile data. Primary key references auth.users(id) with CASCADE delete.
| Column | Type | Constraints | Description |
|---|---|---|---|
| id | UUID | PK, FK→auth.users CASCADE | Matches Supabase auth user ID |
| email | TEXT | NOT NULL, UNIQUE | User email address |
| name | TEXT | NOT NULL | Display name |
| image | TEXT | NULLABLE | Avatar URL |
| bio | TEXT | NULLABLE | User biography |
| skills | TEXT[] | NULLABLE | Array of skill tags |
| workload_sentiment | TEXT | CHECK IN (comfortable, stretched, overwhelmed) | Self-reported workload status |
| is_superadmin | BOOLEAN | DEFAULT false, NOT NULL | Fast path for superadmin checks |
| created_at | TIMESTAMPTZ | DEFAULT NOW(), NOT NULL | Creation timestamp |
| updated_at | TIMESTAMPTZ | DEFAULT NOW(), NOT NULL | Auto-updated via trigger |
roles
Defines organizational roles with permission sets. Supports hierarchy via hierarchy_level and org chart positioning.
| Column | Type | Constraints | Description |
|---|---|---|---|
| id | UUID | PK, DEFAULT uuid_generate_v4() | Primary key |
| name | TEXT | NOT NULL, UNIQUE | Role name (e.g., "Project Manager") |
| department_id | UUID | FK→departments SET NULL | Optional department scope |
| description | TEXT | NULLABLE | Role description |
| permissions | JSONB | DEFAULT '{}', NOT NULL | Permission key-value pairs |
| is_system_role | BOOLEAN | DEFAULT false, NOT NULL | Protected from deletion |
| hierarchy_level | INTEGER | DEFAULT 0 | 0-100, higher = more senior |
| display_order | INTEGER | DEFAULT 0 | UI sort order |
| reporting_role_id | UUID | FK→roles SET NULL (self-ref) | Reports to this role |
| chart_position_x | FLOAT | NULLABLE | Org chart X position |
| chart_position_y | FLOAT | NULLABLE | Org chart Y position |
permissions JSONB Schema:
{
"view_projects": true,
"manage_projects": true,
"view_accounts": true,
"manage_time": false,
"view_all_time_entries": false,
// ... ~40 possible permission keys
}
user_roles
Many-to-many mapping between users and roles. Users can have multiple roles. Critical table for permission checks.
| Column | Type | Constraints | Description |
|---|---|---|---|
| id | UUID | PK | Primary key |
| user_id | UUID | FK→user_profiles CASCADE, NOT NULL | User being assigned |
| role_id | UUID | FK→roles CASCADE, NOT NULL | Role being assigned |
| assigned_at | TIMESTAMPTZ | DEFAULT NOW() | When role was assigned |
| assigned_by | UUID | FK→user_profiles | Who assigned the role |
departments
Organizational departments. Used for role scoping and capacity aggregation.
| Column | Type | Constraints |
|---|---|---|
| id | UUID | PK |
| name | TEXT | NOT NULL, UNIQUE |
| description | TEXT | NULLABLE |
| created_at, updated_at | TIMESTAMPTZ | DEFAULT NOW() |
role_hierarchy_audit
Audit trail for org chart changes. Tracks reporting structure modifications.
| Column | Type | Description |
|---|---|---|
| role_id | UUID FK | Role that was modified |
| changed_by | UUID FK | User who made change |
| action | TEXT | Type of change |
| old_reporting_role_id | UUID | Previous reporting role |
| new_reporting_role_id | UUID | New reporting role |
| metadata | JSONB | Additional change context |
Accounts & Projects Tables (8 tables)
accounts
Client accounts with tiering and status. Central entity for client data isolation via RLS.
| Column | Type | Constraints |
|---|---|---|
| id | UUID | PK |
| name | TEXT | NOT NULL, UNIQUE |
| description | TEXT | NULLABLE |
| primary_contact_email | TEXT | NULLABLE |
| primary_contact_name | TEXT | NULLABLE |
| account_manager_id | UUID | FK→user_profiles |
| service_tier | TEXT | CHECK IN (basic, premium, enterprise) |
| status | TEXT | DEFAULT 'active', CHECK IN (active, inactive, suspended) |
projects
Core project entity. Always belongs to an account. Cascade deletes tasks, assignments, time entries.
| Column | Type | Constraints |
|---|---|---|
| id | UUID | PK |
| name | TEXT | NOT NULL |
| account_id | UUID | FK→accounts CASCADE, NOT NULL |
| status | TEXT | CHECK IN (planning, in_progress, review, complete, on_hold) |
| priority | TEXT | CHECK IN (low, medium, high, urgent) |
| start_date, end_date | DATE | NULLABLE |
| estimated_hours | NUMERIC(10,2) | NULLABLE |
| actual_hours | NUMERIC(10,2) | DEFAULT 0 |
| created_by | UUID | FK→user_profiles |
| assigned_user_id | UUID | FK→user_profiles (primary assignee) |
account_members
Many-to-many: user_id, account_id. Controls account-level access.
project_assignments
project_id, user_id, role_in_project. Soft delete via removed_at.
project_stakeholders
Non-working observers: project_id, user_id, role.
project_updates
Status journal: project_id, content, created_by.
project_issues
Blockers: project_id, content, status (open/in_progress/resolved).
account_kanban_configs
Per-account kanban columns as JSONB array.
Tasks Tables (2 tables)
tasks
Individual work items within projects. Supports Kanban boards and time tracking.
| Column | Type | Constraints |
|---|---|---|
| id | UUID | PK |
| name | TEXT | NOT NULL |
| project_id | UUID | FK→projects CASCADE, NOT NULL |
| status | TEXT | CHECK IN (backlog, todo, in_progress, review, done, blocked) |
| priority | TEXT | CHECK IN (low, medium, high, urgent) |
| assigned_to | UUID | FK→user_profiles |
| estimated_hours | NUMERIC(10,2) | NULLABLE |
| actual_hours | NUMERIC(10,2) | DEFAULT 0 |
| due_date | DATE | NULLABLE |
task_dependencies
Defines task ordering for Gantt charts. Supports 4 dependency types.
finish_to_start
B starts when A finishes (most common)
start_to_start
B starts when A starts
finish_to_finish
B finishes when A finishes
start_to_finish
B finishes when A starts
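The four dependency semantics can be sketched as a tiny scheduling helper. This is illustrative only: the function and type names are assumptions, and lag offsets (which real Gantt tools usually support) are omitted.

```typescript
type DependencyType =
  | 'finish_to_start' | 'start_to_start'
  | 'finish_to_finish' | 'start_to_finish';

interface TaskWindow { start: number; finish: number } // e.g. day offsets

// The constraint a dependency on task A places on task B's window.
function constraintOnB(type: DependencyType, a: TaskWindow): Partial<TaskWindow> {
  switch (type) {
    case 'finish_to_start':  return { start: a.finish };   // B starts when A finishes
    case 'start_to_start':   return { start: a.start };    // B starts when A starts
    case 'finish_to_finish': return { finish: a.finish };  // B finishes when A finishes
    case 'start_to_finish':  return { finish: a.start };   // B finishes when A starts
  }
}
```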
Time Tracking & Capacity Tables (4 tables)
time_entries
Actual hours logged against tasks/projects. Links to clock sessions. 14-day edit window enforced via RLS.
| Column | Type | Constraints |
|---|---|---|
| id | UUID | PK |
| user_id | UUID | FK→user_profiles CASCADE, NOT NULL |
| project_id | UUID | FK→projects CASCADE, NOT NULL |
| task_id | UUID | FK→tasks CASCADE (optional) |
| hours_logged | NUMERIC(5,2) | CHECK (>0 AND <=24), NOT NULL |
| entry_date | DATE | NOT NULL |
| week_start_date | DATE | NOT NULL (Monday of week) |
| clock_session_id | UUID | FK→clock_sessions (links to clock in/out) |
| is_auto_clock_out | BOOLEAN | DEFAULT false |
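A few helpers implied by these constraints, sketched client-side under the assumption that the same rules are mirrored before insert (the database CHECK constraints and RLS remain the source of truth; helper names are illustrative):

```typescript
// Mirrors CHECK (hours_logged > 0 AND hours_logged <= 24).
function isValidHoursLogged(hours: number): boolean {
  return Number.isFinite(hours) && hours > 0 && hours <= 24;
}

// week_start_date is always the Monday of entry_date's week (ISO 8601).
function weekStartDate(entryDate: Date): Date {
  const d = new Date(Date.UTC(
    entryDate.getUTCFullYear(), entryDate.getUTCMonth(), entryDate.getUTCDate()
  ));
  const daysSinceMonday = (d.getUTCDay() + 6) % 7; // Monday → 0, Sunday → 6
  d.setUTCDate(d.getUTCDate() - daysSinceMonday);
  return d;
}

// 14-day edit window (enforced via RLS), mirrored for early UI feedback.
function isWithinEditWindow(entryDate: Date, today: Date): boolean {
  const ageDays = (today.getTime() - entryDate.getTime()) / 86_400_000;
  return ageDays <= 14;
}
```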
clock_sessions
Active work tracking. Auto-closes after 16 hours. is_active, clock_in_time, clock_out_time.
user_availability
Weekly capacity. Unique on (user_id, week_start_date). Default 40 hours.
task_week_allocations
Planned hours per task/week. Unique on (task_id, week_start_date, assigned_user_id).
Workflow System Tables (8 tables)
workflow_nodes
Individual steps in a workflow. 8 node types with type-specific settings in JSONB.
| Column | Type | Description |
|---|---|---|
| workflow_template_id | UUID FK | Parent template (CASCADE) |
| node_type | TEXT | start, department, role, approval, form, conditional, sync, end |
| entity_id | UUID | Optional reference (dept, role, form) |
| label | TEXT | Display label |
| settings | JSONB | Type-specific configuration |
| form_template_id | UUID FK | For form nodes |
| position_x, position_y | FLOAT | Canvas position |
settings JSONB by node_type:
// approval node
{ "required_approvers": 2, "timeout_hours": 48, "escalation_role_id": "..." }
// conditional node
{ "condition_field": "project.priority", "operator": "eq", "value": "high" }
// form node
{ "auto_assign_submitter": true, "require_all_fields": true }
workflow_templates
Reusable workflow definitions. name, is_active, created_by.
workflow_connections
Edges between nodes: from_node_id, to_node_id, condition (JSONB).
workflow_instances
Running workflow execution. Links to project/task. status: active/completed/cancelled.
workflow_history
Audit trail. transition_type: normal, out_of_order, auto.
workflow_active_steps
Current active nodes. branch_id enables parallel execution.
form_templates / form_responses
Dynamic forms. schema (JSONB) defines fields. response_data stores submissions.
Client Portal & Supporting Tables (7 tables)
deliverables
Project deliverables. status: draft, pending_review, approved, rejected. version tracking.
client_portal_invitations
Invite clients to portal. status: pending, accepted, expired. expires_at for cleanup.
client_feedback
Client ratings (1-5) and feedback text per project.
notifications
User notifications. title, message, read (boolean), link.
newsletters
Company newsletters. is_published, published_at.
milestones
Gantt chart milestones. name, date, color.
Database Functions (7 functions)
Functions use SECURITY DEFINER to bypass RLS and prevent circular dependency issues.
user_has_permission(permission_name TEXT) → BOOLEAN
Checks if current user (auth.uid()) has a specific permission via their roles. Uses SECURITY DEFINER to avoid RLS circular dependency.
-- Usage in RLS policy
CREATE POLICY "projects_select" ON projects
FOR SELECT USING (
user_is_superadmin()
OR user_has_permission('view_all_projects')
OR (user_has_permission('view_projects') AND ...)
);
user_is_superadmin() → BOOLEAN
Two-stage check: first checks is_superadmin flag on user_profiles (fast path), then falls back to checking for Superadmin role (legacy support).
user_can_view_workflow(workflow_instance_id UUID) → BOOLEAN
Helper for workflow RLS policies. Prevents nested RLS performance issues by using SECURITY DEFINER to bypass nested queries.
user_can_manage_workflow(workflow_instance_id UUID) → BOOLEAN
Checks if user can manage (update, transition) a workflow instance. Requires execute_workflows permission and project assignment.
get_week_start_date(input_date DATE) → DATE
Returns Monday of the week for any date (ISO 8601 standard). Used throughout for consistent week calculations in capacity planning.
-- Returns: 2024-01-15 (Monday) for any date in that week
SELECT get_week_start_date('2024-01-18'); -- Thursday
-- Result: 2024-01-15
auto_clock_out_stale_sessions() → VOID
Called by cron job to close clock sessions active for 16+ hours. Prevents overnight sessions from corrupting data. Sets is_auto_clock_out = true, clock_out_time = clock_in + 16 hours.
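The cutoff arithmetic behind this rule can be sketched as follows (helper names are illustrative; the actual logic runs in SQL):

```typescript
const AUTO_CLOSE_HOURS = 16;
const MS_PER_HOUR = 3_600_000;

// A session is stale once it has been open for 16+ hours.
function isStale(clockIn: Date, now: Date): boolean {
  return now.getTime() - clockIn.getTime() >= AUTO_CLOSE_HOURS * MS_PER_HOUR;
}

// Auto-closed sessions get clock_out_time = clock_in + 16 hours,
// not the time the cron job happened to run.
function autoClockOutTime(clockIn: Date): Date {
  return new Date(clockIn.getTime() + AUTO_CLOSE_HOURS * MS_PER_HOUR);
}
```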
is_superadmin(user_id UUID) → BOOLEAN
Backwards compatibility wrapper that takes user_id parameter instead of using auth.uid().
Database View
weekly_capacity_summary
Aggregates user availability, task allocations, and actual hours logged per week. Powers capacity planning dashboards.
CREATE VIEW weekly_capacity_summary AS
SELECT
ua.user_id,
ua.week_start_date,
ua.available_hours,
COALESCE(SUM(twa.allocated_hours), 0) AS allocated_hours,
COALESCE(SUM(te.hours_logged), 0) AS actual_hours,
-- Utilization rate: (actual / available) * 100
CASE WHEN ua.available_hours > 0
THEN (COALESCE(SUM(te.hours_logged), 0) / ua.available_hours * 100)
ELSE 0 END AS utilization_rate,
-- Remaining: available - actual
ua.available_hours - COALESCE(SUM(te.hours_logged), 0) AS remaining_capacity,
-- Over-allocated flag
COALESCE(SUM(twa.allocated_hours), 0) > ua.available_hours AS is_over_allocated,
-- Active task count
COUNT(DISTINCT twa.task_id) AS active_task_count
FROM user_availability ua
LEFT JOIN task_week_allocations twa
ON twa.assigned_user_id = ua.user_id
AND twa.week_start_date = ua.week_start_date
LEFT JOIN time_entries te
ON te.user_id = ua.user_id
AND te.week_start_date = ua.week_start_date
GROUP BY ua.user_id, ua.week_start_date, ua.available_hours;
RLS Policy Patterns
Each table gets exactly 4 policies (SELECT, INSERT, UPDATE, DELETE). Policies use SECURITY DEFINER functions to avoid circular dependencies.
Pattern 1: User owns resource
USING (user_id = auth.uid())
Pattern 2: Superadmin bypass
USING (user_is_superadmin()
OR user_has_permission('...'))
Pattern 3: Context + permission
USING (
user_has_permission('view_projects')
AND EXISTS (
SELECT 1 FROM project_assignments pa
WHERE pa.project_id = projects.id
AND pa.user_id = auth.uid()
)
)
Pattern 4: Time-limited edits
USING (
user_id = auth.uid()
AND entry_date >=
CURRENT_DATE - INTERVAL '14 days'
)
Triggers
on_auth_user_created
Creates user_profiles entry when new auth.users record is inserted.
update_*_updated_at
17 triggers that auto-set updated_at = NOW() on row modifications.
Cascade Delete Chains
auth.users → user_profiles → user_roles, account_members, project_assignments, time_entries
accounts → projects → tasks → time_entries, task_week_allocations, task_dependencies
workflow_templates → workflow_nodes → workflow_connections
workflow_instances → workflow_history, workflow_active_steps
Permissions & RBAC
MovaLab implements a 3-layer hybrid RBAC (Role-Based Access Control) system. With ~40 granular permissions, the system provides fine-grained control over what users can do and what resources they can access. Permissions are stored in the roles table as JSONB and evaluated through the permission-checker service.
Key Design Principle: Never hardcode role names. Roles are dynamically created by admins, and their names can change. Permissions are the contract - always use permission-based checks, not role name checks.
The 3-Layer Hybrid Permission System
Each permission check evaluates three layers in sequence. Understanding this architecture is critical for building features correctly.
┌─────────────────────────────────────────────────────────────────────────┐
│                  HYBRID PERMISSION SYSTEM (3 LAYERS)                    │
├─────────────────────────────────────────────────────────────────────────┤
│                                                                         │
│  LAYER 1: SUPERADMIN BYPASS                                             │
│  ─────────────────────────                                              │
│  Check:  userProfile.is_superadmin === true                             │
│          OR user has "Superadmin" system role                           │
│  Result: ALLOW (skip all other checks)                                  │
│                                                                         │
│  LAYER 2: BASE PERMISSION CHECK                                         │
│  ────────────────────────────                                           │
│  Check:  Does any of the user's roles have this permission set to TRUE? │
│  Logic:  OR across all roles (if ANY role has it, permission granted)   │
│  Result: If FALSE → DENY immediately                                    │
│          If TRUE  → Continue to Layer 3                                 │
│                                                                         │
│  LAYER 3: CONTEXT-AWARE CHECK (for resource-specific permissions)       │
│  ────────────────────────────────────────────────────────────────       │
│  a) Check OVERRIDE permissions first:                                   │
│     - VIEW_ALL_PROJECTS bypasses project assignment check               │
│     - MANAGE_ALL_PROJECTS bypasses project assignment check             │
│     - VIEW_ALL_DEPARTMENTS bypasses department membership check         │
│     - EXECUTE_ANY_WORKFLOW bypasses workflow node assignment            │
│     If user has override → ALLOW                                        │
│                                                                         │
│  b) Check CONTEXT assignment:                                           │
│     - projectId: Is user assigned to this project?                      │
│     - accountId: Is user an account member or manager?                  │
│     - departmentId: Does user have a role in this department?           │
│     - workflowInstanceId: Is user assigned to current workflow node?    │
│     If context matches → ALLOW                                          │
│     If context doesn't match → DENY                                     │
│                                                                         │
└─────────────────────────────────────────────────────────────────────────┘
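The three layers condense into a sketch like the following. The types and function are illustrative, not the actual permission-checker internals; assume the caller has already resolved the user's roles, overrides, and context assignment.

```typescript
interface CheckInput {
  isSuperadmin: boolean;
  rolePermissions: Record<string, boolean>[]; // one permission map per role
  hasOverride: boolean;      // e.g. VIEW_ALL_PROJECTS when checking VIEW_PROJECTS
  contextMatches: boolean;   // e.g. user is assigned to the project in question
  needsContext: boolean;     // is this a resource-specific permission?
}

function evaluate(permission: string, input: CheckInput): boolean {
  // Layer 1: superadmin bypass
  if (input.isSuperadmin) return true;
  // Layer 2: OR across all roles — any role granting it is enough
  const hasBase = input.rolePermissions.some((p) => p[permission] === true);
  if (!hasBase) return false;
  // Layer 3: overrides first, then context assignment
  if (!input.needsContext) return true;
  if (input.hasOverride) return true;
  return input.contextMatches;
}
```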
Complete Permission Reference (~40 Permissions)
All permissions are defined in lib/permissions.ts as an enum. Permissions marked with ⚡ are OVERRIDE permissions that bypass context checks.
Role Management
MANAGE_USER_ROLESCreate/edit/delete roles, assign/remove users from roles, approve registrations. Full role and user-role management.
MANAGE_USERSFull user management - view, edit, delete users. Required for admin dashboard access.
Department Management
MANAGE_DEPARTMENTSCreate, edit, and delete departments.
MANAGE_USERS_IN_DEPARTMENTSAssign and remove users from departments.
VIEW_DEPARTMENTSView departments the user belongs to (context-aware).
VIEW_ALL_DEPARTMENTS ⚡OVERRIDE: View all departments organization-wide, regardless of membership.
Account Management
MANAGE_ACCOUNTSCreate, edit, and delete client accounts.
MANAGE_USERS_IN_ACCOUNTSAssign and remove users from accounts (account_members table).
VIEW_ACCOUNTSView accounts user has access to (via membership or project assignment).
VIEW_ALL_ACCOUNTS ⚡OVERRIDE: View all accounts organization-wide.
Project Management
MANAGE_PROJECTSCreate, edit, and delete projects in assigned accounts. Requires account context.
VIEW_PROJECTSView projects user is assigned to (via project_assignments or task assignment).
VIEW_ALL_PROJECTS ⚡OVERRIDE: View all projects outside of assigned ones.
MANAGE_ALL_PROJECTS ⚡OVERRIDE: Create, edit, and delete any project regardless of assignment.
Workflow Management
MANAGE_WORKFLOWSCreate, edit, and delete workflow templates.
EXECUTE_WORKFLOWSHand off work in workflows. Context-aware: checks node assignment.
EXECUTE_ANY_WORKFLOW ⚡OVERRIDE: Execute any workflow without node assignment check. Unblocks stuck workflows.
SKIP_WORKFLOW_NODESHand off work out-of-order. Admin-only for innovation tracking.
MANAGE_ALL_WORKFLOWS ⚡OVERRIDE: Manage any workflow organization-wide.
Capacity & Time Tracking
EDIT_OWN_AVAILABILITYSet and manage personal weekly work availability.
MANAGE_TIMELog and edit own time entries.
VIEW_TIME_ENTRIESView time entries (context-aware: own or team based on assignment).
EDIT_TIME_ENTRIESEdit time entries (context-aware: own or team).
VIEW_ALL_TIME_ENTRIES ⚡OVERRIDE: View all time entries organization-wide.
VIEW_TEAM_CAPACITYView capacity metrics for team/department members.
VIEW_ALL_CAPACITY ⚡OVERRIDE: View organization-wide capacity metrics.
Analytics (Tiered Hierarchy)
Analytics permissions follow a 3-tier hierarchy: Department → Account → Organization. Higher tiers include lower tier access.
VIEW_ALL_DEPARTMENT_ANALYTICS ⚡View analytics for entire department (all projects and users in department).
VIEW_ALL_ACCOUNT_ANALYTICS ⚡View analytics for entire account. Includes VIEW_ALL_DEPARTMENT_ANALYTICS.
VIEW_ALL_ANALYTICS ⚡View organization-wide analytics. Includes all lower-tier analytics permissions.
Other Permissions
MANAGE_UPDATESCreate/edit/delete project status updates.
VIEW_UPDATESView updates (context-aware).
MANAGE_ISSUESCreate/edit/delete project issues.
VIEW_ISSUESView project issues and blockers.
MANAGE_NEWSLETTERSCreate/edit/delete company newsletters.
VIEW_NEWSLETTERSView newsletters on welcome page.
MANAGE_CLIENT_INVITESSend client invitations and view feedback.
Implementing Permission Checks
Use the hasPermission function from lib/rbac.ts. Always pass the authenticated Supabase client on server-side for proper RLS context.
import { hasPermission, isSuperadmin } from '@/lib/rbac';
import { Permission } from '@/lib/permissions';
// ============================================
// EXAMPLE 1: Check permission without context
// ============================================
const canManageAccounts = await hasPermission(
userProfile,
Permission.MANAGE_ACCOUNTS,
undefined, // No context needed
supabaseClient // REQUIRED on server-side, optional on client
);
// ============================================
// EXAMPLE 2: Check permission WITH context
// ============================================
const canEditProject = await hasPermission(
userProfile,
Permission.MANAGE_PROJECTS,
{ projectId: project.id, accountId: project.account_id }, // Context
supabaseClient
);
// This checks:
// 1. Is user superadmin? → ALLOW
// 2. Has MANAGE_ALL_PROJECTS? → ALLOW (override)
// 3. Has MANAGE_PROJECTS AND is assigned to this project? → ALLOW
// 4. Otherwise → DENY
// ============================================
// EXAMPLE 3: Check multiple permissions (ANY)
// ============================================
import { hasAnyPermission } from '@/lib/rbac';
const canAccessAdmin = await hasAnyPermission(
userProfile,
[Permission.MANAGE_USERS, Permission.MANAGE_USER_ROLES, Permission.MANAGE_ACCOUNTS],
undefined,
supabaseClient
);
// ============================================
// EXAMPLE 4: Check multiple permissions (ALL)
// ============================================
import { hasAllPermissions } from '@/lib/rbac';
const canFullyManageWorkflows = await hasAllPermissions(
userProfile,
[Permission.MANAGE_WORKFLOWS, Permission.EXECUTE_WORKFLOWS],
undefined,
supabaseClient
);
// ============================================
// EXAMPLE 5: Superadmin check (use sparingly)
// ============================================
if (isSuperadmin(userProfile)) {
// Bypass all permission checks
// Only use for truly admin-only features
}
Permission Context (PermissionContext)
| Context Field | Type | Used For | Validation Method |
|---|---|---|---|
| projectId | UUID | Project-specific permissions | isAssignedToProject() - checks project_assignments, tasks, created_by |
| accountId | UUID | Account-specific permissions | hasAccountAccess() - checks account_members, account_manager_id, projects |
| departmentId | UUID | Department-specific permissions | managesDepartment() - checks if user has role in department |
| workflowInstanceId | UUID | Workflow execution permissions | isAssignedToWorkflowNode() - checks workflow_active_steps, node entity_id |
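Based on the table above, the context object presumably has this shape (a sketch; all fields optional, since callers pass only the context relevant to the permission being checked):

```typescript
// Hypothetical shape of PermissionContext, inferred from the table above.
interface PermissionContext {
  projectId?: string;          // validated via isAssignedToProject()
  accountId?: string;          // validated via hasAccountAccess()
  departmentId?: string;       // validated via managesDepartment()
  workflowInstanceId?: string; // validated via isAssignedToWorkflowNode()
}

const ctx: PermissionContext = { projectId: 'project-uuid', accountId: 'account-uuid' };
```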
Override Permission Mappings
Override permissions bypass context checks. When checking a base permission, the system also checks if the user has the corresponding override.
// Override permission mappings in permission-checker.ts
const overridePermissions = {
// Projects
VIEW_PROJECTS: → VIEW_ALL_PROJECTS
MANAGE_PROJECTS: → MANAGE_ALL_PROJECTS
// Departments
VIEW_DEPARTMENTS: → VIEW_ALL_DEPARTMENTS
MANAGE_DEPARTMENTS: → VIEW_ALL_DEPARTMENTS
// Accounts
VIEW_ACCOUNTS: → VIEW_ALL_ACCOUNTS
MANAGE_ACCOUNTS: → VIEW_ALL_ACCOUNTS
// Updates
VIEW_UPDATES: → VIEW_ALL_UPDATES
MANAGE_UPDATES: → VIEW_ALL_UPDATES
// Time Tracking
VIEW_TIME_ENTRIES: → VIEW_ALL_TIME_ENTRIES
EDIT_TIME_ENTRIES: → VIEW_ALL_TIME_ENTRIES
MANAGE_TIME: → VIEW_ALL_TIME_ENTRIES
// Workflows
MANAGE_WORKFLOWS: → MANAGE_ALL_WORKFLOWS
EXECUTE_WORKFLOWS: → [EXECUTE_ANY_WORKFLOW, MANAGE_ALL_WORKFLOWS]
// Analytics (tiered)
VIEW_ALL_DEPARTMENT_ANALYTICS: → [VIEW_ALL_ACCOUNT_ANALYTICS, VIEW_ALL_ANALYTICS]
VIEW_ALL_ACCOUNT_ANALYTICS: → VIEW_ALL_ANALYTICS
// Capacity
VIEW_TEAM_CAPACITY: → VIEW_ALL_CAPACITY
}
Permission Caching
Cache Settings
- TTL: 5 minutes
- Key format: userId:permission:context
- Storage: In-memory Map
- Auto-cleanup: Before each check
Cache Invalidation
- Automatic: Expires after 5 min
- clearPermissionCache(): Clears all
- clearPermissionCache(userId): Clears user
- When to clear: Role changes, permission updates
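A minimal sketch of such a cache: in-memory Map, 5-minute default TTL, keyed on userId:permission:context. The class and method names are illustrative, not the actual implementation.

```typescript
class PermissionCache {
  private entries = new Map<string, { value: boolean; expires: number }>();
  constructor(private ttlMs = 5 * 60 * 1000) {} // 5-minute default TTL

  private key(userId: string, permission: string, context = ''): string {
    return `${userId}:${permission}:${context}`;
  }

  get(userId: string, permission: string, context = ''): boolean | undefined {
    const k = this.key(userId, permission, context);
    const entry = this.entries.get(k);
    if (!entry) return undefined;
    if (Date.now() > entry.expires) { // expired: drop and report a miss
      this.entries.delete(k);
      return undefined;
    }
    return entry.value;
  }

  set(userId: string, permission: string, value: boolean, context = ''): void {
    this.entries.set(this.key(userId, permission, context), {
      value,
      expires: Date.now() + this.ttlMs,
    });
  }

  // Analogue of clearPermissionCache() / clearPermissionCache(userId)
  clear(userId?: string): void {
    if (!userId) { this.entries.clear(); return; }
    for (const k of this.entries.keys()) {
      if (k.startsWith(`${userId}:`)) this.entries.delete(k);
    }
  }
}
```

Clearing on role changes and permission updates (as the list above says) keeps the worst-case staleness at the TTL rather than relying on expiry alone.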
Critical Implementation Rules
Always pass supabaseClient on server-side
Why: Without it, RLS policies see unauthenticated requests and permission checks fail incorrectly
hasPermission(userProfile, Permission.VIEW_PROJECTS, { projectId }, supabase)
Never hardcode role names in permission checks
Why: Role names are user-configurable and can change. Permissions are the contract.
Use hasPermission() not role.name === 'Executive'
Always include context for resource-specific permissions
Why: Without context, only base permission is checked - user may not have access to that specific resource
hasPermission(user, MANAGE_PROJECTS, { projectId, accountId }, supabase)
Use OR logic across roles
Why: If ANY role has permission=true, user has the permission. A false in one role doesn't override true in another.
User has Designer (view_projects: true) + Intern (view_projects: false) = CAN view projects
Check superadmin only for truly admin-only features
Why: Most features should use permission checks. Superadmin bypass is automatic in hasPermission().
Only use isSuperadmin() for system settings, not regular resource access
Debugging Permissions
The permission system includes detailed logging. Enable debug logging to trace permission evaluation.
// Permission check logs include:
{
permission: "MANAGE_PROJECTS",
userId: "uuid",
result: true,
reason: "override_permission" | "base_permission" | "context_match" | "superadmin",
duration: 12, // milliseconds
context: { projectId: "uuid", accountId: "uuid" },
cached: true
}
// Common log reasons:
// "superadmin" - User is superadmin (bypass)
// "no_base_permission" - User's roles don't have the permission
// "override_permission" - User has VIEW_ALL_* or similar override
// "context_match" - Base permission + assigned to resource
// "no_context_match" - Has permission but not assigned to resource
Setting Up Roles
Roles define permission sets and organizational hierarchy. Stored in the roles table with JSONB permissions.
Default Role Hierarchy
| Level | Role | Description | Key Permissions |
|---|---|---|---|
| 100 | Superadmin | Full system access | All permissions (bypass) |
| 90 | Executive Director | Organization-wide | VIEW_ALL_*, analytics, capacity |
| 80 | Account Director | Multi-account | MANAGE_ACCOUNTS, VIEW_ALL_PROJECTS |
| 70 | Account Manager | Account management | MANAGE_ACCOUNTS, MANAGE_PROJECTS |
| 60 | Project Manager | Project-level | MANAGE_PROJECTS, assignments |
| 50 | Senior | Full project access | VIEW_PROJECTS, MANAGE_DELIVERABLES |
| 40 | Mid-level | Standard access | VIEW_PROJECTS, time tracking |
| 30 | Junior | Limited access | VIEW_PROJECTS (assigned only) |
| 20 | Contributor | Part-time | Specific task access |
| 10 | Client | Portal only | Client portal permissions |
Creating a Role via SQL
INSERT INTO roles (name, hierarchy_level, permissions, department_id, description) VALUES (
'Content Strategist',
45,
'{"view_projects": true, "edit_own_tasks": true, "view_accounts": true}'::jsonb,
(SELECT id FROM departments WHERE name = 'Marketing'),
'Creates and manages content strategy'
);
Role Table Schema
| Column | Type | Description |
|---|---|---|
| name | TEXT UNIQUE | Role name (e.g., 'Project Manager') |
| permissions | JSONB | Permission map: {permission: boolean} |
| hierarchy_level | INTEGER | 0-100 level for hierarchy |
| department_id | UUID | Optional department scope |
| reporting_role_id | UUID | Parent role in org chart |
| is_system_role | BOOLEAN | True for Superadmin, Unassigned |
| display_order | INTEGER | UI ordering |
| chart_position_x/y | FLOAT | Org chart visual position |
Org Chart Integration
The visual org chart editor at /admin/roles allows drag-and-drop hierarchy editing. Changes are audited in role_hierarchy_audit.
Setting Up Accounts
Accounts represent clients. Each account can have multiple projects, team members, and custom Kanban configurations.
Account Fields
| Field | Type | Description |
|---|---|---|
| name | TEXT UNIQUE | Account/client name |
| description | TEXT | Account description |
| service_tier | ENUM | basic, premium, enterprise |
| status | ENUM | active, inactive, suspended |
| account_manager_id | UUID | Assigned account manager |
| primary_contact_email | TEXT | Client contact email |
| primary_contact_name | TEXT | Client contact name |
Service Tiers
Basic
- Standard support
- 5 projects
- Basic reporting
Premium
- Priority support
- Unlimited projects
- Advanced analytics
Enterprise
- Dedicated support
- Custom workflows
- White-label options
Account Isolation (RLS)
Row Level Security ensures users only see accounts they're members of or manage:
-- Users see accounts they're members of or manage
auth.uid() IN (
  SELECT user_id FROM account_members
  WHERE account_id = accounts.id
)
OR auth.uid() = account_manager_id
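The same rule can be mirrored application-side for UI gating, as a sketch (hypothetical helper and row types; RLS remains the source of truth):

```typescript
interface AccountRow { id: string; account_manager_id: string | null }
interface AccountMember { user_id: string; account_id: string }

// Mirrors the RLS rule above: member of the account, or its manager.
function canViewAccount(
  userId: string,
  account: AccountRow,
  members: AccountMember[]
): boolean {
  return account.account_manager_id === userId
    || members.some((m) => m.user_id === userId && m.account_id === account.id);
}
```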
Setting Up Departments
Departments group users by function. Roles can be scoped to departments. User department membership is dynamic, derived from project assignments.
Default Departments
Engineering
Design
Marketing
Operations
Sales
Customer Success
Dynamic Department Membership
Users belong to departments based on their active project work:
-- Get user's departments based on current project assignments
SELECT DISTINCT d.id, d.name
FROM departments d
JOIN roles r ON r.department_id = d.id
JOIN user_roles ur ON ur.role_id = r.id
JOIN project_assignments pa ON pa.user_id = ur.user_id
WHERE ur.user_id = $1
  AND pa.removed_at IS NULL;
Workflows
MovaLab's workflow engine enables visual process automation through a drag-and-drop builder. Workflows define how projects move through stages, who handles each stage, and what approvals or data collection are required. The system uses @xyflow/react for the visual builder and maintains workflow independence through snapshots.
Important: MovaLab enforces single-pathway workflows. Parallel execution is disabled. Each workflow follows one path at a time, with branching only through approval decisions or conditional routing. Sync nodes are deprecated.
Workflow Architecture
The workflow system consists of four database tables and two service layers that work together to manage workflow definitions, executions, and history.
┌─────────────────────────────────────────────────────────────────────────────┐
│                           WORKFLOW ARCHITECTURE                             │
├─────────────────────────────────────────────────────────────────────────────┤
│                                                                             │
│  ┌────────────────────┐   ┌─────────────────────┐   ┌────────────────────┐  │
│  │ workflow_templates │───▶│   workflow_nodes   │───▶│workflow_connections│ │
│  │   (definitions)    │   │   (steps/stages)    │   │   (transitions)    │  │
│  └────────────────────┘   └─────────────────────┘   └────────────────────┘  │
│            │                                                                │
│            ▼                                                                │
│  ┌────────────────────┐   ┌─────────────────────┐   ┌────────────────────┐  │
│  │ workflow_instances │───▶│  workflow_history  │───▶│ workflow_approvals│  │
│  │    (executions)    │   │  (transition log)   │   │    (decisions)     │  │
│  └────────────────────┘   └─────────────────────┘   └────────────────────┘  │
│            │                                                                │
│            ▼                                                                │
│  ┌────────────────────────────────────────────────────────────────────────┐ │
│  │                            SNAPSHOT SYSTEM                             │ │
│  │  started_snapshot:   Captured when workflow starts - workflow changes  │ │
│  │                      don't affect in-progress projects                 │ │
│  │  completed_snapshot: Captured at completion - preserves final state    │ │
│  └────────────────────────────────────────────────────────────────────────┘ │
│                                                                             │
│  SERVICE LAYERS:                                                            │
│  ├── workflow-service.ts: Template CRUD, node/connection management         │
│  └── workflow-execution-service.ts: Start, progress, complete workflows     │
│                                                                             │
└─────────────────────────────────────────────────────────────────────────────┘
Core Database Tables
workflow_templates
Stores workflow definitions. Each template can be active or inactive.
id, name, description, created_by, is_active, created_at, updated_at
workflow_nodes
Individual steps within a workflow. Each node has a type, position, and configuration.
id, workflow_template_id, node_type, entity_id, position_x, position_y, label, requires_form, form_template_id, settings (JSONB)
workflow_connections
Defines transitions between nodes. Can include conditions for branching.
id, workflow_template_id, from_node_id, to_node_id, condition (JSONB)
workflow_instances
Running workflow executions linked to projects. Stores snapshots for independence.
id, workflow_template_id, project_id, task_id, current_node_id, status, started_at, completed_at, started_snapshot (JSONB), completed_snapshot (JSONB)
Node Types (8 Types)
Each node type serves a specific purpose in the workflow. The node_type field uses the enum: 'start' | 'department' | 'role' | 'approval' | 'form' | 'conditional' | 'sync' | 'end'.
start
Entry Point
Every workflow begins at the Start node. There must be exactly one per workflow. It has no incoming connections (only outgoing). When a project starts a workflow, execution immediately moves to the next connected node.
Configuration:
No configuration needed. entity_id is null.
role
Primary Assignment
Assigns the project to users with a specific role. When the workflow reaches this node, all users with the specified role are added to project_assignments and can see the project. Department is auto-assigned based on the role's department_id.
Configuration (settings JSONB):
{
"roleId": "uuid", // Required: The role to assign
"roleName": "Designer" // Display name for UI
}
Authorization:
Only users with this role can advance the workflow from this node.
approval
Decision Gate
Requires a user decision before proceeding. Supports Approve, Reject, and optional Needs Changes decisions. Each decision can route to a different next node. Rejections auto-create project issues.
Configuration (settings JSONB):
{
"approverRoleId": "uuid", // Role that can approve
"approverRoleName": "PM", // Display name
"requiredApprovals": 1, // How many approvals needed
"allowFeedback": true, // Enable feedback field
"allowSendBack": true // Enable "Needs Changes" option
}
Decision Routing:
Connections can have condition.decision = "approved" or "rejected" for different paths.
form
Data Collection
Collects structured data from users via forms. Forms can be linked from form_templates or defined inline. Form responses are stored and can drive conditional routing.
Configuration (settings JSONB):
{
// Option 1: Link to form_templates
"formTemplateId": "uuid",
"formTemplateName": "Project Brief",
// Option 2: Inline form definition
"formFields": [
{ "id": "budget", "type": "number", "label": "Budget", "required": true },
{ "id": "deadline", "type": "date", "label": "Deadline" }
],
"formName": "Quick Brief",
"formDescription": "Enter project details",
"isDraftForm": false,
"allowAttachments": true
}
conditional
Smart Routing
Routes the workflow based on conditions. Can evaluate approval decisions, form values, or custom conditions. Only ONE path is taken (not parallel). Conditional nodes are "invisible" to users - the system auto-advances through them.
Configuration (settings JSONB):
{
"conditionType": "approval_decision" | "form_value" | "custom",
// For form_value conditions:
"sourceFormNodeId": "uuid", // Which form node to read from
"sourceFormFieldId": "budget", // Which field to evaluate
// Condition branches (displayed as colored handles)
"conditions": [
{ "label": "High Budget", "value": "high", "color": "#22c55e" },
{ "label": "Low Budget", "value": "low", "color": "#ef4444" }
]
}
Condition Evaluation Operators:
equals, contains, starts_with, ends_with, is_empty, is_not_empty, greater_than, less_than, greater_or_equal, less_or_equal, between, before, after, is_checked, is_not_checked
end
Completion
Marks the workflow as complete. When reached, the workflow_instance status becomes "completed", completed_at is set, and the project is marked complete via completeProject(). Multiple end nodes are allowed for different completion paths.
Behavior:
No outgoing connections allowed. Captures completed_snapshot for historical reference.
department, sync, client
Legacy/Deprecated
department: Legacy handoff node. Use 'role' instead - departments are auto-assigned.
sync: Deprecated - parallel workflows are disabled.
client: Reserved for client portal approvals (separate from internal approvals).
Workflow Execution Lifecycle
WORKFLOW EXECUTION LIFECYCLE

1. START WORKFLOW (startWorkflowForProject)
   - Validate template exists and is_active = true
   - Verify template has at least Start and End nodes
   - Create started_snapshot (nodes + connections frozen)
   - Create workflow_instance with status = 'active'
   - Link to project (projects.workflow_instance_id)
   - Create initial workflow_history entry
   - Create workflow_active_steps entry for first node
   - assignProjectToNode() - add users via project_assignments

2. PROGRESS WORKFLOW (progressWorkflow)
   - Authorization checks:
     - Superadmin bypass (isUserSuperadmin)
     - Project assignment check (isUserAssignedToProject)
     - Role/department validation based on node.entity_id
   - Determine next node based on:
     - Normal: findNextNode() - follows connection
     - Approval: findDecisionBasedNextNode() - uses decision
     - Conditional: findConditionalNextNode() - evaluates conditions
   - If approval node: record decision in workflow_approvals
   - Update workflow_instance.current_node_id
   - Auto-advance: if landed on a conditional node, immediately route through it
   - Add user to project_contributors
   - Create workflow_history entry
   - If rejected: auto-create project_issue
   - Always: create project_update for timeline visibility
   - assignProjectToNode() for next node (removes previous assignments)

3. COMPLETE WORKFLOW
   - Reached 'end' node OR status = 'completed'
   - Capture completed_snapshot (preserves final state)
   - Set completed_at timestamp
   - completeProject() - marks project as complete

4. CANCEL WORKFLOW
   - Set status = 'cancelled'
   - Project remains but workflow is stopped
Workflow Authorization Model
Progressing a workflow step requires passing multiple authorization checks. This ensures only the correct users can advance projects.
// Authorization flow in progressWorkflow()
// 1. SUPERADMIN BYPASS
const isSuperadmin = await isUserSuperadmin(supabase, currentUserId);
if (isSuperadmin) { /* Skip all other checks */ }
// 2. PROJECT ASSIGNMENT CHECK (for non-superadmins)
// User must be in project_assignments (not just created_by or assigned_user_id)
const isAssigned = await isUserAssignedToProject(supabase, currentUserId, projectId);
if (!isAssigned) {
return { error: 'You must be assigned to this project to advance the workflow' };
}
// 3. NODE-SPECIFIC VALIDATION
const entityId = currentNode.entity_id;
const nodeType = currentNode.node_type;
if (nodeType === 'role' || nodeType === 'approval') {
// For role/approval nodes, entity_id is a role_id
const hasRequiredRole = await userHasRole(supabase, currentUserId, entityId);
if (!hasRequiredRole) {
return { error: 'Only users with the "[roleName]" role can advance this step' };
}
}
if (nodeType === 'department') {
// For department nodes, check if user has any role in this department
const { data: userDeptRoles } = await supabase
.from('user_roles')
.select('roles!inner(department_id)')
.eq('user_id', currentUserId)
.eq('roles.department_id', entityId);
if (!userDeptRoles?.length) {
return { error: 'Only users in the "[deptName]" department can advance this step' };
}
}
Connection Conditions (JSONB)
Workflow connections (transitions) can have conditions that determine when they are followed. Conditions are stored in the condition JSONB field.
// workflow_connections.condition JSONB structure
// APPROVAL-BASED ROUTING (from approval nodes)
{
"decision": "approved", // or "rejected"
"conditionValue": "approved" // Alternative field (both checked)
}
// FORM-BASED ROUTING (from conditional nodes)
{
"sourceFormFieldId": "budget", // Field ID from form
"conditionType": "greater_than", // Evaluation operator
"value": "10000", // Comparison value
"value2": "50000" // For 'between' operator
}
// LABELED BRANCHES (for conditional node UI)
{
"label": "High Budget",
"color": "#22c55e",
"conditionValue": "high"
}
// DEFAULT PATH (no condition - fallback route)
null // or {}
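Taken together, these condition shapes drive routing. The sketch below is illustrative only (the helper names are hypothetical; the real logic lives in findDecisionBasedNextNode() and findConditionalNextNode(), and a subset of the documented operators is shown): an approval decision selects a matching connection, a conditional node evaluates an operator against a stored form value, and both fall back to the unconditioned default path.

```typescript
interface Condition {
  decision?: string;
  conditionValue?: string;
  sourceFormFieldId?: string;
  conditionType?: string;
  value?: string;
  value2?: string;
}
interface Connection {
  to_node_id: string;
  condition: Condition | null;
}

// Approval routing: match the decision (both fields are checked),
// else take the default (unconditioned) path.
function routeApproval(
  conns: Connection[],
  decision: "approved" | "rejected"
): Connection | undefined {
  return (
    conns.find(
      (c) =>
        c.condition?.decision === decision ||
        c.condition?.conditionValue === decision
    ) ??
    conns.find((c) => !c.condition || Object.keys(c.condition).length === 0)
  );
}

// Form routing: evaluate the operator against the stored form value.
function matchesForm(
  cond: Condition,
  formValues: Record<string, unknown>
): boolean {
  const raw = formValues[cond.sourceFormFieldId ?? ""];
  const n = Number(raw);
  switch (cond.conditionType) {
    case "equals":           return String(raw) === cond.value;
    case "greater_than":     return n > Number(cond.value);
    case "less_than":        return n < Number(cond.value);
    case "greater_or_equal": return n >= Number(cond.value);
    case "less_or_equal":    return n <= Number(cond.value);
    case "between":          return n >= Number(cond.value) && n <= Number(cond.value2);
    case "is_empty":         return raw == null || String(raw) === "";
    case "is_not_empty":     return raw != null && String(raw) !== "";
    default:                 return false;
  }
}

function routeConditional(
  conns: Connection[],
  formValues: Record<string, unknown>
): Connection | undefined {
  return (
    conns.find((c) => c.condition != null && matchesForm(c.condition, formValues)) ??
    conns.find((c) => !c.condition || Object.keys(c.condition).length === 0) // default path
  );
}
```

Note how the default path (a `null` or empty condition) acts as the fallback in both cases; this is why a conditional node with no default connection can leave a workflow stuck.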
Workflow Snapshots
Snapshots ensure that changes to workflow templates don't affect in-progress projects. When a workflow starts, the entire template is frozen into started_snapshot.
started_snapshot
Captured when workflow starts. Contains complete node and connection definitions.
{
"nodes": [...],
"connections": [...],
"template_name": "Production",
"captured_at": "2025-01-15T..."
}
completed_snapshot
Captured when workflow completes. Includes history and user assignments.
{
"nodes": [...],
"connections": [...],
"history": [...],
"nodeAssignments": {
"node_1": { "userId": "...", "userName": "..." }
}
}
Workflow History & Audit Trail
Every workflow transition is recorded in workflow_history for complete audit trail. This enables timeline views, debugging, and compliance reporting.
-- workflow_history table structure
CREATE TABLE workflow_history (
  id UUID PRIMARY KEY,
  workflow_instance_id UUID REFERENCES workflow_instances(id),
  from_node_id UUID,                   -- Previous node (null if start)
  to_node_id UUID NOT NULL,            -- Node transitioned to
  handed_off_by UUID,                  -- User who triggered transition
  handed_off_to UUID,                  -- User assigned at new node
  handed_off_at TIMESTAMPTZ DEFAULT NOW(),
  out_of_order BOOLEAN DEFAULT FALSE,  -- Was this a skip?
  form_response_id UUID,               -- Link to form data if applicable
  notes TEXT,                          -- JSON for inline form data
  branch_id TEXT DEFAULT 'main',       -- For parallel tracking (legacy)
  approval_decision TEXT,              -- 'approved' or 'rejected'
  approval_feedback TEXT,              -- Feedback message
  project_update_id UUID,              -- Link to project_updates entry
  project_issue_id UUID                -- Link to project_issues (if rejected)
);

-- Indexes for efficient querying
CREATE INDEX idx_workflow_history_instance ON workflow_history(workflow_instance_id);
CREATE INDEX idx_workflow_history_handed_off_at ON workflow_history(handed_off_at DESC);
Workflow Validation Rules
Before a workflow can be activated, it undergoes validation to ensure proper structure. Validation is performed by workflow-validation.ts.
NO_START
Workflow must have exactly one Start node
MULTIPLE_STARTS
Only one Start node allowed per workflow
NO_END
Warning: a workflow without an End node may never terminate
SYNC_NOT_ALLOWED
Sync nodes are deprecated (parallel disabled)
PARALLEL_NOT_ALLOWED
Non-branching nodes can only have one outgoing edge
ORPHANED_NODE
Warning: Node has no connections
CYCLE_DETECTED
Cycles only allowed via rejection paths
APPROVAL_NO_EDGES
Approval nodes need at least one outgoing path
ROLE_NO_USERS
Role nodes must have at least one user assigned
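The structural checks above can be sketched as follows. This is a simplified illustration, not the contents of workflow-validation.ts, which also distinguishes errors from warnings and covers the cycle and user-assignment rules:

```typescript
interface WfNode {
  id: string;
  node_type: string; // "start" | "end" | "role" | "approval" | "conditional" | "sync" | ...
}
interface WfEdge {
  from_node_id: string;
  to_node_id: string;
}

// Returns the rule codes (from the table above) that the graph violates.
function validateWorkflow(nodes: WfNode[], edges: WfEdge[]): string[] {
  const issues: string[] = [];
  const starts = nodes.filter((n) => n.node_type === "start").length;
  if (starts === 0) issues.push("NO_START");
  if (starts > 1) issues.push("MULTIPLE_STARTS");
  if (!nodes.some((n) => n.node_type === "end")) issues.push("NO_END");
  if (nodes.some((n) => n.node_type === "sync")) issues.push("SYNC_NOT_ALLOWED");
  for (const n of nodes) {
    const touched = edges.some(
      (e) => e.from_node_id === n.id || e.to_node_id === n.id
    );
    if (!touched) issues.push("ORPHANED_NODE");
    const outgoing = edges.filter((e) => e.from_node_id === n.id).length;
    if (n.node_type === "approval" && outgoing === 0) issues.push("APPROVAL_NO_EDGES");
  }
  return issues;
}
```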
Example Workflows
1. Simple Approval Flow
Basic workflow: assign to designer, get PM approval, complete.
Start → Designer (role) → PM Approval → End
↓
(if rejected)
↓
Designer (loops back)
2. Multi-Stage Production
Video production with multiple stages and client approval.
Start → Brief Form → Scriptwriter → Script Approval
↓ (approved)
Video Editor → Editor Approval
↓ (approved)
Motion Graphics → Final Review → Client Approval → End
3. Budget-Based Routing
Different paths based on project budget collected via form.
Start → Budget Form → Conditional (form_value: budget)
↓ (budget > 10000) ↓ (budget <= 10000)
Senior Designer Junior Designer
↓ ↓
Director Approval PM Approval
↓ ↓
└──────► End ◄────────┘
Workflow API Endpoints
| Endpoint | Method | Description |
|---|---|---|
| /api/workflows/instances/start | POST | Start workflow for a project |
| /api/workflows/instances/[id] | GET | Get workflow instance details |
| /api/workflows/instances/[id]/handoff | POST | Progress to next node |
| /api/workflows/instances/[id]/next-nodes | GET | Get available next nodes |
| /api/workflows/instances/[id]/history | GET | Get transition history |
| /api/workflows/instances/[id]/active-steps | GET | Get current active steps |
| /api/workflows/my-pipeline | GET | Projects in user's workflow queue |
| /api/workflows/my-approvals | GET | Pending approvals for user |
| /api/workflows/progress | GET | Workflow progress analytics |
| /api/admin/workflows/templates | GET/POST | List/create templates |
| /api/admin/workflows/templates/[id] | GET/PUT/DELETE | Manage template |
| /api/admin/workflows/templates/[id]/nodes | GET/POST | Manage nodes |
| /api/admin/workflows/templates/[id]/connections | GET/POST | Manage connections |
Common Workflow Issues & Solutions
"Workflow has no nodes configured"
Cause: Template exists but has no nodes added
Fix: Open workflow editor and add at least Start and End nodes
"Workflow is not active"
Cause: Template is_active = false
Fix: Toggle the Active switch in workflow editor settings
"No users have the [Role] role"
Cause: Role node references a role with zero user_roles entries
Fix: Assign at least one user to the role in Admin → Roles
"You must be assigned to this project"
Cause: User not in project_assignments for current node
Fix: Workflow should auto-assign; check previous node configuration
"Only users with [Role] can advance"
Cause: Current user lacks the role specified in node.entity_id
Fix: Assign user to the required role or use superadmin
Workflow stuck at conditional node
Cause: No condition matched and no default path
Fix: Add a default connection (no condition) from the conditional node
Time Tracking
MovaLab's time tracking system combines clock in/out sessions with detailed time entries. The system is designed to be forgiving (auto clock-out after 16 hours) while providing accurate data for capacity planning and billing.
Time Tracking Architecture
TIME TRACKING SYSTEM

clock_sessions (clock in/out) ──▶ time_entries (hours per task) ──▶ weekly_capacity_summary (view)

AUTO CLOCK-OUT SYSTEM
- Runs before each time entry creation
- Closes sessions older than 16 hours
- Sets is_auto_clock_out = true for tracking
- Clock out time = clock in + 16 hours

KEY RELATIONSHIPS:
time_entries.task_id → tasks.id
time_entries.project_id → projects.id
time_entries.user_id → user_profiles.id
time_entries.clock_session_id → clock_sessions.id (optional)
Clock Sessions
Clock sessions track when users are actively working. Users clock in to start a session, and clock out when done. Sessions are automatically closed after 16 hours to prevent forgotten clocks from accumulating.
| Column | Type | Description |
|---|---|---|
| id | UUID PRIMARY KEY | Unique session identifier |
| user_id | UUID NOT NULL | User who clocked in |
| clock_in_time | TIMESTAMPTZ NOT NULL | When user clocked in |
| clock_out_time | TIMESTAMPTZ | When user clocked out (null if active) |
| is_active | BOOLEAN DEFAULT true | True if session is ongoing |
| is_auto_clock_out | BOOLEAN DEFAULT false | True if system auto-closed after 16hrs |
| notes | TEXT | Optional notes for the session |
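Only one active session per user is allowed. A clock-in guard along these lines (the helper name is hypothetical; the real endpoint performs an equivalent server-side check) rejects a second concurrent session:

```typescript
interface ClockSessionRow {
  user_id: string;
  clock_in_time: string; // ISO timestamp
  is_active: boolean;
}

// Guard sketch: a user may clock in only if they have no active session.
function canClockIn(sessions: ClockSessionRow[], userId: string): boolean {
  return !sessions.some((s) => s.user_id === userId && s.is_active);
}
```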
16-Hour Auto Clock-Out Protection
Why 16 hours? This covers a "forgot to clock out at end of day" scenario while still capturing reasonable work duration. Entries with is_auto_clock_out = true are flagged for review.
-- Auto clock-out function called before time entry creation
CREATE OR REPLACE FUNCTION auto_clock_out_stale_sessions()
RETURNS void AS $$
BEGIN
-- Find all active sessions older than 16 hours and close them
UPDATE clock_sessions
SET
clock_out_time = clock_in_time + INTERVAL '16 hours',
is_active = false,
is_auto_clock_out = true
WHERE is_active = true
AND clock_in_time < NOW() - INTERVAL '16 hours';
END;
$$ LANGUAGE plpgsql SECURITY DEFINER;
-- This function runs automatically via:
-- 1. Trigger on time_entries INSERT
-- 2. Scheduled cron job (if using pg_cron)
-- 3. Application code before creating entries
Time Entries Table
Each time entry represents hours worked on a specific task. Time entries are linked to projects and optionally to clock sessions.
| Column | Type | Constraints | Description |
|---|---|---|---|
| id | UUID | PRIMARY KEY | Unique entry identifier |
| user_id | UUID | NOT NULL, FK user_profiles | User who logged time |
| task_id | UUID | FK tasks | Task worked on (optional) |
| project_id | UUID | NOT NULL, FK projects | Project for this entry |
| hours_logged | DECIMAL(5,2) | 0 < hours <= 24 | Hours logged (max 24/day) |
| entry_date | DATE | NOT NULL | Date of the work |
| week_start_date | DATE | NOT NULL | Monday of ISO week (calculated) |
| description | TEXT | - | What was worked on |
| clock_session_id | UUID | FK clock_sessions | Linked clock session |
| is_billable | BOOLEAN | DEFAULT true | Billable to client? |
| created_at | TIMESTAMPTZ | DEFAULT NOW() | When entry was created |
| updated_at | TIMESTAMPTZ | - | Last modification time |
Week Start Date (ISO Monday)
// TimeEntryService.getWeekStartDate()
getWeekStartDate(date: Date = new Date()): string {
const d = new Date(date);
const day = d.getDay();
// Sunday = 0, Monday = 1, ... Saturday = 6
// We want Monday as start of week (ISO standard)
const diff = d.getDate() - day + (day === 0 ? -6 : 1);
const monday = new Date(d.setDate(diff));
return monday.toISOString().split('T')[0]; // YYYY-MM-DD
}
// Examples:
// Date: 2025-01-15 (Wednesday) → week_start_date: 2025-01-13 (Monday)
// Date: 2025-01-19 (Sunday) → week_start_date: 2025-01-13 (Monday)
// Date: 2025-01-20 (Monday) → week_start_date: 2025-01-20 (Monday)
Time Entry Service (lib/services/time-entry-service.ts)
logTime(taskId, userId, projectId, hours, date, description?)
Returns: TimeEntry | null
Create a new time entry. Calculates week_start_date automatically.
getUserTimeEntries(userId, startDate?, endDate?)
Returns: TimeEntryWithDetails[]
Get all time entries for a user, optionally filtered by date range. Includes task and project details.
getTaskTimeEntries(taskId)
Returns: TimeEntryWithDetails[]
Get all time entries for a specific task.
getProjectTimeEntries(projectId, weekStartDate?)
Returns: TimeEntryWithDetails[]
Get all time entries for a project, optionally filtered by week.
getUserWeeklySummary(userId, weekStartDate)
Returns: { totalHours, entriesCount }
Get aggregated hours for a user in a specific week.
updateTimeEntry(entryId, updates)
Returns: TimeEntry | null
Update an existing time entry. Subject to 14-day edit window.
deleteTimeEntry(entryId)
Returns: boolean
Delete a time entry. Subject to 14-day edit window.
14-Day Edit Window
Edit Restriction: Time entries older than 14 days become read-only to prevent historical data manipulation.
// Check if entry is editable
const isEditable = (entryDate: string): boolean => {
const entry = new Date(entryDate);
const cutoff = new Date();
cutoff.setDate(cutoff.getDate() - 14);
return entry >= cutoff;
};
// RLS policy enforces this at database level
CREATE POLICY "Users can only edit recent entries"
ON time_entries FOR UPDATE
USING (
auth.uid() = user_id
AND entry_date >= CURRENT_DATE - INTERVAL '14 days'
);
Time Entries Page Features
Summary Statistics
- Hours this week (with bar visualization)
- Hours this month (cumulative)
- Daily average (last 30 days)
- Billable vs non-billable breakdown
List View
- Date range filter (quick: today, week, month)
- Project and task filters
- Pagination (20 entries per page)
- Inline edit for recent entries
Visualizations
- Daily hours bar chart (last 14 days)
- Hours by project (horizontal bars)
- Distribution pie chart
- Week-over-week comparison
Clock Status
- Active session indicator in header
- Quick clock in/out button
- Session duration timer
- Auto clock-out warnings
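The session duration timer above can be derived entirely client-side. A minimal sketch (the helper name is hypothetical, not part of the MovaLab API) that formats elapsed time since clock-in as "H:MM":

```typescript
// Elapsed time since clock-in, rendered for the header timer.
function sessionDuration(clockInIso: string, now: Date = new Date()): string {
  const minutes = Math.max(
    0,
    Math.floor((now.getTime() - new Date(clockInIso).getTime()) / 60_000)
  );
  return `${Math.floor(minutes / 60)}:${String(minutes % 60).padStart(2, "0")}`;
}
```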
Time Tracking API Endpoints
| Endpoint | Method | Description |
|---|---|---|
| /api/time-entries | GET | List time entries with filters (userId, startDate, endDate, projectId) |
| /api/time-entries | POST | Create new time entry |
| /api/time-entries/[id] | GET | Get single time entry |
| /api/time-entries/[id] | PUT | Update time entry (14-day window) |
| /api/time-entries/[id] | DELETE | Delete time entry (14-day window) |
| /api/time-entries/summary | GET | Get aggregated summary statistics |
| /api/clock-sessions | GET | List user's clock sessions |
| /api/clock-sessions/current | GET | Get active clock session (if any) |
| /api/clock-sessions/clock-in | POST | Start new clock session |
| /api/clock-sessions/clock-out | POST | End active clock session |
Common Time Tracking Issues
"Forgot to clock out" - session shows 16+ hours
Cause: User left clock running overnight
Fix: System auto-closes at 16hrs. Edit the time entry to correct hours, or delete and re-create.
Cannot edit time entry older than 14 days
Cause: Edit window has passed
Fix: Contact admin with MANAGE_TIME permission to make historical edits.
Time entry not appearing in capacity calculations
Cause: week_start_date mismatch
Fix: Verify entry_date is correct. week_start_date is auto-calculated from entry_date.
Duplicate clock sessions
Cause: Multiple browser tabs or app instances
Fix: Only one active session per user is allowed. Check for stale sessions.
Capacity Planning
MovaLab implements a sophisticated capacity planning system that prevents over-commitment through proportional allocation. The system tracks availability at the user level, aggregates it at department and organization levels, and provides real-time utilization metrics across all dimensions.
Capacity System Architecture
CAPACITY PLANNING SYSTEM

Inputs:
- user_availability: available_hours, schedule_data, week_start_date
- task_week_allocations: allocated_hours, task_id, user_id, week_start_date
- time_entries: hours_logged, entry_date, week_start_date

CapacityService:
- getUserCapacityMetrics()
- getDeptCapacityMetrics()
- getOrgCapacityMetrics()
- getCapacityTrend()

Outputs:
- UserCapacityMetrics: availableHrs, allocatedHrs, actualHrs, utilization%
- DeptCapacityMetrics: teamSize, totalAvail, totalActual, utilization%
- OrgCapacityMetrics: totalUsers, totalAvail, avgUtilization, deptMetrics[]
The Problem with Traditional Systems
Traditional agency management tools have a fundamental capacity tracking flaw. Consider this scenario:
Example: Sarah works 40 hours/week
- Assigned to Account A (Client Projects)
- Assigned to Account B (Internal Projects)
- Assigned to Account C (Partner Projects)
Traditional System
Counts Sarah as 40 hrs for each account
= 120 hrs total capacity (WRONG)
MovaLab System
Splits Sarah's 40 hrs proportionally
= 40 hrs total (13.3 per account)
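The proportional split above can be sketched as a small pure function. This assumes an even split across accounts, as in the Sarah example; the production allocator may weight shares by actual task assignments:

```typescript
// Split a user's weekly availability evenly across their accounts,
// so the same hours are never counted more than once.
function splitCapacity(
  availableHours: number,
  accountIds: string[]
): Map<string, number> {
  const share = accountIds.length ? availableHours / accountIds.length : 0;
  return new Map(accountIds.map((id) => [id, share]));
}
```

With `splitCapacity(40, ["A", "B", "C"])`, each account sees roughly 13.3 hours and the shares always sum back to 40, matching the example.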
User Availability Table
The user_availability table tracks how many hours each user is available to work per week. This forms the foundation of all capacity calculations.
| Column | Type | Description |
|---|---|---|
| id | uuid | Primary key, auto-generated |
| user_id | uuid | FK to user_profiles, user this availability applies to |
| week_start_date | date | Monday of the week (ISO week start) |
| available_hours | decimal(5,2) | Total hours user can work this week (0-168) |
| schedule_data | jsonb | Optional day-by-day breakdown (see below) |
| notes | text | Optional notes (vacation, reduced hours reason) |
| created_at | timestamptz | Record creation timestamp |
| updated_at | timestamptz | Last modification timestamp |
Schedule Data JSONB Schema
The schedule_data field allows granular per-day availability tracking:
interface WeeklySchedule {
monday?: number; // Hours available Monday (default: 8)
tuesday?: number; // Hours available Tuesday (default: 8)
wednesday?: number; // Hours available Wednesday (default: 8)
thursday?: number; // Hours available Thursday (default: 8)
friday?: number; // Hours available Friday (default: 8)
saturday?: number; // Hours available Saturday (default: 0)
sunday?: number; // Hours available Sunday (default: 0)
}
// Example: Part-time employee working Mon-Wed
{
"monday": 8,
"tuesday": 8,
"wednesday": 8,
"thursday": 0,
"friday": 0,
"saturday": 0,
"sunday": 0
}
// available_hours would be 24 for this week
Task Week Allocations Table
The task_week_allocations table tracks planned hours for each task per week. This enables forward-looking capacity planning and over-allocation detection.
| Column | Type | Description |
|---|---|---|
| id | uuid | Primary key |
| task_id | uuid | FK to tasks, the task being allocated |
| assigned_user_id | uuid | FK to user_profiles, who is allocated |
| week_start_date | date | Monday of the week |
| allocated_hours | decimal(5,2) | Hours planned for this task this week |
| notes | text | Optional allocation notes |
| created_at | timestamptz | When allocation was created |
| updated_at | timestamptz | Last update timestamp |
Capacity Service
The CapacityService (lib/services/capacity-service.ts) provides methods to calculate capacity metrics at user, department, project, and organization levels.
// User capacity metrics interface
interface UserCapacityMetrics {
userId: string;
userName: string;
userEmail: string;
weekStartDate: string;
availableHours: number; // From user_availability
allocatedHours: number; // From task_week_allocations
actualHours: number; // From time_entries
utilizationRate: number; // (actual / available) * 100
remainingCapacity: number; // available - actual
}
// Department capacity aggregation
interface DepartmentCapacityMetrics {
departmentId: string;
departmentName: string;
weekStartDate: string;
teamSize: number;
totalAvailableHours: number;
totalAllocatedHours: number;
totalActualHours: number;
utilizationRate: number;
remainingCapacity: number;
userMetrics: UserCapacityMetrics[];
}
// Organization-wide metrics
interface OrgCapacityMetrics {
weekStartDate: string;
totalUsers: number;
totalAvailableHours: number;
totalAllocatedHours: number;
totalActualHours: number;
avgUtilizationRate: number;
totalRemainingCapacity: number;
departmentMetrics: DepartmentCapacityMetrics[];
}
Capacity Service Methods
| Method | Parameters | Returns |
|---|---|---|
| getUserCapacityMetrics | userId, weekStartDate | UserCapacityMetrics \| null |
| getDepartmentCapacityMetrics | departmentId, weekStartDate | DepartmentCapacityMetrics \| null |
| getProjectCapacityMetrics | projectId, weekStartDate | ProjectCapacityMetrics \| null |
| getOrgCapacityMetrics | weekStartDate | OrgCapacityMetrics \| null |
| getUserCapacityTrend | userId, numberOfWeeks (default: 8) | UserCapacityMetrics[] |
| getDepartmentCapacityTrend | departmentId, numberOfWeeks | DepartmentCapacityMetrics[] |
Allocation Calculation Algorithm
The capacity service uses a multi-source allocation algorithm that takes the maximum of three sources to avoid double-counting while ensuring all commitments are captured:
// Three sources of allocation data:

// 1. Week-level allocations (most accurate)
const weekAllocatedHours = task_week_allocations
  .filter(a => a.assigned_user_id === userId && a.week_start_date === week)
  .reduce((sum, a) => sum + a.allocated_hours, 0);

// 2. Project-level allocations (from remaining task hours)
const projectAllocatedHours = projects
  .filter(p => isUserAssigned(p, userId) && p.status !== 'complete')
  .flatMap(p => p.tasks.filter(t => t.status !== 'done'))
  .reduce((sum, t) => sum + (t.remaining_hours ?? t.estimated_hours ?? 0), 0);

// 3. Task-level allocations (individual task assignments)
const taskAllocatedHours = tasks
  .filter(t => t.assigned_to === userId && t.status !== 'done')
  .reduce((sum, t) => sum + (t.remaining_hours ?? t.estimated_hours ?? 0), 0);

// Final allocation = max of all sources (avoids double-counting)
const allocatedHours = Math.max(
  weekAllocatedHours,
  projectAllocatedHours,
  taskAllocatedHours
);
Availability Service
The AvailabilityService (lib/services/availability-service.ts) manages user availability records and provides helper methods for capacity calculations.
| Method | Description |
|---|---|
| getWeekStartDate(date?) | Returns ISO week start (Monday) for the given date |
| getUserAvailability(userId, weekStartDate) | Get availability for a specific week |
| getUserAvailabilityRange(userId, startWeek, endWeek) | Get availability for date range |
| setUserAvailability(userId, weekStartDate, hours, schedule?, notes?) | Create or update availability |
| deleteUserAvailability(userId, weekStartDate) | Remove availability record |
| copyAvailabilityToWeeks(userId, sourceWeek, targetWeeks) | Copy availability pattern to multiple weeks |
| calculateTotalHours(schedule) | Sum hours from WeeklySchedule object |
| getDepartmentAvailability(departmentId, weekStartDate) | Get all user availability for a department |
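A plausible shape of calculateTotalHours is sketched below. Note the assumption, taken from the WeeklySchedule defaults documented earlier, that omitted days fall back to 8 hours on weekdays and 0 on weekends; the actual implementation in availability-service.ts may handle omitted days differently:

```typescript
interface WeeklySchedule {
  monday?: number; tuesday?: number; wednesday?: number; thursday?: number;
  friday?: number; saturday?: number; sunday?: number;
}

// Documented per-day defaults: 8 hrs on weekdays, 0 on weekends.
const DAY_DEFAULTS: Required<WeeklySchedule> = {
  monday: 8, tuesday: 8, wednesday: 8, thursday: 8, friday: 8,
  saturday: 0, sunday: 0,
};

// Sum per-day hours, applying the defaults for any omitted day.
function calculateTotalHours(schedule: WeeklySchedule): number {
  return (Object.keys(DAY_DEFAULTS) as (keyof WeeklySchedule)[])
    .reduce((sum, day) => sum + (schedule[day] ?? DAY_DEFAULTS[day]), 0);
}
```

With this reading, the part-time example earlier (Mon-Wed at 8 hours, all other days explicitly 0) totals 24 hours, matching its available_hours.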
Key Capacity Metrics
| Metric | Calculation | Description |
|---|---|---|
| Available Hours | user_availability.available_hours | Hours user can work this week |
| Allocated Hours | MAX(week, project, task allocations) | Planned work for the week |
| Actual Hours | SUM(time_entries.hours_logged) | Hours already logged this week |
| Utilization Rate | (actual_hours / available_hours) * 100 | Percentage of capacity used |
| Remaining Capacity | available_hours - actual_hours | Hours still available |
| Over-Allocation | allocated_hours - available_hours (if > 0) | Hours overbooked |
Utilization Bands
MovaLab uses five utilization bands to categorize capacity health. These are used throughout the UI for visual indicators and capacity reports.
< 60%
Under-utilized
Team member has significant available capacity
60-80%
Healthy
Optimal utilization with buffer for unexpected work
80-95%
High
Near capacity, limited flexibility
95-110%
Over-allocated
Exceeding capacity, risk of burnout
> 110%
Critical
Severely overbooked, immediate action required
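A small sketch tying the metric formula to the bands above (band boundaries are assigned to the lower band here, an assumption the table does not pin down):

```typescript
// utilizationRate = (actual_hours / available_hours) * 100
function utilizationRate(actualHours: number, availableHours: number): number {
  return availableHours > 0 ? (actualHours / availableHours) * 100 : 0;
}

// Map a rate onto the five bands from the table above.
function utilizationBand(rate: number): string {
  if (rate < 60) return "Under-utilized";
  if (rate <= 80) return "Healthy";
  if (rate <= 95) return "High";
  if (rate <= 110) return "Over-allocated";
  return "Critical";
}
```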
Capacity Trend Analysis
The capacity service provides trend analysis to identify patterns over time:
// Get 8-week utilization trend for a user
const trend = await capacityService.getUserCapacityTrend(
userId,
8 // numberOfWeeks
);
// Returns array of weekly metrics:
// [
// { weekStartDate: "2024-01-01", utilizationRate: 75, ... },
// { weekStartDate: "2024-01-08", utilizationRate: 82, ... },
// { weekStartDate: "2024-01-15", utilizationRate: 68, ... },
// ...
// ]
// Use for charts and forecasting
const avgUtilization = trend.reduce((sum, w) => sum + w.utilizationRate, 0) / trend.length;
const isOverworked = trend.filter(w => w.utilizationRate > 95).length > 3;
Capacity API Endpoints
| Method | Endpoint | Description |
|---|---|---|
| GET | /api/capacity/user/[id] | Get user capacity for current week |
| GET | /api/capacity/user/[id]/week/[date] | Get user capacity for specific week |
| GET | /api/capacity/user/[id]/trend | Get 8-week capacity trend |
| GET | /api/capacity/department/[id] | Get department capacity summary |
| GET | /api/capacity/department/[id]/trend | Get department trend |
| GET | /api/capacity/project/[id] | Get project capacity metrics |
| GET | /api/capacity/org | Get organization-wide capacity |
| GET | /api/availability/[userId] | Get user availability |
| PUT | /api/availability/[userId] | Set user availability |
| POST | /api/availability/[userId]/copy | Copy availability to weeks |
| GET | /api/allocations/task/[taskId] | Get task allocations |
| POST | /api/allocations/task/[taskId] | Create task allocation |
| PUT | /api/allocations/[id] | Update allocation |
| DELETE | /api/allocations/[id] | Delete allocation |
Capacity UI Features
Capacity Dashboard
Organization-wide capacity overview with department breakdowns and utilization heat maps
User Capacity Cards
Individual capacity indicators showing available, allocated, and remaining hours
Availability Calendar
Weekly availability editor with copy-to-weeks functionality for recurring patterns
Over-Allocation Alerts
Automatic warnings when assigning tasks would exceed user capacity
Trend Charts
8-week utilization trends for users and departments to identify patterns
Project Capacity View
Project-specific capacity showing team allocation and progress
Common Capacity Issues
User shows 0% utilization despite logged hours
Cause: No availability record for the week
Fix: Create user_availability record for the week with available_hours > 0
Department capacity not updating
Cause: User not linked to department via role
Fix: Assign user a role with the correct department_id
Over-allocation not detected
Cause: Task allocations not created for the week
Fix: Create task_week_allocations records for planned work
Trend shows gaps
Cause: Missing availability records for some weeks
Fix: Use copyAvailabilityToWeeks to fill in availability patterns
Project capacity shows wrong hours
Cause: Tasks missing remaining_hours or estimated_hours
Fix: Ensure all tasks have estimated_hours set. System uses remaining_hours if available.
Client Portal
Separate access for clients to view projects, approve deliverables, and provide feedback.
Client Role Permissions
- View projects they're associated with
- View deliverables submitted for review
- Approve or reject deliverables
- Provide feedback and ratings
- View project updates
Invitation Flow
- Admin creates invitation via client_portal_invitations
- Email sent to client with invitation link
- Client registers/logs in and accepts invitation
- Client added to account_members with Client role
- RLS policies grant appropriate access
API Reference
83 REST API endpoints using Next.js App Router API routes. All endpoints require authentication via Supabase JWT tokens. Permissions are checked at the API layer using the RBAC system.
API at a Glance
83
Endpoints
9
Categories
JWT
Auth Method
JSON
Response Format
Authentication
All API requests require a valid Supabase JWT token in the Authorization header. Tokens are obtained via Supabase Auth and automatically refreshed by the client SDK.
Getting an Access Token
// Login and get token
const { data, error } = await supabase.auth.signInWithPassword({
email: 'user@example.com',
password: 'password123'
});
if (data.session) {
const accessToken = data.session.access_token;
// Token is valid for 1 hour, auto-refreshed by SDK
}
Using the Token
// All API requests include the token automatically via Supabase client
// For direct fetch requests:
fetch('/api/projects', {
headers: {
'Authorization': `Bearer ${accessToken}`,
'Content-Type': 'application/json'
}
});
Error Responses
All endpoints return consistent error responses with HTTP status codes and JSON bodies.
400 Bad Request
{
"error": "userId is required"
}
401 Unauthorized
{
"error": "Unauthorized"
}
403 Forbidden
{
"error": "Insufficient permissions to create projects"
}
500 Server Error
{
"error": "Internal server error",
"details": "..." // Only in dev mode
}
Projects API (15 endpoints)
List projects accessible to the current user. Filters by permission level.
Query params: userId (required), limit (default: 10)
Response:
{ "success": true, "projects": [{ "id": "...", "name": "...", "status": "...", "account": {...} }] }
Create a new project. Requires MANAGE_PROJECTS permission for the account.
Request body:
{
"name": "New Website Redesign",
"description": "Complete overhaul of client website",
"accountId": "uuid",
"status": "planning",
"start_date": "2024-02-01",
"end_date": "2024-04-30",
"budget": 50000,
"assigned_user_id": "uuid" // optional, defaults to creator
}
Update project details
Delete project (cascades)
Mark project complete (read-only)
Reopen completed project
List project team members
Add team member to project
List project status updates
List project blockers
Tasks API (12 endpoints)
Permission model: Task permissions inherit from project access. If a user has access to a project, they can create/edit/delete tasks within it. Completed projects are read-only.
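The inheritance rule can be expressed as a small predicate. This is a sketch assuming a boolean project-access check; the real evaluation goes through the RBAC system and RLS:

```typescript
// Sketch of the task permission rule: access inherits from the
// project, and completed projects are read-only.
type ProjectStatus = "planning" | "active" | "completed";

function canModifyTasks(hasProjectAccess: boolean, projectStatus: ProjectStatus): boolean {
  // No project access means no task access at all.
  if (!hasProjectAccess) return false;
  // Completed projects are read-only, so writes are rejected.
  return projectStatus !== "completed";
}
```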
Create a new task. User must have project access.
Request body:
{
"name": "Design homepage mockup",
"description": "Create initial Figma designs",
"project_id": "uuid",
"status": "backlog", // backlog|todo|in_progress|review|done|blocked
"priority": "high", // low|medium|high|urgent
"estimated_hours": 8,
"due_date": "2024-02-15",
"assigned_to": "uuid"
}
List tasks for a project
Update task (status, assignee, etc.)
Delete task
Add task dependency (Gantt)
Accounts API (10 endpoints)
List accessible accounts
Create new account
List account members
Add member to account
Get kanban configuration
Update kanban columns
Time Tracking API (8 endpoints)
Start a clock session. Returns active session.
// Request: { "project_id": "uuid" (optional) }
// Response: { "success": true, "session": { "id": "...", "clock_in_time": "..." } }
End clock session, create time entry
Get current clock session (if any)
List time entries (filterable)
Manual time entry (max 24h)
Edit entry (14-day window)
Delete entry (14-day window)
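The two limits above (24h cap on manual entries, 14-day edit/delete window) can be sketched as guards. These are hypothetical helpers, not TimeEntryService's actual code:

```typescript
const MS_PER_DAY = 24 * 60 * 60 * 1000;

// Manual entries are capped at 24 hours.
function isValidManualEntry(hours: number): boolean {
  return hours > 0 && hours <= 24;
}

// Entries can only be edited or deleted within 14 days of their date.
function isWithinEditWindow(entryDate: Date, now: Date): boolean {
  const ageDays = (now.getTime() - entryDate.getTime()) / MS_PER_DAY;
  return ageDays <= 14;
}
```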
Workflows API (12 endpoints)
List workflow templates
Create workflow template
Add node to template
Connect nodes
Start workflow instance
Progress to next node
Get transition history
User's workflow history
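The "progress to next node" operation boils down to following template connections. A minimal sketch, with illustrative shapes rather than the actual workflow_connections schema:

```typescript
// Sketch of node transition lookup: given the template's connections,
// find the nodes reachable from the current one. Field names are
// assumptions for illustration.
interface Connection {
  from_node_id: string;
  to_node_id: string;
}

function nextNodes(connections: Connection[], currentNodeId: string): string[] {
  return connections
    .filter((c) => c.from_node_id === currentNodeId)
    .map((c) => c.to_node_id);
}
```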
Admin & Other APIs
Org Structure (6 endpoints)
- /api/org-structure/roles - CRUD roles
- /api/org-structure/departments - CRUD departments
- /api/auth/permissions - Get user permissions
Capacity (6 endpoints)
- /api/capacity/weekly - Weekly summary
- /api/capacity/user/[id] - User capacity
- /api/availability - User availability
Deliverables (5 endpoints)
- /api/deliverables - CRUD deliverables
- /api/deliverables/[id]/approve - Approve
- /api/deliverables/[id]/reject - Reject
Client Portal (4 endpoints)
- /api/client-portal/invite - Send invite
- /api/client-portal/feedback - Submit feedback
- /api/admin/client-feedback - View all
Common Patterns
Pagination
GET /api/projects?limit=20&offset=0
// Response includes total count
{ "projects": [...], "total": 45 }
Filtering
GET /api/tasks?projectId=xxx&status=in_progress // Multiple filters via query params
Includes/Expands
// Related data included automatically
// e.g., projects include account
{ "project": { ..., "account": {...} } }
Soft Deletes
// project_assignments use soft delete
// removed_at is set instead of DELETE
PUT /api/.../assignments/[id]
{ "removed_at": "2024-01-15T..." }
Components Reference
MovaLab contains 107 React components organized by feature. The component architecture follows Next.js 15 patterns with a clear separation between Server Components (data fetching) and Client Components (interactivity). shadcn/ui provides the base component library with consistent styling via Tailwind CSS.
Component Architecture
COMPONENT HIERARCHY

Page Components (RSC)
└── Fetch data, check permissions, render layout
    ├── Layout Components
    │   └── Sidebar, Header, Navigation
    ├── Feature Components (Client)
    │   └── Interactive UI, forms, state management
    │       └── Base UI Components (shadcn/ui)
    │           └── Button, Card, Dialog, Form, Table, etc.
    └── Data Display Components
        └── Tables, charts, lists (may be client or server)
Component Categories
UI Components
27 components in components/ui/ - shadcn/ui base components with Tailwind styling
Examples: Button, Card, Dialog, Form, Table, Tabs, Select
Layout
5 components in components/ - App shell and navigation structure
Examples: Sidebar, Header, DashboardLayout, BreadCrumb
Accounts
8 components in components/ - Client account management UI
Examples: AccountList, AccountOverview, AccountCreateDialog
Projects
12 components in components/ - Project and task management
Examples: AssignedProjectsSection, ProjectCard, TaskBoard
Workflows
10 components in components/workflows/ - React Flow-based workflow builder
Examples: WorkflowEditor, NodeCanvas, NodePalette
Time Tracking
8 components in components/ - Clock in/out and time entry UI
Examples: ClockWidget, ClockOutDialog, TimeEntriesList
Capacity
5 components in components/ - Capacity and availability visualization
Examples: CapacityDashboard, CapacityTrendChart, AvailabilityCalendar
Admin
10 components in components/admin/ - Administration and configuration
Examples: DepartmentAdminTabs, RoleEditor, OrgChart
Key Components
| Component | Type | Description | Key Props |
|---|---|---|---|
| ClockWidget | Client | Clock in/out with real-time timer | userId, onClockChange |
| AccountOverview | Client | Full account detail view with tabs | accountId, initialData |
| WorkflowEditor | Client | React Flow canvas for workflow design | templateId, onSave |
| CapacityTrendChart | Client | 8-week utilization line chart | userId, departmentId |
| DragAvailabilityCalendar | Client | Weekly availability with drag-to-set | userId, weekStartDate |
| GanttChart | Client | Project timeline visualization | projects, tasks, dateRange |
| OrgChart | Client | Interactive organization hierarchy | departmentId, showReports |
| AccountList | Client | Filterable account card grid | accounts, onAccountSelect |
| TimeEntriesList | Client | Paginated time entry table | userId, dateRange |
| BreadCrumb | Client | Dynamic navigation breadcrumbs | items, separator |
Component Patterns
Server Component Data Fetching
Page components fetch data server-side, pass to client components as props
Example: page.tsx fetches projects, passes to ProjectList client component
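A minimal sketch of the handoff, with illustrative row/prop shapes (not MovaLab's actual types): the server component fetches full rows, then passes only what the client component needs as serializable props:

```typescript
// Sketch of the server-fetch -> client-props handoff. The page
// (server component) fetches full rows, then maps them down to the
// serializable props the client component needs.
interface ProjectRow {
  id: string;
  name: string;
  status: string;
  internal_notes?: string; // server-only field, never sent to the client
}

interface ProjectCardProps {
  id: string;
  name: string;
  status: string;
}

function toClientProps(rows: ProjectRow[]): ProjectCardProps[] {
  // Drop server-only fields so the client bundle never sees them.
  return rows.map(({ id, name, status }) => ({ id, name, status }));
}

// In page.tsx (server component), roughly:
//   const rows = await projectService.getProjects(userId);
//   return <ProjectList projects={toClientProps(rows)} />;
```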
Client Wrapper Pattern
Thin client wrapper that imports and renders client component with server-fetched data
Example: AccountsClientWrapper wraps AccountList for client interactivity
Dialog Pattern
Dialogs manage own open state, receive onSuccess callback, use react-hook-form
Example: AccountCreateDialog, DepartmentDeleteDialog
List + Detail Pattern
List component with selection triggers detail panel or navigation
Example: AccountList > AccountOverview, ProjectList > ProjectDetail
Form Components
Forms use react-hook-form with zod validation. shadcn/ui Form components provide consistent styling.
// Standard form pattern in MovaLab
import { useForm } from "react-hook-form";
import { zodResolver } from "@hookform/resolvers/zod";
import { z } from "zod";
import { Form, FormField, FormItem, FormLabel, FormControl } from "@/components/ui/form";
import { Input } from "@/components/ui/input";
import { Button } from "@/components/ui/button";
const schema = z.object({
name: z.string().min(1, "Name is required"),
email: z.string().email("Invalid email"),
});
export function ExampleForm({ onSubmit }) {
const form = useForm({
resolver: zodResolver(schema),
defaultValues: { name: "", email: "" },
});
return (
<Form {...form}>
<form onSubmit={form.handleSubmit(onSubmit)}>
<FormField
control={form.control}
name="name"
render={({ field }) => (
<FormItem>
<FormLabel>Name</FormLabel>
<FormControl>
<Input {...field} />
</FormControl>
</FormItem>
)}
/>
<Button type="submit">Submit</Button>
</form>
</Form>
);
}
shadcn/ui Components
Base UI components from shadcn/ui, customized via components/ui/:
Services Reference
MovaLab uses a service layer architecture with 43 service files encapsulating all business logic. Services are responsible for database operations, validation, and coordination between components. All database operations go through services, never directly from components.
Service Architecture
SERVICE LAYER PATTERN

API Route / Server Action
  1. Validate request (zod)
  2. Check permissions (PermissionChecker)
  3. Call service method
        │
        ▼
SERVICE LAYER
  • Encapsulates business logic
  • Handles database operations
  • Manages transactions
  • Returns typed responses
        │
        ▼
SUPABASE CLIENT
  • createServerSupabase() - server operations
  • createClientSupabase() - client operations
  • RLS enforced at database level
Service Categories
Core Business
12 services - Primary business entity management
Examples: AccountService, ProjectService, TaskService
Time & Capacity
3 services - Time tracking and resource planning
Examples: TimeEntryService, CapacityService, AvailabilityService
Workflow
4 services - Workflow templates and execution
Examples: WorkflowService, WorkflowExecutionService
Auth & Permissions
5 services - RBAC and access control
Examples: PermissionChecker, RoleManagementService
Organization
3 services - Org structure and departments
Examples: DepartmentService, OrganizationService
Utilities
8 services - Supporting features and integrations
Examples: FormService, MilestoneService, NewsletterService
All Services
| Service | Location | Responsibility | Key Methods |
|---|---|---|---|
| AccountService | lib/account-service.ts | Account CRUD, membership | getAccounts, createAccount, addMember |
| CapacityService | lib/services/capacity-service.ts | Capacity metrics calculation | getUserCapacityMetrics, getOrgCapacity |
| TimeEntryService | lib/services/time-entry-service.ts | Time entry management | logTime, getUserTimeEntries, updateEntry |
| AvailabilityService | lib/services/availability-service.ts | User availability | getUserAvailability, setAvailability |
| WorkflowService | lib/workflow-service.ts | Workflow templates | createTemplate, updateTemplate, getNodes |
| WorkflowExecutionService | lib/workflow-execution-service.ts | Workflow runtime | startInstance, transitionNode, complete |
| RoleManagementService | lib/role-management-service.ts | Role administration | createRole, assignRole, getPermissions |
| PermissionChecker | lib/permission-checker.ts | Permission evaluation | hasPermission, checkContext, getUserRoles |
| DepartmentService | lib/department-service.ts | Department management | getDepartments, createDept, assignUser |
| OrganizationService | lib/organization-service.ts | Org structure | getOrgChart, getHierarchy |
| AssignmentService | lib/assignment-service.ts | Resource assignments | assignToProject, removeAssignment |
| FormService | lib/form-service.ts | Dynamic forms | createForm, getFormData, submitResponse |
| MilestoneService | lib/milestone-service.ts | Project milestones | createMilestone, updateProgress |
| ClientPortalService | lib/client-portal-service.ts | Client access | createInvitation, getClientProjects |
| ProjectUpdatesService | lib/project-updates-service.ts | Project updates | createUpdate, getUpdates |
| ProjectIssuesService | lib/project-issues-service.ts | Issue tracking | createIssue, resolveIssue |
Service Pattern
Services follow a consistent pattern: singleton class with methods that interact with Supabase.
// Standard service pattern in MovaLab
import { createClientSupabase } from '../supabase';
interface EntityWithDetails extends Entity {
relatedData?: { id: string; name: string };
}
class ExampleService {
/**
* Get entity by ID with related data
*/
async getById(id: string): Promise<EntityWithDetails | null> {
const supabase = createClientSupabase();
if (!supabase) return null;
const { data, error } = await supabase
.from('entities')
.select(`
*,
related:related_table(id, name)
`)
.eq('id', id)
.single();
if (error) {
console.error('Error fetching entity:', error);
return null;
}
return data as EntityWithDetails;
}
/**
* Create new entity
*/
async create(data: EntityInsert): Promise<Entity | null> {
const supabase = createClientSupabase();
if (!supabase) return null;
const { data: result, error } = await supabase
.from('entities')
.insert([data])
.select()
.single();
if (error) {
console.error('Error creating entity:', error);
return null;
}
return result;
}
}
// Export singleton
export const exampleService = new ExampleService();
Utility Services
AccessControlServer
lib/access-control-server.ts - Server-side permission checks with request context
AuthServer
lib/auth-server.ts - Server-side authentication utilities
DatabaseCheck
lib/database-check.ts - Database health checks and connectivity tests
DebugLogger
lib/debug-logger.ts - Structured logging with levels
RateLimit
lib/rate-limit.ts - Upstash Redis rate limiting
ServerGuards
lib/server-guards.ts - Route protection utilities
Service Dependencies
Some services depend on others for complex operations:
- CapacityService depends on AvailabilityService for week start calculation
- WorkflowExecutionService depends on WorkflowService for template data
- TimeEntryService uses CapacityService week start date logic
- ClientPortalService uses AccountService for account context
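The shared week-start logic mentioned above can be sketched as a pure date helper. This assumes MovaLab weeks start on Monday; the actual rule lives in AvailabilityService:

```typescript
// Sketch of a shared week-start helper. Assumes weeks start on
// Monday (an assumption, not confirmed by the docs) and works in UTC
// to avoid timezone drift.
function weekStart(date: Date): Date {
  const d = new Date(Date.UTC(date.getUTCFullYear(), date.getUTCMonth(), date.getUTCDate()));
  const day = d.getUTCDay(); // 0 = Sunday ... 6 = Saturday
  const diff = day === 0 ? 6 : day - 1; // days since the most recent Monday
  d.setUTCDate(d.getUTCDate() - diff);
  return d;
}
```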
Configuration
Environment Variables
See Getting Started for complete environment variable reference.
Configuration Files
| File | Purpose |
|---|---|
| next.config.ts | Next.js configuration, redirects, headers |
| tailwind.config.ts | Tailwind CSS theme, colors, fonts |
| tsconfig.json | TypeScript configuration (strict mode) |
| eslint.config.mjs | ESLint v9 flat config |
| playwright.config.ts | E2E test configuration |
| components.json | shadcn/ui component settings |
Testing
Test Commands
npm run test:playwright   # Run Playwright E2E tests
npm run test:unit         # Run unit tests (permission system)
npm run test:integration  # Run integration tests
npm run test:permissions  # Run all permission tests + validation
Utility Scripts
npm run debug:permissions     # Debug permission issues for users
npm run validate:permissions  # Validate permission consistency
npm run fix:permissions       # Fix common permission problems
npm run check:users           # Check user status and roles
npm run setup:test-roles      # Set up comprehensive test roles
Deployment & Setup
Complete guide to setting up MovaLab for local development and deploying to production. Follow these steps from zero to fully operational.
Development Environment Overview
MovaLab can run in two environments. Understanding this is critical before you begin setup.
| Path | Use Case | Database | Best For |
|---|---|---|---|
| Local Docker | Development, testing, demos | PostgreSQL in Docker on your machine | Developers, evaluators, contributors |
| Cloud Supabase | Production, staging | Supabase Cloud (managed PostgreSQL) | Live deployments, real users |
Architecture Overview
LOCAL DEVELOPMENT:
[Your Machine]
├── Docker Desktop
│ └── Supabase Containers
│ ├── PostgreSQL (port 54322)
│ ├── Auth (GoTrue)
│ ├── PostgREST API (port 54321)
│ └── Studio UI (port 54323)
└── Next.js App (port 3000)
└── Connects to localhost:54321
CLOUD PRODUCTION:
[Vercel / Your Host]
└── Next.js App
└── Connects to xxx.supabase.co
└── [Supabase Cloud]
├── Managed PostgreSQL
├── Auth
    └── API
Prerequisites Checklist
Before starting, ensure you have all required software installed.
| Software | Version | How to Check | Download |
|---|---|---|---|
| Node.js | 18+ | node --version | nodejs.org |
| npm | 9+ | npm --version | Included with Node.js |
| Git | 2.x | git --version | git-scm.com |
| Docker Desktop | Latest | docker --version | docker.com |
System Requirements
| OS | RAM | Disk Space | Notes |
|---|---|---|---|
| macOS | 8GB+ (4GB for Docker) | 10GB free | macOS 12+ recommended |
| Windows | 8GB+ (4GB for Docker) | 10GB free | Windows 10/11, WSL2 required |
| Linux | 8GB+ (4GB for Docker) | 10GB free | Ubuntu 20.04+ recommended |
Pre-Flight Verification Script
# Run this to check all prerequisites are installed
echo "Node.js: $(node --version 2>/dev/null || echo 'NOT INSTALLED')"
echo "npm: $(npm --version 2>/dev/null || echo 'NOT INSTALLED')"
echo "Git: $(git --version 2>/dev/null || echo 'NOT INSTALLED')"
echo "Docker: $(docker --version 2>/dev/null || echo 'NOT INSTALLED')"
docker ps >/dev/null 2>&1 && echo "Docker: Running" || echo "Docker: NOT RUNNING"
Docker Desktop Installation
Docker is required to run the local Supabase database. Follow the instructions for your operating system.
macOS Installation
Step 1: Download Docker Desktop
- Go to docker.com/products/docker-desktop
- Click "Download for Mac"
- Choose: Apple Silicon (M1/M2/M3) OR Intel chip
- How to know which chip: Apple menu → About This Mac → Chip
Step 2: Install Docker Desktop
- Open the downloaded Docker.dmg file
- Drag the Docker icon to Applications folder
- Close the installer window
- Eject the Docker disk image (right-click → Eject)
Step 3: First Launch
- Open Applications folder
- Double-click Docker.app
- Click "Open" if macOS asks to confirm
- Enter your Mac password when prompted (grants network access)
- Wait for Docker to initialize (whale icon animates in menu bar)
Step 4: Verify Installation
# Open Terminal and run:
docker --version
# Expected: Docker version 24.x.x or higher
docker ps
# Expected: CONTAINER ID  IMAGE  COMMAND ... (empty list is OK)
# If "Cannot connect to Docker daemon" - Docker isn't running yet
macOS Troubleshooting
Docker Desktop requires macOS 12+
Update macOS or use older Docker version
Unable to start Docker
Restart Mac, try again
Docker very slow
Increase memory in Settings → Resources
Operation not permitted
System Preferences → Security → Allow Docker
Windows Installation
Step 1: Enable WSL2 (Required)
# Open PowerShell as Administrator:
# Right-click Start button → Windows Terminal (Admin)
wsl --install
# This installs: WSL2, Virtual Machine Platform, Ubuntu
# Expected output: "Installing: Windows Subsystem for Linux..."
Step 2: Restart Computer
Required after WSL2 installation. Save all work first!
Step 3: Complete Ubuntu Setup (After Restart)
- Ubuntu terminal opens automatically (or search "Ubuntu" in Start)
- Wait for "Installing, this may take a few minutes..."
- Create a username (lowercase, no spaces)
- Create a password (won't show as you type)
- Confirm password
Step 4: Download & Install Docker Desktop
- Go to docker.com/products/docker-desktop
- Click "Download for Windows"
- Run the installer (Docker Desktop Installer.exe)
- IMPORTANT: Check "Use WSL 2 instead of Hyper-V"
- Click "Ok" and wait for installation
- Click "Close and restart" when prompted
Step 5: Configure WSL Integration
- Open Docker Desktop
- Click Settings (gear icon)
- Go to "Resources" → "WSL Integration"
- Enable "Ubuntu" (or your distro)
- Click "Apply & Restart"
Step 6: Verify Installation
# Open PowerShell or Windows Terminal:
docker --version
# Expected: Docker version 24.x.x
docker ps
# Expected: CONTAINER ID  IMAGE  COMMAND ... (empty is OK)
Windows Troubleshooting
WSL 2 installation incomplete
Run `wsl --update` in PowerShell Admin
Hardware virtualization not enabled
Enable VT-x/AMD-V in BIOS settings
Docker won't start after install
Restart computer, try again
Very slow performance
Ensure WSL2: `wsl -l -v` should show VERSION 2
BIOS Virtualization Fix (If Needed)
- Restart computer
- Press F2/F10/Del during boot (varies by manufacturer)
- Find "Virtualization Technology" or "VT-x" or "SVM Mode"
- Enable it, save and exit BIOS
Linux Installation (Ubuntu/Debian)
# Step 1: Remove old Docker versions
sudo apt-get remove docker docker-engine docker.io containerd runc 2>/dev/null || true
# Step 2: Update package index
sudo apt-get update
# Step 3: Install prerequisites
sudo apt-get install -y ca-certificates curl gnupg lsb-release
# Step 4: Add Docker's official GPG key
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | \
sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
# Step 5: Add Docker repository
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] \
https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
# Step 6: Install Docker Engine
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
# Step 7: Add your user to docker group (avoids needing sudo)
sudo usermod -aG docker $USER
newgrp docker # Or log out and back in
# Step 8: Start Docker service
sudo systemctl start docker
sudo systemctl enable docker # Start on boot
# Step 9: Verify installation
docker --version
docker ps
docker run hello-world # Should print "Hello from Docker!"
Linux Troubleshooting
Permission denied after group add
Log out and back in, or run `newgrp docker`
Cannot connect to Docker daemon
`sudo systemctl start docker`
E: Unable to locate package
Double-check repository was added correctly
Using Debian (not Ubuntu)
Replace `ubuntu` with `debian` in repository URL
Linux Installation (Fedora/RHEL)
# Remove old versions
sudo dnf remove docker docker-client docker-client-latest docker-common
# Add repository
sudo dnf -y install dnf-plugins-core
sudo dnf config-manager --add-repo https://download.docker.com/linux/fedora/docker-ce.repo
# Install Docker
sudo dnf install docker-ce docker-ce-cli containerd.io docker-compose-plugin
# Start and enable
sudo systemctl start docker
sudo systemctl enable docker
# Add user to group
sudo usermod -aG docker $USER
newgrp docker
# Verify
docker --version && docker ps
Docker Resource Configuration
MovaLab runs PostgreSQL + Supabase services in Docker. Insufficient resources = slow or failing containers.
| Resource | Minimum | Recommended | Why |
|---|---|---|---|
| Memory | 2GB | 4GB | PostgreSQL + multiple services |
| CPUs | 1 | 2+ | Faster builds and queries |
| Disk | 5GB | 10GB | Database data + images |
Signs of Insufficient Resources
- Containers randomly stopping
- "Out of memory" errors
- Very slow queries
- npm run docker:start timing out
Local Development Setup
Complete first-time setup and daily workflow for local development.
Clone the Repository
git clone https://github.com/itigges22/movalab.git
cd movalab
First-Time Setup - Automated Script
macOS / Linux
./scripts/first-time-setup.sh
Windows PowerShell
.\scripts\first-time-setup.ps1
Windows Git Bash
bash scripts/first-time-setup.sh
What the Setup Script Does
| Step | Action | Why |
|---|---|---|
| 1 | Check Node.js version | Ensures v18+ is installed |
| 2 | Check Docker running | Supabase needs Docker |
| 3 | Verify migration file exists | supabase/migrations/20250129000000_baseline.sql |
| 4 | Install npm dependencies | npm install |
| 5 | Start Supabase containers | npx supabase start |
| 6 | Apply migrations | Automatically runs on start |
| 7 | Create seed users | Demo users for testing |
| 8 | Print status | URLs and credentials |
Expected Output
✓ Node.js v18+ detected
✓ Docker is running
✓ Migration file found
✓ Dependencies installed
✓ Starting Supabase...
Supabase started successfully!
API URL: http://127.0.0.1:54321
Studio URL: http://127.0.0.1:54323
DB URL: postgresql://postgres:postgres@127.0.0.1:54322/postgres
Demo Users Created:
- superadmin@test.local / Test1234!
- exec@test.local / Test1234!
- manager@test.local / Test1234!
- pm@test.local / Test1234!
- designer@test.local / Test1234!
- dev@test.local / Test1234!
Manual Setup (If Script Fails)
# Step 1: Install dependencies
npm install
# Step 2: Start Supabase
npx supabase start
# Wait for "Started supabase local development setup"
# Step 3: Verify status
npx supabase status
# Should show all services running with URLs
# Step 4: Seed database (optional)
npm run docker:seed
Daily Commands Reference
| Command | Description | When to Use |
|---|---|---|
npm run docker:start | Start Supabase containers, apply migrations | Beginning of dev session |
npm run docker:stop | Stop Supabase containers | End of dev session, save RAM |
npm run docker:reset | Stop, clear DB, restart, apply migrations | Schema issues, clean slate |
npm run docker:seed | Reset + create demo users/data | After reset, for demo data |
npm run docker:health | Check all services status | Troubleshooting |
npm run docker:studio | Open Supabase Studio in browser | View/edit data visually |
npm run docker:clean | Stop + remove all Docker volumes | Nuclear option, full reset |
Typical Daily Workflow
# Morning: Start development
npm run docker:start  # Wait 30-60 seconds
npm run dev           # Start Next.js
# Open: http://localhost:3000
# Login: exec@test.local / Test1234!
# Evening: Stop to save RAM
npm run docker:stop
Environment Variables for Local Development
Create a .env.local file with these values:
# Local Supabase (defaults from npx supabase start)
NEXT_PUBLIC_SUPABASE_URL=http://127.0.0.1:54321
NEXT_PUBLIC_SUPABASE_PUBLISHABLE_DEFAULT_KEY=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJzdXBhYmFzZS1kZW1vIiwicm9sZSI6ImFub24iLCJleHAiOjE5ODM4MTI5OTZ9.CRXP1A7WOeoJeXxjNni43kdQwgnWNReilDMblYTn_I0
# Optional: Enable demo mode for testing
NEXT_PUBLIC_DEMO_MODE=true
Tip: Run npx supabase status to see all URLs and keys.
Verification Checklist
| Check | Command | Expected Result |
|---|---|---|
| Docker running | docker ps | Shows supabase containers |
| Supabase healthy | npx supabase status | All services 'running' |
| API accessible | curl http://127.0.0.1:54321 | JSON response |
| App starts | npm run dev | 'Ready' on localhost:3000 |
| Can login | Open app, use exec@test.local | Redirects to dashboard |
Database Schema Overview
MovaLab uses a single consolidated baseline migration approach. One file contains the complete schema — no ordering issues, portable, and consistent across environments.
Migration File Location
supabase/migrations/20250129000000_baseline.sql
What the Baseline Contains
| Category | Count | Contents |
|---|---|---|
| Tables | 36 | All columns, constraints, foreign keys |
| Functions | 15+ | PostgreSQL functions with SECURITY DEFINER |
| RLS Policies | 100+ | Row Level Security for every table |
| Triggers | 10+ | Auto-update timestamps, profile creation |
| Indexes | 20+ | Performance optimization |
| Views | 1 | weekly_capacity_summary |
Tables by Category (36 Total)
User Management (5)
user_profiles, user_roles, roles, departments, role_hierarchy_audit
Accounts & Projects (9)
accounts, account_members, projects, project_assignments, project_stakeholders, project_updates, project_issues, account_kanban_configs, milestones
Tasks (3)
tasks, task_dependencies, task_week_allocations
Time Tracking (3)
time_entries, clock_sessions, user_availability
Workflows (7)
workflow_templates, workflow_nodes, workflow_connections, workflow_instances, workflow_history, workflow_active_steps, workflow_node_assignments
Forms & Other (9)
form_templates, form_responses, deliverables, newsletters, notifications, client_feedback, client_portal_invitations, user_dashboard_preferences, api_keys
Key Database Functions
| Function | Purpose |
|---|---|
user_is_superadmin(uuid) | Check if user is superadmin |
user_has_permission(uuid, text) | Check specific permission |
user_is_account_manager(uuid, uuid) | Check account manager role |
user_is_account_member(uuid, uuid) | Check account access |
user_has_project_access(uuid, uuid) | Check project access |
auto_clock_out_stale_sessions() | Close sessions >16h old |
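The auto clock-out threshold can be sketched in application terms. The real logic is the PostgreSQL function above; this TypeScript mirror is illustrative only:

```typescript
const MS_PER_HOUR = 60 * 60 * 1000;

// Mirrors the intent of auto_clock_out_stale_sessions(): a session
// still open after more than 16 hours is considered stale and
// force-closed.
function isStaleSession(clockInTime: Date, now: Date): boolean {
  return now.getTime() - clockInTime.getTime() > 16 * MS_PER_HOUR;
}
```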
Why SECURITY DEFINER?
RLS policies that query other RLS-protected tables cause infinite recursion. SECURITY DEFINER functions run as the function owner (postgres), bypassing RLS internally while the outer query still enforces RLS.
Cloud Deployment
Complete guide to deploying MovaLab to Supabase Cloud for production.
Why Not Local Docker for Production?
- No automatic backups — Docker volumes can be lost
- Single point of failure — your machine is the only copy
- Computer sleeps = app offline
- Consumes 2-4GB RAM constantly
Always use Cloud Supabase for production data.
Cloud Commands Reference
| Command | What It Does | When to Use |
|---|---|---|
npm run cloud:link | Connect local project to cloud | First-time setup |
npm run cloud:migrate | Push pending migrations to cloud | After creating migrations |
npm run cloud:status | Show which migrations are applied | Verify state |
npm run cloud:diff | Compare local vs cloud schema | Troubleshooting |
npm run cloud:reset | DROP ALL TABLES and rerun migrations | Never in prod! |
New Cloud Project Setup
Step 1: Create Supabase Project
- Go to supabase.com/dashboard
- Click "New Project"
- Enter project name (e.g., "movalab-prod")
- Set database password (SAVE THIS!)
- Select region closest to your users
- Wait 2-3 minutes for provisioning
Step 2: Get Your Project Reference
Your project URL looks like:
https://supabase.com/dashboard/project/abcdefghijklmnop
^^^^^^^^^^^^^^^^
This is your project ref
Or find it at: Project Settings → General → Reference ID
Step 3: Login & Link
# Login to Supabase CLI
npx supabase login
# Link to your project
npm run cloud:link -- --project-ref YOUR_PROJECT_REF
# When prompted, enter your database password
Step 4: Push Migrations & Verify
# Push migrations
npm run cloud:migrate
# Expected: "Applying migration 20250129000000_baseline.sql...done"
# Verify
npm run cloud:status
# Should show the baseline migration as applied
Step 5: Update Environment Variables
# Update .env.local for cloud
NEXT_PUBLIC_SUPABASE_URL=https://YOUR_PROJECT_REF.supabase.co
NEXT_PUBLIC_SUPABASE_PUBLISHABLE_DEFAULT_KEY=your-publishable-key
# Get these from: Project Settings → API
Creating New Migrations
When adding new features that require schema changes, follow this workflow.
# Step 1: Create migration file
npx supabase migration new add_feature_name
# Creates: supabase/migrations/YYYYMMDDHHMMSS_add_feature_name.sql
# Step 2: Write your SQL (see example below)
# Step 3: Test locally
npm run docker:reset   # Resets and applies all migrations
npm run docker:health  # Verify everything works
# Step 4: Push to cloud
npm run cloud:migrate
npm run cloud:status   # Verify it applied
Example Migration SQL
-- Create new table
CREATE TABLE IF NOT EXISTS feature_flags (
id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
name TEXT NOT NULL UNIQUE,
enabled BOOLEAN DEFAULT false,
created_at TIMESTAMPTZ DEFAULT NOW()
);
-- ALWAYS enable RLS
ALTER TABLE feature_flags ENABLE ROW LEVEL SECURITY;
-- Add policies
CREATE POLICY "Superadmins can manage feature flags"
ON feature_flags FOR ALL
USING (
EXISTS (
SELECT 1 FROM user_profiles
WHERE id = auth.uid()
AND is_superadmin = true
)
);
Best Practices
One migration = one feature
Easy to rollback specific changes
Always enable RLS
Security by default
Add policies immediately
Tables without policies = no access
Use IF NOT EXISTS
Migrations can run multiple times
Use CREATE OR REPLACE FUNCTION
Avoids 'already exists' errors
Test locally before cloud
Catch errors early
RLS Security Architecture
Row Level Security (RLS) enforces data access at the PostgreSQL level. Even if application code has a bug, the database blocks unauthorized access.
Policy Patterns Used
1. Superadmin Bypass
CREATE POLICY "Superadmins have full access"
ON table_name FOR ALL
USING (user_is_superadmin(auth.uid()));
2. Assignment-Based Access
CREATE POLICY "Users can view assigned projects"
ON projects FOR SELECT
USING (
id IN (
SELECT project_id FROM project_assignments
WHERE user_id = auth.uid()
)
);

3. Ownership-Based Access
CREATE POLICY "Users can edit own time entries"
ON time_entries FOR UPDATE
USING (user_id = auth.uid());
4. Permission-Based Access
CREATE POLICY "Users with permission can view all"
ON projects FOR SELECT
USING (user_has_permission(auth.uid(), 'view_all_projects'));
Security Guarantees
- Application bugs can't leak data: RLS enforces at the DB level
- Direct SQL respects RLS: even psql connections obey policies
- Service role bypasses RLS: only for trusted server-side operations
Troubleshooting
Local Development Issues
| Issue | Solution |
|---|---|
| Docker not running | Start Docker Desktop |
| Supabase won't start | npm run docker:clean then npm run docker:start |
| Port already in use | Kill other containers or change ports |
| function X does not exist | npm run docker:reset |
| permission denied for table | Check user roles/assignments |
| Very slow | Increase Docker memory to 4GB+ |
| relation already exists | Use IF NOT EXISTS in migration |
Cloud Deployment Issues
| Issue | Solution |
|---|---|
| Project not linked | npm run cloud:link -- --project-ref YOUR_REF |
| Not logged in | npx supabase login |
| Migration already applied | This is OK - check npm run cloud:status |
| Permission denied | Check database password |
| Schema drift detected | npm run cloud:diff to diagnose |
| Function already exists | Use CREATE OR REPLACE |
| Connection refused | Check network, unpause project if free tier |
Complete Diagnostic Commands
# Check Docker
docker --version && docker ps

# Check Supabase
npx supabase status

# Check Node
node --version && npm --version

# Check migrations
ls supabase/migrations/
npm run cloud:status

# Full reset (nuclear option)
npm run docker:clean
rm -rf node_modules
npm install
npm run docker:start
Vercel Deployment
- Connect GitHub repository to Vercel
- Set environment variables in Vercel dashboard
- Configure build command: npm run build
- Deploy on push to main branch
Important: Email Confirmation Settings
By default, Supabase requires email confirmation for new user signups. The confirmation email contains a link that redirects to your Supabase project URL — not your actual app domain. You must configure one of these options:
Option 1: Disable Email Confirmation (Easier)
Go to Supabase Dashboard → Authentication → Providers → Email and disable "Confirm email". Users can sign in immediately after registration.
Option 2: Update Redirect URL (Recommended for Production)
Go to Supabase Dashboard → Authentication → URL Configuration and set the "Site URL" to your actual domain (e.g., https://yourdomain.com). Also add your domain to "Redirect URLs".
First-Time Superadmin Setup
For fresh cloud deployments, you need to create the first superadmin account. MovaLab includes a secure one-time setup mechanism that only works when zero superadmins exist.
Step 1: Run Base Schema SQL
Open Supabase Dashboard → SQL Editor and run the contents of supabase/setup-base-schema.sql
Creates 5 departments, 15 roles, and system roles (Superadmin, Client, No Assigned Role).
Step 2: Configure Setup Secret
# Generate a secure secret
openssl rand -hex 32

# Add to Vercel Environment Variables:
SETUP_SECRET=your-generated-secret
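The setup route compares the submitted key against the SETUP_SECRET environment variable. A minimal sketch of how such a check can be written (the function name and shape here are illustrative, not MovaLab's actual implementation) uses Node's constant-time comparison to avoid timing leaks:

```typescript
import { timingSafeEqual } from "node:crypto";

// Hypothetical helper: verify a submitted setup key against SETUP_SECRET.
// timingSafeEqual requires equal-length buffers, so mismatched lengths are
// rejected up front (this leaks length only, not content).
export function verifySetupSecret(
  submitted: string,
  expected: string | undefined,
): boolean {
  if (!expected || expected.length === 0) return false; // SETUP_SECRET not configured
  const a = Buffer.from(submitted);
  const b = Buffer.from(expected);
  if (a.length !== b.length) return false;
  return timingSafeEqual(a, b); // constant-time compare
}
```

Extra whitespace in the environment variable would make this comparison fail, which is why the troubleshooting note below warns about stray spaces.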
Step 3: Create Account & Become Superadmin
- Sign up with your email at your deployment URL
- Navigate to /setup?key=YOUR_SETUP_SECRET
- Enter your secret and click "Become Superadmin"
- You'll be redirected to the Admin panel
Security Features
- One-time use: automatically disables after the first superadmin is created
- Secret verification: requires an exact match with the environment variable
- Authentication required: must be logged in with a verified email
Troubleshooting
Configuration Required
Add SETUP_SECRET to environment variables and redeploy
Setup Complete
A superadmin exists. Use Admin → Role Management to add more.
Create Account First
Sign up/login first, then return to /setup?key=...
Invalid Setup Secret
Double-check SETUP_SECRET value, no extra spaces
Demo Mode
Demo mode allows you to showcase MovaLab without risking data corruption or requiring user signup. Perfect for public demonstrations, testing, and evaluation.
Critical: Database Architecture
Local Docker Demo and Local Docker "Production" use the SAME database. The Supabase CLI creates a single set of containers. Demo mode is purely a UI/API protection layer — it does NOT create a separate database.
Local Docker Database (ONE database)
├── With NEXT_PUBLIC_DEMO_MODE=true → Demo UI, blocked actions
└── With NEXT_PUBLIC_DEMO_MODE=false → Full UI, all actions allowed
    ↑ Same data in both cases!

For true data isolation, use Cloud Supabase for production.
When to Use Demo Mode
| Scenario | Use Demo Mode? |
|---|---|
| Public product demo | Yes |
| Internal testing | Yes |
| Evaluating MovaLab | Yes |
| Development with test data | Optional |
| Production deployment | No |
| Staging with real data | No |
Quick Start
Option 1: Using npm run dev:demo (Recommended)
# One command starts everything
npm run dev:demo

# This will:
# 1. Check Docker is installed and running
# 2. Start Supabase containers (PostgreSQL, Auth, API, etc.)
# 3. Wait for services to be ready
# 4. Start Next.js with demo mode enabled

# When done, stop Docker to free RAM (2-4 GB typical usage)
npm run docker:stop
Option 2: Manual Setup with .env.local
# If Supabase containers are already running:

# 1. Add to .env.local:
NEXT_PUBLIC_DEMO_MODE=true

# 2. Start the dev server:
npm run dev
What Demo Mode Does
Quick-Login Buttons
One-click login as different roles — no signup required
Blocks Destructive Actions
Delete, remove, and dangerous operations are blocked
Hides Superadmin Access
Prevents exposure of sensitive admin features
Protects Demo Data
Keeps the demo environment clean for the next user
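A minimal sketch of how an env-driven gate like this can work. The real logic lives in lib/demo-mode.ts; the action names and messages below are illustrative assumptions:

```typescript
// Illustrative demo-mode gate driven by NEXT_PUBLIC_DEMO_MODE.
type BlockedAction = "delete_project" | "delete_account" | "manage_roles";

const BLOCKED_MESSAGES: Record<BlockedAction, string> = {
  delete_project: "Deleting projects is disabled in demo mode.",
  delete_account: "Deleting accounts is disabled in demo mode.",
  manage_roles: "Role management is disabled in demo mode.",
};

export function isDemoMode(env = process.env): boolean {
  // NEXT_PUBLIC_* vars are inlined at build time in Next.js, which is why
  // the dev server must restart after changing this flag.
  return env.NEXT_PUBLIC_DEMO_MODE === "true";
}

export function isActionBlocked(action: BlockedAction, env = process.env): boolean {
  return isDemoMode(env) && action in BLOCKED_MESSAGES;
}

export function getBlockedActionMessage(action: BlockedAction): string {
  return BLOCKED_MESSAGES[action];
}
```

Because the check is a client/server convenience flag rather than a database boundary, the same destructive SQL would still succeed if run directly, which is the point of the "not a security boundary" warning later in this section.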
Demo Users
Demo mode provides 6 pre-configured users. Password for all: Test1234!
| User | Email | Role | Access |
|---|---|---|---|
| Alex Executive | exec@test.local | Executive Director | Full visibility |
| Morgan Manager | manager@test.local | Account Manager | Manages accounts/teams |
| Pat ProjectManager | pm@test.local | Project Manager | Project execution |
| Dana Designer | designer@test.local | Senior Designer | Creative work |
| Dev Developer | dev@test.local | Senior Developer | Technical work |
| Chris Client | client@test.local | Client | Portal only |
Note: Superadmin (superadmin@test.local) is intentionally excluded from demo mode to protect sensitive admin features. It still exists in the database.
Blocked Actions in Demo Mode
Switching from Demo to Production
The correct way to transition from demo/testing to production:
# 1. Create a Cloud Supabase project at supabase.com
#    Name it "movalab-prod" (separate from any demo project)

# 2. Link and push migrations
npx supabase link --project-ref YOUR_PROJECT_REF
npx supabase db push

# 3. Create real users via Supabase dashboard or app invites
#    Do NOT use demo seed users in production!

# 4. Update .env.local for cloud
NEXT_PUBLIC_SUPABASE_URL=https://your-project-id.supabase.co
NEXT_PUBLIC_SUPABASE_PUBLISHABLE_DEFAULT_KEY=your-cloud-key
# NEXT_PUBLIC_DEMO_MODE=true   # <-- Remove or set to false

# 5. Stop Docker (no longer needed)
npm run docker:stop

# 6. Start in production mode
npm run dev
Cloud Demo Setup (Vercel)
For production demos (shareable URL, no Docker required):
# Set these environment variables in Vercel dashboard:
NEXT_PUBLIC_SUPABASE_URL=https://your-demo-project.supabase.co
NEXT_PUBLIC_SUPABASE_PUBLISHABLE_DEFAULT_KEY=your-publishable-key
NEXT_PUBLIC_DEMO_MODE=true
NEXT_PUBLIC_APP_URL=https://demo.your-domain.com
Recommended: Use separate Supabase projects: movalab-prod for production, movalab-demo for public demos.
Automatic Daily Reset (Cron Job)
For cloud demo deployments, MovaLab includes a Vercel cron job that automatically resets demo data daily at midnight UTC. This keeps the demo fresh with current-date-relative sample data.
Runs Daily at Midnight UTC
Configured in vercel.json
Only in Demo Mode
Won't execute on production deployments
Fresh Dates
Projects, tasks, time entries use relative dates
Preserves Structure
Resets activity data, keeps users and accounts
Setup Steps
# 1. Generate a CRON_SECRET
openssl rand -hex 32

# 2. Get your Supabase service_role key from:
#    Supabase Dashboard → Settings → API → service_role (secret)

# 3. Add to Vercel Environment Variables:
NEXT_PUBLIC_DEMO_MODE=true
DEMO_MODE=true
CRON_SECRET=<your-generated-secret>
DEMO_SUPABASE_SERVICE_ROLE_KEY=<your-service-role-key>

# 4. Verify in Vercel Dashboard → Settings → Crons
#    You should see: /api/cron/reset-demo-data at 0 0 * * *
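The cron route's guard logic follows from the status codes documented in this section: 401 for a bad secret, 403 when demo mode is off, and success otherwise. A hypothetical sketch of that decision as a pure function (the route's actual implementation may differ):

```typescript
// Illustrative guard for the demo-reset cron route.
// Returns the HTTP status the route would respond with.
export function cronAuthStatus(
  authHeader: string | null,
  cronSecret: string,
  demoModeEnabled: boolean,
): number {
  // Vercel cron invokes the route with "Authorization: Bearer <CRON_SECRET>"
  if (authHeader !== `Bearer ${cronSecret}`) return 401; // Unauthorized
  if (!demoModeEnabled) return 403; // Demo mode is not enabled
  return 200; // proceed with the reset
}
```

Checking the secret before the demo-mode flag means an attacker probing the endpoint learns nothing about the deployment's configuration without the secret.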
Test the Cron Job
curl -X GET 'https://demo.your-domain.com/api/cron/reset-demo-data' \
-H 'Authorization: Bearer YOUR_CRON_SECRET'
# Success: {"success":true,"message":"Demo data reset successfully",...}
# Not demo mode: {"error":"Demo mode is not enabled",...} (403)
# Bad secret: {"error":"Unauthorized"} (401)

What Gets Reset:
Security Considerations
Do NOT Use Local Docker for Production
- No automatic backups: Docker volumes can be lost
- Single point of failure: your machine is the only copy
- No disaster recovery: hard drive dies = data gone
- No high availability: computer sleeps = app unavailable
- Resource intensive: consumes 2-4 GB RAM constantly
Always use Cloud Supabase for production data.
Demo mode is NOT a security boundary. It provides UI convenience and accidental deletion prevention, but NOT database isolation or API security. For true security, use separate Cloud Supabase projects.
Technical Implementation
| File | Purpose |
|---|---|
| lib/demo-mode.ts | Core demo mode logic and configuration |
| lib/api-demo-guard.ts | API route protection |
| components/demo-login-form.tsx | Quick-login UI component |
| scripts/start-demo.js | Smart startup script |
Using Demo Mode in Code
// Client-side detection
import { isDemoMode } from '@/lib/demo-mode';
if (isDemoMode()) {
// Show demo UI
}
// Blocking actions (client)
import { isActionBlocked, getBlockedActionMessage } from '@/lib/demo-mode';
if (isActionBlocked('delete_project')) {
toast.error(getBlockedActionMessage('delete_project'));
return;
}
// Blocking actions (API)
import { checkDemoModeForDestructiveAction } from '@/lib/api-demo-guard';
export async function DELETE(request: NextRequest) {
const blocked = checkDemoModeForDestructiveAction('delete_project');
if (blocked) return blocked;
// Continue with delete...
}
// Adding new blocked actions:
// 1. Add type to lib/demo-mode.ts BlockedAction
// 2. Add message to BLOCKED_MESSAGES
// 3. Use in your component or API route

Reset & Troubleshooting
# Full database reset (destroys ALL local data)
npm run docker:seed

# Reset Docker completely if corrupted
npm run docker:clean   # stops + prunes containers
npm run dev:demo       # start fresh

# Demo mode not working? Check:
# 1. NEXT_PUBLIC_DEMO_MODE=true in .env.local
# 2. Restart dev server (env read at startup)
# 3. Clear browser cache (Ctrl+Shift+R)

# High memory usage?
npm run docker:stop    # Stop when not in use
# Or use cloud demo mode instead
FAQ
Can demo users create data?
Yes! They can create projects, tasks, log time, etc. They just can't delete data.
Is demo data persistent?
Yes, until you run npm run docker:seed (local) or manually reset (cloud).
Can I customize demo users?
Yes, edit lib/demo-mode.ts to change users, colors, descriptions, and blocked actions.
How do I add more demo data?
Edit supabase/seed.sql or scripts/create-seed-users.ts.
What if I disable demo mode locally?
You get full access to delete/admin features, but it's the SAME database — no separate 'local production' exists.
Contributing
Ways to Contribute
Share Workflows
Your processes become features
Request Features
Open a GitHub issue
Report Bugs
Test and report issues
Improve Docs
Fix typos, add examples
Contribute Code
PRs welcome
Join Discussions
Help in Discord
Pull Request Process
# 1. Fork and clone
git clone https://github.com/YOUR_USERNAME/movalab.git

# 2. Create branch
git checkout -b feature/your-feature-name

# 3. Make changes and test
npm run build
npm run test

# 4. Push and create PR
git push origin feature/your-feature-name
Code Style
- TypeScript with proper types (no any)
- Functional components with hooks
- Follow existing code patterns
- Include RLS policies for new tables
- Run npm run build before PR
AI Development
This codebase is optimized for Claude Code. The repository includes a CLAUDE.md file (2,139 lines) that provides full project context.
Using Claude Code
# Install Claude Code
npm install -g @anthropic-ai/claude-code

# Navigate to project
cd movalab

# Start Claude Code
claude

# Claude reads CLAUDE.md automatically and understands:
# - Project structure (90k+ lines)
# - Database schema (33 tables, 100+ RLS policies)
# - Permission system (~40 permissions)
# - API patterns (83+ endpoints)
# - Coding conventions
Recommended MCP Servers
Fetch up-to-date docs for Next.js, Supabase, Tailwind:
claude mcp add context7

Query database, manage migrations, inspect schema:
claude mcp add supabase

Create PRs, manage issues directly from Claude:
claude mcp add github

Direct database access for complex queries:
claude mcp add postgres

Best Practices
- Be specific in prompts - reference existing files
- Always include RLS when creating tables
- Reference existing patterns in the codebase
- Test with npm run build before PRs
Troubleshooting
Comprehensive troubleshooting guide for common issues encountered during development, deployment, and runtime. Each section provides specific symptoms, causes, and step-by-step solutions.
Setup & Installation Issues
Docker not starting
Symptoms: Container fails to start, docker-compose up hangs, or port binding errors
Causes: Docker Desktop not running, Existing containers using ports, Insufficient disk space
Solution:
1. Ensure Docker Desktop is running (check menu bar icon)
2. Run docker ps -a to see all containers
3. Clean up: docker-compose down -v && docker system prune
4. Restart Docker Desktop
5. Re-run: npm run docker:start
Port 3000 already in use
Symptoms: EADDRINUSE error, dev server won't start
Causes: Previous Next.js process still running, Another application using port 3000
Solution:
1. Quick fix: npm run dev:fresh (kills port and restarts)
2. Manual: lsof -i :3000 to find the PID, then kill <PID>
3. Alternative: PORT=3001 npm run dev
Database connection failed
Symptoms: Supabase client errors, queries timeout, auth fails
Causes: Wrong environment variables, Docker not running, Network issues
Solution:
1. Check .env.local has correct values:
   - NEXT_PUBLIC_SUPABASE_URL=http://127.0.0.1:54321
   - NEXT_PUBLIC_SUPABASE_PUBLISHABLE_DEFAULT_KEY=<your-key>
2. Verify Docker: docker ps | grep supabase
3. Test connection: curl http://127.0.0.1:54321/rest/v1/
Migrations failing
Symptoms: Tables not created, foreign key errors, schema mismatch
Causes: Previous failed migration, Migration order issues, Syntax errors in SQL
Solution:
1. Full reset: npm run docker:reset
2. This stops Docker, removes volumes, restarts, and reruns all migrations
3. If still failing, check supabase/migrations/ for syntax errors
4. Run migrations manually: npx supabase db push
npm install fails
Symptoms: Dependency resolution errors, peer dependency conflicts
Causes: Node.js version mismatch, Corrupted node_modules, Lock file issues
Solution:
1. Verify Node.js: node -v (requires 18.17+)
2. Clean install: rm -rf node_modules package-lock.json && npm install
3. If peer dependency issues persist: npm install --legacy-peer-deps
TypeScript errors on startup
Symptoms: Type errors during npm run dev, red underlines in IDE
Causes: Missing type definitions, Outdated types, Supabase types not generated
Solution:
1. Regenerate Supabase types: npx supabase gen types typescript
2. Restart the TypeScript server: Cmd+Shift+P > "TypeScript: Restart"
3. If issues persist: rm -rf .next && npm run dev
Docker Hub rate limit exceeded
Symptoms: "rate exceeded" or "too many requests" errors when pulling images
Causes: Anonymous Docker Hub access limited to 100 pulls/6hrs, Multiple dev environments sharing IP
Solution:
1. Create a free Docker Hub account: hub.docker.com/signup
2. Login to Docker Hub: docker login (enter username and password/access token)
3. This increases the limit to 200 pulls/6hrs
4. For CI: use docker login with an access token
Supabase containers won't stop
Symptoms: npm run docker:stop hangs, containers still running, port conflicts
Causes: Zombie processes, Volume locks, Corrupted container state
Solution:
1. Force stop: npm run docker:clean
2. If still stuck:
   docker stop $(docker ps -q --filter "name=supabase")
   docker rm $(docker ps -aq --filter "name=supabase")
3. Nuclear option (WARNING: deletes all Docker data):
   docker system prune -af
   docker volume prune -f
Windows: stdout is not a tty
Symptoms: Interactive prompts fail, script hangs waiting for input
Causes: Git Bash TTY incompatibility, Windows CMD limitations
Solution:
1. Use Windows CMD or PowerShell instead of Git Bash
2. Run the .bat script: scripts\first-time-setup.bat
3. Or prefix with winpty: winpty ./scripts/first-time-setup.sh
4. For the VS Code terminal: use PowerShell as the default shell
Authentication Issues
Login redirects to login page repeatedly
Symptoms: Clicking login redirects back to login, session doesn't persist
Causes: Cookie settings incorrect, JWT secret mismatch, Session expired
Solution:
1. Check browser cookies are enabled for localhost
2. Clear cookies: DevTools > Application > Cookies > Clear
3. Verify JWT_SECRET in .env matches the Supabase config
4. Check session refresh: auth.onAuthStateChange() is registered
Auth not working - wrong key
Symptoms: 401 Unauthorized on all requests, auth.getUser() returns null
Causes: Using anon key instead of publishable key, Key mismatch between env and Supabase
Solution:
1. Open the Supabase Dashboard: http://127.0.0.1:54323
2. Go to Settings > API
3. Copy the "anon public" key (NOT service_role!)
4. Set NEXT_PUBLIC_SUPABASE_PUBLISHABLE_DEFAULT_KEY in .env.local
5. Restart the dev server
User gets 403 Forbidden after login
Symptoms: Login succeeds but API calls return 403
Causes: User has no roles assigned, RLS policies blocking access, Permission cache stale
Solution:
1. Check the user_roles table: SELECT * FROM user_roles WHERE user_id = 'xxx'
2. Assign a default role: INSERT INTO user_roles (user_id, role_id) VALUES (...)
3. Clear the permission cache: wait 5 minutes or restart the server
4. Verify RLS: SELECT * FROM check_user_permission(user_id, 'VIEW_PROJECTS')
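The chain being debugged here (user → roles → permissions, plus per-user overrides) can be mirrored in a small helper. This is an illustrative sketch, not MovaLab's actual implementation; the field names follow the tables queried in this section:

```typescript
// Illustrative resolution of a user's effective permission set.
interface UserRole { user_id: string; role_id: string }
interface RolePermission { role_id: string; permission: string }
interface PermissionOverride { user_id: string; permission: string; granted: boolean }

export function effectivePermissions(
  userId: string,
  userRoles: UserRole[],
  rolePermissions: RolePermission[],
  overrides: PermissionOverride[] = [],
): Set<string> {
  // Roles held by the user (a user with zero rows here fails every check)
  const roleIds = new Set(
    userRoles.filter(r => r.user_id === userId).map(r => r.role_id),
  );
  // Union of permissions granted by those roles
  const perms = new Set(
    rolePermissions.filter(rp => roleIds.has(rp.role_id)).map(rp => rp.permission),
  );
  // permission_overrides can grant or revoke individual permissions
  for (const o of overrides) {
    if (o.user_id !== userId) continue;
    if (o.granted) perms.add(o.permission);
    else perms.delete(o.permission);
  }
  return perms;
}
```

An empty result from the first step (no roles) explains the "login succeeds but everything is 403" symptom: authentication passed, but the permission union is empty.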
Password reset email not sending
Symptoms: No email received, no error shown
Causes: Email provider not configured, Wrong redirect URL, SMTP settings missing
Solution:
1. Local dev uses Inbucket: http://127.0.0.1:54324
2. Check Inbucket for test emails
3. For production: configure SMTP in the Supabase Dashboard
4. Verify the redirect URL in auth.resetPasswordForEmail()
Permission & RLS Issues
User can't see resources they should access
Symptoms: Empty lists, 404 errors on valid resources, missing data
Causes: Missing project assignment, RLS policy too restrictive, Account isolation
Solution:
1. Check project_assignments: Does user have active assignment?
SELECT * FROM project_assignments WHERE user_id = 'xxx' AND removed_at IS NULL
2. Check account_members: Is user member of the account?
3. Verify RLS with explain: EXPLAIN (ANALYZE, VERBOSE) SELECT * FROM projects
4. Test as user: SET request.jwt.claims TO '{"sub": "user-id"}'

Permission checks always returning false
Symptoms: All permission checks fail, user can't perform any actions
Causes: User has no roles, Roles have no permissions, check_permission function error
Solution:
1. Debug the permission chain:
   SELECT * FROM user_roles WHERE user_id = 'xxx';
   SELECT * FROM role_permissions WHERE role_id = 'yyy';
   SELECT * FROM permissions WHERE id = 'zzz';
2. Run the permission debugger: npm run debug:permissions
3. Check for permission overrides: SELECT * FROM permission_overrides WHERE user_id = 'xxx'
RLS policies causing query timeouts
Symptoms: Slow queries, timeouts on large tables, high database CPU
Causes: Complex subqueries in RLS, Missing indexes, N+1 policy checks
Solution:
1. Check the query plan: EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM table
2. Add indexes on foreign keys used in RLS: CREATE INDEX idx_project_assignments_user ON project_assignments(user_id)
3. Simplify RLS policies to use direct lookups
4. Consider caching permission results in the application layer
Permission Debugging Tools
# Interactive permission debugger
npm run debug:permissions
# Prompts for user ID and permission name
# Shows: user roles, inherited permissions, context checks, final result
# Validate all permissions in the system
npm run validate:permissions
# Checks: orphaned permissions, circular dependencies, missing defaults
# Fix common permission problems
npm run fix:permissions
# Repairs: missing role assignments, default role setup, permission cache
# Test RLS policies directly in psql
docker exec -it supabase_db_movalab psql -U postgres -d postgres
# Check permission for specific user
SELECT check_user_permission('user-uuid', 'MANAGE_PROJECTS');
# List all effective permissions for a user
SELECT p.name, p.description
FROM permissions p
JOIN role_permissions rp ON p.id = rp.permission_id
JOIN user_roles ur ON rp.role_id = ur.role_id
WHERE ur.user_id = 'user-uuid';

Workflow Issues
Workflow won't progress to next node
Symptoms: Transition button disabled, workflow stuck on current node
Causes: No valid connections, User not assigned, Missing EXECUTE_WORKFLOWS permission
Solution:
1. Check connections exist: SELECT * FROM workflow_connections WHERE source_node_id = 'current-node'
2. Verify user assignment: SELECT * FROM workflow_instance_assignments WHERE node_id = 'target-node'
3. Check permission: SELECT check_user_permission(user_id, 'EXECUTE_WORKFLOWS')
4. For conditional transitions: check that the workflow_connections.condition JSON matches the instance data
Workflow snapshot not matching template
Symptoms: Instance shows old workflow version, changes not reflected
Causes: Snapshot frozen at instance creation, Template edited after instance started
Solution:
1. This is expected behavior: snapshots are intentional
2. View the snapshot: SELECT snapshot FROM workflow_instances WHERE id = 'xxx'
3. To apply a new template: complete the current instance, then start a new one
4. For critical fixes: UPDATE workflow_instances SET snapshot = (template JSON)
Parallel branches not completing correctly
Symptoms: Some branches show complete, workflow doesn't advance
Causes: Branch ID mismatch, Missing node transitions, Incomplete parallel tracking
Solution:
1. Check all branch nodes completed: SELECT * FROM workflow_node_transitions WHERE instance_id = 'xxx' AND branch_id = 'branch-id'
2. Verify all branches finished: SELECT DISTINCT branch_id, status FROM workflow_node_transitions WHERE instance_id = 'xxx'
3. Check that the join node configuration matches the branch count
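The join condition being verified here reduces to: every expected branch must have at least one completed transition. A sketch of that check as a pure function (field names are illustrative, mirroring the workflow_node_transitions queries above):

```typescript
// Illustrative join-node check: should the workflow advance past a join?
interface BranchTransition {
  branch_id: string;
  status: "completed" | "pending";
}

export function allBranchesComplete(
  transitions: BranchTransition[],
  expectedBranchCount: number,
): boolean {
  // Distinct branch IDs that have reached a completed transition
  const done = new Set(
    transitions.filter(t => t.status === "completed").map(t => t.branch_id),
  );
  // A branch-count mismatch in the join config makes this never (or too
  // early) become true, which is the stuck-workflow symptom above.
  return done.size === expectedBranchCount;
}
```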
Time Tracking & Capacity Issues
Time entries not saving
Symptoms: Save button does nothing, entry disappears after refresh
Causes: Hours > 24 validation failure, Invalid date format, RLS blocking insert
Solution:
1. Check the constraint: hours_logged must be 0-24
2. Verify the entry_date format: YYYY-MM-DD
3. Check a project assignment exists for the user
4. Verify the insert RLS policy: EXPLAIN INSERT INTO time_entries (...) VALUES (...)
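The first two constraints can be checked client-side before the insert ever reaches the database. The project validates with Zod, but the same rules in dependency-free TypeScript look like this (function name and error strings are illustrative):

```typescript
// Illustrative pre-insert validation mirroring the time_entries constraints:
// hours_logged in [0, 24], entry_date as a real YYYY-MM-DD date.
export function validateTimeEntry(entry: {
  hours_logged: number;
  entry_date: string;
}): string[] {
  const errors: string[] = [];
  if (!(entry.hours_logged >= 0 && entry.hours_logged <= 24)) {
    errors.push("hours_logged must be between 0 and 24");
  }
  // Shape check first, then confirm the date actually parses
  if (
    !/^\d{4}-\d{2}-\d{2}$/.test(entry.entry_date) ||
    Number.isNaN(Date.parse(entry.entry_date))
  ) {
    errors.push("entry_date must be a valid YYYY-MM-DD date");
  }
  return errors; // empty array = valid
}
```

Surfacing these errors in the form explains the "save button does nothing" symptom: the insert was being rejected silently by the database constraint.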
Capacity showing 0% despite logged hours
Symptoms: Utilization always 0, remaining capacity infinite
Causes: No user_availability record, Wrong week_start_date, available_hours = 0
Solution:
1. Check availability record exists:
SELECT * FROM user_availability
WHERE user_id = 'xxx' AND week_start_date = 'YYYY-MM-DD'
2. Create if missing:
INSERT INTO user_availability (user_id, week_start_date, available_hours)
VALUES ('xxx', '2024-01-01', 40)
3. Verify week_start_date is a Monday

Clock session not ending automatically
Symptoms: Session runs forever, no 16-hour auto clock-out
Causes: Scheduled function not running, Function error, Clock time misconfigured
Solution:
1. Check the scheduled function: pg_cron.schedule
2. Manually run cleanup: SELECT auto_end_long_clock_sessions()
3. Verify the function exists: \df auto_end_long_clock_sessions
4. Check for errors in the function logs
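The rule the scheduled function enforces (auto clock-out after 16 hours, per the symptom above) is a simple elapsed-time decision. A sketch of that decision as a pure function, useful when reproducing the behavior in tests (names are illustrative):

```typescript
// Illustrative version of the 16-hour auto clock-out rule the scheduled
// job applies to open clock sessions.
const MAX_SESSION_HOURS = 16;

export function shouldAutoEnd(clockInIso: string, now = Date.now()): boolean {
  const elapsedHours = (now - Date.parse(clockInIso)) / 3_600_000;
  return elapsedHours > MAX_SESSION_HOURS;
}
```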
Build & Deployment Issues
Build fails with type errors
Symptoms: npm run build fails, TypeScript errors in production build
Causes: Strict mode catching issues, Missing type imports, Outdated generated types
Solution:
1. Run a type check: npx tsc --noEmit
2. Regenerate Supabase types: npx supabase gen types typescript --local > lib/database.types.ts
3. Fix strict null checks: add proper null handling
4. Check for 'any' types: grep -r ": any" src/
Vercel deployment fails
Symptoms: Build succeeds locally but fails on Vercel
Causes: Missing environment variables, Different Node version, Build cache issues
Solution:
1. Check all env vars are set in the Vercel dashboard
2. Match the Node version: engines.node in package.json
3. Clear the Vercel cache: redeploy with "Clear Cache"
4. Check the build logs for the specific error
Production database connection refused
Symptoms: Site deployed but can't connect to Supabase
Causes: Wrong production URL, Supabase project not set up, Connection pooling issues
Solution:
1. Verify the Supabase production project exists
2. Set the correct env vars in Vercel:
   - NEXT_PUBLIC_SUPABASE_URL (your project URL)
   - NEXT_PUBLIC_SUPABASE_PUBLISHABLE_DEFAULT_KEY
3. Check connection pooling is enabled for production
Performance Issues
Slow page loads
Symptoms: Pages take 5+ seconds to load, high TTFB
Causes: Large data fetching, Missing pagination, Waterfall requests
Solution:
1. Check the network tab for slow requests
2. Add pagination to list queries: .range(0, 19) // first 20 items
3. Parallelize independent queries: Promise.all([fetchA(), fetchB()])
4. Add loading states with Suspense
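Step 3 in miniature: two independent requests awaited one after another take the sum of their latencies, while Promise.all runs them concurrently and takes roughly the slower one. fetchA/fetchB below are stand-ins with simulated delays:

```typescript
// Simulated independent fetches (stand-ins for real Supabase queries).
const delay = <T>(ms: number, value: T) =>
  new Promise<T>(resolve => setTimeout(() => resolve(value), ms));

const fetchA = () => delay(50, "projects");
const fetchB = () => delay(50, "tasks");

// Waterfall: total latency ≈ 100 ms (each await blocks the next request)
export async function loadWaterfall(): Promise<[string, string]> {
  const a = await fetchA();
  const b = await fetchB();
  return [a, b];
}

// Parallel: both requests start immediately, total latency ≈ 50 ms
export async function loadParallel(): Promise<[string, string]> {
  return Promise.all([fetchA(), fetchB()]);
}
```

This only helps when the queries are genuinely independent; if the second query needs the first one's result, the waterfall is unavoidable and pagination or caching are the remaining levers.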
Database queries timing out
Symptoms: 504 Gateway Timeout, queries > 30s
Causes: Missing indexes, Complex RLS policies, Large table scans
Solution:
1. Add indexes on frequently queried columns: CREATE INDEX idx_tasks_project ON tasks(project_id)
2. Analyze query plans: EXPLAIN (ANALYZE, BUFFERS) SELECT ...
3. Simplify RLS policies where possible
4. Add query result caching
Debug Commands Reference
# Development Commands
npm run dev              # Start development server
npm run dev:fresh        # Kill port 3000 and restart
npm run build            # Production build (catches type errors)
npm run lint             # Run ESLint
npx tsc --noEmit         # Type check without build

# Docker Commands
npm run docker:start     # Start Supabase locally
npm run docker:stop      # Stop Supabase
npm run docker:reset     # Reset database completely
npm run docker:logs      # View Docker logs

# Database Commands
npx supabase db push     # Apply migrations
npx supabase db reset    # Reset and reseed
npx supabase gen types typescript   # Regenerate types

# Debug Commands
npm run debug:permissions      # Interactive permission debugger
npm run validate:permissions   # Check permission consistency
npm run test:rls               # Test RLS policies

# Useful psql Commands (inside Docker)
docker exec -it supabase_db_movalab psql -U postgres -d postgres
\dt                  # List tables
\d table_name        # Describe table
\df function_name    # Describe function
SELECT * FROM pg_stat_activity;   # Active queries
Getting Help
Ready to Contribute?
Clone the repo, run Claude Code, and start building.