# Migration Guide
UberLotto v2 uses Supabase migrations to manage database schema changes. This guide covers the migration directory structure, existing migrations, and how to create new ones.
## Migration Directory

```
uberlotto-v2/
└── supabase/
    └── migrations/
        ├── 20251113002700_create_security_tables.sql
        ├── 20251113002700_create_security_tables_rollback.sql
        └── 20260218000000_payment_logging_system.sql
```

Migrations follow the Supabase CLI naming convention:

```
<YYYYMMDDHHMMSS>_<description>.sql
```

## Existing Migrations
### 20251113002700_create_security_tables.sql
**Date:** 2025-11-13
**Purpose:** Plisio Payment Gateway Hardening — replay attack prevention and security logging.
#### Tables Created
| Table | Primary Key | Description |
|---|---|---|
| `webhook_nonces` | `id BIGSERIAL` | Replay attack prevention via SHA-256 nonce hashes |
| `security_events` | `id BIGSERIAL` | Security audit log with GDPR-compliant data masking |
> **WARNING**
>
> Both tables use `BIGSERIAL` primary keys (auto-incrementing `bigint`), not UUID. This differs from the other tables in the database, which use `BIGINT GENERATED BY DEFAULT AS IDENTITY`.
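For reference, the two primary-key styles look like this (illustrative table names and column lists, not the actual definitions):

```sql
-- Style used by webhook_nonces / security_events (this migration):
-- BIGSERIAL creates an implicit sequence behind a bigint column.
CREATE TABLE IF NOT EXISTS example_bigserial (
  id BIGSERIAL PRIMARY KEY
);

-- Style used by the other tables in the database:
CREATE TABLE IF NOT EXISTS example_identity (
  id BIGINT GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY
);
```

Both produce auto-incrementing `bigint` keys; the identity form is the SQL-standard spelling, while `BIGSERIAL` is the older PostgreSQL shorthand.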
#### Indexes Created
`webhook_nonces`:

- `idx_webhook_nonces_nonce_hash` — fast duplicate lookup
- `idx_webhook_nonces_expires_at` — efficient cleanup
- `idx_webhook_nonces_txn_id` — partial index on `txn_id`
- `idx_webhook_nonces_created_at` — descending time order
`security_events`:

- `idx_security_events_event_type`
- `idx_security_events_severity`
- `idx_security_events_created_at` — descending time order
- `idx_security_events_txn_id` — partial index
- `idx_security_events_client_ip` — partial index
- `idx_security_events_status`
- `idx_security_events_source`
- `idx_security_events_type_created` — composite index `(event_type, created_at DESC)`
#### RLS Policies
Both tables enable Row Level Security with:

- **Service role full access** — `auth.role() = 'service_role'`
- **Admin read access** (`security_events` only) — authenticated users with `role = 'admin'` in the JWT
The migration uses conditional policy creation (an existence check against `pg_policies`, since `CREATE POLICY` has no `IF NOT EXISTS` clause) to safely handle re-runs.
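That conditional pattern looks like the following sketch (shown for the service-role policy on `webhook_nonces`; the actual migration applies it to both tables):

```sql
DO $$
BEGIN
  -- CREATE POLICY does not support IF NOT EXISTS,
  -- so check the pg_policies catalog first.
  IF NOT EXISTS (
    SELECT 1 FROM pg_policies
    WHERE schemaname = 'public'
      AND tablename  = 'webhook_nonces'
      AND policyname = 'Service role full access'
  ) THEN
    CREATE POLICY "Service role full access" ON webhook_nonces
      FOR ALL USING (auth.role() = 'service_role');
  END IF;
END $$;
```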
#### Database Functions Created
| Function | Purpose | Schedule |
|---|---|---|
| `cleanup_expired_nonces()` | Deletes nonces past their `expires_at` | Every 5 minutes |
| `cleanup_old_security_events()` | Removes events older than 1 year | Monthly |
| `get_recent_security_events()` | Queries recent events with filtering | On demand |
All functions are created with `SECURITY DEFINER` and granted `EXECUTE` permission to the `service_role`.
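As a minimal sketch of that pattern (the real function bodies live in the migration file; this assumes the cleanup simply deletes rows past `expires_at`, as described above):

```sql
CREATE OR REPLACE FUNCTION cleanup_expired_nonces()
RETURNS void
LANGUAGE sql
SECURITY DEFINER  -- runs with the owner's privileges, not the caller's
AS $$
  DELETE FROM webhook_nonces WHERE expires_at < NOW();
$$;

GRANT EXECUTE ON FUNCTION cleanup_expired_nonces() TO service_role;
```

`SECURITY DEFINER` lets the scheduled job delete rows without the caller needing direct table privileges; the explicit `GRANT` restricts who may invoke it.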
#### Rollback
A companion rollback script is provided at:

```
supabase/migrations/20251113002700_create_security_tables_rollback.sql
```

It drops all functions and tables created by the migration:

```sql
DROP FUNCTION IF EXISTS get_recent_security_events(TEXT, TEXT, INTEGER, INTEGER) CASCADE;
DROP FUNCTION IF EXISTS cleanup_old_security_events() CASCADE;
DROP FUNCTION IF EXISTS cleanup_expired_nonces() CASCADE;
DROP TABLE IF EXISTS security_events CASCADE;
DROP TABLE IF EXISTS webhook_nonces CASCADE;
```

> **DANGER**
>
> Running the rollback deletes all security event logs and webhook nonces permanently. Back up data before running.
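One way to take that backup, assuming direct Postgres access (the `DATABASE_URL` connection string and output filename are placeholders):

```shell
# Dump only the two tables the rollback will drop
pg_dump "$DATABASE_URL" \
  --table=public.webhook_nonces \
  --table=public.security_events \
  --file=security_tables_backup.sql
```

The resulting file can be replayed with `psql` if the data needs to be restored later.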
### 20260218000000_payment_logging_system.sql
**Date:** 2026-02-18
**Purpose:** Unified Payment Transaction Logging — replaces `wallet_load_credit` and `pending_transactions` with a single `payment_transactions` table and an immutable `payment_transaction_events` audit log.
#### What It Drops
| Object | Type | Reason |
|---|---|---|
| `wallet_load_credit` | Table | Replaced by `payment_transactions` |
| `pending_transactions` | Table | Replaced by `payment_transactions` |
| `pending_transactions_id_seq` | Sequence | Orphaned after table drop |
> **DANGER**
>
> The migration drops the legacy tables. Existing data is migrated into `payment_transactions` before the drop — see Data Migration Logic below.
#### Tables Created
| Table | Primary Key | Description |
|---|---|---|
| `payment_transactions` | `id UUID` (`gen_random_uuid()`) | Unified payment records for MoonPay + Plisio |
| `payment_transaction_events` | `id BIGINT` (`GENERATED ALWAYS AS IDENTITY`) | Immutable status change audit log |
#### Indexes Created (10)
`payment_transactions`:

- `idx_pt_user_email` — user email lookups
- `idx_pt_user_id` — partial index on `user_id` (`WHERE NOT NULL`)
- `idx_pt_active_status` — partial index on active statuses (`pending`/`processing`/`awaiting_confirmation`)
- `idx_pt_provider_status` — composite `(provider, status)`
- `idx_pt_created_at` — `created_at DESC`
- `idx_pt_provider_txn_id` — partial index (`WHERE NOT NULL`)
- `idx_pt_order_number` — order number lookups
- `idx_pt_expires_pending` — partial index on `expires_at` (`WHERE status = 'pending'`)
`payment_transaction_events`:

- `idx_pte_transaction_id` — event lookups by transaction
- `idx_pte_txn_created` — composite `(transaction_id, created_at DESC)`
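The partial and composite forms above follow standard Postgres index syntax; two representative sketches (predicates inferred from the descriptions, not copied from the migration):

```sql
-- Partial index: only rows still pending are indexed,
-- keeping the index small for the expiry-sweep query.
CREATE INDEX IF NOT EXISTS idx_pt_expires_pending
  ON payment_transactions (expires_at)
  WHERE status = 'pending';

-- Composite index: serves "latest events for a transaction"
-- without a separate sort step.
CREATE INDEX IF NOT EXISTS idx_pte_txn_created
  ON payment_transaction_events (transaction_id, created_at DESC);
```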
#### Functions Created
| Function | Type | Description |
|---|---|---|
| `set_updated_at()` | Trigger | Auto-sets `updated_at` on row update |
| `fn_log_transaction_status_change()` | Trigger | Logs status transitions to the events table; reads `app.changed_by` and `app.change_reason` session vars |
| `fn_guard_max_events_per_transaction()` | Trigger | Prevents more than 50 events per transaction |
| `fn_guard_immutable_completed()` | Trigger | Prevents reopening completed/refunded transactions |
| `expire_stale_transactions()` | Utility | Cron: marks pending transactions past `expires_at` as expired |
| `archive_old_events(interval)` | Utility | Cleanup: removes events older than the specified interval (default 90 days) |
| `get_transaction_summary(email)` | Utility | Returns aggregated stats for admin dashboards |
| `check_cancellation_rate(email)` | Utility | Anti-abuse: returns `FALSE` if 10+ cancels/hour |
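From the SQL editor, the utility functions would be invoked roughly like this (the email address is a placeholder, and the result columns depend on the actual function definitions):

```sql
-- Cron job body: mark pending transactions past expires_at as expired
SELECT expire_stale_transactions();

-- Aggregated stats for one user (admin dashboards)
SELECT * FROM get_transaction_summary('user@example.com');

-- Anti-abuse check: FALSE once the user hits 10+ cancels in an hour
SELECT check_cancellation_rate('user@example.com');
```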
#### Triggers Created
| Trigger | Table | Event | Function |
|---|---|---|---|
| `trg_payment_transactions_updated_at` | `payment_transactions` | `BEFORE UPDATE` | `set_updated_at()` |
| `trg_log_status_change` | `payment_transactions` | `AFTER UPDATE` (status) | `fn_log_transaction_status_change()` |
| `trg_guard_max_events` | `payment_transaction_events` | `BEFORE INSERT` | `fn_guard_max_events_per_transaction()` |
| `trg_guard_immutable_completed` | `payment_transactions` | `BEFORE UPDATE` (status) | `fn_guard_immutable_completed()` |
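Because `fn_log_transaction_status_change()` reads the `app.changed_by` and `app.change_reason` session variables, a caller can attribute a status change by setting them in the same transaction. A sketch (the order number and status value are placeholders):

```sql
BEGIN;

-- Attribution picked up by the AFTER UPDATE trigger
SET LOCAL app.changed_by   = 'admin@example.com';
SET LOCAL app.change_reason = 'manual review';

UPDATE payment_transactions
SET status = 'completed'
WHERE order_number = 'ORDER-12345';

COMMIT;
```

`SET LOCAL` scopes the variables to this transaction, so concurrent sessions cannot see or overwrite each other's attribution.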
#### RLS Policies
Both tables enable Row Level Security with:

- **Service role full access** — `service_role` bypasses RLS automatically
- **Authenticated user read** — users can read their own transactions (matched by `user_email = auth.jwt()->>'email'`)
- **Authenticated user events read** — users can read events for their own transactions (via subquery on `payment_transactions`)
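A sketch of the two user-read policies described above (policy names are illustrative; the migration's actual names may differ):

```sql
-- Users can read their own transactions
CREATE POLICY "Users read own transactions" ON payment_transactions
  FOR SELECT TO authenticated
  USING (user_email = auth.jwt()->>'email');

-- Users can read events belonging to their own transactions
CREATE POLICY "Users read own events" ON payment_transaction_events
  FOR SELECT TO authenticated
  USING (
    transaction_id IN (
      SELECT id FROM payment_transactions
      WHERE user_email = auth.jwt()->>'email'
    )
  );
```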
#### Data Migration Logic
The migration copies existing rows out of the legacy tables before dropping them:
- **From `pending_transactions`:** Maps `customer_email` → `user_email`, `customer_name` → `user_name`, `customer_lastname` → `user_lastname`, `customer_id` → `user_id`, `plisio_txn_id` → `provider_txn_id`, `plisio_invoice_url` → `provider_invoice_url`, `amount_usd` → `amount`. Sets `provider = 'plisio'` for all migrated rows.
- **From `wallet_load_credit`:** Maps `user_email`, `user_name`, `user_lastname`, `user_wallet_address` → `wallet_address`, `tx_hash`, `tx_amount` → `amount`, `tx_currency` → `crypto_currency`, `tx_amount_usd` → `amount_usd`, `tx_conversion_rate` → `conversion_rate`, `tx_status` → `status` (mapped: `'confirmed'` → `'completed'`). Sets `provider = 'plisio'` and generates `order_number` as `'LEGACY-' || id` for each row.
> **TIP**
>
> Both data migration steps use `INSERT ... SELECT` for atomicity. If the legacy tables are empty, the migration still succeeds — the INSERTs simply insert zero rows.
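The `pending_transactions` step, for example, boils down to a single statement of this shape (abridged sketch built from the column mapping above; the real migration may map additional columns and apply casts):

```sql
INSERT INTO payment_transactions
  (user_email, user_name, user_lastname, user_id,
   provider_txn_id, provider_invoice_url, amount, provider)
SELECT
  customer_email, customer_name, customer_lastname, customer_id,
  plisio_txn_id, plisio_invoice_url, amount_usd, 'plisio'
FROM pending_transactions;
```

Because the copy is one statement, it either migrates every legacy row or none, which is what makes the subsequent `DROP TABLE` safe to run in the same migration.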
#### No Rollback Script
Unlike the security tables migration, this migration does not include a rollback script because the data migration is destructive. If a rollback is needed, restore from a database backup taken before running the migration.
## Tables Without Migrations
The following tables were created directly in the Supabase Dashboard (no migration files exist):
| Table | Notes |
|---|---|
| `jackpots` | Created manually; stores current jackpot amounts |
| `past_drawings` | Created manually; stores historical drawing results |
> **TIP**
>
> If you need to recreate these tables in a new environment, refer to the table definitions in the Supabase Database Guide.
## How to Run Migrations
### Prerequisites
Install the Supabase CLI:

```shell
npm install -g supabase
```

Link your project:

```shell
supabase link --project-ref iwmsnkgimodfucvwecvc
```

### Apply Migrations
Push all pending migrations to the remote database:

```shell
supabase db push
```

To see migration status:

```shell
supabase migration list
```

### Apply a Specific Migration Manually
If you need to run a migration manually (e.g., in the Supabase SQL Editor):
1. Open **Supabase Dashboard > SQL Editor**
2. Paste the contents of the migration file
3. Click **Run**
## Creating New Migrations
### Using the Supabase CLI
Generate a new migration file:

```shell
supabase migration new <description>
```

This creates a timestamped file in `supabase/migrations/`:

```
supabase/migrations/20260210120000_<description>.sql
```

### Migration Best Practices
1. **Use `IF NOT EXISTS` / `IF EXISTS`** — Makes migrations idempotent and safe to re-run

2. **Always create a rollback script** — Name it `<timestamp>_<description>_rollback.sql`

3. **Include verification queries** — Add commented-out queries at the bottom for testing:

   ```sql
   -- Verify: SELECT table_name FROM information_schema.tables
   -- WHERE table_schema = 'public' AND table_name = 'your_table';
   ```

4. **Enable RLS immediately** — Every new table should have RLS enabled with at minimum a service role policy:

   ```sql
   ALTER TABLE your_table ENABLE ROW LEVEL SECURITY;

   CREATE POLICY "Service role full access" ON your_table
     FOR ALL USING (auth.role() = 'service_role');
   ```

5. **Add indexes for query patterns** — Index columns used in `WHERE`, `ORDER BY`, and `JOIN` clauses

6. **Add table/column comments** — Document purpose directly in the schema:

   ```sql
   COMMENT ON TABLE your_table IS 'Description of what this table stores';
   COMMENT ON COLUMN your_table.column IS 'Description of this column';
   ```

7. **Test locally first** — Use `supabase start` to run a local Supabase instance and validate migrations before pushing to production
### Migration Template
```sql
-- =====================================================================
-- Migration: <Description>
-- Purpose: <What this migration does>
-- Date: <YYYY-MM-DD>
-- =====================================================================

-- Create table
CREATE TABLE IF NOT EXISTS your_table (
  id BIGSERIAL PRIMARY KEY,
  created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
  -- ... columns
);

-- Indexes
CREATE INDEX IF NOT EXISTS idx_your_table_created_at
  ON your_table(created_at DESC);

-- RLS
ALTER TABLE your_table ENABLE ROW LEVEL SECURITY;

DO $$
BEGIN
  IF NOT EXISTS (
    SELECT 1 FROM pg_policies
    WHERE schemaname = 'public'
      AND tablename = 'your_table'
      AND policyname = 'Service role full access'
  ) THEN
    CREATE POLICY "Service role full access" ON your_table
      FOR ALL USING (auth.role() = 'service_role');
  END IF;
END $$;

-- Comments
COMMENT ON TABLE your_table IS 'Description';

-- Verification (run manually)
-- SELECT table_name FROM information_schema.tables
-- WHERE table_schema = 'public' AND table_name = 'your_table';
```