Priority: high · Complexity: low · Domain: database · Status: pending · Assigned: database specialist · Tier 1

Acceptance Criteria

Table `duplicate_detection_config` exists in Supabase with columns: id (uuid, PK), organization_id (uuid, FK to organizations), time_window_hours (integer, NOT NULL, default 24), similarity_threshold (numeric 0.0–1.0, NOT NULL, default 0.8), duration_tolerance_minutes (integer, NOT NULL, default 30), is_active (boolean, NOT NULL, default true), created_at (timestamptz), updated_at (timestamptz)
Each organization has at most one config row (UNIQUE constraint on organization_id)
The `SELECT` RLS policy exposes only rows whose organization_id matches the authenticated user's organization
The `INSERT`/`UPDATE` RLS policies allow writes only by users with the admin role in the same organization (the security requirements and tests below treat coordinators as read-only)
Migration script is idempotent (safe to run multiple times, uses IF NOT EXISTS guards)
Default config row is seeded for all existing organizations at migration time
The `updated_at` column is automatically set via a trigger on every UPDATE
Column constraints reject time_window_hours <= 0 and similarity_threshold outside [0.0, 1.0]
Schema is documented in a migration comment explaining each field's purpose
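Taken together, the criteria above imply a table definition along these lines. This is a sketch only; column ordering and constraint names should follow the project's migration conventions.

```sql
-- Sketch of the table implied by the acceptance criteria.
CREATE TABLE IF NOT EXISTS duplicate_detection_config (
    id uuid PRIMARY KEY DEFAULT gen_random_uuid(),
    organization_id uuid NOT NULL REFERENCES organizations (id),
    time_window_hours integer NOT NULL DEFAULT 24
        CHECK (time_window_hours > 0),
    similarity_threshold numeric NOT NULL DEFAULT 0.8
        CHECK (similarity_threshold >= 0.0 AND similarity_threshold <= 1.0),
    duration_tolerance_minutes integer NOT NULL DEFAULT 30,
    is_active boolean NOT NULL DEFAULT true,
    created_at timestamptz NOT NULL DEFAULT now(),
    updated_at timestamptz NOT NULL DEFAULT now(),
    -- At most one config row per organization; also serves as the
    -- unique index backing single-row lookups by organization_id.
    UNIQUE (organization_id)
);

COMMENT ON TABLE duplicate_detection_config IS
    'Per-organization tuning parameters for duplicate activity detection.';
```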

Technical Requirements

Frameworks
Supabase (PostgreSQL 15+)
Flutter (Dart client)
APIs
Supabase REST API
Supabase RLS policies
Data models
duplicate_detection_config
organizations
user_roles
Performance requirements
Lookup by organization_id must be a single-row indexed lookup, served by the unique index on organization_id
Migration must complete in under 5 seconds on a production-sized dataset
Security requirements
RLS must be enabled on the table; no row may be accessible without a matching organization_id claim in the JWT
Only org admins (role = 'admin') can INSERT or UPDATE config rows
The service role key must never be used from client-side Flutter code
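The security requirements above might be sketched as the following policies. The `organization_id` JWT claim and the `user_roles` column names are assumptions here; adjust them to the project's actual auth setup.

```sql
ALTER TABLE duplicate_detection_config ENABLE ROW LEVEL SECURITY;

-- Read access: only rows in the caller's organization.
-- Assumes the JWT carries an organization_id claim (hypothetical name).
CREATE POLICY config_select ON duplicate_detection_config
    FOR SELECT
    USING (organization_id = (auth.jwt() ->> 'organization_id')::uuid);

-- Write access: org admins only.
-- Assumes user_roles(user_id, organization_id, role) columns.
CREATE POLICY config_insert ON duplicate_detection_config
    FOR INSERT
    WITH CHECK (EXISTS (
        SELECT 1 FROM user_roles ur
        WHERE ur.user_id = auth.uid()
          AND ur.organization_id = duplicate_detection_config.organization_id
          AND ur.role = 'admin'));

CREATE POLICY config_update ON duplicate_detection_config
    FOR UPDATE
    USING (EXISTS (
        SELECT 1 FROM user_roles ur
        WHERE ur.user_id = auth.uid()
          AND ur.organization_id = duplicate_detection_config.organization_id
          AND ur.role = 'admin'));
```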

Execution Context

Execution Tier
Tier 1

Tier 1 - 540 tasks

Can start after Tier 0 completes

Implementation Notes

Create the migration as a numbered SQL file (e.g. `20260329_001_duplicate_detection_config.sql`). Use `CREATE TABLE IF NOT EXISTS` and wrap policy creation in `DO $$ BEGIN ... EXCEPTION WHEN duplicate_object THEN null; END $$` blocks for idempotency.
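The idempotency pattern described above, applied to a policy, looks roughly like this (the policy body is illustrative):

```sql
-- Re-running the migration must not fail if the policy already exists;
-- CREATE POLICY has no IF NOT EXISTS form, so trap duplicate_object instead.
DO $$
BEGIN
    CREATE POLICY config_select ON duplicate_detection_config
        FOR SELECT
        USING (organization_id = (auth.jwt() ->> 'organization_id')::uuid);
EXCEPTION
    WHEN duplicate_object THEN NULL;  -- policy was created on a prior run
END $$;
```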

Seed defaults using `INSERT INTO duplicate_detection_config (organization_id, ...) SELECT id, 24, 0.8, 30, true FROM organizations ON CONFLICT (organization_id) DO NOTHING`. For the updated_at trigger, reuse the shared `set_updated_at()` function if it exists in the project, otherwise create it. Expose the config to Flutter via a `DuplicateDetectionConfigRepository` class that reads from Supabase and caches the result in Riverpod state to avoid repeated round-trips.
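A sketch of the seed and trigger steps, with the column list filled in from the acceptance criteria (trigger and function names are illustrative):

```sql
-- Seed one default row per existing organization; ON CONFLICT keeps
-- the statement safe to re-run.
INSERT INTO duplicate_detection_config
    (organization_id, time_window_hours, similarity_threshold,
     duration_tolerance_minutes, is_active)
SELECT id, 24, 0.8, 30, true
FROM organizations
ON CONFLICT (organization_id) DO NOTHING;

-- Keep updated_at current. CREATE OR REPLACE stays idempotent whether or
-- not the project already defines a shared set_updated_at() helper.
CREATE OR REPLACE FUNCTION set_updated_at() RETURNS trigger AS $$
BEGIN
    NEW.updated_at := now();
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

DROP TRIGGER IF EXISTS trg_config_updated_at ON duplicate_detection_config;
CREATE TRIGGER trg_config_updated_at
    BEFORE UPDATE ON duplicate_detection_config
    FOR EACH ROW EXECUTE FUNCTION set_updated_at();
```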

Testing Requirements

Write Supabase SQL unit tests (pgTAP or equivalent) verifying:
(1) INSERT with valid data succeeds
(2) INSERT with similarity_threshold outside [0,1] raises a CHECK violation
(3) A second INSERT for the same organization_id raises a UNIQUE violation
(4) A user authenticated as org A cannot SELECT a row belonging to org B (RLS isolation test)
(5) An authenticated coordinator can read their org's config but cannot insert or update it

Integration test in Flutter: fetch the config via the Supabase client with a mocked JWT and assert the returned object matches the seeded defaults.
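Tests (1) through (3) might be sketched in pgTAP as follows. This assumes pgTAP is installed and that the referenced organization rows are inserted earlier in the same test transaction (the UUIDs shown are placeholders).

```sql
BEGIN;
SELECT plan(3);

-- (1) valid insert with defaults succeeds
SELECT lives_ok(
    $$INSERT INTO duplicate_detection_config (organization_id)
      VALUES ('00000000-0000-0000-0000-000000000001')$$,
    'insert with defaults succeeds');

-- (2) out-of-range threshold violates the CHECK constraint (SQLSTATE 23514)
SELECT throws_ok(
    $$INSERT INTO duplicate_detection_config
          (organization_id, similarity_threshold)
      VALUES ('00000000-0000-0000-0000-000000000002', 1.5)$$,
    '23514');

-- (3) second row for the same organization violates UNIQUE (SQLSTATE 23505)
SELECT throws_ok(
    $$INSERT INTO duplicate_detection_config (organization_id)
      VALUES ('00000000-0000-0000-0000-000000000001')$$,
    '23505');

SELECT * FROM finish();
ROLLBACK;
```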

Component
Duplicate Activity Detector
Domain: infrastructure · Priority: high
Epic Risks (3)
Impact: medium · Probability: high · Category: technical

Fingerprint-based similarity matching may produce high false-positive rates for common activity types (e.g., weekly group sessions with the same participants), causing alert fatigue among coordinators and undermining trust in the detection system.

Mitigation & Contingency

Mitigation: Start with conservative, high-confidence thresholds (exact peer mentor match + same date + same activity type) before adding looser fuzzy matching. Allow NHF administrators to tune thresholds based on observed false-positive rates. Log all detection decisions for retrospective threshold calibration.

Contingency: Introduce a snooze mechanism allowing coordinators to dismiss false positives for a configurable period. Track dismissal rates per activity type and automatically raise the similarity threshold for activity types with high dismissal rates.

Impact: medium · Probability: medium · Category: technical

A database trigger on the activities insert path adds synchronous overhead to every activity registration. For HLF peer mentors with 380 annual registrations or coordinators doing bulk proxy registration, this could create perceptible latency or lock contention.

Mitigation & Contingency

Mitigation: Implement the trigger as a DEFERRED constraint trigger, which fires once at the end of the transaction rather than per row (note it still runs synchronously before the commit completes), or replace it with a LISTEN/NOTIFY pattern that queues detection work asynchronously via an Edge Function, fully decoupling detection from the registration write path.
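The LISTEN/NOTIFY variant described above can be sketched like this. The trigger, function, and channel names are illustrative; `pg_notify` delivers the message only when the transaction commits, so a listening worker or Edge Function never blocks the registration write.

```sql
-- Emit a lightweight notification instead of running detection inline.
CREATE OR REPLACE FUNCTION notify_activity_inserted() RETURNS trigger AS $$
BEGIN
    PERFORM pg_notify('activity_inserted', NEW.id::text);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_activity_notify
    AFTER INSERT ON activities
    FOR EACH ROW EXECUTE FUNCTION notify_activity_inserted();
```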

Contingency: Disable the synchronous trigger entirely and rely solely on the scheduled Edge Function for batch detection. Accept a detection delay of up to the scheduling interval (e.g., 15 minutes) in exchange for zero impact on registration latency.

Impact: medium · Probability: medium · Category: dependency

The duplicate detection logic must be validated and approved by NHF before go-live, including agreement on threshold values and the review workflow. NHF stakeholder availability for sign-off may delay this epic's release independently of technical readiness.

Mitigation & Contingency

Mitigation: Gate the feature behind the NHF-specific feature flag so technical deployment can proceed independently of business approval. Involve an NHF administrator in threshold calibration sessions during QA, reducing the formal sign-off surface to policy and workflow rather than technical details.

Contingency: Release the detection system in 'silent mode' — flagging duplicates internally without surfacing notifications to coordinators — until NHF approves the workflow. Use the silent period to collect real data on false-positive rates and refine thresholds before activating notifications.