Priority: high · Complexity: medium · Category: integration · Status: pending · Assignee: backend specialist · Tier: 5

Acceptance Criteria

publishMilestoneEvent is called exactly once per confirmed attribution that crosses a threshold boundary — no duplicate events for the same threshold crossing
Milestone thresholds (1, 5, 10, 25, 50) are defined as a sealed constant list; crossing multiple thresholds in one batch emits one event per threshold crossed
The emitted MilestoneEvent struct contains: mentor_id (UUID), organisation_id (UUID), milestone_threshold (int), cumulative_confirmed_count (int), event_timestamp (DateTime, UTC), and event_type (string, 'recruitment_milestone')
Event is published only after the attribution record is persisted to Supabase — no phantom events for rolled-back attributions
BadgeCriteriaIntegration.onMilestoneEvent callback receives the event within the same Dart microtask queue cycle as publication
If BadgeCriteriaIntegration is null or not yet initialised, the event is buffered and replayed when the integration registers — no silent drops
getCumulativeConfirmedCount query is O(1) via the pre-computed aggregate column (from task-009) — no full table scan per event
Method is idempotent: calling publishMilestoneEvent twice with the same attribution_id does not emit a second event
All event payloads are validated against the MilestoneEvent schema before dispatch; malformed events throw MilestoneEventValidationException
Unit tests achieve 100% branch coverage on publishMilestoneEvent including threshold boundary conditions
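The threshold-crossing rule in the criteria above can be expressed as a small pure function. This is a minimal sketch; the function name is illustrative, while kMilestoneThresholds matches the constant named in the implementation notes:

```dart
/// Sealed list of milestone thresholds shared with the badge layer.
const List<int> kMilestoneThresholds = [1, 5, 10, 25, 50];

/// Returns every threshold crossed when the confirmed count moves from
/// [previousCount] to [newCount] in a single batch. Crossing several
/// thresholds at once yields one entry per threshold crossed.
List<int> crossedThresholds(int previousCount, int newCount) {
  return kMilestoneThresholds
      .where((t) => previousCount < t && t <= newCount)
      .toList();
}
```

For example, crossedThresholds(4, 10) returns [5, 10] (one event per threshold crossed in the batch), while crossedThresholds(5, 5) returns an empty list (no duplicate event for a threshold already passed).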

Technical Requirements

Frameworks
Flutter
Riverpod
BLoC
APIs
Supabase PostgreSQL 15
Supabase Edge Functions (Deno)
Data Models
assignment
badge_definition
activity
Performance Requirements
Milestone threshold check must complete in < 5ms using the pre-computed aggregate — no additional DB round-trip
Event dispatch to BadgeCriteriaIntegration must not block the attribution confirmation write path — dispatch via a Dart StreamController or a callback invoked after the persistence await completes
Buffer for unregistered listeners must not grow unbounded — cap at 100 events with FIFO eviction and warning log
Security Requirements
MilestoneEvent payload must not include PII beyond mentor_id UUID — no names, phone numbers, or national identity numbers
Event bus operates in-process only; milestone events are never transmitted over the network directly from the mobile client
organisation_id is included in every event to enforce the multi-tenant boundary at the badge layer
RLS on Supabase ensures cumulative count query only returns rows for the authenticated mentor's organisation
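The bounded replay buffer from the performance requirements (cap of 100 with FIFO eviction and a warning log) can be sketched as follows. The class and method names are illustrative assumptions; only the capacity and eviction policy come from the spec:

```dart
import 'dart:collection';

/// FIFO buffer for milestone events raised before the badge integration
/// registers. Capped at 100 entries; when full, the oldest event is
/// evicted and a warning is emitted so drops are never silent.
class MilestoneEventBuffer<E> {
  static const int capacity = 100;
  final Queue<E> _queue = Queue<E>();
  final void Function(String message) _warn;

  MilestoneEventBuffer({void Function(String message)? warn})
      : _warn = warn ?? print;

  void add(E event) {
    if (_queue.length >= capacity) {
      _queue.removeFirst(); // FIFO eviction of the oldest event
      _warn('Milestone event buffer full; evicted oldest buffered event');
    }
    _queue.add(event);
  }

  /// Drains buffered events to [listener] in arrival order; called once
  /// when the integration registers, so nothing is dropped silently.
  void replay(void Function(E event) listener) {
    while (_queue.isNotEmpty) {
      listener(_queue.removeFirst());
    }
  }
}
```

In production the warning callback would be wired to the app's logger rather than print.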

Execution Context

Execution Tier
Tier 5

Tier 5 - 253 tasks

Can start after Tier 4 completes

Implementation Notes

Define MilestoneEvent as an immutable Dart class with copyWith. Use a static const List<int> kMilestoneThresholds = [1, 5, 10, 25, 50] in a shared constants file so the badge layer can reference the same list. The publish method should: (1) read the pre-computed confirmed_count from the attribution aggregate (already updated by task-009); (2) find all thresholds in kMilestoneThresholds where previous_count < threshold <= new_count; (3) emit a separate event for each crossed threshold. Use a StreamController.broadcast() owned by the ReferralAttributionService Riverpod provider; BadgeCriteriaIntegration subscribes during its own provider initialisation.
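A minimal sketch of the event class and publish flow described above. The in-memory idempotency set is an illustration only (the notes specify the confirmed count comes from the persisted aggregate); parameter names such as previousCount are assumptions:

```dart
import 'dart:async';

const List<int> kMilestoneThresholds = [1, 5, 10, 25, 50];

/// Immutable milestone event per the acceptance criteria; no PII beyond IDs.
class MilestoneEvent {
  final String mentorId;
  final String organisationId;
  final int milestoneThreshold;
  final int cumulativeConfirmedCount;
  final DateTime eventTimestamp;
  final String eventType;

  const MilestoneEvent({
    required this.mentorId,
    required this.organisationId,
    required this.milestoneThreshold,
    required this.cumulativeConfirmedCount,
    required this.eventTimestamp,
    this.eventType = 'recruitment_milestone',
  });

  MilestoneEvent copyWith({int? milestoneThreshold, int? cumulativeConfirmedCount}) =>
      MilestoneEvent(
        mentorId: mentorId,
        organisationId: organisationId,
        milestoneThreshold: milestoneThreshold ?? this.milestoneThreshold,
        cumulativeConfirmedCount: cumulativeConfirmedCount ?? this.cumulativeConfirmedCount,
        eventTimestamp: eventTimestamp,
        eventType: eventType,
      );
}

class MilestonePublisher {
  // sync: true delivers to listeners in the same microtask cycle,
  // matching the BadgeCriteriaIntegration acceptance criterion.
  final _controller = StreamController<MilestoneEvent>.broadcast(sync: true);
  final _seenAttributionIds = <String>{}; // illustration; real guard is DB-backed

  Stream<MilestoneEvent> get events => _controller.stream;

  /// Emits one event per threshold crossed between [previousCount] and
  /// [newCount]; a repeated attributionId emits nothing (idempotency).
  void publishMilestoneEvent({
    required String attributionId,
    required String mentorId,
    required String organisationId,
    required int previousCount,
    required int newCount,
  }) {
    if (!_seenAttributionIds.add(attributionId)) return; // duplicate call
    for (final threshold in kMilestoneThresholds) {
      if (previousCount < threshold && threshold <= newCount) {
        _controller.add(MilestoneEvent(
          mentorId: mentorId,
          organisationId: organisationId,
          milestoneThreshold: threshold,
          cumulativeConfirmedCount: newCount,
          eventTimestamp: DateTime.now().toUtc(),
        ));
      }
    }
  }

  /// Call from the Riverpod provider's onDispose.
  void dispose() => _controller.close();
}
```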

Avoid tight coupling — BadgeCriteriaIntegration should only depend on the abstract MilestoneEventSource interface, not on ReferralAttributionService directly. Ensure the StreamController is closed in the provider's onDispose to prevent memory leaks.
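One way to express that decoupling. MilestoneEventSource comes from the notes above; the stub event class and member names are illustrative assumptions:

```dart
import 'dart:async';

class MilestoneEvent {} // payload defined elsewhere; stub for illustration

/// The badge layer depends only on this abstraction, never on
/// ReferralAttributionService directly.
abstract class MilestoneEventSource {
  Stream<MilestoneEvent> get milestoneEvents;
}

class ReferralAttributionService implements MilestoneEventSource {
  final _controller = StreamController<MilestoneEvent>.broadcast();

  @override
  Stream<MilestoneEvent> get milestoneEvents => _controller.stream;

  /// Wired to the Riverpod provider's onDispose to prevent leaks.
  void dispose() => _controller.close();
}
```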

Testing Requirements

Unit tests (flutter_test): test publishMilestoneEvent with mocked BadgeCriteriaIntegration using Mockito-style fakes. Scenarios: count exactly at threshold (1, 5, 10), count one below threshold (no event), count jumps over threshold in one increment (single event for crossed threshold), duplicate attribution_id (no second event), BadgeCriteriaIntegration not yet registered (buffer then replay on register), integration throws during onMilestoneEvent (exception isolated, attribution not rolled back). All tests run in < 2 seconds total. No network calls — fully synchronous with in-memory fakes.
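The "integration throws during onMilestoneEvent" scenario hinges on exception isolation; a dependency-free sketch of that guard (names here are illustrative, not the real API):

```dart
/// Callback signature for the badge-layer listener (illustrative).
typedef MilestoneCallback = void Function(int threshold);

/// Dispatches a milestone to the listener, isolating any exception so
/// the attribution confirmation is never rolled back by a badge failure.
void dispatchSafely(
  MilestoneCallback callback,
  int threshold,
  void Function(Object error) onError,
) {
  try {
    callback(threshold);
  } catch (e) {
    // Exception captured and reported; never propagated to the
    // attribution write path.
    onError(e);
  }
}
```

A test for this scenario asserts that a throwing callback yields exactly one captured error and no rethrow.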

Component
Referral Attribution Service
Type: service · Priority: high
Epic Risks (3)
Impact: high · Probability: medium · Category: integration

Confirmed registration events originate from the membership system (Dynamics portal for HLF), which may call back asynchronously with significant delay. If the attribution service only accepts synchronous confirmation at registration time, late callbacks will fail to match the originating referral code, resulting in under-counted conversions.

Mitigation & Contingency

Mitigation: Design the attribution confirmation path as a webhook endpoint (Supabase Edge Function) that accepts a referral_code + new_member_id pair at any time after click. The service matches by code string, not by session. Persist pending_signup events immediately at onboarding screen submission so there is always a record to upgrade to 'confirmed' when the webhook fires.

Contingency: If the membership system cannot reliably call the webhook, implement a polling reconciliation job (Supabase pg_cron, daily) that queries the membership system for recently registered members and back-fills any unmatched attribution records.

Impact: medium · Probability: medium · Category: technical

If confirmRegistration() is called more than once for the same new member (e.g., idempotency retry from the webhook), duplicate milestone events could be emitted, causing the badge system to award badges multiple times.

Mitigation & Contingency

Mitigation: Use a UNIQUE constraint on (referral_code_id, new_member_id) in the referral_events table for confirmed events. The confirmRegistration() method uses upsert semantics; milestone evaluation reads the confirmed count from the aggregation query rather than counting individual calls.

Contingency: If duplicate awards occur in production, the badge system should support idempotent award checks (query existing badges before awarding). Add a deduplication guard in BadgeCriteriaIntegration as a secondary defence.

Impact: medium · Probability: medium · Category: scope

Stakeholder review may expand attribution requirements mid-epic to include click-through tracking per channel (WhatsApp vs SMS vs email), which is not currently in scope but was mentioned in user story discussions. This would require schema changes in the foundation epic and delay delivery.

Mitigation & Contingency

Mitigation: Capture per-channel data in the device_metadata JSONB field from day one as an unstructured field (share_channel: 'whatsapp'). This preserves data without requiring a schema column, allowing structured querying to be added later without migrations.

Contingency: If channel-level analytics become a hard requirement during this epic, timebox the change to adding a nullable channel column to referral_events and a corresponding filter parameter on the aggregation query, deferring dashboard UI to a separate task.