Priority: high · Complexity: medium · Category: integration · Status: pending · Assignee: backend specialist · Tier: 6

Acceptance Criteria

MilestoneEventSource abstract interface is defined with a Stream<MilestoneEvent> get milestoneEvents getter — no concrete dependency on ReferralAttributionService in the badge layer
BadgeCriteriaIntegrationProvider is registered in the Riverpod ProviderScope at app startup (in main.dart or AppProviders) before any navigation occurs
Subscription to milestoneEvents is established inside BadgeCriteriaIntegration.init() which is called eagerly via ref.listen or keepAlive in the provider
When a Riverpod provider is disposed and rebuilt (e.g., after navigation pop), the subscription is cancelled and re-established without missing any buffered events from task-010's replay buffer
The StreamSubscription is stored and cancelled in BadgeCriteriaIntegration.dispose() to prevent memory leaks — verified with a Flutter DevTools memory snapshot
Events received by BadgeCriteriaIntegration are logged at debug level with event_type and milestone_threshold for observability
If the badge awarding logic throws, the exception is caught and logged — the event stream is not cancelled and subsequent events are still delivered
Integration test confirms that an end-to-end event published by ReferralAttributionService is received by BadgeCriteriaIntegration within the same test pump cycle
No circular provider dependencies are introduced — verified by running the Riverpod provider graph builder without assertion errors
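The criteria above can be sketched in pure Dart. This is a minimal illustration, not the real implementation: the actual MilestoneEvent type and replay buffer come from task-010, and the logging/award calls are placeholders.

```dart
import 'dart:async';

// Illustrative stand-in; the real MilestoneEvent is defined in task-010.
class MilestoneEvent {
  final String eventType;
  final int milestoneThreshold;
  MilestoneEvent(this.eventType, this.milestoneThreshold);
}

// Abstract source: the badge layer depends on this, never on
// ReferralAttributionService directly.
abstract class MilestoneEventSource {
  Stream<MilestoneEvent> get milestoneEvents;
}

class BadgeCriteriaIntegration {
  BadgeCriteriaIntegration(this._source);
  final MilestoneEventSource _source;
  StreamSubscription<MilestoneEvent>? _sub;

  void init() {
    _sub = _source.milestoneEvents.listen((event) {
      try {
        _award(event);
      } catch (e) {
        // Log and continue: a failing award must not cancel the stream,
        // so subsequent events are still delivered.
        print('badge award failed: $e');
      }
    });
  }

  void _award(MilestoneEvent event) {
    // Badge awarding logic goes here.
  }

  void dispose() {
    _sub?.cancel();
    _sub = null;
  }
}
```

Note the try/catch sits inside the listen callback: an uncaught error in an onData handler would otherwise propagate as a stream error and can terminate delivery.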

Technical Requirements

Frameworks
Flutter
Riverpod
APIs
Supabase Auth
Data Models
badge_definition
assignment
Performance Requirements
Subscription initialisation at app startup must add < 10ms to cold start time
Event delivery from publish to badge handler must complete within one Dart event loop tick
StreamSubscription must not hold strong references to UI widgets — use autoDispose providers to prevent widget tree leaks
Security Requirements
BadgeCriteriaIntegration must re-validate organisation_id from the event payload against the current authenticated session before awarding any badge — prevents cross-tenant badge injection
Event stream is in-process only — no serialisation to persistent storage or network transmission
Provider keepAlive must only be set if the subscription must survive page navigation; default to autoDispose with replay buffer to avoid stale state
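The tenant re-validation requirement boils down to a small fail-closed check. A minimal sketch, with hypothetical function and parameter names:

```dart
/// Hypothetical guard run before any badge award: compares the
/// organisation_id carried in the event payload against the
/// organisation_id of the current authenticated session.
bool isSameTenant(String? eventOrgId, String? sessionOrgId) {
  // Missing values are treated as a mismatch: fail closed to block
  // cross-tenant badge injection via a forged or stale payload.
  if (eventOrgId == null || sessionOrgId == null) return false;
  return eventOrgId == sessionOrgId;
}
```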

Execution Context

Execution Tier
Tier 6

Tier 6 - 158 tasks

Can start after Tier 5 completes

Implementation Notes

Define the abstract interface: abstract class MilestoneEventSource { Stream<MilestoneEvent> get milestoneEvents; }. ReferralAttributionService implements MilestoneEventSource. Register it as a Riverpod Provider. BadgeCriteriaIntegration takes a MilestoneEventSource as a constructor parameter (injected via Riverpod ref.watch).
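The wiring described above might look like the following sketch, assuming Riverpod 2.x. The provider names and the referralAttributionServiceProvider stub are assumptions; the stub classes stand in for the real ones so the snippet is self-contained.

```dart
import 'dart:async';
import 'package:flutter_riverpod/flutter_riverpod.dart';

class MilestoneEvent {}

abstract class MilestoneEventSource {
  Stream<MilestoneEvent> get milestoneEvents;
}

// Stand-in for the real service provider defined elsewhere.
final referralAttributionServiceProvider =
    Provider<MilestoneEventSource>((ref) => throw UnimplementedError());

// The badge layer depends only on the abstract interface.
final milestoneEventSourceProvider = Provider<MilestoneEventSource>(
  (ref) => ref.watch(referralAttributionServiceProvider),
);

class BadgeCriteriaIntegration {
  BadgeCriteriaIntegration(this.source);
  final MilestoneEventSource source;
  StreamSubscription<MilestoneEvent>? _sub;
  void init() => _sub = source.milestoneEvents.listen((_) {});
  void dispose() => _sub?.cancel();
}

final badgeCriteriaIntegrationProvider =
    Provider.autoDispose<BadgeCriteriaIntegration>((ref) {
  final integration =
      BadgeCriteriaIntegration(ref.watch(milestoneEventSourceProvider));
  integration.init();
  ref.onDispose(integration.dispose); // cancel subscription on rebuild
  return integration;
});
```

ref.onDispose is what satisfies the re-subscription acceptance criterion: on provider rebuild, dispose cancels the old subscription and the new build calls init() again.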

Use ref.listen<AsyncValue<MilestoneEvent>> at the provider level for reactive subscription rather than manually managing a StreamSubscription inside a class — this is the idiomatic Riverpod pattern and handles the provider lifecycle automatically. For providers that must outlive individual screens, use ProviderScope overrides at the MaterialApp level. Document the provider dependency graph in a comment at the top of the providers file to aid future maintainers.
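The ref.listen pattern might be sketched as follows (assuming Riverpod 2.x; milestoneEventStreamProvider and handleMilestone are illustrative names, and the empty stream is a placeholder for the real source):

```dart
import 'dart:async';
import 'package:flutter_riverpod/flutter_riverpod.dart';

class MilestoneEvent {}

// Assumed StreamProvider exposing the milestone stream.
final milestoneEventStreamProvider =
    StreamProvider<MilestoneEvent>((ref) => const Stream.empty());

void handleMilestone(MilestoneEvent event) {
  // Badge criteria evaluation would go here.
}

// ref.listen ties the subscription to this provider's lifecycle:
// Riverpod cancels and re-establishes it across rebuilds automatically.
final badgeListenerProvider = Provider<void>((ref) {
  ref.listen<AsyncValue<MilestoneEvent>>(
    milestoneEventStreamProvider,
    (previous, next) => next.whenData(handleMilestone),
  );
});
```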

Testing Requirements

Unit tests (flutter_test): test BadgeCriteriaIntegration subscription lifecycle — subscribe, receive event, dispose, re-subscribe. Use a fake MilestoneEventSource backed by a StreamController. Verify subscription is cancelled on dispose (assert streamController.hasListener == false after dispose). Verify replay buffer delivers buffered events after late subscription.
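A skeleton of that lifecycle test in plain Dart (the classes here are minimal stand-ins for the real ones so the sketch is self-contained; in the codebase this would live in a flutter_test file):

```dart
import 'dart:async';

class MilestoneEvent {}

abstract class MilestoneEventSource {
  Stream<MilestoneEvent> get milestoneEvents;
}

// Fake backed by a StreamController, as described above.
class FakeMilestoneEventSource implements MilestoneEventSource {
  final controller = StreamController<MilestoneEvent>.broadcast();
  @override
  Stream<MilestoneEvent> get milestoneEvents => controller.stream;
}

class BadgeCriteriaIntegration {
  BadgeCriteriaIntegration(this.source);
  final MilestoneEventSource source;
  final received = <MilestoneEvent>[];
  StreamSubscription<MilestoneEvent>? _sub;
  void init() => _sub = source.milestoneEvents.listen(received.add);
  void dispose() => _sub?.cancel();
}

Future<void> main() async {
  final fake = FakeMilestoneEventSource();
  final integration = BadgeCriteriaIntegration(fake);

  // Subscribe, receive, dispose, verify cancellation.
  integration.init();
  fake.controller.add(MilestoneEvent());
  await Future<void>.delayed(Duration.zero); // let the event loop deliver
  if (integration.received.length != 1) throw StateError('event not received');

  integration.dispose();
  if (fake.controller.hasListener) throw StateError('subscription leaked');
}
```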

Widget test: mount a ProviderScope with both providers, publish a milestone event, pump, assert badge award method was called on the fake BadgeAwardService. Regression test: dispose and re-create the BadgeCriteriaIntegration provider and confirm no duplicate subscriptions or missed events.

Component
Referral Attribution Service
Type: service · Priority: high
Epic Risks (3)
Impact: high · Probability: medium · Category: integration

Confirmed registration events originate from the membership system (Dynamics portal for HLF), which may call back asynchronously with significant delay. If the attribution service only accepts synchronous confirmation at registration time, late callbacks will fail to match the originating referral code, resulting in under-counted conversions.

Mitigation & Contingency

Mitigation: Design the attribution confirmation path as a webhook endpoint (Supabase Edge Function) that accepts a referral_code + new_member_id pair at any time after click. The service matches by code string, not by session. Persist pending_signup events immediately at onboarding screen submission so there is always a record to upgrade to 'confirmed' when the webhook fires.
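The match-by-code upgrade path in the mitigation can be modelled in a few lines of Dart. This is an in-memory sketch only — in practice the lookup and status change would be a database update inside the webhook handler, and the field names are assumptions:

```dart
/// Hypothetical model of the upgrade path: a pending_signup record
/// persisted at onboarding submission is matched later by the referral
/// code string alone — no session required.
class ReferralEvent {
  ReferralEvent(this.referralCode, this.status);
  final String referralCode;
  String status; // 'pending_signup' -> 'confirmed'
  String? newMemberId;
}

/// Called when the membership webhook fires, however late. Returns true
/// when a pending record was found and upgraded to 'confirmed'.
bool confirmRegistration(
    List<ReferralEvent> events, String referralCode, String newMemberId) {
  for (final e in events) {
    if (e.referralCode == referralCode && e.status == 'pending_signup') {
      e.status = 'confirmed';
      e.newMemberId = newMemberId;
      return true;
    }
  }
  return false; // no match: left for the reconciliation job to back-fill
}
```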

Contingency: If the membership system cannot reliably call the webhook, implement a polling reconciliation job (Supabase pg_cron, daily) that queries the membership system for recently registered members and back-fills any unmatched attribution records.

Impact: medium · Probability: medium · Category: technical

If confirmRegistration() is called more than once for the same new member (e.g., idempotency retry from the webhook), duplicate milestone events could be emitted, causing the badge system to award badges multiple times.

Mitigation & Contingency

Mitigation: Use a UNIQUE constraint on (referral_code_id, new_member_id) in the referral_events table for confirmed events. The confirmRegistration() method uses upsert semantics; milestone evaluation reads the confirmed count from the aggregation query rather than counting individual calls.

Contingency: If duplicate awards occur in production, the badge system should support idempotent award checks (query existing badges before awarding). Add a deduplication guard in BadgeCriteriaIntegration as a secondary defence.
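The secondary dedup guard described in the contingency could be as small as the following sketch (names are hypothetical; existingBadgeIds stands in for a query of the member's current badges):

```dart
/// Hypothetical idempotent award check: consult the member's existing
/// badges first, and award only when the badge is not already held.
bool shouldAward(Set<String> existingBadgeIds, String badgeId) =>
    !existingBadgeIds.contains(badgeId);

/// Records an award exactly once; returns false on a duplicate attempt,
/// e.g. a webhook idempotency retry re-emitting the same milestone.
bool awardOnce(Set<String> existingBadgeIds, String badgeId) {
  if (!shouldAward(existingBadgeIds, badgeId)) return false;
  existingBadgeIds.add(badgeId);
  return true;
}
```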

Impact: medium · Probability: medium · Category: scope

Stakeholder review may expand attribution requirements mid-epic to include click-through tracking per channel (WhatsApp vs SMS vs email), which is not currently in scope but was mentioned in user story discussions. This would require schema changes in the foundation epic and delay delivery.

Mitigation & Contingency

Mitigation: Capture per-channel data in the device_metadata JSONB field from day one as an unstructured key (e.g. share_channel: 'whatsapp'). This preserves the data without requiring a schema column, allowing structured querying to be added later without migrations.

Contingency: If channel-level analytics become a hard requirement during this epic, timebox the change to adding a nullable channel column to referral_events and a corresponding filter parameter on the aggregation query, deferring dashboard UI to a separate task.