Priority: critical · Complexity: medium · Type: backend · Status: pending · Assignee: backend specialist · Tier 3

Acceptance Criteria

forceResolve() accepts a pair_id (String) and a ResolutionAction enum (KEEP_BOTH, KEEP_FIRST, KEEP_SECOND, MERGE)
On success, the duplicate_queue row is updated with status='resolved', resolved_by=coordinatorId, resolved_at=UTC timestamp, and resolution_action=action
KEEP_FIRST: marks activity_b as a duplicate (sets is_duplicate=true on activities table); activity_a remains active
KEEP_SECOND: marks activity_a as a duplicate; activity_b remains active
KEEP_BOTH: marks the pair as resolved with no changes to either activity's active status
MERGE: triggers the merge workflow (creates a merged activity record, marks both originals as merged=true) — can delegate to a MergeActivityService if that service exists, otherwise throw UnimplementedError with a clear message
Throws ChapterScopeException if the pair's chapter_id is not in the coordinator's scope
Throws PairAlreadyResolvedException if the pair status is already 'resolved'
On success, the unresolved count cache is immediately invalidated
The entire operation is wrapped in a Supabase transaction (or sequential writes with rollback handling) to prevent partial updates
Returns a ResolutionResult domain object with the resolved pair's updated state
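The action semantics and guards above can be captured in a pure decision function. This is a sketch only: names such as `DuplicateQueuePair` and `activitiesToFlag` are illustrative, not an agreed API.

```dart
enum ResolutionAction { keepBoth, keepFirst, keepSecond, merge }

class ChapterScopeException implements Exception {}

class PairAlreadyResolvedException implements Exception {}

/// Minimal stand-in for the duplicate_queue row.
class DuplicateQueuePair {
  final String pairId, activityA, activityB, chapterId, status;
  const DuplicateQueuePair(
      this.pairId, this.activityA, this.activityB, this.chapterId, this.status);
}

/// Enforces the scope and already-resolved guards, then returns the
/// activity ids that must be flagged is_duplicate=true for the action.
List<String> activitiesToFlag(DuplicateQueuePair pair, ResolutionAction action,
    Set<String> coordinatorChapters) {
  if (!coordinatorChapters.contains(pair.chapterId)) {
    throw ChapterScopeException();
  }
  if (pair.status == 'resolved') {
    throw PairAlreadyResolvedException();
  }
  switch (action) {
    case ResolutionAction.keepFirst:
      return [pair.activityB]; // activity_a remains active
    case ResolutionAction.keepSecond:
      return [pair.activityA]; // activity_b remains active
    case ResolutionAction.keepBoth:
      return const []; // resolved with no activity changes
    case ResolutionAction.merge:
      throw UnimplementedError(
          'MERGE action requires MergeActivityService — implement in a later task');
  }
}
```

The real service would follow this decision with the transactional writes and cache invalidation described below.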

Technical Requirements

Frameworks
Flutter
Riverpod
Supabase Dart client
APIs
Supabase PostgREST — UPDATE on duplicate_queue, conditional UPDATE on activities table
Data Models
ResolutionAction (enum)
ResolutionResult
DuplicateQueueRow
ActivityRecord
Performance Requirements
Resolution must complete within 1 second under normal network conditions
Use Supabase RPC (stored procedure) for MERGE action to ensure atomicity
Security Requirements
Verify coordinator owns the pair's chapter before executing any write — prevent unauthorized resolution
Only users with coordinator or admin role may call forceResolve()
coordinator_id persisted in resolved_by must come from the server-side JWT, not a client-supplied value
All writes must pass through Supabase RLS policies

Execution Context

Execution Tier
Tier 3 (413 tasks)

Can start after Tier 2 completes

Implementation Notes

Define a `ResolutionAction` enum in the domain layer: `enum ResolutionAction { keepBoth, keepFirst, keepSecond, merge }`. The repository method signature should be `Future<ResolutionResult> resolveQueuePair(String pairId, ResolutionAction action, String coordinatorId, DateTime resolvedAt)`. For KEEP_FIRST and KEEP_SECOND, a second repository call updates the activities table; consider wrapping both writes in a Supabase RPC function to get transaction guarantees, since the Dart client does not support multi-statement transactions natively. For MERGE, gate the action behind a feature flag or throw `UnimplementedError('MERGE action requires MergeActivityService — implement in a later task')` to prevent silent data loss.
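A sketch of the repository contract from these notes. The `Future` type argument and the `ResolutionResult` fields are assumptions (the ticket only says the result carries the pair's updated state), and the in-memory implementation exists solely to show the MERGE gating.

```dart
enum ResolutionAction { keepBoth, keepFirst, keepSecond, merge }

/// Assumed shape for the result the ticket asks for.
class ResolutionResult {
  final String pairId;
  final String status;
  const ResolutionResult(this.pairId, this.status);
}

abstract class DuplicateQueueRepository {
  /// Ideally backed by a single Supabase RPC so the queue and
  /// activity writes commit atomically.
  Future<ResolutionResult> resolveQueuePair(String pairId,
      ResolutionAction action, String coordinatorId, DateTime resolvedAt);
}

/// Illustrative implementation gating MERGE as the notes suggest.
class InMemoryQueueRepository implements DuplicateQueueRepository {
  @override
  Future<ResolutionResult> resolveQueuePair(String pairId,
      ResolutionAction action, String coordinatorId, DateTime resolvedAt) async {
    if (action == ResolutionAction.merge) {
      throw UnimplementedError(
          'MERGE action requires MergeActivityService — implement in a later task');
    }
    return ResolutionResult(pairId, 'resolved');
  }
}
```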

After any successful write, call `ref.invalidate(unresolvedCountProvider)` to immediately refresh the nav badge. Emit a `PairResolvedEvent` on a Riverpod event bus so the queue list BLoC can remove the resolved pair from its local state without re-fetching.
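The event flow above can be sketched with a plain `dart:async` broadcast stream standing in for the Riverpod event bus. `PairResolvedEvent` comes from the notes; the bus class itself is illustrative, since in the app this role would be played by a Riverpod provider exposing a `Stream`.

```dart
import 'dart:async';

class PairResolvedEvent {
  final String pairId;
  const PairResolvedEvent(this.pairId);
}

/// Minimal synchronous broadcast bus; the queue list state holder
/// subscribes and removes resolved pairs without re-fetching.
class ResolutionEventBus {
  final _controller = StreamController<PairResolvedEvent>.broadcast(sync: true);

  Stream<PairResolvedEvent> get events => _controller.stream;

  void emit(PairResolvedEvent event) => _controller.add(event);

  void dispose() => _controller.close();
}
```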

Testing Requirements

Unit tests: mock the repository and verify each ResolutionAction produces the correct sequence of repository calls. Test that KEEP_FIRST marks activity_b as duplicate and not activity_a. Test ChapterScopeException thrown for out-of-scope pairs. Test PairAlreadyResolvedException for already-resolved pairs.

Test that cache invalidation is called on success. Test that a failure in the second write (the activity update) surfaces a descriptive error rather than leaving a partial update. Integration tests: execute forceResolve() against a test Supabase instance and verify the resulting row states. Test that concurrent resolution of the same pair returns an error for the second caller.
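The call-sequence verification described above can be done with a hand-rolled recording fake. `updateQueueRow` and `markActivityDuplicate` are hypothetical stand-ins for the real repository writes.

```dart
/// Records every write so a test can assert on call order.
class RecordingWrites {
  final calls = <String>[];

  Future<void> updateQueueRow(String pairId) async => calls.add('queue:$pairId');

  Future<void> markActivityDuplicate(String activityId) async =>
      calls.add('dup:$activityId');
}

/// KEEP_FIRST path: resolve the queue row, then flag activity_b.
Future<void> resolveKeepFirst(
    RecordingWrites writes, String pairId, String activityBId) async {
  await writes.updateQueueRow(pairId);
  await writes.markActivityDuplicate(activityBId);
}
```

A test then asserts that activity_b, and only activity_b, was flagged, and that the queue update happened first.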

Component
Deduplication Queue Service (service, medium complexity)
Epic Risks (2)
Risk 1: technical (medium impact, medium probability)

If the duplicate check RPC fails due to a network error or Supabase outage, the service must decide whether to block submission entirely (safe but disruptive) or allow submission to proceed silently (functional but risks data duplication). An incorrect choice leads to either user frustration or data quality issues.

Mitigation & Contingency

Mitigation: Define an explicit error policy in the service: RPC failures result in a DuplicateCheckResult with status: 'check_failed' and no candidates. The caller treats this as 'allow submission, flag for async review'. Document this as the intended graceful degradation behaviour in the service interface contract.

Contingency: If stakeholders require blocking on RPC failure, expose a configurable `failMode` parameter in the service that can be toggled per organisation via the feature flag system without a code deployment.
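The error policy and `failMode` toggle could look like the following sketch; the enum names and the `allowSubmission` helper are illustrative, not fixed API.

```dart
enum DuplicateCheckStatus { clean, candidatesFound, checkFailed }

/// Per-organisation toggle for behaviour on RPC failure.
enum FailMode { allowAndFlag, block }

class DuplicateCheckResult {
  final DuplicateCheckStatus status;
  final List<String> candidateIds;
  const DuplicateCheckResult(this.status, [this.candidateIds = const []]);
}

/// Graceful degradation policy: a failed check allows submission
/// (flagged for async review) unless the org opted into blocking.
bool allowSubmission(DuplicateCheckResult result, FailMode mode) {
  if (result.status == DuplicateCheckStatus.checkFailed) {
    return mode == FailMode.allowAndFlag;
  }
  return result.status == DuplicateCheckStatus.clean;
}
```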

Risk 2: scope (medium impact, medium probability)

The DuplicateComparisonPanel must handle varying activity schemas across organisations (NHF, HLF, Blindeforbundet each have different activity fields). A rigid layout may not accommodate all field variations, causing truncation or missing data in the comparison view.

Mitigation & Contingency

Mitigation: Design the panel to render a dynamic list of key-value pairs rather than a fixed-column layout. Define a `ComparisonField` model that each service populates with only the fields relevant to the activity type and organisation, allowing the panel to adapt without schema knowledge.

Contingency: If dynamic rendering proves too complex within the timeline, ship a simplified panel showing only the five most critical fields (peer mentor, activity type, date, chapter, submitter) and log a follow-up ticket for full field rendering in a later sprint.
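A sketch of the dynamic key-value approach: `ComparisonField` is named in the mitigation, while `buildFields` and its map inputs are illustrative assumptions about how each service would populate the panel.

```dart
/// One row in the comparison panel; null means the field is absent
/// on that side, so no organisation-specific schema is needed.
class ComparisonField {
  final String label;
  final String? valueA, valueB;
  const ComparisonField(this.label, {this.valueA, this.valueB});

  bool get differs => valueA != valueB;
}

/// Builds the union of both activities' fields so nothing is truncated
/// regardless of which organisation's schema produced them.
List<ComparisonField> buildFields(
    Map<String, String?> activityA, Map<String, String?> activityB) {
  final labels = {...activityA.keys, ...activityB.keys};
  return [
    for (final label in labels)
      ComparisonField(label, valueA: activityA[label], valueB: activityB[label]),
  ];
}
```

The panel then renders the list directly, highlighting rows where `differs` is true.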