Multi‐threading Strategies Kotlin - G33-Moviles-2026-1/Wiki GitHub Wiki

Multi-threading & Concurrency Strategies

1. suspend / Dispatchers.IO — Non-blocking I/O Operations

Protocol: suspend functions / Dispatchers.IO

Where: All DAO operations (Room database), Retrofit network calls, and repository data fetching (e.g., fetching rooms, parsing schedules).

Reason: Every database query or network request is marked as suspend and moved to Dispatchers.IO. This prevents blocking the main thread during I/O operations, ensuring the UI remains responsive and fluid while data is fetched or written.
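A minimal sketch of the pattern, assuming a hypothetical RoomRepository (the delay stands in for a real Room query or Retrofit call):

```kotlin
import kotlinx.coroutines.*

// Hypothetical repository sketch: RoomRepository and fetchRooms are illustrative names.
class RoomRepository {
    // suspend marks the call as non-blocking; withContext pins the work to the IO pool.
    suspend fun fetchRooms(): List<String> = withContext(Dispatchers.IO) {
        delay(50)                       // stands in for a Room query or Retrofit call
        listOf("A-101", "B-202")
    }
}

fun main() = runBlocking {
    val rooms = RoomRepository().fetchRooms()
    println(rooms)                      // the calling thread suspended instead of blocking
}
```

Because the function suspends rather than blocks, the main thread stays free to render frames while the I/O is in flight.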


2. Off-thread Serialization — CPU-bound Work Delegation

Protocol: withContext(Dispatchers.IO)

Where: Snapshot persistence in the navigation system, JSON serialization/deserialization of complex states.

Reason: Serializing complex objects or large lists to JSON (and writing the result to disk) combines CPU-heavy encoding with blocking file I/O. Running it on the main thread would block rendering. withContext(Dispatchers.IO) moves this heavy lifting to a background thread pool (Dispatchers.Default is the textbook choice for the pure encoding step, but IO covers the disk write as well), keeping the main UI thread free.
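A dependency-free sketch of the idea; Snapshot, toJson, and persistSnapshot are illustrative, and a real app would use a serialization library and actually write the string to a file:

```kotlin
import kotlinx.coroutines.*

// Illustrative navigation snapshot; the real state object is more complex.
data class Snapshot(val route: String, val scrollY: Int)

// Hand-rolled JSON keeps the sketch dependency-free; real code would use a serializer.
fun Snapshot.toJson(): String = """{"route":"$route","scrollY":$scrollY}"""

suspend fun persistSnapshot(snapshot: Snapshot): String =
    withContext(Dispatchers.IO) {       // encode + (eventual) disk write off the main thread
        snapshot.toJson()               // a real app would also write this string to a file
    }

fun main() = runBlocking {
    println(persistSnapshot(Snapshot("rooms/list", 420)))
}
```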


3. viewModelScope.launch — Lifecycle-Aware Async Execution

Protocol: viewModelScope.launch { ... }

Where: All ViewModels (e.g., initiating a room search, toggling a favorite, handling a booking confirmation).

Reason: UI-triggered asynchronous tasks need to be safely canceled if the user navigates away before the task completes. viewModelScope automatically manages the lifecycle of these coroutines, preventing memory leaks and crashes by canceling ongoing work when the ViewModel is destroyed.
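Since viewModelScope requires the Android runtime, here is a framework-free sketch of the same contract, with illustrative names (SearchViewModel, onSearch): a scope tied to a lifecycle, whose cancellation stops in-flight work.

```kotlin
import kotlinx.coroutines.*

// Framework-free stand-in for a ViewModel: clear() plays the role of onCleared().
class SearchViewModel {
    private val scope = CoroutineScope(SupervisorJob() + Dispatchers.Default)
    @Volatile var result: String = "pending"

    fun onSearch(query: String): Job = scope.launch {
        delay(500)                       // stands in for a slow repository call
        result = "rooms for $query"
    }

    // viewModelScope performs this cancellation automatically in ViewModel.onCleared()
    fun clear() = scope.cancel()
}

fun main() = runBlocking {
    val vm = SearchViewModel()
    val job = vm.onSearch("library")
    delay(50)
    vm.clear()                           // user navigated away before the search finished
    job.join()
    println(vm.result)                   // still "pending": the coroutine was cancelled
}
```

The cancelled coroutine never touches the state again, which is exactly what prevents leaks and crashes after the screen is gone.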


4. Two-phase Async Load — Reactive Cache + Network Update

Protocol: Sequential StateFlow updates, each phase wrapped in its own try/catch block.

Where: Favorites loading, schedule initialization.

Reason: To render instantly, the app first reads from the local database and pushes the cached result to the StateFlow. It then launches a network request in the background; once the request succeeds, the StateFlow is updated again with the fresh data. Each phase is handled independently, so a network failure doesn't break the locally cached UI.
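The two phases can be sketched as follows; loadFromDb and fetchFromNetwork are hypothetical stand-ins for the real DAO and API calls:

```kotlin
import kotlinx.coroutines.*
import kotlinx.coroutines.flow.MutableStateFlow

// The StateFlow the UI observes; starts empty, then gets cached, then fresh data.
val favorites = MutableStateFlow<List<String>>(emptyList())

suspend fun loadFromDb(): List<String> { delay(10); return listOf("cached-room") }
suspend fun fetchFromNetwork(): List<String> { delay(100); return listOf("cached-room", "fresh-room") }

suspend fun loadFavorites() {
    favorites.value = loadFromDb()           // phase 1: instant local read
    try {
        favorites.value = fetchFromNetwork() // phase 2: background refresh
    } catch (e: Exception) {
        // network failure is swallowed: the cached emission above keeps the UI populated
    }
}

fun main() = runBlocking {
    loadFavorites()
    println(favorites.value)
}
```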


5. Flow.collect — Continuous Background Observation

Protocol: flow.collect { ... } or flow.collectLatest { ... }

Where: NetworkMonitor, observing DataStore preferences, continuous UI state observation in Compose.

Reason: The app needs to react to changes over time (like network connectivity dropping or returning) without manually polling. Flow provides a cold asynchronous data stream. collect allows the app to passively react to new emitted values in a thread-safe, non-blocking manner.
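A minimal sketch of the observation side, with a hard-coded flow standing in for the real NetworkMonitor (which would back the stream with ConnectivityManager callbacks):

```kotlin
import kotlinx.coroutines.flow.*
import kotlinx.coroutines.runBlocking

// Illustrative connectivity stream; a real monitor emits from system callbacks.
val connectivity: Flow<Boolean> = flow {
    emit(true)
    emit(false)   // network drops
    emit(true)    // ...and returns
}

fun main() = runBlocking {
    connectivity.collect { online ->      // suspends between emissions instead of polling
        println(if (online) "online" else "offline")
    }
}
```

collect suspends until the next value arrives, so no thread is burned on polling while waiting.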


6. Serialized Critical Sections — Mutex Locks

Protocol: Mutex.withLock { ... }

Where: Pending-sync processing in Favorites, Navigation history/cache updates in NavigationRepository.

Reason: When multiple coroutines attempt to modify the same shared state (like quickly navigating while state is restoring, or rapid sync triggers), race conditions can occur. Mutex ensures that only one coroutine can enter the critical section at a time, preventing data corruption and duplicate operations.
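A self-contained sketch of the guarantee, where pendingSyncs stands in for the shared sync/navigation state:

```kotlin
import kotlinx.coroutines.*
import kotlinx.coroutines.sync.Mutex
import kotlinx.coroutines.sync.withLock

val mutex = Mutex()
var pendingSyncs = 0

fun main() = runBlocking {
    coroutineScope {
        repeat(1000) {
            launch(Dispatchers.Default) {
                mutex.withLock {        // only one coroutine at a time in here
                    pendingSyncs++      // the read-modify-write is now serialized
                }
            }
        }
    }
    println(pendingSyncs)               // 1000 with the lock; without it, updates can be lost
}
```

Unlike a JVM lock, Mutex.withLock suspends waiting coroutines instead of blocking their threads.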


7. Persistent Background Sync — WorkManager

Protocol: WorkManager.enqueueUniqueWork with NetworkType.CONNECTED constraints.

Where: Background data synchronization, Offline queue flushing (e.g., BookingRepository).

Reason: Crucial background operations (like syncing offline edits to the backend) must complete even if the app process is killed or the device is rebooted. WorkManager persists the request and guarantees it eventually executes, and the NetworkType.CONNECTED constraint defers each attempt until the device has a valid network connection.
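A sketch of the enqueue path; SyncWorker, the "booking-sync" name, and enqueueSync are illustrative, and a Context is assumed to be available at the call site:

```kotlin
import android.content.Context
import androidx.work.*

// Illustrative worker: doWork() would flush the offline booking queue to the backend.
class SyncWorker(ctx: Context, params: WorkerParameters) : CoroutineWorker(ctx, params) {
    override suspend fun doWork(): Result = try {
        // flush queued offline edits here
        Result.success()
    } catch (e: Exception) {
        Result.retry()                  // WorkManager reschedules with backoff
    }
}

fun enqueueSync(context: Context) {
    val request = OneTimeWorkRequestBuilder<SyncWorker>()
        .setConstraints(
            Constraints.Builder()
                .setRequiredNetworkType(NetworkType.CONNECTED)  // run only when online
                .build()
        )
        .build()

    WorkManager.getInstance(context).enqueueUniqueWork(
        "booking-sync",                 // unique name prevents duplicate queued syncs
        ExistingWorkPolicy.KEEP,        // an already-queued sync is kept, not replaced
        request
    )
}
```

ExistingWorkPolicy.KEEP is what makes rapid repeated triggers collapse into a single pending sync.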


8. Isolated Failure Handling — SupervisorJob

Protocol: CoroutineScope(SupervisorJob() + Dispatchers.IO)

Where: Long-lived repository scopes, SyncManager, NetworkMonitor.

Reason: In a standard coroutine scope, if one child coroutine fails (throws an exception), the entire scope is canceled. SupervisorJob isolates failures. If a background sync task fails, it won't crash the network monitor or other independent tasks running in the same repository scope.
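The isolation can be demonstrated without any Android dependency; repoScope and runSyncDemo are illustrative names:

```kotlin
import kotlinx.coroutines.*

// Long-lived scope sketch: the handler logs child failures instead of crashing.
val handler = CoroutineExceptionHandler { _, e -> println("sync task failed: ${e.message}") }
val repoScope = CoroutineScope(SupervisorJob() + Dispatchers.Default + handler)

suspend fun runSyncDemo(): Boolean {
    val sync = repoScope.launch { throw RuntimeException("backend unreachable") }
    val monitor = repoScope.launch { delay(100) }   // independent sibling task
    sync.join()
    monitor.join()
    return !monitor.isCancelled     // with a plain Job, the sync failure would cancel it too
}

fun main() = runBlocking {
    println("monitor survived: ${runSyncDemo()}")
}
```

Swapping SupervisorJob() for Job() in repoScope makes runSyncDemo return false: the first failure tears down every sibling.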


Summary

| Strategy | Protocol | Purpose |
| --- | --- | --- |
| Non-blocking I/O | suspend / Dispatchers.IO | Move database/network calls off the main thread |
| Off-thread serialization | withContext(Dispatchers.IO) | Move CPU-heavy JSON parsing to background pool |
| Lifecycle-aware execution | viewModelScope.launch | Auto-cancel async work when screen is closed |
| Reactive cache + network | Sequential updates via StateFlow | Instant cached UI followed by background refresh |
| Continuous observation | Flow.collect | Passive, non-blocking reaction to stream events |
| Serialized critical sections | Mutex.withLock | Prevent race conditions on shared mutable state |
| Persistent background sync | WorkManager + network constraints | Guaranteed execution of queued offline actions |
| Isolated failure handling | SupervisorJob | Prevent single task failure from canceling scope |