Blog
Here you will find technical articles, presentations, and news about architecture and systems development. Stay up to date by following us on LinkedIn.
Kotlin Coroutines have become the standard for asynchronous programming in the Kotlin ecosystem. While I might not reach for them in every single task, they remain one of the most powerful tools in my developer toolbox. However, using them effectively requires more than just knowing the syntax; it requires understanding the underlying mechanics of concurrency.
To bridge the gap between basic usage and deep understanding, I decided to join the Coroutines Mastery course by Marcin Moskala. As part of the certification process to verify these new skills, I built a final project: KoCache, a coroutine-native caching library.
Here is a summary of the course and a look under the hood of my final project.
The course, hosted at coroutinesmastery.com, is a 5-week cohort-based deep dive. It’s not just a collection of videos; it’s a structured journey designed to take you from “I use coroutines” to “I understand how coroutines work.”
The structure is intense but rewarding. Over five weeks, the course covers:

- Structured concurrency: how Job hierarchies work, how cancellation propagates, and how to properly use CoroutineContext and Dispatchers. This material changed how I write code.
- Synchronization with Mutex, and understanding the role of Actors and Channels.
- Flow, SharedFlow, and StateFlow. We covered reactive programming patterns that are essential for modern UI development.

For my certification capstone, I built KoCache (available on Codeberg).
Why build another cache? Most caching solutions in the Java/Kotlin ecosystem are thread-based. They rely on blocking synchronization (like synchronized blocks) which stops the thread while waiting for a lock. In the world of Coroutines, blocking is a sin.
I wanted a cache that was:

- Non-blocking: callers suspend instead of blocking threads.
- Bounded: memory usage stays predictable through eviction.
- Deduplicating: concurrent requests for the same missing key share a single fetch.
KoCache implements the advanced patterns I learned in Weeks 2 and 3 of the course.
Instead of standard Java locks, KoCache utilizes Mutex. When we need to read or write to the internal map, we lock the Mutex. If the lock is busy, the calling coroutine suspends rather than blocking the underlying thread. This allows the thread to go off and do other work (like handling UI events) while waiting for the cache.
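The pattern can be sketched like this. Note that SuspendingMap and its methods are invented names for illustration, not KoCache's actual API:

```kotlin
import kotlinx.coroutines.runBlocking
import kotlinx.coroutines.sync.Mutex
import kotlinx.coroutines.sync.withLock

// A tiny mutex-guarded map: acquiring the Mutex suspends the calling
// coroutine instead of blocking the thread, so the thread stays free
// to run other coroutines while this one waits for the lock.
class SuspendingMap<K, V> {
    private val mutex = Mutex()
    private val map = mutableMapOf<K, V>()

    suspend fun get(key: K): V? = mutex.withLock { map[key] }

    suspend fun put(key: K, value: V) {
        mutex.withLock { map[key] = value }
    }
}

fun main() = runBlocking {
    val cache = SuspendingMap<String, Int>()
    cache.put("answer", 42)
    println(cache.get("answer")) // prints 42
}
```

Mutex.withLock acquires the lock, runs the block, and releases the lock even if the block throws; unlike a synchronized block, the wait is a suspension point rather than a parked thread.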
A cache without limits is dangerous. KoCache implements an LRU eviction policy to keep memory usage predictable.
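On the JVM, this policy maps almost directly onto LinkedHashMap. A minimal, illustrative sketch (KoCache's internals may differ):

```kotlin
// With accessOrder = true, every read or write moves the touched entry
// to the end of the iteration order. Overriding removeEldestEntry makes
// the map drop the least recently used entry once maxSize is exceeded.
fun <K, V> lruMap(maxSize: Int): MutableMap<K, V> =
    object : LinkedHashMap<K, V>(16, 0.75f, /* accessOrder = */ true) {
        override fun removeEldestEntry(eldest: MutableMap.MutableEntry<K, V>): Boolean =
            size > maxSize
    }

fun main() {
    val cache = lruMap<String, Int>(maxSize = 2)
    cache["a"] = 1
    cache["b"] = 2
    cache["a"]          // touch "a" so it becomes most recently used
    cache["c"] = 3      // evicts "b", the least recently used entry
    println(cache.keys) // prints [a, c]
}
```

LinkedHashMap is not safe for concurrent use on its own, which is why every access in KoCache goes through the Mutex.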
Under the hood:

- The cache is backed by a LinkedHashMap configured with access order. Every time a value is read or written, it is moved to the end of the collection.
- When the size exceeds maxSize, the item at the start of the collection (the least recently used one) is dropped.
- Thanks to the Mutex mentioned above, this complex state change happens safely across multiple coroutines without race conditions.

Deduplication with Deferred

A common caching problem is the “thundering herd”: when two coroutines request the same missing key at the same time, both might try to fetch the data from the network.
KoCache solves this by storing Deferred values. When a request comes in:
1. Is there already a Deferred job for this key? If so, we simply await() its result.
2. If not, we start a new async job to fetch the data, store it, and return the result.
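Those two steps can be sketched as follows; DedupCache and getOrFetch are hypothetical names, not KoCache's actual API:

```kotlin
import kotlinx.coroutines.*
import kotlinx.coroutines.sync.Mutex
import kotlinx.coroutines.sync.withLock

// Storing Deferred<V> instead of V means concurrent callers for a
// missing key share one in-flight fetch rather than each starting
// their own: the "thundering herd" problem described above.
class DedupCache<K, V>(private val scope: CoroutineScope) {
    private val mutex = Mutex()
    private val jobs = mutableMapOf<K, Deferred<V>>()

    suspend fun getOrFetch(key: K, fetch: suspend () -> V): V {
        // Under the lock: reuse the existing job, or start exactly one.
        val deferred = mutex.withLock {
            jobs.getOrPut(key) { scope.async { fetch() } }
        }
        return deferred.await() // every caller awaits the same job
    }
}

fun main() = runBlocking {
    var fetches = 0
    val cache = DedupCache<String, String>(this)
    // Five concurrent requests for the same missing key.
    val results = List(5) {
        async { cache.getOrFetch("user:1") { fetches++; delay(100); "Alice" } }
    }.awaitAll()
    println(results) // five identical results
    println(fetches) // the expensive fetch ran exactly once: 1
}
```

Note that await() happens outside the lock, so slow fetches never hold up access to other keys.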
This ensures that the expensive fetch operation happens exactly once, and all concurrent requesters get the same result.

This was a fantastic course for mastering coroutines. The pacing and the difficulty level of the exercises were just right. I particularly appreciated the daily lesson structure, which allowed me to fit the work in whenever I had free time. If you want to understand the machinery behind launch, async, and Flow, I highly recommend it.
