Kotlin/Native: working with the new memory model

Let's talk about the new memory management model that appeared a few months ago.

On August 31, JetBrains presented a preview of the new memory management model in Kotlin/Native. The development team's main focus was on the safety of sharing state between threads, eliminating memory leaks, and freeing us from the use of special annotations. The improvements also touched coroutines: you can now safely switch between coroutine contexts without freezing. Ktor has picked up the updates as well.

So, what's new in Kotlin version 1.6.0-M1-139:

1. It is stated that we can remove all freeze() blocks (including in all background Workers) and switch between contexts and threads without any problems.

2. Using AtomicReference or FreezableAtomicReference does not lead to memory leaks.

3. When working with global constants, you no longer need to use @SharedImmutable.

4. When working with Worker.execute, the producer no longer has to return an isolated object subgraph.
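As an illustration of the last point, here is a Kotlin/Native sketch (the Session class and the shared value are made up for the example): with the new model the producer lambda may hand a Worker an object that the calling thread still references, without freezing it first.

```kotlin
import kotlin.native.concurrent.TransferMode
import kotlin.native.concurrent.Worker

data class Session(var token: String)   // hypothetical shared state
val session = Session("initial")        // still referenced from the main thread

fun refresh(worker: Worker) {
    // Previously the producer had to return an isolated (or frozen) object
    // subgraph; now it may return `session` even though we still hold it.
    worker.execute(TransferMode.SAFE, { session }) { s ->
        s.token = "refreshed"           // mutate shared state from the worker
    }
}
```

With the old model this code would throw at runtime because the produced graph is not isolated.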

However, there are some nuances:

1. It is necessary to keep the freeze when working with AtomicReference. Alternatively, we can use FreezableAtomicReference, or AtomicRef from atomicfu (though we are warned that atomicfu has not yet reached version 1.x).

2. When calling a Kotlin suspend function from Swift, its completion handler block may be invoked off the main thread. So we add DispatchQueue.main.async { ... } where we need to.

3. deinit of Swift/ObjC objects can be called on another thread.

4. Global properties are now initialized lazily, i.e. on first access. Previously, global properties were initialized at startup. If you need to keep that behavior, annotate the property with @EagerInitialization. It is recommended to read the documentation before using it.
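A minimal sketch of the last point (Kotlin/Native only; the Logger class is made up, and the annotation is experimental, hence the opt-in):

```kotlin
class Logger {
    fun log(msg: String) = println(msg)
}

// With the new MM this global would normally be initialized lazily on first
// access; @EagerInitialization restores initialization at program start.
@OptIn(ExperimentalStdlibApi::class)
@EagerInitialization
val logger = Logger()
```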

There are also nuances when working with coroutines, in the build that supports the new memory management model:

1. We can work with Channel and Flow inside a Worker without freezing. This differs from the native-mt version, where freezing, for example, a channel would freeze all of its contents, which may be unexpected.

2. Dispatchers.Default is now backed by the global queue.

3. newSingleThreadContext and newFixedThreadPoolContext can now be used to create a coroutine dispatcher backed by a pool of one or more Workers.

4. Dispatchers.Main is bound to the main queue on Darwin and to a separate Worker on other Native platforms. Therefore it is recommended not to use it in unit tests, since nothing will process the main thread's queue there.
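A quick illustration of item 3 (this sketch also runs on the JVM, which is why Thread appears here; the dispatcher name is arbitrary):

```kotlin
import kotlinx.coroutines.*

@OptIn(DelicateCoroutinesApi::class, ObsoleteCoroutinesApi::class)
fun main() = runBlocking {
    // A single-threaded dispatcher backed by one dedicated thread
    // (a Worker on Kotlin/Native). No freezing needed to cross into it.
    val context = newSingleThreadContext("MyOwnThread")
    val name = withContext(context) { Thread.currentThread().name }
    println(name)
    context.close() // release the dedicated thread when done
}
```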

There are many nuances, certain performance problems, and known bugs, which the development team describes in the documentation. But this is still a preview (not even an alpha).

Well, let's try to adapt our solution from the previous articles to the new version of the memory management model.
To install version 1.6.0-M1-139, let's add some settings:

// build.gradle.kts
buildscript {
    repositories {
        // dev builds of Kotlin are published here
        maven { url = uri("https://maven.pkg.jetbrains.space/kotlin/p/kotlin/dev") }
    }
    dependencies {
        classpath("org.jetbrains.kotlin:kotlin-gradle-plugin:1.6.0-M1-139")
    }
}

// settings.gradle.kts
pluginManagement {
    repositories {
        maven {
            url = uri("https://maven.pkg.jetbrains.space/kotlin/p/kotlin/dev")
        }
        maven {
            url = uri("https://maven.pkg.jetbrains.space/public/p/kotlinx-coroutines/maven")
        }
        gradlePluginPortal()
    }
}

And of course, add the dependency on the coroutines build that supports the new memory model:

val commonMain by getting {
    dependencies {
        // dev build with new-MM support, current at the time of writing
        implementation("org.jetbrains.kotlinx:kotlinx-coroutines-core:1.5.1-new-mm-dev2")
    }
}

Important! If you do not have Xcode 12.5 or higher installed, be sure to download and install it: this is the minimum version compatible with 1.6.0-M1-139. If you already have several Xcode versions installed, including older ones, switch to the appropriate one with xcode-select, close the Kotlin Multiplatform project, and run Invalidate Caches / Restart. Otherwise, you will get a version incompatibility error.

Let's start by removing the freeze() blocks from the coroutine-free version:

internal fun background(block: () -> Unit) {
    val future = worker.execute(TransferMode.SAFE, { block }) { it() }
}

// Main thread wrapper
internal fun main(block: () -> Unit) {
    dispatch_async(dispatch_get_main_queue()) {
        block()
    }
}

We will also remove the freeze from the parameters that we pass to NSURLSession (we have a native network client):

fun request(request: Request, completion: (Response) -> Unit) {
    this.completion = completion
    val responseReader = ResponseReader().apply { this.responseListener = this@HttpEngine }
    val urlSession = NSURLSession.sessionWithConfiguration(
        NSURLSessionConfiguration.defaultSessionConfiguration, responseReader,
        delegateQueue = NSOperationQueue.currentQueue()
    )

    val urlRequest =
        NSMutableURLRequest(NSURL.URLWithString(request.url)!!).apply {
            // configure HTTP method, headers and body here
        }

    fun doRequest() {
        val task = urlSession.dataTaskWithRequest(urlRequest)
        task.resume()
    }
    doRequest()
}


To get rid of freezes completely, change AtomicReference to FreezableAtomicReference:

// Before:
internal fun <T> T.atomic(): AtomicReference<T> {
    return AtomicReference(this.share())
}

// After:
internal fun <T> T.atomic(): FreezableAtomicReference<T> {
    return FreezableAtomicReference(this)
}

And we fix the code where we use the atomic references:

private fun updateChunks(data: NSData) {
    var newValue = ByteArray(0)
    newValue += chunks.value
    newValue += data.toByteArray()
    chunks.value = newValue // no .share() needed anymore
}

The code is now clean and simply flies, even though the GC itself (which can still be a source of pain) has not changed.

Now let's adapt the example with coroutines:

val uiDispatcher: CoroutineContext = Dispatchers.Main
val ioDispatcher: CoroutineContext = Dispatchers.Default

We'll use the default dispatchers first. To check the GlobalQueue magic, let's print the context data from a block run on ioDispatcher:

StandaloneCoroutine{Active}@26dbcd0, DarwinGlobalQueueDispatcher@28ea470

We remove the freezes when working with Flow and/or Channel:

class FlowResponseReader : NSObject(),
    NSURLSessionDataDelegateProtocol {
    private var chunksFlow = MutableStateFlow(ByteArray(0))
    private var rawResponse = CompletableDeferred<Response>()

    suspend fun awaitResponse(): Response {
        var chunks = ByteArray(0)

        chunksFlow.onEach {
            chunks += it
        }.launchIn(CoroutineScope(coroutineContext))

        val response = rawResponse.await()
        response.content = chunks.string()
        return response
    }

    private fun updateChunks(data: NSData) {
        val bytes = data.toByteArray()
        chunksFlow.value += bytes
    }
}

Everything works great and fast. Don't forget to push the response back to the main thread's queue:

actual override suspend fun request(request: Request): Response {
    val response = engine.request(request)
    return withContext(uiDispatcher) { response }
}

Important! To prevent leaks on the iOS side, especially with a large number of Swift/ObjC objects, and to help the GC, we wrap the request and response blocks in an autoreleasepool.
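A minimal sketch of such a wrapper (Kotlin/Native only; the helper name withPool is made up, while autoreleasepool itself comes from kotlinx.cinterop):

```kotlin
import kotlinx.cinterop.autoreleasepool

// Hypothetical helper: runs a block inside an ObjC autorelease pool so that
// temporary NSURLSession/NSData objects are released promptly instead of
// accumulating until the GC gets around to them.
internal inline fun <T> withPool(block: () -> T): T =
    autoreleasepool { block() }
```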

Now let's try the following: start on MainScope, but use newSingleThreadContext to specify a different background dispatcher:

val task = urlSession.dataTaskWithRequest(urlRequest)
mainScope.launch(newSingleThreadContext("MyOwnThread")) {
    // ... print the coroutine context here
}

[StandaloneCoroutine{Active}@384d2a0, WorkerDispatcher@384d630]

Everything works without a hitch. A mountain of worries will soon fall from our developers' shoulders.
But there is a big "BUT". Not all libraries that we use in KMM applications are ready for the new memory model and the new approach to freezing and transferring objects between contexts. We may get an InvalidMutabilityException or a FreezingException.
Therefore, for those libraries, in applications on version 1.6.0-M1-139 you will have to disable the built-in freezing:


// or in build.gradle.kts
kotlin.targets.withType(KotlinNativeTarget::class.java) {
    binaries.all {
        binaryOptions["freezing"] = "disabled"
    }
}
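Alternatively (an assumption based on how Kotlin/Native exposes binary options to Gradle), the same can be set for all binaries of the project in gradle.properties:

```
# gradle.properties
kotlin.native.binary.freezing=disabled
```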

For more details about the new version of the memory management model, see here: https://github.com/JetBrains/kotlin/blob/master/kotlin-native/NEW_MM.md

