
(Almost) Everything you should know about Grand Central Dispatch in Swift

Every iOS application has a main thread, whose job is to display the user interface and listen for events. Complex computations can slow down the main thread and freeze the app. This is where multithreading comes into play: we move all the heavy lifting to a background thread, then deliver the result back to the main thread. This is the core pattern of working with Grand Central Dispatch.

What is Grand Central Dispatch

At its core, Grand Central Dispatch (GCD) is a very efficient runtime for communication between threads and asynchronous execution. It is highly flexible and scales easily from a many-core Mac down to a dual-core Apple Watch Series 4.

Grand Central Dispatch offers a task-based paradigm of thinking. There is no explicit thread management in GCD, which lets you write concurrent code without actually thinking about threads. Compared to the thread-based paradigm, this makes it easier to translate application logic into code.

Anatomy of Grand Central Dispatch

Under the hood, GCD is just a list of work items that you have submitted for execution. The DispatchWorkItem class represents a single task, which is essentially a Swift closure. Work items are enqueued when DispatchQueue.async is called and dequeued automatically. They can be categorized by a quality-of-service (QoS) class, which influences the order of execution.

Tasks can be organized into groups. DispatchGroup lets you aggregate multiple tasks and synchronize their behavior as a single unit.

A queue manages the execution of tasks either serially or concurrently. Serial queues execute tasks one at a time. Concurrent queues don’t wait for one task to finish executing before starting the next. Both queues process work units in First-In-First-Out (FIFO) order.
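The ordering guarantee is easy to observe. In this minimal sketch (the queue label is arbitrary), tasks on a serial queue both start and finish one at a time, in the order they were submitted, so appending to a shared array is safe. On a concurrent queue, the tasks would interleave and the same append would be a data race.

```swift
import Dispatch

let serial = DispatchQueue(label: "demo.serial")
let group = DispatchGroup()
var order: [Int] = []

for i in 1...5 {
    serial.async(group: group) {
        // Safe only because a serial queue runs one task at a time
        order.append(i)
    }
}

group.wait()
print(order) // [1, 2, 3, 4, 5] — FIFO, one task at a time
```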

Internally, a GCD thread pool services all queues. The threads in the pool have no guaranteed lifetime and can be destroyed once a task completes. Conversely, when all threads are busy, a new thread can be brought up in the pool. Because threads come and go like this, it is called an ephemeral thread pool.

GCD provides a number of pre-created queues ready to use: one serial queue, called the main queue, and several concurrent global queues with different priorities: high, default, low, and background. In the modern Swift API these priorities are expressed as quality-of-service classes.
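Here is how these queues are obtained. In the modern Swift API, the global queues are selected by quality-of-service class; the mapping to the legacy priorities shown below is approximate:

```swift
import Dispatch

// The single serial queue bound to the main thread:
let main = DispatchQueue.main

// Global concurrent queues, selected by QoS class
// (roughly: high, default, low, and background priority):
let high = DispatchQueue.global(qos: .userInitiated)
let normal = DispatchQueue.global(qos: .default)
let low = DispatchQueue.global(qos: .utility)
let background = DispatchQueue.global(qos: .background)
```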

GCD queues can execute tasks either synchronously or asynchronously. DispatchQueue.sync blocks the calling thread until the task completes. DispatchQueue.async schedules a work item for execution and returns immediately.
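The difference is easy to demonstrate. In this sketch (queue label arbitrary), sync lets us read the result immediately after the call, while async requires an explicit rendezvous — here a semaphore — before the result can be read safely:

```swift
import Dispatch

let queue = DispatchQueue(label: "demo.sync-vs-async")

// sync: the caller waits until the task has run
var value = 0
queue.sync { value = 42 }
// value is guaranteed to be 42 here

// async: the caller moves on immediately; synchronize before reading
var asyncValue = 0
let done = DispatchSemaphore(value: 0)
queue.async {
    asyncValue = 1
    done.signal()
}
// ... the caller is free to do other work here ...
done.wait()
// asyncValue is guaranteed to be 1 only after the wait
```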

Grand Central Dispatch Best Practices

Concurrency is a hard problem to tackle, even with GCD. I have composed a list of dos and don'ts that address the most frequent GCD tasks in Swift.

Committing and Cancelling a Task

Here is how a task can be executed on a global dispatch queue:

// Do work synchronously
DispatchQueue.global().sync { ... }

// Do work asynchronously
DispatchQueue.global().async { ... }

I recommend creating your own queue, since using the global ones can introduce unwanted side effects:

let queue = DispatchQueue(label: "Some serial queue")

// Do work synchronously
queue.sync { ... }

// Do work asynchronously
queue.async { ... }

Sometimes, we need extra control over the execution. To be able to cancel a task, create a work item:

class Service {
    private var pendingWorkItem: DispatchWorkItem?
    let queue = DispatchQueue(label: "Some serial queue")

    func doSomething() {
        pendingWorkItem?.cancel()
        
        let newWorkItem = DispatchWorkItem { ... }
        pendingWorkItem = newWorkItem
        
        queue.async(execute: newWorkItem)
    }
}
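One caveat: cancel() only prevents a work item that has not started yet; it does not stop a closure that is already running (a long-running closure has to poll item.isCancelled itself). Here is a minimal sketch of pre-start cancellation — the queue is suspended so the item provably cannot begin before we cancel it:

```swift
import Dispatch

let queue = DispatchQueue(label: "demo.cancellable")
queue.suspend()                          // hold the queue: the item cannot start yet

var executed = false
let item = DispatchWorkItem { executed = true }
queue.async(execute: item)

item.cancel()                            // cancelled while still waiting in the queue
queue.resume()

queue.sync { }                           // flush: the cancelled item has been dequeued and skipped
print(executed)                          // false — the closure never ran
```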

Batching Tasks

The purpose of dispatch groups is to let you know when several independent tasks have completed. Here is how we can block the current thread until two tasks finish:

let queue = DispatchQueue(label: "Serial queue")
let group = DispatchGroup()

queue.async(group: group) {
    sleep(1)
    print("Task 1 done")
}

queue.async(group: group) {
    sleep(2)
    print("Task 2 done")
}

group.wait()

print("All tasks done")

The above code prints:

Task 1 done
Task 2 done
All tasks done

Often you want to carry on and come back later, once the tasks have completed. Here is how non-blocking waiting is implemented:

let queue = DispatchQueue(label: "Serial queue")
let group = DispatchGroup()

group.enter()
queue.async {
    sleep(1)
    print("Task 1 done")
    group.leave()
}

group.enter()
queue.async {
    sleep(2)
    print("Task 2 done")
    group.leave()
}

group.notify(queue: queue) {
    print("All tasks done")
}

print("Continue execution immediately")

Which prints:

Continue execution immediately
Task 1 done
Task 2 done
All tasks done

This solution also demonstrates another pattern of working with dispatch groups: by balancing the calls to enter() and leave(), we mark the start and end of an asynchronous operation. This is especially useful when working with code you cannot change, e.g. system or third-party frameworks.
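For example, suppose a framework exposes a callback-based function we cannot change — fetchValue below is a hypothetical stand-in. Calling enter() before each invocation and leave() inside each completion handler lets the group track all of them:

```swift
import Dispatch

// Hypothetical stand-in for a callback-based API you cannot change:
func fetchValue(completion: @escaping (Int) -> Void) {
    DispatchQueue.global().async { completion(42) }
}

let group = DispatchGroup()
let resultsQueue = DispatchQueue(label: "demo.results") // serializes access to results
var results: [Int] = []

for _ in 0..<3 {
    group.enter()                       // mark the start of the async operation
    fetchValue { value in
        resultsQueue.async {
            results.append(value)
            group.leave()               // mark its end
        }
    }
}

group.wait()
print(results) // [42, 42, 42]
```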

Locking and Protecting the Resource

A common application of Grand Central Dispatch queues is locking. Not to repeat myself, I recommend reading Atomic Properties in Swift, where I explain the concept of locking and demonstrate different kinds of Swift locking API, including GCD. In Benchmarking Swift Locking APIs I compare their performance.
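As a quick taste of what those articles cover, here is one common queue-based locking pattern — a reader-writer lock built from a concurrent queue with a barrier (the AtomicCounter type and queue label are illustrative, not from those articles). Reads run concurrently with each other, while a barrier write waits for in-flight reads and blocks new ones until it finishes:

```swift
import Dispatch

final class AtomicCounter {
    private let queue = DispatchQueue(label: "demo.counter", attributes: .concurrent)
    private var count = 0

    // Reads may run concurrently with each other
    var value: Int {
        queue.sync { count }
    }

    // A barrier block runs exclusively: nothing else on the queue runs alongside it
    func increment() {
        queue.async(flags: .barrier) {
            self.count += 1
        }
    }
}

let counter = AtomicCounter()
DispatchQueue.concurrentPerform(iterations: 100) { _ in
    counter.increment()
}
print(counter.value) // 100 — the final read is ordered after all barrier writes
```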

Avoiding Thread Explosion

The below example demonstrates how thread explosion causes a deadlock.

/// Create concurrent queue
let queue = DispatchQueue(label: "Concurrent queue", attributes: .concurrent)

for _ in 0..<999 {
    // 1
    queue.async {
        sleep(1000)
    }
}

// 2
// Called from the main thread
queue.sync {
    print("Done")
}
  1. For the concurrent queue, a new thread is brought up in the pool to service each async operation. The pool hits its limit at 65 threads on my machine.
  2. From the main thread we call a sync operation on the same queue. The main thread is blocked, since there are no available threads in the pool. At the same time, all the threads in the pool are occupied. Since each side is waiting for the other, a deadlock occurs.

Recommendations: prefer serial queues with asynchronous dispatch. If you do use concurrent queues, limit the amount of work submitted to them.

Limiting Work in Progress

As we already know, unlimited work might lead to a deadlock. Here is how we can use a dispatch semaphore to limit a queue to 3 concurrent tasks:

let concurrentTasks = 3

let queue = DispatchQueue(label: "Concurrent queue", attributes: .concurrent)
let sema = DispatchSemaphore(value: concurrentTasks)

for _ in 0..<999 {
    queue.async {
        // Do work
        sema.signal()
    }
    sema.wait()
}

Parallel Execution

Grand Central Dispatch implements an efficient parallel for-loop. Since concurrentPerform blocks the caller until all iterations complete, call it from a background queue so you don't accidentally block the main one:

DispatchQueue.global().async {
    DispatchQueue.concurrentPerform(iterations: 999) { index in
        // Do something
    }
}

Recap

Let’s briefly recap what we just learned:

  • To put the work in the pipeline, a task must be submitted to a queue.
  • If you want to submit the work and proceed, use asynchronous execution. If you need to know that the work has completed, pick the synchronous one.
  • Dispatch groups can be used in combination with work items and queues to create flexible execution flows.

Further reading: