
Grand Central Dispatch

Grand Central Dispatch (GCD) is Apple's technology for building applications that take advantage of multi-core processors and other SMP systems [1]. It is an implementation of task parallelism based on the thread pool design pattern. GCD was first introduced in Mac OS X 10.6. The source code of libdispatch, the library that implements GCD's services, was released under the Apache License on September 10, 2009 [1]. The library was subsequently ported [2] to the FreeBSD operating system [3].

GCD lets an application define tasks that can be executed in parallel, and runs them when free computing resources (processor cores) are available [4].

A task can be defined either as a function or as a “block”. [5] A block is a non-standard extension of the C/C++/Objective-C programming languages that encapsulates code and data in a single object, analogous to a closure. [4]

Grand Central Dispatch uses threads at a low level but hides the implementation details from the programmer. GCD tasks are lightweight and inexpensive to create and switch: Apple states that adding a task to a queue takes only 15 processor instructions, while creating a traditional thread costs several hundred. [4]

A GCD task can be created as a work item placed directly on a task queue, or it can be attached to an event source; in the latter case, the task is added to its queue whenever the event fires. Apple states that this is more efficient than dedicating a separate thread to waiting for the event.

Content

  • 1 Platform Features
  • 2 Examples
    • 2.1 Asynchronous Call
    • 2.2 Parallelizing a loop
    • 2.3 Creating serial queues
  • 3 See also
  • 4 References

Platform Features

GCD declares several data types, along with functions to create and manipulate them.

  • Dispatch queues are objects that maintain queues of tasks (anonymous blocks or functions) and execute those tasks in turn. The library automatically creates several global queues with different priority levels that execute several tasks concurrently, choosing the optimal number of tasks to run. A library user can also create any number of serial queues, which execute tasks in the order they were added, one at a time. Because a serial queue can perform only one task at a time, such queues can be used to synchronize access to shared resources.
  • Dispatch sources are objects that allow blocks or functions to be registered for asynchronous execution when a certain event fires.
  • Dispatch groups are objects that combine tasks into a group for later joining. Tasks can be added to a queue as members of a group, and the group object can then be used to wait until all tasks in the group have completed.
  • Dispatch semaphores are objects that allow no more than a certain number of tasks to run simultaneously. See semaphore.

Examples

Two examples demonstrating the ease of use of Grand Central Dispatch can be found in John Siracusa's Snow Leopard review for Ars Technica. [6]

Asynchronous Call

Suppose we have an application with an analyzeDocument method that counts the words and paragraphs in a document. Normally this counting is fast enough to run on the main thread without the user noticing any delay between pressing the button and seeing the result:

  - (IBAction)analyzeDocument:(NSButton *)sender {
      NSDictionary *stats = [myDoc analyze];
      [myModel setDict:stats];
      [myStatsView setNeedsDisplay:YES];
  }

If the document is very large, the analysis can take long enough for the user to notice the application “freeze”. The following change easily solves this problem:

  - (IBAction)analyzeDocument:(NSButton *)sender {
      dispatch_async(dispatch_get_global_queue(0, 0), ^{
          NSDictionary *stats = [myDoc analyze];
          dispatch_async(dispatch_get_main_queue(), ^{
              [myModel setDict:stats];
              [myStatsView setNeedsDisplay:YES];
          });
      });
  }

Here the call [myDoc analyze] is placed in a block, which is submitted to one of the global queues. When [myDoc analyze] completes, a second block is submitted to the main queue to update the user interface. With these simple changes, the programmer has avoided a potential “hang” of the application when analyzing large documents.

Parallelizing a loop

The second example demonstrates parallelizing a loop:

  for (i = 0; i < count; i++) {
      results[i] = do_work(data, i);
  }
  total = summarize(results, count);

Here the do_work function is called count times, the result of its i-th call is stored in the i-th element of the results array, and then the results are summed. There is no indication that do_work depends on the results of its previous calls, so nothing prevents the calls from running in parallel. The following listing implements this idea with GCD:

  dispatch_apply(count, dispatch_get_global_queue(0, 0), ^(size_t i) {
      results[i] = do_work(data, i);
  });
  total = summarize(results, count);

In this example, dispatch_apply runs the given block count times, submitting each invocation to the global queue and passing the block the numbers 0 through count-1. This lets the OS choose the optimal number of threads to make the fullest use of available hardware resources. dispatch_apply does not return until all of its blocks have finished, which guarantees that all the work of the original loop is done before summarize is called.

Creating serial queues

The developer can create a separate serial queue for tasks that must execute one after another but may run off the main thread. A new queue can be created like this:

  dispatch_queue_t exampleQueue;
  exampleQueue = dispatch_queue_create("com.example.unique.identifier", NULL);

  // exampleQueue can be used here.

  dispatch_release(exampleQueue);

Avoid submitting to a serial queue a task that synchronously submits another task to the same queue: this is guaranteed to deadlock. The following listing shows such a deadlock:

  dispatch_queue_t exampleQueue = dispatch_queue_create("com.example.unique.identifier", NULL);

  dispatch_sync(exampleQueue, ^{
      dispatch_sync(exampleQueue, ^{
          printf("I am now deadlocked...\n");
      });
  });

  dispatch_release(exampleQueue);

See also

  • Blocks (C language extension)
  • OpenMP is an open standard for C, C++, and Fortran.
  • Intel TBB is an open-source C++ library from Intel.
  • Task Parallel Library is a .NET technology developed by Microsoft.
  • Java Concurrency is a Java technology (also known as JSR 166).

References

  1. Apple Previews Mac OS X Snow Leopard to Developers. Archived June 11, 2008.
  2. GCD libdispatch w/ Blocks support working on FreeBSD.
  3. FreeBSD Quarterly Status Report.
  4. Apple Technical Brief on Grand Central Dispatch. Archived September 20, 2009.
  5. Grand Central Dispatch (GCD) Reference. Retrieved October 31, 2009. Archived April 9, 2012.
  6. Mac OS X 10.6 Snow Leopard: the Ars Technica review.
Source - https://ru.wikipedia.org/w/index.php?title=Grand_Central_Dispatch&oldid=90224371
