Dealing with the challenges of multi-core concurrency
Keywords: multi-core concurrency, dependency loop, parallelisation
Concurrency fundamentals
It is important first to distinguish inherent concurrency from implemented parallelisation. A given algorithm or process may offer many opportunities for operations to run independently of each other. An actual implementation, however, will typically select a specific subset of those opportunities, commit to that parallel structure, and go forward with it.
Consider the burger-making example: you could make burgers more quickly by running multiple assembly lines at the same time. In theory, given an infinite supply of materials, you could make infinitely many burgers concurrently. In reality, however, you have only a limited number of employees and countertops on which to do the work, so you might implement, say, two lines even though the process inherently allows more. In a similar fashion, the number of processors and other available resources drives the decision on how much parallelism to implement.
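This separation between what could run in parallel and what you actually run in parallel can be sketched in code. In the hypothetical Python fragment below (the task names are illustrative, not from the original), eight orders could in principle all proceed at once, but the implementation deliberately caps itself at two worker threads, mirroring the two assembly lines:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical task: assembling one burger (name is illustrative).
def make_burger(order_id):
    return f"burger {order_id} done"

orders = range(8)  # inherently, all eight could proceed at once

# The implementation chooses two "assembly lines" (worker threads),
# even though the algorithm would allow far more parallelism.
with ThreadPoolExecutor(max_workers=2) as pool:
    results = list(pool.map(make_burger, orders))

print(results)
```

Raising `max_workers` exploits more of the inherent concurrency; the algorithm itself does not change, only the implementation's choice of how much of it to use.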
It's critical to note, however, that a chosen implementation relies on the inherent opportunities afforded by the algorithm itself. No amount of parallelisation will help an algorithm that has little inherent concurrency, as we'll explore later in this chapter.
What you end up with, then, is a series of program sections that can run independently, punctuated by places where they must "check in" with each other to exchange data, an event referred to as "synchronisation."
For example, one fast-food employee can lay a patty on a bun completely independently from someone else squirting mustard on a different burger. During the laying and squirting processes, the two can proceed completely independently. However, when each is done, each has to pass his or her burger to the next person, and neither can start on a new burger until one is in place. So if the mustard guy is much faster than the patty-laying guy, he'll have to wait idly until the next burger shows up. That is a synchronisation point (figure 1).
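The hand-off described above is a classic producer-consumer synchronisation point. A minimal Python sketch (the worker names are illustrative) models the counter between the two workers as a one-slot queue: the faster downstream worker simply blocks until the slower upstream worker delivers the next burger:

```python
import queue
import threading
import time

handoff = queue.Queue(maxsize=1)  # the counter between the two workers
done = []

def patty_layer():
    # Slower upstream worker.
    for i in range(3):
        time.sleep(0.05)   # laying the patty takes a while
        handoff.put(i)     # blocks if the counter is still occupied
    handoff.put(None)      # signal: no more burgers coming

def mustard_squirter():
    # Faster downstream worker: waits idly at the synchronisation point.
    while True:
        burger = handoff.get()  # blocks until a burger arrives
        if burger is None:
            break
        done.append(burger)

t1 = threading.Thread(target=patty_layer)
t2 = threading.Thread(target=mustard_squirter)
t1.start(); t2.start()
t1.join(); t2.join()
print(done)  # [0, 1, 2]
```

The blocking `put` and `get` calls are the synchronisation: neither worker needs to know anything about the other's speed, only about the shared hand-off point.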
Figure 1: Where the two independent processes interact is a synchronisation point.
A key characteristic here is the fact that the two independent processes may operate at completely different speeds, and that speed may not be predictable. Different employees on different shifts, for example, may go at different speeds. This is a fundamental issue for parallel execution of programs. While there are steps that can be taken to make the relative speeds more predictable, in the abstract, they need to be considered unpredictable. This concept of a program spawning a set of independent processes with occasional check-in points is shown in figure 2.
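The check-in points in figure 2 can be modelled with a barrier: each task runs at its own (unpredictable) speed, but no task proceeds past a check-in until all have arrived. A small Python sketch, with illustrative task names and deliberately randomised delays:

```python
import random
import threading
import time

NUM_TASKS = 3
barrier = threading.Barrier(NUM_TASKS)  # the occasional check-in point
log = []
lock = threading.Lock()

def task(name):
    for phase in range(2):
        time.sleep(random.uniform(0, 0.05))  # unpredictable speed
        barrier.wait()                       # wait for all tasks to arrive
        with lock:
            log.append((phase, name))

threads = [threading.Thread(target=task, args=(n,)) for n in "ABC"]
for t in threads:
    t.start()
for t in threads:
    t.join()

# All phase-0 work completes before any phase-1 work begins,
# no matter how the relative speeds vary from run to run.
phases = [p for p, _ in log]
print(phases)  # [0, 0, 0, 1, 1, 1]
```

Within a phase the ordering of tasks differs from run to run, which is exactly the unpredictability the text describes; only the barrier imposes order across phases.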
Figure 2: A series of tasks run mutually asynchronously with occasional synchronisation points.