EE Times-India > Processors/DSPs

Dealing with the challenges of multi-core concurrency

Posted: 05 Mar 2015

Keywords: multi-core, concurrency, dependency, loop, parallelisation

The opportunities and challenges brought about by multi-core technology—or any kind of multiple processor arrangement—are rooted in the concept of concurrency. You can loosely conceive of this as 'more than one thing happening at a time'. But when things happen simultaneously, it's very easy for chaos to ensue. If you create an 'assembly line' to make burgers quickly in a fast food joint, with one guy putting the patty on the bun and the next guy adding a dab of mustard, things will get messy if the mustard guy doesn't wait for a burger to be in place before applying the mustard. Coordination is key, and yet, as obvious as this may sound, it can be extremely challenging in a complex piece of software.

Concurrency fundamentals
It is first important to separate the notion of inherent concurrency and implemented parallelisation. A given algorithm or process may be full of opportunities for things to run independently from each other. An actual implementation will typically select from these opportunities a specific parallel implementation and go forward with that.

To return to the burger example: you could make burgers more quickly if you had multiple assembly lines going at the same time. In theory, given an infinite supply of materials, you could make infinitely many burgers concurrently. In reality, however, you have only a limited number of employees and countertops on which to do the work. So you may actually implement, say, two lines even though the process inherently allows more. In a similar fashion, the number of processors and other resources drives the decision on how much parallelism to implement.
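That resource-driven choice can be sketched in a few lines of Python (purely illustrative; the `make_burger` function is a made-up stand-in for one unit of independent work):

```python
import os
from concurrent.futures import ThreadPoolExecutor

def make_burger(order_id):
    # Stand-in for one independent unit of work (one burger on one line).
    return f"burger-{order_id}"

# The algorithm allows unlimited concurrency (every order is independent),
# but the implementation caps parallelism at the resources actually
# available -- here, one worker per available CPU.
workers = os.cpu_count() or 2
with ThreadPoolExecutor(max_workers=workers) as pool:
    burgers = list(pool.map(make_burger, range(8)))

print(burgers)
```

Eight burgers are inherently concurrent, but at most `workers` of them are in flight at once; changing `max_workers` changes the implemented parallelism without touching the algorithm.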

It's critical to note, however, that a chosen implementation relies on the inherent opportunities afforded by the algorithm itself. No amount of parallelisation will help an algorithm that has little inherent concurrency, as we'll explore later in this chapter.

So what you end up with is a series of program sections that can run independently, punctuated by places where they need to "check in" with each other to exchange data—an event referred to as "synchronisation."

For example, one fast food employee can lay a patty on a bun completely independently from someone else squirting mustard on a different burger. During the laying and squirting processes, the two are completely independent. However, when each is done, he or she has to pass the burger to the next station, and neither can start on a new burger until one is in place. So if the mustard guy is a lot faster than the patty-laying guy, he'll have to wait idly until the next burger shows up. That is a synchronisation point (figure 1).

Figure 1: Where the two independent processes interact is a synchronisation point.
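The hand-off in figure 1 maps directly onto a classic producer–consumer arrangement. A minimal sketch, assuming two threads and a one-slot queue as the counter space between the stations (all names are illustrative):

```python
import queue
import threading
import time

line = queue.Queue(maxsize=1)   # the counter space between the two workers
done = []

def patty_worker():
    for n in range(3):
        time.sleep(0.05)            # the slower patty-laying step
        line.put(f"burger-{n}")     # hand the burger to the next station
    line.put(None)                  # sentinel: no more burgers coming

def mustard_worker():
    while True:
        burger = line.get()         # blocks here -- the synchronisation point
        if burger is None:
            break
        done.append(burger + "+mustard")

t1 = threading.Thread(target=patty_worker)
t2 = threading.Thread(target=mustard_worker)
t1.start(); t2.start()
t1.join(); t2.join()
print(done)
```

The faster mustard worker simply blocks in `line.get()` until a burger arrives—idle waiting, exactly as in the text—while each worker otherwise runs at its own pace.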

A key characteristic here is the fact that the two independent processes may operate at completely different speeds, and that speed may not be predictable. Different employees on different shifts, for example, may go at different speeds. This is a fundamental issue for parallel execution of programs. While there are steps that can be taken to make the relative speeds more predictable, in the abstract, they need to be considered unpredictable. This concept of a program spawning a set of independent processes with occasional check-in points is shown in figure 2.

Figure 2: A series of tasks run mutually asynchronously with occasional synchronisation points.
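The pattern in figure 2—tasks running at unpredictable speeds with occasional check-in points—can be sketched with a barrier, one common synchronisation primitive (the phase structure and task names here are illustrative, not from the text):

```python
import random
import threading
import time

checkin = threading.Barrier(3)   # all three tasks must arrive before any proceeds
log = []
lock = threading.Lock()

def task(name):
    for phase in range(2):
        time.sleep(random.uniform(0.01, 0.05))  # unpredictable relative speed
        checkin.wait()                           # occasional synchronisation point
        with lock:
            log.append((phase, name))

threads = [threading.Thread(target=task, args=(n,)) for n in "ABC"]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(log)
```

Between barriers the three tasks are mutually asynchronous, and which task records its entry first varies from run to run; the barrier guarantees only that no task starts a phase until every task has finished the previous one.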

