EE Times-India > EDA/IP

Using MCAPI to ease MPI load

Posted: 04 Jul 2011

Keywords: High-performance computing, Message Passing Interface, asymmetric multi-processing

High-performance computing (HPC) is dependent on large numbers of computers to accomplish a difficult task. Often, one computer will act as a master, parceling out data to processes that may be located anywhere in the world. The Message Passing Interface (MPI) provides a way to move the data from one place to the next.

Normally, MPI is implemented once per server to handle the messaging traffic. But with multicore servers using more than a few cores in an asymmetric multi-processing (AMP) configuration, a complete MPI implementation becomes very expensive, because MPI would have to run on each core in the computer. The Multicore Communications API (MCAPI), on the other hand – a protocol designed with embedded systems in mind – offers a much more efficient way to move MPI messages around within the computer.

Heavyweight champion
MPI was designed for HPC and is a well-established protocol that is robust enough to handle the problems that might be encountered in a dynamic network of computers. For example, such networks are rarely static. Whether it's due to updates, maintenance, the purchase of additional machines, or even the simple fact that there is a physical network cable that can be inadvertently unplugged, MPI must be able to handle the eventuality of the number of nodes in the network changing. Even with a constant number of servers, those servers run processes that may start or stop at any time. So MPI includes the ability to discover who's out there on the network.

At the programming level, MPI doesn't reflect anything about computers or cores. It knows only about processes. Processes start at initialisation, and then this discovery mechanism builds a picture of how the processes are arranged. MPI is very flexible in terms of how the topology can be created, but, when everything is up and running, there is a map of processes that can be used to exchange data. A given program can exchange messages with one process inside or outside a group or with every process in a group. The program itself has no idea whether it's talking to a computer next to it or one on another continent.
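This rank-style addressing – every process gets an integer identity within a group, and messages are addressed to that identity rather than to a machine – can be sketched with a toy in-process "communicator". The class and method names are hypothetical stand-ins, not the real MPI C API.

```python
# Sketch of MPI-style rank addressing: messages target ranks in a
# group; the sender has no idea where the destination actually runs.
from queue import Queue

class ToyComm:
    def __init__(self, size: int):
        self.size = size
        self._mailboxes = [Queue() for _ in range(size)]

    def send(self, data, dest: int) -> None:
        # The sender neither knows nor cares which core, computer,
        # or continent rank `dest` lives on.
        self._mailboxes[dest].put(data)

    def recv(self, rank: int):
        return self._mailboxes[rank].get()

comm = ToyComm(size=4)
for dest in range(1, comm.size):    # rank 0 scatters data to the group
    comm.send({"from": 0, "payload": dest * 10}, dest)
print(comm.recv(2)["payload"])      # rank 2 reads its message: 40
```

In actual MPI, the communicator plays the role of this routing table, and it is built up by the discovery mechanism described earlier rather than constructed by hand.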

So a program doesn't care whether a computer running a process with which it's communicating is single-core or multi-core, homogeneous or heterogeneous, symmetric (SMP) or asymmetric (AMP). It just knows there's a process to which it wants to send an instant message. It's up to the MPI implementation on the computer to ensure that the messages get through to the targeted processes.

Due to the architectural homogeneity of SMP multi-core, this is pretty simple. A single OS instance runs over a group of cores and manages them as a set of identical resources. So a process is naturally spread over the cores. If the process is multi-threaded, then it can take advantage of the cores to improve computing performance; nothing more must be done.
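In other words, under SMP the program just spawns threads and lets the one OS scheduler place them on cores. A minimal sketch with Python's thread pool (note the caveat: CPython's GIL limits CPU-bound speedup, whereas the same pattern in C or C++ genuinely scales across cores):

```python
# One multi-threaded process under a single OS instance: the scheduler
# spreads the threads over the available cores automatically.
from concurrent.futures import ThreadPoolExecutor

def chunk_sum(chunk):
    return sum(chunk)

data = list(range(1000))
chunks = [data[i:i + 250] for i in range(0, 1000, 250)]
with ThreadPoolExecutor(max_workers=4) as pool:
    partials = list(pool.map(chunk_sum, chunks))
print(sum(partials))   # 499500
```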

However, SMP starts to bog down as core counts grow, because contention for the shared bus and memory increases. For computers that are intended to help solve big problems as fast as possible, it stands to reason that more cores in a box are better, but only if they can be utilised effectively. To avoid these SMP limitations, we can instead use AMP for larger-core-count (so-called "many-core") systems.

With AMP, each core (or different subgroups of cores) runs its own independent OS instance, and some might even have no OS at all, running on "bare metal." Because a process cannot span more than one OS instance, each OS instance – potentially each core – runs its own processes. So, whereas an SMP configuration can still look like one process, AMP looks like many processes – even if they're multiple instances of the same process.
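MCAPI addresses this AMP world by naming endpoints with (domain, node, port) tuples rather than process ranks, so that code on different OS instances – or on bare metal – can still exchange messages. The toy registry below sketches that addressing scheme only; it does not follow the real MCAPI C API (`mcapi_endpoint_create`, etc.), and the names are hypothetical.

```python
# Sketch of MCAPI-style endpoint addressing: each side of an AMP
# system creates an endpoint identified by (domain, node, port).
from queue import Queue

class ToyEndpointRegistry:
    def __init__(self):
        self._endpoints = {}

    def create_endpoint(self, domain: int, node: int, port: int) -> Queue:
        ep = Queue()
        self._endpoints[(domain, node, port)] = ep
        return ep

    def msg_send(self, domain: int, node: int, port: int, data) -> None:
        # Delivery within the box: no network stack, just a copy
        # to the destination endpoint's queue.
        self._endpoints[(domain, node, port)].put(data)

reg = ToyEndpointRegistry()
# An AMP system: node 0 might run Linux while node 1 runs bare-metal
# code; each creates endpoints on its own core/OS instance.
rx = reg.create_endpoint(domain=0, node=1, port=5)
reg.msg_send(domain=0, node=1, port=5, data=b"work unit")
print(rx.get())   # b'work unit'
```

The point of the lightweight addressing is that intra-box traffic never needs the heavyweight discovery and transport machinery MPI carries for wide-area networks.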

