Parallel Systems

Neurons
STI developed a neural network simulator that ran on an IBM Regatta series supercomputer. It efficiently split fine-grained computation across 32 processors.

Sometimes a single computer is not powerful enough to do what is needed. In these cases, the only solution is to use many computers in parallel. To take advantage of them, the program must be designed from the start to run as a parallel program: the computation must be split across multiple threads, which are then allocated to different processors, and the threads must communicate with each other to decide dynamically which thread will do which piece of the work. The allocation of data to the processors also matters, since the processor working on a particular data set must have that data available, and shipping data between processors can be quite slow, depending on how the processors are connected.
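As a minimal sketch of the first idea, the Go snippet below splits an array sum across several worker threads, giving each worker its own contiguous slice of the data so that no locking or data shipping is needed. The function name and the toy workload are ours for illustration; they are not taken from the simulator described above.

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

// sumParallel splits the data into one contiguous chunk per worker, so each
// worker touches only its own slice, then combines the partial sums at the end.
func sumParallel(data []float64, workers int) float64 {
	partial := make([]float64, workers)
	chunk := (len(data) + workers - 1) / workers

	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		lo := w * chunk
		if lo >= len(data) {
			break
		}
		hi := lo + chunk
		if hi > len(data) {
			hi = len(data)
		}
		wg.Add(1)
		go func(w int, slice []float64) {
			defer wg.Done()
			for _, v := range slice {
				partial[w] += v // each worker writes only its own slot, so no lock is needed
			}
		}(w, data[lo:hi])
	}
	wg.Wait()

	total := 0.0
	for _, p := range partial {
		total += p
	}
	return total
}

func main() {
	data := make([]float64, 1_000_000)
	for i := range data {
		data[i] = 1.0
	}
	fmt.Println(sumParallel(data, runtime.NumCPU()))
}
```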

Other times there are several computers that do different things but must work together to perform one coherent task. In these cases, the allocation of work and data is usually less of a problem. However, each processor has its own independent code base, and the protocol by which the processors communicate must be carefully designed. Flaws in such protocols often leave one processor waiting indefinitely for another, and the whole system locks up.
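The sketch below shows one common guard against that failure mode: a request/reply exchange with a timeout, so a silent peer produces an error instead of an indefinite wait. The channel-based "protocol" here is only a stand-in for whatever transport a real system would use.

```go
package main

import (
	"fmt"
	"time"
)

// request models one step of a two-party protocol: send a request, then wait
// for a reply only up to a deadline, so a silent peer cannot block us forever.
func request(requests chan<- string, replies <-chan string, timeout time.Duration) (string, error) {
	requests <- "GET status"
	select {
	case r := <-replies:
		return r, nil
	case <-time.After(timeout):
		return "", fmt.Errorf("peer did not reply within %v", timeout)
	}
}

func main() {
	requests := make(chan string, 1) // buffered so the send itself cannot block
	replies := make(chan string)

	// A well-behaved peer that answers every request.
	go func() {
		for range requests {
			replies <- "OK"
		}
	}()
	fmt.Println(request(requests, replies, 500*time.Millisecond))

	// A silent peer: nothing reads from this pair, so the timeout fires
	// instead of the caller waiting forever.
	silent := make(chan string, 1)
	noReply := make(chan string)
	fmt.Println(request(silent, noReply, 500*time.Millisecond))
}
```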

We have written a number of parallel applications of both types, so if you need help designing or implementing a parallel system, please contact us today!