Introduction to Parallel Computing

The main goal of increasing computational throughput and efficiency in computers can be achieved through the branch of science called parallel computing. These goals are also commonly expressed in terms of performance and scalability measurements. This article discusses parallel systems and their goals, focusing especially on the challenges they face and the countermeasures applied against them in practice. Let us first establish some terminology and gain a deeper insight into these systems.

A parallel system is defined as the combination of a parallel algorithm and the parallel architecture on which that algorithm is executed. The main attributes used as metrics for a parallel system are the runtime of the computation, speedup, complexity, efficiency, cost, granularity, portability, performance and scalability.

Researchers have proposed laws that express these individual and combined attributes as concrete numbers; the laws proposed by Amdahl and Gustafson are the most prominent ones. Of all the attributes mentioned above, computational runtime is the system's key attribute; the higher-level attributes derived from it, such as efficiency, performance and scalability, are the features most desired for regulating and assessing parallel systems.
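As a concrete illustration of how runtime determines the higher-level attributes, Amdahl's law predicts the speedup of a program from its serial fraction s and processor count p as S(p) = 1 / (s + (1 - s)/p). A minimal sketch in Python (the 10% serial fraction is an assumed example value):

```python
# Speedup predicted by Amdahl's law for a program whose serial
# fraction is s, run on p processors: S(p) = 1 / (s + (1 - s) / p).
def amdahl_speedup(serial_fraction: float, processors: int) -> float:
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / processors)

# Even with only 10% serial work, speedup saturates far below p.
for p in (2, 8, 64):
    print(p, round(amdahl_speedup(0.1, p), 2))
```

Note how the speedup at 64 processors is nowhere near 64: the serial fraction places a hard ceiling on scalability, which is why runtime is treated as the key attribute.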

One of the main disadvantages of parallel systems is parallel slowdown: a phenomenon in which an algorithm is parallelized beyond a threshold point, causing the entire program to run more slowly.

Another major cause that pulls down parallel systems is parallel overhead: the amount of time wasted keeping all the parallel tasks in coordination, rather than actually working on a solution. It consists of:

  • Time to start up a task (t_s)
  • Time to coordinate and synchronise between the parallel tasks (t_syn)
  • Time to transfer data between the processors (t_data)
  • Software overhead of implementing the parallel architecture in the OS and libraries (S_p)
  • Time taken to terminate a task
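These overhead terms can be combined into a simple cost model that also explains parallel slowdown. The sketch below is purely illustrative; the per-task overhead constants are assumed values, not measurements:

```python
# Toy cost model: total runtime on p workers is the ideal compute
# time t_serial / p plus overhead terms that grow with worker count.
def parallel_runtime(t_serial, p, t_start=0.01, t_sync=0.005, t_data=0.02):
    # t_start, t_sync, t_data are illustrative per-task overheads (seconds)
    overhead = t_start + t_sync * p + t_data * p  # grows linearly with p
    return t_serial / p + overhead

# Past some threshold p, overhead growth outweighs the shrinking
# compute time and total runtime rises again: parallel slowdown.
times = {p: parallel_runtime(1.0, p) for p in (1, 4, 16, 64)}
print(times)
```

Under these assumed constants, runtime drops from 1 worker to 4, but by 64 workers it is slower than the serial run, which is exactly the slowdown phenomenon described above.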

Alongside these major drawbacks, there are other factors that challenge parallel systems, such as the lack of dynamic topology selection based on the computing algorithm, higher power consumption, limitations of the I/O units and memory in the systems, and the difficulty of choosing algorithms with higher efficiency for the computation nodes.
The following countermeasures are proposed to address each of these drawbacks and avoid the further complications that arise from them.

Firstly, parallel slowdown is a deficiency caused by blocking computation resources, which in turn affects all the components of the system. To diagnose and improve a slowdown, examine which resource the process or thread involved is bound by:

  • CPU bound (needs more CPU resources)
  • Memory bound (requires RAM resources)
  • I/O bound (network and/or hard-drive resources)

These resources are limited unless the computation is shared with virtual or remote services. Every one of them is vital, because an access lock on any one of them can close off the path to another resource type. Since the chances of slowdown are high in a multi-threaded environment, it is highly advisable to design applications so that there is as little dependency as possible between the resources accessed by the threads.
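As a minimal illustration of how a shared resource dependency serialises threads, the Python sketch below makes every thread contend for one lock (the counter and iteration counts are hypothetical): the threads produce the correct total, but they effectively run one at a time.

```python
import threading

# Sketch: a single shared lock serialises the "parallel" threads,
# so adding threads adds coordination cost, not useful parallelism.
counter = 0
lock = threading.Lock()

def work(n):
    global counter
    for _ in range(n):
        with lock:  # every thread contends for the same lock
            counter += 1

threads = [threading.Thread(target=work, args=(100_000,)) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # correct total, but achieved with little real concurrency
```

Reducing the scope of such shared state, or removing it entirely, is the practical form of the advice above.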

To counteract the causes and drawbacks of parallel overhead, the main sources that underpin it have to be suppressed. The main sources of parallel overhead are:

  • Inter Processor communication
  • Load imbalance
  • Extra computation

The possible ways of reducing parallel overhead, and thereby increasing parallel performance, are:

1. Data locality can be increased to minimise communication between processors, which in turn reduces the overhead.
2. Load balancing: the distribution of work among the processors should be made uniform, so that the parallelism achieved is less prone to overhead.
3. Modifying the sequential algorithm may introduce extra computational work that leads to overhead. This can be reduced by focusing on parallelizable points such as loops and recursive calls, and making those parts of the algorithm run in parallel more efficiently.

Another line of development is improving the querying techniques used in parallel query execution, for example by reducing the number of CPUs executing in an interdependent database environment, where information may be exchanged through complex connections between the dependent tables. Other strategies, such as performance comparisons and increases in parallel and comparative speedups, are also emerging techniques for boosting the performance of parallel systems.

All the features and techniques discussed above are interconnected, and together they determine the behaviour of parallel applications and further improve their performance and scalability.
