Future of Parallel Computing


Parallel computing is a field undergoing tremendous progress, with the potential to scale great heights. Many of the problems that will arise in the future are inherently parallel in nature: problems that cannot be solved efficiently by any serial approach and instead require a parallel workflow can be called parallel problems. Extensive study has shown that many of today's problems can not only be solved, but solved more efficiently, with the use of parallelism.

Parallel computing also provides technologies for solving problems encountered in distributed systems and in systems connected over complex networks. NP-hard problems are problems for which no known algorithm runs in polynomial time; in practice they demand enormous computing resources that traditional systems cannot supply. Supercomputers are usually adopted for this purpose: highly interconnected systems with large numbers of processing units and correspondingly vast amounts of memory.

In practice, parallel computing can at times push system efficiency above 100%, a phenomenon called super-linear speedup. It occurs when the total amount of work done by n processors is less than the total amount of work a single processor would have to perform. Such efficiency is impossible to achieve in theory, yet it can arise in practice. For example, suppose we have 8 processors, each with a 2 MB cache, and a computation that uses 6 MB of data. Run sequentially on one processor, the computation spends a great deal of time moving data between CPU, cache, and RAM, since the CPU can hold only 2 MB of the data at a time. With 8 processors, that data-movement overhead is eliminated, because the whole working set now fits in the combined cache of the 8 processors (16 MB in total). This is how super-linear speedup is achieved.
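The arithmetic behind this can be sketched in a few lines. The timings below are purely hypothetical (assumed for illustration); the point is the relationship between speedup, processor count, and efficiency.

```python
# Hypothetical timings illustrating super-linear speedup (values assumed).
# Serial run: 6 MB of data cycles through a single 2 MB cache, so cache
# misses inflate the runtime. Parallel run: the data fits entirely in the
# combined 16 MB of cache across 8 processors.
serial_time = 120.0    # seconds (assumed, includes cache-thrashing penalty)
parallel_time = 12.0   # seconds (assumed)
n_processors = 8

speedup = serial_time / parallel_time    # 10.0
efficiency = speedup / n_processors      # 1.25, i.e. 125%

print(f"speedup = {speedup:.2f}x, efficiency = {efficiency:.0%}")
# A speedup greater than n_processors marks the super-linear regime.
```

Whenever the measured speedup exceeds the processor count, efficiency exceeds 100% and the run is super-linear.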

We have observed that whenever serial execution performs below an acceptable threshold, a parallel algorithm may be used to pursue super-linear speedup. In practice, however, this is rarely possible, because the memory requirements of each process usually far exceed the size of each cache. To understand the future scope of parallel computing and algorithms, we consider the alternative approaches that are already beginning to boom. The two major competing computing architectures under study are superconducting computers and quantum computers.

Superconducting computing has been an active field of research for more than 50 years, yet despite many breakthroughs there has not been a single commercial implementation of such a computer to date. These machines are associated with exascale computing, a goal that returned to the research agenda in late 2015; exascale performance is believed to be on the order of that of a human brain. These systems do not rely on quantum effects for computation, which is what distinguishes them from quantum computers.

Quantum computing, on the other hand, is a field of study sometimes described as instant computing. These systems are inherently parallel, but not in the conventional sense; their processing capacity is measured in qubits. According to one study, "the overall computational power of a quantum computer doubles every time a qubit is added". It is often claimed that a 300-qubit quantum computer would be more powerful than all the computers in the world put together.
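The doubling claim follows from the state space of a qubit register: an n-qubit register spans 2^n basis states, so each added qubit doubles it. A minimal sketch of this arithmetic:

```python
# The state space of an n-qubit register has 2**n basis states, so adding
# one qubit doubles it -- the sense in which "power doubles per qubit".
def state_space(n_qubits: int) -> int:
    return 2 ** n_qubits

# Adding one qubit doubles the state space.
assert state_space(11) == 2 * state_space(10)

# A 300-qubit register spans 2**300 (about 2e90) basis states -- more than
# the roughly 1e80 atoms estimated in the observable universe, which is
# where claims about 300-qubit machines come from.
print(f"2^300 = {state_space(300):.3e}")
```

Note that this counts basis states, not useful classical work: extracting answers from that state space is the hard part of quantum algorithm design.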

The two biggest contenders in this field are IBM and Google, both of which have already shifted their focus toward superconducting computers rather than conventional parallel and distributed machines. They operate these systems at sub-zero temperatures to eliminate electrical resistance and provide better throughput. Parallelism may remain of great use in many fields and computers, but parallel computers and distributed systems as we know them may be on the verge of extinction a decade down the line. It is safe to say that the introduction of the aforementioned computing methodologies proves that parallelism is here to stay, though it may soon enough be replaced by quantum parallelism.
