"Parallel computing is a type of computation in which many calculations or processes are carried out simultaneously."
This subfield focuses on the development of algorithms that can be executed simultaneously by multiple processors in a distributed system.
Consensus algorithms: Algorithms such as Paxos and Raft that ensure agreement among the nodes of a distributed system.
Distributed hash tables (DHT): DHT is a decentralized distributed system that stores key-value pairs across multiple nodes.
Byzantine fault tolerance: The property that a distributed system can continue operating correctly even when some nodes fail in arbitrary ways, including behaving maliciously.
Atomicity and consistency: This refers to the management of transactions in a multi-node system, ensuring that these transactions happen in an all-or-nothing manner and are consistent across all nodes.
Routing algorithms: These algorithms are used to direct messages from one node to another in a distributed system.
Replication: Replication is the process of copying data across multiple nodes to ensure high availability and fault tolerance.
Gossip protocols: These protocols are used in a distributed system to propagate information efficiently through the system (a minimal sketch follows after this list).
Message-passing models: These describe how nodes in a distributed system exchange messages with one another.
Leader election: The process of electing a single node responsible for coordinating the actions of the other nodes in a distributed system.
Clock synchronization: Ensuring that all nodes in a distributed system agree on a common notion of time.
Load balancing: This involves distributing the workload evenly among nodes in a distributed system.
Failure detection: The process of determining whether nodes in a distributed system are alive and still participating.
Topology and scalability: How the architecture of the distributed system impacts its ability to scale.
Network protocols: These define how nodes in a distributed system communicate with one another over the network.
Middleware: Software layers that sit between applications and the network, used to manage and scale distributed systems.
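To make the gossip idea concrete (see the gossip protocols item above), here is a minimal Python simulation of push-style gossip. The single rumor, the fan-out of one peer per round, and the uniform random peer choice are illustrative assumptions, not a production protocol.

```python
import random

def gossip_rounds(num_nodes: int, seed: int = 0) -> int:
    """Simulate push-style gossip: each informed node forwards the rumor
    to one peer chosen uniformly at random in every round. Returns the
    number of rounds until all nodes are informed."""
    random.seed(seed)
    informed = {0}  # node 0 starts with the rumor
    rounds = 0
    while len(informed) < num_nodes:
        newly_informed = set()
        for _ in informed:
            newly_informed.add(random.randrange(num_nodes))
        informed |= newly_informed
        rounds += 1
    return rounds

# Gossip spreads exponentially, so this finishes in O(log n) rounds
# with high probability.
print(gossip_rounds(1000))
```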
Consensus algorithms: Consensus algorithms are used in distributed systems to achieve agreement among multiple nodes or processes. Some popular examples include Paxos, Raft, and Practical Byzantine Fault Tolerance (PBFT).
Distributed locking: Distributed locking algorithms provide a way for multiple processes or nodes to synchronize access to a shared resource in a distributed system. Coordination services such as ZooKeeper and Chubby are commonly used to implement distributed locks.
Distributed Hash Tables (DHT): Distributed Hash Tables maintain a decentralized key-value look-up system. They are widely used in peer-to-peer (P2P) applications (e.g. BitTorrent's Kademlia-based DHT) and in distributed storage systems that partition data by consistent hashing (a sketch of which follows after this list).
Leader-election algorithms: Leader-election algorithms provide a way for distributed systems to elect a single leader from a group of nodes. Raft, Paxos, and the Bully algorithm are popular examples.
Byzantine fault tolerance (BFT) algorithms: BFT algorithms are designed to keep a distributed system working correctly even when some nodes are faulty or malicious, a scenario formalized as the Byzantine Generals Problem. PBFT and Byzantine Paxos are examples of such algorithms.
Distributed search: Distributed search algorithms assemble search results from multiple nodes in a distributed system; Google's search engine is a well-known system built this way.
MapReduce: MapReduce is a distributed algorithm used for processing and generating big data sets on a cluster of computers. It breaks down large datasets into smaller chunks that can be processed in parallel (a word-count sketch follows after this list).
Distributed machine learning: Parallel versions of algorithms such as logistic regression, random forests, and neural-network training are used to learn from big data sets across distributed systems.
Distributed sorting: Distributed sorting algorithms are used to sort large data sets across different nodes in a distributed system. Examples include distributed variants of radix sort and bitonic sort.
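As a rough illustration of the DHT item above, the sketch below assigns keys to nodes with consistent hashing on a hash ring; the node names and the choice of MD5 as the hash are illustrative assumptions.

```python
import bisect
import hashlib

def ring_position(value: str) -> int:
    """Map a string to a point on the hash ring (MD5 chosen for illustration)."""
    return int(hashlib.md5(value.encode()).hexdigest(), 16)

class HashRing:
    """Minimal consistent-hashing ring: each key is owned by the first
    node at or after the key's position, wrapping around the ring."""

    def __init__(self, nodes):
        self._ring = sorted((ring_position(n), n) for n in nodes)

    def owner(self, key: str) -> str:
        point = ring_position(key)
        index = bisect.bisect(self._ring, (point, "")) % len(self._ring)
        return self._ring[index][1]

ring = HashRing(["node-a", "node-b", "node-c"])
print(ring.owner("user:42"))  # deterministically routes the key to one node
```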
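To make the MapReduce item concrete, here is a minimal single-process sketch of the map, shuffle, and reduce phases for word counting; a real framework such as Hadoop runs these phases across a cluster, but the data flow is the same.

```python
from collections import defaultdict
from itertools import chain

def map_phase(chunk: str):
    """Map: emit a (word, 1) pair for every word in one input chunk."""
    return [(word, 1) for word in chunk.split()]

def shuffle_phase(pairs):
    """Shuffle: group the intermediate values by key."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: combine the values for each key, here by summing counts."""
    return {key: sum(values) for key, values in groups.items()}

chunks = ["the quick brown fox", "the lazy dog", "the fox"]
pairs = chain.from_iterable(map_phase(chunk) for chunk in chunks)
print(reduce_phase(shuffle_phase(pairs)))
# {'the': 3, 'quick': 1, 'brown': 1, 'fox': 2, 'lazy': 1, 'dog': 1}
```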
"Large problems can often be divided into smaller ones, which can then be solved at the same time."
"There are several different forms of parallel computing: bit-level, instruction-level, data, and task parallelism."
"Parallel computing has become the dominant paradigm in computer architecture, mainly in the form of multi-core processors, due to the physical constraints preventing frequency scaling."
"Parallel computing is closely related to concurrent computing as they are frequently used together, and often conflated, though the two are distinct."
"It is possible to have parallelism without concurrency, and concurrency without parallelism (such as multitasking by time-sharing on a single-core CPU)."
"In parallel computing, a computational task is typically broken down into several, often many, very similar sub-tasks that can be processed independently and whose results are combined afterwards, upon completion."
"Parallel computers can be roughly classified according to the level at which the hardware supports parallelism, with multi-core and multi-processor computers having multiple processing elements within a single machine, while clusters, MPPs, and grids use multiple computers to work on the same task."
"Explicitly parallel algorithms, particularly those that use concurrency, are more difficult to write than sequential ones, because concurrency introduces several new classes of potential software bugs."
"Communication and synchronization between the different subtasks are typically some of the greatest obstacles to getting optimal parallel program performance."
"A theoretical upper bound on the speed-up of a single program as a result of parallelization is given by Amdahl's law, which states that it is limited by the fraction of time for which the parallelization can be utilized."
"As power consumption (and consequently heat generation) by computers has become a concern in recent years, parallel computing has become the dominant paradigm in computer architecture."
"Specialized parallel computer architectures are sometimes used alongside traditional processors, for accelerating specific tasks."
"Concurrency introduces several new classes of potential software bugs, of which race conditions are the most common."
"In some cases parallelism is transparent to the programmer, such as in bit-level or instruction-level parallelism."
"In contrast to parallel computing, in concurrent computing, the various processes often do not address related tasks."
"Typical in distributed computing, the separate tasks may have a varied nature and often require some inter-process communication during execution."
"As power consumption by computers has become a concern in recent years, parallel computing has become the dominant paradigm in computer architecture."
"Communication and synchronization between the different subtasks are typically some of the greatest obstacles to getting optimal parallel program performance."
"Explicitly parallel algorithms, particularly those that use concurrency, are more difficult to write than sequential ones, because concurrency introduces several new classes of potential software bugs."