Distributed Computing


Distributed computing focuses on the design and analysis of algorithms for coordinating and managing the behavior of multiple computers or devices, particularly in situations where resources are distributed or shared.

Distributed Systems: The overarching subject area, covering the motivations for building distributed systems, their advantages and drawbacks, design considerations, etc.
Network Architecture: Covers the underlying network structure and the various layers, protocols, services, and algorithms used to build and operate network communication.
Communication Models: Explains the fundamental models of communication used in distributed systems, such as direct communication, message passing, publish/subscribe, and remote procedure call (RPC).
Synchronization Mechanisms: Describes the various synchronization models and algorithms used to coordinate distributed computation and maintain consistency.
Consensus Protocols: Procedures used to achieve agreement on a single data value among distributed processes or systems.
Distributed Algorithms: Focuses on the development of algorithms specially designed to run in distributed systems, where processes communicate and coordinate with one another over the network.
Fault Tolerance: Refers to the capability of the distributed system to continue functioning even in the presence of failures of its constituent components.
Replication: Explores the mechanisms used to maintain copies of data at multiple sites to increase system reliability and facilitate parallelism.
Grid Computing: A distributed computing paradigm that leverages the pooled resources of a number of geographically dispersed computers to solve complex problems.
Cloud Computing: A networking technology whereby computing-as-a-service is delivered over the internet using a remote server infrastructure.
MapReduce: A programming model for parallel processing and analysis of distributed data on large clusters.
Parallel Computing: Addresses parallel algorithms and architectures used to build highly efficient, high-performance systems.
High-Performance Computing: Deals with how to build distributed systems that can handle the most demanding computational workloads, such as scientific simulations, data analysis, and artificial intelligence.
Peer-to-Peer Computing: Describes the mechanisms used to create distributed systems that do not rely on central servers, but instead employ direct communication between peers.
Edge Computing: Processing data on a computer or other device at the edge of a network, close to where the data is created, to reduce latency and speed up data transfer and storage.
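The MapReduce model mentioned above can be illustrated with a minimal single-machine word-count sketch. The function names here are illustrative, not part of any framework's API; a real system such as Hadoop runs the map and reduce phases in parallel across a cluster, with the shuffle step moving intermediate pairs between machines:

```python
from collections import defaultdict

def map_phase(document):
    # Mapper: emit a (word, 1) pair for each word in the document.
    return [(word, 1) for word in document.split()]

def shuffle(mapped_pairs):
    # Shuffle: group intermediate values by key, as the framework
    # would do between the map and reduce phases.
    groups = defaultdict(list)
    for key, value in mapped_pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reducer: sum the counts recorded for each word.
    return {word: sum(counts) for word, counts in groups.items()}

documents = ["the quick brown fox", "the lazy dog", "the fox"]
mapped = [pair for doc in documents for pair in map_phase(doc)]
counts = reduce_phase(shuffle(mapped))
print(counts["the"])  # → 3
```

Because each mapper works on its own document and each reducer on its own key, both phases parallelize naturally over a cluster.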
Client-Server Computing: This is the most common type of distributed computing where one or more clients request services from a central server.
Peer-to-Peer Computing: In this type of distributed computing, each participant (node) can act as both a server and a client. Participants share resources such as processing power, storage, and bandwidth, and can communicate with each other directly.
Grid Computing: In this type of distributed computing, resources (supercomputers, storage systems, data repositories, etc.) are networked together to form a grid, allowing multiple users to access and share resources across organizational boundaries.
Cloud Computing: Cloud computing refers to the delivery of computing services (servers, storage, databases, software, analytics, etc.) over the internet on a pay-as-you-go basis.
Volunteer Computing: Volunteer computing is a type of distributed computing where participants donate their unused computing resources to contribute to a larger project (e.g., SETI@Home).
Cluster Computing: Cluster computing is a type of distributed computing where multiple computers are connected to work together as a single system to solve complex computing problems (e.g., high-performance computing).
Fog Computing: Fog computing is a decentralized approach to computing that extends the capabilities of cloud computing to the edge of the network, enabling low-latency, real-time processing of data.
Mobile Computing: Mobile computing refers to the use of mobile devices (smartphones, tablets, wearables, etc.) to access and share computing resources and data over a distributed network.
Edge Computing: Edge computing is a computing architecture where data processing and storage are moved closer to the edge of the network, reducing latency and improving efficiency.
Ad-hoc Computing: Ad-hoc computing refers to the creation of a temporary network of wireless devices to perform a specific task without the need for a central infrastructure.
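The client-server pattern described above can be sketched in a few lines with standard sockets. This is a deliberately minimal, single-request echo server on localhost; real deployments accept many concurrent clients and run server and clients on separate machines:

```python
import socket
import threading

def server(sock):
    # Central server: accept one client and answer its request
    # (here, by echoing the bytes back uppercased).
    conn, _ = sock.accept()
    with conn:
        request = conn.recv(1024)
        conn.sendall(request.upper())

# Bind the server to an ephemeral port on localhost.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen(1)
port = listener.getsockname()[1]

thread = threading.Thread(target=server, args=(listener,))
thread.start()

# Client: connect to the central server, send a request, read the reply.
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"hello")
    reply = client.recv(1024)

thread.join()
listener.close()
print(reply)  # → b'HELLO'
```

The asymmetry is the defining feature: the server passively waits and serves, while clients initiate every exchange.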
"A distributed system is a system whose components are located on different networked computers, which communicate and coordinate their actions by passing messages to one another."
"Distributed computing is a field of computer science that studies distributed systems."
"The components of a distributed system interact with one another in order to achieve a common goal."
"Three significant challenges of distributed systems are: maintaining concurrency of components, overcoming the lack of a global clock, and managing the independent failure of components."
"When a component of one system fails, the entire system does not fail."
"Examples of distributed systems vary from SOA-based systems to massively multiplayer online games to peer-to-peer applications."
"A computer program that runs within a distributed system is called a distributed program."
"Distributed programming is the process of writing such programs."
"There are many different types of implementations for the message passing mechanism, including pure HTTP, RPC-like connectors, and message queues."
"Distributed computing also refers to the use of distributed systems to solve computational problems."
"In distributed computing, a problem is divided into many tasks."
"Each task is solved by one or more computers, which communicate with each other via message passing."
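The last two quotes — a problem divided into many tasks, each solved by computers that communicate via message passing — can be sketched with worker threads exchanging messages through queues. Threads on one machine stand in for networked computers here; the divide-distribute-collect pattern is the same:

```python
import queue
import threading

def worker(tasks, results):
    # Each worker repeatedly receives a task message, solves it,
    # and sends the partial result back as another message.
    while True:
        chunk = tasks.get()
        if chunk is None:  # sentinel: no more work
            break
        results.put(sum(chunk))

# Divide one problem (summing 0..99) into many independent tasks.
numbers = list(range(100))
chunks = [numbers[i:i + 10] for i in range(0, 100, 10)]

tasks, results = queue.Queue(), queue.Queue()
workers = [threading.Thread(target=worker, args=(tasks, results))
           for _ in range(4)]
for w in workers:
    w.start()
for chunk in chunks:
    tasks.put(chunk)
for _ in workers:
    tasks.put(None)  # one sentinel per worker
for w in workers:
    w.join()

# Combine the partial results into the final answer.
total = sum(results.get() for _ in chunks)
print(total)  # → 4950
```

In a genuinely distributed setting the queues would be replaced by a network transport — pure HTTP, an RPC-style connector, or a message queue, as the quote above notes.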