Inter Process Communication
Inter-process Communication (IPC) is a mechanism that allows processes to communicate with each other and synchronize their actions. IPC is an essential concept in both operating system design and distributed systems, enabling efficient resource sharing, data exchange, and task coordination. IPC can be implemented through various mechanisms such as shared memory, message passing, pipes, sockets, and message queues. This article discusses the primary IPC mechanisms: Pipes, Sockets, Message Queues, and Shared Memory.
A pipe is a method of inter-process communication that allows data to be transferred in one direction between processes. Pipes are typically used for communication between related processes, such as parent-child processes.
- Anonymous Pipes: These are used for communication between processes that share a common ancestor. Data written by one process to the pipe can be read by another process, but only in one direction.
- Named Pipes (FIFOs): Named pipes allow communication between processes that do not share a common ancestor. Named pipes appear as files in the filesystem, making them accessible by processes that know the pipe's name, even if they are not related.
- Unidirectional Communication: Data flows in only one direction.
- Blocking Behavior: When the pipe buffer is full, the writing process blocks until the consumer reads from the pipe; likewise, a read on an empty pipe blocks until data is written.
- Simple Use Case: Pipes are commonly used in producer-consumer scenarios, where one process produces data, and another consumes it.
- A producer process writes data into a pipe, and a consumer process reads the data from the pipe.
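Below is a minimal sketch of that producer-consumer pattern using an anonymous pipe, assuming a POSIX system where `os.fork()` is available (this will not run on Windows); the item strings are placeholders.

```python
import os

# Create an anonymous pipe: read_fd is the read end, write_fd is the write end.
read_fd, write_fd = os.pipe()

pid = os.fork()
if pid == 0:
    # Child acts as the consumer: close the unused write end, then read until EOF.
    os.close(write_fd)
    with os.fdopen(read_fd, "r") as reader:
        for line in reader:
            print(f"consumer got: {line.strip()}")
    os._exit(0)
else:
    # Parent acts as the producer: close the unused read end, then write items.
    os.close(read_fd)
    with os.fdopen(write_fd, "w") as writer:
        for i in range(3):
            writer.write(f"item {i}\n")
    os.waitpid(pid, 0)  # Wait for the consumer to finish.
```

Closing the unused ends in each process matters: the consumer only sees end-of-file (and exits its loop) once every write end of the pipe has been closed.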
A socket is an endpoint for communication between two machines or processes. Sockets enable network-based or local inter-process communication by using the TCP/IP protocol stack or other communication protocols.
- TCP Sockets: Provide reliable, stream-oriented communication. These sockets guarantee that data is received in the same order it was sent, ensuring reliable data transmission between processes.
- UDP Sockets: Provide faster, less reliable communication. UDP sockets do not guarantee the order or delivery of data, making them suitable for real-time applications like video streaming or online gaming.
- Bidirectional Communication: Data can flow in both directions between the two connected processes (client and server).
- Client-Server Model: Sockets are widely used in client-server models where the server listens for incoming connections, and clients initiate requests.
- Network Communication: Sockets can be used for communication over a local area network (LAN) or across the internet.
- A client process sends a request through its socket, and the server process responds over the same connection.
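A minimal sketch of that request-response exchange over a local TCP socket, using Python's standard `socket` module; the port 50007 and the message text are arbitrary, and the server runs in a background thread here only so the example fits in one script (a real client and server would be separate processes or machines).

```python
import socket
import threading

HOST, PORT = "127.0.0.1", 50007   # arbitrary local address and port for this sketch

ready = threading.Event()

def server():
    # Server: bind, listen for one connection, and send back a reply.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen(1)
        ready.set()                       # Tell the client it is safe to connect.
        conn, _ = srv.accept()
        with conn:
            request = conn.recv(1024)
            conn.sendall(b"reply to: " + request)

threading.Thread(target=server, daemon=True).start()
ready.wait()

# Client: connect, send a request, and read the reply over the same connection.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, PORT))
    cli.sendall(b"hello")
    print(cli.recv(1024).decode())        # prints "reply to: hello"
```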
A message queue is a method of inter-process communication that stores messages temporarily until they are retrieved by another process. Message queues allow asynchronous communication between processes, making them useful in scenarios where processes are not synchronized.
- FIFO (First In, First Out): Messages are processed in the order in which they are placed in the queue.
- Asynchronous Communication: The sender does not wait for the receiver to acknowledge the message, allowing both processes to operate independently.
- Blocking and Non-blocking Operations: Message queues can operate in blocking mode (a send waits while the queue is full, and a receive waits while the queue is empty) or non-blocking mode (the call returns immediately with a status instead of waiting).
- A producer process places messages in a message queue, and a consumer process retrieves and processes them when needed.
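The sketch below illustrates that pattern with Python's `multiprocessing.Queue`, used here as a stand-in for an OS-level message queue (POSIX or System V queues behave analogously); the `None` sentinel for signalling "no more messages" is an illustrative convention, not part of any queue API.

```python
from multiprocessing import Process, Queue

def producer(queue):
    # Place messages in the queue; put() returns without waiting for the consumer.
    for i in range(5):
        queue.put(f"message {i}")
    queue.put(None)  # Sentinel: tells the consumer no more messages are coming.

def consumer(queue):
    # Retrieve messages in FIFO order; get() blocks while the queue is empty.
    while True:
        msg = queue.get()
        if msg is None:
            break
        print(f"consumed: {msg}")

if __name__ == "__main__":
    q = Queue()
    p = Process(target=producer, args=(q,))
    c = Process(target=consumer, args=(q,))
    p.start(); c.start()
    p.join(); c.join()
```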
Shared memory is a memory segment that is shared between two or more processes. It allows processes to access the same block of memory, enabling fast data exchange without copying data through the kernel the way pipes or message queues do.
- Fastest Communication Method: Since processes access the shared memory space directly, data does not have to be copied through the kernel on each transfer, keeping overhead minimal.
- Synchronization Required: Since multiple processes can access shared memory simultaneously, proper synchronization mechanisms (e.g., semaphores, mutexes) are required to avoid data corruption or race conditions.
- Bidirectional Communication: Both processes can read and write to the shared memory.
- Multiple processes use shared memory to store large datasets that need to be accessed and modified by all processes, such as in high-performance computing applications.
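A minimal sketch of shared access with synchronization, using Python's `multiprocessing.shared_memory` (available from Python 3.8) and a `Lock` in place of a semaphore or mutex; the 8-byte segment size, the four workers, and the counter at index 0 are arbitrary choices for illustration.

```python
from multiprocessing import Process, Lock
from multiprocessing import shared_memory

def worker(name, lock, index):
    # Attach to the existing shared segment by name and increment one byte.
    shm = shared_memory.SharedMemory(name=name)
    with lock:                     # Synchronize access to avoid a race condition.
        shm.buf[index] += 1
    shm.close()

if __name__ == "__main__":
    # Create a small shared segment; every process maps the same memory.
    shm = shared_memory.SharedMemory(create=True, size=8)
    shm.buf[0] = 0                 # Initialize the shared counter.
    lock = Lock()

    procs = [Process(target=worker, args=(shm.name, lock, 0)) for _ in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()

    print("counter:", shm.buf[0])  # Expect 4: each worker incremented once.
    shm.close()
    shm.unlink()                   # Release the shared segment.
```

Without the lock, two workers could read the same counter value and both write back the same result, losing an increment; this is exactly the race condition the synchronization requirement above refers to.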
Feature | Pipes | Sockets | Message Queues | Shared Memory |
---|---|---|---|---|
Direction | Unidirectional | Bidirectional | Unidirectional (per queue) | Bidirectional |
Type of Communication | Local (related processes; named pipes allow unrelated ones) | Local or network (client-server) | Local, asynchronous | Local, shared access |
Data Buffering | Buffered in the pipe buffer | Buffered in socket buffers | Buffered in the queue | No buffering; direct access to memory |
Reliability | Reliable, in-order delivery | Reliable (TCP), unreliable (UDP) | Reliable, FIFO order | Depends on synchronization mechanisms |
Use Cases | Simple producer-consumer tasks | Web servers, chat systems, real-time applications | Message-based communication in distributed systems | High-performance applications, shared data access |
Performance | Moderate | Moderate to High | Moderate | High |