4 - Linux Process
A process is an instance of a running program. It has its own memory space, system resources, and execution state.
PID (Process ID): Each process in Linux is assigned a unique identifier called the PID. This helps the system manage and track processes.
PPID (Parent Process ID): The PID of the process that started (i.e., is the parent of) the current process.
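As a quick illustration, here is a minimal C sketch (an assumed example, not part of any CloudScope tooling) that prints a process's PID and PPID and shows how fork() creates a child whose PPID is the parent's PID:

```c
/* Minimal PID/PPID demo (assumed example): after fork(), two processes run
 * this code, and the child's PPID is the parent's PID. */
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    printf("parent: pid=%d ppid=%d\n", getpid(), getppid());

    pid_t pid = fork();                 /* create a child process */
    if (pid == 0) {
        /* child branch: its PPID matches the parent's PID printed above */
        printf("child:  pid=%d ppid=%d\n", getpid(), getppid());
        return 0;
    }
    waitpid(pid, NULL, 0);              /* parent waits so the child is reaped */
    return 0;
}
```

Compiling it with gcc and running it prints both lines, with the child's PPID equal to the parent's PID.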
Process Types:

Interactive Processes
- Definition: Processes that require user interaction to perform their tasks.
- Characteristics: They typically handle user input directly and provide interactive feedback.
- Examples: Command-line shells, GUI applications such as text editors.

Batch Processes
- Definition: Processes that run without user interaction and usually execute a batch of tasks.
- Characteristics: They are often scheduled to run during off-peak hours or as part of automated workflows.
- Examples: Data processing scripts, scheduled batch jobs such as cron jobs.

Real-time Processes
- Definition: Processes that require immediate and deterministic responses to external events.
- Characteristics: They operate under strict timing constraints and at high priority to ensure timely execution.
- Examples: Control systems in industrial applications, real-time data processing tasks.

Daemon Processes
- Definition: Background processes that start at boot time and run continuously or at scheduled intervals (see the sketch after this list).
- Characteristics: They do not interact directly with users and typically provide system services.
- Examples:
  - `cron`: manages scheduled tasks.
  - `sshd`: handles incoming SSH connections.
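The sketch below (an assumed, simplified example; real daemons such as sshd do considerably more, and services managed by systemd often skip self-daemonizing) shows the classic double-fork technique a traditional daemon uses to detach from its controlling terminal:

```c
/* Minimal daemonization sketch (assumed example): double-fork, start a new
 * session, and detach stdio so the process keeps running in the background. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();                     /* first fork: parent exits, child continues */
    if (pid < 0) exit(EXIT_FAILURE);
    if (pid > 0) exit(EXIT_SUCCESS);

    if (setsid() < 0) exit(EXIT_FAILURE);   /* new session: drop the controlling terminal */

    pid = fork();                           /* second fork: can never reacquire a terminal */
    if (pid < 0) exit(EXIT_FAILURE);
    if (pid > 0) exit(EXIT_SUCCESS);

    umask(0);
    if (chdir("/") < 0) exit(EXIT_FAILURE); /* do not pin the directory we started in */

    int devnull = open("/dev/null", O_RDWR);/* detach stdio from any terminal */
    dup2(devnull, STDIN_FILENO);
    dup2(devnull, STDOUT_FILENO);
    dup2(devnull, STDERR_FILENO);

    sleep(60);                              /* placeholder for the daemon's real work loop */
    return 0;
}
```

After launching it, `ps -ef` shows the process running with no controlling terminal, reparented away from the shell that started it.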
Process States:
- Running (R): The process is currently executing on a CPU or is ready to run.
- Sleeping/Waiting (S): The process is waiting for an event or condition (e.g., I/O to complete).
- Stopped (T): The process has been stopped, usually by receiving a signal such as SIGSTOP.
- Zombie (Z): The process has finished executing but still has an entry in the process table because its parent has not yet read its exit status (see the sketch after this list).
- Dead: The process has terminated and no longer appears in the process table.
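Zombies are easy to reproduce on purpose. In this assumed sketch, the child exits immediately while the parent delays its wait() call, so the child lingers in state Z until it is reaped:

```c
/* Zombie demo (assumed example): the child exits at once, but its process-table
 * entry stays in state Z until the parent calls waitpid(). */
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();
    if (pid == 0)
        exit(0);                            /* child finishes immediately */

    printf("child %d is now a zombie; try: ps -o pid,ppid,stat,comm -p %d\n", pid, pid);
    sleep(30);                              /* window to inspect the Z state from another shell */

    int status;
    waitpid(pid, &status, 0);               /* reaping removes the zombie entry */
    printf("child reaped, exit status %d\n", WEXITSTATUS(status));
    return 0;
}
```

While the parent sleeps, `ps` shows the child with STAT `Z` and its command marked as defunct; after the waitpid() call the entry disappears.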
Process Management Commands:
ps
: Displays information about running processes.
Example: ps aux shows detailed information about all processes.
top
: Provides a dynamic, real-time view of system processes, including CPU and memory usage.
Example: top displays a list of processes with their resource usage.
htop
: An interactive process viewer that is a more user-friendly alternative to top.
Example: htop provides a colorful, interactive display of processes.
pstree
: Shows processes in a tree format, indicating parent-child relationships.
Example: pstree displays a hierarchical view of processes.
kill
: Sends signals to processes, commonly used to terminate processes.
Example: kill -9 PID forcefully kills the process with the specified PID.
pkill
: Kills processes based on name or other attributes.
Example: pkill process_name terminates all processes matching the name.
Process Control

Signals: Processes can receive signals that tell them to perform various actions (e.g., terminate, pause). Signals can be sent with commands such as `kill` and are used for process control and communication.

Common signals (see the handler sketch after this list):
- SIGTERM (15): Request for termination (graceful shutdown; the process can catch it and clean up).
- SIGKILL (9): Forceful termination (immediate; cannot be caught or ignored).
- SIGSTOP (19): Pauses the process (cannot be caught; the process can be resumed with SIGCONT).
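The following assumed sketch shows why SIGTERM is the polite option: the process installs a handler and shuts down cleanly, something that is impossible with SIGKILL.

```c
/* Graceful-shutdown sketch (assumed example): catch SIGTERM, set a flag, and
 * exit cleanly from the main loop. SIGKILL could never be handled this way. */
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static volatile sig_atomic_t shutting_down = 0;

static void handle_sigterm(int signo) {
    (void)signo;
    shutting_down = 1;                      /* only set a flag inside the handler */
}

int main(void) {
    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = handle_sigterm;
    sigaction(SIGTERM, &sa, NULL);          /* register the handler */

    printf("pid %d waiting; send SIGTERM with: kill %d\n", getpid(), getpid());
    while (!shutting_down)
        pause();                            /* sleep until a signal arrives */

    printf("SIGTERM received, cleaning up and exiting\n");
    return 0;
}
```

Sending `kill <pid>` (SIGTERM is the default signal) prints the cleanup message, while `kill -9 <pid>` terminates the process with no chance to run the handler.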
Common problem conditions in Linux processes and threads:

1. Race Condition

Definition: A race condition occurs when the outcome of a program depends on the sequence or timing of uncontrollable events, such as the order in which concurrent threads or processes execute. It typically happens when multiple threads or processes access shared resources concurrently without proper synchronization.

Characteristics:
- Unpredictable Behavior: The result of the program can vary from one run to the next because the order of thread/process execution is non-deterministic.
- Data Corruption: If multiple threads/processes read and write shared data without synchronization, data inconsistency and corruption can occur.
- Common Scenarios: Updating a shared variable, accessing a file, or modifying data structures concurrently.

Prevention (see the sketch after this list):
- Mutexes: Use mutexes to protect critical sections of code that access shared resources.
- Atomic Operations: Use atomic operations for simple updates to shared variables.
- Condition Variables: Use condition variables to coordinate thread execution.
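This assumed pthreads sketch shows both the problem and the mutex fix: two threads increment a shared counter, and without the lock the final value usually comes out below the expected total because the read-modify-write steps interleave.

```c
/* Race-condition demo with the mutex fix (assumed example). Build with:
 * gcc race.c -o race -pthread */
#include <pthread.h>
#include <stdio.h>

static long counter = 0;
static pthread_mutex_t counter_lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&counter_lock);  /* protect the read-modify-write */
        counter++;
        pthread_mutex_unlock(&counter_lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld (expected 2000000)\n", counter);
    return 0;
}
```

Commenting out the lock/unlock pair makes the race visible: the printed total drops and changes between runs.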
2. Deadlocks

Definition: A deadlock is a situation where two or more processes or threads are unable to proceed because each is waiting for the other to release resources. None of the involved processes can continue, effectively bringing that part of the system to a halt.

Characteristics:
- Circular Wait: A set of processes wait for resources held by each other, forming a circular chain.
- Resource Holding: Processes hold resources while waiting for additional resources that are currently held by other processes.
- Mutual Exclusion: At least one resource is held in a non-shareable mode (only one process can use it at a time).

Prevention (see the sketch after this list):
- Resource Ordering: Always acquire locks in a consistent, predefined order.
- Timeouts: Use timeouts when trying to acquire locks to avoid indefinite waiting.
- Deadlock Detection: Implement mechanisms to detect and recover from deadlocks, such as resource allocation graphs.
- Avoid Nested Locks: Minimize or avoid nested locking where possible to reduce the risk of deadlocks.
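Here is an assumed sketch of the resource-ordering rule: both threads always take lock_a before lock_b, so a circular wait cannot form. Reversing the order in just one thread is the classic way to reproduce a deadlock.

```c
/* Lock-ordering sketch (assumed example). Build with: gcc order.c -o order -pthread */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

static void *task(void *name) {
    for (int i = 0; i < 3; i++) {
        pthread_mutex_lock(&lock_a);        /* consistent order: always a, then b */
        pthread_mutex_lock(&lock_b);
        printf("%s holds both locks\n", (const char *)name);
        pthread_mutex_unlock(&lock_b);      /* release in reverse order */
        pthread_mutex_unlock(&lock_a);
        usleep(1000);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, task, "thread-1");
    pthread_create(&t2, NULL, task, "thread-2");
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}
```

If one thread instead locked lock_b first, the two threads could each grab one lock and wait forever for the other, which is exactly the circular wait described above.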
3. Livelocks

Definition: A livelock occurs when processes or threads keep changing state in response to one another but make no progress. Unlike a deadlock, where processes sit blocked waiting for resources indefinitely, in a livelock the processes are actively trying to make progress but cannot, because they constantly interfere with each other.

Characteristics:
- Active State: Processes or threads are not idle; they are continuously trying to complete their tasks.
- Continuous Change: They keep changing their state or actions in response to each other, but no actual progress towards the intended goal is made.
- Interference: The interference usually happens because processes or threads constantly adjust their actions in response to the states or actions of others.

Prevention (see the sketch after this list):
- Backoff Strategies: Have processes or threads wait, ideally for a randomized interval, before retrying, reducing the chance of livelock.
- Avoid Overly Aggressive Retries: Ensure that retry logic does not lead to continuous interference.
- Proper Resource Allocation: Use resource allocation strategies that prevent processes from continually interfering with each other.
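The assumed sketch below shows a livelock-prone pattern and the backoff that breaks it: each thread grabs its first lock, tries the second, and on failure releases everything and retries. Without the randomized sleep, two threads taking the locks in opposite orders can keep releasing and retrying in lockstep without ever finishing.

```c
/* Randomized-backoff sketch (assumed example). Build with: gcc backoff.c -o backoff -pthread */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

static void lock_both(pthread_mutex_t *first, pthread_mutex_t *second, const char *name) {
    for (;;) {
        pthread_mutex_lock(first);
        if (pthread_mutex_trylock(second) == 0)
            return;                          /* got both locks */
        pthread_mutex_unlock(first);         /* give way instead of holding on */
        printf("%s backing off and retrying\n", name);
        usleep((useconds_t)(rand() % 1000)); /* randomized delay breaks the lockstep retries */
    }
}

static void *task1(void *arg) {
    (void)arg;
    lock_both(&lock_a, &lock_b, "thread-1"); /* opposite order to thread-2 on purpose */
    printf("thread-1 acquired both locks\n");
    pthread_mutex_unlock(&lock_b);
    pthread_mutex_unlock(&lock_a);
    return NULL;
}

static void *task2(void *arg) {
    (void)arg;
    lock_both(&lock_b, &lock_a, "thread-2");
    printf("thread-2 acquired both locks\n");
    pthread_mutex_unlock(&lock_a);
    pthread_mutex_unlock(&lock_b);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, task1, NULL);
    pthread_create(&t2, NULL, task2, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}
```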
4. Starvation

Definition: Starvation occurs when a process or thread is perpetually denied the resources it needs to proceed, even though other processes or threads keep receiving those resources. It often happens when resources are repeatedly allocated to other processes while the starved process is ignored or delayed.

Characteristics:
- Resource Denial: A process or thread cannot gain access to necessary resources because they are continuously allocated to other processes.
- Prolonged Waiting: The affected process or thread waits indefinitely, or for an unreasonably long time, while other processes or threads repeatedly access the resources.
- Priority Issues: Starvation often arises with priority-based scheduling, where lower-priority processes may be starved of CPU time.

Example: In a system where higher-priority threads constantly receive CPU time, a lower-priority thread might never get a chance to execute, leading to starvation.

Prevention (see the sketch after this list):
- Fair Scheduling Algorithms: Use scheduling algorithms that give all processes or threads a fair share of resources; Round Robin scheduling, for example, ensures that every process gets a chance to execute.
- Aging: Gradually increase the priority of waiting processes over time so that they eventually receive resources.
- Resource Allocation Policies: Design policies that prevent indefinite postponement, ensuring every process eventually gets the resources it needs.
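The assumed, heavily simplified simulation below illustrates aging (it is a toy model, not how the Linux scheduler is implemented): each tick the scheduler runs the task with the highest effective priority, which is the base priority plus a bonus that grows while a task waits. Without the bonus, task C would never run.

```c
/* Aging sketch (assumed toy scheduler, not a real Linux API). */
#include <stdio.h>

#define NTASKS    3
#define TICKS     12
#define AGE_BONUS 1        /* effective priority gained per tick spent waiting */

int main(void) {
    const char *name[NTASKS] = {"A", "B", "C"};
    int base[NTASKS]   = {5, 5, 1};           /* C starts far behind A and B */
    int waited[NTASKS] = {0, 0, 0};

    for (int tick = 0; tick < TICKS; tick++) {
        int chosen = 0;
        for (int i = 1; i < NTASKS; i++) {
            int eff_i = base[i] + AGE_BONUS * waited[i];
            int eff_c = base[chosen] + AGE_BONUS * waited[chosen];
            if (eff_i > eff_c)
                chosen = i;                    /* highest effective priority wins */
        }
        printf("tick %2d: run %s (effective priority %d)\n",
               tick, name[chosen], base[chosen] + AGE_BONUS * waited[chosen]);
        for (int i = 0; i < NTASKS; i++)
            waited[i] = (i == chosen) ? 0 : waited[i] + 1;  /* waiting tasks age */
    }
    return 0;
}
```

Setting AGE_BONUS to 0 reproduces starvation: the low-priority task C is never scheduled.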
Scheduling Algorithms in Operating Systems

Preemptive Scheduling Algorithms
In these algorithms, processes are assigned priorities. Whenever a higher-priority process arrives, the lower-priority process currently occupying the CPU is preempted: it releases the CPU, and the higher-priority process takes over for its execution.

Non-Preemptive Scheduling Algorithms
In these algorithms, a process cannot be preempted: once it is running on the CPU, it releases the CPU only by voluntarily context switching (e.g., blocking) or by terminating. These algorithms are often used when hardware limitations make preemption impractical.
Types of Scheduling Algorithms in OS (an FCFS sketch follows this list):
- First Come First Serve (FCFS)
- Shortest Job First (SJF)
- Round Robin (RR)
- Shortest Remaining Time First (SRTF)
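As a small worked example, the assumed sketch below computes waiting and turnaround times under FCFS for three processes with made-up burst times that all arrive at t = 0: each process simply waits for the total burst time of everything queued ahead of it.

```c
/* FCFS sketch (assumed example with made-up burst times). */
#include <stdio.h>

int main(void) {
    const char *name[] = {"P1", "P2", "P3"};
    int burst[] = {10, 5, 8};          /* CPU time each process needs; all arrive at t = 0 */
    int n = 3;

    int waiting = 0;                   /* time consumed by earlier processes */
    for (int i = 0; i < n; i++) {
        int turnaround = waiting + burst[i];
        printf("%s: waiting = %2d, turnaround = %2d\n", name[i], waiting, turnaround);
        waiting += burst[i];           /* the next process waits for this one too */
    }
    return 0;
}
```

With these numbers the waiting/turnaround times are 0/10, 10/15, and 15/23, which also shows the main drawback of FCFS: a long first job delays everything queued behind it.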
Ref: https://www.scaler.com/topics/operating-system/scheduling-algorithms-in-os/