Preemptive vs. Non-Preemptive Scheduling


Preemptive scheduling and non-preemptive scheduling are two fundamental approaches used in CPU scheduling to manage how processes are allocated CPU time. Understanding the differences between these two approaches is crucial for designing and optimizing operating systems to ensure efficient process management and system performance.

Preemptive Scheduling

Preemptive scheduling allows the operating system to interrupt and suspend the currently running process to allocate the CPU to another process. This ensures that high-priority processes can be executed without being blocked by lower-priority processes.
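
To make the mechanism concrete, below is a minimal Python sketch of a time-sliced, preemptive dispatch loop: a process runs for at most one quantum, is suspended, and goes to the back of the ready queue so another process can run. The Process class, the 2-tick quantum, and the sample workload are illustrative assumptions, not details of any particular operating system.

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class Process:
    name: str
    remaining: int  # CPU ticks still needed (illustrative unit)

def preemptive_dispatch(processes, quantum=2):
    """Run each process for at most `quantum` ticks, then preempt it and
    move it to the back of the ready queue (minimal sketch)."""
    ready = deque(processes)
    timeline = []
    while ready:
        proc = ready.popleft()                  # pick the next ready process
        slice_used = min(quantum, proc.remaining)
        proc.remaining -= slice_used
        timeline.append((proc.name, slice_used))
        if proc.remaining > 0:                  # quantum expired: preempt
            ready.append(proc)                  # requeue; someone else runs next
    return timeline

print(preemptive_dispatch([Process("A", 5), Process("B", 3)]))
# [('A', 2), ('B', 2), ('A', 2), ('B', 1), ('A', 1)]
```

Note how process B gets CPU time long before A finishes; in a strictly non-preemptive scheme it would have to wait for all 5 of A's ticks.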

Key Characteristics of Preemptive Scheduling

  1. Interruptions: Processes can be interrupted at any time, allowing the scheduler to allocate the CPU to a different process.
  2. Dynamic Priority Handling: Allows the system to respond quickly to changes in process priorities and system load.
  3. Time Slices: Often uses time slices (quantum) to ensure that all processes get a fair share of CPU time.

Advantages of Preemptive Scheduling

  1. Improved Responsiveness: High-priority processes can be executed promptly, improving system responsiveness.
  2. Fairness: Ensures that no single process can monopolize the CPU, providing a fairer distribution of CPU time among processes.
  3. Better for Interactive Systems: Suitable for interactive and real-time systems where quick response times are critical.

Disadvantages of Preemptive Scheduling

  1. Context Switching Overhead: Frequent context switching between processes can introduce significant overhead, impacting overall system performance (a rough calculation of this cost appears after this list).
  2. Complex Implementation: More complex to implement compared to non-preemptive scheduling, requiring sophisticated algorithms to manage process priorities and scheduling decisions.
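
The overhead trade-off can be estimated with simple arithmetic: if every quantum ends in a context switch, the fraction of CPU time lost is roughly switch cost / (quantum + switch cost). The figures below are made-up, round numbers chosen only to illustrate the shape of the trade-off.

```python
switch_cost_us = 50                  # assumed cost of one context switch (microseconds)

for quantum_us in (10_000, 1_000):   # assumed time slices: 10 ms and 1 ms
    overhead = switch_cost_us / (quantum_us + switch_cost_us)
    print(f"quantum {quantum_us} us -> ~{overhead:.2%} of CPU time spent switching")
# quantum 10000 us -> ~0.50% of CPU time spent switching
# quantum 1000 us -> ~4.76% of CPU time spent switching
```

Shrinking the quantum improves responsiveness but raises the switching overhead, which is why the quantum is a tuning knob rather than a fixed constant.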

Common Preemptive Scheduling Algorithms

  1. Round Robin (RR)
  2. Shortest Remaining Time First (SRTF) (see the sketch after this list)
  3. Priority Scheduling (with preemption)
  4. Multilevel Queue Scheduling
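
As a sketch of one preemptive algorithm from the list above, here is a tick-by-tick Shortest Remaining Time First simulation. The job names, arrival times, and burst lengths are invented for illustration; a real scheduler would be driven by timer interrupts rather than a Python loop.

```python
def srtf(jobs):
    """Shortest Remaining Time First (preemptive): at every tick, run the
    arrived job with the least remaining time. `jobs` maps a name to
    (arrival_time, burst_time); all values are illustrative."""
    remaining = {name: burst for name, (_, burst) in jobs.items()}
    finish, time = {}, 0
    while remaining:
        ready = [n for n in remaining if jobs[n][0] <= time]
        if not ready:
            time += 1                       # CPU idle until the next arrival
            continue
        current = min(ready, key=lambda n: remaining[n])  # may preempt the last job
        remaining[current] -= 1
        time += 1
        if remaining[current] == 0:
            del remaining[current]
            finish[current] = time
    return finish

# B arrives at tick 2 with a shorter burst and immediately preempts A.
print(srtf({"A": (0, 7), "B": (2, 3)}))  # {'B': 5, 'A': 10}
```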

Non-Preemptive Scheduling

Non-preemptive scheduling allocates the CPU to a process until that process either completes its execution or voluntarily relinquishes the CPU (for example, by blocking on I/O). Once a process starts executing, it cannot be interrupted by other processes.
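
For contrast with the preemptive loop shown earlier, here is an equally minimal non-preemptive dispatcher: there is no quantum and no requeueing, so each process keeps the CPU for its whole burst. The (name, burst) tuples are illustrative.

```python
def nonpreemptive_dispatch(processes):
    """Each (name, burst) process runs to completion before the next one
    starts; the return value records how long each process waited before
    it first got the CPU (minimal sketch)."""
    time, start_times = 0, []
    for name, burst in processes:
        start_times.append((name, time))    # time spent waiting in the queue
        time += burst                       # the whole burst runs in one go
    return start_times

print(nonpreemptive_dispatch([("A", 5), ("B", 3)]))
# [('A', 0), ('B', 5)] -- B cannot start until all 5 ticks of A are done
```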

Key Characteristics of Non-Preemptive Scheduling

  1. No Interruptions: Once a process is allocated the CPU, it runs to completion or until it voluntarily yields the CPU.
  2. Predictable Execution: Execution order is more predictable as processes are not interrupted.

Advantages of Non-Preemptive Scheduling

  1. Lower Overhead: Fewer context switches result in lower overhead compared to preemptive scheduling.
  2. Simpler Implementation: Easier to implement and manage due to the absence of process interruptions and preemption logic.
  3. Better for Batch Systems: Suitable for batch processing systems where processes run to completion without the need for quick responsiveness.

Disadvantages of Non-Preemptive Scheduling

  1. Potential Starvation: Long-running processes can delay shorter or more critical processes indefinitely, leading to starvation.
  2. Poor Responsiveness: Less suitable for interactive or real-time systems where quick response times are required.
  3. Risk of CPU Monopolization: A single process can monopolize the CPU, preventing other processes from executing in a timely manner.

Common Non-Preemptive Scheduling Algorithms

  1. First-Come, First-Served (FCFS)
  2. Shortest Job Next (SJN) (see the sketch after this list)
  3. Priority Scheduling (without preemption)
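
Below is a minimal sketch of Shortest Job Next from the list above, assuming for simplicity that every job is already in the ready queue at time 0 and that burst lengths are known in advance (both strong assumptions; the burst values themselves are invented).

```python
def sjn(jobs):
    """Shortest Job Next (non-preemptive): run ready jobs in order of burst
    length and report each job's waiting time plus the average wait."""
    order = sorted(jobs, key=jobs.get)        # shortest burst first
    waits, time = {}, 0
    for name in order:
        waits[name] = time                    # time spent in the ready queue
        time += jobs[name]                    # job runs to completion, no preemption
    return waits, sum(waits.values()) / len(waits)

waits, avg_wait = sjn({"A": 6, "B": 2, "C": 4})
print(waits, avg_wait)   # {'B': 0, 'C': 2, 'A': 6} and an average wait of about 2.67
```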

Comparison

| Feature | Preemptive Scheduling | Non-Preemptive Scheduling |
| --- | --- | --- |
| Interruptions | Processes can be interrupted | Processes cannot be interrupted |
| Context switching | High overhead due to frequent switches | Lower overhead due to fewer switches |
| Responsiveness | Higher; suitable for interactive systems | Lower; suitable for batch processing |
| Implementation complexity | More complex | Simpler |
| Risk of starvation | Lower, due to dynamic priority handling | Higher; long processes can block others |
| CPU utilization | Fairer distribution of CPU time | Risk of CPU monopolization |
