Operating Systems - CPU Scheduling

The computer becomes more productive when the operating system can switch the CPU among processes. A single-processor system can only have one process running at a time; the other processes must wait until the CPU is free. The objective of multiprogramming is to have some process running at all times, in order to maximize CPU utilization. A process is executed until it must wait, typically for the completion of some I/O request. In a simple computer system, the CPU then just sits idle and all that waiting time is wasted. With multiprogramming, however, we keep several processes in memory at one time. When one process has to wait, the operating system takes the CPU away from it so that another process can execute.

 

CPU - I/O Burst Cycle

A CPU burst is a period during which the process executes instructions on the CPU; an I/O burst is a period during which the process waits for some I/O to complete. Process execution alternates between CPU bursts and I/O bursts, hence the name CPU-I/O burst cycle. Programs can be either I/O-bound or CPU-bound: an I/O-bound program has many short CPU bursts, while a CPU-bound program typically has a few long CPU bursts.

 

CPU Scheduler

When the CPU becomes idle, the CPU scheduler, also known as the short-term scheduler, selects a process from the ready queue, which is kept in memory. The CPU is then allocated to the selected process, which can start executing its instructions.
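
To make the idea concrete, here is a minimal sketch of a short-term scheduler as a loop that picks the next process from the ready queue whenever the CPU is free. The process names and the simple FIFO order are made up for illustration; a real scheduler lives inside the kernel and applies one of the algorithms described further down.

from collections import deque

# Minimal sketch of a short-term scheduler loop (illustrative only).
# Processes are just names here; run_burst() stands in for actually
# handing the CPU to the process.
ready_queue = deque(["P1", "P2", "P3"])

def run_burst(process):
    print(f"{process} is running on the CPU")

while ready_queue:
    current = ready_queue.popleft()   # the scheduler selects the next process
    run_burst(current)                # the dispatcher gives it the CPU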

 

Preemptive Scheduling

When do CPU-scheduling decisions take place?

  1. A process switches from running state to waiting state
  2. A process switches from running state to ready state
  3. A process switches from waiting state to ready state
  4. A process terminates

When scheduling takes place only under circumstances 1 and 4, the scheduling scheme is called nonpreemptive (or cooperative). Once a process starts running, it keeps running until it terminates or switches to the waiting state. If scheduling also takes place under circumstances 2 and 3, the scheme is called preemptive.

A disadvantage of preemptive scheduling is that it can result in race conditions when several processes share data.

 

Dispatcher

The dispatcher is invoked in every process switch. It is the module that gives control of the CPU to the process selected by the short-term scheduler. The functions of the dispatcher involve the following:

  1. Switching context
  2. Switching to user mode
  3. Jumping to the proper location in the user program to restart that program

Since it is invoked during every process switch, it should be as fast as possible. Dispatch latency is the time it takes for the dispatcher to stop one process and start another.

 

Scheduling Criteria

There are several different criteria to consider when comparing scheduling algorithms, including:

  • CPU utilization. The CPU should be as busy as possible.
  • Throughput. Number of processes that are completed per time unit.
  • Turnaround time. The interval from the time of submission of a process to the time of completion.
  • Waiting time. The sum of the periods spent waiting in the ready queue.
  • Response time. The time from the submission of a request until the first response is produced.

In general, we want to maximize CPU utilization and throughput, and minimize turnaround time, waiting time and response time to make scheduling as efficient as possible.
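
To make the criteria concrete, here is a small sketch (my own illustration, not from the book) that computes average waiting time and average turnaround time for a non-preemptive schedule, given each process's arrival time and burst time in the order they are run:

def schedule_metrics(processes):
    # processes: list of (name, arrival_time, burst_time) in execution order,
    # assuming a non-preemptive schedule that runs them back to back.
    time = 0
    waiting, turnaround = [], []
    for name, arrival, burst in processes:
        start = max(time, arrival)            # CPU may sit idle until the process arrives
        finish = start + burst
        waiting.append(start - arrival)       # time spent in the ready queue
        turnaround.append(finish - arrival)   # submission to completion
        time = finish
    n = len(processes)
    return sum(waiting) / n, sum(turnaround) / n

# Example: four processes, all arriving at time 0, run in the order given.
print(schedule_metrics([("P1", 0, 4), ("P2", 0, 9), ("P3", 0, 3), ("P4", 0, 7)]))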

 

Scheduling Algorithms

The CPU scheduler needs to decide which of the processes in the ready queue to select. To do so, it uses a scheduling algorithm. I will go through some of them here very briefly. If you're interested in reading about the algorithms in more depth, there's plenty of information on the internet.

 

(FCFS) First-Come, First-Served Scheduling

This is the simplest algorithm of them all. Unfortunately, it can yield very long average waiting times: if the first process in the queue has a long CPU burst, all the processes queued behind it have to wait for it to finish before they get the CPU.
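
A short sketch illustrates the problem (the burst times are made up for illustration): the same three processes give very different average waiting times depending on whether the long job arrives first or last.

def fcfs_average_wait(bursts):
    # bursts: burst times in arrival order; all processes arrive at time 0.
    time, total_wait = 0, 0
    for burst in bursts:
        total_wait += time   # each process waits for everything queued before it
        time += burst
    return total_wait / len(bursts)

print(fcfs_average_wait([24, 3, 3]))   # long job first -> average wait 17.0
print(fcfs_average_wait([3, 3, 24]))   # long job last  -> average wait 3.0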

 

(SJF) Shortest-Job-First Scheduling

The idea behind the SJF algorithm is to pick the quickest job that needs to be done, get it out of the way first, then pick the next job and so on. For example, the processes:

Process   Burst time
P1        4
P2        9
P3        3
P4        7

will execute in the order P3, P1, P4 and lastly P2. Preemptive SJF is sometimes referred to as shortest-remaining-time-first scheduling. Preemption occurs when a new process arrives in the ready queue with a predicted burst time shorter than the remaining time of the process currently on the CPU.
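
As a small sketch (using the burst times from the table above, with all processes assumed to arrive at time 0), non-preemptive SJF simply runs the jobs in order of burst time:

# Non-preemptive SJF: run the job with the shortest next CPU burst first.
bursts = {"P1": 4, "P2": 9, "P3": 3, "P4": 7}

order = sorted(bursts, key=bursts.get)
print(order)   # ['P3', 'P1', 'P4', 'P2']

time, total_wait = 0, 0
for name in order:
    total_wait += time           # waiting time of this process
    time += bursts[name]
print(total_wait / len(order))   # average waiting time: 6.0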

 

Priority Scheduling

Priority scheduling is a more general case of SJF. Each job is assigned a priority, and the job with the highest priority is scheduled first. In the example below, a lower number means a higher priority.

Process   Burst time   Priority
P1        4            3
P2        9            4
P3        3            1
P4        7            2

The processes described in the table above will execute in the order P3, P4, P1, P2, as determined by their priorities.
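
A sketch along the same lines (burst times and priorities taken from the table, all processes assumed to arrive at time 0, and a lower number meaning a higher priority):

# Priority scheduling: run the process with the highest priority first.
processes = [("P1", 4, 3), ("P2", 9, 4), ("P3", 3, 1), ("P4", 7, 2)]

order = [name for name, burst, priority in sorted(processes, key=lambda p: p[2])]
print(order)   # ['P3', 'P4', 'P1', 'P2']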

 

(RR) Round-Robin Scheduling

Round-robin scheduling is designed especially for time-sharing systems. It is similar to FCFS scheduling, except that preemption is added: each process gets the CPU for at most a small unit of time called a time quantum. When a process is given the CPU, a timer is set to the length of the time quantum; if the process is still running when the quantum expires, it is preempted and put at the back of the ready queue.
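
Here is a minimal round-robin sketch (the process set and the quantum of 4 are made up): each process runs for at most one time quantum and, if it isn't finished, goes to the back of the ready queue.

from collections import deque

def round_robin(bursts, quantum):
    # bursts: dict of process name -> burst time; all processes arrive at time 0.
    queue = deque(bursts)
    remaining = dict(bursts)
    timeline = []                       # (process, time units run) in order
    while queue:
        name = queue.popleft()
        run = min(quantum, remaining[name])
        timeline.append((name, run))
        remaining[name] -= run
        if remaining[name] > 0:
            queue.append(name)          # quantum expired: back of the queue
    return timeline

print(round_robin({"P1": 4, "P2": 9, "P3": 3}, quantum=4))
# [('P1', 4), ('P2', 4), ('P3', 3), ('P2', 4), ('P2', 1)]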

 

Multilevel Queue Scheduling

A multilevel queue scheduling algorithm partitions the ready queue into several separate queues. Processes are permanently assigned to one queue, based on some property of the process, such as memory size, priority, or process type. Each queue has its own scheduling algorithm, and there must also be a scheduling algorithm among the queues.

 

Multilevel Feedback Queue Scheduling

In multilevel queue scheduling, processes are permanently assigned to a queue. The multilevel feedback queue instead allows processes to move between queues. This is useful, for example, when a process uses too much CPU time: it can then be moved to a lower-priority queue.
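
As a rough sketch of the idea (the two-queue setup, the quantum of 4, and the demotion rule are my own assumptions for illustration): a process that uses up its whole time quantum in the high-priority queue is demoted to the low-priority queue, which is only served when the high-priority queue is empty.

from collections import deque

high, low = deque(), deque()          # the high queue is always served first
remaining = {"P1": 4, "P2": 9, "P3": 3}
high.extend(remaining)                # new processes start in the high-priority queue
QUANTUM = 4

while high or low:
    queue = high if high else low
    name = queue.popleft()
    run = min(QUANTUM, remaining[name])
    remaining[name] -= run
    level = "high" if queue is high else "low"
    print(f"{name} ran for {run} units in the {level}-priority queue")
    if remaining[name] > 0:
        low.append(name)              # used its whole quantum: demote to (or stay in) low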

 

References

Abraham Silberschatz, Peter Baer Galvin, and Greg Gagne, "Operating System Concepts", Ninth Edition, Chapter 6

