Round Robin Scheduling Algorithm and Time Quanta Selection

1. The Scheduler

To achieve efficient CPU utilization, a multitasking RTOS uses an orderly transfer of control from one task to another. To accomplish this, the RTOS must monitor system resources and the execution state of each task, and it must ensure that each task receives control of the CPU in a timely manner.
The key word here is timely. A real-time system that does not perform a required operation at the correct time has failed. That failure can have consequences that range from benign to catastrophic. Response time for a request for kernel services and the execution time of these services must be fast and predictable. With such an RTOS, application program code can be designed to ensure that all needs are detected and processed.
Real-time applications usually consist of several tasks (also called processes or threads) that require control of system resources at varying times due to external or internal events. Each task must compete with all other tasks for control of system resources such as memory, execution time, or peripheral devices. To manage this competition between tasks, we use the scheduler. The scheduler is the heart of every kernel: it provides the algorithms needed to determine which task executes when. Most real-time kernels support two common scheduling algorithms. They are
  • Preemptive Priority Based Scheduling Algorithm
  • Round Robin Algorithm

2. Preemptive Priority Based Scheduling Algorithm

A task can be compute-bound (heavily dependent on CPU resources) or I/O-bound (heavily dependent on access to external devices). A task that is I/O bound or compute bound cannot be allowed to monopolize a system resource if a more important task requires the same resource. There must be a way of interrupting the operation of the lesser task and granting the resource to the more important one.
One method for achieving this is to assign a priority to each task. The kernel then uses this priority to determine a task’s place within the sequence of execution of other tasks. In a prioritized system, the highest-priority task that is ready is given control of the processor. Tasks of lower priority must wait and, even when running, may have their execution preempted by a task of higher priority. When a task completes its operation or blocks (waits for something to happen before it can continue), it releases the CPU. The scheduler in the RTOS then looks for the next highest-priority task that is ready to run and assigns it control of the CPU.
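As a rough illustration of this selection rule, the following C sketch picks the highest-priority ready task from a small task table. The task_t structure, the state names, and the linear scan are assumptions made purely for this example; real kernels typically use priority-sorted ready lists or priority bitmaps instead.

    #include <stddef.h>
    #include <stdio.h>

    /* Hypothetical task states and task control block, for illustration only. */
    typedef enum { TASK_READY, TASK_BLOCKED, TASK_RUNNING } task_state_t;

    typedef struct {
        const char  *name;      /* task name, for debugging          */
        int          priority;  /* higher number = higher priority   */
        task_state_t state;     /* current execution state           */
    } task_t;

    /* Return the highest-priority READY task, or NULL if none is ready. */
    task_t *pick_next_task(task_t *tasks, size_t count)
    {
        task_t *best = NULL;
        for (size_t i = 0; i < count; i++) {
            if (tasks[i].state != TASK_READY)
                continue;
            if (best == NULL || tasks[i].priority > best->priority)
                best = &tasks[i];
        }
        return best;
    }

    int main(void)
    {
        task_t tasks[] = {
            { "logger",  1, TASK_READY   },
            { "control", 5, TASK_READY   },
            { "comms",   3, TASK_BLOCKED },
        };
        task_t *next = pick_next_task(tasks, 3);
        printf("next task: %s\n", next ? next->name : "idle");  /* prints "control" */
        return 0;
    }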

3. Round Robin Algorithm

In a real-time operating system, tasks with different priorities are handled by the preemptive priority-based scheduling algorithm. However, several tasks may share the same priority. In such situations, the round-robin scheduling algorithm is used.

Round-robin scheduling is similar to First Come First Serve (FCFS) scheduling, but preemption is added to switch between processes. It attempts to share the CPU fairly among all ready tasks of the same priority. Without round-robin scheduling, when multiple tasks of equal priority must share the processor, a single task can monopolize the processor by never blocking, never giving the other equal-priority tasks a chance to run.
Round-robin scheduling achieves fair allocation of the CPU to tasks of the same priority through an approach known as time slicing. Each task executes for a defined interval, or time slice (the time quantum); then another task executes for an equal interval, in rotation. The allocation is fair in that no task in a priority group gets a second slice of time before the other tasks in the group are given a slice.
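One common way to implement this bookkeeping, sketched below in C, is to keep a time-slice counter in each task control block and decrement it on every timer tick; when the counter reaches zero the task is rotated to the tail of its priority group's ready list. All names, field choices, and the tick count here are assumptions made for this example.

    #include <stdio.h>

    #define TIME_SLICE_TICKS 10       /* quantum expressed in timer ticks */

    /* Hypothetical task control block holding round-robin bookkeeping. */
    typedef struct tcb {
        int         priority;         /* tasks of equal priority round-robin */
        int         slice_remaining;  /* ticks left in the current quantum   */
        struct tcb *next;             /* link in the ready list              */
    } tcb_t;

    /* Called from the periodic timer interrupt for the running task.
     * Returns nonzero when the quantum is exhausted and the scheduler
     * should move this task to the tail of its priority group's list. */
    int tick_current_task(tcb_t *current)
    {
        if (--current->slice_remaining > 0)
            return 0;                                /* quantum not used up yet */
        current->slice_remaining = TIME_SLICE_TICKS; /* recharge for next turn  */
        return 1;                                    /* request a rotation      */
    }

    int main(void)
    {
        tcb_t task = { .priority = 2, .slice_remaining = TIME_SLICE_TICKS, .next = NULL };
        for (int tick = 1; tick <= 12; tick++)
            if (tick_current_task(&task))
                printf("rotate task at tick %d\n", tick);  /* fires at tick 10 */
        return 0;
    }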

4. Implementing Round Robin Scheduling

  • We keep the ready queue as a FIFO queue of processes.
  • New processes are added to the tail of the ready queue.
  • The CPU scheduler picks the first process from the ready queue, sets a timer to interrupt after 1 time quantum, and dispatches the process.
  • If the process has a CPU burst of less than 1 time quantum, the process itself releases the CPU voluntarily.
  • The scheduler then proceeds to the next process in the ready queue.
  • Otherwise, if the CPU burst of the currently running process is longer than 1 time quantum, the timer goes off and causes an interrupt to the OS.
  • A context switch is executed, and the process is put at the tail of the ready queue.
  • The CPU scheduler then selects the next process in the ready queue.
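The steps above can be walked through with a very small simulation. In the C sketch below, the ready queue is a FIFO of task IDs, each task runs for at most one quantum, and unfinished tasks go back to the tail; the queue structure, task set, and remaining-quanta counts are assumptions chosen only to illustrate the rotation.

    #include <stdio.h>

    /* Minimal FIFO ready queue, sketched as a circular buffer of task IDs. */
    #define MAX_TASKS 8

    typedef struct {
        int head, tail, count;
        int ids[MAX_TASKS];
    } ready_queue_t;

    static void enqueue(ready_queue_t *q, int id)
    {
        q->ids[q->tail] = id;
        q->tail = (q->tail + 1) % MAX_TASKS;
        q->count++;
    }

    static int dequeue(ready_queue_t *q)
    {
        int id = q->ids[q->head];
        q->head = (q->head + 1) % MAX_TASKS;
        q->count--;
        return id;
    }

    int main(void)
    {
        ready_queue_t q = { 0, 0, 0, {0} };
        int remaining[3] = { 3, 1, 2 };   /* quanta still needed by tasks A, B, C */
        const char *names = "ABC";

        for (int i = 0; i < 3; i++)
            enqueue(&q, i);               /* new tasks join at the tail  */

        while (q.count > 0) {
            int id = dequeue(&q);         /* pick the task at the head   */
            printf("run %c for one quantum\n", names[id]);
            if (--remaining[id] > 0)
                enqueue(&q, id);          /* unfinished: back to the tail */
        }
        return 0;
    }

Running this prints the dispatch order A B C A C A: B finishes after one quantum and drops out, while A and C keep rotating, which is the same behaviour pictured in Figures 1 and 2.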


Figure 1: The list of runnable processes

Figure 2: The list of runnable processes after ‘A’ uses up its quantum (‘A’ placed at tail of ready queue)

The average waiting time under the round-robin policy is often long. Consider the following set of processes that arrive at time 0, with the length of each CPU burst given in milliseconds and a time quantum of 4 milliseconds:

Table 1: Average waiting and turnaround times under the Round Robin policy

Process   Duration (ms)   Order   Arrival Time (ms)   Waiting Time (ms)   Turnaround Time (ms)
P1        24              1       0                   17                  41
P2        14              2       0                   19                  33
P3        3               3       0                   8                   11
Average   -               -       -                   14.67               28.33

Using round-robin scheduling, we would schedule these processes according to the following chart:


Figure 3: Process execution sequence in Round Robin Scheduling Algorithm
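As a cross-check of Table 1 and Figure 3, the short C program below simulates the same three bursts (24, 14, and 3 ms) with a 4 ms quantum, ignoring context-switch time, and prints each process's waiting and turnaround time. The code layout itself is only an illustrative sketch, not any particular kernel's scheduler.

    #include <stdio.h>

    /* Simulate round robin for the Table 1 example: bursts of 24, 14 and 3 ms,
     * all arriving at time 0, with a 4 ms quantum. Context-switch time ignored. */
    #define N 3
    #define QUANTUM 4

    int main(void)
    {
        int burst[N]     = { 24, 14, 3 };
        int remaining[N] = { 24, 14, 3 };
        int finish[N]    = { 0, 0, 0 };
        int time = 0, done = 0;

        while (done < N) {
            for (int i = 0; i < N; i++) {          /* rotate P1, P2, P3 in order */
                if (remaining[i] == 0)
                    continue;
                int run = remaining[i] < QUANTUM ? remaining[i] : QUANTUM;
                time += run;
                remaining[i] -= run;
                if (remaining[i] == 0) {
                    finish[i] = time;              /* completion time of Pi      */
                    done++;
                }
            }
        }

        double wait_sum = 0, tat_sum = 0;
        for (int i = 0; i < N; i++) {
            int turnaround = finish[i];            /* arrival time is 0          */
            int waiting    = turnaround - burst[i];
            wait_sum += waiting;
            tat_sum  += turnaround;
            printf("P%d: waiting %2d ms, turnaround %2d ms\n", i + 1, waiting, turnaround);
        }
        printf("average waiting %.2f ms, average turnaround %.2f ms\n",
               wait_sum / N, tat_sum / N);
        return 0;
    }

Its output, 17/41 ms for P1, 19/33 ms for P2, and 8/11 ms for P3, with averages of 14.67 ms and 28.33 ms, reproduces the values in Table 1.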

  • In the RR scheduling algorithm, no process is allocated the CPU for more than 1 time quantum in a row (unless it is the only runnable process).
  • If a process's CPU burst exceeds 1 time quantum, that process is preempted and is put back in the ready queue. The RR scheduling algorithm is thus pre-emptive.
  • If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units.
  • Each process must wait no longer than (n-1)*q time units until its next time quantum.
  • For example, with five processes and a time quantum of 20 milliseconds, each process will get up to 20 milliseconds every 100 milliseconds.
  • The performance of the RR algorithm depends heavily on the size of the time quantum.
  • If the time quantum is extremely large, the RR policy is the same as the First Come First Serve policy.
  • If the time quantum is extremely small (say, 1 millisecond), the RR approach is called processor sharing and (in theory) creates the appearance that each of n processes has its own processor running at 1/n the speed of the real processor.
  • We need also to consider the effect of context switching on the performance of RR scheduling. Switching from one process to another requires a certain amount of time for doing the storing and restoring of CPU registers and memory maps, updating various tables and lists, flushing and reloading the memory cache, etc.
  • Let us assume that we have only one process of 10 time units.
  • If the quantum is 12 time units, the process finishes in less than 1 time quantum, with no overhead.
  • If the quantum is 6 time units, however, the process requires 2 quanta, resulting in a context switch.
  • If the time quantum is 1 time unit, then nine context switches will occur, slowing the execution of the process accordingly (a short numerical sketch after this list works through these three cases).

Figure 4: The way in which a smaller time quantum increases context switches.

  • Thus, we want the time quantum to be large with respect to the context-switch time.
  • If the context-switch time is approximately 10 percent of the time quantum, then about 10 percent of the CPU time will be spent in context switching.
  • In practice, most modern systems have time quanta ranging from 10 to 100 milliseconds.
  • The time required for a context switch is typically less than 10 microseconds; thus, the context-switch time is a small fraction of the time quantum.
  • Setting the quantum too short causes too many process switches and lowers the CPU efficiency, but setting it too long may cause poor response to short interactive requests.
  • RR gives a poor average waiting time when job lengths are similar. Imagine 10 jobs that each require 10 time slices: under RR all of them complete only after about 100 time slices, so even FCFS gives a better average waiting time.
  • In general, the average turnaround time can be improved if most processes finish their next CPU burst in a single time quantum. If context-switch time is added in, the average turnaround time increases for a smaller time quantum, since more context switches are required.
  • Although the time quantum should be large compared with the context-switch time, it should not be too large. If the time quantum is too large, RR scheduling degenerates to FCFS policy.
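To make this trade-off concrete, the C sketch below counts the context switches for the single 10-unit process discussed above and estimates the overhead for quanta of 12, 6, and 1 time units. The assumed context-switch cost of 0.1 time unit is an illustrative figure, not a measured value.

    #include <stdio.h>

    int main(void)
    {
        /* One process needing 10 time units, as in the example above.        */
        const int burst = 10;
        /* Assumed context-switch cost: 0.1 time unit, for illustration only. */
        const double switch_cost = 0.1;
        const int quanta[] = { 12, 6, 1 };

        for (int i = 0; i < 3; i++) {
            int q = quanta[i];
            int slices   = (burst + q - 1) / q;   /* quanta needed, rounded up      */
            int switches = slices - 1;            /* a switch after every quantum
                                                     except the last one            */
            double overhead = switches * switch_cost;
            printf("quantum %2d: %d context switch(es), overhead %.1f time units "
                   "(%.1f%% of the burst)\n",
                   q, switches, overhead, 100.0 * overhead / burst);
        }
        return 0;
    }

It reports 0, 1, and 9 context switches respectively, matching Figure 4, with the overhead for a 1-unit quantum approaching 10 percent of the burst under this assumed switch cost.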