In this first project, you will implement a rudimentary simulation of an operating system. The initial focus will be on processes, assumed to be resident in memory, waiting to use the CPU. Memory and the I/O subsystem will not be covered in depth in this project.
A process is defined as a program in execution. For this assignment, processes are in one of the following three states:

RUNNING: actively using the CPU to execute its current CPU burst
READY: waiting in the ready queue to use the CPU
BLOCKED: waiting for an I/O operation to complete
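The representation of these states is up to you; as a minimal sketch (in Python, purely illustrative and not a required interface), they could be modeled as an enumeration:

```python
from enum import Enum

class State(Enum):
    RUNNING = "RUNNING"  # actively using the CPU to execute a CPU burst
    READY = "READY"      # in the ready queue, waiting to use the CPU
    BLOCKED = "BLOCKED"  # waiting for an I/O operation to complete
```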
Processes in the READY state reside in a queue called the ready queue. This queue is ordered based on a configurable CPU scheduling algorithm. In this first assignment, there are four algorithms to implement: shortest job first (SJF); shortest remaining time (SRT); first come first served (FCFS); and round robin (RR). Note that if you use a large enough time slice, RR essentially becomes the FCFS algorithm.
All four of these algorithms will be simulated for the same set of processes, which will allow for a comparative analysis. As such, when you run your program, all four algorithms are simulated.
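One possible way to structure this (a sketch only; the names `ALGORITHMS`, `run_all`, and `simulate` are illustrative, not part of any required interface) is to run the simulation once per algorithm over a fresh copy of the same process set:

```python
import copy

ALGORITHMS = ["SJF", "SRT", "FCFS", "RR"]

def run_all(processes, simulate):
    # simulate() is a hypothetical function you would write; each algorithm
    # receives its own deep copy so one run cannot alter the input of the next.
    for alg in ALGORITHMS:
        simulate(alg, copy.deepcopy(processes))
```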
In general, when a process reaches the front of the ready queue and the CPU is free to accept the next process, that process enters the RUNNING state and starts executing its CPU burst.
After the CPU burst is completed, if the process does not terminate, the process enters the BLOCKED state, waiting for an I/O operation to complete (e.g., waiting for data to be read in from a file). When the I/O operation completes, depending on the scheduling algorithm, the process either (1) returns to the READY state and is added to the ready queue or (2) preempts the currently running process and switches into the RUNNING state.
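A rough sketch of these transitions in the non-preemptive case (again in Python, with illustrative names such as `bursts_remaining` that are assumptions, not a required interface):

```python
def on_cpu_burst_complete(proc, io_queue, terminated):
    if proc.bursts_remaining == 0:
        terminated.append(proc)      # process terminates and leaves the simulation
    else:
        proc.state = State.BLOCKED   # wait for the I/O operation to complete
        io_queue.append(proc)

def on_io_complete(proc, ready_queue):
    # Non-preemptive case (FCFS, SJF): the process simply rejoins the ready queue.
    # For SRT, a preemption check happens first (see the SRT sketch below).
    proc.state = State.READY
    ready_queue.append(proc)
```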
Note that preemptions occur only for some of these algorithms, i.e., for SRT and RR. Each algorithm is summarized below.
In SJF, processes are stored in the ready queue in order of priority based on their CPU burst times. More specifically, the process with the shortest CPU burst time will be selected as the next process executed by the CPU.
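For example, one way to keep the SJF ready queue ordered is a min-heap keyed on the next CPU burst time (a sketch that assumes a `cpu_burst_time` field and uses the process ID as a tie-breaker; check the assignment's actual tie-breaking rule):

```python
import heapq
from dataclasses import dataclass

@dataclass
class Process:
    pid: str
    cpu_burst_time: int

ready_queue = []
for p in (Process("A", 80), Process("B", 35), Process("C", 50)):
    heapq.heappush(ready_queue, (p.cpu_burst_time, p.pid, p))

shortest = heapq.heappop(ready_queue)[2]   # Process B (burst time 35)
```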
The SRT algorithm is a preemptive version of the SJF algorithm. In SRT, when a process arrives, before it enters the ready queue, a preemption occurs if its CPU burst time is less than the remaining time of the currently running process. When such a preemption occurs, the currently running process is added back to the ready queue.
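The check itself is simple. In this sketch, `remaining_time` is an assumed field holding the time left in the running process's current burst:

```python
def should_preempt(arriving, running):
    # True if the arriving process's CPU burst time is strictly less than
    # the time remaining in the currently running process's burst.
    return running is not None and arriving.cpu_burst_time < running.remaining_time
```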
The FCFS algorithm is a non-preemptive algorithm in which processes line up in the ready queue, waiting to use the CPU. This is your baseline algorithm (and may be implemented as RR with an infinite time slice).
The RR algorithm is essentially the FCFS algorithm with a predefined time slice tslice. Each process is given tslice amount of time to complete its CPU burst. If this time slice expires, the process is preempted and added to the end of the ready queue (though see the rradd parameter described below).
If a process completes its CPU burst before a time slice expiration, the next process on the ready queue is immediately context-switched into the CPU.
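A rough sketch of one RR step (illustrative names only; the empty-queue special case described next is ignored here):

```python
def run_one_slice(running, ready_queue, t_slice):
    # Advance the running process by at most one time slice.
    used = min(t_slice, running.remaining_time)
    running.remaining_time -= used
    if running.remaining_time > 0:
        # Slice expired before the burst finished: preempt and requeue at the end.
        ready_queue.append(running)
    # Otherwise the burst completed early; the process terminates or blocks on
    # I/O as described above, and the next process is switched in immediately.
    return used
```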
Skipping preemptions in RR
For your simulation, if a preemption occurs but there are no other processes on the ready queue, do not perform a context switch. For example, if process G is using the CPU and the ready queue is empty, if process G is preempted by a time slice expiration, do not context-switch process G back to the empty queue. Instead, keep process G running with the CPU and do not count this as a context switch. In other words, when the time slice expires, check the queue to determine if a context switch should occur.
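A minimal sketch of that check (illustrative names; the function returns the process that should hold the CPU after the slice expiration):

```python
def handle_slice_expiration(running, ready_queue):
    if ready_queue:
        # Another process is waiting: preempt, requeue at the end, and let the
        # simulation perform (and count) a context switch.
        ready_queue.append(running)
        return None
    # Ready queue is empty: keep the same process on the CPU; no context switch.
    return running
```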