Class CSCI-GA.2250-001 Spring 2024
In this lab we explore the implementation and effects of different scheduling policies discussed in class on a set of processes/threads executing on a system. The system is to be implemented using Discrete Event Simulation (DES) (http://en.wikipedia.org/wiki/Discrete_event_simulation). In discrete-event simulation, the operation of a system is represented as a chronological sequence of events. Each event occurs at an instant in time and marks a change of state in the system. This implies that the system progresses in time by defining and executing the events (state transitions) and by advancing time discretely between the events, as opposed to incrementing time continuously (e.g. don’t do “sim_time++”). Events are removed from the event queue in chronological order, processed, and might create new events at the current or a future time. Note that DES has nothing to do with the OS; it is just an awesome generic way to step through time and simulate system behavior that you can utilize in many system simulation scenarios.
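For concreteness, here is a minimal sketch of what such a DES driver could look like (the Event fields and the Simulation() structure below are illustrative assumptions, not a required API): the loop repeatedly pops the chronologically next event, jumps the simulation clock to that event's timestamp, and processes the corresponding state transition, which may enqueue new events.

```cpp
#include <queue>
#include <vector>

// Hypothetical event record: a timestamp plus whatever transition info you need.
struct Event {
    int timestamp;     // when the event fires
    int pid;           // which process it concerns (illustrative)
    int transition;    // which state transition to apply (illustrative)
    bool operator>(const Event& o) const { return timestamp > o.timestamp; }
};

// Min-heap ordered by timestamp, so events come out in chronological order.
std::priority_queue<Event, std::vector<Event>, std::greater<Event>> event_queue;

void Simulation() {
    int current_time = 0;
    while (!event_queue.empty()) {
        Event evt = event_queue.top();
        event_queue.pop();
        current_time = evt.timestamp;   // jump directly to the event time (no sim_time++)
        // ... apply the state transition for evt.pid here;
        // processing an event may push new events at current_time or later.
    }
}
```

Note that a plain priority_queue keyed only on the timestamp does not guarantee a stable order for events with equal timestamps; a real implementation typically breaks such ties deterministically (e.g. by insertion order).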
You are not implementing this as a multi-program or multi-threaded application. By using DES, a process is simply the PCB object that goes through discrete state transitions. In the PCB object you maintain the state and statistics of the process as any OS would do. In reality, the OS doesn’t get involved during the execution of a program (other than for system calls), only at the scheduling events, and that is what we are addressing in this lab.
Any process essentially involves processing some data and then storing/displaying it (on hard drive, display, etc.). (A process which doesn’t store/display processed information is practically meaningless.) For instance, when creating a zip file, a chunk of data is first read, then compressed, and finally written to disk; this is repeated until the whole file has been compressed. Hence, the execution timeline of any process will contain alternating discrete periods of time dedicated either to processing (computations involving the CPU, aka cpuburst) or to doing IO (aka ioburst). For this lab assume that our system has only 1 CPU core without hyperthreading, meaning that only 1 process can run at any given time; all processes are single threaded, i.e. they are either in compute/processing mode or IO mode. These discrete periods will therefore be non-overlapping. There can still be more than 1 process running (concurrently) on the system at a given time, and a process can be waiting for the CPU, therefore the execution timeline for any given process can/will contain 3 types of non-overlapping discrete time periods representing (i) processing/computation, (ii) performing IO, and (iii) being ready to execute but waiting to get scheduled on the CPU.
The simulation works as follows:
Various processes will arrive / spawn during the simulation. Each process has the following 4 parameters:
1) Arrival Time (AT) - The time at which a process arrives / is spawned / created.
2) Total CPU Time (TC) - Total duration of CPU time this process requires
3) CPU Burst (CB) – A parameter to define the upper limit of compute demand (further described below)
4) IO Burst (IO) - A parameter to define the upper limit of I/O time (further described below)
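Since a process in this simulation is just its PCB object, one plausible sketch of that object, holding the four input parameters plus the state and the kind of per-process bookkeeping you will want to maintain, might look like the following (the field and enum names are assumptions for illustration; the exact statistics you track depend on the required output):

```cpp
// Possible process states; the exact set follows the state diagram below.
enum class ProcState { CREATED, READY, RUNNING, BLOCKED, DONE };

// A PCB-like record: the simulation only manipulates these objects,
// it never runs or blocks real threads.
struct Process {
    // input parameters
    int AT;   // arrival time
    int TC;   // total CPU time required
    int CB;   // upper limit for a single cpu_burst
    int IO;   // upper limit for a single io_burst

    // state and statistics maintained by the simulation
    ProcState state = ProcState::CREATED;
    int remaining_cpu = 0;      // set to TC at creation, decremented as the process runs
    int current_cpu_burst = 0;  // an unused remainder survives a preemption
    int finish_time = 0;
    int io_time = 0;            // total time spent in BLOCKED
    int cpu_wait_time = 0;      // total time spent READY, waiting for the CPU
    int static_priority = 0;    // assigned via myrandom(maxprio)
    int dynamic_priority = 0;   // ranges over [0 .. static_priority-1]
};
```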
During its lifecycle, a process follows this state diagram:
Initially, when a process arrives at the system, it is put into the CREATED state. The process’s CPU and IO bursts are statistically defined. When a process is scheduled (becomes RUNNING (transition 2)), the cpu_burst is defined as a random number in [ 1 .. CB ]. If the remaining execution time is smaller than the computed cpu_burst, reduce the cpu_burst to the remaining execution time. When a process finishes its current cpu_burst (assuming it has not yet reached its total CPU time TC), it enters a period of IO (aka BLOCKED) (transition 3), at which point the io_burst is defined as a random number in [ 1 .. IO ]. If the previous cpu_burst was not yet exhausted due to preemption (transition 5), then no new cpu_burst shall be computed in transition 2; you continue with the remaining cpu_burst.
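A possible sketch of that burst logic, reusing the Process fields from the sketch above and assuming a myrandom(burst) helper that returns a value in [ 1 .. burst ] (as described elsewhere in the handout), could be:

```cpp
// Assumed helper: returns a value in [1 .. burst] from the random-number input.
int myrandom(int burst);

// On transition to RUNNING: reuse a leftover burst after a preemption,
// otherwise draw a new one, and never run longer than the remaining CPU time.
int next_cpu_burst(Process& p) {
    if (p.current_cpu_burst == 0)              // no leftover from a preemption
        p.current_cpu_burst = myrandom(p.CB);
    if (p.current_cpu_burst > p.remaining_cpu) // cap at remaining execution time
        p.current_cpu_burst = p.remaining_cpu;
    return p.current_cpu_burst;
}

// On transition to BLOCKED (process still has CPU time left): draw an io_burst.
int next_io_burst(Process& p) {
    return myrandom(p.IO);
}
```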
The scheduling algorithms to be simulated are:
FCFS, LCFS, SRTF, RR (Round Robin), PRIO (Priority Scheduler) and PREemptive PRIO (PREPRIO). For RR, PRIO and PREPRIO your program should accept the time quantum, and for PRIO/PREPRIO optionally the number of priority levels maxprio, as input (see “Execution and Invocation Format” below). We will test with multiple time quantums and maxprios, so do not assume either is a fixed number. The context-switching overhead is 0.
You have to implement the schedulers as “objects” without replicating the event simulation infrastructure (event management or simulation loop) for each case, i.e. you define one interface to the scheduler (e.g. add_process(), get_next_process()) and implement the schedulers using object-oriented programming (inheritance). The proper “scheduler object” is selected at program start time based on the “-s” parameter. The rest of the simulation must stay the same (e.g. the event handling mechanism and Simulation()). The code must be properly documented. When reading the process specification at program start, always assign a static_priority to the process using myrandom(maxprio) (see above), which selects a priority in 1..maxprio. A process’s dynamic priority is defined over [ 0 .. (static_priority-1) ]. With every preemption (quantum expiration or PREPRIO), the dynamic priority decreases by one; when it reaches “-1”, it is reset to (static_priority-1). Please do this for all schedulers, even though it only has implications for the PRIO/PREPRIO schedulers, since the other schedulers do not take priority into account. Uniformly coding and calculating this will enable a simpler, scheduler-independent implementation of the state transitions.
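One way to realize this is sketched below, reusing the Process sketch above. Only add_process() and get_next_process() come from the handout; the class names and the choice of std::deque are illustrative assumptions. The concrete scheduler object is instantiated once at program start based on the “-s” argument, and the rest of the simulation never needs to know which scheduler is in use.

```cpp
#include <deque>

// One scheduler interface, shared by the single simulation loop.
// Only the queueing discipline differs between the concrete schedulers.
class Scheduler {
public:
    virtual ~Scheduler() = default;
    virtual void add_process(Process* p) = 0;   // put a READY process into the run queue
    virtual Process* get_next_process() = 0;    // pick who runs next (nullptr if none)
};

// FCFS: a plain FIFO queue. LCFS, SRTF, RR, PRIO and PREPRIO would be further
// subclasses; RR can reuse FCFS ordering (only the quantum differs), while
// PRIO/PREPRIO would keep per-priority-level queues.
class FCFS : public Scheduler {
    std::deque<Process*> runq;
public:
    void add_process(Process* p) override { runq.push_back(p); }
    Process* get_next_process() override {
        if (runq.empty()) return nullptr;
        Process* p = runq.front();
        runq.pop_front();
        return p;
    }
};
```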