25% of course grade
Create a simulation engine in the C language to model the behavior of a process dispatcher in an operating
system, moving processes from the ready state to the running state.
You will use three different dispatching algorithms:
1. first-come-first-served (FCFS)
2. shortest-job-first (SJF)
3. round-robin (RR)
Each process will have a burst time, which represents the total CPU time that the process requires.
You will model this part of the system:
[Process-state diagram: a new process is admitted to the ready state; the dispatcher moves it from ready to running; an interrupt moves it from running back to ready; a running process exits to the terminated state.]
In FCFS and SJF, a process will stay in the running state for the duration of its burst time. After that, it will
leave the system.
In RR, a process will move from running to ready when the quantum expires. Each time this happens, you
will subtract the length of the quantum from the burst time of the process, so that eventually the
process does finish running and leaves the system.
The simulation system will be a discrete-event simulation (DES). Everything that happens in the system will
happen in response to an event. All events in the system will be in a priority queue.
Here are the events in the system:
PROCESS_SUBMITTED
PROCESS_STARTS
PROCESS_ENDS
PROCESS_TIMESLICE_EXPIRES
The program will consist of an event loop. Here's the basic idea:
currentTime = getMinPriority(PQ);
event = dequeue(PQ);
while (event != NULL && currentTime < stop_time) {
    handleEvent(event);
    currentTime = getMinPriority(PQ);
    event = dequeue(PQ);
}
You might choose to handle the events directly in the loop instead of having a separate handleEvent()
function.
Each event will have a type and a pointer to a process (data structures are below).
Here's what the system does for the various events:
PROCESS_SUBMITTED
if the CPU is idle then
create a new PROCESS_STARTS event for this process at t = currentTime
else
put the process in the CPU queue with priority as follows:
if schedulerType is FCFS or RR then
priority = 0
else
if schedulerType is SJF then
priority = the burstTime for this process
PROCESS_STARTS
update the waiting time for this process
if schedulerType is FCFS or SJF then
create a new event PROCESS_ENDS at currentTime + burstTime for this process
else
if burstTime for this process > quantum then
create a new event PROCESS_TIMESLICE_EXPIRES at currentTime + quantum
else create a new event PROCESS_ENDS at currentTime + burstTime for this process
PROCESS_ENDS
update stats about this process
if the cpu queue is not empty then
get the next process in the CPU queue
create a new event PROCESS_STARTS for this process at currentTime
PROCESS_TIMESLICE_EXPIRES
update info for this process (subtract the quantum from the burstTime for this process)
put this process back in the CPU queue
if the cpu queue is not empty then
get the next process in the CPU queue
create a new event PROCESS_STARTS for this process at currentTime
There are two priority queues for this system
(1) the event queue
- the priority is the time
- the data is a process
(2) the CPU queue (= the ready queue)
- for SJF, the priority is the burstTime; for RR and FCFS the priority value is always zero, so this is just a
first-in-first-out queue
- the data is a process
You can use your priority-queue implementation for each of these, or you can use mine, which is available
on my gitlab, in ForStudents/CS201/Assignments/Three.
Here are data structures you'll need:
typedef struct {
int pid;
int burstTime;
int waitTime;
int numPreemptions;
int lastTime;
} Process;
typedef struct {
EventType eventType;
Process *process;
} Event;
and this typedef:
typedef enum EventTypeEnum {
PROCESS_SUBMITTED,
PROCESS_STARTS,
PROCESS_ENDS,
PROCESS_TIMESLICE_EXPIRES
} EventType;
The priority queues themselves are just variables of type PQueue:
PQueue eventQueue;
PQueue cpuQueue;
For a process, the waitTime will be the total time that the process spends waiting for the CPU.
For SJF and FCFS, this will be the difference between the times of PROCESS_STARTS and PROCESS_SUBMITTED.
For RR, this will initially be the difference between the times of PROCESS_STARTS and PROCESS_SUBMITTED. Then,
every time the timeslice expires, save the current time in lastTime, and the next time the process is
scheduled, increment waitTime by the new start time minus lastTime.
For RR, every time that the quantum expires for a process and the process still has burstTime > 0, increment
the numPreemptions field for that process.
Part I: Implement FCFS and SJF scheduling, using an event queue and a CPU queue
Due Monday, Nov. 26th, at 11:59 pm.
Use the processes shown in the slides (five processes submitted, with specified event times and CPU times).
This will let you verify the correctness of your system: if you print out each event and the time and the
process it corresponds to, then you should see the correct events and times and processes for this example:
FCFS
t = 0 PROCESS_SUBMITTED pid = 1
t = 3 PROCESS_SUBMITTED pid = 2
t = 4 PROCESS_SUBMITTED pid = 3
t = 6 PROCESS_SUBMITTED pid = 4
t = 6 PROCESS_SUBMITTED pid = 5
t = 0 PROCESS_STARTS pid = 1 waitTime = 0
t = 6 PROCESS_ENDS pid = 1
t = 6 PROCESS_STARTS pid = 2 waitTime = 6-3 = 3
t = 13 PROCESS_ENDS pid = 2
t = 13 PROCESS_STARTS pid = 3 waitTime = 13-4 = 9
t = 15 PROCESS_ENDS pid = 3
t = 15 PROCESS_STARTS pid = 4 waitTime = 15-6 = 9
t = 20 PROCESS_ENDS pid = 4
t = 20 PROCESS_STARTS pid = 5 waitTime = 20-6 = 14
t = 22 PROCESS_ENDS pid = 5
mean wait time = (0 + 3 + 9 + 9 + 14) / 5 = 7 ms
SJF
t = 0 PROCESS_SUBMITTED pid = 1
t = 3 PROCESS_SUBMITTED pid = 2
t = 4 PROCESS_SUBMITTED pid = 3
t = 6 PROCESS_SUBMITTED pid = 4
t = 6 PROCESS_SUBMITTED pid = 5
t = 0 PROCESS_STARTS pid = 1 waitTime = 0
t = 6 PROCESS_ENDS pid = 1
t = 6 PROCESS_STARTS pid = 3 waitTime = 6-4 = 2
t = 8 PROCESS_ENDS pid = 3
t = 8 PROCESS_STARTS pid = 5 waitTime = 8-6 = 2
t = 10 PROCESS_ENDS pid = 5
t = 10 PROCESS_STARTS pid = 4 waitTime = 10-6 = 4
t = 15 PROCESS_ENDS pid = 4
t = 15 PROCESS_STARTS pid = 2 waitTime = 15-3 = 12
t = 22 PROCESS_ENDS pid = 2
mean wait time = (0 + 2 + 2 + 4 + 12) / 5 = 4 ms
Part II: FCFS and SJF with a set of random processes
Due Saturday, Dec. 1st, at 11:59 pm.
Define an int variable nprocesses. Set this in your program to 50. Generate random processes in this way:
double proc_interarrival_time_mean = 10;
double proc_burst_time_mean = 5;
int proc_interarrival_time, t, i;
int nprocesses;
Process *proc;
nprocesses = 50;
t = 0;
for (i=0; i<nprocesses; ++i) {
proc = (Process *) malloc(sizeof(Process));
proc->pid = i+1; // start the process IDs at 1 instead of zero
proc->waitTime = 0;
proc->lastTime = 0;
proc->numPreemptions = 0;
proc->burstTime = gen_exprand(proc_burst_time_mean);
// now create new PROCESS_SUBMITTED event for this proc, at time = t
proc_interarrival_time = gen_exprand(proc_interarrival_time_mean);
t = t + proc_interarrival_time;
}
This models the behavior of random processes in a system. See the discussion of gen_exprand() below.
Let your simulation run until all of the processes have exited the system.
Part III: Implement round-robin (RR) scheduling
Due Friday, Dec. 7th at 11:59 pm.
Use an int variable called quantum. Set the quantum to 4 to test your simulation engine with the five
processes from part I. Then use RR with the random processes described in Part II.
Notes
Here is my gen_exprand() function:
/* requires <stdlib.h> for drand48() and <math.h> for log() and floor() */
int gen_exprand(double mean) {
    double r, t;
    r = drand48();
    t = -log(1-r) * mean;
    return(floor(t));
}
If you are compiling/linking this on a Windows machine, you'll need to find the equivalent function to
drand48(). On Linux, drand48() returns a uniformly distributed random number in the range [0, 1).