Monday, July 1, 2024

CST334 - Week 2

 

CST334 – Operating Systems - Week 2

In this module, the discussion focused on processes and how the OS manages them. A process is a program's state at an instant during execution, identified by its PID. A process contains the program's memory (static and dynamic allocations), the CPU registers, and file descriptors (standard input, output, and error). The first step in running a program is loading its code into memory; the OS then starts execution by invoking its main routine. The process abstraction provides the illusion of endless resources, achieved through the process control block, CPU virtualization, and the scheduler. For instance, multiple processes run and stop using CPU time sharing, which nearly all OSes support. The OS manages the running processes via the CPU scheduler: policies determine which process to run at a point in time, and mechanisms perform the actual switching between processes (the context switch).
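
To make the process/PID idea concrete, here is a minimal C snippet (my own illustration, not from the course materials) in which a running process asks the OS for the PID it was assigned:

    #include <stdio.h>
    #include <unistd.h>

    /* Each process is identified by a PID assigned by the OS. */
    int main(void)
    {
        printf("This process is running with PID %d\n", (int)getpid());
        return 0;
    }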


In Unix systems, processes are created using the fork() and exec() system calls. A fork() call creates a copy of the parent process, called the child, with its own address space. I also learned that the Unix shell takes advantage of the fact that code can run between the calls to fork() and exec(), allowing an additional layer of manipulation before a program runs. Using wait() results in a deterministic outcome, since the parent must wait for the termination of the child. Additionally, exec() lets the child execute a new program by replacing its program image.
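
Here is a minimal sketch of that fork()/exec()/wait() pattern in C. It is my own example rather than code from the course: the child prints its PID (the code that runs between fork() and exec()), then exec()s the wc word-count utility on a made-up file name, while the parent wait()s for it.

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    int main(void)
    {
        pid_t pid = fork();
        if (pid < 0) {                          /* fork failed */
            perror("fork");
            exit(1);
        } else if (pid == 0) {                  /* child: code between fork() and exec() */
            printf("child (PID %d) about to exec\n", (int)getpid());
            char *args[] = { "wc", "example.c", NULL };   /* "example.c" is a made-up file name */
            execvp(args[0], args);              /* replaces the child's program image */
            perror("execvp");                   /* reached only if exec fails */
            exit(1);
        } else {                                /* parent */
            int status;
            waitpid(pid, &status, 0);           /* deterministic: parent waits for the child */
            printf("parent (PID %d): child %d finished\n", (int)getpid(), (int)pid);
        }
        return 0;
    }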

The concept of CPU scheduling is not only interesting but also familiar.




  • FIFO: First In, First Out. Works great if all jobs have almost equal run times. Otherwise, the convoy effect greatly worsens the average turnaround time.
  • Shortest Job First (SJF): runs the job with the shortest run time first, easing the convoy effect as long as the shorter jobs arrive before the longer ones (a small sketch comparing the two orderings follows this list).
  • Shortest Time-to-Completion First (STCF): a longer job is preempted when a shorter job arrives and resumes after the shorter job completes. STCF adds preemption to SJF, therefore improving the overall average turnaround time.
  • Round Robin (RR): great for response time. RR runs each job for a time slice and then switches to the next, so the length of the time slice has to be chosen with the cost of a context switch in mind.
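
As a quick sanity check of the convoy effect, here is a small C sketch of my own (the job lengths 100, 10, 10 are made up) that computes the average turnaround time when all jobs arrive at time 0, first in FIFO order with the long job in front and then in SJF order:

    #include <stdio.h>

    /* Average turnaround time for jobs that all arrive at time 0 and run
     * to completion in the given order (no preemption). */
    static double avg_turnaround(const int runtimes[], int n)
    {
        int finish = 0, total = 0;
        for (int i = 0; i < n; i++) {
            finish += runtimes[i];   /* job i finishes after all earlier jobs */
            total += finish;         /* turnaround = finish time - arrival (0) */
        }
        return (double)total / n;
    }

    int main(void)
    {
        int fifo[] = { 100, 10, 10 };   /* long job first: convoy effect */
        int sjf[]  = { 10, 10, 100 };   /* shortest jobs first */
        printf("FIFO average turnaround: %.1f\n", avg_turnaround(fifo, 3));  /* 110.0 */
        printf("SJF  average turnaround: %.1f\n", avg_turnaround(sjf, 3));   /* 50.0 */
        return 0;
    }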

Multi-Level Feedback Queue (MLFQ)

Jobs run based on the priority set by the scheduler across different queues. If more than one job has the same priority, the scheduler uses Round Robin among those jobs. A new job enters the system with high priority until its allotment is used up, at which point its priority is reduced. If the job gives up the CPU for activities like I/O, such as waiting on a user's input, the job keeps its high priority. The idea is to get a sense of the job's length so that the OS can position the job in the appropriate queue: if the job is short, a high-priority queue lets it finish; if the job takes longer, it is lowered to the next lower-priority queue.
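
To keep the demotion rule straight for myself, here is a rough C sketch; the queue count, allotment value, and function names are assumptions for illustration, not an actual MLFQ implementation:

    #include <stdio.h>

    #define NUM_QUEUES 3    /* assumed: 3 priority levels, 0 = highest */
    #define ALLOTMENT  5    /* assumed: CPU ticks a job may use at one level */

    struct job {
        const char *name;
        int priority;       /* current queue, 0 = highest */
        int ticks_used;     /* CPU ticks consumed at this level */
    };

    /* Called on every timer tick charged to the running job. A job that gives
     * up the CPU for I/O before using its allotment is not charged here,
     * so it keeps its high priority. */
    void account_tick(struct job *j)
    {
        j->ticks_used++;
        if (j->ticks_used >= ALLOTMENT && j->priority < NUM_QUEUES - 1) {
            j->priority++;                  /* allotment used up: demote one level */
            j->ticks_used = 0;
            printf("%s demoted to queue %d\n", j->name, j->priority);
        }
    }

    int main(void)
    {
        struct job cpu_bound = { "cpu_bound", 0, 0 };
        for (int t = 0; t < 12; t++)        /* a CPU-bound job sinks to lower queues */
            account_tick(&cpu_bound);
        return 0;
    }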

Some Problems with MLFQ

  • Starvation: long-running jobs may never receive CPU time when there are many high-priority jobs. A priority boost solves this issue by moving all jobs to the topmost priority queue after a period of time (a small sketch of the boost follows this list).
  • Gaming the scheduler: a job can trigger conditions, such as issuing I/O just before its allotment runs out, that let it keep a higher share of the CPU, which can significantly disrupt the scheduling process.
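
For the priority boost mentioned above, a minimal sketch (again my own, with a made-up boost period) just walks every job back up to the topmost queue every so often:

    #include <stdio.h>

    #define NUM_JOBS     3
    #define BOOST_PERIOD 50     /* assumed: boost every 50 timer ticks */

    struct job {
        const char *name;
        int priority;           /* 0 = topmost queue */
    };

    /* Move every job back to the topmost queue so long-running jobs
     * cannot starve in the lowest queue forever. */
    void priority_boost(struct job jobs[], int n)
    {
        for (int i = 0; i < n; i++)
            jobs[i].priority = 0;
    }

    int main(void)
    {
        struct job jobs[NUM_JOBS] = { { "a", 2 }, { "b", 1 }, { "c", 0 } };
        for (int tick = 1; tick <= 100; tick++) {
            if (tick % BOOST_PERIOD == 0) {
                priority_boost(jobs, NUM_JOBS);
                printf("tick %d: boosted all jobs to queue 0\n", tick);
            }
        }
        return 0;
    }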




