- A Critical Section is a code segment that accesses shared variables and has to be executed as an atomic action. It means that in a group of cooperating processes, at a given point of time, only one process must be executing its critical section. If any other process also wants to execute its critical section, it must wait until the first one finishes.
Solution to the Critical Section Problem
A solution to the critical section problem must satisfy the following three conditions:
- Mutual Exclusion:-
Out of a group of cooperating processes, only one process can be in its critical section at a given point of time.
- Progress:-
If no process is in its critical section, and one or more processes want to execute their critical sections, then one of those processes must be allowed to enter its critical section without indefinite delay.
- Bounded Waiting:-
After a process requests entry into its critical section, there is a bound on how many other processes may enter their critical sections before that request is granted. Once the bound is reached, the system must grant the waiting process permission to enter its critical section.
- A race condition is a special condition that may occur inside a critical section. A critical section is a section of code that is executed by multiple threads and where the sequence of execution for the threads makes a difference in the result of the concurrent execution of the critical section.
- When the result of multiple threads executing a critical section may differ depending on the sequence in which the threads execute, the critical section is said to contain a race condition. The term race condition stems from the metaphor that the threads are racing through the critical section, and that the result of that race impacts the result of executing the critical section.
- This may all sound a bit complicated, so I will elaborate more on race conditions and critical sections in the following sections.
Race Conditions in Critical Sections
- The code in the add() method in the example earlier contains a critical section. When multiple threads execute this critical section, race conditions occur.
- More formally, the situation where two threads compete for the same resource, and the sequence in which the resource is accessed is significant, is called a race condition. A code section that can lead to race conditions is called a critical section.
Process Control Block
A process in an operating system is represented by a data structure known as a process control block (PCB) or process descriptor. The PCB contains important information about the process, including:
- The current state of the process, i.e., whether it is ready, running, or waiting.
- Unique identification of the process in order to track "which is which" information.
- A pointer to parent process.
- Similarly, a pointer to child process (if it exists).
- The priority of process (a part of CPU scheduling information).
- Pointers to locate memory of processes.
- A register save area.
- The processor it is running on.
The PCB is the store that allows the operating system to locate key information about a process. Thus, the PCB is the data structure that defines a process to the operating system.
Kernel and Shell of an OS
Kernel: the inner soft core of a seed, OR the central or essential part of a system.
Shell: the outside protective covering of a seed, OR a framework or exterior structure.
The core inner part of the OS is the kernel (e.g., the Linux kernel, the Windows kernel, or the FreeBSD kernel), but users interact with it through the outer part, the shell (e.g., the bash shell, cmd.exe, or the Korn shell).
Users cannot directly control hardware like printers or monitors, nor can they directly control virtual memory or process scheduling. The kernel takes care of such matters, while the user communicates with the kernel through the UI or shell. The UI can be a CLI (such as the bash shell or the DOS shell) or a GUI (such as KDE for Linux or the Metro UI for Windows).
- Context switching is the procedure of saving the state of the currently running process so that the CPU can start executing another process, and restoring that state when the first process resumes.
- For example, process A with its address space and stack is currently being executed by the CPU and there is a system call to jump to a higher priority process B; the CPU needs to remember the current state of the process A so that it can suspend its operation, begin executing the new process B and when done, return to its previously executing process A.
- Context switches are resource intensive and most operating system designers try to reduce the need for a context switch.
- They can be software or hardware governed depending upon the CPU architecture.
- Context switches can relate to either a process switch, a thread switch within a process or a register switch.
- A switch between user mode and kernel mode (e.g., on a system call or interrupt) often accompanies a context switch, but some OS designs may obviate it.
- A common approach to context switching is to use a separate stack per switchable entity (thread or process) and store the context itself on that stack. The saved context is then merely the stack pointer: the act of context switching is done by changing the stack pointer to a new location, with the registers stored on the stack itself.