Threading in MOS
1 Answer

A single process can be divided into multiple units of execution; each unit is called a thread, or lightweight process.

A thread consists of a program counter, a stack, a set of registers, and a thread ID. A traditional (heavyweight) process has only a single thread of control: there is one program counter and one sequence of instructions that can be carried out at any given time.

As shown in the figure below, a multi-threaded application has multiple threads within a single process, each with its own program counter, stack, and set of registers. Because each thread executes independently, its program counter, register values, and stack location change independently of the others. All threads, however, share the process's code, data, and certain structures such as open files; the sketch after the figure illustrates this layout.


Fig. Single-threaded and multithreaded processes
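As a minimal illustration, here is a POSIX threads (pthreads) sketch, assuming a Linux/glibc toolchain: two threads run the same code and see the same global data, while each has its own stack for local variables. The file name, thread IDs, and variable names are placeholders chosen for this example.

    /* gcc -pthread shared.c -o shared   (file name is hypothetical) */
    #include <stdio.h>
    #include <pthread.h>

    /* Shared data: lives in the process's data segment, visible to every thread. */
    const char *shared_message = "visible to every thread";

    static void *worker(void *arg) {
        /* 'id' lives on this thread's own stack; each thread prints a different address. */
        long id = (long)arg;
        printf("thread %ld: stack variable at %p, shared_message = \"%s\"\n",
               id, (void *)&id, shared_message);
        return NULL;
    }

    int main(void) {
        pthread_t tids[2];

        /* Two threads of control inside one process: same code and data, separate stacks. */
        for (long i = 0; i < 2; i++)
            pthread_create(&tids[i], NULL, worker, (void *)i);
        for (int i = 0; i < 2; i++)
            pthread_join(tids[i], NULL);
        return 0;
    }

The two threads print different stack addresses but the same shared message, which is exactly the separation the figure describes.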

Threads can be supported at two levels: user level or kernel level.

User-level threads

With user-level threads, a run-time library provides the routines required for thread management, and these routines are linked into the application. No kernel intervention is required for thread management: the library multiplexes a potentially large number of user-defined threads on top of a single kernel-implemented process. User-level threads therefore offer excellent performance compared to kernel-level threads. In addition, user-level threads have the following advantages:

  1. No modifications to the operating system are required to support them.
  2. They are flexible: the library can be customized to the language or the needs of its users, and different libraries can implement different threading packages.

There are also some disadvantages of user-level threads.

  1. They provide excellent performance, but only for applications, such as parallel programs, that require little kernel intervention, because user-level threads always operate within the context of a single traditional process.
  2. User-level threads require system calls to be non-blocking: if one thread blocks in a system call, the whole process blocks, which prevents the other runnable threads from executing. (A minimal sketch of such a cooperative, user-level scheme follows this list.)
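As a rough sketch of the user-level idea, the program below multiplexes two cooperatively scheduled "user threads" on top of a single kernel process using the ucontext(3) routines (available on Linux/glibc, though marked obsolescent by POSIX). The kernel sees only one process; the switching happens entirely in user space. The stack size, function names, and file name are arbitrary choices for this illustration, and error checking is omitted.

    /* gcc uthreads.c -o uthreads   (file name is hypothetical) */
    #include <stdio.h>
    #include <stdlib.h>
    #include <ucontext.h>

    #define STACK_SIZE (64 * 1024)

    static ucontext_t main_ctx, t1_ctx, t2_ctx;

    static void thread1(void) {
        printf("user thread 1: step A\n");
        swapcontext(&t1_ctx, &t2_ctx);      /* cooperative yield to thread 2 */
        printf("user thread 1: step B\n");
        /* returning resumes the context named by uc_link (thread 2) */
    }

    static void thread2(void) {
        printf("user thread 2: step A\n");
        swapcontext(&t2_ctx, &t1_ctx);      /* yield back to thread 1 */
        printf("user thread 2: step B\n");
        /* returning resumes main via uc_link */
    }

    static void make_uthread(ucontext_t *ctx, void (*fn)(void), ucontext_t *link) {
        getcontext(ctx);
        ctx->uc_stack.ss_sp   = malloc(STACK_SIZE);  /* each user thread gets its own stack */
        ctx->uc_stack.ss_size = STACK_SIZE;
        ctx->uc_link          = link;                /* context to resume when fn returns */
        makecontext(ctx, fn, 0);
    }

    int main(void) {
        make_uthread(&t2_ctx, thread2, &main_ctx);
        make_uthread(&t1_ctx, thread1, &t2_ctx);
        swapcontext(&main_ctx, &t1_ctx);             /* hand the single kernel thread to thread 1 */
        printf("main: both user threads finished\n");
        return 0;
    }

Note that if either function called a blocking system call, the entire process (and therefore both "threads") would stop, which is the disadvantage described in item 2 above.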

Kernel-level threads

With kernel-level threads, the kernel directly supports multiple threads per address space and provides the operations for thread management. Kernel-level threads have the following advantages:

  1. Coordination between synchronization and thread scheduling is easily managed.
  2. They are suitable for multithreaded applications, such as server processes, where interactions with the kernel are frequent due to IPC, page faults, exceptions, etc.
  3. They have less overhead than traditional processes.
  4. Because the kernel is the sole provider of thread-management operations, it must provide any feature needed by any application.

There are also some disadvantages of kernel-level threads.

  1. Creating and destroying threads in the kernel is costly.
  2. Context switching between kernel-level threads is slow compared to switching between user-level threads; for this reason system developers often prefer user-level threads when switching performance matters. (A small kernel-thread example using POSIX threads follows this list.)
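The sketch below shows the key property of kernel-level threads, assuming Linux/NPTL where each pthread is backed by a kernel thread: one thread blocks in a sleep() system call while the kernel continues to schedule the other. The durations, messages, and file name are arbitrary.

    /* gcc -pthread kthreads.c -o kthreads   (file name is hypothetical) */
    #include <stdio.h>
    #include <unistd.h>
    #include <pthread.h>

    static void *blocker(void *arg) {
        (void)arg;
        printf("blocker: calling sleep(2) -- blocks only this kernel thread\n");
        sleep(2);                        /* blocking system call */
        printf("blocker: woke up\n");
        return NULL;
    }

    static void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < 5; i++) {
            printf("worker: still running (%d)\n", i);
            usleep(300 * 1000);          /* keeps printing while the other thread is blocked */
        }
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        pthread_create(&a, NULL, blocker, NULL);
        pthread_create(&b, NULL, worker, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        return 0;
    }

Contrast this with the user-level sketch above, where a blocking call in one thread would have stalled the whole process.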

The four major benefits of multi-threading are given below:

Responsiveness - One thread can continue to respond quickly while other threads are blocked or busy with intensive calculations.

Resource sharing - By default, threads share common code, data, and other resources, which allows multiple tasks to be performed simultaneously within a single address space.

Economy - Creating, managing, and context-switching between threads is much cheaper than performing the same operations on processes. (A rough comparison is sketched below.)
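As a rough illustration of the economy argument, this sketch times N fork/wait pairs against N pthread_create/join pairs with clock_gettime; the count N, the file name, and the exact numbers printed are machine-dependent assumptions, and the point is only the relative difference.

    /* gcc -pthread economy.c -o economy   (file name is hypothetical) */
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>
    #include <sys/wait.h>
    #include <pthread.h>

    #define N 200

    static void *noop(void *arg) { return arg; }

    static double elapsed_ms(struct timespec a, struct timespec b) {
        return (b.tv_sec - a.tv_sec) * 1e3 + (b.tv_nsec - a.tv_nsec) / 1e6;
    }

    int main(void) {
        struct timespec t0, t1;

        /* Create and reap N child processes (error checking omitted). */
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < N; i++) {
            pid_t pid = fork();
            if (pid == 0) _exit(0);      /* child exits immediately */
            waitpid(pid, NULL, 0);
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);
        printf("%d fork/wait pairs:           %.2f ms\n", N, elapsed_ms(t0, t1));

        /* Create and join N threads. */
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < N; i++) {
            pthread_t tid;
            pthread_create(&tid, NULL, noop, NULL);
            pthread_join(tid, NULL);
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);
        printf("%d pthread_create/join pairs: %.2f ms\n", N, elapsed_ms(t0, t1));
        return 0;
    }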

Scalability, i.e. utilization of multiprocessor architectures - A process with a single thread runs on only one CPU, no matter how many CPUs are available. The execution of a multi-threaded application, on the other hand, can be spread across all available processors, as in the sketch below. (Note that single-threaded processes can still benefit from multiprocessor architectures when multiple processes are contending for the CPU, i.e. when the load average is above a certain threshold.)
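The following sketch divides a CPU-bound summation across several threads so that, on a multiprocessor, the kernel can run the chunks on different CPUs in parallel. NTHREADS, the problem size, and the file name are arbitrary values assumed for the example.

    /* gcc -pthread scale.c -o scale   (file name is hypothetical) */
    #include <stdio.h>
    #include <pthread.h>

    #define NTHREADS 4                       /* ideally close to the number of CPUs */
    #define N        100000000LL

    struct chunk { long long start, end, sum; };

    static void *partial_sum(void *arg) {
        struct chunk *c = arg;
        long long s = 0;
        for (long long i = c->start; i < c->end; i++)   /* CPU-bound work */
            s += i;
        c->sum = s;
        return NULL;
    }

    int main(void) {
        pthread_t tids[NTHREADS];
        struct chunk chunks[NTHREADS];
        long long per = N / NTHREADS, total = 0;

        /* Each thread gets one chunk; the kernel may schedule them on different CPUs. */
        for (int i = 0; i < NTHREADS; i++) {
            chunks[i].start = i * per;
            chunks[i].end   = (i == NTHREADS - 1) ? N : (i + 1) * per;
            pthread_create(&tids[i], NULL, partial_sum, &chunks[i]);
        }
        for (int i = 0; i < NTHREADS; i++) {
            pthread_join(tids[i], NULL);
            total += chunks[i].sum;
        }
        printf("sum of 0..%lld = %lld\n", N - 1, total);
        return 0;
    }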
