Design Issues in Distributed Operating Systems

The following are the main design issues in a distributed operating system:


Transparency

The user interface of a transparent distributed system should be the same for local and remote resources. That is, users should be able to access remote resources as though they were local, and the distributed system should be responsible for locating the resources and managing the interaction with them. User mobility is another aspect of transparency.

Users should not be forced to use a specific machine; the system should allow them to log into any machine and use its services. A transparent distributed system provides user mobility by bringing the user's environment (for example, the home directory) to wherever the user logs in.

Transparency covers several aspects of a distributed system. These are given below.

a) Location transparency: Users cannot tell where resources are physically located.

b) Migration transparency: Resources can move from one site to another without changing their identities, such as their names.

c) Replication transparency: Users cannot tell how many copies of the same file or directory exist on different machines.

d) Concurrency transparency: Multiple users can share resources at the same time without explicit coordination.

e) Parallelism transparency: Activities can run in parallel without users being aware of it.
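Location transparency, the first aspect above, can be made concrete with a small sketch: clients name a resource and never see whether the data is served locally or from a remote machine. All names here (`Resolver`, `read_resource`, the stores) are hypothetical illustrations, not a real distributed file-system API.

```python
# Minimal sketch of location transparency: the caller uses one logical
# name space; the resolver hides where the data actually lives.
LOCAL_STORE = {"/home/alice/notes.txt": b"meeting at 10"}
REMOTE_STORE = {"/shared/report.pdf": b"%PDF-1.4 ..."}  # stand-in for a remote server

class Resolver:
    """Maps a logical name to whichever store currently holds it."""
    def locate(self, name):
        if name in LOCAL_STORE:
            return LOCAL_STORE
        if name in REMOTE_STORE:
            return REMOTE_STORE  # in a real system this would be an RPC stub
        raise FileNotFoundError(name)

def read_resource(name, resolver=Resolver()):
    # Identical client code for local and remote data -- that is the point.
    return resolver.locate(name)[name]

print(read_resource("/home/alice/notes.txt"))  # served locally
print(read_resource("/shared/report.pdf"))     # served remotely, same call
```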


Scalability

Scalability is another issue in a distributed system. Scalability is the capacity of a system to handle an increased service load. The resources of any system are bounded, so it may become saturated under increased load.

Take a file system as an example: saturation occurs either when a server's CPU runs at a very high utilization rate or when its disks are almost full. Scalability is a relative property, but it can be measured. A scalable system should handle increased load more gracefully than a non-scalable one.

In practice, performance first degrades moderately, and then the system's resources reach saturation. Even a perfect design cannot avoid this. Scalability is related to fault tolerance: a heavily loaded component can become paralyzed and behave like a faulty component.
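A rough way to see why performance degrades moderately at first and then collapses at saturation is the classic M/M/1 queueing formula, where the mean response time is R = S / (1 − ρ) for service time S and server utilization ρ. This is a textbook model, not a claim about any particular system:

```python
# Response time under the simple M/M/1 queueing model:
#   R = S / (1 - rho), S = service time per request, rho = utilization.
# As rho approaches 1 (saturation), R grows without bound.
def response_time(service_time_s, utilization):
    if not 0 <= utilization < 1:
        raise ValueError("utilization must be in [0, 1)")
    return service_time_s / (1.0 - utilization)

for rho in (0.5, 0.9, 0.99):
    ms = response_time(0.01, rho) * 1000
    print(f"utilization {rho:.2f} -> mean response time {ms:.0f} ms")
# 50% load doubles the 10 ms service time; 99% load makes it 100x worse.
```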

When load is shifted from a faulty component to its backup, the backup may in turn become saturated. Having extra resources is therefore essential both for ensuring reliability and for handling peak loads gracefully. An inherent advantage of a distributed system is its potential for fault tolerance and scalability, thanks to its multiplicity of resources.


Flexibility

Another key issue is flexibility. It is important that the system be flexible, because we are still learning how best to build distributed systems.


Reliability

One of the original goals of distributed operating systems was to make them more reliable than single-processor systems. If some machines fail, the other machines should continue their work without trouble. A highly reliable system must be highly available, but that is not enough: data entrusted to the system must not be lost, and if files are stored on multiple machines, the copies must be kept consistent. In general, the more copies kept, the better the availability.
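The claim that more copies improve availability follows from a simple probability argument: if each of n replicas is independently up with probability p, the data is unavailable only when all n replicas are down, so availability is 1 − (1 − p)^n. A small sketch of the calculation (the numbers are illustrative, not measurements):

```python
# Availability of replicated data under independent replica failures:
#   A(p, n) = 1 - (1 - p)**n
# p = probability a single replica is up, n = number of replicas.
def availability(p, n):
    return 1 - (1 - p) ** n

for n in (1, 2, 3):
    print(f"{n} replica(s) at p=0.99 -> availability {availability(0.99, n):.6f}")
# Going from 1 to 3 replicas cuts unavailability from 1% to 0.0001%.
```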


Performance

Always lurking in the background is the issue of performance. Performance matters in building a transparent, flexible, reliable distributed system. In general, running a particular application on a distributed system should not be appreciably worse than running the same application on a single processor. Unfortunately, achieving this is easier said than done.

Fault Tolerance

Computer systems sometimes fail. When faults occur in hardware or software, programs may produce unexpected or incorrect results, or may stop before completing their computation. Failures in a distributed system, however, are partial: some parts of the system fail while others continue to function. This makes handling failures more difficult. A distributed system can nevertheless provide a high degree of availability, because servers and data are replicated and data can be recovered easily.
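One common way replicated servers mask partial failure is client-side failover: the client tries each replica in turn and succeeds as long as at least one is up. A minimal sketch, where the replica names and the `fetch` helper are hypothetical stand-ins for real RPC calls:

```python
# Masking partial failure with replicated servers: skip dead replicas.
REPLICAS = ["fs1.example.org", "fs2.example.org", "fs3.example.org"]

def fetch(host, path, up_hosts):
    """Stand-in for a real RPC; fails if the replica is unreachable."""
    if host not in up_hosts:
        raise ConnectionError(f"{host} unreachable")
    return f"contents of {path} from {host}"

def read_with_failover(path, up_hosts):
    last_error = None
    for host in REPLICAS:              # try replicas in order
        try:
            return fetch(host, path, up_hosts)
        except ConnectionError as err:  # partial failure: move on
            last_error = err
    raise RuntimeError("all replicas down") from last_error

# fs1 has crashed; the request is transparently served by fs2.
print(read_with_failover("/etc/motd", up_hosts={"fs2.example.org"}))
```

Real systems add timeouts and consistency checks on top of this, but the basic pattern is the same: the multiplicity of resources is what turns a total failure into a partial one.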
