Issues in Designing Load Sharing Algorithms.

This question appears in Mumbai University > Parallel And Distributed System subject

Marks: 10 M

Year: Dec 13

1 Answer
  • In a distributed system, for proper utilization of resources it is not required to balance the load on all nodes; it is necessary and sufficient to prevent the nodes from being idle. This approach is known as load sharing.
    • Load sharing is much simpler than load balancing, since it only attempts to ensure that no node is idle while a heavily loaded node exists.
    • Load-sharing algorithms require that proper decisions be made regarding the load estimation policy, process transfer policy, state information exchange policy, location policy, priority assignment policy, and migration limiting policy.
    • These policies for load sharing are as follows:

1. Load estimation policies for Load-sharing algorithms

  • Load-sharing algorithms only attempt to prevent nodes from being idle, so it is sufficient to know whether a node is busy or idle. These algorithms therefore normally employ the simplest load estimation policy: counting the total number of processes on a node.
  • In modern systems counting the total number of processes on a node is not suitable, because many processes remain permanently resident but run only occasionally. Measuring CPU utilization should therefore be used to estimate the load of a node in these systems.
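The two estimation policies above can be contrasted in a small sketch. This is a toy illustration only; the `Node` class and its field names are assumptions, not part of any real system:

```python
# Toy sketch of the two load estimation policies; all names are illustrative.

class Node:
    def __init__(self, processes, busy_time, total_time):
        self.processes = processes    # processes currently on this node
        self.busy_time = busy_time    # time the CPU spent non-idle
        self.total_time = total_time  # total elapsed time

def load_by_process_count(node):
    """Simplest policy: load = total number of processes on the node."""
    return len(node.processes)

def load_by_cpu_utilization(node):
    """Policy for modern systems: load = fraction of time the CPU was busy."""
    return node.busy_time / node.total_time

n = Node(processes=["shell", "daemon"], busy_time=30, total_time=100)
print(load_by_process_count(n))    # 2, but the daemon may be mostly asleep
print(load_by_cpu_utilization(n))  # 0.3 reflects the actual CPU load
```

The process count says the node is as loaded as any other two-process node, while the utilization figure shows it is mostly idle, which is why utilization is preferred on modern systems.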

2. Process transfer policies for Load-sharing algorithms

  • Load-sharing algorithms normally use the all-or-nothing strategy. This strategy uses a threshold value fixed at one for all nodes: a node becomes a receiver node when it has no process and becomes a sender node when it has more than one process.
  • To avoid wasting the processing power of a node that is left with zero processes, some load-sharing algorithms use a threshold value of two instead of one, so that a node starts looking for work before it actually becomes idle.
  • When CPU utilization is used as the load estimation policy, the double-threshold policy should be used as the process transfer policy.
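The all-or-nothing rule can be sketched as follows. This is a minimal illustration; the function name and `threshold` parameter are assumptions:

```python
def classify(process_count, threshold=1):
    """All-or-nothing transfer policy: classify a node by its process count."""
    if process_count < threshold:
        return "receiver"   # node has capacity to accept a remote process
    if process_count > threshold:
        return "sender"     # node has surplus processes to give away
    return "ok"             # node sits at the threshold; neither role

# Classic policy with the threshold fixed at one:
print(classify(0))               # receiver: node has no process
print(classify(2))               # sender: node has more than one process
# With a threshold of two, a node with a single process already seeks work:
print(classify(1, threshold=2))  # receiver
```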

3. Location policies for Load-sharing algorithms

The location policy decides whether the sender node or the receiver node of the process takes the initiative to search for a suitable node in the system. The location policy can be one of the following:

Sender-initiated location policy

  • In this policy heavily loaded nodes search for lightly loaded nodes.
  • When a node becomes overloaded, it either broadcasts a message or randomly probes the other nodes one by one to find a node that is able to receive remote processes.
  • A node is able to receive a process only if the transfer of the process will not increase the receiver node's load above its threshold value.
  • When broadcasting, a suitable node is known to be present as soon as a reply arrives at the sender node.
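Random probing by an overloaded sender might be sketched like this. The sketch is illustrative only; `loads`, `probe_limit`, and the function name are assumptions:

```python
import random

def find_receiver(loads, sender, threshold=1, probe_limit=3, rng=random):
    """Sender-initiated location: probe random nodes for one that can
    accept a process without exceeding its threshold."""
    candidates = [n for n in loads if n != sender]
    for _ in range(min(probe_limit, len(candidates))):
        node = rng.choice(candidates)
        # A transfer is acceptable only if it keeps the receiver within threshold.
        if loads[node] + 1 <= threshold:
            return node
    return None  # no suitable node found; the sender keeps the process

rng = random.Random(0)  # seeded so the example is repeatable
print(find_receiver({"A": 5, "B": 0, "C": 0}, "A", rng=rng))   # "B" or "C"
print(find_receiver({"A": 5, "B": 3}, "A", rng=rng))            # None: B is over threshold
```

Bounding the search with `probe_limit` is what keeps random probing cheaper than broadcasting in a large system.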

Receiver-initiated location policy

  • In this policy lightly loaded nodes search for heavily loaded nodes.
  • When a node becomes under-loaded, it either broadcasts a message or randomly probes the other nodes one by one to indicate its willingness to receive remote processes.
  • A node can receive a process only if the transfer will not increase the receiver node's load above its threshold value.
  • When broadcasting, a suitable node is known to be present as soon as a reply arrives at the receiving node.
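The receiver-initiated case is symmetric: an under-loaded node probes for a node holding surplus work. Again an illustrative sketch with assumed names:

```python
import random

def find_sender(loads, receiver, threshold=1, probe_limit=3, rng=random):
    """Receiver-initiated location: an under-loaded node probes random
    nodes for one holding more processes than the threshold allows."""
    candidates = [n for n in loads if n != receiver]
    for _ in range(min(probe_limit, len(candidates))):
        node = rng.choice(candidates)
        if loads[node] > threshold:  # node has a surplus process to transfer
            return node
    return None  # no overloaded node found within the probe limit

rng = random.Random(1)
print(find_sender({"A": 0, "B": 4, "C": 4}, "A", rng=rng))  # "B" or "C"
```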

4. State information exchange policies for Load-sharing algorithms:

In load-sharing algorithms it is not necessary for the nodes to exchange state information periodically; a node needs to know the state of other nodes only when it is either under-loaded or overloaded. A node therefore shares state information with other nodes only when its state changes. Commonly used policies are:

i. Broadcast when state changes

  • In the sender-initiated/receiver-initiated location policy, a node broadcasts a state information request when it becomes overloaded/under-loaded.
  • When the receiver-initiated policy is used with a fixed threshold value of one, this is called the broadcast-when-idle policy.

ii. Poll when state changes

  • In large networks a polling mechanism is used instead of broadcasting.
  • The polling mechanism randomly asks different nodes for state information until an appropriate node is found or a probe limit is reached.
  • When the receiver-initiated policy is used with a fixed threshold value of one, this is called the poll-when-idle policy.
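The rule shared by both policies, namely announcing state only when it changes rather than periodically, can be sketched as follows; all names are illustrative:

```python
class StateAnnouncer:
    """Toy model of a node that shares state only on busy/idle transitions."""

    def __init__(self):
        self.last_state = None
        self.broadcasts = []  # stands in for messages sent to other nodes

    def update(self, process_count, threshold=1):
        state = "idle" if process_count < threshold else "busy"
        if state != self.last_state:   # announce only when the state flips
            self.broadcasts.append(state)
            self.last_state = state

node = StateAnnouncer()
for count in [2, 3, 2, 0, 0, 1]:   # process count over time
    node.update(count)
print(node.broadcasts)  # ['busy', 'idle', 'busy']: three messages, not six
```

Six load observations produce only three messages, which is the saving over periodic exchange.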

5. Priority assignment policy for Load-sharing algorithms

In a distributed operating system that supports process migration, a priority assignment rule for scheduling the local and remote processes of a node must be planned.

One of the following priority assignment rules can be used:

i. Selfish priority assignment rule

  • In this rule local processes are given higher priority than remote processes.
  • This rule yields the worst overall response time performance of the three rules, but the best response time performance is achieved for local processes.

ii. Altruistic priority assignment rule

  • Remote processes are given higher priority than local processes in this rule.
  • It has the best overall response time performance of the three rules.

iii. Intermediate priority assignment rule

  • Priority in this rule is decided based on the number of local and remote processes on a node.
  • When the number of local processes is greater than or equal to the number of remote processes, local processes are given higher priority; otherwise remote processes are given higher priority than local processes.
  • The performance of this rule lies between that of the two rules above, with overall response time performance close to that of the altruistic policy.
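The three rules can be summarized in one function. This is an illustrative sketch: the rule names follow the text, but the function signature is an assumption:

```python
def higher_priority(rule, n_local, n_remote):
    """Return which class of process gets higher priority on a node."""
    if rule == "selfish":
        return "local"       # local processes always win
    if rule == "altruistic":
        return "remote"      # remote processes always win
    if rule == "intermediate":
        # Locals win when they are at least as numerous as remotes.
        return "local" if n_local >= n_remote else "remote"
    raise ValueError("unknown rule: " + rule)

print(higher_priority("intermediate", 3, 2))  # local
print(higher_priority("intermediate", 2, 5))  # remote
```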

6. Migration limiting policy for Load-sharing algorithms:

  • This policy determines the total number of times a process can migrate.

One of the following two policies may be used:

a. Uncontrolled: A remote process arriving at a node is treated just like a process originating at the node, because of which a process can migrate any number of times.

b. Controlled: This avoids the instability of the uncontrolled policy and uses a migration count parameter to fix a limit on the number of times a process can migrate.

  • A long process may be allowed to migrate more times compared to a short process.
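The controlled policy amounts to carrying a migration counter with each process. A minimal sketch, in which the field and function names are assumptions:

```python
class Process:
    def __init__(self, name, migration_limit=2):
        self.name = name
        self.migrations = 0                     # how many times it has moved
        self.migration_limit = migration_limit  # fixed limit (controlled policy)

def try_migrate(proc):
    """Allow a migration only while the process is under its limit."""
    if proc.migrations < proc.migration_limit:
        proc.migrations += 1
        return True
    return False  # limit reached: the process must finish where it is

p = Process("p1", migration_limit=2)
print(try_migrate(p))  # True
print(try_migrate(p))  # True
print(try_migrate(p))  # False: the migration count has hit the limit
```

A longer limit for long processes would simply mean constructing them with a larger `migration_limit`.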