Dynamic Programming Approach:

The dynamic programming approach is similar to divide and conquer in that it breaks the problem down into smaller and smaller possible sub-problems. But unlike divide and conquer, these sub-problems are not solved independently. Instead, the results of these smaller sub-problems are remembered and reused for similar or overlapping sub-problems.

Dynamic programming is used for problems that can be divided into similar sub-problems, so that their results can be reused. Mostly, these algorithms are used for optimization. Before solving the sub-problem at hand, a dynamic programming algorithm examines the results of previously solved sub-problems. The solutions of the sub-problems are combined to achieve the best overall solution.

So we can say that −

  • The problem should be divisible into smaller overlapping sub-problems.

  • An optimum solution can be achieved by using optimum solutions of smaller sub-problems.

  • Dynamic algorithms use memoization.

Comparison

In contrast to greedy algorithms, which address local optimization, dynamic algorithms aim at an overall optimization of the problem.

In contrast to divide and conquer algorithms, where solutions are combined to achieve an overall solution, dynamic algorithms use the output of a smaller sub-problem and then try to optimize a bigger sub-problem. Dynamic algorithms use memoization to remember the output of already solved sub-problems.
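For instance, memoization is easy to see on Fibonacci numbers (the first entry in the example list below). Here is a minimal Python sketch of the top-down approach; the cache plays the role of the table of remembered sub-problem outputs:

from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n: int) -> int:
    # Base cases: fib(0) = 0, fib(1) = 1.
    if n < 2:
        return n
    # fib(n - 1) and fib(n - 2) overlap heavily; lru_cache remembers
    # each result, so every sub-problem is computed only once.
    return fib(n - 1) + fib(n - 2)

print(fib(40))  # 102334155, in O(n) distinct calls instead of O(2^n)

Without the cache, the same recursion would recompute the same sub-problems exponentially many times.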

Example

The following computer problems can be solved using the dynamic programming approach −

  • Fibonacci number series
  • Knapsack problem
  • Tower of Hanoi
  • All-pairs shortest path by Floyd-Warshall
  • Shortest path by Dijkstra
  • Project scheduling

Dynamic programming can be used in both a top-down and a bottom-up manner. And of course, most of the time, referring to the previous solution's output is cheaper than recomputing it in terms of CPU cycles.
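As a small illustration of the bottom-up manner (the top-down sketch appeared earlier), the same Fibonacci computation can fill a table iteratively, solving the smallest sub-problems first:

def fib_bottom_up(n: int) -> int:
    # Bottom-up: build the table from the smallest sub-problems upward.
    if n < 2:
        return n
    table = [0] * (n + 1)
    table[1] = 1
    for i in range(2, n + 1):
        # Each entry reuses two previously stored outputs rather than
        # recomputing them.
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fib_bottom_up(40))  # 102334155, same result as the top-down version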

1. General Method

  • Works the same way as divide and conquer, by combining solutions to sub-problems.
    – Divide and conquer partitions a problem into independent sub-problems.
    – The greedy method works only with local information.
  • Dynamic programming is required to take into account the fact that the problems may not be partitioned into independent sub-problems.
    – Each sub-problem is solved only once and the answer is saved in a table.
      ∗ Applicable when the sub-problems overlap, i.e., sub-problems share sub-problems.
    – The solution to a problem can be viewed as the result of a sequence of decisions.

2. Multistage graphs

A multistage graph G = (V, E) is a directed graph in which the vertices are partitioned into k (where k > 1) disjoint subsets S = {s1, s2, …, sk} such that if edge (u, v) is in E, then u ∈ si and v ∈ si+1 for some subsets in the partition, and |s1| = |sk| = 1.

The vertex s ∈ s1 is called the source and the vertex t ∈ sk is called the sink.

G is usually assumed to be a weighted graph. In this graph, the cost of an edge (i, j) is denoted c(i, j). Hence, the cost of a path from source s to sink t is the sum of the costs of the edges on this path.

The multistage graph problem is finding the path with minimum cost from source s to sink t.
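Before the worked example, here is a minimal Python sketch of the backward dynamic program for this problem. The edge-dictionary representation and the function name are illustrative assumptions, not a fixed API; the recurrence is the one applied step by step below (the cost of a vertex is the minimum, over its outgoing edges, of the edge cost plus the cost of the successor):

from collections import defaultdict

def multistage_shortest_path(n, edges, source=1, sink=None):
    # Vertices are assumed numbered 1..n stage by stage, so every
    # edge (u, v) satisfies u < v. `edges` maps (u, v) -> cost c(u, v).
    if sink is None:
        sink = n
    adj = defaultdict(list)
    for (u, v), c in edges.items():
        adj[u].append((v, c))

    INF = float("inf")
    cost = [INF] * (n + 1)   # cost[j]: minimum cost from vertex j to the sink
    nxt = [None] * (n + 1)   # successor of j on an optimal path
    cost[sink] = 0

    # Backward DP: cost[u] = min over edges (u, v) of c(u, v) + cost[v].
    # Decreasing order works because edges only go to later stages.
    for u in range(sink - 1, source - 1, -1):
        for v, c in adj[u]:
            if c + cost[v] < cost[u]:
                cost[u] = c + cost[v]
                nxt[u] = v

    # Follow the stored successors to reconstruct the optimal path.
    path, j = [source], source
    while nxt[j] is not None:
        j = nxt[j]
        path.append(j)
    return cost[source], path

# Hypothetical usage on a tiny 3-stage graph {1}, {2, 3}, {4}:
edges = {(1, 2): 2, (1, 3): 5, (2, 4): 3, (3, 4): 1}
print(multistage_shortest_path(4, edges))  # (5, [1, 2, 4])

On the graph in the figure below, with its edge weights, this sketch would reproduce the hand computation and return the cost-12 path 1 → 3 → 5 → 8 → 9.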

Example

Consider the following example to understand the concept of multistage graph.

[Figure: a multistage graph with vertices 1–9 partitioned into five stages {1}, {2, 3}, {4, 5, 6}, {7, 8}, {9}; source 1, sink 9; edge weights as used in the calculations below.]

According to the formula Cost(i, j) = min {c(j, l) + Cost(i + 1, l)}, taken over all edges (j, l) into stage i + 1, we calculate Cost(i, j), the minimum cost from vertex j in stage i to the sink, using the following steps.

Step-1: Cost(k-2, j)

In this step, three nodes (nodes 4, 5, 6) are selected as j. Hence, we have three options from which to choose the minimum cost at this step.

Cost(3, 4) = min {c(4, 7) + Cost(7, 9), c(4, 8) + Cost(8, 9)} = 7

Cost(3, 5) = min {c(5, 7) + Cost(7, 9), c(5, 8) + Cost(8, 9)} = 5

Cost(3, 6) = min {c(6, 7) + Cost(7, 9), c(6, 8) + Cost(8, 9)} = 5

Step-2: Cost(k-3, j)

Two nodes are selected as j because at stage k − 3 = 2 there are two nodes, 2 and 3. So the values are i = 2 and j = 2 or 3.

Cost(2, 2) = min {c(2, 4) + Cost(4, 8) + Cost(8, 9), c(2, 6) + Cost(6, 8) + Cost(8, 9)} = 8

Cost(2, 3) = min {c(3, 4) + Cost(4, 8) + Cost(8, 9), c(3, 5) + Cost(5, 8) + Cost(8, 9), c(3, 6) + Cost(6, 8) + Cost(8, 9)} = 10

Step-3: Cost(k-4, j)

Cost(1, 1) = min {c(1, 2) + Cost(2, 6) + Cost(6, 8) + Cost(8, 9), c(1, 3) + Cost(3, 5) + Cost(5, 8) + Cost(8, 9), c(1, 3) + Cost(3, 6) + Cost(6, 8) + Cost(8, 9)} = 12

Hence, the path having the minimum cost is 1→ 3→ 5→ 8→ 9.

3. Single source shortest path

In the single-source shortest path problem, we have to find shortest paths from a source vertex v to all other vertices in the graph. In the related single-destination shortest path problem, we have to find shortest paths from all vertices in a directed graph to a single destination vertex v.

Bellman Ford's Algorithm:

The Bellman-Ford algorithm is used to find the shortest paths from the source vertex to all other vertices in a weighted graph. It depends on the following concept: a shortest path contains at most n − 1 edges, because a shortest path cannot contain a cycle.

There is no need to pass through a vertex again, because the shortest paths to all other vertices can be found without visiting any vertex a second time.

Algorithm Steps:

  • The outer loop runs n − 1 times.

  • In each pass, loop over all edges and check whether the next node's distance > current node's distance + edge weight; in that case, update the next node's distance to "current node distance + edge weight".

This algorithm depends on the relaxation principle, where the shortest distances to all vertices are gradually replaced by more accurate values until the optimum solution is eventually reached. In the beginning, all vertices have a distance of "infinity" except the source vertex, whose distance is 0. Then update all vertices connected to the source with new distances (source vertex distance + edge weight), then apply the same concept to the newly reached vertices with their new distances, and so on.

A very important application of Bellman-Ford is to check whether there is a negative cycle in the graph: if an edge can still be relaxed after the n − 1 passes, the graph contains a negative-weight cycle reachable from the source.

The time complexity of the Bellman-Ford algorithm is relatively high, O(V·E); in the case E = V², this becomes O(V³).

Bellman-Ford Algorithm

Algorithm: Bellman-Ford-Algorithm(G, w, s)

for each vertex v ∈ G.V
    v.d := ∞
    v.π := NIL
s.d := 0
for i := 1 to |G.V| - 1
    for each edge (u, v) ∈ G.E
        if v.d > u.d + w(u, v)
            v.d := u.d + w(u, v)
            v.π := u
for each edge (u, v) ∈ G.E
    if v.d > u.d + w(u, v)
        return FALSE
return TRUE

Analysis

The first for loop is used for initialization and runs in O(V) time. The main loop makes |V| − 1 passes over the edges, each pass taking O(E) time.

Hence, the Bellman-Ford algorithm runs in O(V·E) time.
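For completeness, here is a runnable Python version of the pseudocode above; the edge-list representation and the names are my own choices, not part of the original algorithm statement:

def bellman_ford(num_vertices, edges, source):
    # edges is a list of (u, v, w) triples over vertices 0..num_vertices-1.
    INF = float("inf")
    dist = [INF] * num_vertices   # v.d in the pseudocode
    pred = [None] * num_vertices  # v.π in the pseudocode
    dist[source] = 0

    # |V| - 1 relaxation passes over every edge.
    for _ in range(num_vertices - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:   # relax edge (u, v)
                dist[v] = dist[u] + w
                pred[v] = u

    # One extra pass: any further improvement means a negative-weight
    # cycle reachable from the source (the FALSE case above).
    for u, v, w in edges:
        if dist[u] + w < dist[v]:
            return False, dist, pred
    return True, dist, pred

# Example with a negative edge but no negative cycle:
edges = [(0, 1, 4), (0, 2, 5), (1, 3, 7), (2, 1, -2), (3, 2, 1)]
ok, dist, pred = bellman_ford(4, edges, 0)
print(ok, dist)  # True [0, 3, 5, 10]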
