Depth first search.
1 Answer

DFS always expands the deepest node in the current fringe of the search tree. The progress of the search is illustrated in figure (8). The search proceeds immediately to the deepest level of the search tree, where the nodes have no successors. As those nodes are expanded, they are dropped from the fringe, and the search “backs up” to the next shallowest node that still has unexplored successors.

This strategy can be implemented by TREE-SEARCH with a last-in-first-out (LIFO) queue, also known as a stack. As an alternative to the TREE-SEARCH implementation, it is common to implement DFS with a recursive function that calls itself on each of its children in turn.
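
For concreteness, here is a minimal Python sketch of both formulations. The `problem` object with `initial_state`, `is_goal`, and `successors` is a hypothetical interface assumed for illustration, not the book's TREE-SEARCH pseudocode.

```python
# Two sketches of DFS on a tree, assuming a hypothetical `problem` object that
# provides `initial_state`, `is_goal(state)`, and `successors(state)`.

def dfs_stack(problem):
    """Iterative DFS: TREE-SEARCH with a LIFO queue (a stack) as the fringe."""
    fringe = [problem.initial_state]              # stack of unexpanded nodes
    while fringe:
        state = fringe.pop()                      # expand the most recently added node
        if problem.is_goal(state):
            return state
        fringe.extend(problem.successors(state))  # children go on top of the stack
    return None                                   # fringe exhausted: no goal found


def dfs_recursive(problem, state=None):
    """Recursive DFS: the call stack plays the role of the LIFO queue."""
    if state is None:
        state = problem.initial_state
    if problem.is_goal(state):
        return state
    for child in problem.successors(state):       # try each child in turn
        result = dfs_recursive(problem, child)
        if result is not None:
            return result                         # a goal was found below this child
    return None                                   # back up: no goal in this subtree
```

Neither sketch detects repeated states or bounds the depth, so on an unbounded tree both inherit the non-termination problem discussed below.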

Figure (8): DFS on a binary tree. Nodes that have been expanded and have no descendants in the fringe can be removed from memory; these are shown in black. Nodes at depth 3 are assumed to have no successors, and M is the only goal node.

DFS has very modest memory requirements. It needs to store only a single path from the root to a leaf node, along with the remaining unexpanded sibling nodes for each node on the path. Once a node has been expanded, it can be removed from memory as soon as all its descendants have been fully expanded.

For a state space with branching factor b and maximum depth m, DFS requires storage of only bm + 1 nodes (the m + 1 nodes on the current path, plus at most b - 1 unexpanded siblings for each of the m non-root nodes on that path).

Using the same assumptions as in figure (7), and assuming that nodes at the same depth as the goal node have no successors, we find that DFS would require about 118 KB instead of 10 petabytes at d = 12, a factor of 10 billion times less space.
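
As a rough check of that figure, if (as assumed for figure (7), not reproduced here) the branching factor is b = 10 and each node takes about 1000 bytes of storage, then at d = m = 12:

$bm + 1 = 10 \times 12 + 1 = 121 \text{ nodes} \quad\Rightarrow\quad 121 \times 1000 \text{ bytes} \approx 118 \text{ KB}$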

A variant of DFS called backtracking search uses still less memory. In backtracking, only one successor is generated at a time; each partially expanded node remembers which successor to generate next. In this way, only O(m) memory is needed rather than O(bm).
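
A minimal Python sketch of this idea is given below; `successors` is a hypothetical generator function yielding child states one at a time, and `is_goal` a hypothetical goal test, so the names are illustrative rather than taken from the book.

```python
# Backtracking-style DFS: children are produced lazily, one at a time, by a
# generator, so only the states on the current path (plus one paused generator
# per level) are held in memory -- O(m) rather than O(bm).
# `successors` and `is_goal` are hypothetical, illustrative callables.

def backtracking_search(state, successors, is_goal):
    if is_goal(state):
        return [state]                      # solution path consisting of the goal alone
    for child in successors(state):         # ask for the next successor only when needed
        result = backtracking_search(child, successors, is_goal)
        if result is not None:
            return [state] + result         # prepend the current state to the path found
    return None                             # no goal below this state: back up
```

In Python, the paused generator at each level is what “remembers which successor to generate next” between calls.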

Backtracking search facilitates yet another memory-saving trick: generating a successor by modifying the current state description directly rather than copying it first. This reduces the memory requirement to just one state description and O(m) actions. For this to work, we must be able to undo each modification when we go back to generate the next successor. For problems with large state descriptions, such as robotic assembly, these techniques are critical to success.

The drawback of DFS is that it can make a wrong choice and get stuck going down a very long (or even infinite) path when a different choice would lead to a solution near the root of the search tree. For example, in figure (8), DFS will explore the entire left subtree even if node C is a goal node. If node J were also a goal node, then DFS would return it as the solution; hence DFS is not optimal.

If the left sub tree were of unbounded depth but contained no solutions, DFS would never terminate, hence it is not complete. In the worst case, DFS will generate all of the $O(b^m)$ nodes in the search tree, where m is the maximum depth of any node. Note that m can be much larger than d and is infinite if the tree is unbounded.
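
To make the gap between m and d concrete, again taking b = 10: if the shallowest goal lies at depth d = 1 but the tree extends to depth m = 12, DFS may generate on the order of $b^m = 10^{12}$ nodes before finding it, whereas a search that expands shallow nodes first would need only on the order of $b^d = 10$.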
