Constructing decision trees:
A decision tree is a diagram that maps out the possible options for solving a problem. Given particular criteria, a decision tree typically identifies the most beneficial option, or combination of options, for a given case.
The decision tree consists of nodes that form a rooted tree, meaning it is a directed tree with a node called “root” that has no incoming edges.
All other nodes have exactly one incoming edge. A node with outgoing edges is called an internal or test node. All other nodes are called leaves (also known as terminal or decision nodes).
In a decision tree, each internal node splits the instance space into two or more sub-spaces according to a certain discrete function of the input attribute values.
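The splitting behaviour of an internal node can be sketched in a few lines of Python. This is an illustrative sketch, not code from any particular library: the function and attribute names (`split_by_attribute`, `outlook`, `play`) are hypothetical, and records are modelled as plain dictionaries.

```python
# A minimal sketch of how an internal node partitions the instance
# space into sub-spaces, one per value of a discrete attribute.

def split_by_attribute(records, attribute):
    """Group records by the value they take for `attribute`."""
    partitions = {}
    for record in records:
        partitions.setdefault(record[attribute], []).append(record)
    return partitions

# Hypothetical training records (dictionaries of attribute values).
records = [
    {"outlook": "sunny", "play": "no"},
    {"outlook": "rainy", "play": "yes"},
    {"outlook": "sunny", "play": "yes"},
]

branches = split_by_attribute(records, "outlook")
# One sub-space per attribute value: "sunny" holds two records, "rainy" one.
```

In a full tree-building procedure, the same split would then be applied recursively to each resulting sub-space.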
Below are some assumptions made when building a decision tree:
In the beginning, we consider the whole training set as the root.
Feature values are preferred to be categorical. If the values are continuous, they are discretized before the model is built.
Records are distributed recursively on the basis of attribute values.
We use statistical methods, such as information gain or the Gini index, to decide which attribute to place at the root or at each internal node.
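One common statistical method for ordering attributes is information gain, the reduction in entropy achieved by splitting on an attribute. Below is a small self-contained sketch; the record and attribute names (`windy`, `play`) are hypothetical examples, not from the source.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (in bits) of a list of class labels."""
    total = len(labels)
    return -sum((c / total) * math.log2(c / total)
                for c in Counter(labels).values())

def information_gain(records, attribute, target):
    """Entropy reduction obtained by splitting `records` on `attribute`."""
    base = entropy([r[target] for r in records])
    remainder = 0.0
    for value in {r[attribute] for r in records}:
        subset = [r[target] for r in records if r[attribute] == value]
        # Weight each sub-space's entropy by its share of the records.
        remainder += (len(subset) / len(records)) * entropy(subset)
    return base - remainder

# Hypothetical records where "windy" perfectly predicts "play":
records = [
    {"windy": "yes", "play": "no"},
    {"windy": "yes", "play": "no"},
    {"windy": "no",  "play": "yes"},
    {"windy": "no",  "play": "yes"},
]

gain = information_gain(records, "windy", "play")
# Base entropy is 1 bit and both sub-spaces are pure, so the gain is 1.0.
```

The attribute with the highest information gain over the current set of records would be chosen as the root (or as the test at the current internal node).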