Why is naive Bayesian classification called "naive"? Briefly outline the major ideas of naive Bayesian classification.

Naive Bayes:

A naive Bayes classifier assumes that the presence (or absence) of a particular feature is unrelated to the presence (or absence) of any other feature, given the class variable. It is called "naive" because this conditional-independence assumption rarely holds exactly in practice, yet the classifier often works well despite it.
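Concretely, for a feature vector $(x_1, \dots, x_n)$ and class variable $C$, the independence assumption lets the class-conditional probability factor as

$$P(x_1, \dots, x_n \mid C) = \prod_{i=1}^{n} P(x_i \mid C)$$

so that only $n$ one-dimensional distributions need to be estimated instead of one high-dimensional joint distribution.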

• Bayes' theorem is used to find conditional probabilities.
• The conditional probability of an event is its likelihood given the additional information that some other event has already occurred.
• P(X|Y) is the conditional probability of event X occurring given that event Y has already occurred: $$P(X|Y) = \frac{P(X \cap Y)}{P(Y)}$$
• The initial probability, obtained before any additional information, is called the prior (a priori) probability.
• The revised probability, obtained after additional information becomes available, is called the posterior probability.
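The prior-to-posterior revision above can be shown with a short worked example. The numbers below (a condition with 1% prevalence, a test with 90% sensitivity and a 5% false-positive rate) are made up purely for illustration:

```python
# Worked example of Bayes' theorem: revising a prior into a posterior.
# All numbers are hypothetical, chosen only to illustrate the calculation.
p_h = 0.01                # prior P(h): 1% of people have the condition
p_pos_given_h = 0.90      # likelihood P(D|h): test is positive given condition
p_pos_given_not_h = 0.05  # P(D|not h): false-positive rate

# Total probability of the evidence, P(D), by the law of total probability
p_pos = p_pos_given_h * p_h + p_pos_given_not_h * (1 - p_h)

# Posterior P(h|D) via Bayes' theorem
posterior = p_pos_given_h * p_h / p_pos
print(round(posterior, 3))  # → 0.154
```

Note how the additional information (a positive test) revises the probability from the 1% prior up to roughly 15%, not to 90%: the evidence term P(D) in the denominator accounts for false positives.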

Basics of Bayesian Classification:

• Probabilistic learning: explicit probabilities are calculated for hypotheses.
• Incremental: each training example can incrementally increase or decrease the estimated probability that a hypothesis is correct.
• Probabilistic prediction: multiple hypotheses can be predicted, weighted by their probabilities.
• Meta-classification: the outputs of several classifiers can be combined, e.g., by multiplying the probabilities that all classifiers assign to a given class.
• Standard: even where full Bayesian methods are computationally intractable, they provide a standard of optimal decision making against which other methods can be measured.

Given training data D, the posterior probability of a hypothesis h, P(h|D), follows Bayes' theorem:

$$P(h|D) = \frac{P(D|h)P(h)}{P(D)}$$

P(h): prior probability of h, independent of the data

P(D): prior probability of D (the evidence)

P(D|h): conditional probability of D given h (the likelihood)

P(h|D): conditional probability of h given D (the posterior probability)
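Putting the pieces together, a naive Bayes classifier estimates P(h) from class frequencies and each P(x_i|h) from per-class feature frequencies, then picks the class with the largest P(h) · ∏ P(x_i|h) (the constant P(D) can be dropped when comparing classes). A minimal sketch on made-up categorical data (the feature names and rows below are illustrative, not from the text):

```python
from collections import Counter, defaultdict

# Toy training rows: ((outlook, windy), label). Data is hypothetical.
data = [
    (("sunny", "no"),  "play"),
    (("sunny", "yes"), "stay"),
    (("rainy", "no"),  "play"),
    (("rainy", "yes"), "stay"),
    (("sunny", "no"),  "play"),
]

class_counts = Counter(y for _, y in data)
priors = {c: n / len(data) for c, n in class_counts.items()}  # P(h)

# counts[(c, i)][v] = how often feature i took value v within class c
counts = defaultdict(Counter)
vocab = defaultdict(set)  # distinct values seen for each feature position
for x, y in data:
    for i, v in enumerate(x):
        counts[(y, i)][v] += 1
        vocab[i].add(v)

def predict(x):
    """Pick the class maximising P(h) * prod_i P(x_i|h); P(D) is dropped."""
    scores = {}
    for c in priors:
        p = priors[c]
        for i, v in enumerate(x):
            # Laplace smoothing so an unseen value never zeroes the product
            p *= (counts[(c, i)][v] + 1) / (class_counts[c] + len(vocab[i]))
        scores[c] = p
    return max(scores, key=scores.get)

print(predict(("sunny", "no")))  # → play
```

Multiplying the per-feature likelihoods is exactly the naive independence assumption in action; the Laplace smoothing (+1 counts) is a common practical addition, not part of the theorem itself.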