Chapter 46 Naive Bayes classifiers
Naive Bayes classifiers are a family of simple “probabilistic classifiers” based on applying Bayes’ theorem with strong (naïve) independence assumptions between the features.
They are among the simplest Bayesian network models, but coupled with kernel density estimation, they can achieve higher accuracy levels.
Naïve Bayes classifiers are highly scalable, requiring a number of parameters linear in the number of variables (features/predictors) in a learning problem. Maximum-likelihood training can be done by evaluating a closed-form expression, which takes linear time, rather than by expensive iterative approximation as used for many other types of classifiers.
\(P(c|x) = \frac{P(x|c)\,P(c)}{P(x)}\), where
\(P(c|x)\) - posterior probability of class \(c\) given predictor \(x\)
\(P(x|c)\) - likelihood of predictor \(x\) given class \(c\)
\(P(c)\) - class prior probability
\(P(x)\) - predictor prior probability
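As a sketch of how these pieces fit together, the following minimal Gaussian naive Bayes classifier (all names are illustrative, not from any library) estimates \(P(c)\) and the per-feature likelihood parameters in closed form, then predicts by maximizing \(\log P(c) + \sum_i \log P(x_i|c)\); the constant denominator \(P(x)\) is dropped, since it does not affect the argmax.

```python
import math

def fit(X, y):
    # Closed-form maximum-likelihood training: one pass to compute
    # class priors and per-class feature means/variances (Gaussian
    # likelihood assumption), hence linear time in the data size.
    model = {}
    for c in set(y):
        rows = [x for x, label in zip(X, y) if label == c]
        prior = len(rows) / len(X)                      # P(c)
        means = [sum(col) / len(rows) for col in zip(*rows)]
        variances = [sum((v - m) ** 2 for v in col) / len(rows) + 1e-9
                     for col, m in zip(zip(*rows), means)]
        model[c] = (prior, means, variances)
    return model

def log_gaussian(x, mean, var):
    # log of the Gaussian density N(x; mean, var)
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

def predict(model, x):
    # argmax over classes of log P(c) + sum_i log P(x_i | c)
    scores = {
        c: math.log(prior) + sum(log_gaussian(v, m, s)
                                 for v, m, s in zip(x, means, variances))
        for c, (prior, means, variances) in model.items()
    }
    return max(scores, key=scores.get)
```

For example, with a toy one-feature dataset `X = [[1.0], [1.2], [3.0], [3.2]]` and labels `y = ["a", "a", "b", "b"]`, `predict(fit(X, y), [1.1])` selects the class whose prior-weighted likelihood is largest. The naive independence assumption appears in the `sum` over per-feature log-likelihoods: features are scored independently given the class.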