How does a Bayesian belief network learn?


Bayesian classifiers are statistical classifiers. They can predict class membership probabilities, such as the probability that a given sample belongs to a specific class. Bayesian classifiers have also exhibited high accuracy and speed when applied to large databases.

Once classes are defined, the system should infer the rules that govern the classification; in other words, the system should be able to find the description of each class. The descriptions should refer only to the predicting attributes of the training set, so that the positive examples satisfy the description and the negative examples do not. A rule is said to be correct if its description covers all the positive examples of a class and none of the negative examples.
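To make this correctness criterion concrete, here is a minimal, hypothetical illustration (the rule, attribute name, and threshold below are invented for the example): a candidate rule is checked directly against the positive and negative examples of a class.

```python
def rule_is_correct(rule, positives, negatives):
    """A rule is correct if it covers every positive example and no negative one."""
    return all(rule(x) for x in positives) and not any(rule(x) for x in negatives)

# Hypothetical rule for a class "approved": income above a threshold.
rule = lambda x: x["income"] > 50_000
positives = [{"income": 60_000}, {"income": 75_000}]
negatives = [{"income": 30_000}]
print(rule_is_correct(rule, positives, negatives))  # True
```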

A simple classification scheme called naïve Bayes classification assumes that the contributions of all attributes are independent and that each contributes equally to the classification problem. By analyzing the contribution of each "independent" attribute, a conditional probability is determined. A classification is made by combining the impact that the several attributes have on the prediction to be made.

Naïve Bayes classification is called naïve because it assumes class conditional independence: the effect of an attribute value on a given class is independent of the values of the other attributes. This assumption is made to reduce computational cost, and it is in this sense that the method is considered naïve.
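The following is a minimal sketch of this scheme (the data, attribute values, and function names are hypothetical, not from the article). It estimates the prior P(C) and each conditional P(x_i | C) by counting, then picks the class that maximizes their product, which is exactly where the class conditional independence assumption is used:

```python
from collections import Counter, defaultdict

def train_naive_bayes(samples, labels):
    """Estimate the prior P(C) and per-attribute likelihoods P(x_i | C) by counting."""
    priors = Counter(labels)
    likelihoods = defaultdict(Counter)  # (class, attribute index) -> value counts
    for x, c in zip(samples, labels):
        for i, v in enumerate(x):
            likelihoods[(c, i)][v] += 1
    return priors, likelihoods, len(labels)

def classify(x, priors, likelihoods, n):
    """Pick the class maximizing P(C) * prod_i P(x_i | C)."""
    best, best_score = None, -1.0
    for c, count_c in priors.items():
        score = count_c / n  # prior P(C)
        for i, v in enumerate(x):
            score *= likelihoods[(c, i)][v] / count_c  # conditional P(x_i | C)
        if score > best_score:
            best, best_score = c, score
    return best

# Tiny made-up example: attributes (outlook, windy) -> class label.
samples = [("sunny", "no"), ("sunny", "yes"), ("rain", "yes"), ("rain", "no")]
labels = ["yes", "no", "no", "yes"]
model = train_naive_bayes(samples, labels)
print(classify(("sunny", "no"), *model))  # "yes"
```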

In the learning or training of a belief network, several scenarios are possible. The network topology may be given in advance or inferred from the data. The network variables may be observable or hidden in all or some of the training tuples. The hidden-data case is also referred to as missing values or incomplete data.

Several algorithms exist for learning the network topology from the training tuples given observable variables. The problem is one of discrete optimization. Human experts usually have a good grasp of the direct conditional dependencies that hold in the domain under analysis, which helps in network design. Experts typically must specify the conditional probabilities for the nodes that participate in direct dependencies.

These probabilities can then be used to compute the remaining probability values. If the network topology is known and the variables are observable, then training the network is straightforward. It consists of computing the CPT (conditional probability table) entries, as is similarly done when computing the probabilities involved in naïve Bayesian classification.
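For instance, when every variable is observed, each CPT entry reduces to a conditional relative frequency over the training tuples. A minimal sketch, assuming a made-up two-node network Rain -> WetGrass (the variables and data are invented for illustration):

```python
from collections import Counter
import itertools

# Toy fully observed training tuples for a network with the edge Rain -> WetGrass.
# With all variables observable, each CPT entry is a conditional frequency:
# P(WetGrass = w | Rain = r) = count(r, w) / count(r)
tuples = [
    {"Rain": 1, "WetGrass": 1},
    {"Rain": 1, "WetGrass": 1},
    {"Rain": 1, "WetGrass": 0},
    {"Rain": 0, "WetGrass": 0},
    {"Rain": 0, "WetGrass": 1},
    {"Rain": 0, "WetGrass": 0},
]

pair_counts = Counter((t["Rain"], t["WetGrass"]) for t in tuples)
parent_counts = Counter(t["Rain"] for t in tuples)

cpt = {
    (r, w): pair_counts[(r, w)] / parent_counts[r]
    for r, w in itertools.product([0, 1], repeat=2)
}
print(cpt)  # e.g. P(WetGrass=1 | Rain=1) = 2/3
```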

When the network topology is given and some of the variables are hidden, there are several methods to choose from for training the belief network. One promising method is based on gradient descent. For those without an advanced mathematical background, its description can look rather intimidating, with its calculus-packed formulae.
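As a rough sketch of what such a gradient step can look like (an illustrative simplification under strong assumptions, not the article's derivation: a two-node network with a hidden binary variable H and an observed binary variable X, with P(H) held fixed and only the CPT P(X | H) being learned from made-up data), each entry w = P(X = x | H = h) is nudged in the direction of the log-likelihood gradient, which for a CPT entry is the posterior P(H = h, X = x | d) divided by w and summed over the training records, after which each row is renormalized so it remains a probability distribution:

```python
import numpy as np

# Tiny network: hidden H -> observed X, both binary.
# Parameters: p_h[h] = P(H=h) (held fixed here), p_x_h[h, x] = P(X=x | H=h).
rng = np.random.default_rng(0)
data = rng.integers(0, 2, size=200)  # observed values of X only; H is hidden

p_h = np.array([0.5, 0.5])
p_x_h = np.array([[0.6, 0.4],
                  [0.3, 0.7]])

lr = 0.01
for step in range(100):
    grad = np.zeros_like(p_x_h)
    for x in data:
        joint = p_h * p_x_h[:, x]   # P(H=h, X=x)
        post = joint / joint.sum()  # posterior P(H=h | X=x)
        # Gradient of the log-likelihood w.r.t. each CPT entry w:
        # sum over records of P(H=h, X=x | d) / w
        grad[:, x] += post / p_x_h[:, x]
    p_x_h += lr * grad / len(data)
    # Renormalize rows so each P(X | H=h) stays a valid distribution.
    p_x_h /= p_x_h.sum(axis=1, keepdims=True)

print(p_x_h)
```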
