Inference

SMILE includes functions for several popular Bayesian network inference algorithms, including the clustering algorithm and several approximate stochastic sampling algorithms. To run inference, obtain the probability of evidence currently set in the network, or switch between inference algorithm implementations, use the following methods:

Java:

Network.updateBeliefs

Network.probEvidence

Network.setBayesianAlgorithm

Network.getBayesianAlgorithm

Network.setInfluenceDiagramAlgorithm

Network.getInfluenceDiagramAlgorithm

Python:

Network.update_beliefs

Network.prob_evidence

Network.set_bayesian_algorithm

Network.get_bayesian_algorithm

Network.set_influence_diagram_algorithm

Network.get_influence_diagram_algorithm

R:

Network$updateBeliefs

Network$probEvidence

Network$setBayesianAlgorithm

Network$getBayesianAlgorithm

Network$setInfluenceDiagramAlgorithm

Network$getInfluenceDiagramAlgorithm

C#:

Network.UpdateBeliefs

Network.ProbEvidence

Network.BayesianAlgorithm (read/write property)

Network.InfluenceDiagramAlgorithm (read/write property)

The default algorithm for discrete Bayesian networks is clustering, applied to a network preprocessed with relevance reasoning. The output of this algorithm is exact (as opposed to the output of the various sampling algorithms also available in the library).
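As a minimal sketch of the basic workflow in the Python wrapper (the model file name, node identifiers, and outcome identifiers below are hypothetical, and pysmile requires a valid license key to import), setting evidence, running the default exact algorithm, and reading results might look like:

```python
import pysmile  # SMILE's Python wrapper; requires a valid license key

net = pysmile.Network()
net.read_file("model.xdsl")            # hypothetical model file

net.set_evidence("Symptom", "Present")  # hypothetical node/outcome ids
net.update_beliefs()                    # runs the default clustering algorithm

print(net.get_node_value("Disease"))    # posterior probabilities of a node
print(net.prob_evidence())              # probability of the evidence set above
```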

The sampling inference algorithms can be controlled by setting the number of generated samples with the Network.setSampleCount method. The more samples are generated, the longer the inference takes to complete, and the more accurate the estimates become.
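The sample-count/accuracy tradeoff can be illustrated with a tiny pure-Python likelihood-weighting sketch (this is an illustrative stand-in, not SMILE's implementation or API): on a two-node network Rain → WetGrass, the estimate of P(Rain | WetGrass=true) approaches the exact value 0.18/0.26 ≈ 0.692 as the sample count grows.

```python
import random

# Tiny two-node network: Rain -> WetGrass (illustrative only)
P_RAIN = 0.2
P_WET_GIVEN = {True: 0.9, False: 0.1}   # P(WetGrass=true | Rain)

def estimate_p_rain_given_wet(n_samples, rng):
    """Likelihood weighting: sample Rain from its prior, weight each
    sample by the likelihood of the evidence WetGrass=true."""
    num = den = 0.0
    for _ in range(n_samples):
        rain = rng.random() < P_RAIN
        w = P_WET_GIVEN[rain]           # evidence weight
        num += w * rain                 # rain counts as 1 when True
        den += w
    return num / den

rng = random.Random(42)
# Exact answer: 0.2*0.9 / (0.2*0.9 + 0.8*0.1) = 0.18/0.26
for n in (100, 10_000):
    print(n, round(estimate_p_rain_given_wet(n, rng), 3))
```

Larger sample counts shrink the estimation error at the cost of proportionally more runtime, which is exactly the knob Network.setSampleCount exposes for SMILE's sampling algorithms.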

Network.updateBeliefs throws an exception with error code -42 if the temporary data structures required to complete the inference take too much memory. In such a case, or if the inference takes too long, consider taking advantage of SMILE's relevance reasoning layer. Relevance reasoning runs as a preprocessing step, which can reduce the complexity of the later stages of the inference algorithms. Relevance reasoning takes the target node set into account; therefore, to reduce the workload, you should reduce the number of nodes set as targets if possible. Note that when no nodes have been explicitly marked as targets, all nodes are targets by default. If your network has 1,000 nodes and you only need the probabilities of 20 of them, by all means call Network.setTarget on those 20.
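In the Python wrapper, restricting inference to a handful of targets might be sketched as follows (the model file and node identifiers are hypothetical):

```python
import pysmile  # SMILE's Python wrapper; requires a valid license key

net = pysmile.Network()
net.read_file("large_model.xdsl")   # hypothetical large model

# Only these posteriors are needed; relevance reasoning can then
# avoid work for the remaining, untargeted nodes.
for node_id in ["Outcome1", "Outcome2", "Outcome3"]:
    net.set_target(node_id, True)

net.update_beliefs()
print(net.get_node_value("Outcome1"))
```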

If changing the model to use Noisy-MAX nodes is possible, it is definitely worth trying. Inference can be performed very efficiently on networks with Noisy-MAX nodes when Noisy-MAX decomposition is enabled. To enable it, call Network.setNoisyDecompEnabled. When enabled, the Noisy-MAX decomposition runs in the relevance layer and reduces the complexity of the subsequent phases of the inference algorithm. To further control the decomposition, you can call Network.setNoisyDecompLimit, which controls the maximum number of parents in the temporary structures managed by SMILE during inference.
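A short Python sketch of enabling the decomposition before inference (the model file name and the limit value chosen here are hypothetical):

```python
import pysmile  # SMILE's Python wrapper; requires a valid license key

net = pysmile.Network()
net.read_file("noisymax_model.xdsl")  # hypothetical model with Noisy-MAX nodes

net.set_noisy_decomp_enabled(True)    # decompose Noisy-MAX in the relevance layer
net.set_noisy_decomp_limit(4)         # cap parents in temporary structures
net.update_beliefs()
```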