Expectation-Maximization EM learning

The engine.
bernt
Posts: 5
Joined: Wed Sep 18, 2013 10:27 am

Expectation-Maximization EM learning

Post by bernt »

Dear list,

I am using the parameter learning options in GeNIe to generate the CPTs from observed data.
It works fine, but I am not sure I fully understand how GeNIe does this.
When I read about the EM algorithm, it says that it finds, in a sense, the posterior values of the parameters of the statistical model.
But what model (distributions) is assumed in GeNIe? Or, I guess, in SMILE?
When learning is done, I get a log value. Does that indicate how well the model performs?

I am new to the EM algorithm and just starting to pick up some of these more advanced statistical methods, so I appreciate any help.

Thank you
./Bernt
Martijn
Posts: 76
Joined: Sun May 29, 2011 12:23 am

Re: Expectation-Maximization EM learning

Post by Martijn »

Hi Bernt,

For Bayesian networks we assume multinomial distributions.
Yes, the log value is the log-likelihood of the data, i.e., how well the model fits the data.

As long as your data does not contain any missing values, the EM algorithm reduces to plain Maximum Likelihood Estimation.
If there is missing data, it becomes an iterative procedure: the current parameters are used to estimate the necessary expected sufficient statistics, and then MLE is performed again.
This repeats until convergence.
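To make the iteration concrete, here is a minimal, self-contained sketch of EM for a toy two-node binary network A -> B, where some values of A are missing. This is only an illustration of the E-step (distribute each missing value over its posterior under the current parameters) and M-step (MLE from the expected counts), not GeNIe's or SMILE's actual implementation; the dataset and all names are made up.

```python
import math

# Toy dataset for a two-node network A -> B (both binary).
# None marks a missing value of A; B is always observed.
data = [(1, 1), (1, 1), (0, 0), (None, 1), (None, 0), (1, 0), (0, 0), (None, 1)]

def em(data, iters=50, tol=1e-9):
    p_a = 0.5                  # initial guess for P(A=1)
    p_b_given_a = [0.5, 0.5]   # initial guess for P(B=1 | A=a), indexed by a
    prev_ll = -math.inf
    for _ in range(iters):
        # E-step: accumulate expected counts, distributing each missing A
        # over its posterior P(A | B) under the current parameters.
        n_a1 = 0.0             # expected count of A=1
        n_a = [0.0, 0.0]       # expected count of each value of A
        n_b1_a = [0.0, 0.0]    # expected count of B=1 for each value of A
        ll = 0.0               # log-likelihood of the data (the "log value")
        for a, b in data:
            if a is None:
                # Joint weights P(A=aa, B=b) for aa in {0, 1}.
                w = [(p_a if aa == 1 else 1 - p_a) *
                     (p_b_given_a[aa] if b == 1 else 1 - p_b_given_a[aa])
                     for aa in (0, 1)]
                z = w[0] + w[1]          # evidence P(B=b)
                ll += math.log(z)
                for aa in (0, 1):
                    r = w[aa] / z        # posterior responsibility P(A=aa | B=b)
                    n_a[aa] += r
                    n_a1 += r if aa == 1 else 0.0
                    n_b1_a[aa] += r if b == 1 else 0.0
            else:
                p = (p_a if a == 1 else 1 - p_a) * \
                    (p_b_given_a[a] if b == 1 else 1 - p_b_given_a[a])
                ll += math.log(p)
                n_a[a] += 1.0
                n_a1 += a
                n_b1_a[a] += b
        # M-step: plain MLE from the expected counts.
        p_a = n_a1 / len(data)
        p_b_given_a = [n_b1_a[aa] / n_a[aa] for aa in (0, 1)]
        if ll - prev_ll < tol:  # converged: log-likelihood stopped improving
            break
        prev_ll = ll
    return p_a, p_b_given_a, ll
```

Note that with no missing values the E-step loop degenerates into ordinary counting, so a single M-step gives the MLE directly, which is the point made above. The returned `ll` is the log-likelihood, and EM guarantees it never decreases between iterations.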

Hope this helps.

Best,

Martijn