Search found 179 matches

by mark
Thu Apr 25, 2019 2:12 pm
Forum: SMILE
Topic: EM algorithm and smoothing
Replies: 9
Views: 8647

Re: EM algorithm and smoothing

So if ESS=10 and the prior distribution is uniform (i.e., 0.5-0.5), then 5-5 is used as the pseudo-counts for smoothing (they are combined with whatever counts the data gives). If ESS=10 and the prior distribution is 0.9-0.1, then 9-1 is used (more weight is given to the first parameter). Et...
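
A minimal plain-Python sketch of the arithmetic described above; this is an illustration of the pseudo-count idea, not SMILE code, and the data counts are made up:

[code]
# Toy illustration of ESS-based smoothing: the prior is turned into
# pseudo-counts of total size ESS and added to the counts observed in
# the data before normalizing.
def smoothed_cpt_column(data_counts, prior, ess):
    pseudo = [ess * p for p in prior]                   # ESS=10, uniform -> 5-5
    total = [d + s for d, s in zip(data_counts, pseudo)]
    z = sum(total)
    return [t / z for t in total]

# With plenty of data the pseudo-counts wash away...
print(smoothed_cpt_column([90, 10], prior=[0.5, 0.5], ess=10))  # ~[0.86, 0.14]
# ...but where the data says nothing, the prior survives intact.
print(smoothed_cpt_column([0, 0], prior=[0.9, 0.1], ess=10))    # [0.9, 0.1]
[/code]
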
by mark
Wed Apr 24, 2019 8:44 pm
Forum: SMILE
Topic: EM algorithm and smoothing
Replies: 9
Views: 8647

Re: EM algorithm and smoothing

What you say makes sense. If a given combination of parent variables never appears, then obviously the distribution in the child node cannot be learned, because that configuration simply never occurs. That is also exactly when smoothing will make a big difference, as it does not get washed away by the data. However, o...
by mark
Wed Apr 24, 2019 2:27 pm
Forum: SMILE
Topic: EM algorithm and smoothing
Replies: 9
Views: 8647

Re: EM algorithm and smoothing

You can just use a uniform distribution (which should be the default). I think that works well for smoothing.
by mark
Tue Apr 16, 2019 10:45 pm
Forum: SMILE
Topic: EM algorithm and smoothing
Replies: 9
Views: 8647

Re: EM algorithm and smoothing

Hi MartinA, you should be able to achieve this by setting the equivalent sample size and choosing a prior distribution (I believe uniform is the default).
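
A minimal PySMILE sketch of that setup. The file names are hypothetical and the option-setter names are assumptions modeled on the SMILE documentation, so verify them against the reference for your wrapper:

[code]
# Sketch: configure EM with an equivalent sample size before learning.
# File names are hypothetical; treat the exact method names as
# assumptions and check them against the PySMILE reference.
import pysmile           # a valid pysmile license import is also required
import pysmile.learning

net = pysmile.Network()
net.read_file("model.xdsl")          # hypothetical network file

ds = pysmile.learning.DataSet()
ds.read_file("records.txt")          # hypothetical data file
matching = ds.match_network(net)

em = pysmile.learning.EM()
em.set_eq_sample_size(10)            # weight given to the prior parameters
em.set_uniformize_parameters(True)   # start from a uniform prior
em.learn(ds, net, matching)
[/code]
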
by mark
Thu Jan 12, 2017 4:07 pm
Forum: SMILE
Topic: Setting Beta(Dirichlet) prior when using jsmile EM
Replies: 4
Views: 5195

Re: Setting Beta(Dirichlet) prior when using jsmile EM

Re your first point: I implied a uniform distribution (since that's the default in SMILE), but that's indeed not true in general. Re your second point: when ESS is set to 200 and the prior mean is 0.75, then alpha=200*0.75 and beta=200*0.25. Re your third point: if I remember correctly, that is not...
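
The arithmetic from the second point, as a small plain-Python example (the success/failure counts are made up to show how the posterior mean moves):

[code]
# Beta pseudo-counts implied by an equivalent sample size and a prior mean.
ess, prior_mean = 200, 0.75
alpha = ess * prior_mean           # 150.0
beta = ess * (1.0 - prior_mean)    # 50.0

# After observing h successes and t failures, the posterior mean is a
# count-weighted blend of prior and data.
h, t = 30, 70
posterior_mean = (alpha + h) / (alpha + beta + h + t)
print(alpha, beta, posterior_mean)  # 150.0 50.0 0.6
[/code]
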
by mark
Sun Jan 08, 2017 1:45 am
Forum: SMILE
Topic: Data input for HMM(DBN) with different lengths across samples
Replies: 7
Views: 7006

Re: Data input for HMM(DBN) with different lengths across samples

Based on your description, I might know what the issue is. In EM in SMILE, time series are shortened to length k if slices k+1 and beyond contain no evidence. The reason for this is that a time series could go on forever without any evidence and this (or any arbitrary cut-off point) should not aff...
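
A toy plain-Python illustration of that trimming rule (not SMILE code); trailing slices with no evidence are dropped:

[code]
# A series is cut after the last slice that carries any evidence.
def trim_series(series, missing=None):
    """Drop trailing slices in which every variable is unobserved."""
    last = -1
    for t, slice_obs in enumerate(series):
        if any(v is not missing for v in slice_obs.values()):
            last = t
    return series[:last + 1]

# The last two slices carry no evidence, so the series is shortened to length 3.
series = [{"X": "a"}, {"X": None}, {"X": "b"}, {"X": None}, {"X": None}]
print(trim_series(series))  # [{'X': 'a'}, {'X': None}, {'X': 'b'}]
[/code]
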
by mark
Wed Jan 04, 2017 10:57 am
Forum: SMILE
Topic: Learned HMM parameters(by EM) different from BNT toolbox
Replies: 3
Views: 5352

Re: Learned HMM parameters(by EM) different from BNT toolbox

To be honest, I am not sure whether I have a good answer. All of the things you list (and more, e.g., bugs) could have contributed to the differences. It's hard to judge from such a high-level experiment what the issue is. To debug this, I would start with a simple, known network (i.e., generate dat...
by mark
Mon Oct 24, 2016 10:45 pm
Forum: SMILE
Topic: Output posterior probabilities for latent variables (and missing values) after EM converges on training set
Replies: 2
Views: 5967

Re: Output posterior probabilities for latent variables (and missing values) after EM converges on training set

Can't you just load a sequence of observations, perform inference (update beliefs), and read out the posterior distributions from the latent variables? Then you clear sequence 1, load sequence 2, and repeat the same procedure. Unless I'm missing something, I don't see why this wouldn't be possible.
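
A minimal PySMILE sketch of that loop; the network file, node ids, and outcome ids are hypothetical, and the calls follow the standard PySMILE inference API (verify against the reference):

[code]
# Sketch: per sequence, set evidence, update beliefs, read the latent
# posterior, clear, and repeat. All node and file names are hypothetical.
import pysmile  # a valid pysmile license import is also required

net = pysmile.Network()
net.read_file("model.xdsl")               # hypothetical trained network

sequences = [
    [("Obs1", "High"), ("Obs2", "Low")],  # sequence 1: (node id, outcome id)
    [("Obs1", "Low"), ("Obs2", "Low")],   # sequence 2
]

for seq in sequences:
    for node_id, outcome_id in seq:
        net.set_evidence(node_id, outcome_id)
    net.update_beliefs()
    print(net.get_node_value("Latent"))   # posterior over the latent states
    net.clear_all_evidence()
[/code]
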
by mark
Fri Feb 27, 2015 3:42 pm
Forum: GeNIe
Topic: score for comparison of BNs learned with different algorithms
Replies: 6
Views: 5667

Re: score for comparison of BNs learned with different algorithms

You could use the Bayesian score to compare two networks that were output by different algorithms. However, the comparison is then really only based on the Bayesian score. One, perhaps better, alternative would be to run cross-validation.
by mark
Tue Feb 24, 2015 7:32 pm
Forum: GeNIe
Topic: score for comparison of BNs learned with different algorithms
Replies: 6
Views: 5667

Re: score for comparison of BNs learned with different algorithms

It's the overall log likelihood score of the final iteration, i.e., how well the final set of parameters fits all the data. (Note that in EM the log likelihood score increases or remains the same with every iteration.) But be careful in comparing different models using the log likelihood because addin...
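
A plain-Python toy (not GeNIe output) of the caveat in the last sentence: adding parameters can only raise the maximized log likelihood, even when the extra structure is spurious:

[code]
import math

def loglik(data, p):
    # Log likelihood of binary observations under success probability p.
    return sum(math.log(p if x else 1.0 - p) for x in data)

data   = [1, 1, 0, 1, 0, 1, 1, 0]  # observations of a binary child
parent = [0, 0, 0, 0, 1, 1, 1, 1]  # an irrelevant binary parent

# Model A: a single parameter, fitted by maximum likelihood.
ll_a = loglik(data, sum(data) / len(data))

# Model B: one parameter per parent state (twice the parameters).
ll_b = 0.0
for s in (0, 1):
    sub = [x for x, pa in zip(data, parent) if pa == s]
    ll_b += loglik(sub, sum(sub) / len(sub))

print(ll_a, ll_b)  # ll_b >= ll_a, despite the parent being irrelevant
[/code]
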
by mark
Tue Oct 07, 2014 5:21 pm
Forum: SMILE
Topic: In EM learning, how to keep the right relationships
Replies: 2
Views: 4436

Re: In EM learning, how to keep the right relationships

After running EM you can swap the b0 and b1 state names.
by mark
Sat May 12, 2012 3:16 am
Forum: SMILE
Topic: Problem with learning DBN parameters with jSmile
Replies: 3
Views: 5343

Re: Problem with learning DBN parameters with jSmile

It works in GeNIe and it should also work in SMILE and the wrappers. Are you using the latest version of SMILE? A while ago I fixed an issue related to the error message you are getting, so maybe you are still using an older version.
by mark
Thu Feb 02, 2012 9:45 pm
Forum: GeNIe
Topic: Probability of 0.5
Replies: 5
Views: 5197

Re: Probability of 0.5

Distributions are only fixed when you fix them yourself, which I suspect is not the case. Having complete data is not a sufficient condition to get rid of 0.5 probabilities. What you need is that, for each family (a child plus its parents), all parent configurations occur in the data. Especially when a ...
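
A plain-Python toy (not GeNIe/SMILE code) that checks the condition above by listing the parent configurations of a family that never occur in the data; those are the CPT columns EM leaves at the uniform 0.5-0.5:

[code]
from itertools import product

# Records over a family: binary parents A, B and child C.
records = [
    {"A": 0, "B": 0, "C": 1},
    {"A": 0, "B": 1, "C": 0},
    {"A": 1, "B": 0, "C": 1},
]
parents = ("A", "B")

seen = {tuple(r[p] for p in parents) for r in records}
missing = [cfg for cfg in product((0, 1), repeat=len(parents))
           if cfg not in seen]
print(missing)  # [(1, 1)]: this column of C's CPT is never learned
[/code]
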
by mark
Thu Jan 26, 2012 9:14 am
Forum: GeNIe
Topic: Probability of 0.5
Replies: 5
Views: 5197

Re: Probability of 0.5

The probability could be exactly 0.5 in the data, or the distribution could be fixed.

Have you tried randomizing the parameters? That could break the 0.5 probability.