Search found 179 matches
- Sat Jan 21, 2012 3:42 am
- Forum: GeNIe
- Topic: Probability of 0.5
- Replies: 5
- Views: 5215
Re: Probability of 0.5
There could be several reasons, but a lack of data is the most likely one.
- Thu Dec 01, 2011 5:45 am
- Forum: SMILE
- Topic: the problem of jSMILE parameter learning
- Replies: 5
- Views: 5721
Re: the problem of jSMILE parameter learning
You can check the GeNIe documentation (http://genie.sis.pitt.edu/wiki/Main_Page) but I'm afraid it's lacking a bit at the moment.
- Wed Nov 30, 2011 3:51 pm
- Forum: SMILE
- Topic: the problem of jSMILE parameter learning
- Replies: 5
- Views: 5721
Re: the problem of jSMILE parameter learning
Do you have any experience with Dynamic Bayesian Networks? It sounds to me like they do exactly what you want.
- Wed Nov 30, 2011 2:50 am
- Forum: SMILE
- Topic: the problem of jSMILE parameter learning
- Replies: 5
- Views: 5721
Re: the problem of jSMILE parameter learning
I don't quite understand what you're trying to do, but have you thought about using a Dynamic Bayesian Network to handle time?
- Tue Nov 29, 2011 5:01 am
- Forum: GeNIe
- Topic: Confidence in the Learn parameters with EM dialog
- Replies: 1
- Views: 2947
Re: Confidence in the Learn parameters with EM dialog
You are right and we will make the change. Thanks for the feedback.
- Fri Aug 05, 2011 8:50 pm
- Forum: GeNIe
- Topic: A "BIG" problem...= =
- Replies: 5
- Views: 6557
Re: A "BIG" problem...= =
You asked whether the [Run EM Algorithm] step still calculates the probabilities of the nodes when there are no missing values in the data file. Yes: EM estimates the parameters of all the conditional distributions (CPTs) in the network regardless of whether any values are missing.
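To see why, note that with complete data the E-step is trivial and EM reduces to maximum-likelihood frequency counting. A minimal plain-Python sketch (not the SMILE API), using a hypothetical two-node network A → B:

```python
from collections import Counter

def learn_cpt(records, parent, child, child_states):
    """Estimate P(child | parent) from complete data by relative
    frequency -- exactly what EM converges to when nothing is missing."""
    joint = Counter((r[parent], r[child]) for r in records)
    parent_counts = Counter(r[parent] for r in records)
    cpt = {}
    for (a, b), n in joint.items():
        cpt.setdefault(a, {s: 0.0 for s in child_states})
        cpt[a][b] = n / parent_counts[a]
    return cpt

data = [{"A": "t", "B": "t"}, {"A": "t", "B": "t"},
        {"A": "t", "B": "f"}, {"A": "f", "B": "f"}]
cpt = learn_cpt(data, "A", "B", ["t", "f"])
print(cpt["t"]["t"])  # 2/3: B=t was seen in 2 of the 3 records with A=t
```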
- Fri Jul 15, 2011 6:56 pm
- Forum: GeNIe
- Topic: Convergence criteria for EM algorithm?
- Replies: 1
- Views: 3631
Re: Convergence criteria for EM algorithm?
1. It looks at the relative improvement of the log likelihood: when it falls below a certain threshold, learning stops. This criterion cannot be changed at the moment.
2. It is the data log likelihood of the learned parameters.
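A sketch of that stopping rule in plain Python. The tolerance value below is purely illustrative; GeNIe's actual (unchangeable) threshold is not documented here:

```python
import math

def em_converged(loglik_prev, loglik_new, tol=1e-4):
    """Stop EM when the relative improvement of the data log
    likelihood drops below `tol` (illustrative value, not GeNIe's)."""
    if loglik_prev == -math.inf:   # first iteration: always continue
        return False
    rel = abs(loglik_new - loglik_prev) / abs(loglik_prev)
    return rel < tol

print(em_converged(-1000.0, -999.99))  # True: tiny relative gain, stop
print(em_converged(-1000.0, -900.0))   # False: large gain, keep iterating
```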
- Sat Jul 02, 2011 2:38 am
- Forum: GeNIe
- Topic: What is the learning rate that is used by default in GeNIe.
- Replies: 6
- Views: 5352
Re: What is the learning rate that is used by default in GeNIe
GeNIe uses the basic EM, no special alterations.
- Fri Jul 01, 2011 8:02 pm
- Forum: GeNIe
- Topic: What is the learning rate that is used by default in GeNIe.
- Replies: 6
- Views: 5352
Re: What is the learning rate that is used by default in GeNIe
I haven't studied the paper in depth, but it seems to me this is some sort of modified EM algorithm, which would explain the differences. The 'plain' EM algorithm has no learning rate parameter.
- Fri Jul 01, 2011 7:22 pm
- Forum: GeNIe
- Topic: What is the learning rate that is used by default in GeNIe.
- Replies: 6
- Views: 5352
Re: What is the learning rate that is used by default in GeNIe
What do you mean by learning rate in the context of EM? Also, the initial parameters influence the final parameter values as EM may get stuck in a local maximum (at least in the case of incomplete data).
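A common workaround for those local maxima (a generic technique, not a GeNIe feature) is to run EM several times from random initial parameters and keep the run with the best data log likelihood:

```python
import random

def em_with_restarts(run_em, n_restarts=10, seed=0):
    """Run EM from several random starting points and keep the best.
    `run_em` is a stand-in for your actual EM routine: it takes an
    initial parameter guess and returns (params, data_log_likelihood)."""
    rng = random.Random(seed)
    best_params, best_ll = None, float("-inf")
    for _ in range(n_restarts):
        init = rng.random()        # random initial parameters in [0, 1)
        params, ll = run_em(init)
        if ll > best_ll:
            best_params, best_ll = params, ll
    return best_params, best_ll

# Toy stand-in: a likelihood surface peaked at 0.7, where each "EM run"
# simply returns its starting point.
params, ll = em_with_restarts(lambda p: (p, -(p - 0.7) ** 2), n_restarts=50)
```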
- Sun May 29, 2011 12:21 am
- Forum: SMILE
- Topic: Impossible outcomes, and how to deal with them
- Replies: 4
- Views: 4041
Re: Impossible outcomes, and how to deal with them
It is possible for EM to converge to zero probabilities if the prior counts are set to zero. This is controlled by the confidence parameter when you invoke EM. Did you set it to a number larger than 0?
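For illustration, here is how a confidence-style prior keeps zero counts from turning into hard zero probabilities. This is a generic Dirichlet-smoothing sketch in plain Python, not SMILE's implementation, and spreading the confidence mass uniformly over the outcomes is an assumption of the sketch:

```python
def smooth_cpt_column(counts, confidence):
    """One CPT column from observed counts plus `confidence` pseudo-counts
    spread uniformly over the outcomes. With confidence > 0, no estimated
    probability can be exactly zero."""
    k = len(counts)
    total = sum(counts) + confidence
    return [(c + confidence / k) / total for c in counts]

print(smooth_cpt_column([10, 0], confidence=0))  # [1.0, 0.0]: the zero sticks
print(smooth_cpt_column([10, 0], confidence=1))  # second entry now ~0.045 > 0
```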
- Wed May 25, 2011 5:05 am
- Forum: SMILE
- Topic: Impossible outcomes, and how to deal with them
- Replies: 4
- Views: 4041
Re: Impossible outcomes, and how to deal with them
Can you tell me how you invoked EM?
- Fri Apr 29, 2011 11:41 am
- Forum: SMILE
- Topic: Greedy Thick Thinning
- Replies: 1
- Views: 2356
Re: Greedy Thick Thinning
Please refer to Heckerman's "A Tutorial on Learning With Bayesian Networks" for an explanation of GreedyThickThinning.
- Tue Apr 19, 2011 8:03 am
- Forum: GeNIe
- Topic: Learning DBN parameters (transition probabilities) in GeNIe
- Replies: 20
- Views: 15603
Re: Learning DBN parameters (transition probabilities) in GeNIe
It looks OK, although many entries in the CPTs do not seem to be updated, due to a lack of data. There is a big difference between learning with and without unrolling: if you unroll, the CPTs at each time step are learned separately. However, these CPTs are usually assumed to be identical and then...
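That shared-CPT assumption amounts to parameter tying: (state at t, state at t+1) pairs from every time step are pooled into a single transition CPT instead of one CPT per slice. A hypothetical plain-Python sketch, not the SMILE/GeNIe API:

```python
from collections import Counter

def learn_tied_transition(sequences, states):
    """Pool transition counts across all time steps and sequences, so
    every slice shares one transition CPT (parameter tying), as when
    learning on the original rather than the unrolled network."""
    trans = Counter()
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):   # consecutive (x_t, x_{t+1}) pairs
            trans[(a, b)] += 1
    totals = Counter()
    for (a, _), n in trans.items():
        totals[a] += n
    return {a: {b: trans[(a, b)] / totals[a] for b in states}
            for a in states if totals[a] > 0}

cpt = learn_tied_transition([["s0", "s0", "s1"], ["s0", "s1", "s1"]],
                            ["s0", "s1"])
print(cpt["s0"]["s1"])  # 2/3: two of the three transitions out of s0 go to s1
```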
- Mon Apr 18, 2011 5:35 pm
- Forum: GeNIe
- Topic: Learning DBN parameters (transition probabilities) in GeNIe
- Replies: 20
- Views: 15603
Re: Learning DBN parameters (transition probabilities) in GeNIe
The right way is to perform learning on the original network and never on the unrolled network. This way your predictions should be much more accurate. Can you give this a try?