K2 ordering
How does the K2 structure learning algorithm choose its ordering in GeNIe?
mark wrote: We don't have the K2 structure learning algorithm in GeNIe (or SMILE). We do have the K2 prior over parameters, which is used in Greedy Thick Thinning and Naive Bayes, but in neither case is an ordering required.

So it is not related to Cooper's K2 algorithm? Please see here: http://www.cs.ubc.ca/~murphyk/Software/ ... ge.html#K2
This implementation needs an a priori ordering.
mark wrote: They are two different things. (Although it may be that Cooper also used the K2 prior in his K2 algorithm and that's how it got its name. Not sure, though.)
Here's Cooper's algorithm (http://citeseerx.ist.psu.edu/viewdoc/su ... .1.88.4436):
The g(i, \pi_i) function is, I think, the marginal likelihood for the i-th node and its set of parents.
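For concreteness, here is a minimal sketch of that score, the Cooper & Herskovits g(i, \pi_i), computed in log space with lgamma so the factorials don't overflow. The list-of-dicts data layout and the function name log_g are my own choices for illustration, not anything from GeNIe or SMILE.

[code]
# Sketch of the Cooper & Herskovits K2 score g(i, pi_i), in log space.
from math import lgamma
from collections import defaultdict
from itertools import product

def log_g(data, child, parents, states):
    """log g(i, pi_i): log marginal likelihood of `child` given `parents`.

    data    -- list of dicts mapping variable name -> observed state
    child   -- name of node X_i
    parents -- list of parent names (may be empty)
    states  -- dict mapping variable name -> list of its possible states
    """
    r_i = len(states[child])                        # r_i: number of states of X_i
    counts = defaultdict(lambda: defaultdict(int))  # N_ijk counts
    for row in data:
        j = tuple(row[p] for p in parents)          # parent configuration j
        counts[j][row[child]] += 1

    log_score = 0.0
    # product() over zero iterables yields one empty tuple, so an empty
    # parent set is handled as a single "configuration" automatically.
    for j in product(*(states[p] for p in parents)):
        n_ij = sum(counts[j].values())
        # log[(r_i - 1)! / (N_ij + r_i - 1)!]
        log_score += lgamma(r_i) - lgamma(n_ij + r_i)
        # + sum_k log(N_ijk!)
        for k in states[child]:
            log_score += lgamma(counts[j][k] + 1)
    return log_score
[/code]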
Attachment: K2 algorithm cooper herskovits.PNG (pseudocode of the K2 algorithm from Cooper & Herskovits)
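A sketch of the K2 greedy search itself, reusing log_g from above, makes clear where the ordering comes in: each node may only pick parents from the variables that precede it in the given ordering. Again, this is illustrative code under my own assumptions (including the k2_search name and max_parents cap), not GeNIe's implementation, since GeNIe does not ship K2 search.

[code]
# Sketch of the K2 greedy parent search (Cooper & Herskovits).
def k2_search(data, ordering, states, max_parents=3):
    """Return {node: parent list} using the K2 greedy heuristic."""
    parents = {}
    for i, node in enumerate(ordering):
        chosen = []
        best = log_g(data, node, chosen, states)
        improved = True
        while improved and len(chosen) < max_parents:
            improved = False
            # Candidates are restricted to predecessors in the ordering.
            candidates = [v for v in ordering[:i] if v not in chosen]
            scored = [(log_g(data, node, chosen + [c], states), c)
                      for c in candidates]
            if scored:
                score, cand = max(scored)
                if score > best:           # greedily add the best parent,
                    best = score           # stop when no addition helps
                    chosen.append(cand)
                    improved = True
        parents[node] = chosen
    return parents
[/code]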
mark wrote: We don't have the K2 structure learning algorithm in GeNIe (or SMILE). We do have the K2 prior over parameters, which is used in Greedy Thick Thinning and Naive Bayes, but in neither case is an ordering required.

Can you please elaborate on the K2 prior? I can't find any alternative structure learning method that doesn't refer to Cooper's method, and his method requires an ordering...?
thx
I am writing a thesis and I used hill climbing with K2 from GeNIe for some of the work. I need to write a short note about the methods I used, so could you please tell me more about your K2 implementation in GeNIe?

mark wrote: Look at Heckerman's tutorial on learning with Bayesian networks for approaches that don't require an ordering over the variables.
thx
The GreedyThickThinning algorithm in GeNIe is based on Heckerman's tutorial. The tutorial also discusses the BDe prior, one of the prior options in GeNIe, the other being the K2 prior. The K2 prior is defined in the following way: the prior distribution of every parameter associated with a node and a combination of its parents is a Dirichlet(1,...,1) distribution. In LaTeX notation it would be something like P(\theta_{X_i \mid \mathrm{Pa}(X_i)}) = \mathrm{Dirichlet}(1, \dots, 1). The number of ones equals the number of states of the node. Let me know if you still have questions.
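To make the K2 prior concrete: under a Dirichlet(1,...,1) prior, the posterior mean estimate of each CPT entry is just add-one (Laplace) smoothing of the observed counts. A minimal sketch, reusing the illustrative data layout from the snippets above (the name cpt_posterior_mean is mine, not GeNIe's API):

[code]
# Parameter learning under the K2 (Dirichlet(1,...,1)) prior:
# posterior mean = Laplace (add-one) smoothing of the counts.
from collections import defaultdict
from itertools import product

def cpt_posterior_mean(data, child, parents, states):
    """P(X_i = k | Pa(X_i) = j) under a Dirichlet(1,...,1) (K2) prior."""
    r_i = len(states[child])
    counts = defaultdict(lambda: defaultdict(int))
    for row in data:
        j = tuple(row[p] for p in parents)
        counts[j][row[child]] += 1
    cpt = {}
    for j in product(*(states[p] for p in parents)):
        n_ij = sum(counts[j].values())
        # Posterior mean: (N_ijk + 1) / (N_ij + r_i).
        cpt[j] = {k: (counts[j][k] + 1) / (n_ij + r_i)
                  for k in states[child]}
    return cpt
[/code]

For example, a binary node with no parents observed true 3 times out of 4 gets (3+1)/(4+2) = 2/3 rather than the maximum likelihood estimate 3/4; the prior pulls estimates toward uniform, which also keeps unseen configurations away from zero probability.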
Many thanks, Mark!