I am wondering whether there is a score in GeNIe that we can use to compare the performance of BNs learned with different algorithms. As far as I know, "best score in iteration" can be used only to compare Bayesian Search runs over the same data set.
Additionally, after parameter learning is finished, the result appears in a box saying:
Parameter learning is finished Log(p)=-1361.837068
What does this Log(p) indicate, and what is the formula used for it? Can I get a reference? I guess it must be different from the "best score in iteration". Could you please explain what both mean?
Thank you in advance; I am looking forward to your reply.
score for comparison of BNs learned with different algorithms
Re: score for comparison of BNs learned with different algorithms
It's the overall log likelihood score of the final iteration, i.e., how well the final set of parameters fits all the data. (Note that in EM the log likelihood score increases or remains the same in every iteration.) But be careful when comparing different models using the log likelihood, because adding more arcs to a model (and, implicitly, more parameters) will never decrease the score.
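To make the score concrete, here is a minimal sketch of how a log likelihood of fully observed data is computed for a discrete Bayesian network. The two-node structure A → B, its parameter values, and the data records are all invented for illustration; this is not the formula GeNIe documents, only the standard factorized log likelihood.

```python
import math

# Hypothetical two-node network A -> B with hand-set parameters
# (structure and numbers are illustrative, not from the thread).
p_a = {0: 0.7, 1: 0.3}                      # P(A)
p_b_given_a = {0: {0: 0.9, 1: 0.1},         # P(B | A=0)
               1: {0: 0.4, 1: 0.6}}         # P(B | A=1)

# A small fully observed data set of (a, b) records.
data = [(0, 0), (0, 0), (1, 1), (0, 1), (1, 0)]

# Log likelihood = sum over records of log P(a) + log P(b | a),
# i.e. the joint probability factorized along the network structure.
log_p = sum(math.log(p_a[a]) + math.log(p_b_given_a[a][b])
            for a, b in data)
print(round(log_p, 4))
```

Because each record contributes a log of a probability (a non-positive number), the score grows more negative with more data, which is why values like -1361.84 are typical.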
Re: score for comparison of BNs learned with different algorithms
Hi Mark,
Thank you very much for the reply. So what do you suggest using to compare different networks that were learned from, or built from, the same data set? Thank you in advance.
Re: score for comparison of BNs learned with different algorithms
You could use the Bayesian score to compare two networks that were output by different algorithms. However, the comparison is then based only on the Bayesian score. One, perhaps better, alternative would be to run cross-validation.
Re: score for comparison of BNs learned with different algorithms
Thank you for your reply again. I was more wondering whether there is a feature of GeNIe for comparing different Bayesian networks. I already have a BN structure built from expert knowledge, and I want to compare it with the BNs learned from the same data set using GeNIe. To my knowledge, GeNIe does not provide the Bayesian score for learned networks. Am I right? Additionally, is there a feature of GeNIe to run cross-validation, or do we have to do that ourselves? Thank you very much for your help.
Re: score for comparison of BNs learned with different algorithms
Esma,
As Mark pointed out, comparing the scores of different networks is tricky, as the scores by themselves will never decrease when you increase the complexity of a network. Therefore, you can almost certainly say that complex models will have higher scores. In modeling we value simplicity, and in structure learning there are several ways of penalizing complexity.
The best way to compare the quality/accuracy of different models is to run cross-validation. This can be done in GeNIe through the Learning/Validate menu choice. Of course, you need a data set on which to test your models. Cross-validation involves only the model parameters, not the structure. You can evaluate your model on the data set directly (which is fair only if you have not used these data for learning), or choose k-fold cross-validation or its extreme variant, "leave-one-out". The results include a confusion matrix, ROC curves, and calibration curves. I hope this helps. Good to see that you are a GeNIe user!
Marek
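The validation loop described above can be sketched in a few lines. The data set, the fold count, and the stand-in model (a 1-nearest-neighbour rule instead of a Bayesian network, chosen only to keep the sketch short) are all hypothetical; the point is the k-fold split and the confusion matrix that GeNIe's Learning/Validate produces for a real network.

```python
from collections import Counter

# Tiny labeled data set of (feature, class) pairs; values are made up.
data = [(0.1, 'a'), (0.2, 'a'), (0.3, 'a'), (0.9, 'b'),
        (1.0, 'b'), (1.1, 'b'), (0.25, 'a'), (0.95, 'b')]

def one_nn(train, x):
    # Stand-in model: predict the class of the nearest training record.
    return min(train, key=lambda rec: abs(rec[0] - x))[1]

k = 4
confusion = Counter()  # (true class, predicted class) -> count
for fold in range(k):
    # Hold out every k-th record as the test fold; train on the rest.
    test = data[fold::k]
    train = [r for i, r in enumerate(data) if i % k != fold]
    for x, y in test:
        confusion[(y, one_nn(train, x))] += 1

print(dict(confusion))
```

Each record is used exactly once as a test case, so the confusion matrix entries sum to the size of the data set; setting k equal to the number of records gives the "leave-one-out" variant mentioned above.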
Re: score for comparison of BNs learned with different algorithms
Hi Marek,
Thank you very much, that really helped a lot. I am new to GeNIe, but I really enjoy it and its wide range of capabilities. I hope I get better at using it over time; until then, I hope I don't cause too much of a hassle with my questions. Thanks again!