Quiet failures in setting evidence / updating beliefs

jonnie
Posts: 41
Joined: Mon Feb 06, 2012 12:49 pm

Quiet failures in setting evidence / updating beliefs

Post by jonnie »

Hello,
After spending a while scratching my head over strange crashes, I found out that SMILE inference sometimes fails without notifying the user.

Here's what happened:
- I used the Lauritzen algorithm.
- Setting the evidence one item after another, I always check the return code of SetEvidence; afterwards I check whether the ErrorHandler contains anything, and I also re-check every node to make sure the network is really configured as it should be (see the sketch after this list).
- Everything went fine.
- I call UpdateBeliefs, and again no errors in the ErrorHandler and the return code says DSL_OKAY.
- For some of the target nodes, there is no valid value set. Value()->IsValueValid() returns 0.
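
For reference, the checking pattern looks roughly like this (a minimal sketch against the classic SMILE C++ API; the node id and outcome index are placeholders):

Code:
#include "smile.h"

// Minimal sketch of the per-evidence checks described above.
// Returns true only if the evidence was set and reads back correctly.
bool SetCheckedEvidence(DSL_network &net, const char *nodeId, int outcome)
{
    int handle = net.FindNode(nodeId);
    if (handle < 0)
        return false; // no such node

    if (net.GetNode(handle)->Value()->SetEvidence(outcome) != DSL_OKAY)
        return false; // SetEvidence reported an error

    // Re-check the node to make sure the network is really configured.
    return net.GetNode(handle)->Value()->GetEvidence() == outcome;
}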

Why is that so?
I suspect there is some conflict in the evidence provided; however, I can't confirm this manually (200 nodes, 30 pieces of evidence).
I don't know how SMILE checks evidence conflicts, e.g. whether there's an algorithm for it that only goes to a maximum 'depth' or something.
Is there any flag that I have to set to enable 'exhaustive' conflict checks? How can I know whether my evidence is really conflict-free?

What I came up with for now: after generating random evidence, I perform a belief update and check that all the target nodes' values are valid, just to make sure inference is possible.
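
In code, that check looks roughly like this (a sketch, same API assumptions as above):

Code:
// After setting random evidence, run inference and verify that every
// target node received a valid posterior.
bool AllTargetsValid(DSL_network &net)
{
    for (int h = net.GetFirstNode(); h >= 0; h = net.GetNextNode(h)) {
        if (net.IsTarget(h) && !net.GetNode(h)->Value()->IsValueValid())
            return false; // quiet failure: no valid value despite DSL_OKAY
    }
    return true;
}

bool EvidenceIsUsable(DSL_network &net)
{
    return net.UpdateBeliefs() == DSL_OKAY && AllTargetsValid(net);
}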
shooltz[BayesFusion]
Site Admin
Posts: 1417
Joined: Mon Nov 26, 2007 5:51 pm

Re: Quiet failures in setting evidence / updating beliefs

Post by shooltz[BayesFusion] »

- I call UpdateBeliefs, and again no errors in the ErrorHandler and the return code says DSL_OKAY.
- For some of the target nodes, there is no valid value set. Value()->IsValueValid() returns 0.
The behavior you've described would be a bug. Can you post your network so we can try to reproduce and fix it?

I don't know how SMILE checks evidence conflicts, e.g. whether there's an algorithm for it that only goes to a maximum 'depth' or something.
Is there any flag that I have to set to enable 'exhaustive' conflict checks?
By default, conflicts are checked at SetEvidence time. You can disable these checks with DSL_network::DeactivateRelevance.
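
For example (a sketch; ActivateRelevance is assumed here as the counterpart call that restores the default):

Code:
DSL_network net;
net.ReadFile("model.xdsl");

net.DeactivateRelevance(); // SetEvidence skips the conflict checks
// ... set evidence here without conflict detection ...
net.ActivateRelevance();   // assumed counterpart: restore default checks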
jonnie
Posts: 41
Joined: Mon Feb 06, 2012 12:49 pm

Re: Quiet failures in setting evidence / updating beliefs

Post by jonnie »

It happens regularly while generating random evidence in the network.
I can extract one of the situations as a "Case" and store it in a new .xdsl file; I open it in GeNIe, use the Case, and perform inference. Same result: some nodes don't get valid values and the little question mark at bottom-right remains.
I'll PM you about sending the network...
shooltz[BayesFusion]
Site Admin
Posts: 1417
Joined: Mon Nov 26, 2007 5:51 pm

Re: Quiet failures in setting evidence / updating beliefs

Post by shooltz[BayesFusion] »

I've got your model and was able to reproduce the issue. The problem arises in the final phase of the clustering algorithm, when a clique's potential is marginalized and normalized to obtain node posteriors. We check for an all-zero condition after marginalization and before normalization; if this condition is found, the node is not updated. This is exactly what's happening with your network.
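
In illustrative terms (a sketch, not the actual SMILE source), the step looks like this:

Code:
#include <numeric>
#include <vector>

// Illustrative sketch: turn a marginalized clique potential into a
// posterior, but detect the all-zero case before normalizing.
bool NormalizeMarginal(std::vector<double> &marginal)
{
    double sum = std::accumulate(marginal.begin(), marginal.end(), 0.0);
    if (sum == 0.0)
        return false; // all-zero: the node's value is left invalid
    for (double &p : marginal)
        p /= sum; // normalize to a posterior
    return true;
}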

Currently I'm trying to determine whether the problem is due to conflicts in the evidence that SMILE did not find, or to loss of numeric precision during clustering.
jonnie
Posts: 41
Joined: Mon Feb 06, 2012 12:49 pm

Re: Quiet failures in setting evidence / updating beliefs

Post by jonnie »

Update:
My evidence WAS conflicting. The conflict check at SetEvidence time is not exhaustive, and if you really need to verify beyond any doubt that a certain set of evidence is non-conflicting, you have to perform a full inference without targets and then check whether all values are valid.
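A sketch of that verification (same SMILE API assumptions as in my earlier posts):

Code:
// Verify the current evidence is conflict-free: run a full update with
// no targets and check that every node received a valid value.
bool EvidenceIsConflictFree(DSL_network &net)
{
    net.ClearAllTargets(); // no targets => posteriors for all nodes
    if (net.UpdateBeliefs() != DSL_OKAY)
        return false;
    for (int h = net.GetFirstNode(); h >= 0; h = net.GetNextNode(h)) {
        if (!net.GetNode(h)->Value()->IsValueValid())
            return false; // some posterior missing => conflict
    }
    return true;
}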
The newest GeNIe release detects invalid target nodes after inference and returns an error code: http://genie.sis.pitt.edu/download/genie2_binaries.zip (unzip over your GeNIe install).
Thanks to shooltz for the reply.
I'm glad it's not a numerical problem :)
jonnie
Posts: 41
Joined: Mon Feb 06, 2012 12:49 pm

Re: Quiet failures in setting evidence / updating beliefs

Post by jonnie »

Hello,
I stumbled upon another phenomenon similar to this: LBP fails quietly.
The network is not a polytree, so Pearl fails, of course. However, I expected LBP to perform fine, since it's not really a sampling algorithm; sampling algorithms struggle because my network contains lots of deterministic nodes and the chance nodes have states with very low probabilities. But I thought LBP should work.
So what happens is: UpdateBeliefs returns DSL_OKAY, and there is no error present in the ErrorHandler. However, not all target nodes have valid values after UpdateBeliefs. The evidence sets are not conflicting, since Lauritzen sets the targets properly for all of them.
Any hint on what this can be? Can there be several reasons? Is LBP buggy, or is the network as described not suitable for LBP? Maybe the messages didn't converge?
If the messages didn't converge, it would be nice to get an error in the ErrorHandler, with UpdateBeliefs not returning DSL_OKAY.
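In the meantime I work around it like this (a sketch; the algorithm constant names are my assumption of what the SMILE headers call them, and AllTargetsValid is the helper from my earlier post):

Code:
// Workaround: try LBP first; if any target comes back without a valid
// value, redo the update with Lauritzen.
bool UpdateWithFallback(DSL_network &net)
{
    net.SetDefaultBNAlgorithm(DSL_ALG_BN_LBP);
    if (net.UpdateBeliefs() == DSL_OKAY && AllTargetsValid(net))
        return true;
    net.SetDefaultBNAlgorithm(DSL_ALG_BN_LAURITZEN);
    return net.UpdateBeliefs() == DSL_OKAY && AllTargetsValid(net);
}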
Greetings, Jo
shooltz[BayesFusion]
Site Admin
Posts: 1417
Joined: Mon Nov 26, 2007 5:51 pm

Re: Quiet failures in setting evidence / updating beliefs

Post by shooltz[BayesFusion] »

Is LBP buggy
It very well may be. Send me the network if you want a detailed analysis of the issue you're experiencing.