shooltz wrote:
We have some noisyMax-specific code in the EM (parameter learning) and in the relevance layer, which can simplify the network before Bayesian inference is invoked. The relevance layer handles noisyMax nodes with evidence set in a special way; I'm not familiar with this part of the algorithm, but can ask for more detailed info if you need it.
What kind of noisyMax-specific code are you using for parameter learning? When I did some learning on a noisyMax network (in GeNIe) I got the impression that no special code was used. The reason is that after learning, when I saved the network, the noisyMax parameters in the file were replaced by a standard CPT (and a large one at that, due to the structure of my network). However, the xml tag in the file specifying the type of node did not change from noisyMax to standard chance, which made GeNIe unable to reopen the file until I manually edited the xml tags. So at the least, there is a bug here to be fixed.
Anyhow, one way to implement noisyMax-specific learning is to create and solve (in the least squares sense) the over-determined linear equation system that you get by taking the logarithm of the basic Noisy-OR equation for each specimen in the data. (Of course, ln(0) needs to be approximated with something sufficiently small, e.g. ln(0.01).) I did that in Matlab, and it seemed to work fine, though I guess that you need a pretty good (i.e. Matlab-like) engine for solving linear equation systems if you have a large model and a lot of data.
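To make the idea concrete, here is a minimal sketch of that log-linear approach in Python (pure standard library, so no Matlab-grade solver is needed for a toy case). It assumes the binary Noisy-OR form P(Y=0|x) = (1-leak) * prod_i (1-p_i)^x_i, so taking logs gives one linear equation per observed configuration in the unknowns ln(1-leak) and ln(1-p_i). The two-cause model, its parameter values, and the use of exact probabilities in place of empirical frequencies are all illustrative choices, not anything from GeNIe/SMILE itself:

```python
import math

# Hypothetical 2-cause Noisy-OR with leak; the parameters we try to recover.
# p_i = P(Y=1 | only cause i present), leak = P(Y=1 | no causes present).
TRUE = {"leak": 0.05, "p1": 0.8, "p2": 0.6}

def p_y0(x1, x2, leak, p1, p2):
    """P(Y=0 | x) under the Noisy-OR model."""
    return (1 - leak) * (1 - p1) ** x1 * (1 - p2) ** x2

# Build the over-determined system A a = b, where
# a = [ln(1-leak), ln(1-p1), ln(1-p2)] and each row comes from one
# observed configuration:  ln P(Y=0|x) = a0 + x1*a1 + x2*a2.
configs = [(0, 0), (1, 0), (0, 1), (1, 1)]
A = [[1.0, float(x1), float(x2)] for x1, x2 in configs]
b = [math.log(p_y0(x1, x2, **TRUE)) for x1, x2 in configs]
# With real data, b would hold logs of *empirical* frequencies (one row per
# specimen), and ln(0) would be clamped to something like ln(0.01), as above.

def lstsq(A, b):
    """Least squares via the normal equations (A^T A) a = A^T b, solved with
    Gaussian elimination; fine for a handful of unknowns."""
    n = len(A[0])
    m = len(A)
    # Augmented matrix [A^T A | A^T b].
    M = [[sum(A[k][i] * A[k][j] for k in range(m)) for j in range(n)]
         + [sum(A[k][i] * b[k] for k in range(m))] for i in range(n)]
    for col in range(n):                       # forward elimination with pivoting
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    a = [0.0] * n                              # back substitution
    for i in range(n - 1, -1, -1):
        a[i] = (M[i][n] - sum(M[i][j] * a[j] for j in range(i + 1, n))) / M[i][i]
    return a

a = lstsq(A, b)
leak, p1, p2 = (1 - math.exp(v) for v in a)
print(f"recovered: leak={leak:.3f} p1={p1:.3f} p2={p2:.3f}")
```

Since the rows here use exact model probabilities, the over-determined system is consistent and the least-squares solution recovers the true parameters; with noisy empirical frequencies you would get the best fit in the least-squares sense instead. For a large model with many specimens, this is exactly where a serious solver (QR/SVD rather than the normal equations) earns its keep.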
Regards
Ulrik