Learning Parameters Confidence level

Christian
Posts: 44
Joined: Wed Nov 28, 2007 12:32 pm

Learning Parameters Confidence level

Post by Christian »

Hello,

Using GeNIe, what is the interval for the confidence level?
Is it 0...10, 0...100, or ...?

Would it be better to start with randomized values and then train the network again and again on the same training data, but with a higher confidence level each time?

Which values do you use, in your experience?

Thank you,

Christian
mark
Posts: 179
Joined: Tue Nov 27, 2007 4:02 pm

Re: Learning Parameters Confidence level

Post by mark »

Christian wrote:Using GeNIe, what is the interval for the confidence level?
Is it 0...10, 0...100, or ...?

Would it be better to start with randomized values and then train the network again and again on the same training data, but with a higher confidence level each time?

Which values do you use, in your experience?
There is no theoretical upper limit for the confidence level. An infinite confidence would mean you know for certain that the parameters in the current network are the correct ones. GeNIe imposes some limit; I believe it is at least 10,000, but it may be higher.

I don't think learning with the same data, but different confidence levels, is going to give better results. Why do you think so? I think a useful approach would be to learn the network multiple times with different initial parameters, but using the same data, and then average the parameters in all the learned instances.
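The averaging step Mark suggests can be sketched in a few lines. This is not GeNIe/SMILE code; the "learned" CPTs below are hypothetical stand-ins for the conditional probability tables that separate EM runs (each started from different random parameters) would produce:

```python
import numpy as np

def average_cpts(cpt_runs):
    """Element-wise average of CPTs from several learning runs.

    cpt_runs: list of arrays, one per run, each with shape
    (parent_configurations, states); each row is a probability
    distribution. The mean of valid distributions is itself a
    valid distribution, so no renormalization is needed.
    """
    return np.stack(cpt_runs).mean(axis=0)

# Three hypothetical runs that converged to slightly different optima.
run_a = np.array([[0.70, 0.30], [0.20, 0.80]])
run_b = np.array([[0.74, 0.26], [0.16, 0.84]])
run_c = np.array([[0.66, 0.34], [0.24, 0.76]])

avg = average_cpts([run_a, run_b, run_c])
print(avg)  # each row still sums to 1
```

Averaging over runs smooths out the dependence on the random starting point, since EM only finds a local optimum for each initialization.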

The value of the confidence level is usually provided by the expert that constructed the original network. It should reflect the number of cases that the expert based the network on.
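One way to picture "confidence reflects the number of cases" is to treat the expert's parameters as if they were backed by that many observations. The sketch below is an illustration of this equivalent-sample-size idea, not GeNIe's actual learning code:

```python
import numpy as np

def update_distribution(expert_probs, data_counts, confidence):
    """Blend an expert prior with observed counts.

    expert_probs: the expert's distribution over the node's states.
    data_counts:  observed counts for each state in the training data.
    confidence:   equivalent sample size behind the expert's numbers.
    """
    expert_probs = np.asarray(expert_probs, dtype=float)
    data_counts = np.asarray(data_counts, dtype=float)
    n = data_counts.sum()
    # The expert contributes `confidence` virtual cases split according
    # to expert_probs; the data contributes its n observed cases.
    return (confidence * expert_probs + data_counts) / (confidence + n)

# Expert says 50/50; the data shows 90 of 100 cases in state 0.
print(update_distribution([0.5, 0.5], [90, 10], confidence=10))
# With low confidence, the data dominates.
print(update_distribution([0.5, 0.5], [90, 10], confidence=10000))
# With a huge confidence, the result stays close to the expert's 50/50.
```

This also shows why an infinite confidence means the current parameters are taken as certainly correct: as `confidence` grows, the data term becomes negligible.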
Christian
Posts: 44
Joined: Wed Nov 28, 2007 12:32 pm

Re: Learning Parameters Confidence level

Post by Christian »

mark wrote: I don't think learning with the same data, but different confidence levels, is going to give better results. Why do you think so?
I thought that, given that no expert data is available, we could use a very low confidence level first. After that run, the weights point in the right direction. Then we could use the same data with a higher confidence level (treating the ratios from the previous run as the "expert" parameters) to fine-tune the results.

I don't know if this would work. It was just a guess.

mark wrote: I think a useful approach would be to learn the network multiple times with different initial parameters, but using the same data, and then average the parameters in all the learned instances.
That is a very good approach. I will try that one. Thank you.

Christian