Howdy,
Suppose one had a fault that results in multiple observables:
> F -> (O1, O2)
We can use the diagnosis engine to calculate the information gain of O1 or O2 and rank them accordingly.
However, imagine that we also have the capability of running an automated test that observes O1 and O2 ...
Search found 5 matches
- Tue Dec 08, 2009 11:04 pm
- Forum: SMILE
- Topic: Diagnosis: Efficient Ranking of Joint Observations
- Replies: 1
- Views: 5574
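A minimal sketch of the computation the post above asks about, in plain Python rather than through SMILE: one binary fault F, observables O1 and O2 assumed conditionally independent given F, and every probability invented for illustration. It computes the expected information gain (expected reduction in the entropy of F) of observing O1 alone, O2 alone, and the joint automated test (O1, O2) by enumerating outcomes.

[code]
# Illustrative only -- a tiny hand-built fault model, not the SMILE API.
# One binary fault F and two observables O1, O2, assumed conditionally
# independent given F.  Compare the expected information gain of O1 alone,
# O2 alone, and the joint automated test (O1, O2).
from itertools import product
from math import log2

P_F = {True: 0.2, False: 0.8}                 # prior over the fault F (invented)
P_O1 = {True: {True: 0.9, False: 0.1},        # P(O1 | F), rows indexed by F
        False: {True: 0.2, False: 0.8}}
P_O2 = {True: {True: 0.7, False: 0.3},        # P(O2 | F)
        False: {True: 0.1, False: 0.9}}

def entropy(dist):
    return -sum(p * log2(p) for p in dist.values() if p > 0)

def expected_info_gain(cpts):
    """H(F) minus the expected posterior entropy after observing all CPTs jointly."""
    gain = entropy(P_F)
    for outcome in product([True, False], repeat=len(cpts)):
        joint = dict(P_F)                     # builds P(F, outcome) by enumeration
        for cpt, value in zip(cpts, outcome):
            for f in joint:
                joint[f] *= cpt[f][value]
        p_outcome = sum(joint.values())
        posterior = {f: joint[f] / p_outcome for f in joint}
        gain -= p_outcome * entropy(posterior)
    return gain

print("gain(O1)     =", round(expected_info_gain([P_O1]), 4))
print("gain(O2)     =", round(expected_info_gain([P_O2]), 4))
print("gain(O1, O2) =", round(expected_info_gain([P_O1, P_O2]), 4))
[/code]

The joint test is handled simply by treating (O1, O2) as one observation whose outcome space is the product of the individual outcome spaces.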
- Tue Dec 08, 2009 11:00 pm
- Forum: SMILE
- Topic: Diagnosis Documentation Correction
- Replies: 1
- Views: 5114
Diagnosis Documentation Correction
Howdy,
I am wondering if the documentation is correct for the Diagnosis page:
http://genie.sis.pitt.edu/wiki/Support_for_Diagnosis:_Diagnostic_window
The page claims:
> The basis for ranking will be calculated for the table according to the
> following equation:
>
> E(F1) = P(F1|T1) + alpha ...
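The quoted equation is cut off after the alpha term, so the sketch below does not try to reconstruct it; it only shows the one visible term, P(F1|T1), obtained by Bayes' rule from an assumed prior P(F1) and assumed likelihoods for the test T1 (all numbers invented).

[code]
# Only the first term of the quoted ranking, E(F1) = P(F1|T1) + alpha ...,
# is visible in the excerpt; the alpha-weighted remainder is truncated there
# and is not reconstructed here.  All numbers are purely illustrative.
prior_f1 = 0.1                 # assumed prior P(F1)
p_t1_given_f1 = 0.85           # assumed likelihood P(T1 | F1)
p_t1_given_not_f1 = 0.05       # assumed likelihood P(T1 | not F1)

p_t1 = p_t1_given_f1 * prior_f1 + p_t1_given_not_f1 * (1 - prior_f1)
posterior_f1 = p_t1_given_f1 * prior_f1 / p_t1    # Bayes' rule: P(F1 | T1)

print("P(F1 | T1) =", round(posterior_f1, 4))
[/code]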
- Sat Dec 05, 2009 3:23 am
- Forum: GeNIe
- Topic: Information gain calculation oddities
- Replies: 2
- Views: 5875
- Fri Dec 04, 2009 6:42 pm
- Forum: GeNIe
- Topic: Oddities in information gain calculation
- Replies: 1
- Views: 5132
Oddities in information gain calculation
Dear SMILE Team,
Here is another variation of a network that has odd information implications.
We have a link from C -> B, but this link is uninformative. The outcome of C does not affect the outcome of B, so observing B does not tell us about C.
However, observing B still tells us about A ...
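A small self-contained sketch of the situation described above, with binary nodes, A and C both parents of B, and a CPT for B that ignores C; all numbers are invented. Computing mutual information from the enumerated joint distribution confirms the point of the post: observing B carries information about A but none about C, even though the arc C -> B is present.

[code]
# Illustrative only: binary A and C are parents of B, but B's CPT ignores C,
# so the arc C -> B is present yet uninformative.  We check this by computing
# mutual information directly from the enumerated joint distribution.
from itertools import product
from math import log2

P_A = {True: 0.3, False: 0.7}
P_C = {True: 0.5, False: 0.5}
# P(B=True | A, C): the value depends on A only, which makes C -> B vacuous.
P_B_TRUE = {(True, True): 0.9, (True, False): 0.9,
            (False, True): 0.2, (False, False): 0.2}

joint = {}                                    # P(A, C, B) by enumeration
for a, c, b in product([True, False], repeat=3):
    p_b = P_B_TRUE[(a, c)] if b else 1 - P_B_TRUE[(a, c)]
    joint[(a, c, b)] = P_A[a] * P_C[c] * p_b

def mutual_information(i, j):
    """I(X_i; X_j), indices into the joint tuples (A, C, B)."""
    mi = 0.0
    for x, y in product([True, False], repeat=2):
        p_xy = sum(p for k, p in joint.items() if k[i] == x and k[j] == y)
        p_x = sum(p for k, p in joint.items() if k[i] == x)
        p_y = sum(p for k, p in joint.items() if k[j] == y)
        if p_xy > 0:
            mi += p_xy * log2(p_xy / (p_x * p_y))
    return mi

print("I(A; B) =", round(mutual_information(0, 2), 4))  # positive: B informs A
print("I(C; B) =", round(mutual_information(1, 2), 4))  # ~0 up to floating-point noise
[/code]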
- Fri Dec 04, 2009 3:20 am
- Forum: GeNIe
- Topic: Information gain calculation oddities
- Replies: 2
- Views: 5875
Information gain calculation oddities
Dear SMILE Team,
I have a network where 'A' and 'C' are targets and 'B' and 'D' are observations:
A -> B
C -> {D, B}
If I use the 'test view' from the diagnosis menu, I see that both 'D' and 'B' are informative.
If I break this into two networks, I should in theory still be able to compute ...
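A sketch of the structure from the post, A -> B and C -> {D, B}, with invented binary CPTs (nothing here comes from GeNIe itself). Mutual information with each target shows why the test view lists both B and D as informative: B depends on both targets, while D informs C only.

[code]
# Illustrative only: the structure from the post -- A -> B, C -> {D, B} --
# filled in with invented binary CPTs.  Mutual information with each target
# shows why both B and D appear as informative observations.
from itertools import product
from math import log2

P_A = {True: 0.4, False: 0.6}
P_C = {True: 0.3, False: 0.7}
P_B_TRUE = {(True, True): 0.95, (True, False): 0.7,   # P(B=True | A, C)
            (False, True): 0.4, (False, False): 0.1}
P_D_TRUE = {True: 0.8, False: 0.2}                    # P(D=True | C)

joint = {}                                            # P(A, C, B, D) by enumeration
for a, c, b, d in product([True, False], repeat=4):
    p_b = P_B_TRUE[(a, c)] if b else 1 - P_B_TRUE[(a, c)]
    p_d = P_D_TRUE[c] if d else 1 - P_D_TRUE[c]
    joint[(a, c, b, d)] = P_A[a] * P_C[c] * p_b * p_d

def mutual_information(i, j):
    """I(X_i; X_j), indices into the joint tuples (A, C, B, D)."""
    mi = 0.0
    for x, y in product([True, False], repeat=2):
        p_xy = sum(p for k, p in joint.items() if k[i] == x and k[j] == y)
        p_x = sum(p for k, p in joint.items() if k[i] == x)
        p_y = sum(p for k, p in joint.items() if k[j] == y)
        if p_xy > 0:
            mi += p_xy * log2(p_xy / (p_x * p_y))
    return mi

print("I(A; B) =", round(mutual_information(0, 2), 4))  # B informs target A
print("I(C; B) =", round(mutual_information(1, 2), 4))  # B informs target C as well
print("I(C; D) =", round(mutual_information(1, 3), 4))  # D informs target C
print("I(A; D) =", round(mutual_information(0, 3), 4))  # ~0: D says nothing about A
[/code]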