Degree of Influence and Sensitivity Analysis

amirsad
Posts: 3
Joined: Mon Apr 24, 2017 7:59 pm

Re: Degree of Influence and Sensitivity Analysis

Post by amirsad » Mon Apr 24, 2017 8:08 pm

Hi,

since sensitivity.h is included in the current distribution of SMILE, I was wondering whether it is now officially public and whether the interface can be used to compute the degree of influence. I could not find any documentation of sensitivity analysis in SMILE.
Thanks!

shooltz[BayesFusion]
Site Admin
Posts: 1247
Joined: Mon Nov 26, 2007 5:51 pm

Re: Degree of Influence and Sensitivity Analysis

Post by shooltz[BayesFusion] » Mon Apr 24, 2017 9:59 pm

Sensitivity in SMILE is officially public, but we don't have the documentation yet. I'm attaching a program which compares the closed-form derivative values produced by SMILE's sensitivity with derivative values calculated numerically.

The syntax for the program is:
derivative xdslfilename targetNode [targetNode...] [evidenceNode:outcome ...]

For example, assuming that you're using the hepar.xdsl from GeNIe's examples directory:
derivative hepar.xdsl THepatitis gallstones:present transfusion:absent

Note that this program doesn't print any error messages related to invalid input parameters; if you don't see any output, check the node identifiers.

Look for the use of the DSL_sensitivity class in the code; post questions here if you need further assistance.
Attachments
derivative.zip
source code
(3.4 KiB) Downloaded 135 times

amirsad
Posts: 3
Joined: Mon Apr 24, 2017 7:59 pm

Re: Degree of Influence and Sensitivity Analysis

Post by amirsad » Fri Apr 28, 2017 8:49 am

Thank you very much for the quick response and for sharing this program; it helps a lot!
I've got some questions:

- In the function DSL_sensitivity::Calculate(DSL_network &net, bool relevance), what does the relevance argument stand for?

- In GeNIe's sensitivity analysis you can set the percentage change of the parameters as the Parameter Spread. Is this also possible in SMILE?

- Using SMILE, I perform sensitivity analysis on a Bayesian network model learned from data using Clustering inference and EM. When I change the random seed of the EM learning engine, I get a completely different sensitivity analysis result each time, although my binary classification results are quite robust (in numbers: running with 3 different seeds I get an AUC of 0.89, 0.88 and 0.87, but the mean sensitivity of an example node is 0.28, 0.04 and 0.06). Does this really mean that the learned model relies on a completely different set of nodes each time, and nevertheless achieves the same good results?

shooltz[BayesFusion]
Site Admin
Posts: 1247
Joined: Mon Nov 26, 2007 5:51 pm

Re: Degree of Influence and Sensitivity Analysis

Post by shooltz[BayesFusion] » Fri Apr 28, 2017 12:03 pm

In the function DSL_sensitivity::Calculate(DSL_network &net, bool relevance), what is the relevance argument standing for?
The relevance parameter controls the network decomposition performed before sensitivity is calculated. With relevance=true (the default), each target is set separately; with relevance=false, the internal data structures are created for all targets at once. The value of the relevance parameter does not change the calculated sensitivity values; the parameter is there mostly for us to verify that we get correct results using both approaches.
In GeNIe's sensitivity analysis you can set the percentage change of the parameters as the Parameter Spread. Is this also possible in SMILE?
The spread is applied to the sensitivity results, so it's up to your program to do that. Sensitivity gives you the coefficients in the equation t = (a*p + b)/(c*p + d). If you need to know how much the target can change, you need to assume how much p can change, hence the spread in GeNIe. Note that t is monotonic in p on [0, 1].
When I change the random seed of the EM learning engine, I get every time a totally different result of sensitivity analysis
I think this is a possibility when you start EM with randomized parameters. You can try running the program I've attached on the learned networks and compare the sensitivity output with the approximations based on numerically calculated derivatives. If they agree to a good degree, you can be sure the effect you're observing is real: the sensitivity genuinely differs between EM outputs obtained with different seeds.
