Search found 22 matches

by Yan
Sat Nov 25, 2023 7:58 am
Forum: GeNIe
Topic: enter evidence, virtual evidence, and controlling values
Replies: 4
Views: 1291

Re: enter evidence, virtual evidence, and controlling values

Haha, great, thanks Marek!
by Yan
Fri Nov 24, 2023 5:41 am
Forum: GeNIe
Topic: enter evidence, virtual evidence, and controlling values
Replies: 4
Views: 1291

Re: enter evidence, virtual evidence, and controlling values

Thanks Marek. For question 3, I'm still a bit confused and would like to discuss it with you. For example, node A "sport" (low frequency, medium frequency, high frequency) directs to node B "drink water" (low frequency, medium frequency, high frequency), and both sport and drink water direc...
by Yan
Fri Nov 17, 2023 1:45 am
Forum: GeNIe
Topic: enter evidence, virtual evidence, and controlling values
Replies: 4
Views: 1291

enter evidence, virtual evidence, and controlling values

Dear staff, I'd like to ask some questions about entering evidence, virtual evidence, and controlling values: 1. My understanding is that entering evidence can only assign 100% to one state of node A, while virtual evidence can specify a probability distribution (e.g., 20%, 40%, 40%, assuming there ar...
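For readers looking for the programmatic counterpart of these two options, here is a minimal sketch of how the distinction typically looks in pysmile (SMILE's Python wrapper). The model file, node ids, and outcome ids are made up for illustration, and set_virtual_evidence is assumed to be exposed by your pysmile build: hard evidence pins one state to 100%, while virtual evidence passes a vector over all states.

```python
import pysmile
# import pysmile_license  # a valid SMILE license module is required

net = pysmile.Network()
net.read_file("model.xdsl")                     # hypothetical model file

# Hard evidence: one state of node A is observed with certainty (100%).
net.set_evidence("A", "low_frequency")
net.update_beliefs()

# Virtual evidence: uncertainty about the observation, expressed as a
# vector over all states of A (here 20% / 40% / 40%).
net.clear_evidence("A")
net.set_virtual_evidence("A", [0.2, 0.4, 0.4])  # assumed call; check your pysmile version
net.update_beliefs()

print(net.get_node_value("B"))                  # posterior of a child node B
```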
by Yan
Tue Oct 10, 2023 4:13 am
Forum: GeNIe
Topic: missing value
Replies: 4
Views: 1610

Re: missing value

And I tried the PC algorithm again; the output doesn't show outcomes/states like S_99 (missing data replaced with 99), but if I use other structure learning algorithms (e.g., Bayesian Search), the bar chart view shows S_99, which is a bit weird. Could you please check that with some random data? Thanks...
by Yan
Tue Oct 10, 2023 12:34 am
Forum: GeNIe
Topic: missing value
Replies: 4
Views: 1610

Re: missing value

The structure learning algorithms in SMILE/GeNIe currently require that a dataset has no missing values. From the POV of the learning algorithm the missing value replacement is no different from any state label. The PC algorithm should output the nodes with outcomes like S_99, just like the attache...
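As a side note, the replacement step discussed here can also be done outside GeNIe before loading the data. The snippet below is a minimal sketch using pandas, with made-up file names; it fills every missing cell with the sentinel 99 so that the learning algorithms see a complete data set, and GeNIe then treats 99 like any other state label (displayed as S_99 for numeric data).

```python
import pandas as pd

df = pd.read_csv("learning_data.csv")          # hypothetical input file

# Replace every missing cell with the sentinel 99; after this the data set
# is complete and the replacement behaves like an ordinary state label.
df = df.fillna(99)

df.to_csv("learning_data_complete.csv", index=False)
```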
by Yan
Sun Oct 08, 2023 12:35 pm
Forum: GeNIe
Topic: missing value
Replies: 4
Views: 1610

missing value

Dear staff, I have some questions about missing values; please see below: (1) If I open the data file and replace all missing values with a specified value, will these replaced values affect the structure/parameter learning? For example, node A has three values "L, M, H"; if I replace the missi...
by Yan
Tue Sep 26, 2023 11:57 pm
Forum: GeNIe
Topic: structure learning
Replies: 4
Views: 1583

Re: structure learning

Hi Yan, I have looked at your data and have managed to replicate the problem. Clearly, there is a bug in PC and, as you wrote, the algorithm ignores the max. adjacency size parameter. We will work on it and will fix it in the next release. When I face a problem like yours, I usually play with the a...
by Yan
Fri Sep 15, 2023 6:56 am
Forum: GeNIe
Topic: structure learning
Replies: 4
Views: 1583

Re: structure learning

Hi Yan, Let me try to answer your questions. (1) This is hard to say, as the two algorithms work on different principles. I would try both and look for commonalities in the output, especially since your data set is rather small. Please play with the significance level alpha in PC -- it has a bi...
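For anyone who wants to script the "try both algorithms" advice rather than click through GeNIe, a minimal pysmile sketch follows. File names are placeholders; Bayesian Search returns a learned network directly, and a PC run would be set up analogously through pysmile.learning.PC (its significance level and max adjacency size accessors depend on the pysmile version, so they are only mentioned in a comment).

```python
import pysmile
# import pysmile_license  # a valid SMILE license module is required

ds = pysmile.learning.DataSet()
ds.read_file("data.csv")                 # hypothetical data file (no missing values)

# Bayesian Search: score-based learning, returns a DAG as a Network object.
bs = pysmile.learning.BayesianSearch()
net = bs.learn(ds)
net.write_file("learned_bayesian_search.xdsl")

# PC (constraint-based) is run the same way via pysmile.learning.PC();
# tune its significance level (alpha) and max adjacency size there --
# the exact property names depend on your pysmile version.
```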
by Yan
Thu Sep 14, 2023 3:03 am
Forum: GeNIe
Topic: structure learning
Replies: 4
Views: 1583

structure learning

Dear staff, I'd like to ask some questions about structure learning. 1. I have around 130 cases and 7 nodes. In this case, is it better to use the "Bayesian Search algorithm" rather than the "PC algorithm"? 2. The DAG learned by the "Bayesian Search algorithm" is very different fro...
by Yan
Fri Jun 09, 2023 12:04 am
Forum: GeNIe
Topic: parameter learning and validation in GeNIe 4.0 vs 3.0
Replies: 3
Views: 2006

Re: parameter learning and validation in GeNIe 4.0 vs 3.0

Are there any missing data items in the learning data set? In GeNIe 4.0, when the data is complete, the EM procedure switches to simple counting, regardless of the parameter initialization. There's no missing data. So in this case, which one is reliable? Or can I follow the results of either version?...
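The "simple counting" mentioned in this reply is just maximum-likelihood estimation from relative frequencies, which is well defined when no values are missing. The toy sketch below (made-up data, plain Python) shows the computation EM effectively reduces to in that case.

```python
from collections import Counter

# Toy complete data set: (parent state, child state) pairs.
records = [("yes", "high"), ("yes", "high"), ("yes", "low"),
           ("no", "low"), ("no", "low"), ("no", "high")]

# With complete data, the ML parameters are relative frequencies:
# P(child | parent) = N(parent, child) / N(parent).
pair_counts = Counter(records)
parent_counts = Counter(parent for parent, _ in records)
cpt = {pair: n / parent_counts[pair[0]] for pair, n in pair_counts.items()}

print(cpt)   # e.g. P(C=high | P=yes) = 2/3
```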
by Yan
Thu Jun 08, 2023 1:48 pm
Forum: GeNIe
Topic: parameter learning and validation in GeNIe 4.0 vs 3.0
Replies: 3
Views: 2006

parameter learning and validation in GeNIe 4.0 vs 3.0

Dear staff, I just found that the parameter learning of GeNIe 4.0 is a bit different from GeNIe 3.0. Specifically, in version 3.0, after I select "Randomize" for parameter initialization, the value of Log(p) changes every time I run the model. Normally, I will run the model several times unt...
by Yan
Wed Jun 22, 2022 12:28 am
Forum: GeNIe
Topic: randomize parameter
Replies: 2
Views: 2115

Re: randomize parameter

Hi Yan, I believe you are asking about randomization in the context of the EM algorithm (please correct me if I am wrong). In validation, this parameter also refers to the EM algorithm part (learning the parameters at each cross-validation step). It is an empirical finding that randomization gets y...
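The "run EM several times with randomized starting parameters and keep the best Log(p)" workflow described in these threads can also be scripted. The sketch below uses pysmile with placeholder file names; the setter set_randomize_parameters and the score accessor get_last_score are assumed to mirror GeNIe's Randomize option and its reported Log(p), so check the exact names in your pysmile version.

```python
import pysmile
# import pysmile_license  # a valid SMILE license module is required

ds = pysmile.learning.DataSet()
ds.read_file("data.csv")                     # hypothetical data file

em = pysmile.learning.EM()
em.set_randomize_parameters(True)            # assumed name of the Randomize switch

best_logp = None
for _ in range(10):                          # several random restarts
    net = pysmile.Network()
    net.read_file("model.xdsl")              # hypothetical model with fixed structure
    matching = ds.match_network(net)
    em.learn(ds, net, matching)
    logp = em.get_last_score()               # assumed accessor for Log(p)
    if best_logp is None or logp > best_logp:
        best_logp = logp
        net.write_file("best_parameters.xdsl")

print("best Log(p):", best_logp)
```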
by Yan
Fri Jun 17, 2022 1:02 am
Forum: GeNIe
Topic: randomize parameter
Replies: 2
Views: 2115

randomize parameter

Dear staff, Can I ask some questions about the "Randomize" option: 1. In the user manual, it says "Using the Randomize option may be especially useful when learning parameters with latent variables." Could you please explain this in more detail? 2. If we select "Randomize" in...
by Yan
Fri Jun 03, 2022 5:51 am
Forum: GeNIe
Topic: diagnostic values are too small
Replies: 1
Views: 1808

diagnostic values are too small

Dear staff,

I would like to know why the diagnostic values of observation ranked nodes are so small (e.g., 0.006) in GeNIe's diagnosis mode. Is this common or usual? How can we explain it? Thanks.

Kind regards
by Yan
Thu Apr 14, 2022 1:18 am
Forum: GeNIe
Topic: explain the tornado diagram
Replies: 1
Views: 2369

explain the tornado diagram

Dear staff, Please see the attached example of a BN model. I would like to ask some questions about the tornado diagram. For example, when you set node L as the target and run the sensitivity analysis, you will find the most sensitive parameter for "L = good" is "I = L | D = L". ...
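For readers unsure what a tornado bar measures, the brute-force sketch below (plain Python, invented numbers, not GeNIe's implementation) sweeps a single CPT entry over a ±10% interval and records the resulting range of the target value; the length of that interval is what the bar shows, and the parameter with the longest bar is the most sensitive one.

```python
# Toy chain D -> I with P(D) fixed; the swept parameter is P(I = L | D = L).
def target_value(p_i_l_given_d_l):
    p_d = {"L": 0.3, "H": 0.7}                        # prior of D (made up)
    p_i_l_given_d = {"L": p_i_l_given_d_l, "H": 0.2}  # P(I = L | D), made up
    # Target: P(I = L), marginalized over D.
    return sum(p_d[d] * p_i_l_given_d[d] for d in p_d)

base = 0.6                                            # current value of the parameter
swept = [target_value(base * (1 + delta)) for delta in (-0.1, 0.0, 0.1)]
print(min(swept), max(swept))   # the width of this range is the tornado bar length
```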