Modelling ongoing development with Dynamic Bayesian Networks

The engine.
rmorales
Posts: 12
Joined: Mon Oct 29, 2018 11:21 pm

Modelling ongoing development with Dynamic Bayesian Networks

Post by rmorales » Thu Dec 20, 2018 3:26 am

I have a situation where I am using Dynamic Bayesian Networks to follow, in "real time", the development of students' competences. That means I have to update beliefs on each new piece of evidence as it arrives, so SMILE's behaviour of considering all temporal evidence at once, unrolling the network, is not what I need.

So, I am thinking of something like:
  1. Initialize the beliefs with maximum uncertainty (e.g. a flat distribution on the top node).
  2. When new evidence arrives, at time 1, set the current beliefs as initial conditions, set the evidence, and update beliefs.
  3. Repeat the previous step as new evidence comes in.
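In rough Python, the loop I have in mind looks like this recursive filtering sketch: a single two-state competence node with a transition model and a noisy observation model. It is not SMILE-specific, and all the numbers and names are made up for illustration:

```python
# Toy sketch of the rolling update: one competence node with states
# (low, high), a transition model, and a noisy observation model.
# All probabilities here are invented for illustration only.

TRANSITION = {          # P(state_t | state_{t-1})
    "low":  {"low": 0.8, "high": 0.2},
    "high": {"low": 0.1, "high": 0.9},
}
OBSERVATION = {         # P(evidence | state)
    "low":  {"fail": 0.7, "pass": 0.3},
    "high": {"fail": 0.2, "pass": 0.8},
}

def normalize(dist):
    total = sum(dist.values())
    return {s: p / total for s, p in dist.items()}

def step(belief, evidence=None):
    """One cycle: predict from the previous belief, then condition on
    the new evidence (if any) and renormalize."""
    predicted = {
        s: sum(belief[prev] * TRANSITION[prev][s] for prev in belief)
        for s in TRANSITION
    }
    if evidence is None:
        return predicted
    return normalize({s: predicted[s] * OBSERVATION[s][evidence]
                      for s in predicted})

# Step 1: start from maximum uncertainty (flat prior).
belief = {"low": 0.5, "high": 0.5}

# Steps 2-3: fold in evidence as it arrives; the posterior of each
# cycle becomes the prior of the next, so the past is never revised.
for ev in ["fail", "pass", "pass"]:
    belief = step(belief, ev)
```

The key property is that each cycle only carries the current belief forward, which is exactly the "initial conditions" behaviour I am after.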
Now the questions:
  • Is that a proper way to apply SMILE to my problem? If not, how should it be used?
  • If so, shall I link the initial conditions directly to slice 1, or shall I connect them to a new slice 0 (representing the state before slice 1)?
  • Should I use terminal nodes to get the results on every cycle?
I would very much appreciate your guidance on properly using SMILE.

Regards.

marek [BayesFusion]
Site Admin
Posts: 267
Joined: Tue Dec 11, 2007 4:24 pm

Re: Modelling ongoing development with Dynamic Bayesian Networks

Post by marek [BayesFusion] » Sun Dec 23, 2018 1:09 am

I'm not sure I understand your modeling situation and your needs, so I will try to restate them and give you a chance to correct me in case I have it all wrong. You have a DBN model of a student's knowledge, and you need the DBN to predict the trajectory along which the student's knowledge will develop in the future. I assume that you are interested in the trajectory, as this seems to me the only valid reason why you would want to use a DBN rather than a static BN. Now, as you observe the student's behavior, you want to enter evidence into the network and have it update its prediction of the trajectory, correct?

It seems to me that this (entering successive evidence and updating after every piece of evidence) is the typical way of using a Bayesian network package, so I am not sure why you want to do something different. Is it a question of performance, i.e., you find that the DBN is updating too slowly?

Marek

rmorales
Posts: 12
Joined: Mon Oct 29, 2018 11:21 pm

Re: Modelling ongoing development with Dynamic Bayesian Networks

Post by rmorales » Sat Dec 29, 2018 5:37 am

Certainly it was a misunderstanding on my part, as I had used pgmpy previously and it did exactly what I needed (for prototyping) by simply creating the dynamic network, adding the evidence and running it. This does not happen in PySMILE, since evidence at t+n can affect beliefs at t, but PySMILE is closer to what I will need to do later: restore the net from the DB, initialize it with the previous status (ANCHOR), add evidence (PLATE), gather results (TERMINAL), and store the results in the DB.

Now my prototype seems to be working for the net shown in the picture below:
[Attachment: colaborar_din_genie.png — network diagram]
Given the evidence

Code:

cn = CollaborationNetwork()
cn.add_evidence(cn.evidence_name(cn.PROPONER_E),1,cn.BAJO)
cn.add_evidence(cn.evidence_name(cn.APORTAR_E),2,cn.BAJO)
cn.skip_evidence(3)
cn.add_evidence(cn.evidence_name(cn.PROPONER_E),4,cn.MEDIO)
cn.add_evidence(cn.evidence_name(cn.APORTAR_E),5,cn.MEDIO)
cn.skip_evidence(6)
cn.add_evidence(cn.evidence_name(cn.COLABORAR_E),7,cn.MEDIO)
cn.skip_evidence(8)
cn.skip_evidence(9)
cn.add_evidence(cn.evidence_name(cn.PROPONER_E),10,cn.ALTO)
cn.add_evidence(cn.evidence_name(cn.APORTAR_E),11,cn.MEDIO)
cn.skip_evidence(12)
cn.skip_evidence(13)
cn.add_evidence(cn.evidence_name(cn.PROPONER_E),14,cn.MEDIO)
cn.skip_evidence(15)
cn.skip_evidence(16)
it produces what I was expecting (Promedio means Average, and Incertidumbre means Uncertainty, the latter calculated as normalised entropy):
[Attachment: caso-bajoMedio-16-pysmile.png — plot of average beliefs over the 16 time steps]
[Attachment: caso-bajoMedio-16-incertidumbre-pysmile.png — plot of normalised entropy over the 16 time steps]
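For reference, the normalised entropy I use as the uncertainty measure is computed by a small helper of my own (not part of PySMILE), along these lines:

```python
import math

def normalized_entropy(probs):
    """Shannon entropy of a discrete distribution, scaled to [0, 1]
    by dividing by log(n): a flat distribution gives 1 (maximum
    uncertainty), a point-mass distribution gives 0 (certainty)."""
    n = len(probs)
    if n < 2:
        return 0.0  # a single-outcome distribution carries no uncertainty
    h = -sum(p * math.log(p) for p in probs if p > 0.0)
    return h / math.log(n)

print(normalized_entropy([0.5, 0.5]))  # flat over two states -> 1.0
```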

marek [BayesFusion]
Site Admin
Posts: 267
Joined: Tue Dec 11, 2007 4:24 pm

Re: Modelling ongoing development with Dynamic Bayesian Networks

Post by marek [BayesFusion] » Tue Feb 05, 2019 3:29 pm

rmorales wrote:
Sat Dec 29, 2018 5:37 am
This does not happen in PySMILE, since evidence at t+n can affect beliefs at t, but PySMILE is closer to what I will need to do later: restore the net from the DB, initialize it with the previous status (ANCHOR), add evidence (PLATE), gather results (TERMINAL), and store the results in the DB.
I'm sorry I did not look at your response earlier -- I was traveling over the holidays and only just got back to your post. I'm not sure that what you describe above is a PySMILE-only feature. This is the way all DBNs work, or should work, i.e., observations made at some time step influence the probability distributions over variables at earlier time steps. Think about it in the following way: "If I observe that a student is proficient at time t, then, other information absent, my belief that the student was proficient at time t-1 goes up." If you want new observations not to influence the past, just observe the past, i.e., add the past observations to the evidence. Does this help?
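A tiny numerical illustration of this smoothing effect, using invented numbers and a brute-force enumeration over a two-slice chain (nothing here is tied to SMILE or to your model): a "pass" observed at slice 2 raises the belief that the hidden state was already "high" at slice 1.

```python
from itertools import product

# Toy two-slice chain: hidden state s1 -> s2, each with a noisy
# observation e1 / e2. All probabilities are made up for illustration.
PRIOR = {"low": 0.5, "high": 0.5}
TRANS = {"low":  {"low": 0.8, "high": 0.2},
         "high": {"low": 0.1, "high": 0.9}}
OBS   = {"low":  {"fail": 0.7, "pass": 0.3},
         "high": {"fail": 0.2, "pass": 0.8}}

def marginal_s1(evidence):
    """P(s1 | evidence) by enumerating the joint distribution.
    `evidence` maps 'e1' and/or 'e2' to an observed outcome."""
    weights = {"low": 0.0, "high": 0.0}
    for s1, s2 in product(PRIOR, PRIOR):
        w = PRIOR[s1] * TRANS[s1][s2]
        if "e1" in evidence:
            w *= OBS[s1][evidence["e1"]]
        if "e2" in evidence:
            w *= OBS[s2][evidence["e2"]]
        weights[s1] += w
    total = sum(weights.values())
    return {s: w / total for s, w in weights.items()}

no_ev  = marginal_s1({})              # prior marginal of s1
future = marginal_s1({"e2": "pass"})  # evidence only at the later slice
```

With no evidence the marginal of s1 is flat; after observing only the slice-2 outcome, the slice-1 belief shifts toward "high" -- the later observation has changed the earlier distribution, exactly as described above.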

Marek
