Node types
GeNIe supports the following node types:
Chance nodes, drawn as ovals, denote uncertain variables.
There are three basic types of discrete chance nodes: General, NoisyMAX, and NoisyAdder. There is no distinction between the three types in the Graph View, as they differ only in the way their conditional probability distributions are specified. See the Canonical models section for more information on the NoisyMAX and NoisyAdder nodes. Chance nodes are discrete but are capable of representing continuous quantities, discretized as Identifiers, Intervals, Identifiers with intervals, or Identifiers with point values. Defining a node as Intervals or Point values makes the node numerical, even though it is discrete. We will discuss these nodes in detail in the Node properties section.
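As an illustration of the Identifiers with intervals discretization, the following minimal sketch (plain Python, purely illustrative and not GeNIe or SMILE code; the state names and interval boundaries are made up) maps a continuous value to the discrete state whose interval contains it:

# Purely illustrative: a discrete node defined with identifiers and intervals.
from bisect import bisect_right

states = ["Low", "Medium", "High"]       # identifiers
boundaries = [0.0, 10.0, 20.0, 30.0]     # interval edges: [0,10), [10,20), [20,30]

def discretize(x):
    """Return the discrete state whose interval contains the continuous value x."""
    if not boundaries[0] <= x <= boundaries[-1]:
        raise ValueError("value outside the discretized range")
    index = min(bisect_right(boundaries, x) - 1, len(states) - 1)
    return states[index]

print(discretize(12.5))   # -> "Medium"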
Deterministic nodes, usually drawn as double circles or double ovals, represent either constant values or values that are algebraically determined from the states of their parents. In other words, if the values of their parents are known, then the value of a deterministic node is also known with certainty. Deterministic nodes are quantified similarly to Chance nodes; the only difference is that their probability tables contain only zeros and ones (there is no uncertainty about the outcome of a deterministic node once all its parents are known).
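To make this concrete, here is a minimal, purely illustrative sketch (plain Python, not GeNIe or SMILE code) of the kind of zero/one table behind a deterministic node; the node and parent names are made up, with the hypothetical node Alarm defined as the logical OR of two binary parents:

# Purely illustrative: a deterministic node's table contains only zeros and ones.
from itertools import product

parent_states = [("Burglary", [True, False]), ("Earthquake", [True, False])]
child_states = [True, False]

cpt = {}
for b, e in product(*[s for _, s in parent_states]):
    value = b or e   # deterministic function of the parents
    # probability 1 for the determined state, 0 for the other one
    cpt[(b, e)] = {c: (1.0 if c == value else 0.0) for c in child_states}

for combo, dist in cpt.items():
    print(combo, dist)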
There is a common error, made by novices in the area of probabilistic graphical models, of equipping models with deterministic parentless nodes. This is a bad practice: while it may not change the numerical properties of the model, it serves no purpose, obscures the picture, and makes the model larger and, hence, harder to update. You will typically not notice the impact of these nodes on the speed of calculations in GeNIe, because it is so efficient and fast, but at some point even GeNIe may choke - after all, calculations in Bayesian networks are worst-case NP-hard. A typical motivation of modelers is to add prior knowledge that is relevant to the model. If this is the motivation, then it is best to describe this prior knowledge in on-screen text boxes or annotations. For example, a text box that lists model assumptions may state that "the economy is struggling", etc. From the theoretical point of view, every probability in a model is conditional on the background knowledge, so for any probability p, one could write p=Pr(X|ζ), where X is the event in question and ζ is the background knowledge. Because every model parameter would have to be conditioned on ζ, one typically omits it. If there is a non-zero chance that the economy will change during the lifetime of the model (for example, one might want to use the same model for a different region/country), then it is best to make it a chance node. Chance nodes allow for observing their states (e.g., the economy is low now), and it is possible to calculate the Value of Information (VOI) for such a chance node.
Decision nodes, drawn as rectangles, denote variables that are under the decision maker's control and are used to model the decision maker's options. Decision nodes in GeNIe are always discrete and specified by a list of possible states/actions.
Value nodes (also called Utility nodes), drawn as hexagons, denote variables that contain information about the decision maker's goals and objectives. They express the decision maker's preferences over the outcomes of their direct predecessors.
There are two fundamental types of value nodes: Utility and Multi-Attribute Utility. The latter includes Additive Linear Utility (ALU) functions as a special case. There is no distinction between the two in the Graph View, as they differ only in the way they specify the utility functions. Utility nodes specify the numerical valuation of utility, while Multi-Attribute Utility nodes specify the way simple Utility nodes combine into a Multi-Attribute Utility function. Utility nodes cannot have other utility nodes as parents; Multi-Attribute Utility nodes can have only Utility nodes as parents. See the Multiple Utility Nodes section for more information.
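As a small illustration of the ALU case, the sketch below (plain Python, purely illustrative and not GeNIe or SMILE code; the attributes, weights, and sub-utility values are made up) combines the values of two Utility parents into a single multi-attribute utility:

# Purely illustrative: an additive linear combination of Utility-node values.
weights = {"Cost": 0.4, "Comfort": 0.6}            # hypothetical attribute weights
sub_utilities = {"Cost": 80.0, "Comfort": 55.0}    # values delivered by the Utility parents

multi_attribute_utility = sum(weights[a] * sub_utilities[a] for a in weights)
print(multi_attribute_utility)   # 0.4*80 + 0.6*55 = 65.0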
Equation nodes, which are relatives of chance nodes, are drawn as ovals with a wave symbol, denoting that they can take continuous values. Instead of a conditional probability distribution table, which describes the interaction of a discrete node with its parents, an equation node contains an equation that describes the interaction of the node with its parents. The equation can contain noise, which typically enters it in the form of a probability distribution.
Deterministic equation nodes, drawn as double ovals with a wave symbol, denote equation nodes without noise, i.e., they are either constants or equations that do not contain a noise element. Once the states of their parents are known, the state of the child is thus fully determined.
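The following minimal sketch (plain Python, purely illustrative and not GeNIe or SMILE code; the equation Y = 2*X + Normal(0,1) is made up) shows the difference between the two: repeated evaluation of an equation node with noise varies, while a deterministic equation node always yields the same value for the same parent value.

# Purely illustrative: equation node with noise vs. deterministic equation node.
import random

def equation_node(x):
    """Equation node: repeated evaluation with the same parent value varies."""
    return 2.0 * x + random.gauss(0.0, 1.0)   # noise term ~ Normal(0, 1)

def deterministic_equation_node(x):
    """Deterministic equation node: the parent value fully determines the child."""
    return 2.0 * x

print([round(equation_node(3.0), 2) for _ in range(3)])   # three different samples
print(deterministic_equation_node(3.0))                   # always 6.0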
Submodel nodes, drawn as rounded rectangles, denote submodels, i.e., conceptually related groups of variables. Submodel nodes are essentially containers for groups of nodes, existing only for the purposes of the user interface and helping to keep models manageable.
To learn how to create nodes and arcs between them, see the introductory sections on Building a Bayesian network and Building an influence diagram.
There is one useful piece of functionality that is best discussed in this section. When creating new nodes, one has to select one of the node creation tools (Chance, Deterministic, NoisyMAX, Equation, Decision, and Value). To facilitate fast construction of models, GeNIe allows for a quick change of the selected tool to a closely related one. This is especially useful when using the "sticky mode," i.e., when double-clicking on a tool. When the CTRL key is pressed while a node tool is active, GeNIe temporarily selects an alternate (closely related) node type. The list below shows the transition to alternate types when the CTRL key is pressed:
Chance -> Equation
Equation -> Chance
Deterministic -> Chance
Decision -> Chance
NoisyMAX -> NoisyAdder (the cursor remains the same)
Value -> ALU (the cursor remains the same)
This functionality is useful, for example, when working on hybrid Bayesian networks, as the user may remain in the sticky mode, creating Chance and Equation nodes alternately, depending on the need.