Bernoulli
A Bernoulli distribution models a single yes/no (two-outcome) event.
Typical real-world meanings:
“Did it rain today?” (yes/no)
“Did the user click the link?” (yes/no)
“Is the item defective?” (yes/no)
In WebPPL, Bernoulli samples are booleans: true / false.
Constructor
Bernoulli({p: ...})
p: success probability in [0, 1]

Support: {true, false}
Relationship to booleans
Bernoulli is the canonical distribution when your random variable is literally a boolean. That means:
sample(Bernoulli({p: 0.7})) returns either true or false. The exact meaning of “success” is up to you. Often we interpret:
- true = success / yes / event happens
- false = failure / no / event does not happen
Relationship to flip
flip is shorthand for a Bernoulli draw:
flip(p) is equivalent to sample(Bernoulli({p: p})). flip() uses the default p = 0.5 (a fair coin).
Rule of thumb:
- Use flip when you just want a quick boolean coin flip.
- Use Bernoulli({p: ...}) when you want an explicit distribution object (e.g. to call score or pass it around).
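Conceptually, flip(p) is just a p-weighted coin. Here is a minimal plain-JavaScript sketch (outside WebPPL) of that idea; `flipSketch` is a hypothetical name, and in WebPPL you would simply call flip(p):

```javascript
// Plain-JavaScript sketch of a p-weighted coin: returns true with
// probability p, false otherwise. Hypothetical helper, not WebPPL's flip.
function flipSketch(p) {
  return Math.random() < p;
}

// Sanity check: over many draws, the frequency of `true` approaches p.
var p = 0.7;
var n = 100000;
var trues = 0;
for (var i = 0; i < n; i++) {
  if (flipSketch(p)) { trues++; }
}
console.log((trues / n).toFixed(2)); // close to 0.70
```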
Relationship to Categorical
You can also express the same two-outcome distribution using Categorical by making
the outcomes explicit:
sample(Categorical({ps: [p, 1-p], vs: [true, false]}))
This is useful when you want non-boolean outcomes, e.g. ['H','T'] or [1,0],
or when you later generalize to more than two outcomes.
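To see why the two forms agree, here is a plain-JavaScript sketch (outside WebPPL) of categorical sampling by walking the cumulative weights; `sampleCategorical` is a hypothetical helper, not a WebPPL built-in. With ps = [p, 1-p] and vs = [true, false], the loop returns true exactly when the uniform draw falls below p, which is the Bernoulli rule:

```javascript
// Hypothetical inverse-CDF sampler for a categorical distribution.
function sampleCategorical(ps, vs) {
  var u = Math.random();
  var cum = 0;
  for (var i = 0; i < ps.length; i++) {
    cum += ps[i];
    if (u < cum) { return vs[i]; }
  }
  return vs[vs.length - 1]; // guard against floating-point rounding
}

// Two-outcome special case: equivalent to a Bernoulli draw with p = 0.7.
var p = 0.7;
var draw = sampleCategorical([p, 1 - p], [true, false]);
console.log(typeof draw); // "boolean"
```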
Scoring
For Bernoulli:
d.score(true) = log(p)
d.score(false) = log(1 - p)
These are natural logs (base e). If you want the ordinary probability back,
use Math.exp(logp).
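These are ordinary Math calls, so the round trip can be checked in plain JavaScript (the variable names here are illustrative):

```javascript
// Bernoulli log-probabilities by hand.
var p = 0.7;
var logpTrue = Math.log(p);      // what d.score(true) returns
var logpFalse = Math.log(1 - p); // what d.score(false) returns

// Math.exp recovers the ordinary probabilities (up to floating point).
console.log(Math.exp(logpTrue));  // ~0.7
console.log(Math.exp(logpFalse)); // ~0.3
```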
Gotcha: booleans vs 0/1
Bernoulli returns booleans (true/false).
If you need numeric 0/1 values, either:
convert explicitly: (sample(Bernoulli({p: p})) ? 1 : 0)
or, if you need a vector of 0/1 outcomes, consider MultivariateBernoulli({ps: ...}).
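The explicit conversion is plain JavaScript, so it can be illustrated outside WebPPL. Here `draws` stands in for an array of sampled Bernoulli booleans; summing the converted values counts the successes:

```javascript
// Convert boolean draws to 0/1 with a ternary, then sum to count heads.
var draws = [true, true, false, true, false];
var zeroOnes = draws.map(function(b) { return b ? 1 : 0; });
var numHeads = zeroOnes.reduce(function(a, b) { return a + b; }, 0);
console.log(zeroOnes); // [ 1, 1, 0, 1, 0 ]
console.log(numHeads); // 3
```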
Executable example: basics (sample, score, flip)
var p = 0.7;
var d = Bernoulli({p: p});

var out = {
  p: p,

  // Samples are booleans (true/false)
  samples: repeat(8, function() { return sample(d); }),

  // score returns natural log probs
  logp_true: d.score(true),
  logp_false: d.score(false),

  // flip(p) is shorthand for sampling Bernoulli({p: p})
  flipSamples: repeat(8, function() { return flip(p); })
};

out;
{
p: 0.7,
samples: [
true, true, true,
true, false, true,
true, true
],
logp_true: -0.35667494393873245,
logp_false: -1.203972804325936,
flipSamples: [
false, true,
false, true,
true, false,
false, false
]
}
A real-life example: estimating a biased coin (discrete grid)
Suppose you flipped a coin 10 times and observed the outcomes (true = heads).
We want to infer which p values are plausible.
To keep everything finite and exactly enumerable, we put a prior on a discrete grid
(e.g. p in {0.1, 0.2, ..., 0.9}) and use Infer({method: 'enumerate'}).
// Real-life story: a biased coin.
// true = heads, false = tails.
var observations = [
  true, true, false, true, true,
  false, true, true, true, false
];

// Discrete grid prior so we can enumerate exactly.
var grid = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9];
var prior = Categorical({vs: grid}); // uniform over grid (ps omitted)

var model = function() {
  var p = sample(prior);
  map(function(o) {
    observe(Bernoulli({p: p}), o);
  }, observations);
  return p;
};

var posterior = Infer({method: 'enumerate', model: model});

// Convert log scores to ordinary probabilities
var supp = posterior.support();
var probs = map(function(v) { return Math.exp(posterior.score(v)); }, supp);

var out = {
  support: supp,
  probs: probs,
  sum: sum(probs)
};

out;
{
support: [
0.9, 0.8, 0.7,
0.6, 0.5, 0.4,
0.3, 0.2, 0.1
],
probs: [
0.0630726246485273,
0.22123978796752938,
0.29321985989557897,
0.2362555743579042,
0.12877850558581388,
0.046667767774400876,
0.009892048584565563,
0.0008642179217481641,
0.000009613263930578787
],
sum: 0.9999999999999989
}
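Because the observations contain 7 heads and 3 tails, the enumerated posterior can be checked by hand with Bayes' rule: under the uniform grid prior, the posterior is proportional to p^7 * (1-p)^3. This plain-JavaScript check (outside WebPPL) reproduces the numbers above:

```javascript
// Grid posterior for 7 heads / 3 tails under a uniform prior:
// posterior(p) ∝ p^7 * (1-p)^3, normalized over the grid.
var grid = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9];
var heads = 7, tails = 3;

var unnormalized = grid.map(function(p) {
  return Math.pow(p, heads) * Math.pow(1 - p, tails);
});
var Z = unnormalized.reduce(function(a, b) { return a + b; }, 0);
var posterior = unnormalized.map(function(w) { return w / Z; });

// Matches the Infer output, e.g. the posterior mass at p = 0.7:
console.log(posterior[6].toFixed(4)); // "0.2932"
```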