Truncnormal #32
Conversation
The two-sided algorithm can fail (very low acceptance probability) when the range is wide but far away from the mean. A method is still missing that uses the one-sided algorithm and does rejection sampling to cover the two-sided case. (I will implement this.) The problem is how to determine efficiently which of the 4 methods to use.
The paper gives useful probabilities, but they are all quite expensive to evaluate. This would not matter if a lot of samples are drawn, but could be a significant cost if only a few are used. I think we could also use the inverse-CDF approach implemented by @Caellian with some numerical optimizations. This has the advantage that it does not use rejection sampling and would vectorize well (maybe?).
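A minimal sketch of the inverse-CDF idea on standardized bounds, assuming the `statrs` crate for the normal CDF/quantile (illustrative only, not @Caellian's actual implementation; the name matches the `sample_inverse_cdf` used in the dispatch below):

```rust
use rand::Rng;
use statrs::distribution::{ContinuousCDF, Normal};

// Sketch only: inverse-CDF sampling on standardized bounds a < b.
// For extreme tails (|a| > ~8) the difference cdf(b) - cdf(a)
// underflows in f64, which is where a log-space variant is needed.
fn sample_inverse_cdf<R: Rng + ?Sized>(rng: &mut R, a: f64, b: f64) -> f64 {
    let std_normal = Normal::new(0.0, 1.0).unwrap();
    let (cdf_a, cdf_b) = (std_normal.cdf(a), std_normal.cdf(b));
    // Map one uniform draw onto the CDF mass of [a, b], then invert.
    let u: f64 = rng.random();
    std_normal.inverse_cdf(cdf_a + u * (cdf_b - cdf_a))
}
```

No rejection loop means a fixed cost per draw, which is also what would make batched/vectorized sampling straightforward.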
In the paper you're referencing (Robert, Christian P. (1995)) and later in Chopin (2012), a cutoff is computed analytically as 0.47 σ (or 0.477 σ) for finite intervals and 0.5 σ for semi-finite ones, so if a ≥ 0.477 σ, switch to an exponential-tail sampler (Robert/Marsaglia/Chopin). So you can compare both bounds in the constructor to figure out where they fall, and then just use whichever algorithm is most efficient for that specific range. Branching is unavoidable: it will always be either a vtable lookup or a jump (preferable). Rust is more likely to optimize away a simple jump if the range is constant, but CPU branch prediction should remove this cost in most cases anyway.

```rust
let a = (start - mean) / std;
let b = (end - mean) / std;
match () {
    // Extremely narrow interval: treat as degenerate
    _ if (b - a).abs() < 1e-6 => 0.5 * (a + b),
    // Narrow interval: the inverse CDF works best here, with f64.
    // Use log-space for extreme a, b (e.g., > 8 sigma or < -8 sigma).
    _ if (b - a) < 1e-3 => sample_inverse_cdf(a, b),
    // Both bounds positive (left cutoff above mean)
    _ if a >= 0.47 => sample_exponential_tail(a, b),
    // Both bounds negative (right cutoff below mean)
    // symmetric: flip and reuse the upper tail sampler
    _ if b <= -0.47 => -sample_exponential_tail(-b, -a),
    // Straddling zero (typical central case):
    // the standard rejection sampler works efficiently
    _ if a < 0.0 && b > 0.0 => sample_rejection(a, b),
    // Asymmetric truncation (one tail + near-zero cutoff),
    // mixed region: tail on one side, cutoff on the other;
    // use exponential or hybrid rejection
    _ if a >= 0.0 => sample_rejection(a, b),
    _ if b <= 0.0 => -sample_rejection(-b, -a),
    _ => panic!("Invalid truncation bounds or NaN"),
}
```
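Since the bounds are fixed at construction, the dispatch could be resolved once and cached; a hypothetical sketch (the `Method` enum and the struct fields are made up for illustration, not this PR's API):

```rust
// Hypothetical sketch: classify the range once at construction time.
enum Method {
    Degenerate(f64), // stores the standardized midpoint
    InverseCdf,
    UpperTail,
    LowerTail,
    Rejection,
}

struct TruncatedNormal {
    mean: f64,
    std: f64,
    a: f64, // standardized lower bound
    b: f64, // standardized upper bound
    method: Method,
}

impl TruncatedNormal {
    fn new(mean: f64, std: f64, start: f64, end: f64) -> Self {
        let a = (start - mean) / std;
        let b = (end - mean) / std;
        let method = match () {
            _ if (b - a).abs() < 1e-6 => Method::Degenerate(0.5 * (a + b)),
            _ if (b - a) < 1e-3 => Method::InverseCdf,
            _ if a >= 0.47 => Method::UpperTail,
            _ if b <= -0.47 => Method::LowerTail,
            _ => Method::Rejection,
        };
        TruncatedNormal { mean, std, a, b, method }
    }
}
```

Sampling then matches on the cached `method`, so branch prediction sees the same jump on every draw.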
Thanks, this already looks like a good approach :)
Only if the range includes [-0.47 σ, 0] or [0, 0.47 σ] should it use rejection sampling; [-0.47 σ, 0.47 σ] is where the majority of the mass is. For [0.47 σ, ∞] use a tail algorithm (Robert (lemma 2.2)/Marsaglia/Chopin); this is what I mean. Wikipedia says Marsaglia is on average faster than Robert even though it has a higher rejection rate, because it does not require the costly numerical evaluation of the exponential function.

```rust
use rand::Rng;

/// Marsaglia
fn sample_exponential_tail<R: Rng + ?Sized>(rng: &mut R, a: f64, b: f64) -> f64 {
    assert!(a > 0.477 && a < b); // this is here only for example purposes, remove it
    // NOTE: the caller flips and negates a & b if b < 0.0, so the same function
    // is called and only the returned value is negated
    // NOTE: if the range intersects 0, then use the current sampler impl with rejection
    loop {
        let u1: f64 = rng.random::<f64>().max(1e-16); // uniform f64, clamped away from 0 so ln() is finite
        let x = (a * a - 2.0 * u1.ln()).sqrt();
        if x > b {
            // reject if beyond the upper bound; this branch is never taken
            // when b is infinity, assume it's optimized away by the compiler
            // or branch prediction
            continue;
        }
        let u2: f64 = rng.random::<f64>(); // and another one
        if u2 < a / x {
            return x;
        }
    }
}
```
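A hypothetical usage sketch of the function above, to pin down the calling convention (the bound values and the `mean`/`std` rescaling are illustrative assumptions, and `rand::rng()` is the rand 0.9 spelling of `thread_rng()`):

```rust
fn main() {
    let mut rng = rand::rng();
    let (mean, std) = (1.0_f64, 2.0_f64);
    let (a, b) = (3.0_f64, f64::INFINITY); // standardized bounds, a > 0.477
    for _ in 0..5 {
        let z = sample_exponential_tail(&mut rng, a, b);
        debug_assert!(z >= a && z <= b);
        let x = mean + std * z; // map back to the original scale
        println!("{x}");
    }
}
```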
CHANGELOG.md entry

Summary

Added a `TruncatedNormal` distribution

Motivation

#7

Details

It follows Robert, Christian P. (1995), "Simulation of truncated normal variables".
The test still needs to be improved. The code seems to be working, but it is still a draft.