
Conversation

@benjamin-lieser (Member) commented Oct 27, 2025

  • Added a CHANGELOG.md entry

Summary

Added a TruncatedNormal distribution

Motivation

#7

Details

It follows Robert, Christian P. (1995), "Simulation of truncated normal variables".

The test still needs to be improved. The code seems to be working, but it is still a draft.

@benjamin-lieser (Member, Author) commented Oct 28, 2025

The two-sided algorithm can fail (very low acceptance probability) when the range is wide but far away from the mean.
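For context, a minimal sketch of the two-sided algorithm with a uniform proposal (Robert 1995), assuming standardized bounds a < b and rand 0.9-style APIs; the function name is illustrative. The acceptance ratio exp((c − z²)/2), with c the squared bound closest to zero, is near zero over most of [a, b] exactly when the interval is wide and far from the mean:

use rand::Rng;

// Sketch: Robert (1995)'s two-sided sampler with a uniform proposal on the
// standardized bounds a < b. Name and structure are illustrative.
fn sample_two_sided_uniform<R: Rng + ?Sized>(rng: &mut R, a: f64, b: f64) -> f64 {
    // Squared bound closest to zero; 0.0 if the interval straddles the mean.
    let c = if a > 0.0 {
        a * a
    } else if b < 0.0 {
        b * b
    } else {
        0.0
    };
    loop {
        let z = rng.random_range(a..=b); // uniform proposal on [a, b]
        // Accept with probability exp((c - z^2) / 2); this is tiny for most
        // of [a, b] when the interval is wide and far from the mean.
        if rng.random::<f64>() < ((c - z * z) / 2.0).exp() {
            return z;
        }
    }
}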

There is a method missing which uses the one-sided algorithm and does rejection sampling to get to the two-sided case. (I will implement this.)

The problem is how to efficiently determine which of the four methods to use:

  • Naive rejection sampling
  • One-sided
  • Two-sided with the uniform proposal
  • Two-sided with rejection sampling on the one-sided case

The paper gives useful probabilities, but they are all quite expensive to evaluate. This would not matter if many samples are drawn, but the setup cost could be noticeable if only a few are used.
Maybe we should add a tuning option to spend more effort upfront in exchange for faster sampling later.

I think we could also use the inverse-CDF approach implemented by @Caellian with some numerical optimizations. This has the advantage that it does not use rejection sampling and might vectorize well.
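For reference, a minimal sketch of that inverse-CDF approach on standardized bounds, assuming the statrs crate for the normal CDF and quantile function (the function name is illustrative). There is no rejection loop, so it is branch-free per sample, though precision degrades when Φ(a) and Φ(b) are both deep in the same tail:

use rand::Rng;
use statrs::distribution::{ContinuousCDF, Normal};

// Sketch: inverse-CDF sampling on standardized bounds a < b.
fn sample_inverse_cdf<R: Rng + ?Sized>(rng: &mut R, a: f64, b: f64) -> f64 {
    let std_normal = Normal::new(0.0, 1.0).unwrap();
    let (fa, fb) = (std_normal.cdf(a), std_normal.cdf(b));
    // Map a uniform draw into [Phi(a), Phi(b)] and invert.
    let u: f64 = rng.random();
    std_normal.inverse_cdf(fa + u * (fb - fa))
}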

@Caellian commented Oct 29, 2025

In the paper you're referencing (Robert, Christian P. (1995)) and later in Chopin (2012), a cutoff is computed analytically as 0.47 σ (or 0.477 σ) for finite intervals and 0.5 σ for semi-finite ones, so if a ≥ 0.477 σ, switch to an exponential-tail sampler (Robert/Marsaglia/Chopin).

So you can compare both bounds in the constructor to figure out where they fall, and then just use whichever algorithm is most efficient for that specific range. Branching is unavoidable: it will always have to be either a vtable lookup or a jump (preferable). Rust is more likely to optimize away a simple jump if the range is constant, and CPU branch prediction should remove this cost in most cases anyway.

let a = (start - mean) / std;
let b = (end - mean) / std;
match () {
    // Extremely narrow interval: treat as degenerate
    _ if (b - a).abs() < 1e-6 => 0.5 * (a + b),

    // Narrow interval: inverse CDF works best here, with f64.
    // Use log-space for extreme a, b (e.g., > 8 sigma or < -8 sigma).
    _ if (b - a) < 1e-3 => sample_inverse_cdf(a, b),

    // Both bounds positive (left cutoff above mean)
    _ if a >= 0.47 => sample_exponential_tail(a, b),
    // Both bounds negative (right cutoff below mean);
    // symmetric: flip and reuse the upper-tail sampler
    _ if b <= -0.47 => -sample_exponential_tail(-b, -a),

    // Straddling zero (typical central case):
    // the standard rejection sampler works efficiently
    _ if a < 0.47 && b > -0.47 => sample_rejection(a, b),

    // Asymmetric truncation (one tail + near-zero cutoff);
    // mixed region: tail on one side, cutoff on the other;
    // use exponential or hybrid rejection
    _ if a >= 0.0 => sample_rejection(a, b),
    _ if b <= 0.0 => -sample_rejection(-b, -a),
    _ => panic!("Invalid truncation bounds or NaN"),
}
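One way to read the "decide in the constructor" suggestion above, as a minimal sketch with hypothetical names (Method, TruncatedNormal): classify the standardized bounds once at construction and store an enum tag, so the per-sample path is a single, easily predicted jump rather than repeated threshold checks.

enum Method {
    Degenerate,  // (b - a).abs() < 1e-6: return the midpoint
    InverseCdf,  // very narrow interval
    UpperTail,   // a >= 0.47
    LowerTail,   // b <= -0.47: negate and reuse the upper-tail sampler
    Rejection,   // central or mixed region
}

struct TruncatedNormal {
    mean: f64,
    std: f64,
    a: f64, // standardized lower bound
    b: f64, // standardized upper bound
    method: Method,
}

impl TruncatedNormal {
    fn new(mean: f64, std: f64, start: f64, end: f64) -> Self {
        let a = (start - mean) / std;
        let b = (end - mean) / std;
        let method = if (b - a).abs() < 1e-6 {
            Method::Degenerate
        } else if b - a < 1e-3 {
            Method::InverseCdf
        } else if a >= 0.47 {
            Method::UpperTail
        } else if b <= -0.47 {
            Method::LowerTail
        } else {
            Method::Rejection
        };
        Self { mean, std, a, b, method }
    }
}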

@benjamin-lieser (Member, Author) commented
Thanks, this already looks like a good approach :)
I don't think this works well in all cases, though. I guess by sample_exponential_tail you mean sampling from the one-sided distribution and then doing rejection sampling against the upper bound. Depending on the bounds this has poor acceptance rates, and the uniform proposal would be better (it is also significantly faster per proposal). The narrow range would probably also be better served by the uniform proposal.

@benjamin-lieser (Member, Author) commented

a = 0.45, b = inf uses the naive rejection sampling and has an acceptance rate of 32% (consistent with 1 − Φ(0.45) ≈ 0.326), which seems like it could be improved.

@Caellian commented Oct 31, 2025

> a = 0.45, b = inf uses the naive rejection sampling and has an acceptance rate of 32%, which seems like it could be improved.

Only if the range includes [-0.47 σ, 0] or [0, 0.47 σ] should it use rejection sampling; [-0.47 σ, 0.47 σ] is where the majority of the mass is. For [0.45 σ, ∞) use a tail algorithm (Robert (lemma 2.2)/Marsaglia/Chopin); this is what I mean by sample_exponential_tail.

Wikipedia says Marsaglia is on average faster than Robert even though it has a higher rejection rate, because it does not require the costly numerical evaluation of the exponential function.
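For comparison, a minimal sketch of Robert's tail algorithm (lemma 2.2, for the one-sided case b = ∞), with an illustrative function name: it proposes z ~ a + Exp(α) with the optimal rate α = (a + sqrt(a² + 4))/2 and accepts with probability exp(−(z − α)²/2), which is the exp() evaluation that Marsaglia's method below avoids.

use rand::Rng;

// Sketch: Robert (1995), lemma 2.2: translated exponential proposal for
// the upper tail [a, inf).
fn sample_robert_tail<R: Rng + ?Sized>(rng: &mut R, a: f64) -> f64 {
    // Optimal rate for the shifted exponential proposal.
    let alpha = (a + (a * a + 4.0).sqrt()) / 2.0;
    loop {
        // z ~ a + Exp(alpha) via inverse transform; clamp u away from 0.
        let u = rng.random::<f64>().max(1e-16);
        let z = a - u.ln() / alpha;
        // Accept with probability exp(-(z - alpha)^2 / 2).
        if rng.random::<f64>() < (-(z - alpha) * (z - alpha) / 2.0).exp() {
            return z;
        }
    }
}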

use rand::Rng;

/// Marsaglia's tail method
fn sample_exponential_tail<R: Rng + ?Sized>(rng: &mut R, a: f64, b: f64) -> f64 {
    assert!(a > 0.477 && a < b); // here only for example purposes, remove it

    // NOTE: the caller flips a and b (and negates the result) when b < 0.0,
    // so this same function covers the lower tail.
    // NOTE: if the range intersects 0, use the current rejection sampler
    // instead.

    loop {
        // Uniform f64, clamped away from 0 so ln() stays finite.
        let u1: f64 = rng.random::<f64>().max(1e-16);
        let x = (a * a - 2.0 * u1.ln()).sqrt();
        if x > b {
            // Reject proposals beyond the upper bound; this branch is never
            // taken when b is infinity, and branch prediction makes it cheap.
            continue;
        }
        let u2: f64 = rng.random::<f64>();
        if u2 < a / x {
            return x;
        }
    }
}
