
Reworking the fog #1795


@illwieckz

This thread is dedicated to discussing technical (re)implementations for the fog.

Like other transparent things, fog works differently with naive blending and with linear blending.

The current implementation reads, in GLSL, values from the alpha channel of a “fog” image generated by the renderer.

I noticed that linearizing it seems to give interesting results with linear blending, at least on the atcshd map.

Then @slipher said:

The fog image is pretty pointless, all it really does is calculate sqrt(x). We should just do that in GLSL. I think it only exists because someone ported the code from GL1 in an overly mechanical way.

So, regardless of whether we use naive or linear blending, we may drop that image and do the computation in GLSL.

Now, about that computation, @slipher said:

Square root doesn't make much sense as a model for the fog; probably they just tried random functions and picked something that looked OK. Instead of sqrt(a*x), 1 - exp(b*x) would be an obvious model to use (as if the fog is formed from layered alpha blending). a/b is a constant based on the fog density.

So maybe that square root computation for naive blending is already a workaround for naive blending being broken by design, and linearizing it is just another guess that happens to look pleasing. This is a bit like the known fact that using quadratic light attenuation with naive blending somewhat cancels the mistake, while both light attenuation and blending should be linear to be done properly.

So, we may need two computations:

  • one for naive blending (we have to keep sqrt() for compatibility I guess)
  • one for linear blending

Once those two computations are defined, we have to decide how to perform them: either in a precomputed image the GLSL code samples values from, or directly in GLSL.

Doing the computations directly in GLSL would avoid the image sampling (and the binding too, I guess), but switching between the two computations would require compiling two different shader variants, while the image approach can reuse the same compiled GLSL and just switch the image.

Please share any thought you may have on the topic!
