Re: Implementation for Blue-noise Dithered Sampling
[2017/01/31] [post by jbikker] [Re: Implementation for Blue-noise Dithered Sampling]
It's currently improving my 128^2 tile... Spent 78 hours so far. :) Added a progress counter and resume functionality. I'll report when I have something running, but it's going to be after the weekend; the tracer is broken at the moment due to some BSDF experiments.
[2017/02/01] [post by josh247] [Re: Implementation for Blue-noise Dithered Sampling]
Good catch on that instruction, toxie; I've pushed up a fix for it. I hadn't thought about MSVC support, but it's a good point; that should also be working now. That said, I don't have a Windows environment to test on, so if anyone does and tries compiling the source, let us know the result.
I've also tried simulating a 128x128 sample mask at a depth of 4, and got results comparable to those shown in the paper. This took 12 hours with 131072 iterations, on a relatively modern Xeon with 8 cores (16 threads). I'd be interested to see what we get with more iterations; let us know what you find, jbikker :)
[2017/02/01] [post by josh247] [Re: Implementation for Blue-noise Dithered Sampling]
>> ingenious wrote: toxie wrote: Would be interesting to hear your experiences when rendering with it, when optimizing for this 10D case. Cause IMHO the scheme should fall apart when using a large number of dimensions.
Indeed, in my experience it's hard (impossible?) to achieve high-quality blue noise in high dimensions.
I'd imagine the benefit of the blue-noise properties gives diminishing returns rather quickly with greater depth. And as you say, a higher depth value will also produce a sample mask of lower quality. This approach is probably best suited to integration problems that occur early in the path, such as motion blur or spectral sampling, although the paper does give an example of good results with light sampling.
If higher dimensions are required, then we might see better results by padding multiple sample masks of a low depth, each using a different seed value.
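The padding idea could be sketched like this (hypothetical code: `mask()` stands in for a lookup into a precomputed low-depth blue-noise tile, faked here with an integer hash so the sketch runs; the seed constants are arbitrary):

```cpp
#include <cstdint>

// Hypothetical sketch: build an n-dimensional dither vector for pixel (x, y)
// from n independently seeded low-depth masks. mask() stands in for a lookup
// into a precomputed blue-noise tile; here it is faked with an integer hash
// so the sketch is runnable.
static float mask(uint32_t seed, int x, int y) {
    uint32_t h = seed ^ ((uint32_t)x * 374761393u + (uint32_t)y * 668265263u);
    h = (h ^ (h >> 13)) * 1274126177u;
    h ^= h >> 16;
    return (float)(h & 0xFFFFFFu) / 16777216.0f; // in [0, 1)
}

// Fill an n-dimensional sample shift for pixel (x, y), one independently
// seeded mask per dimension (pairs of dimensions could instead share a
// precomputed depth-2 mask).
void paddedShift(int x, int y, int dims, float* out) {
    for (int d = 0; d < dims; ++d)
        out[d] = mask(0x9E3779B9u * (uint32_t)(d + 1), x, y);
}
```

Whether the per-dimension independence costs too much blue-noise quality versus a true high-depth mask is exactly the open question here.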
[2017/02/02] [post by toxie] [Re: Implementation for Blue-noise Dithered Sampling]
To be honest, I don't even know if it can bring any real benefit for anything that is not directly visible from the camera. After all, one exploits the human visual system here, so the higher dimensions/bounces should be better tackled via a properly distributed sample set (e.g. most likely some modern QMC set, or another "hand-optimized" set with a fixed number of samples, especially if one targets interactive/realtime usage).
[2017/02/02] [post by papaboo] [Re: Implementation for Blue-noise Dithered Sampling]
Wouldn't it yield some improvement if the first surface the ray intersects is nearly or completely specular? In that case the noise build-up doesn't really start until the second intersection.
[2017/02/02] [post by toxie] [Re: Implementation for Blue-noise Dithered Sampling]
Yup, that too, of course.
[2017/02/02] [post by ingenious] [Re: Implementation for Blue-noise Dithered Sampling]
>> toxie wrote: To be honest, i don't even know if it can bring any real benefit for anything that is not directly visible from the camera. After all one exploits the human visual system here, so the higher dimensions/bounces should be better tackled via a properly distributed sample set (e.g. most likely some modern QMC set or another "hand-optimized" set with a fixed number of samples, especially if one tackles interactive/realtime usage).
What dithering aims to improve is the correlation between the pixel estimates, so that the distribution of the error is visually pleasing. It does not address the quality of these estimates, i.e. the amount of error. Of course, for each pixel you want to use a good integration pattern, e.g. a QMC one, to lower the amount of error, but that's an orthogonal objective. The two objectives can be combined.
For example, while motion blur and dispersion can be classified as "directly visible from the camera", direct illumination and ambient occlusion are not. And dithering helps with those too.
[2017/02/02] [post by toxie] [Re: Implementation for Blue-noise Dithered Sampling]
True, unless one uses a distribution over the screen instead of per pixel, for example (and then merges the two schemes). So maybe I was a bit distracted here by my "own" use cases and experiments, sorry. ;) And then you sacrifice that part, in addition to a potential quality loss for these "directly visible" dimensions (as one optimizes the sample set offsets for a larger set of dimensions, whereas the lower ones are (most likely?) the more important).
Which brings me to a simple idea: what about weighting the dimensions/bounces in the optimization process, so that early dimensions are more important than later ones? So using a kind of custom vector length that favors lower dimensions over higher ones?
But maybe that's not even true. Thinking more about it, it could also be that the lower dimensions are only important in the beginning/at low sample counts, but the higher dimensions become more important for growing sample counts? All very scene dependent, of course.
EDIT: as for the example: yes, that's why I wrote dimensions/bounces, so basically everything > first hit, incl. collecting the stuff (direct light, pre-computed data, or something like AO) in there.
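The weighting idea could look something like this in the mask-optimization step (a sketch only; the factor-2 decay per dimension is an arbitrary assumption, not something from the paper):

```cpp
#include <cmath>

// Sketch of a "custom vector length" that favors lower dimensions: measure
// the distance between two sample vectors with per-dimension weights that
// decay for later dimensions/bounces, so early dimensions dominate the
// optimization energy. The exponential decay rate is an assumed placeholder.
float weightedDistance(const float* p, const float* q, int dims) {
    float sum = 0.0f;
    for (int d = 0; d < dims; ++d) {
        float w = std::exp2f(-(float)d); // weights 1, 1/2, 1/4, ...
        float diff = p[d] - q[d];
        sum += w * diff * diff;
    }
    return std::sqrt(sum);
}
```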
[2017/02/08] [post by jbikker] [Re: Implementation for Blue-noise Dithered Sampling]
OK, did some quick tests:
- Applying the method to direct light sampling yields the results presented in the paper.
- Applying the method to the first diffuse bounce yields no perceivable improvement in quality.
In general, the number of dimensions is a problem: I tried 6 dimensions (sampling direct light on the first diffuse surface, then the first diffuse bounce, and finally direct light on the second diffuse surface), but this already seems to decrease the quality of the penumbras compared to using just 4 dimensions. This would suggest that using just 2 dimensions could yield the best quality; that way, additional dimensions do not affect the quality of the distribution of the first two. It could be that slightly more converged tiles yield better results; I had 128x128 / d=10 tiles of high quality, but produced the 32x32 / d=6 ones in just a few minutes (didn't expect to need them).
So that's pretty much what everyone expected.
That being said, the method obviously improves image quality for the first couple of samples, it's straightforward to implement, and it should have only a tiny impact on performance.
- Jacco.
EDIT: slightly more converged tile. Method applied to 6 dimensions (NEE-1, diffuse bounce, NEE-2). Obvious tiling pattern due to small tiles; this disappears for larger tiles.
[Image: https://web.archive.org/web/20210509180052im_/http://www.cs.uu.nl/docs/vakken/mov/files/bluenoise.png]
[2017/02/09] [post by toxie] [Re: Implementation for Blue-noise Dithered Sampling]
What do you compare it against in that picture?
EDIT: maybe, also due to the semi-magical weighting function, smaller tiles are better? If that's the case, then one could also have different small tiles that are then used "randomly" over the screen to get rid of the tiling patterns.
[2017/02/09] [post by jbikker] [Re: Implementation for Blue-noise Dithered Sampling]
This is unidirectional path tracing with uniform random variables everywhere, 8 bounces, RR, MIS, NEE, and a Lambert BRDF.
The left side replaces the uniform random variables with the tile. For subsequent pixels I shift the tile randomly over x and y, instead of adding an offset to each value.
[2017/02/09] [post by toxie] [Re: Implementation for Blue-noise Dithered Sampling]
This is from one of my private projects where I included the masks some months ago, using 3 dimensions to compute SSAO: left with 64x64 tiled blue noise, right with 64x64 tiled white noise (but both using the same QMC set for the samples in each pixel). For such a case it can make a really nice difference.
[Image attachment: Untitled.png]
[2017/04/16] [post by friedlinguini] [Re: Implementation for Blue-noise Dithered Sampling]
I was thinking of giving this technique a spin, and I had a couple of random thoughts.
1. The idea is basically to add the pixel-dependent blue-noise shift to a pixel-independent random number, and subtract 1 if necessary to bring things into the range [0, 1), right? But during tile generation, the algorithm would consider (for one dimension in both domain and range) a set of values like [0.0, 0.99, 0.0, 0.99, 0.0] to be "good", but [0.0, 0.01, 0.0, 0.01, 0.0] to be "bad", even though they have very similar effects on the result. Maybe some other definition of distance (e.g., sin(2 * pi * (ps - qs)) * 0.5 + 0.5 for one dimension) is more appropriate?
2. Generating high-dimensional blue-noise tiles is hard, but can you get a good-enough effect by just using multiple independent 1- and/or 2-dimensional tiles, or even the same tile at a different, sufficiently large offset per dimension?
[2017/04/16] [post by friedlinguini] [Re: Implementation for Blue-noise Dithered Sampling]
>> sin(2 * pi * (ps - qs)) * 0.5 + 0.5
OK, that makes no sense. I tried to jam a couple of ideas together without making sure that one didn't break the other.
Maybe c * (x - x * x), where x = ps - qs and c is some constant I haven't worked out yet.
[2017/04/17] [post by ingenious] [Re: Implementation for Blue-noise Dithered Sampling]
The idea is not adding a deterministic (blue-noise) number to a random number. In fact, the idea is to get rid of the randomness altogether: you use the deterministic number to shift your pattern, instead of using a random number.
Regarding the (0, 0.99) vs (0, 0.01) issue, the answer is that it depends. If your 1D sampling domain is "wrapped", i.e. 0.99 and 0.01 are next to each other, then yes, you should ideally take this into account in the distance function when you construct the dither mask. If the domain is not wrapped, then doing so will be suboptimal, that is, you won't get the best blue-noise distribution of the error in your image. So you may want to have two dither masks: one with wrapped values and one with non-wrapped values.
In 2D you have the same thing, but with more combinations. A torus light source would benefit from a mask with values wrapped along both dimensions. For a disk light, or for hemisphere sampling, you would use a mask wrapped along one dimension only. For a rectangular light the optimal choice would be a non-wrapped mask.
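The wrapped vs. non-wrapped per-dimension distance can be sketched in a few lines (illustrative code; `dimDistance` is a made-up helper name):

```cpp
#include <cmath>

// Per-dimension distance for mask construction: on a wrapped ("toroidal")
// domain, 0.99 and 0.01 are close neighbors; on a non-wrapped domain they
// are far apart. Inputs are assumed to lie in [0, 1).
float dimDistance(float a, float b, bool wrapped) {
    float d = std::fabs(a - b);
    return wrapped ? std::fmin(d, 1.0f - d) : d;
}
```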
For higher dimensions, using independent (or independently screen-space shifted) masks is possible, but if there's a lot of variance along all dimensions, then you will get some white noise that might destroy/mask the blue noise from the dithering. So it's not optimal, but it's very practical. Generating higher-dimensional blue-noise masks is hard, because achieving high blue-noise quality in high dimensions is hard in general.
[2017/04/17] [post by friedlinguini] [Re: Implementation for Blue-noise Dithered Sampling]
>> ingenious wrote: The idea is not adding a deterministic (blue-noise) number to a random number. In fact, the idea is to get rid of the randomness altogether: you use the deterministic number to shift your pattern, instead of using a random number.
Right, I was using "random" loosely. My understanding of this algorithm is that a Monte Carlo renderer where samples are taken per pixel might have a random number generator implemented like
  fmod(halton(primes[dimension], sampleNumber) + blueNoise(pixelColumn, pixelRow), 1.0f)
and the halton() function might stand in for the "pixel-independent random number".
>> Regarding the (0, 0.99) vs (0, 0.01) issue, the answer is that it depends. If your 1D sampling domain is "wrapped", i.e. 0.99 and 0.01 are next to each other, then yes, you should ideally take this into account in the distance function when you construct the dither mask.
Isn't the wrapping fundamental to the algorithm? If halton() returned 0.5 in the above code, then blue-noise values of 0.0 and 0.99 would produce final results of 0.5 and 0.49, respectively.
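That arithmetic is easy to check with a tiny helper (a sketch; `shifted()` is a made-up name for the fmod-based shift in the snippet above):

```cpp
#include <cmath>

// The wrap in the fmod-based shift: blue-noise values of 0.0 and 0.99 map a
// base sample of 0.5 to 0.5 and 0.49 respectively, i.e. nearly the same
// point, even though 0.0 and 0.99 are far apart as raw values.
float shifted(float baseSample, float blueNoise) {
    return std::fmod(baseSample + blueNoise, 1.0f);
}
```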
[2017/05/07] [post by josh247] [Re: Implementation for Blue-noise Dithered Sampling]
Just to clarify: here the implementation (as described in the paper) computes the blue-noise mask using a distance over wrapped boundaries. If that's not required, it's a rather simple change to alter this behavior. On another note, I've recently added some progress feedback in the terminal, as per jbikker's suggestion.