Camera Weighting in Eye Tracing
(L) [2014/07/05] [tby Geometrian] [Camera Weighting in Eye Tracing] Hi,
In real life, the aperture size (and the exposure time) affects how bright the image is. This happens because a larger aperture means more light paths hit the sensor, and a longer exposure time integrates more energy.
How is this implemented in a unidirectional eye tracer (tracing rays from the camera until they hit light sources)? And what does it mean to just send rays out of the camera without accounting for this (e.g. like SmallPT)?
Thanks,
-G
(L) [2014/07/05] [tby ingenious] [Camera Weighting in Eye Tracing] In a physical camera, the exposure length and aperture size determine the number of photons that will hit the sensor. More photons means less noise in the image. So bright daylight photos are typically much smoother than low-light photos. We don't have this constraint in graphics, because we can "freeze time" and sample as many paths as we need to obtain a noise-free image, regardless of the overall brightness.
(L) [2014/07/07] [tby Geometrian] [Camera Weighting in Eye Tracing] Yes . . . but how is the proper weighting computed?
Look at SmallPT. It does not model depth-of-field, which means it is a pinhole camera. Since you can see anything at all, it has an infinite exposure time--for a very fuzzily defined notion of "infinite". If you were to increase the aperture at all (thus simulating depth of field), the image would suddenly be infinitely overexposed. But, in practice, you just generate the camera rays slightly differently, which doesn't affect the overall exposure at all.
The question is: what is the correct way to account for this in eye-to-light path tracing?
(L) [2014/07/07] [tby tarlack] [Camera Weighting in Eye Tracing] Why would the exposure time be infinite because the camera is a pinhole? In rendering I don't see any link between time and aperture size. In the limit, as ingenious said, the basic setup is to consider a Dirac in time, and you simulate for a single value of time. In rendering, we can simulate many more photons than would be available in reality, but that's not related to the shutter opening time; otherwise you would have to manage animation for correct motion blur.
Here, the temporal weighting is implicit: you divide the temporal response of your theoretical sensor (a Dirac with respect to time) by the pdf with respect to time, which is also a Dirac with respect to the same measure, and there you have your implicit weighting of 1.
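To make that explicit (W for the sensor's temporal response, p for the sampling pdf, L for radiance, and t_0 for the sampled time are notation chosen here purely for illustration):
I = \int W(t)\, L(t)\, \mathrm{d}t \approx \frac{W(t_0)\, L(t_0)}{p(t_0)}, \qquad W(t) = p(t) = \delta(t - t_0) \;\Rightarrow\; \frac{W(t_0)}{p(t_0)} = 1,
so the single time sample carries an implicit weight of 1 and no explicit shutter term appears in the estimator.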
(L) [2014/07/07] [tby Geometrian] [Camera Weighting in Eye Tracing] Wayback!>> tarlack wrote:Why would the exposure time be infinite because the camera is a pinhole ?Think about it this way: the larger the aperture, the more paths there are more light to hit the sensor. For a true pinhole camera, there is only one light path that hits each point on the sensor. This is why real "pinhole" cameras need very long exposure times to get visible images.
>> In rendering I don't see any link between time and aperture size.
To clarify, these are two separate effects, both contributing to exposure -> brightness. However, neither is being simulated in, e.g., SmallPT.
(L) [2014/07/07] [tby tarlack] [Camera Weighting in Eye Tracing] Well, we do not deal with real cameras here, nor with real photons; we do not consider true monochromatic light propagation. We deal with a virtual scene and a virtual world where an arbitrary number of spectral samples can be traced for any arbitrarily small interval of time. The pinhole camera you're talking about does not exist in reality, and the paths carrying spectral energy information you are talking about do not exist in reality either: billions of monochromatic photons strike the sensor through a non-zero finite aperture during a non-zero time interval when shooting a real photo, but you are not simulating them all, are you?
We are talking about maths here, not reality; that's why we can handle paths that carry spectral information and compute full integrals with just a few samples.
(L) [2014/07/07] [tby Geometrian] [Camera Weighting in Eye Tracing] I . . . understand that. You can send as many photons as you like. That's not the issue though.
The issue is that aperture size and shutter speed affect brightness. This is true for virtual cameras too. It has nothing to do with how many paths you trace or how many exist. Here's a decent image:
[IMG #1 Image]
The f-number effectively encodes the aperture size (a larger f-number means a smaller aperture). Notice how changing the aperture size (y-axis) affects the brightness. Similarly, the shutter speed (x-axis) affects the brightness.
Both of these effects are completely ignored in SmallPT, but they are important and I want to know how to model them for eye-to-light sampling.
[IMG #1]:Not scraped:
https://web.archive.org/web/20161005162753im_/http://imgsv.imaging.nikon.com/history/basics/04/img/0402_15.jpg
(L) [2014/07/07] [tby tarlack] [Camera Weighting in Eye Tracing] ...the aperture size and exposure time affect the total amount of energy you will collect, which you compute using the equations of physically based rendering. The more samples, the more precise the result. You can add extra weighting functions to simulate the effect of ISO, or the noise due to the discrepancy of exposure between sensor pixels (which will give you the grainy noise we expect from high-ISO / low-exposure-time / high-aperture photos).
The effect on brightness only comes from processing the collected energy to get the sensor response (with some kind of sensor saturation model, maybe temporal for more accuracy) + tone mapping (close to raw processing in photography), with the famous exposure compensation setting.
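As a sketch of that processing step (the saturation curve, the EV convention, and the names sensor_response / exposure_compensation are assumptions made here for illustration, not taken from SmallPT or any specific renderer):
Code:
#include <algorithm>
#include <cmath>

// Map the collected per-pixel energy to a displayable value.
// 'exposure_compensation' is in EV stops: each stop doubles the effective energy.
double sensor_response(double energy, double exposure_compensation)
{
    double scaled   = energy * std::pow(2.0, exposure_compensation); // exposure compensation
    double response = 1.0 - std::exp(-scaled);                       // assumed saturating sensor model
    return std::pow(std::clamp(response, 0.0, 1.0), 1.0 / 2.2);      // gamma encode for display
}
Changing exposure_compensation (or the saturation model) changes the brightness of the final image without tracing a single extra path, which is the point being made above.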
(L) [2014/07/07] [tby Geometrian] [Camera Weighting in Eye Tracing] I believe I have figured it out myself.
To figure out the total power hitting a point on the image sensor, you need to compute the integral of the radiance hitting it over the lens. For Monte Carlo, this is just your radiance estimate (one ray from the image sensor through the lens to the light) divided by the pdf (for a uniformly sampled lens, 1/lens_area). For a circular lens/aperture of radius r, this works out to a final power estimate of \pi*r^2*estimate. Note that this has the correct behavior. For a large aperture, the power is larger. For a small aperture, it is smaller--and a pinhole camera gets zero.
To figure out the total energy, you need to integrate this power over time. In fact, the aperture can be written as a function of time to model the shutter. But the point is that integrating the power gives you the energy.
The energy gets plugged into the sensor's response curve, thus giving a "measured value", and therefore a color.
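A minimal sketch of that chain (Vec, radiance_along, and pixel_energy are placeholder names invented here; the cosine/geometry terms between the sensor point and the lens are left out, matching the simplified estimate above):
Code:
#include <cmath>
#include <random>

struct Vec { double x, y, z; };
const double PI = 3.14159265358979323846;

// Placeholder for the actual path tracer: radiance arriving at 'sensor_point'
// through 'lens_point'. Returns a constant only so the sketch is self-contained.
double radiance_along(const Vec& sensor_point, const Vec& lens_point) { return 1.0; }

// One-sample estimate of the energy collected at a sensor point:
// radiance / pdf, with pdf = 1/lens_area for uniform lens sampling,
// then integrated over a square-wave shutter (multiply by shutter_time).
double pixel_energy(const Vec& sensor_point, double aperture_radius,
                    double shutter_time, std::mt19937& rng)
{
    std::uniform_real_distribution<double> uni(0.0, 1.0);
    double r     = aperture_radius * std::sqrt(uni(rng)); // uniform point on the disc
    double theta = 2.0 * PI * uni(rng);
    Vec lens_point = { r * std::cos(theta), r * std::sin(theta), 0.0 };

    double lens_area = PI * aperture_radius * aperture_radius;        // pdf = 1 / lens_area
    double power     = radiance_along(sensor_point, lens_point) * lens_area;
    return power * shutter_time;                                      // energy = power * time
}
A larger aperture_radius or a longer shutter_time gives more energy, and a true pinhole (aperture_radius = 0) gives exactly zero, matching the behavior described above.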
To answer my other question: if you don't take this into account, you're doing it wrong.  E.g., SmallPT is effectively using a circular aperture with radius \sqrt{1/\pi} (corresponding to an area of 1 square meter) but ignoring the massive depth of field this would produce, and is integrating over one second with a square-wave shutter.  Alternately, a smaller aperture with a longer exposure time gives the same exposure (for example an aperture of radius 1 cm and an exposure time of ~3183 seconds).
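Checking the numbers: both settings collect the same aperture-area-times-time product, \pi (\sqrt{1/\pi})^2 \cdot 1\,\mathrm{s} = 1\ \mathrm{m^2\,s} and \pi (0.01\,\mathrm{m})^2 \cdot 3183\,\mathrm{s} \approx 1\ \mathrm{m^2\,s}.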
(L) [2014/07/07] [tby tarlack] [Camera Weighting in Eye Tracing] I agree with you on most things, except the last point about how smallPT deals with exposure time/aperture (and not specifically smallPT; note that I did not use it).
smallPT does not use a large aperture or an extra shutter time. It just models a non-physically-plausible sensor, whose response function over time is a Dirac (with weight, i.e. its integral, equal to 1 most of the time), and a non-physical lens, whose response function with respect to incident direction is a Dirac as well (also with weight 1). This is completely non-physical, we agree on that, and that's why it is replaced by more physical models and temporal response functions in more advanced renderers.
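In the same made-up notation as before, the lens part is just the directional analogue: W(\omega) = p(\omega) = \delta(\omega - \omega_{pinhole}) \;\Rightarrow\; W(\omega_0)/p(\omega_0) = 1, so each camera ray again carries an implicit weight of 1, independent of any physical aperture area or shutter time.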