using monte carlo integration
to solve the rendering equation

dr. jon denning
assistant professor of cse
taylor university

Frank S. Brenneman Lecture Series

the rendering equation

computer-generated photo-realistic images are created by accurately simulating the way light interacts with the world

good simulations require good modeling of lighting, materials, lenses, and the transport of light

(not a photo-realistic example)

the rendering equation

rendering systems, such as ray tracers and path tracers, use the rendering equation to describe how light bounces around the virtual scene until it enters the camera's lens

\[L_o (\mathbf{x}, \omega_o) = L_e(\mathbf{x}, \omega_o) + \int_\Omega \rho(\mathbf{x}, \omega_i, \omega_o) L_i(\mathbf{x}, \omega_i) (\omega_i \cdot \mathbf{n}) d\omega_i\]

the rendering equation describes the total amount of light (\(L_o\)) leaving a point (\(\mathbf{x}\)) along a particular direction (\(\omega_o\)), given a function for all incoming light (\(L_i\)) over the hemisphere (\(\Omega\)) and a reflectance function (\(\rho\))

the rendering equation

\[L_o (\mathbf{x}, \omega_o) = L_e(\mathbf{x}, \omega_o) + \int_\Omega \rho(\mathbf{x}, \omega_i, \omega_o) L_i(\mathbf{x}, \omega_i) (\omega_i \cdot \mathbf{n}) d\omega_i\]

\(L_e\) and \(\rho\) are relatively easy to define well, and the equation is fairly straightforward, yet the results from such a simple equation can be quite stunning

[ luc bégin, link ]
[ jungwon park, link ]
[ valkyrie, link ]
[ valkyrie, link ]
[ valkyrie, link ]

solving the rendering equation

so, how do we (efficiently) solve this?

\[L_o (\mathbf{x}, \omega_o) = L_e(\mathbf{x}, \omega_o) + \int_\Omega \rho(\mathbf{x}, \omega_i, \omega_o) L_i(\mathbf{x}, \omega_i) (\omega_i \cdot \mathbf{n}) d\omega_i\]


solving the rendering equation

so, how do we (efficiently) solve this?

\[L_o (\mathbf{x}, \omega_o) = L_e(\mathbf{x}, \omega_o) + \int_\Omega \rho(\mathbf{x}, \omega_i, \omega_o) L_i(\mathbf{x}, \omega_i) (\omega_i \cdot \mathbf{n}) d\omega_i\]



monte carlo integration to the rescue!

note

in the interest of time, this talk focuses on intuition and motivation rather than a deeper treatment of statistics or its application in computer graphics

just sit back and enjoy

integrals and averages

integral of a function over a domain

\[\int_{\mathbf{x}\in D} f(\mathbf{x}) dA_\mathbf{x}\]

"size" of a domain

\[A_D = \int_{\mathbf{x} \in D} dA_\mathbf{x}\]

average of a function over a domain

\[\frac{\int_{\mathbf{x}\in D} f(\mathbf{x}) dA_\mathbf{x}}{\int_{\mathbf{x} \in D} dA_\mathbf{x}} = \frac{\int_{\mathbf{x}\in D} f(\mathbf{x}) dA_\mathbf{x}}{A_D}\]

integrals and averages examples

average "daily" snowfall in Hillsboro last year

\[\frac{\int_{\mathit{day}\in\mathit{year}} s(\mathit{day}) d\mathit{length}(\mathit{day})}{\mathit{length}(\mathit{year})}\]

integrals and averages examples

"today" average snowfall in Kansas

\[\frac{\int_{\mathit{location}\in\mathit{Kansas}} s(\mathit{location})\,d\mathit{area}(\mathit{location})}{\mathit{area}(\mathit{Kansas})}\]

integrals and averages examples

"average" snowfall in Kansas per day this year

\[\frac{\int_{\mathit{day}\in\mathit{year}}\int_{\mathit{loc}\in\mathit{KS}} s(\mathit{loc},\mathit{day})\, d\mathit{area}(\mathit{loc})\,d\mathit{length}(\mathit{day})}{ \mathit{area}(\mathit{KS})\, \mathit{length}(\mathit{year})}\]

discrete random variable

expected value and variance

estimating expected values

\[E[x] \approx \frac{1}{N} \sum_{i=1}^N x_i\]

(strong) law of large numbers

\[\text{P} \left[ E[x] = \lim_{N \rightarrow \infty} \frac{1}{N} \sum_{i=1}^N x_i \right] = 1\]
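as a quick sanity check, here is a minimal python sketch (my own illustration, not from the slides) estimating the expected value of a fair six-sided die with a sample mean; by the law of large numbers the estimate approaches the true value of 3.5 as \(N\) grows

```python
import random

random.seed(1)

def estimate_die_mean(n):
    # sample-mean estimate of E[x] for a fair six-sided die
    total = 0
    for _ in range(n):
        total += random.randint(1, 6)
    return total / n

# the running estimate approaches the true expected value, 3.5
for n in (10, 1_000, 100_000):
    print(n, estimate_die_mean(n))
```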

continuous random variable

uniformly distributed random variable

\[p(x) = \frac{1}{b-a}, \qquad x \in [a, b]\]

[ bala ]
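a small sketch (an assumed example, not from the talk) that draws samples from this density and checks the familiar mean \((a+b)/2\) and variance \((b-a)^2/12\):

```python
import random

random.seed(1)

a, b = 2.0, 5.0
n = 100_000

# draw samples from the uniform density p(x) = 1/(b-a) on [a, b]
samples = [random.uniform(a, b) for _ in range(n)]

mean = sum(samples) / n
var = sum((x - mean) ** 2 for x in samples) / n

print(mean, (a + b) / 2)        # sample mean vs. true mean 3.5
print(var, (b - a) ** 2 / 12)   # sample variance vs. true variance 0.75
```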

expected value and variance

multidimensional random variables

\[E[g(\mathbf{x})] = \int_{\mathbf{x} \in D} g(\mathbf{x}) p(\mathbf{x}) dA_\mathbf{x}\]

deterministic numerical integration

\[I = \int_a^b f(x)dx \qquad\qquad I \approx \sum_j f(x_j) \Delta x\]

[ bala ]
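a minimal sketch of the deterministic estimate above, using a midpoint rule (my own choice of where to place the samples) to integrate \(\sin(x)\) over \([0, \pi]\), whose true value is 2:

```python
import math

def riemann_sum(f, a, b, n):
    # deterministic quadrature: sum f(x_j) * dx over a regular grid (midpoint rule)
    dx = (b - a) / n
    return sum(f(a + (j + 0.5) * dx) for j in range(n)) * dx

# example: the integral of sin(x) over [0, pi] is exactly 2
print(riemann_sum(math.sin, 0.0, math.pi, 1_000))
```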

monte carlo numerical integration

\[I = \int_a^b \frac{f(x)}{p(x)} p(x)dx \approx \frac{1}{N} \sum_{i=1}^N \frac{f(x_i)}{p(x_i)}\]

monte carlo numerical integration

intuition: evaluate the function at random points and average the results to estimate the area

\[I = \int_a^b f(x) dx \qquad\qquad I \approx \bar{I} = \frac{1}{N} \sum_{i=1}^N \frac{f(x_i)}{p(x_i)}\]

[ bala ]
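the same integral as before, estimated with monte carlo and sketched here with the simplest possible choice of pdf, a uniform \(p(x) = 1/(b-a)\):

```python
import math
import random

random.seed(1)

def mc_integrate(f, a, b, n):
    # uniform sampling: p(x) = 1/(b-a), so each term f(x_i)/p(x_i) = f(x_i) * (b-a)
    total = 0.0
    for _ in range(n):
        x = random.uniform(a, b)
        total += f(x) * (b - a)
    return total / n

# the integral of sin(x) over [0, pi] is 2
print(mc_integrate(math.sin, 0.0, math.pi, 100_000))
```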

monte carlo numerical integration

formally, we can prove that

\[\bar{I} = \frac{1}{N} \sum_{i=1}^N \frac{f(x_i)}{p(x_i)} \quad\Rightarrow\quad E[\bar{I}] = E\!\left[\frac{f(x)}{p(x)}\right] = I\]

meaning that if we were to evaluate the integral many times using this procedure, we would get, on average, the true value of the integral (the estimator is unbiased)

variance of the estimate: \(\sigma^2[\bar{I}] = \frac{1}{N} \sigma^2\!\left[\frac{f(x)}{p(x)}\right]\)
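to see both properties numerically, here is a sketch (again my own illustration) that repeats the estimator many times for several values of \(N\): the average of the estimates stays near the true value 2, while their variance shrinks roughly like \(1/N\)

```python
import math
import random

random.seed(1)

def mc_estimate(f, a, b, n):
    # one monte carlo estimate with a uniform pdf p(x) = 1/(b-a)
    return sum(f(random.uniform(a, b)) * (b - a) for _ in range(n)) / n

# repeat the estimator 500 times per N and measure the spread of the estimates
for n in (10, 100, 1_000):
    estimates = [mc_estimate(math.sin, 0.0, math.pi, n) for _ in range(500)]
    mean = sum(estimates) / len(estimates)
    var = sum((e - mean) ** 2 for e in estimates) / len(estimates)
    print(n, mean, var)   # mean stays near 2, variance drops roughly like 1/N
```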

example: integral of constant function

analytic integration

\[I = \int_a^b f(x)dx = \int_a^b kdx = k(b-a)\]

monte carlo integration

\[\begin{array}{rl} I & = \int_a^b f(x)\,dx = \int_a^b k\,dx \\ & \approx \frac{1}{N} \sum_{i=1}^N \frac{f(x_i)}{p(x_i)} = \frac{1}{N} \sum_{i=1}^N k(b-a) \\ & = \frac{N}{N}\, k(b-a) = k(b-a) \end{array}\]
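a quick sketch confirming the derivation: with a constant integrand and a uniform pdf, every sample contributes exactly \(k(b-a)\), so the estimate is exact with zero variance

```python
import random

random.seed(1)

a, b, k = 1.0, 4.0, 2.5

def f(x):
    return k                   # constant integrand

def p(x):
    return 1.0 / (b - a)       # uniform pdf on [a, b]

def mc_constant(n):
    total = 0.0
    for _ in range(n):
        x = random.uniform(a, b)
        total += f(x) / p(x)   # = k * (b - a) for every sample
    return total / n

# exact answer k*(b-a) = 7.5 even with a single sample
print(mc_constant(1), mc_constant(1_000), k * (b - a))
```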

example: computing \(\pi\)

take the square \([0,1]^2\) with a quarter-circle in it

\[\begin{array}{rcl} A_{\mathit{qcircle}} & = & \int_0^1 \int_0^1 f(x,y)\, dx\, dy \\ f(x,y) & = & \begin{cases} 1 & (x,y) \in \mathit{qcircle} \\ 0 & \text{otherwise} \end{cases} \end{array}\]

[ mit opencourseware ]

example: computing \(\pi\)

estimate the area of the quarter-circle by tossing points into the unit square and evaluating \(f\)

\[A_\mathit{qcircle} \approx \frac{1}{N} \sum_{i=1}^N f(x_i, y_i)\]

[ mit opencourseware ]

example: computing \(\pi\)

since the quarter-circle has area \(\pi/4\), scaling the estimate by 4 gives an estimate of \(\pi\)

\[\pi \approx \frac{4}{N} \sum_{i=1}^N f(x_i,y_i)\]
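the whole estimator fits in a few lines of python (a sketch of the idea, not code from the talk):

```python
import random

random.seed(1)

def estimate_pi(n):
    # toss points uniformly in the unit square and count how many
    # land inside the quarter-circle x^2 + y^2 <= 1
    hits = 0
    for _ in range(n):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            hits += 1
    return 4.0 * hits / n

print(estimate_pi(1_000_000))   # approaches 3.14159... as N grows
```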

monte carlo numerical integration

\[ I = \int_{\mathbf{x} \in D} f(\mathbf{x}) dA_\mathbf{x} \approx \frac{1}{N} \sum_{i=1}^N \frac{f(\mathbf{x}_i)}{p(\mathbf{x}_i)}\]

monte carlo numerical integration

[ bala ]
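putting it all together, here is a hedged sketch (an entirely toy setup of my own, not the talk's renderer) of how a path tracer might estimate the reflected term of the rendering equation at a single shading point: sample directions \(\omega_i\) uniformly over the hemisphere (pdf \(1/2\pi\)), evaluate an assumed incoming-radiance function \(L_i\) and a simple lambertian reflectance \(\rho = \mathit{albedo}/\pi\), weight by the cosine term, and average

```python
import math
import random

random.seed(1)

def sample_hemisphere():
    # uniform direction on the unit hemisphere around n = (0, 0, 1); pdf = 1/(2*pi)
    u1, u2 = random.random(), random.random()
    z = u1                        # cos(theta)
    r = math.sqrt(max(0.0, 1.0 - z * z))
    phi = 2.0 * math.pi * u2
    return (r * math.cos(phi), r * math.sin(phi), z)

def incoming_radiance(wi):
    # toy stand-in for L_i: a bright patch of "sky" near the zenith
    return 5.0 if wi[2] > 0.8 else 0.5

def estimate_outgoing_radiance(albedo, n_samples):
    # monte carlo estimate of the integral term of the rendering equation
    rho = albedo / math.pi        # lambertian reflectance (constant brdf)
    pdf = 1.0 / (2.0 * math.pi)   # uniform hemisphere pdf
    total = 0.0
    for _ in range(n_samples):
        wi = sample_hemisphere()
        cos_theta = wi[2]         # (w_i . n) with n = (0, 0, 1)
        total += rho * incoming_radiance(wi) * cos_theta / pdf
    return total / n_samples

print(estimate_outgoing_radiance(albedo=0.8, n_samples=100_000))
```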