Camera Simulator is a real-time photography learning tool that lets users adjust ISO, aperture, and shutter speed and see the photographic result immediately. The core challenge: how do you simulate film grain, exposure curves, and motion blur in the browser at 60fps? The answer: WebGL fragment shaders composited over a Canvas 2D scene.
Architecture overview
- Canvas 2D draws the animated scene each frame
- Three.js reads the canvas as a CanvasTexture
- Pass 1: Exposure fragment shader (gamma-corrected brightness)
- Pass 2: ISO noise fragment shader (time-seeded random grain)
- WebGLRenderer outputs to screen
The exposure shader
The key insight is working in linear light space — you need to undo the gamma encoding before applying the exposure multiplier, then re-apply gamma for display. Skipping this causes the highlights to clip incorrectly and midtones to shift unnaturally.
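To see why the linearization step matters, here is the same math in plain JS for a single channel value (a sketch; the 2.2 gamma matches the shader, and the "naive" path skips linearization):

```javascript
const GAMMA = 2.2;

// Correct path: decode to linear light, scale, re-encode for display.
function exposeLinear(srgb, exposure) {
  const linear = Math.pow(srgb, GAMMA);           // undo gamma
  const exposed = Math.min(linear * exposure, 1); // exposure multiplier
  return Math.pow(exposed, 1 / GAMMA);            // re-apply gamma
}

// Naive path: scale the gamma-encoded value directly.
function exposeNaive(srgb, exposure) {
  return Math.min(srgb * exposure, 1);
}

// A mid-grey pixel pushed one stop brighter:
console.log(exposeLinear(0.5, 2).toFixed(3)); // 0.685: brighter, detail kept
console.log(exposeNaive(0.5, 2).toFixed(3));  // 1.000: clipped straight to white
```

The naive path blows mid-grey out to pure white because 0.5 in sRGB encoding is far brighter than 50% of linear light.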
uniform sampler2D tDiffuse;
uniform float uExposure;
varying vec2 vUv;
void main() {
  vec4 color = texture2D(tDiffuse, vUv);
  // linearize (undo gamma)
  vec3 linear = pow(color.rgb, vec3(2.2));
  // apply exposure multiplier
  vec3 exposed = clamp(linear * uExposure, 0.0, 1.0);
  // re-apply gamma for display
  gl_FragColor = vec4(pow(exposed, vec3(1.0 / 2.2)), color.a);
}

The exposure value comes from the EV formula: EV = log2((N² / t) / (ISO / 100)), where N is the f-number and t is the shutter speed in seconds. The exposure multiplier is then 2^(EV_target − EV_actual), keeping the math consistent with real camera behavior.
The ISO noise shader
Real camera noise has two components: luma noise (brightness variation) and chroma noise (color variation, especially in R and B channels). Separating them is what makes the result look like actual sensor noise rather than uniform grey static.
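The ISO-to-grain mapping used in the shader below can be sanity-checked in plain JS first (a sketch mirroring the shader's `nf` computation; `noiseFactor` is an illustrative name):

```javascript
// Mirrors the shader's nf: zero at ISO 800 and below, ramping to 1.0 at
// ISO 6400 (800 + 5600), with a ^1.5 curve so low-ISO grain stays subtle.
function noiseFactor(iso) {
  const nf = Math.min(Math.max((iso - 800) / 5600, 0), 1);
  return Math.pow(nf, 1.5);
}

console.log(noiseFactor(400));  // 0: clean image, shader early-outs
console.log(noiseFactor(1600)); // ≈ 0.054: barely visible grain
console.log(noiseFactor(6400)); // 1: full grain strength
```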
uniform sampler2D tDiffuse;
uniform float uISO;
uniform float uTime;
varying vec2 vUv;
float rand(vec2 co) {
  return fract(sin(dot(co, vec2(12.9898, 78.233))) * 43758.5453);
}
void main() {
  vec4 color = texture2D(tDiffuse, vUv);
  // noise only becomes visible above ISO 800
  float nf = clamp((uISO - 800.0) / 5600.0, 0.0, 1.0);
  nf = pow(nf, 1.5); // perceptual curve
  if (nf < 0.01) { gl_FragColor = color; return; }
  float luma = rand(vUv + mod(uTime, 100.0)) * 2.0 - 1.0;
  float chromaR = rand(vUv * 1.3 + mod(uTime * 1.1, 100.0)) * 2.0 - 1.0;
  float chromaB = rand(vUv * 0.7 + mod(uTime * 0.9, 100.0)) * 2.0 - 1.0;
  float amt = nf * 0.10;
  color.r = clamp(color.r + luma * amt + chromaR * amt * 0.4, 0.0, 1.0);
  color.g = clamp(color.g + luma * amt, 0.0, 1.0);
  color.b = clamp(color.b + luma * amt + chromaB * amt * 0.4, 0.0, 1.0);
  gl_FragColor = color;
}

Motion blur: accumulated canvas frames
Shutter speed affects how much motion is captured during the exposure window. For slow shutters, multiple canvas frames are rendered across the simulated exposure window and averaged together using globalAlpha compositing.
const samples = Math.max(1, Math.round(shutterSeconds * 120));
ctx.clearRect(0, 0, W, H);
for (let i = 0; i < samples; i++) {
  const sampleT = t - shutterSeconds + (i / samples) * shutterSeconds;
  ctx.globalAlpha = 1 / samples;
  drawScene(sceneId, ctx, W, H, sampleT);
}
ctx.globalAlpha = 1;

At 1/1000s this is a single sample, effectively instant. At 1s it accumulates 120 samples, producing natural motion blur on anything that moved between the first and last frame. The helicopter rotor becomes a translucent disc; the waterfall becomes a smooth curtain.
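The sample count scales linearly with shutter time, at 120 samples per simulated second (a quick check of the formula in the snippet above; `sampleCountFor` is an illustrative helper, not from the project):

```javascript
// Number of accumulation samples for a given shutter time, clamped to
// at least one draw (matches Math.max(1, Math.round(shutterSeconds * 120))).
function sampleCountFor(shutterSeconds) {
  return Math.max(1, Math.round(shutterSeconds * 120));
}

console.log(sampleCountFor(1 / 1000)); // 1: fast shutter, a single draw
console.log(sampleCountFor(1 / 60));   // 2: slight smearing begins
console.log(sampleCountFor(1));        // 120: full accumulated blur
```

One caveat worth noting: repeated draws at `globalAlpha = 1 / samples` over a cleared canvas compose to slightly less than full opacity rather than an exact average, though at these sample counts the difference is not visible.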
Performance notes
The bottleneck is Canvas 2D, not WebGL. The shader passes are essentially free at this resolution. Two optimizations matter: cache the static background on a separate canvas (hills, buildings, road — anything that doesn't animate) and only redraw the moving elements on top each frame. This dropped the helicopter scene from ~4ms to ~0.3ms per frame for background rendering.