
Shadow Matting and Compositing

Yung-Yu Chuang (1)     Dan B Goldman (1,3)     Brian Curless (1)     David Salesin (1,2)     Richard Szeliski (2)

(1) University of Washington     (2) Microsoft Research     (3) Industrial Light & Magic



Abstract
In this paper, we describe a method for extracting shadows from one natural scene and inserting them into another. We develop physically-based shadow matting and compositing equations and use these to pull a shadow matte from a source scene in which the shadow is cast onto an arbitrary planar background. We then acquire the photometric and geometric properties of the target scene by sweeping oriented linear shadows (cast by a straight object) across it. From these shadow scans, we can construct a shadow displacement map without requiring camera or light source calibration. This map can then be used to deform the original shadow matte. We demonstrate our approach for both indoor scenes with controlled lighting and for outdoor scenes using natural lighting.
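In rough terms (a paraphrase of the model, not the paper's exact notation), the shadow compositing equations treat each observed pixel I as a blend of a fully lit image L and a fully shadowed image S, weighted by a per-pixel shadow matte β:

    I = \beta\,L + (1 - \beta)\,S

with β normally in [0, 1]. Shadow matting recovers β (along with L and S) from the source footage; shadow compositing applies the same blend using the target scene's own lit and shadow images, after the matte has been deformed by the displacement map.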

Citation (BibTeX)
Yung-Yu Chuang, Dan B. Goldman, Brian Curless, David H. Salesin, and Richard Szeliski. Shadow Matting and Compositing. ACM Transactions on Graphics (Proceedings of ACM SIGGRAPH 2003), Vol. 22, No. 3, pages 494-500, July 2003.

Paper

SIGGRAPH 2003 paper (1.8MB PDF)

Video in SIGGRAPH 2003 video proceedings
720x480 DivX AVI (52.3 MB)
360x240 DivX AVI (38.5 MB)
(Download the DivX codec from www.divx.com)

Results

Shadow matting
(a) Source frame (b) Source lit image (c) Source shadow image (d) Recovered β
Shadow matting. Starting from a source image sequence (one frame shown in (a)), we first remove the foreground character using video matting. Our shadow matting algorithm recovers the lit and shadow images (b,c) using max/min compositing. It then estimates β by projecting observed pixel colors onto the color lines between them (d).
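A minimal sketch of this estimation step, assuming the max/min compositing and per-pixel line projection are done directly with NumPy (function and variable names are illustrative, not taken from the paper's implementation):

    import numpy as np

    def estimate_shadow_matte(frames, eps=1e-6):
        # frames: float array (T, H, W, 3) with values in [0, 1].
        # Max/min compositing over time approximates the fully lit and
        # fully shadowed image at each pixel.
        lit = frames.max(axis=0)        # (H, W, 3)
        shadow = frames.min(axis=0)     # (H, W, 3)

        # Project each observed color onto the per-pixel color line
        # between the shadow and lit colors:
        #   beta = <C - S, L - S> / ||L - S||^2
        d = lit - shadow
        denom = np.maximum((d * d).sum(axis=-1), eps)   # (H, W)
        beta = ((frames - shadow) * d).sum(axis=-1) / denom
        return np.clip(beta, 0.0, 1.0), lit, shadow     # beta: (T, H, W)

In practice the estimator must also deal with pixels where the lit and shadow colors nearly coincide; the clamp above is only a crude stand-in for that handling.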

Shadow compositing
(a) Target lit image (b) Target shadow image (c) Composite without displacement (d) Geometric shadow scanning
(e) Regular grid on the reference plane (f) Displacement map visualization (g) Composite with displacement (h) Using another background
Shadow compositing. Lit and shadow images are recovered for the target geometry as well (a,b). A composite made without displacement is unconvincing because the shadow does not conform to the background geometry (c). We acquire multiple oriented scans of a straight-line shadow (d) and compute line equations in the user-specified reference plane region, shown here with hatched blue lines. We then recover a displacement map (f) from the target scene to the source reference plane (e). This map distorts the shadow into the correct shape (g). The result using a second background is shown in (h).
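The warp-and-composite step might look roughly like the sketch below. It assumes the displacement map has already been recovered from the shadow scans and is stored as per-pixel (row, column) offsets; the nearest-neighbor lookup and the offset convention are assumptions made for illustration, not the paper's implementation:

    import numpy as np

    def shadow_composite(beta_src, disp, lit_tgt, shadow_tgt):
        # beta_src:  (H, W) shadow matte pulled from the source scene.
        # disp:      (H, W, 2) per-pixel (row, col) offsets mapping each
        #            target pixel onto the source reference plane.
        # lit_tgt, shadow_tgt: (H, W, 3) lit/shadow images of the target.
        H, W = beta_src.shape
        rows, cols = np.mgrid[0:H, 0:W].astype(np.float64)

        # Warp the matte by looking it up at the displaced coordinates
        # (nearest neighbor here; a real implementation would interpolate).
        r = np.clip(np.round(rows + disp[..., 0]), 0, H - 1).astype(int)
        c = np.clip(np.round(cols + disp[..., 1]), 0, W - 1).astype(int)
        beta = beta_src[r, c][..., None]                # (H, W, 1)

        # Blend the target's lit and shadow images by the warped matte.
        return beta * lit_tgt + (1.0 - beta) * shadow_tgt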

More results
(a) Source frame (b) Target frame (c) Composite
An example of more complicated background geometry. The foreground element and shadow are extracted from a source frame (a) and transferred to a target frame (b) containing ivy to obtain a composite (c).

(a) Source frame (b) Target frame (c) Composite
Honey, I Shrunk the Grad Student! A source (a) and target (b) for a miniature composite (c).

Additional results
Input (top row) and composite (bottom row): (a) Multiple light sources (b) Color-filtered shadows (c) Caustics
Our method can be extended to transfer shadows from multiple light sources (a), color-filtered shadows (b), and caustics (c). For multiple light sources, the shadow image is taken with all of the lights turned off. Using a multichannel β permits color-filtered shadows. We obtain caustics by allowing β to exceed unity.
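A hedged sketch of how these extensions fit the same per-pixel blend: making β a three-channel quantity gives a different attenuation per color channel (a color-filtered shadow), and letting β exceed 1 pushes the result brighter than the lit image, which is how caustics are approximated here (names are illustrative):

    import numpy as np

    def composite_extended(beta, lit, shadow):
        # beta: (H, W, 3) per-channel matte.  Different values per channel
        # give a color-filtered shadow; values above 1 brighten the pixel
        # beyond the lit image, approximating caustics.
        return beta * lit + (1.0 - beta) * shadow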


cyy -a-t- cs.washington.edu