This article describes my approach to creating the environment-based lighting and reflections shown in the tweet above. The approach consists of several parts:

  1. Acquire a video stream from the webcam
  2. Create a fragment shader that converts view angle to a sample from the video
  3. Create a cubemap that renders the shader for all view angles to produce the metallic/reflective texture
  4. Create a set of mipmaps based on the cubemap to produce the diffuse/rough texture

I can't take credit for coming up with this - I was inspired by other similar demos I've seen, such as this video by Bob Burrough. I just want to try to lay out the approach I used (which is likely full of errors or otherwise suboptimal) and explain some of the challenges I encountered.

TL;DR: Demo link (Looking Glass optional)

Streaming video from the webcam

For this demo, I used a 180 degree fisheye lens webcam (not a referral link). I found its field of view to be a bit less than 180 degrees along the edges, but pretty close to 180 degrees from corner to corner. The wider the angle the better for this demo, but a narrower lens will work too.

Using a set of helping hands from an electronics kit, I positioned the webcam above the Looking Glass.

helping-hands-lkg

It's easy to access the webcam with HTML5 features. Add a hidden <video> tag somewhere in your HTML body.

<video id="video" autoplay style="display:none"></video>

I started by initializing the webcam and attaching the stream to the video element. I used a relatively low resolution (640x480) to prioritize performance.

To get the video stream onto a texture in three.js, I made a THREE.VideoTexture referencing the video element.

/**
 * Initializes a webcam video stream for an HTML video element
 * @param {HTMLVideoElement} videoElement
 * @param {number} width
 * @param {number} height
 */
async function initWebcam(videoElement, width = 640, height = 480) {
  // create a video stream
  let stream;
  try {
    stream = await navigator.mediaDevices.getUserMedia({
      video: {width, height},
    });
  } catch (error) {
    return console.error('Unable to access the webcam.', error);
  }

  // apply the stream to the video element
  videoElement.srcObject = stream;
  videoElement.play();
}


/**
 * Initializes the demo scene.
 */
function initScene() {
  const videoElement = document.getElementById('video');
  
  // initialize the webcam
  initWebcam(videoElement);

  // create a video texture
  const videoTexture = new THREE.VideoTexture(videoElement);
}

initScene();

Let's build a quick demo to check if the webcam is working.

Click to run demo
Demo source
function basicWebcamDemo(container) {
  const videoElement = document.getElementById('video');
  
  // initialize the webcam
  const videoWidth = 640, videoHeight = 480;
  initWebcam(videoElement, videoWidth, videoHeight);

  // create a renderer, scene and camera
  const renderer = initRenderer(container);
  const {width, height} = renderer.getSize();
  const scene = new THREE.Scene();
  const camera = new THREE.OrthographicCamera(
    -width/2, width/2, height/2, -height/2,
    1, 10000,
  );
  camera.position.z = 1000;
  scene.add(camera);

  // create a video texture
  const videoTexture = new THREE.VideoTexture(videoElement);

  // populate the scene with a textured plane
  const aspectRatio = videoHeight / videoWidth;
  const plane = new THREE.PlaneGeometry(width, width*aspectRatio);
  const videoMaterial = new THREE.MeshBasicMaterial({
    map: videoTexture
  });
  const mesh = new THREE.Mesh(plane, videoMaterial);
  mesh.scale.x = -1;
  scene.add(mesh);
  
  animate(renderer, scene, camera);
}

Looks good so far! However, there's a big problem with most setups: webcams will by default continuously adjust their gain and white balance based on the content of the scene. As things move around in front of the camera, the view won't be captured with consistent exposure settings.

You can probably live without fixing that for the purposes of this article, but for the demo to work properly I needed control over these exposure settings. There is no option for this in getUserMedia and no way to modify the resulting video stream in the browser, so I had to look for another way.
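
(If you're curious what, if any, manual controls your own browser and camera expose, you can inspect the video track's capabilities. This is a diagnostic sketch only; the demo doesn't rely on it, and support varies widely across browsers.)

// Diagnostic sketch: list whatever controls this browser/camera combination
// exposes as constrainable properties. On many setups, exposure and white
// balance are not among them (and some browsers don't implement getCapabilities).
const [track] = stream.getVideoTracks();
if (track.getCapabilities) {
  console.log(track.getCapabilities());
}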

I found that by adding a Video Capture Device in OBS, I could use the "Configure Video" button to access a panel that would let me disable automatic white balance and automatic exposure. I adjusted them manually until I reached a combination of settings that worked for my environment. These settings persist after closing the settings panel.

obs-settings

Create the shaders

In order to convert a view angle into a texture coordinate in the video, I created a fragment shader. This shader is not very physically accurate; it just assumes that the edges of the texture map roughly to the edges of a hemispheric view looking out from the camera. A vertex shader is also necessary, but it only passes along the standard vertex positions and normals.

I stored my shaders on script elements somewhere in the DOM and read their contents into a THREE.ShaderMaterial.

<script type="x-shader/x-vertex" id="vertexshader">
  #ifdef GL_ES
  precision highp float;
  #endif

  varying vec2 vUv;
  varying vec3 norm;
    
  void main()
  {
    vUv = uv;
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position,1.0);
    norm = normal;
  }  
</script>
<script type="x-shader/x-fragment" id="fragmentshader">
  #ifdef GL_ES
  precision highp float;
  #endif

  varying vec2 vUv;
  varying vec3 norm;
  uniform sampler2D texture;
    
  void main()
  {
    // use the xy normal to look up the texture position
    // and convert the [-1, 1] range to [0, 1]
    vec2 lookup = (norm.xy + 1.0) / 2.0;
    
    // generate an attenuation factor to darken the back
    float attenuation = min(1.0, norm.z + 1.0);
    
    // flip the x component to mirror the image
    lookup.x = 1.0 - lookup.x;
    
    // look up and output the attenuated texture color
    vec3 color = texture2D(texture, lookup).rgb;
    gl_FragColor = vec4(color * attenuation, 1.0);
  }
</script>

These shaders were then applied to a THREE.ShaderMaterial.

const shaderMaterial = new THREE.ShaderMaterial({
  vertexShader:   document.getElementById('vertexshader').innerText,
  fragmentShader: document.getElementById('fragmentshader').innerText,
  uniforms: {
    texture: new THREE.Uniform(videoTexture),
  },
});

Let's build another demo to check if the shader is working.

Click to run demo
Demo source
function shaderDemo(container) {
  const videoElement = document.getElementById('video');
  
  // initialize the webcam
  const videoWidth = 640, videoHeight = 480;
  initWebcam(videoElement, videoWidth, videoHeight);

  // create a renderer, scene and camera
  const renderer = initRenderer(container);
  const {width, height} = renderer.getSize();
  const scene = new THREE.Scene();
  const camera = new THREE.OrthographicCamera(
    -width/2, width/2, height/2, -height/2,
    1, 10000,
  );
  camera.position.z = 1000;
  scene.add(camera);

  // create a video texture
  const videoTexture = new THREE.VideoTexture(videoElement);
  
  // create the shader material
  const shaderMaterial = new THREE.ShaderMaterial({
    vertexShader:   document.getElementById('vertexshader').innerText,
    fragmentShader: document.getElementById('fragmentshader').innerText,
    uniforms: {
      texture: new THREE.Uniform(videoTexture),
    },
  });

  // populate the scene with geometry & a shader
  const geometry = new THREE.SphereGeometry(100, 10, 10);
  const mesh = new THREE.Mesh(geometry, shaderMaterial);
  scene.add(mesh);
  
  function update(t, dt) {
    // wiggle the mesh a bit
    mesh.rotation.y = 0.5*Math.sin(t) + 0.25 * t;
    mesh.rotation.x = 0.5*Math.cos(t * 0.8);
  }
  
  animate(renderer, scene, camera, update);
}

You'll notice there's some trickery happening here. To account for the camera's limited field of view, I mirrored the view on the rear side of the sphere. To limit how much this distorts reality, I applied an attenuation factor toward the back of the sphere.

If you have a 360 degree camera (likely made from a pair of front and back 180 degree cameras), you can use two textures and sample from the reverse camera when norm.z < 0, omitting the attenuation factor entirely, as sketched below.
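
Here's a rough, untested sketch of that idea. The frontVideoTexture and rearVideoTexture uniforms are hypothetical (a second VideoTexture fed by the rear camera), and the exact mirroring of the rear lookup depends on how the two lenses are mounted.

const fragmentShader360 = `
  precision highp float;

  varying vec3 norm;
  uniform sampler2D frontTexture;
  uniform sampler2D rearTexture;

  void main()
  {
    // same hemispheric lookup as before, mirrored
    vec2 lookup = (norm.xy + 1.0) / 2.0;
    lookup.x = 1.0 - lookup.x;

    // sample whichever camera the normal faces; no attenuation needed
    vec3 color;
    if (norm.z >= 0.0) {
      color = texture2D(frontTexture, lookup).rgb;
    } else {
      // the rear image is mirrored relative to the front
      color = texture2D(rearTexture, vec2(1.0 - lookup.x, lookup.y)).rgb;
    }
    gl_FragColor = vec4(color, 1.0);
  }
`;

const shaderMaterial360 = new THREE.ShaderMaterial({
  vertexShader: document.getElementById('vertexshader').innerText,
  fragmentShader: fragmentShader360,
  uniforms: {
    frontTexture: new THREE.Uniform(frontVideoTexture),
    rearTexture: new THREE.Uniform(rearVideoTexture),
  },
});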

Create a cubemap for reflections

I created a cubemap by placing a set of cameras inside the sphere. They look out at the world as rendered by the shader and render their views to a set of textures that we can use for reflections. three.js provides a helper class, THREE.CubeCamera, that does exactly this.

This part requires two scenes. In the first, I placed the sphere and the CubeCamera; this scene is rendered independently. The second scene uses the texture rendered by the first as an environment map.
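
Concretely, the per-frame render order looks something like this (the demos below wrap it in an animate helper):

function render() {
  // 1. render the shader-wrapped sphere in cubeMapScene to the cube render target
  cubeCamera.update(renderer, cubeMapScene);

  // 2. render the main scene, whose materials sample that target as an envMap
  renderer.render(scene, camera);
}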

The cubemap resolution does not need to be very high - 128x128 for each face seemed like plenty. Once the cubemap was rendered, I applied it to a THREE.MeshStandardMaterial with a full metallic value and no roughness.

Creating a scene for the cubemap is pretty straightforward.

// create a scene for the cubemap
const cubeMapScene = new THREE.Scene();
const cubeCamera = new THREE.CubeCamera(1, 1000, 128);

// populate the cubemap scene with a sphere & shader
const sphere = new THREE.SphereGeometry(100, 15, 15);
const sphereMesh = new THREE.Mesh(sphere, shaderMaterial);
cubeMapScene.add(sphereMesh);

I then used the cube camera's render target texture as the environment map in a THREE.MeshStandardMaterial.

const material = new THREE.MeshStandardMaterial({
  color: 0xffffff,
  metalness: 1.0,
  roughness: 0.0,
  envMap: cubeCamera.renderTarget.texture,
});

One thing to remember is that on every frame, the cube camera must be updated.

// on every frame...
cubeCamera.update(renderer, cubeMapScene);

Click to run demo
Demo source
function cubeMapDemo(container) {
  
  const videoElement = document.getElementById('video');
  
  // initialize the webcam
  const videoWidth = 640, videoHeight = 480;
  initWebcam(videoElement, videoWidth, videoHeight);
  
  // create a video texture
  const videoTexture = new THREE.VideoTexture(videoElement);
  
  // create the shader material
  const shaderMaterial = new THREE.ShaderMaterial({
    vertexShader: document.getElementById('vertexshader').innerText,
    fragmentShader: document.getElementById('fragmentshader').innerText,
    uniforms: {
      texture: new THREE.Uniform(videoTexture),
    },
    side: THREE.DoubleSide,
  });

  // create a scene for the cubemap
  const cubeMapScene = new THREE.Scene();
  const cubeCamera = new THREE.CubeCamera(1, 1000, 128);

  // populate the cubemap scene with a sphere & shader
  const sphere = new THREE.SphereGeometry(100, 15, 15);
  const sphereMesh = new THREE.Mesh(sphere, shaderMaterial);
  cubeMapScene.add(sphereMesh);

  // create a renderer, scene and camera
  const renderer = initRenderer(container);
  const {width, height} = renderer.getSize();
  const scene = new THREE.Scene();
  const camera = new THREE.OrthographicCamera(
    -width/2, width/2, height/2, -height/2,
    1, 10000,
  );
  camera.position.z = 1000;
  scene.add(camera);

  // populate the scene with a reflective torus knot
  const geometry = new THREE.TorusKnotBufferGeometry(80, 30, 100, 16);
  const material = new THREE.MeshStandardMaterial( {
    color: 0xffffff,
    metalness: 1.0,
    roughness: 0.0,
    envMap: cubeCamera.renderTarget.texture,
  });
  const mesh = new THREE.Mesh(geometry, material);
  scene.add(mesh);

  function update(t, dt) {
    cubeCamera.update(renderer, cubeMapScene);
    
    // wiggle the mesh a bit
    mesh.rotation.y = Math.sin(t);
    mesh.rotation.x = Math.cos(t * 0.8);
  }
  
  animate(renderer, scene, camera, update);
}

Create a diffuse texture

The cubemap is fine for reflective surfaces, but I wanted to illuminate diffuse surfaces as well. To do that I used THREE.PMREMGenerator. PMREM stands for Prefiltered, Mipmapped Radiance Environment Map; it is essentially a set of progressively downsampled versions of the environment map that roughly preserve the total radiance of the image. For example, here is one face of the original cubemap and its four downsampled PMREM versions.

radiance

Once the PMREMGenerator has rendered all of the filtering levels for each cube face, it will have produced dozens of individual render targets. It's neither efficient nor necessary to use so many individual textures, which is where THREE.PMREMCubeUVPacker comes in: it generates a single texture that stores all of the resulting images.

uv-packed-1

Here's how I created these helpers.

// when initializing the cube map scene...
const cubeCamera = new THREE.CubeCamera(1, 1000, 128);
const pmremGenerator = new THREE.PMREMGenerator(
  cubeCamera.renderTarget.texture
);
const pmremCubeUVPacker = new THREE.PMREMCubeUVPacker(
  pmremGenerator.cubeLods
);

In the main (non-cubemap) scene, the material's environment map must now point at the PMREMCubeUVPacker's result.

const material = new THREE.MeshStandardMaterial({
  color: 0xffffff,
  metalness: 0.0,
  roughness: 1.0,
  // this is the important change
  envMap: pmremCubeUVPacker.CubeUVRenderTarget.texture,
});

Since we need this to react in real time, the PMREMGenerator and PMREMCubeUVPacker must be updated on each frame.

// on every frame...
cubeCamera.update(renderer, cubeMapScene);
pmremGenerator.update(renderer);
pmremCubeUVPacker.update(renderer);

The result is that we can now control the relative roughness and "metalness" of the material using information from the real-world environment. Try adjusting the relative values of these parameters in the demo below.

Click to run demo
Demo source
function mipMapDemo(container) {
  
  const videoElement = document.getElementById('video');
  
  // initialize the webcam
  const videoWidth = 640, videoHeight = 480;
  initWebcam(videoElement, videoWidth, videoHeight);

  // create a scene for the cubemap
  const cubeMapScene = new THREE.Scene();
  const cubeCamera = new THREE.CubeCamera(1, 1000, 128);
  const pmremGenerator = new THREE.PMREMGenerator(
    cubeCamera.renderTarget.texture
  );
  const pmremCubeUVPacker = new THREE.PMREMCubeUVPacker(
    pmremGenerator.cubeLods
  );
  const cubeRenderTarget = pmremCubeUVPacker.CubeUVRenderTarget;
  
  // create a video texture
  const videoTexture = new THREE.VideoTexture(videoElement);
  
  // create the shader material
  const shaderMaterial = new THREE.ShaderMaterial({
    vertexShader: document.getElementById('vertexshader').innerText,
    fragmentShader: document.getElementById('fragmentshader').innerText,
    uniforms: {
      texture: new THREE.Uniform(videoTexture),
    },
    side: THREE.DoubleSide,
  });

  // populate the cubemap scene with a sphere & shader
  const sphere = new THREE.SphereGeometry(100, 15, 15);
  const sphereMesh = new THREE.Mesh(sphere, shaderMaterial);
  cubeMapScene.add(sphereMesh);

  // create a renderer, scene and camera
  const renderer = initRenderer(container);
  const {width, height} = renderer.getSize();
  const scene = new THREE.Scene();
  const camera = new THREE.OrthographicCamera(
    -width/2, width/2, height/2, -height/2,
    1, 10000,
  );
  camera.position.z = 1000;
  scene.add(camera);

  // populate the scene with a diffuse torus knot
  const geometry = new THREE.TorusKnotBufferGeometry(80, 30, 100, 16);
  const material = new THREE.MeshStandardMaterial({
    color: 0xffffff,
    metalness: 0.0,
    roughness: 1.0,
    envMap: cubeRenderTarget.texture,
  });
  const mesh = new THREE.Mesh(geometry, material);
  scene.add(mesh);
  
  const gui = new dat.GUI({autoPlace: false});
  container.append(gui.domElement);
  const params = {
    roughness: material.roughness, 
    metalness: material.metalness,
  };
  gui.add( params, 'roughness', 0, 1, 0.01 );
  gui.add( params, 'metalness', 0, 1, 0.01 );
  gui.open();

  function update(t, dt) {
    cubeCamera.update(renderer, cubeMapScene);
    pmremGenerator.update(renderer);
    pmremCubeUVPacker.update(renderer);
    
    // update the material in response to the gui
    material.roughness = params.roughness;
    material.metalness = params.metalness;
    
    // wiggle the mesh a bit
    mesh.rotation.y = 0.5*Math.sin(t);
    mesh.rotation.x = 0.5*Math.cos(t * 0.8);
  }
  
  animate(renderer, scene, camera, update);
}

Besides putting it on the Looking Glass, that's about it! If you have a Looking Glass display, definitely try the full demo. Thanks for reading.

result
Is any graphics demo complete without a teapot somewhere in the scene?