Topics |
Refs |
|
This document is updated frequently, so remember to refresh your browser.
|
|
Thursday, November 30 (Week 14) |
|
|
|
Tuesday, November 28 (Week 14) |
- Exam review
- Shadow mapping, cont'd
- See the beginning of Shadow.js for a summary of all the tricks we
used to improve the quality of the results
- (Note the shaders are in the html file)
- Also: we did not have time to talk about this in class, but
TextureThreejsWithFBO is a simple example of using a framebuffer object with three.js
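- A minimal sketch of the idea (not the course's TextureThreejsWithFBO; the scene and camera names are made up, and older three.js versions pass the render target as a third argument to render() instead of using setRenderTarget): render one scene into a THREE.WebGLRenderTarget, then use its texture on a mesh in a second scene
    const target = new THREE.WebGLRenderTarget(512, 512);   // the "FBO" plus its color texture
    const material = new THREE.MeshBasicMaterial({ map: target.texture });
    sceneB.add(new THREE.Mesh(new THREE.BoxGeometry(1, 1, 1), material));

    function animate() {
      requestAnimationFrame(animate);
      renderer.setRenderTarget(target);   // draw into the offscreen framebuffer
      renderer.render(sceneA, cameraA);
      renderer.setRenderTarget(null);     // back to the canvas
      renderer.render(sceneB, cameraB);
    }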
|
Shadow.html
TextureThreejsWithFBO.html
|
|
Thursday, November 16 (Week 13) |
- More examples with FBOs
- Shadow mapping
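- A minimal sketch of creating a texture-backed framebuffer object with raw WebGL (variable names are illustrative, not taken from the course examples):
    // texture to serve as the color attachment
    const fboTexture = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_2D, fboTexture);
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, 512, 512, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);

    // create the FBO and attach the texture as its color buffer
    const fbo = gl.createFramebuffer();
    gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
    gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, fboTexture, 0);
    // (attach a depth renderbuffer too if depth testing is needed, then check that
    // gl.checkFramebufferStatus(gl.FRAMEBUFFER) === gl.FRAMEBUFFER_COMPLETE)

    // to render into the texture: bind the FBO, set the viewport, draw, then bind null again
    gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
    gl.viewport(0, 0, 512, 512);
    // ...draw the scene...
    gl.bindFramebuffer(gl.FRAMEBUFFER, null);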
|
CubeOnCube.html
CubeOnCubeOnCube.html
Mirror.html
|
|
Tuesday, November 14 (Week 13) |
- three.js, continued
- The three.js website has an extremely rich collection of examples.
The ones listed at right redo a few basic things we have done before, but in three.js,
and they are much simpler than most of what is on that site.
- The plugin we used to look at the loaded shaders on the GPU is called
Spector.js. It captures the rendering of one frame and then allows you
to examine the WebGL function calls that were made, the graphics context state, and the updates to the framebuffer that occurred.
- Framebuffer objects
|
- Code examples - three.js:
RotatingCubeWithModel.html
ShaderMaterial.html
TextureThreejs.html
RotatingCubeWithTexture.html
HierarchyThreejs.html
SkyboxWithCamera.html
SkyboxWithReflection.html
CubeWithFBO.html
|
|
Thursday, November 9 (Week 12) |
- Cube maps, cont'd: Option 3) Use it as an environment map (reflection and refraction)
- Sample from the cube map using a vector R that is the reflection (or refraction)
of the view vector at each vertex
- Note: the R vector needs to be in world coordinates, so the vector E towards the view point
and the normal vector N need to be in world coordinates too (this means using a normal
matrix that is the inverse transpose of the model matrix, not the view*model matrix)
- Once you have E and N in world coordinates, you can calculate R = reflect(-E, N) (see the shader sketch after this list)
- Introduction to three.js
- The base type THREE.Object3D encapsulates a position, rotation, and scale; maintains a list of child objects
- Rotation options (remember angles are always in radian measure)
- Basic abstractions: You combine a geometry (vertex attributes) and a material (surface properties) to make a mesh, which is a subtype of Object3D (see the minimal scene sketch after this list).
- Objects can be added to a scene, and can be added as children of other objects
- lights can be added to a scene (lights are also subtypes of Object3D)
- The
render operation of a renderer takes in a scene and camera (which is a subtype of Object3D)
- A base Object3D can serve as a dummy object in a hierarchy
- How it works: behind the scenes, three.js dynamically generates, loads, and compiles the shader source code needed to implement the geometry and material
- Can also incorporate your own shader code via ShaderMaterial or RawShaderMaterial
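- A sketch of the world-coordinate reflection calculation in a vertex shader (uniform and attribute names are illustrative, not necessarily those used in TextureCubeWithReflection):
    uniform mat4 model;              // model matrix (to world coordinates)
    uniform mat4 view;
    uniform mat4 projection;
    uniform mat3 normalMatrixWorld;  // inverse transpose of model, NOT of view * model
    uniform vec3 eyePositionWorld;
    attribute vec4 a_Position;
    attribute vec3 a_Normal;
    varying vec3 vR;
    void main()
    {
      vec3 positionWorld = (model * a_Position).xyz;
      vec3 N = normalize(normalMatrixWorld * a_Normal);
      vec3 E = normalize(eyePositionWorld - positionWorld);  // toward the view point
      vR = reflect(-E, N);                                   // world-coordinate reflection vector
      gl_Position = projection * view * model * a_Position;
    }
    // fragment shader: sample the cube map with the interpolated vector, e.g.
    //   gl_FragColor = textureCube(sampler, normalize(vR));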
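- A minimal three.js scene, to illustrate the abstractions above (a sketch only; in older three.js versions BoxGeometry is called CubeGeometry):
    const scene = new THREE.Scene();
    const camera = new THREE.PerspectiveCamera(45, 1.0, 0.1, 100);       // a camera is an Object3D too
    camera.position.z = 5;

    const geometry = new THREE.BoxGeometry(1, 1, 1);                     // vertex attributes
    const material = new THREE.MeshLambertMaterial({ color: 0x00ff00 }); // surface properties
    const cube = new THREE.Mesh(geometry, material);                     // a mesh is an Object3D
    scene.add(cube);

    const light = new THREE.PointLight(0xffffff);                        // lights are Object3Ds too
    light.position.set(2, 2, 2);
    scene.add(light);

    const renderer = new THREE.WebGLRenderer();
    renderer.setSize(400, 400);
    document.body.appendChild(renderer.domElement);

    function animate() {
      requestAnimationFrame(animate);
      cube.rotation.y += 0.01;          // angles are in radians
      renderer.render(scene, camera);   // shaders are generated behind the scenes
    }
    animate();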
|
- Take a look at the first couple of chapters of the book by Tony Parisi (see the Resources page)
- Code examples:
TextureCubeWithReflection.html
(Edit the shader to choose reflection or refraction; choose the higher-quality
sphere with vertex normals to make it look like a lens or fishbowl.)
Basic.html
RotatingCube.html
|
|
Tuesday, November 7 (Week 12) |
- Cube maps - texture "coordinate" is a vector pointing
to a spot on one of six images arranged in a cube. (Does not need
to be normalized.)
- A few things we can do:
- 1) Use it as a texture for a model without its own texture coordinates
- In vertex shader, just treat the vertex position as a vector pointing
out from the origin, and pass the xyz value to fragment shader.
- Optionally, choose a different center point for the vector
- Fragment shader has a uniform of type
samplerCube (containing the texture unit number)
- Sample from the cube map using GLSL function
textureCube
- 2) Use it as a skybox
- Render the cube texture on a big stationary cube centered at the origin
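- A sketch of option 1 in shader code (uniform and attribute names are illustrative):
    // vertex shader
    uniform mat4 transform;          // projection * view * model
    attribute vec4 a_Position;
    varying vec3 vTexDir;
    void main()
    {
      vTexDir = a_Position.xyz;      // vector from the (chosen) center out through the vertex
      gl_Position = transform * a_Position;
    }

    // fragment shader
    precision mediump float;
    uniform samplerCube sampler;     // set from JS to the texture unit number
    varying vec3 vTexDir;
    void main()
    {
      gl_FragColor = textureCube(sampler, vTexDir);   // direction need not be normalized
    }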
|
TextureCube.html
LightingWithTextureAndModelCube.html
|
|
Thursday, November 2 (Week 11) |
- Texture mapping, cont'd
- You can use the sampled value from texture memory
in many different ways, e.g.
- replace existing surface color
- blend with surface color
- modulate some aspect of surface properties, e.g. specular reflectivity
- just use texture coordinates to procedurally generate a surface effect
- (Edit the lines labeled (1), (2), and (3) in the fragment shader for LightingWithTextureAndModel.html)
- Texture sampling parameters
- Magnification - one texel to many fragments
- Causes pixelation
- NEAREST ("box" filter - single sample) or LINEAR ("tent" filter - average of four samples)
- Minification - many texels to one fragment
- Causes artifacts when there is a lot of detail in the texels
- NEAREST (single sample) or LINEAR (four samples)
- Mipmaps - pre-calculate an average of a region of texels
- NEAREST_MIPMAP_NEAREST / LINEAR_MIPMAP_NEAREST (use the closest mip level) or NEAREST_MIPMAP_LINEAR / LINEAR_MIPMAP_LINEAR (interpolate between values obtained from the two closest mip levels)
- Anisotropic filtering - by taking more samples in one direction, can use
a more detailed mip level
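- A sketch of typical parameter settings (set while the texture is bound; anisotropic filtering is an extension in WebGL):
    gl.bindTexture(gl.TEXTURE_2D, texture);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);               // magnification: NEAREST or LINEAR
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR_MIPMAP_LINEAR); // minification using mipmaps
    gl.generateMipmap(gl.TEXTURE_2D);   // WebGL 1 requires power-of-two dimensions for this

    const ext = gl.getExtension('EXT_texture_filter_anisotropic');
    if (ext) {
      gl.texParameterf(gl.TEXTURE_2D, ext.TEXTURE_MAX_ANISOTROPY_EXT, 4.0);
    }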
|
- To review the basics of texture mapping, see the section "Pasting an image onto a rectangle" in Chapter 5 of the teal book
- Code examples:
LightingWithTextureAndModel.html
(Try experimenting with the fragment shader options labeled 1-3)
LightingWithProceduralTexture.html
LightingWithTextureAndModelTwoTexture.html
Step through this animation to better visualize how "texture units" work
|
|
Tuesday, October 31 (Week 11) |
- Introduction to texture mapping
- Modeler associates texture coordinates (s, t), aka (u, v), with each vertex
- These coordinates are interpolated as varying variables
- Fragment shader uses them to sample from a location in a 2D image
- Sampled value can be used as surface color, or something else...
- Setting up (see loadImagePromise and createAndLoadTexture in cs336util.js):
- Generate a handle for a texture
- Choose an active "texture unit" to use during initialization (default is 0, usually ok)
- Bind texture to the binding point TEXTURE_2D (or TEXTURE_CUBE_MAP)
- Load data (image)
- Set wrapping and sampling parameters
- Rendering:
- In shaders:
- Declare attribute variable for texture coordinates
- Declare varying variable for texture coordinates
- In fragment shader, declare uniform variable of type
sampler2D (or samplerCube )
- Use the
texture2D function (or textureCube) to sample a value from the texture based on the interpolated texture coordinates
- Use the resulting vec4 as a color, or in some other way
- In JS:
- Choose an active "texture unit" under which to bind the texture
- Bind texture to binding point
- Set uniform variable for sampler - value to pass is the texture unit number
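- A condensed sketch of the calls involved (roughly what createAndLoadTexture does; variable names are illustrative):
    // setup, with the image already loaded
    const texture = gl.createTexture();
    gl.activeTexture(gl.TEXTURE0);          // texture unit 0
    gl.bindTexture(gl.TEXTURE_2D, texture);
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);

    // rendering: bind the texture under a unit, and set the sampler uniform to that unit
    gl.activeTexture(gl.TEXTURE0);
    gl.bindTexture(gl.TEXTURE_2D, texture);
    const samplerLoc = gl.getUniformLocation(program, "sampler");
    gl.uniform1i(samplerLoc, 0);            // pass the unit number, not the texture handle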
|
Texture.html
(Try experimenting with the choice of texture coordinates, or the image, at the top of the file)
|
|
Thursday, October 26 (Week 10) |
- Using recursion to render a hierarchy
- Each CS336Object has an array of child objects
- When rendering an object, it recursively renders its child objects,
passing its own matrix to children to serve as the children's "world"
- Dummy objects do not actually draw anything, but serve as the world for one or more child objects
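- A sketch of the recursive idea (not the actual CS336Object code; the method and property names are made up):
    function renderObject(obj, world) {
      // this object's frame = parent's world matrix times its own translate * rotate * scale
      const current = new THREE.Matrix4().multiplyMatrices(world, obj.getMatrix());
      if (obj.drawFunction) obj.drawFunction(current);   // dummy objects just skip this
      for (const child of obj.children) {
        renderObject(child, current);                    // this matrix is the child's "world"
      }
    }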
|
HierarchyWithTree3.html
|
|
Tuesday, October 24 (Week 10) |
- Using CS336Object to define a movable camera
- Aside - how inheritance works in Javascript
- Add methods for extracting the view matrix and projection matrix
- Hierarchical objects - defining "child" objects relative to a "parent" object's frame
- The problem of defining child objects relative to a frame that has been scaled
- In these examples, if world frame is F, and parent object's vertices are FTRSc, we want child vertices to be rendered relative to frame FTR, not FTRS
- In traditional OpenGL, programmer would explicitly manage a matrix stack
|
RotatingSquare1.html
|
|
Thursday, October 19 (Week 9) |
- Contribution from each light source is: $$k_aL_a + k_dL_d\cos(\theta) + k_sL_s(\cos(\phi))^t$$
- where $\vec{L} \cdot \vec{N} = \cos(\theta)$ and $\vec{R} \cdot \vec{V} = \cos(\phi)$
- Constants above are all vector quantities with a red, green and blue component
- Filling out the lighting model: 19 degrees of freedom (?!)
- Material properties: $k_a, k_d, k_s$, 9 numbers (ambient, diffuse, specular reflectivity for red, green, and blue), plus one additional number for the specular exponent
- Light properties: $L_a, L_d, L_s$, 9 numbers (ambient, diffuse, specular intensity for red, green, and blue)
- Can be passed to shader as two 3 x 3 matrices, plus the exponent
- See Lighting3.js
- Defining multiple lights
- array sizes and loop bounds in
GLSL have to be compile-time constants
- options for passing array data to shaders (see the fragment shader sketch after this list)
- The OBJ file format for models
- The three.js OBJ loader can be found in examples/three/OBJLoader.js
- To load a local file, you'll need to access the page using a local server (see the Resources page)
- The JS runtime loads resources asynchronously: Using async/await (See loadOBJPromise in cs336util.js)
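- A fragment-shader sketch of packing the properties into 3 x 3 matrices and looping over several lights (a sketch only, not the Lighting3Multiple code; the uniform names and column layout are assumptions):
    precision mediump float;
    const int MAX_LIGHTS = 3;                  // array sizes must be compile-time constants
    uniform mat3 materialProperties;           // columns: ambient, diffuse, specular reflectivity (rgb)
    uniform mat3 lightProperties[MAX_LIGHTS];  // columns: ambient, diffuse, specular intensity (rgb)
    uniform vec3 lightPositions[MAX_LIGHTS];   // in eye coordinates
    uniform float exponent;                    // specular exponent
    varying vec3 fPosition;                    // interpolated eye-coordinate position
    varying vec3 fNormal;
    void main()
    {
      vec3 N = normalize(fNormal);
      vec3 V = normalize(-fPosition);          // view point is the eye-space origin
      vec3 color = vec3(0.0);
      for (int i = 0; i < MAX_LIGHTS; ++i) {   // loop bound must also be a compile-time constant
        vec3 L = normalize(lightPositions[i] - fPosition);
        vec3 R = reflect(-L, N);
        mat3 products = matrixCompMult(lightProperties[i], materialProperties);
        color += products[0]                                        // ambient
               + products[1] * max(dot(L, N), 0.0)                  // diffuse
               + products[2] * pow(max(dot(R, V), 0.0), exponent);  // specular
      }
      gl_FragColor = vec4(color, 1.0);
    }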
|
Lighting3.html
Lighting3Multiple.html
Lighting3WithObj.html
NOTE: You will need to load the page from a local server and have the obj file
in a sibling directory models/ for this to work. See examples/models for some basic
examples such as the teapot.
You will also need examples/three/OBJLoader.js.
See the Resources page for suggestions
on running a local server.
Edit the top of the js file to change the model, and use the 'n' key to toggle
face normals or vertex normals.
You can find OBJ files for the dragon model (75MB) and the hairball model (236MB) at https://casual-effects.com/data/. (Note that our code is not able to handle the multi-object OBJ files or material properties.)
|
|
Tuesday, October 17 (Week 9) |
- Recap of lighting and shading: The Phong or ADS lighting model (reflection model)
- Transforming normal vectors with the view * model will not generally work!
- Instead, use the normal matrix, which is the inverse transpose of
the view * model matrix (taking just the upper left 3 x 3 submatrix)
- See the Lighting2 examples from before for a demonstration of the differences
between Gouraud shading and Phong shading, and the effects of adding in the specular component
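- A sketch of computing the normal matrix in JS with three.js (the matrix variables and uniform location name are illustrative):
    const viewModel = new THREE.Matrix4().multiplyMatrices(view, model);
    // upper-left 3 x 3 of the inverse transpose
    const normalMatrix = new THREE.Matrix3().getNormalMatrix(viewModel);
    // for a 'uniform mat3 normalMatrix' in the vertex shader
    gl.uniformMatrix3fv(normalMatrixLoc, false, normalMatrix.elements);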
| |
|
Thursday, October 12 (Week 8) |
|
|
|
Tuesday, October 10 (Week 8) |
|
|
|
Thursday, October 5 (Week 7) |
- The illusion of the surface shape is determined by the direction of the
normal vectors
- Face normals - perpendicular to each triangular face
- Vertex normals - usually obtained by averaging the face normals for adjacent triangles
- For dot products, coordinates have to be relative to the same frame
- Common convention - transform everything into eye coordinates
for the lighting calculation
- Adding in a constant to simulate ambient light
- For a perfectly diffuse surface, the illumination depends only
on the angle $\theta$ between the normal vector $\vec{N}$ and the light direction
$\vec{L}$
- The position of the viewer doesn't affect it
- For a partially reflective surface, the specular reflection depends on the angle $\phi$ between the direction
$\vec{V}$ to the viewer (camera) and the reflected direction $\vec{R}$ of the light source
- The Phong or ADS lighting model (reflection model) consists of
- An ambient term that is constant
- ...plus a diffuse term that is proportional to $\vec{L} \cdot \vec{N} = \cos(\theta)$
- ...plus a specular term that is proportional to $(\vec{R} \cdot \vec{V})^t$ where $t$ is a constant affecting the apparent "shininess", and
$\vec{R} \cdot \vec{V} = \cos(\phi)$
- Shading techniques
- Gouraud shading: use lighting model to calculate color at vertices, interpolate color to fragments
- Phong shading: calculate vectors L, N, and V at vertices and interpolate vectors to fragments, then use lighting model to calculate color per-fragment
- The L, N, V vectors need to be normalized after interpolation
- Issues with Gouraud shading
- Edges are still visible, since linear interpolation of vertex colors across a face is not completely realistic (effect is exaggerated by Mach banding)
- Severe artifacts when specular component is added in (specular highlight is either missed or exaggerated)
|
Using a sphere model, diffuse only, calculated in vertex shader (Gouraud shading):
Lighting2.html
Diffuse, calculated in fragment shader (Phong shading):
Lighting2a.html
Including specular, calculated in vertex shader (Gouraud shading):
Lighting2b.html
Including specular, calculated in fragment shader (Phong shading):
Lighting2c.html
(See comments at the top of the Lighting2x html files.
For these four examples, the shader code is in the html file, since all of them use
the same Javascript code Lighting2.js.
Edit the main function to change the model, or to change from vertex normals to face normals. See the comments marked "***" in the main function.)
Use Lighting2c.html to experiment with various values of the specular exponent.
|
|
Tuesday, October 3 (Week 7) |
- Summary of coordinate systems we have seen:
- (designer creates ->) Model coordinates
- (model transformation ->) World coordinates
- (view transformation ->) Eye (Camera) coordinates
- (projection transformation ->) Clip coordinates
- (perspective division ->) Normalized device coordinates (NDC)
- (viewport transformation ->) Window (Screen) coordinates
- Clip coordinates are 4-dimensional homogeneous coordinates with a w-component that is not normally equal to 1 (if perspective projection is being used)
- NDC are 3-dimensional with x, y, and z between -1 and 1
- z is NOT linearly related to the original z-value in eye space (if perspective projection is being used).
- In fact $z_{NDC}$ is proportional to $\frac{-1}{z_{eye}}$, so it does preserve correct depth ordering
- Window coordinates are what we see as gl_FragCoord.xy
- By default, the viewport transformation scales x and y to range from 0 to the framebuffer width/height
- The viewport transformation is set by the function
gl.viewport($x_0$, $y_0$, width, height)
- maps NDC x value from [-1, 1] into [$x_0$, $x_0$ + width] in window coordinates
- maps NDC y value from [-1, 1] into [$y_0$, $y_0$ + height]
- (These are simple Fahrenheit-to-Celsius linear rescalings; they are written out after this list)
- gl_FragCoord.z is (usually) rescaled to be from 0 to 1
- this can be configured with
gl.depthRange(near, far)
- Introduction to lighting and normal vectors
- Lambert's law: for a perfectly diffuse surface, reflected illumination at a vertex is proportional to $\cos(\theta)$, where $\theta$ is the angle between the
light direction and the surface normal
- Not affected by the location of the viewer
-
$\cos(\theta) = \vec{L} \cdot \vec{N}$, where $\vec{L}$ is a unit vector in the direction of the light and $\vec{N}$ is a unit vector perpendicular to the surface (known as a normal vector for the surface)
- The normal vector is an additional attribute for each vertex
- Programming diffuse lighting in the vertex shader
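- Written out (assuming the default depth range), the viewport mapping is just
\[
x_{w} = x_0 + \frac{x_{NDC} + 1}{2}\,\mathrm{width}, \qquad
y_{w} = y_0 + \frac{y_{NDC} + 1}{2}\,\mathrm{height}, \qquad
z_{w} = \frac{z_{NDC} + 1}{2}
\]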
|
See the fragment shader for experiments with gl_FragCoord.x:
GL_example1a_gradient.html
Experimenting with the viewport transformation:
GL_example1a_resizable.html
First experiment with rotating cube and diffuse lighting:
Lighting1.html
|
|
Thursday, September 28 (Week 6) |
- Arts and crafts, continued...
- Issues with Euler angles
- Can be difficult to interpolate between rotations for animation
- Subject to "gimbal lock" - losing a degree of freedom when two of the axes end up aligned with each other
- Even if you smoothly interpolate all the angles correctly, the
resulting motion may be unnatural
- Every combination of Euler angles can be described as a single rotation with some axis and angle
- For every orthogonal 3d matrix $M$ there is a vector $\vec{v}$ such that
$M\vec{v} = \vec{v}$ (an eigenvector with eigenvalue 1)
- Therefore $\vec{v}$ is the actual axis of rotation!
- A quaternion describes a rotation as four numbers representing an axis and an angle
- The quaternion for angle $\theta$ with axis $\shortcvec{v_1}{v_2}{v_3}^T$ is, literally, the four numbers:
\[\cvec{\sin\left(\frac{\theta}{2}\right) v_1}
{\sin\left(\frac{\theta}{2}\right) v_2}
{\sin\left(\frac{\theta}{2}\right) v_3}
{\cos\left(\frac{\theta}{2}\right)}
\]
- The scaling by sine and cosine looks bizarre, but it's basically there to make the multiplication operations work nicely...
- ...but we don't care; we rely on library functions to do the gory stuff
- There are library functions to create a quaternion from a rotation matrix or axis/angle, and to create a rotation matrix from a quaternion (see the sketch after this list)
- See function
calculateAxisAngle around lines 255 - 290 of RotationsWithQuaternion.js to see how the three.js library is being used to get the axis and angle from the quaternion.
- The three.js library internally converts all rotations to quaternions
- Most importantly: Quaternions can be linearly interpolated to get smooth animation between rotations
- Spherical linear interpolation of quaternions (slerp) using the three.js
library
- Compare the ad hoc rotation about the quaternion axis (around lines 445-455 of RotationsWithQuaternion.js) to the use of the slerp method (line 477 of RotationsWithQuaternionSlerp.js)
- Introducing the idea of the CS336Object from hw3
- The convention TRS: scale, then rotate, then translate
- encapsulate the scale, rotation, and translation for an object in a scene
- instead of calculating matrices, provide operations such as "move forward" or
"turn right" or "look at"
|
- See chapter 7 of the Gortler book for a brief, graphics-centric explanation of quaternions (available online through ISU library, see the Resources page)
- Code examples:
Rotations.html
RotationsWithQuaternion.html
(Put the model into any rotation, then press the 'a' key to see it rotate back to the identity about the quaternion axis (the magenta line). Use Shift-'A' and 'a' to see it again.)
RotationsWithQuaternionSlerp.html
|
|
Tuesday, September 26 (Week 6) |
- Understanding the perspective matrix
- The THREE.Matrix4 function
makePerspective specifies the viewing region as a rectangular frustum (chopped off rectangular pyramid) using left, right, top, bottom to describe dimensions of the near plane
- See also the helper function
createPerspectiveMatrix (in cs336util.js), which specifies the viewing region using a field of view angle plus an aspect ratio
- Experiment with comments #1-5 in RotatingCube.js
- A perspective projection maintains the right linear relationships in the x and y values, but the mapping from camera z to NDC z is highly nonlinear
- z-fighting
- Depth buffer has finite precision, so when z-values are close together,
the roundoff errors may cause a farther surface to "bleed" through to a closer one
- The problem is worsened by the nonlinearity of the perspective projection,
especially when the near plane is close to the camera
(See
depth_graph.pdf)
- Arts and crafts!
- Use a physical object to help visualize 3D rotations
- Any possible orientation of the coordinate axes that doesn't change the lengths or the angles between the basis vectors is a rotation
- If you write down the coordinates of the transformed basis vectors as columns of a matrix, that's a rotation matrix
- A rotation matrix is always orthogonal (the columns are an orthonormal set of vectors comprising the basis for the rotated frame)
- But it's not necessarily a rotation about one of the coordinate axes
- Composing rotations about the three coordinate axes
- Euler angles
- Choose an ordering of two or three axes, such as YZY or XYZ
- Any rotation can be obtained as a sequence of three rotations about that sequence of axes
- When we think about Euler angles, the most convenient convention for us is usually
the ordering YXZ, known as "head-pitch-roll" or "yaw-pitch-roll"
|
Zfighting.html
Rotations.html
|
|
Thursday, September 21 (Week 5) |
- Spherical coordinates
- Using head and pitch angles to describe a direction
- (See the last page of homework 3 for details)
- Rotations about an arbitrary axis
- One strategy:
- Let $YX$ denote the pitch and head rotation needed to align the y-axis with the desired axis of rotation.
- Let $\theta$ be the desired angle of rotation
- Then $YX$ * RotateY($\theta$) * $X^{-1}Y^{-1}$ is the rotation matrix
(using the $AMA^{-1}$ pattern)
- Or, use the THREE.Matrix4 method
makeRotationAxis (See #2 in the animation loop for RotatingCubeAxisTest.js )
- Orthographic vs perspective projections
- Homogeneous coordinates revisited
- For any nonzero $w$, $\cvec{wx}{wy}{wz}{w}^T$ describes the same 3D
point as $\cvec{x}{y}{z}{1}^T$
- Homogeneous coordinates are what we feed in to the pipeline from the vertex shader, and are used throughout the rasterization process
- This representation allows us to do a perspective projection by matrix multiplication
- Also allows the rasterizer to more efficiently do perspective-dependent calculations
- After rasterization the first three coordinates are then divided by the $w$-coordinate to recover a 3D point in "Normalized Device Coordinates" (NDC)
- (This operation is called perspective division)
- Deriving a basic perspective matrix using similar triangles: if the center of projection is at the camera position, and the projection plane is $n$ units in front of the camera at $z = -n$, then the projected values $x$ and $y$ values are $x' = -\frac{xn}{z}$ and $y' = -\frac{yn}{z}$. Note:
\[\begin{bmatrix} 1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & \frac{-1}{n} & 0
\end{bmatrix}
\begin{bmatrix}x \\ y \\ z \\ 1 \end{bmatrix}
= \begin{bmatrix}x \\ y \\ z \\ -\frac{z}{n} \end{bmatrix}
\mbox{which is the same point as}
\begin{bmatrix} -\frac{xn}{z} \\ -\frac{yn}{z} \\ -n \\ 1 \end{bmatrix}
\]
- Basic perspective matrix above works great for $x$ and $y$, but loses depth information since all $z$-values are collapsed to $-n$.
- The standard OpenGL perspective matrix performs the same transformation on $x$ and $y$, but uses some mathematical trickery to keep some information about $z$ too
|
- The section "Specifying the Visible Range (Pyramid)" in teal book ch. 7 illustrates the perspective projection
- This excellent page by Song Ho Anh goes over the derivation of the perspective matrix (and has some illustrations of the "viewing frustum")
- Experiment with perspective matrices in the rotating cube example (see lines 90 - 115)
- Code examples:
GL_example1a_homogeneous.html
DepthWithPerspective.html
(You can see the basic perspective matrix created around line 66 of the .js file. Notice we have lost
the depth information, since all z-values are -4.)
RotatingCube.html
RotatingCubeAxisTest.html
|
|
Tuesday, September 19 (Week 5) |
- To perform a transformation $M$ wrt some other frame $\mathcal{F'} = \mathcal{F}A$, use the matrix $AMA^{-1}$
- A general matrix for orthographic projections
- Specified by six "clipping planes" defining a rectangular
region relative to the camera frame
- Deriving an orthographic projection matrix: ranges [left, right], [bottom, top], and [-near, -far] are all scaled into [-1, 1] using three Fahrenheit to Celsius conversions
- An orthographic projection does not account for perspective
- See comment #8 in
DepthWithView.js
- Digression:
- With an orthonormal basis, we can calculate the dot product of vectors
from their coordinates (like in a physics book!)
- If $\mathcal{B}$ is an orthonormal basis, $\vec{u}$ and $\vec{v}$ have coordinates $\underline{c}$ and $\underline{d}$, respectively, w.r.t. $\mathcal{B}$, then
$\vec{u} \cdot \vec{v}$ = $\underline{c}^T\underline{d}
= c_1d_1 + c_2d_2 + c_3d_3$
- When $\vec{u}$ and $\vec{v}$ are unit vectors, $\vec{u} \cdot \vec{v}$ is the cosine of the angle between them
- Thus $\vec{u} \cdot \vec{v}$ is 1 when $\vec{u}$ and $\vec{v}$ are parallel, and is 0 when they are orthogonal
- A matrix is orthonormal (sometimes called orthogonal) if its columns represent unit-length vectors that are orthogonal to each other.
- Key point: If $A$ is orthogonal, then $A^{-1} = A^T$ (the inverse is just the transpose)
- The lookAt matrix: define the camera frame by specifying:
- $\widetilde{\rm{eye}}$ - where is the camera?
- $\widetilde{\rm{at}}$ - what's it pointed at?
- $\vec{up}$ - which way is up?
- Calculate basis $\shortcvec{\vec{x}}{\vec{y}}{\vec{z}}$ for camera with two cross products:
- $\vec{z} = \widetilde{\rm{eye}} - \widetilde{\rm{at}}$, normalized
- $\vec{x} = \vec{up} \times \vec{z}$, normalized
- $\vec{y} = \vec{z} \times \vec{x}$
- $R$ = matrix whose columns are the coordinates of
$\vec{x},\vec{y},\vec{z}$. $R$ is orthogonal, so $R^{-1} = R^T$
- $T$ = Translate($\widetilde{\rm{eye}})$, so $T^{-1}$ = Translate($-\widetilde{\rm{eye}}$)
- view matrix is inverse of $TR$, which is $R^TT^{-1}$, i.e., the transpose of $R$ times Translate($-\widetilde{\rm{eye}}$)
- See comment #9 in DepthWithView.js (and the lookAt sketch after this list)
- Spinning cube example
- updates the model transformation once each frame, multiplying it by a one-half degree rotation about one of the axes
- Edit the animation loop (lines 255-295 of RotatingCube.js) to see the difference between multiplying on the left vs multiplying on the right
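- A sketch of the lookAt calculation using three.js types (the eye/at/up values are made up):
    const eye = new THREE.Vector3(2, 2, 5);
    const at  = new THREE.Vector3(0, 0, 0);
    const up  = new THREE.Vector3(0, 1, 0);

    const z = new THREE.Vector3().subVectors(eye, at).normalize();
    const x = new THREE.Vector3().crossVectors(up, z).normalize();
    const y = new THREE.Vector3().crossVectors(z, x);               // already unit length

    // R has the camera basis as its columns, so R^T has it as rows (set() takes row-major order)
    const RT = new THREE.Matrix4().set(
      x.x, x.y, x.z, 0,
      y.x, y.y, y.z, 0,
      z.x, z.y, z.z, 0,
      0,   0,   0,   1);
    const Tinv = new THREE.Matrix4().makeTranslation(-eye.x, -eye.y, -eye.z);
    const view = new THREE.Matrix4().multiplyMatrices(RT, Tinv);    // view = (TR)^{-1} = R^T T^{-1}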
|
RotatingCube.html
|
|
Thursday, September 14 (Week 4) |
- "Pseudocode" for standard matrices
- RotateX($\theta$), RotateY($\theta$), RotateZ($\theta$)
- Scale($s_x, s_y, s_z$)
- Translate($t_x, t_y, t_z$)
- Remarks on matrix inverse
- $AA^{-1} = A^{-1}A = I$
- Not every matrix has an inverse, but...
- Standard transformation matrices do, and are easy to invert, e.g.
inverse of Scale(2, 3, 1) is Scale(1/2, 1/3, 1)
- For the inverse of a product, reverse the order: $(AB)^{-1} = B^{-1}A^{-1}$
- The depth buffer
- The depth buffer algorithm for hidden surface removal
    initialize all locations of depth buffer to "infinity"
    for each fragment (x, y):
        let z' = value in depth buffer at (x, y)
        if gl_FragCoord.z < z':
            set depth buffer value at (x, y) to gl_FragCoord.z
            run the fragment shader
        else:
            do nothing
-
gl_FragCoord.z is a built-in variable available in the fragment
shader that contains the relative depth (distance from view point)
of the fragment, scaled to the range [0, 1]
(see the fragment shader in Depth0.js)
-
Clip space is left-handed! (i.e. larger z-coordinate means "farther away")
- To enable depth testing, we need to include:
gl.enable(gl.DEPTH_TEST); in the main function
gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT); in the draw function
- (See the comments marked "***" in Depth.js)
- In almost all cases, we think of all vertices as being transformed by
three matrices, called the model, view, and projection
(i.e. projection * view * model; see the vertex shader sketch after this list)
- The model transformation takes model coordinates to world coordinates (i.e. places an instance of a model in the scene)
- view transformation takes world coordinates to eye (camera) coordinates
- projection transformation takes eye coordinates to clip coordinates
(determines the visible region of the scene)
- Last time: if point $\widetilde{p}$ has coordinates $\underline{c}$ w.r.t. a frame $\mathcal{F}$, and $M$ transforms $\mathcal{F}$ to a new frame $\mathcal{F}' = \mathcal{F}M$, then $\widetilde{p}$ has coordinates $M^{-1}\underline{c}$ with respect to the new frame $\mathcal{F}'$.
- This is how we define a camera or view matrix: If $M$ is the transformation to the camera frame, then multiplying by $M^{-1}$ is the view transformation, i.e., it gives you the coordinates of everything in the scene, relative to the camera.
- The idea of the projection matrix is to choose what region gets mapped into clip space. By default, this is just a cube centered at the camera's origin that goes from -1 to 1 in all three dimensions.
- A projection will always flip the z-axis (since clip space is left-handed)
- Simple case: to shift the location of the near and far clipping planes, just do a Celsius-to-Fahrenheit conversion of the z values. See the comments for the numbered examples
3 and 4 in DepthWithView.js.
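- A vertex shader sketch of the convention (the uniform names are illustrative):
    uniform mat4 model;        // model coordinates -> world coordinates
    uniform mat4 view;         // world coordinates -> eye coordinates
    uniform mat4 projection;   // eye coordinates -> clip coordinates
    attribute vec4 a_Position;
    void main()
    {
      gl_Position = projection * view * model * a_Position;
    }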
|
- The section "Correctly Handling Foreground and Background Objects" in teal book ch. 7 describes the depth buffer
- Code examples:
DepthWithView.html
(Work through the detailed comments numbered 1 through 9 - uncomment the relevant code to try each one out.)
Depth0.html
(Visualization of gl_FragCoord.z)
Depth.html
(Illustrates how to enable depth testing)
|
|
Tuesday, September 12 (Week 4) |
- An affine transformation is a linear transformation followed by a translation (shift)
- in one dimension: $f(x) = mx + b$
- An affine matrix is any 4x4 matrix with $\cvec{0}{0}{0}{1}$ in bottom row
- The product of affine matrices is an affine matrix
- An affine matrix represents an affine transformation
- An affine matrix $M$ can always be decomposed into
a linear (or "rotational") part $R$ followed by a translational part $T$, that is, $M = TR$ where $T$ is a translation and $R$ is linear
\[
\begin{bmatrix}1 & 0 & 0 & t_x \\
0 & 1 & 0 & t_y \\
0 & 0 & 1 & t_z \\
0 & 0 & 0 & 1
\end{bmatrix}
\begin{bmatrix}a & d & g & 0 \\
b & e & h & 0 \\
c & f & i & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}
=
\begin{bmatrix}a & d & g & t_x \\
b & e & h & t_y \\
c & f & i & t_z \\
0 & 0 & 0 & 1
\end{bmatrix}
\]
- Note that $RT$ is also an affine transformation, and its linear part is $R$, but its translational part is not $T$. That is, $RT = T'R$, where $T'$ is a translation that is generally not equal to $T$.
- Observation 1: Suppose point $\widetilde{p}$ has coordinates $\underline{c}$ with respect to frame $\mathcal{F}$, and we transform by matrix $M$ to get a new point
$\widetilde{q} = \mathcal{F}(M\underline{c})$.
There are two ways to look at this new point:
- $\widetilde{q} = \mathcal{F}(M\underline{c})$ - new coordinates, interpreted w.r.t. the existing frame $\mathcal{F}$
- $\widetilde{q} = (\mathcal{F}M)\underline{c}$ - same coordinates, interpreted w.r.t. a transformed frame $\mathcal{F}M$
- Note that for an affine matrix $M$ and frame $\mathcal{F}$, the product $\mathcal{F}M$ is also a frame, transformed by $M$
- Observation 2: Consider a transformation $A$ and consider a set of transformed points $\mathcal{F}A\underline{c}$. When we apply another transformation $B$, we can either do
- $\mathcal{F}BA\underline{c}$ - $B$ is applied w.r.t. the original frame $\mathcal{F}$ ("extrinsically")
- $(\mathcal{F}A)B\underline{c}$ - $B$ is applied w.r.t. the transformed (local) frame $\mathcal{F}A$ ("intrinsically")
- The "left-of"" rule: a transformation matrix is always applied with respect to the frame immediately to its left
- Try this using the first two radio buttons in
Transformations2.html
- Observation 3: Suppose $T$ is "translate 3 units right" and $R$ is "rotate 90 degrees ccw". There are two ways of thinking about the transformation $RT$, say:
- first shift 3 units in the x direction, then rotate 90 degrees about the original origin (thinking extrinsically: multiply by R on the left)
- first rotate 90 degrees about the origin, and then shift 3 units in the rotated x direction, i.e., up (thinking intrinsically: multiply by T on the right)
- Important: The difference only exists in our thinking! It's the same transformation $RT$ in both cases
- Try this using the first two radio buttons in
Transformations2.html
- A different question: suppose we want to keep the same point, but find its new coordinates w.r.t a transformed frame $\mathcal{F}M$?
- Motive: we need to define a transformation enabling us to describe
locations of existing vertices with respect to the location and orientation of
a virtual camera or view point
- Example: Suppose we have a coordinate system $\mathcal{F}$ centered at Main and Duff, and a coordinate system $\mathcal{F'}$ at the clocktower, and that $M$ is the transformation that takes Main and Duff to the Clocktower, i.e., $\mathcal{F'} = \mathcal{F}M$. If $\underline{c}$ is the coordinate vector that takes you from Main and Duff to Steve's house, then how do you get from the Clocktower to Steve's house?
- First invert $M$ to get from Clocktower to Main & Duff;
then use coordinates $\underline{c}$ to get to Steve's.
- That is, $M^{-1}\underline{c}$ gives you the coordinates of Steve's house with respect to the Clocktower
- More generally, if point $\widetilde{p}$ has coordinates $\underline{c}$ w.r.t. a frame $\mathcal{F}$, and $\mathcal{F}' = \mathcal{F}M$, then $\widetilde{p}$ has coordinates $M^{-1}\underline{c}$ w.r.t. $\mathcal{F}'$. That is:
$$\begin{align}
\mathcal{F}' &= \mathcal{F}M \\
\Longrightarrow \mathcal{F}'M^{-1} &= \mathcal{F} \\
\Longrightarrow \mathcal{F}'M^{-1}\underline{c} &= \mathcal{F}\underline{c} = \widetilde{p}
\end{align}$$
|
- The idea of finding the coordinates of a point with respect to some other frame has a lovely explanation in Chapter 13 ("Change of basis") of the linear algebra videos at 3blue1brown.com. Chapters 2, 3, and 4 are also relevant to what we have been doing lately.
- Code examples: Experiment with compositions of transformations using the key controls:
Transformations2.html
|
|
Thursday, September 7 (Week 3) |
- Recall that we use 4 coordinates to represent 3d points and vectors.
- The 4th coordinate is 0 for a vector and 1 for a point
- A linear transformation can be represented by a 4x4 matrix whose bottom row and right column are both $\cvec{0}{0}{0}{1}$ (The upper left 3x3 submatrix is the same as what we derived last time.)
- Example: A matrix that scales by a factor of $s_x$ in the x direction, $s_y$ in the y-direction, and $s_z$ in the z-direction looks like:
\[
\begin{bmatrix}s_x & 0 & 0 & 0 \\
0 & s_y & 0 & 0 \\
0 & 0 & s_z & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}
\begin{bmatrix}x \\ y \\ z \\ 1 \end{bmatrix}
=
\begin{bmatrix}s_x \cdot x \\ s_y \cdot y\\ s_z \cdot z \\ 1 \end{bmatrix}
\]
- For any "standard basis"", a general rotation matrix (for rotations of angle $\theta$ about the z-axis) looks like this:
\[
\begin{bmatrix}\cos(\theta) & -\sin(\theta) & 0 & 0 \\
\sin(\theta) & \cos(\theta) & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}
\]
- An orthonormal basis consists of three unit vectors that are orthogonal to each other
- A standard basis is an orthonormal basis that is right-handed
- A translation, or shift, is not a linear transformation!
- A linear transformation always takes the zero vector to the zero vector (and hence always takes the origin to the origin)
- A translation is represented by a matrix of the form below:
\[
\begin{bmatrix}1 & 0 & 0 & t_x \\
0 & 1 & 0 & t_y \\
0 & 0 & 1 & t_z \\
0 & 0 & 0 & 1
\end{bmatrix}
\begin{bmatrix}x \\ y \\ z \\ 1 \end{bmatrix}
=
\begin{bmatrix}x + t_x \\ y + t_y \\ z + t_z \\ 1 \end{bmatrix}
\]
- This illustrates one more reason it's useful to represent points and vectors using four coordinates: we can implement translations using matrix multiplication
- (In reality, we are performing a linear transformation in 4-dimensional space, but then looking only at the subspace in which the 4th coordinate is 1.)
- Composing transformations is matrix multiplication
- If $R$ and $S$ are matrices for two transformations and $\underline{c}$ is a coordinate vector, then
\[
(SR)\underline{c} = S(R\underline{c})
\]
that is, the matrix $SR$ represents the transformation that does $R$ first, then $S$
- In general $SR \neq RS$, for example, "rotate and then scale" is almost never the same transformation as "scale and then rotate"
- Overview of matrix operations from the three.js library
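- A sketch of a few of those operations (a minimal example, not from the course code):
    const R = new THREE.Matrix4().makeRotationZ(Math.PI / 2);   // rotate 90 degrees ccw about z
    const S = new THREE.Matrix4().makeScale(2, 1, 1);           // scale x by 2

    // "do R first, then S" is the product SR
    const rotateThenScale = new THREE.Matrix4().multiplyMatrices(S, R);
    // "do S first, then R" is RS -- generally a different transformation
    const scaleThenRotate = new THREE.Matrix4().multiplyMatrices(R, S);

    // apply to a point
    const p = new THREE.Vector3(1, 0, 0).applyMatrix4(rotateThenScale);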
|
- See sections 3.2 - 3.5 of Gortler (ignore 3.6 on normal vectors for now)
- The teal book section "Translate and Then Rotate" in ch. 4 goes into composing multiple transformations
- You'll need the Matrix4 type from three.js; see below for the link.
- See the documentation for the three.js Matrix4 type
- Also see chapter 4 of the Linear Algebra series by 3blue1brown:
https://www.3blue1brown.com/topics/linear-algebra
- Code examples (edit the main method to try different transformations):
Transformations.html
Transformations1.html
Note: Transformations1 requires the three.js library. Please use
this version:
https://stevekautz.com/cs336f23/three.js (1.2MB), not the latest one from threejs.org.
|
|
Tuesday, September 5 (Week 3) |
- Preview of typical vertex processing steps:
- Model transformation places instance of model into world coordinates
- View transformation transforms world into eye/camera coordinates (scene from eye/camera view point)
- Projection transformation transforms a viewable region of scene into clip coordinates (a 2x2x2 cube representing the "viewable" triangles to be handed to the rasterizer)
- A 3d vector space consists of all 3D vectors with the basic operations above
- A 3d basis for a vector space is a set of three independent vectors
$\vec{b_1}, \vec{b_2}, \vec{b_3}$
such that every possible 3d vector can be written as a linear combination of $\vec{b_1}, \vec{b_2}, \vec{b_3}$
- A 3d affine space consists of all 3d vectors and points
- A 3d frame (coordinate system) is a basis along with a designated point called the origin
- Matrices and matrix multiplication
- Representing a basis as a row matrix of three vectors,
$\mathcal{B} = \shortcvec{\vec{b_1}}{\vec{b_2}}{\vec{b_3}}$
- Representing a frame as a row matrix of three vectors and a point,
$\mathcal{F} = \cvec{\vec{b_1}}{\vec{b_2}}{\vec{b_3}}{\widetilde{o}}$
- If vector $\vec{u} = u_1\vec{b_1} + u_2\vec{b_2} + u_3\vec{b_3}$, then the column matrix \[ \underline{c} = \begin{bmatrix} u_1 \\ u_2 \\ u_3 \\ 0 \end{bmatrix}
= \cvec{u_1}{u_2}{u_3}{0}^T
\]
is a coordinate vector for $\vec{u}$ with respect to frame $\mathcal{F}$. Note that in terms of matrix multiplication, we can write
\[
\mathcal{F}\underline{c} = \cvec{\vec{b_1}}{\vec{b_2}}{\vec{b_3}}{\widetilde{o}}\cvec{u_1}{u_2}{u_3}{0}^T = \vec{u}
\]
(where the superscript $T$ indicates the matrix transpose)
- Likewise, if point $\widetilde{p} = u_1\vec{b_1} + u_2\vec{b_2} + u_3\vec{b_3} + \widetilde{o}$, then the column matrix $\cvec{u_1}{u_2}{u_3}{1}^T$ is a coordinate
vector for $\widetilde{p}$ w.r.t. frame $\mathcal{F}$.
- Key point: the coordinates of a vector or point depend on what basis or frame is being used, and the coordinates only have meaning with respect to a known basis or frame.
- Linear transformations
- key feature: if you transform points that are collinear, then the resulting points are also collinear...
- ...so triangles are transformed to triangles
- For a given basis
$\mathcal{B} = \shortcvec{\vec{e_1}}{\vec{e_2}}{\vec{e_3}}$, a linear transformation $f$ is represented by a matrix $M$ such that if $\underline{c}$
is a coordinate vector w.r.t. $\mathcal{B}$ for some vector $\vec{u}$, then
$M\underline{c}$ is the coordinate vector for the transformed vector $f(\vec{u})$
- The columns of $M$ are just the coordinate vectors for
$f(\vec{e_1}), f(\vec{e_2})$, and $f(\vec{e_3})$
- How we know this: Look at what $f$ does to the basis vectors. Each of the vectors $f(\vec{e_1}), f(\vec{e_2})$, and $f(\vec{e_3})$ has a coordinate vector w.r.t. the original basis $\mathcal{B}$. Suppose we label those coordinates as
\[ \begin{bmatrix} m_{11} \\ m_{21} \\ m_{31} \end{bmatrix},
\begin{bmatrix} m_{12} \\ m_{22} \\ m_{32} \end{bmatrix},
\begin{bmatrix} m_{13} \\ m_{23} \\ m_{33} \end{bmatrix},
\]
respectively (ignoring the 4th coordinate for readability).
Suppose that some vector $\vec{u}$ has coordinates $\shortcvec{c_1}{c_2}{c_3}^T$ w.r.t. $\mathcal{B}$. Then using the fact that $f$ is linear, and rearranging to find
the three coefficients of $\vec{e_1}$, $\vec{e_2}$, and $\vec{e_3}$,
\[
\begin{eqnarray*}
f(\vec{u}) &=& f(c_1\vec{e_1} + c_2\vec{e_2} + c_3\vec{e_3}) \\
&=& c_1f(\vec{e_1}) + c_2f(\vec{e_2}) + c_3f(\vec{e_3}) \\
&=& c_1(m_{11}\vec{e_1} + m_{21}\vec{e_2} + m_{31}\vec{e_3})
+ c_2(m_{12}\vec{e_1} + m_{22}\vec{e_2} + m_{32}\vec{e_3})
+ c_3(m_{13}\vec{e_1} + m_{23}\vec{e_2} + m_{33}\vec{e_3}) \\
&=& (m_{11}c_1 + m_{12}c_2 + m_{13}c_3)\vec{e_1}
+ (m_{21}c_1 + m_{22}c_2 + m_{23}c_3)\vec{e_2}
+ (m_{31}c_1 + m_{32}c_2 + m_{33}c_3)\vec{e_3}
\end{eqnarray*}
\]
which looks like a mess, but notice that if we define
$M$ to be the matrix formed by taking the three coordinate vectors for
$f(\vec{e_1}), f(\vec{e_2})$, and $f(\vec{e_3})$ as its columns,
\[
M = \begin{bmatrix} m_{11} & m_{12} & m_{13} \\
m_{21} & m_{22} & m_{23} \\
m_{31} & m_{32} & m_{33}
\end{bmatrix}
\]
then the coefficients of $\vec{e_1}$, $\vec{e_2}$, and $\vec{e_3}$ above --- that is,
the coordinates of $f(\vec{u})$ --- are just the three
entries of the matrix $M\underline{c}$.
|
- See sections 2.3 and 3.3 of Gortler
- In the teal book, the section "Moving, Rotating, and Scaling" in chapter 3 is a basic introduction to transformations
- Also see chapter 3 of the Linear Algebra series by 3blue1brown:
https://www.3blue1brown.com/topics/linear-algebra
|
|
Thursday, August 31 (Week 2) |
- When you invoke
gl.vertexAttribPointer , you can specify a stride
and offset in bytes
- This means that you can interleave the data for several vertex attributes in one buffer (see the sketch after this list)
- See
GL_example1a_two_colors for an example
- Indexed rendering (see GL_example1_indexed)
- Debugging graphics code is hard!
- Always work incrementally, starting from a working example
- You can use the Chrome or Firefox debugger to check what data you're sending to the GPU, or to verify how JS functions work
- In graphics, we need to deal with multiple "frames of reference" (coordinate systems)
- A point is a geometric location (and may
have different coordinates depending on the frame)
- The difference between two points is a vector, representing the distance and direction from one point to another
- A point plus a vector is a point
- Vector operations - scaling and addition
- A vector with length 1 is called a unit vector
- A nonzero vector can always be divided by its length to create a unit vector with the same direction (called "normalizing" the vector)
- The dot product between two unit vectors is the cosine of the angle between them
- Two nonzero vectors are orthogonal (perpendicular) if their dot product is zero
- Linear combinations of vectors (scale and add)
- A basis for a 2d vector space (e.g., a plane) is a set of two non-parallel vectors
$\vec{b_1}, \vec{b_2}, $
such that every possible 2d vector can be written as a linear combination of $\vec{b_1}, \vec{b_2}$
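- A sketch of the interleaving mentioned at the top of this list (the attribute index names are illustrative): two attributes, position (xyz) and color (rgb), share one buffer, so each vertex occupies 6 floats = 24 bytes
    gl.bindBuffer(gl.ARRAY_BUFFER, vertexBuffer);
    gl.vertexAttribPointer(positionIndex, 3, gl.FLOAT, false, 24, 0);   // stride 24, offset 0
    gl.vertexAttribPointer(colorIndex,    3, gl.FLOAT, false, 24, 12);  // stride 24, offset 12
    gl.enableVertexAttribArray(positionIndex);
    gl.enableVertexAttribArray(colorIndex);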
|
- Read Sections 2.1, 2.2, and 3.1 of the book by Gortler (Foundations of 3D computer graphics). This is just a few pages and summarizes pretty much everything we talked about today. Just log into the ISU library and search for the title. The entire book is available online. You will need to log into the ISU VPN if you are off campus.
- For nice visuals and simple explanations of vectors and linear combinations, see chapters 1 and 2 (the second and third videos) of the Linear Algebra series by 3blue1brown:
https://www.3blue1brown.com/topics/linear-algebra
|
|
Tuesday, August 29 (Week 2) |
- Recap of basic steps in our Hello, World example
- The purpose of an
attribute variable in the vertex shader is to pass per-vertex data from CPU to GPU
- The function
vertexAttribPointer associates the vertex attribute
with the data in your buffer
- Vertex shader must always set the built-in variable
gl_Position
- Fragment shader normally sets the built-in variable
gl_FragColor
- Animation by updating a uniform variable in each frame
- Using
requestAnimationFrame to create animation loop
- There are three kinds of variable modifiers in GLSL
- attribute variables are used to pass per-vertex data from the CPU to GPU
- uniform variables are used to pass uniform data from CPU to GPU
(same value in every vertex/fragment shader instance)
- varying variables are used to pass data from vertex shader to fragment shader (values are interpolated by the rasterizer)
- Using functions such as
uniform1f - set a uniform with one float value
uniform4f - set a vec4 uniform with four floating point values
uniform4fv - set a vec4 uniform with an array of values
- etc. (a short sketch of these calls appears after this list)
- See the "WebGL Reference Card" for GLSL types and functions
- Linear interpolation!
- Basic example: Converting Fahrenheit to Celsius
- Suppose we have some Fahrenheit temperature $x$ and we want the Celsius temperature $y$. Then
$$\beta = \frac{x - 32}{212 - 32}$$
tells you "how far" $x$ is along the scale from 32 to 212. We want to go the same fraction of the way along the Celsius scale from 0 to 100, i.e.,
$$y = 0 + \beta(100 - 0)$$
To say that the Fahrenheit and Celsius scales are "linearly related" just means, e.g., that if we are 25% of the way from 32 to 212 in Fahrenheit, we should be 25% of the way from 0 to 100 in Celsius.
More generally, to map the 32-to-212 scale to any range $A$ to $B$, you have $$\begin{eqnarray}
y &=& A + \beta\cdot(B - A) \\
&=& (1 - \beta)\cdot A + \beta\cdot B \\
&=& \alpha\cdot A + \beta\cdot B
\end{eqnarray}$$
where $\alpha = 1 - \beta$.
One way to think of this is that you have two quantities $A$ and $B$ to be "mixed", and the proportion $\beta$ tells you "how much $B$" and $\alpha$ is "how much $A$".
This is nice because it generalizes to interpolating within a triangle using barycentric coordinates.
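- A short sketch of setting uniforms from JS (the uniform names are made up):
    const colorLoc = gl.getUniformLocation(program, "u_color");
    gl.uniform4f(colorLoc, 1.0, 0.0, 0.0, 1.0);        // four separate floats
    gl.uniform4fv(colorLoc, [1.0, 0.0, 0.0, 1.0]);     // same thing, from an array

    const angleLoc = gl.getUniformLocation(program, "u_angle");
    gl.uniform1f(angleLoc, Math.PI / 4);               // one float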
|
- Code examples:
- Note: GL_example1a is the same as GL_example1 except that all the
boilerplate helper functions have been moved into ../util/cs336util.js
GL_example1a.html
GL_example1a_uniform_color.html
GL_example1a_with_animation.html
GL_example2_varying_variables.html
|
|
Thursday, August 24 (Week 1) |
- Odds and ends:
- Basic application structure; the
onload event in JS. See
foo.html and foo.js
(output goes to JS console).
- Setting up a WebGL context and clearing the canvas (see GL_example0)
- RGBA encoding of color as four floats
- The graphics context is a state machine (function calls rely on lots of internal state)
- The idea of binding a buffer or shader to become "the one I'm currently talking about"
- Binding points ARRAY_BUFFER, ELEMENT_ARRAY_BUFFER
- Overview of the steps involved in our "Hello, World!" application
- (Initialization)
- Create context
- Load and compile shaders
- Create buffers
- Bind each buffer and fill with data
- (Each frame)
- Bind shader
- For each attribute...
- Find attribute index
- Enable the attribute
- Bind a buffer with the attribute data
- Set attribute pointer to the buffer
- Set uniform variables, if any
- Draw, specifying primitive type
- Options for primitives: TRIANGLES, LINES, LINE_STRIP, etc.
- See ListExample.html for a demo
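- A condensed sketch of those steps (helper details omitted: createProgram stands in for the load-and-compile step, and vertices is assumed to be a Float32Array):
    // initialization
    const gl = canvas.getContext("webgl");
    const program = createProgram(gl, vshaderSource, fshaderSource);  // load and compile shaders
    const buffer = gl.createBuffer();
    gl.bindBuffer(gl.ARRAY_BUFFER, buffer);                  // bind = "the one I'm talking about"
    gl.bufferData(gl.ARRAY_BUFFER, vertices, gl.STATIC_DRAW);

    // each frame
    gl.clear(gl.COLOR_BUFFER_BIT);
    gl.useProgram(program);                                  // bind the shader program
    const positionIndex = gl.getAttribLocation(program, "a_Position");
    gl.enableVertexAttribArray(positionIndex);
    gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
    gl.vertexAttribPointer(positionIndex, 2, gl.FLOAT, false, 0, 0);
    gl.drawArrays(gl.TRIANGLES, 0, 3);                       // primitive type, start index, count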
|
- Chapters 2 and 3 of the teal book provide a detailed and careful overview of the steps described above.
- Also highly recommended: the first chapter (but only the first chapter!) of
https://webglfundamentals.org/
- Read and experiment with GL_example1 below
- try changing the vertices
- try the commented-out lines in the draw() function
- Code examples:
GL_example0.html
GL_example1.html
foo.html
ListExample.html
(You can view the associated html and javascript source in the developer tools (Ctrl-Alt-i), or just grab everything directly from the examples/intro/ directory of https://stevekautz.com/cs336f23/.
)
|
|
Tuesday, August 22 (Week 1) |
- Introduction
- This is a course in 3D rendering using OpenGL, not a course in developing GUIs!
- WebGL is a set of browser-based JavaScript bindings for OpenGL ES 2.0, which is essentially OpenGL 2.0 with the deprecated stuff and fancy features removed
- Overview of the GPU pipeline:
- (Model - a set of vertices organized into a "mesh" of triangles)
- -> Vertex processing (*)
- -> Primitive assembly (and clipping)
- -> Rasterization
- -> Fragment processing (*)
- -> (Framebuffer - graphics memory mapped to an actual display window)
- (*) Vertex and fragment processing stages are programmed via "shaders" using GLSL, the OpenGL shading language
|
- Read the syllabus
- See the Resources page for textbook information
- Learn JavaScript (see Resources page for ideas)
|