Topics |
Refs |
|
This document is updated frequently, so remember to refresh your browser.
|
|
Tuesday, September 19 (Week 5) |
- To perform a transformation $M$ wrt some other frame $\mathcal{F'} = \mathcal{F}A$, use the matrix $AMA^{-1}$
- A general matrix for orthographic projections
- Specified by six "clipping planes" defining a rectangular box (the view volume) relative to the camera frame
- Deriving an orthographic projection matrix: the ranges [left, right], [bottom, top], and [-near, -far] are each scaled into [-1, 1] using three Fahrenheit-to-Celsius conversions (see the sketch at the end of this list)
- An orthographic projection does not account for perspective
- See comment #8 in
DepthWithView.js
- Digression:
- With an orthonormal basis, we can calculate the dot product of vectors
from their coordinates (like in a physics book!)
- If $\mathcal{B}$ is an orthonormal basis and $\vec{u}$ and $\vec{v}$ have coordinates $\underline{c}$ and $\underline{d}$, respectively, w.r.t. $\mathcal{B}$, then
$\vec{u} \cdot \vec{v} = \underline{c}^T\underline{d} = c_1d_1 + c_2d_2 + c_3d_3$
- When $\vec{u}$ and $\vec{v}$ are unit vectors, $\vec{u} \cdot \vec{v}$ is the cosine of the angle between them
- Thus $\vec{u} \cdot \vec{v}$ is 1 when $\vec{u}$ and $\vec{v}$ point in the same direction ($-1$ when they point in opposite directions), and is 0 when they are orthogonal
- A matrix is orthogonal (sometimes called orthonormal) if its columns represent unit-length vectors that are orthogonal to each other.
- Key point: If $A$ is orthogonal, then $A^{-1} = A^T$ (the inverse is just the transpose)
- The lookAt matrix: define the camera frame by specifying:
- $\widetilde{\rm{eye}}$ - where is the camera?
- $\widetilde{\rm{at}}$ - what's it pointed at?
- $\vec{up}$ - which way is up?
- Calculate basis $\shortcvec{\vec{x}}{\vec{y}}{\vec{z}}$ for camera with two cross products:
- $\vec{z} = \widetilde{\rm{eye}} - \widetilde{\rm{at}}$, normalized
- $\vec{x} = \vec{up} \times \vec{z}$, normalized
- $\vec{y} = \vec{z} \times \vec{x}$
- $R$ = matrix whose columns are the coordinates of
$\vec{x},\vec{y},\vec{z}$. $R$ is orthogonal, so $R^{-1} = R^T$
- $T$ = Translate($\widetilde{\rm{eye}}$), so $T^{-1}$ = Translate($-\widetilde{\rm{eye}}$)
- view matrix is inverse of $TR$, which is $R^TT^{-1}$, i.e., the transpose of $R$ times Translate($-\widetilde{\rm{eye}}$)
- See comment #9 in DepthWithView.js (and the sketch at the end of this list)
- Spinning cube example
- updates the model transformation once each frame, multiplying it by a one-half degree rotation about one of the axes
- Edit the animation loop (lines 255-295 of RotatingCube.js) to see the difference between multiplying on the left vs. multiplying on the right
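- A sketch of the orthographic projection derivation in code (a minimal illustration assuming the three.js Matrix4 type from the examples; Matrix4.set takes its entries in row-major order):
      // Hypothetical helper (not course code): build an orthographic projection
      // from the six clipping planes.  Each range [left, right], [bottom, top],
      // and [-near, -far] is mapped linearly onto [-1, 1]; note the sign flip
      // on z, since clip space is left-handed.
      function makeOrtho(left, right, bottom, top, near, far)
      {
        return new THREE.Matrix4().set(
          2 / (right - left), 0, 0, -(right + left) / (right - left),
          0, 2 / (top - bottom), 0, -(top + bottom) / (top - bottom),
          0, 0, -2 / (far - near), -(far + near) / (far - near),
          0, 0, 0, 1);
      }
      // Sanity check: a point with z = -near maps to clip z = -1,
      // and a point with z = -far maps to clip z = +1.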
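- A sketch of the lookAt construction (again assuming the three.js Vector3 and Matrix4 types; compare with comment #9 in DepthWithView.js):
      // Hypothetical sketch: build the view matrix from eye, at, and up.
      function makeLookAt(eye, at, up)
      {
        // camera basis from two cross products
        var z = new THREE.Vector3().subVectors(eye, at).normalize();
        var x = new THREE.Vector3().crossVectors(up, z).normalize();
        var y = new THREE.Vector3().crossVectors(z, x);

        // R has columns x, y, z; R is orthogonal, so R^{-1} = R^T.
        // Writing the coordinates as rows below gives R^T directly.
        var rotInverse = new THREE.Matrix4().set(
          x.x, x.y, x.z, 0,
          y.x, y.y, y.z, 0,
          z.x, z.y, z.z, 0,
          0,   0,   0,   1);

        // T^{-1} = Translate(-eye)
        var transInverse = new THREE.Matrix4().makeTranslation(-eye.x, -eye.y, -eye.z);

        // view = (TR)^{-1} = R^T T^{-1}
        return new THREE.Matrix4().multiplyMatrices(rotInverse, transInverse);
      }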
|
RotatingCube.html
|
|
Thursday, September 14 (Week 4) |
- "Pseudocode" for standard matrices
- RotateX($\theta$), RotateY($\theta$), RotateZ($\theta$)
- Scale($s_x, s_y, s_z$)
- Translate($t_x, t_y, t_z$)
- Remarks on matrix inverse
- $AA^{-1} = A^{-1}A = I$
- Not every matrix has an inverse, but...
- Standard transformation matrices do, and are easy to invert, e.g.
inverse of Scale(2, 3, 1) is Scale(1/2, 1/3, 1)
- For the inverse of a product, reverse the order: $(AB)^{-1} = B^{-1}A^{-1}$
- The depth buffer
- The depth buffer algorithm for hidden surface removal:
      initialize all locations of the depth buffer to "infinity"
      for each fragment (x, y):
          let z' = value in depth buffer at (x, y)
          if gl_FragCoord.z < z':
              set depth buffer value at (x, y) to gl_FragCoord.z
              run the fragment shader
          else:
              do nothing
-
gl_FragCoord.z is a built-in variable available in the fragment
shader that contains the relative depth (distance from view point)
of the fragment, scaled to the range [0, 1]
(see the fragment shader in Depth0.js)
-
Clip space is left-handed! (i.e. larger z-coordinate means "farther away")
- To enable depth testing, we need to include:
gl.enable(gl.DEPTH_TEST); in the main function
gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT); in the draw function
- (See the comments marked "***" in Depth.js)
- In almost all cases, we think of all vertices as being transformed by
three matrices, called the model, view, and projection
(i.e. projection * view * model; see the sketch at the end of this list)
- The model transformation takes model coordinates to world coordinates (i.e. places an instance of a model in the scene)
- The view transformation takes world coordinates to eye (camera) coordinates
- The projection transformation takes eye coordinates to clip coordinates
(determines the visible region of the scene)
- Last time: if point $\widetilde{p}$ has coordinates $\underline{c}$ w.r.t. a frame $\mathcal{F}$, and $M$ transforms $\mathcal{F}$ to a new frame $\mathcal{F}' = \mathcal{F}M$, then $\widetilde{p}$ has coordinates $M^{-1}\underline{c}$ with respect to the new frame $\mathcal{F}'$.
- This is how we define a camera or view matrix: If $M$ is the transformation to the camera frame, then multiplying by $M^{-1}$ is the view transformation, i.e., it gives you the coordinates of everything in the scene, relative to the camera.
- The idea of the projection matrix is to choose what region gets mapped into clip space. By default, this is just a cube centered at the camera's origin that goes from -1 to 1 in all three dimensions.
- A projection will always flip the z-axis (since clip space is left-handed)
- Simple case: to shift the location of the near and far clipping planes, just do a Celsius-to-Fahrenheit conversion of the z values. See the comments for the numbered examples
3 and 4 in DepthWithView.js.
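- A sketch of combining the three matrices on the JavaScript side and sending the product to a vertex shader uniform (the variable and uniform names here are assumptions for illustration, not necessarily the ones used in DepthWithView.js):
      // model, view, and projection are THREE.Matrix4 objects; "transform" is an
      // assumed mat4 uniform name, and program is the linked shader program.
      var transform = new THREE.Matrix4()
        .multiply(projection)   // applied last
        .multiply(view)
        .multiply(model);       // applied first (rightmost factor)
      // Matrix4.elements is column-major, which is what uniformMatrix4fv expects
      var transformLoc = gl.getUniformLocation(program, "transform");
      gl.uniformMatrix4fv(transformLoc, false, transform.elements);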
|
- The section "Correctly Handling Foreground and Background Objects" in teal book ch. 7 describes the depth buffer
- Code examples:
DepthWithView.html
(Work through the detailed comments numbered 1 through 9 - uncomment the relevant code to try each one out.)
Depth0.html
(Visualization of gl_FragCoord.z)
Depth.html
(Illustrates how to enable depth testing)
|
|
Tuesday, September 12 (Week 4) |
- An affine transformation is a linear transformation followed by a translation (shift)
- in one dimension: $f(x) = mx + b$
- An affine matrix is any 4x4 matrix with $\cvec{0}{0}{0}{1}$ in bottom row
- The product of affine matrices is an affine matrix
- An affine matrix represents an affine transformation
- An affine matrix $M$ can always be decomposed into
a linear (or "rotational") part $R$ followed by a translational part $T$, that is, $M = TR$ where $T$ is a translation and $R$ is linear
\[
\begin{bmatrix}1 & 0 & 0 & t_x \\
0 & 1 & 0 & t_y \\
0 & 0 & 1 & t_z \\
0 & 0 & 0 & 1
\end{bmatrix}
\begin{bmatrix}a & d & g & 0 \\
b & e & h & 0 \\
c & f & i & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}
=
\begin{bmatrix}a & d & g & t_x \\
b & e & h & t_y \\
c & f & i & t_z \\
0 & 0 & 0 & 1
\end{bmatrix}
\]
- Note that $RT$ is also an affine transformation, and its linear part is $R$, but its translational part is not $T$. That is, $RT = T'R$, where $T'$ is a translation that is generally not equal to $T$.
- Observation 1: Suppose point $\widetilde{p}$ has coordinates $\underline{c}$ with respect to frame $\mathcal{F}$, and we transform by matrix $M$ to get a new point
$\widetilde{p}' = \mathcal{F}(M\underline{c})$.
There are two ways to look at this new point:
- $\widetilde{p}' = \mathcal{F}(M\underline{c})$ - new coordinates, interpreted w.r.t. the existing frame $\mathcal{F}$
- $\widetilde{p}' = (\mathcal{F}M)\underline{c}$ - same coordinates, interpreted w.r.t. a transformed frame $\mathcal{F}M$
- Note that for an affine matrix $M$ and frame $\mathcal{F}$, the product $\mathcal{F}M$ is also a frame, transformed by $M$
- Observation 2: Consider a transformation $A$ and consider a set of transformed points $\mathcal{F}A\underline{c}$. When we apply another transformation $B$, we can either do
- $\mathcal{F}BA\underline{c}$ - $B$ is applied w.r.t. the original frame $\mathcal{F}$ ("extrinsically")
- $(\mathcal{F}A)B\underline{c}$ - $B$ is applied w.r.t. the transformed (local) frame $\mathcal{F}A$ ("intrinsically")
- The "left-of"" rule: a transformation matrix is always applied with respect to the frame immediately to its left
- Try this using the first two radio buttons in
Transformations2.html
- Observation 3: Suppose $T$ is "translate 3 units right" and $R$ is "rotate 90 degrees ccw". There are two ways of thinking about the transformation $RT$, say:
- first shift 3 units in the x direction, then rotate 90 degrees about the original origin (thinking extrinsically: multiply by R on the left)
- first rotate 90 degrees about the origin, and then shift 3 units in the rotated x direction, i.e., up (thinking intrinsically: multiply by T on the right)
- Important: The difference only exists in our thinking! It's the same transformation $RT$ in both cases (see the sketch at the end of this list)
- Try this using the first two radio buttons in
Transformations2.html
- A different question: suppose we want to keep the same point, but find its new coordinates w.r.t. a transformed frame $\mathcal{F}M$?
- Motive: we need to define a transformation enabling us to describe
locations of existing vertices with respect to the location and orientation of
a virtual camera or view point
- Example: Suppose we have a coordinate system $\mathcal{F}$ centered at Main and Duff, and a coordinate system $\mathcal{F'}$ at the Clocktower, and that $M$ is the transformation that takes Main and Duff to the Clocktower, i.e., $\mathcal{F'} = \mathcal{F}M$. If $\underline{c}$ is the coordinate vector that takes you from Main and Duff to Steve's house, then how do you get from the Clocktower to Steve's house?
- First invert $M$ to get from Clocktower to Main & Duff;
then use coordinates $\underline{c}$ to get to Steve's.
- That is, $M^{-1}\underline{c}$ gives you the coordinates of Steve's house with respect to the Clocktower
- More generally, if point $\widetilde{p}$ has coordinates $\underline{c}$ w.r.t. a frame $\mathcal{F}$, and $\mathcal{F}' = \mathcal{F}M$, then $\widetilde{p}$ has coordinates $M^{-1}\underline{c}$ w.r.t. $\mathcal{F}'$. That is:
$$\begin{align}
\mathcal{F}' &= \mathcal{F}M \\
\Longrightarrow \mathcal{F}'M^{-1} &= \mathcal{F} \\
\Longrightarrow \mathcal{F}'M^{-1}\underline{c} &= \mathcal{F}\underline{c} = \widetilde{p}
\end{align}$$
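- A quick numerical check of Observation 3 (a sketch using the three.js Matrix4 type, not the code behind Transformations2.html):
      // T = translate 3 units right, R = rotate 90 degrees ccw about z
      var T = new THREE.Matrix4().makeTranslation(3, 0, 0);
      var R = new THREE.Matrix4().makeRotationZ(Math.PI / 2);

      // RT: shift right, then rotate about the original origin (extrinsic view),
      // or rotate first, then shift along the rotated x-axis, i.e. up (intrinsic
      // view) -- either way it is the same matrix
      var RT = new THREE.Matrix4().multiplyMatrices(R, T);
      var TR = new THREE.Matrix4().multiplyMatrices(T, R);

      var origin = new THREE.Vector3(0, 0, 0);
      console.log(origin.clone().applyMatrix4(RT));  // approximately (0, 3, 0)
      console.log(origin.clone().applyMatrix4(TR));  // (3, 0, 0)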
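- A tiny numerical version of the Clocktower example (a sketch using the three.js Matrix4 type; the specific numbers are made up for illustration):
      // Say the Clocktower frame is Main & Duff translated 2 units east:
      // F' = F M with M = Translate(2, 0, 0)
      var M = new THREE.Matrix4().makeTranslation(2, 0, 0);
      var MInverse = new THREE.Matrix4().makeTranslation(-2, 0, 0);

      // Steve's house has coordinates (5, 3, 0) w.r.t. Main & Duff...
      var c = new THREE.Vector3(5, 3, 0);

      // ...so w.r.t. the Clocktower its coordinates are M^{-1} c = (3, 3, 0)
      console.log(c.clone().applyMatrix4(MInverse));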
|
- The idea of finding the coordinates of a point with respect to some other frame has a lovely explanation in Chapter 13 ("Change of basis") of the linear algebra videos at 3blue1brown.com. Chapters 2, 3, and 4 are also relevant to what we have been doing lately.
- Code examples: Experiment with compositions of transformations using the key controls:
Transformations2.html
|
|
Thursday, September 7 (Week 3) |
- Recall that we use 4 coordinates to represent 3d points and vectors.
- The 4th coordinate is 0 for a vector and 1 for a point
- A linear transformation can be represented by a 4x4 matrix whose bottom row and right column are both $\cvec{0}{0}{0}{1}$ (The upper left 3x3 submatrix is the same as what we derived last time.)
- Example: A matrix that scales by a factor of $s_x$ in the x direction, $s_y$ in the y-direction, and $s_z$ in the z-direction looks like:
\[
\begin{bmatrix}s_x & 0 & 0 & 0 \\
0 & s_y & 0 & 0 \\
0 & 0 & s_z & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}
\begin{bmatrix}x \\ y \\ z \\ 1 \end{bmatrix}
=
\begin{bmatrix}s_x \cdot x \\ s_y \cdot y\\ s_z \cdot z \\ 1 \end{bmatrix}
\]
- For any "standard basis"", a general rotation matrix (for rotations of angle $\theta$ about the z-axis) looks like this:
\[
\begin{bmatrix}\cos(\theta) & -\sin(\theta) & 0 & 0 \\
\sin(\theta) & \cos(\theta) & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}
\]
- An orthonormal basis consists of three unit vectors that are orthogonal to each other
- A standard basis is an orthonormal basis that is right-handed
- A translation, or shift, is not a linear transformation!
- A linear transformation always takes the zero vector to the zero vector (and hence always takes the origin to the origin)
- A translation is represented by a matrix of the form below:
\[
\begin{bmatrix}1 & 0 & 0 & t_x \\
0 & 1 & 0 & t_y \\
0 & 0 & 1 & t_z \\
0 & 0 & 0 & 1
\end{bmatrix}
\begin{bmatrix}x \\ y \\ z \\ 1 \end{bmatrix}
=
\begin{bmatrix}x + t_x \\ y + t_y \\ z + t_z \\ 1 \end{bmatrix}
\]
- This illustrates one more reason it's useful to represent points and vectors using four coordinates: we can implement translations using matrix multiplication
- (In reality, we are performing a linear transformation in 4-dimensional space, but then looking only at the subspace in which the 4th coordinate is 1.)
- Composing transformations is matrix multiplication
- If $R$ and $S$ are matrices for two transformations and $\underline{c}$ is a coordinate vector, then
\[
(SR)\underline{c} = S(R\underline{c})
\]
that is, the matrix $SR$ represents the transformation that does $R$ first, then $S$
- In general $SR \neq RS$; for example, "rotate and then scale" is almost never the same transformation as "scale and then rotate" (see the sketch at the end of this list)
- Overview of matrix operations from the three.js library
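- A small numerical check of the last two points (a sketch assuming the three.js Matrix4 type mentioned in the refs; the names are illustrative only):
      // a translation matrix really does shift a point (4th coordinate is 1)
      var p = new THREE.Vector3(1, 2, 3);
      var T = new THREE.Matrix4().makeTranslation(5, 0, 0);
      console.log(p.clone().applyMatrix4(T));   // (6, 2, 3)

      // composition order matters: SR ("rotate, then scale") vs RS ("scale, then rotate")
      var R = new THREE.Matrix4().makeRotationZ(Math.PI / 2);  // 90 degrees ccw
      var S = new THREE.Matrix4().makeScale(2, 1, 1);
      var SR = new THREE.Matrix4().multiplyMatrices(S, R);
      var RS = new THREE.Matrix4().multiplyMatrices(R, S);
      console.log(new THREE.Vector3(1, 0, 0).applyMatrix4(SR));  // roughly (0, 1, 0)
      console.log(new THREE.Vector3(1, 0, 0).applyMatrix4(RS));  // roughly (0, 2, 0)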
|
- See sections 3.2 - 3.5 of Gortler (ignore 3.6 on normal vectors for now)
- The teal book section "Translate and Then Rotate" in ch. 4 goes into composing multiple transformations
- You'll need the Matrix4 type from three.js; see below for the link.
- See the documentation for the three.js Matrix4 type
- Also see chapter 4 of the Linear Algebra series by 3blue1brown:
https://www.3blue1brown.com/topics/linear-algebra
- Code examples (edit the main method to try different transformations):
Transformations.html
Transformations1.html
Note: Transformations1 requires the three.js library. Please use
this version:
https://stevekautz.com/cs336f23/three.js (1.2 MB), not the latest one from threejs.org.
|
|
Tuesday, September 5 (Week 3) |
- Preview of typical vertex processing steps:
- Model transformation places instance of model into world coordinates
- View transformation transforms world into eye/camera coordinates (scene from eye/camera view point)
- Projection transformation transforms a viewable region of scene into clip coordinates (a 2x2x2 cube representing the "viewable" triangles to be handed to the rasterizer)
- A 3d vector space consists of all 3D vectors with the basic operations above
- A basis for a 3d vector space is a set of three linearly independent vectors
$\vec{b_1}, \vec{b_2}, \vec{b_3}$
such that every possible 3d vector can be written as a linear combination of $\vec{b_1}, \vec{b_2}, \vec{b_3}$
- A 3d affine space consists of all 3d vectors and points
- A 3d frame (coordinate system) is a basis along with a designated point called the origin
- Matrices and matrix multiplication
- Representing a basis as a row matrix of three vectors,
$\mathcal{B} = \shortcvec{\vec{b_1}}{\vec{b_2}}{\vec{b_3}}$
- Representing a frame as a row matrix of three vectors and a point,
$\mathcal{F} = \cvec{\vec{b_1}}{\vec{b_2}}{\vec{b_3}}{\widetilde{o}}$
- If vector $\vec{u} = u_1\vec{b_1} + u_2\vec{b_2} + u_3\vec{b_3}$, then the column matrix \[ \underline{c} = \begin{bmatrix} u_1 \\ u_2 \\ u_3 \\ 0 \end{bmatrix}
= \cvec{u_1}{u_2}{u_3}{0}^T
\]
is a coordinate vector for $\vec{u}$ with respect to frame $\mathcal{F}$. Note that in terms of matrix multiplication, we can write
\[
\mathcal{F}\underline{c} = \cvec{\vec{b_1}}{\vec{b_2}}{\vec{b_3}}{\widetilde{o}}\cvec{u_1}{u_2}{u_3}{0}^T = \vec{u}
\]
(where the superscript $T$ indicates the matrix transpose)
- Likewise, if point $\widetilde{p} = u_1\vec{b_1} + u_2\vec{b_2} + u_3\vec{b_3} + \widetilde{o}$, then the column matrix $\cvec{u_1}{u_2}{u_3}{1}^T$ is a coordinate
vector for $\widetilde{p}$ w.r.t. frame $\mathcal{F}$.
- Key point: the coordinates of a vector or point depend on what basis or frame is being used, and the coordinates only have meaning with respect to a known basis or frame.
- Linear transformations
- key feature: if you transform points that are collinear, then the resulting points are also collinear...
- ...so triangles are transformed to triangles
- For a given basis
$\mathcal{B} = \shortcvec{\vec{e_1}}{\vec{e_2}}{\vec{e_3}}$, a linear transformation $f$ is represented by a matrix $M$ such that if $\underline{c}$
is a coordinate vector w.r.t. $\mathcal{B}$ for some vector $\vec{u}$, then
$M\underline{c}$ is the coordinate vector for the transformed vector $f(\vec{u})$
- The columns of $M$ are just the coordinate vectors for
$f(\vec{e_1}), f(\vec{e_2})$, and $f(\vec{e_3})$
- How we know this: Look at what $f$ does to the basis vectors. Each of the vectors $f(\vec{e_1}), f(\vec{e_2})$, and $f(\vec{e_3})$ has a coordinate vector w.r.t. the original basis $\mathcal{B}$. Suppose we label those coordinates as
\[ \begin{bmatrix} m_{11} \\ m_{21} \\ m_{31} \end{bmatrix},
\begin{bmatrix} m_{12} \\ m_{22} \\ m_{32} \end{bmatrix},
\begin{bmatrix} m_{13} \\ m_{23} \\ m_{33} \end{bmatrix}
\]
respectively (ignoring the 4th coordinate for readability).
Suppose that some vector $\vec{u}$ has coordinates $\shortcvec{c_1}{c_2}{c_3}^T$ w.r.t. $\mathcal{B}$. Then using the fact that $f$ is linear, and rearranging to find
the three coefficients of $\vec{e_1}$, $\vec{e_2}$, and $\vec{e_3}$,
\[
\begin{eqnarray*}
f(\vec{u}) &=& f(c_1\vec{e_1} + c_2\vec{e_2} + c_3\vec{e_3}) \\
&=& c_1f(\vec{e_1}) + c_2f(\vec{e_2}) + c_3f(\vec{e_3}) \\
&=& c_1(m_{11}\vec{e_1} + m_{21}\vec{e_2} + m_{31}\vec{e_3})
+ c_2(m_{12}\vec{e_1} + m_{22}\vec{e_2} + m_{32}\vec{e_3})
+ c_3(m_{13}\vec{e_1} + m_{23}\vec{e_2} + m_{33}\vec{e_3}) \\
&=& (m_{11}c_1 + m_{12}c_2 + m_{13}c_3)\vec{e_1}
+ (m_{21}c_1 + m_{22}c_2 + m_{23}c_3)\vec{e_2}
+ (m_{31}c_1 + m_{32}c_2 + m_{33}c_3)\vec{e_3}
\end{eqnarray*}
\]
which looks like a mess, but notice that if we define
$M$ to be the matrix formed by taking the three coordinate vectors for
$f(\vec{e_1}), f(\vec{e_2})$, and $f(\vec{e_3})$ as its columns,
\[
M = \begin{bmatrix} m_{11} & m_{12} & m_{13} \\
m_{21} & m_{22} & m_{23} \\
m_{31} & m_{32} & m_{33}
\end{bmatrix}
\]
then the coefficients of $\vec{e_1}$, $\vec{e_2}$, and $\vec{e_3}$ above --- that is,
the coordinates of $f(\vec{u})$ --- are just the three
entries of the matrix $M\underline{c}$.
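- A tiny numerical illustration of the key fact above, in plain JavaScript (no library assumed): applying $M$ to the coordinates of a basis vector just picks out the corresponding column of $M$.
      // 3x3 matrix stored as an array of rows; c is a coordinate vector [c1, c2, c3]
      function applyMatrix(M, c) {
        return M.map(function (row) {
          return row[0] * c[0] + row[1] * c[1] + row[2] * c[2];
        });
      }
      var M = [[1, 4, 7],
               [2, 5, 8],
               [3, 6, 9]];
      console.log(applyMatrix(M, [1, 0, 0]));  // [1, 2, 3] -- the first column of M
      console.log(applyMatrix(M, [0, 1, 0]));  // [4, 5, 6] -- the second column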
|
- See sections 2.3 and 3.3 of Gortler
- In the teal book, the section "Moving, Rotating, and Scaling" in chapter 3 is a basic introduction to transformations
- Also see chapter 3 of the Linear Algebra series by 3blue1brown:
https://www.3blue1brown.com/topics/linear-algebra
|
|
Thursday, August 31 (Week 2) |
- When you invoke
gl.vertexAttribPointer, you can specify a stride
and offset in bytes
- This means that you can interleave the data for several vertex attributes in one buffer (see the sketch at the end of this list)
- See
gl_example1a_two_colors for an example
- Indexed rendering (see GL_example1_indexed)
- Debugging graphics code is hard!
- Always work incrementally, starting from a working example
- You can use the Chrome or Firefox debugger to check what data you're sending to the GPU, or to verify how JS functions work
- In graphics, we need to deal with multiple "frames of reference" (coordinate systems)
- A point is a geometric location (and may
have different coordinates depending on the frame)
- The difference between two points is a vector, representing the distance and direction from one point to another
- A point plus a vector is a point
- Vector operations - scaling and addition
- A vector with length 1 is called a unit vector
- A nonzero vector can always be divided by its length to create a unit vector with the same direction (called "normalizing" the vector)
- The dot product between two unit vectors is the cosine of the angle between them
- Two nonzero vectors are orthogonal (perpendicular) if their dot product is zero
- Linear combinations of vectors (scale and add)
- A basis for a 2d vector space (e.g., a plane) is a set of two non-parallel vectors
$\vec{b_1}, \vec{b_2}$,
such that every possible 2d vector can be written as a linear combination of $\vec{b_1}, \vec{b_2}$
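- A sketch of interleaving two vertex attributes in one buffer (the attribute names a_Position and a_Color, the 2D positions, and the variables vertexBuffer and shader are assumptions for illustration; see gl_example1a_two_colors for the actual example):
      // each vertex: x, y, r, g, b  (5 floats = 20 bytes per vertex)
      var vertexData = new Float32Array([
         0.0,  0.5,   1.0, 0.0, 0.0,
        -0.5, -0.5,   0.0, 1.0, 0.0,
         0.5, -0.5,   0.0, 0.0, 1.0
      ]);
      var FSIZE = vertexData.BYTES_PER_ELEMENT;  // 4 bytes per float

      // vertexBuffer was created earlier; shader is the linked program object
      gl.bindBuffer(gl.ARRAY_BUFFER, vertexBuffer);
      gl.bufferData(gl.ARRAY_BUFFER, vertexData, gl.STATIC_DRAW);

      var positionIndex = gl.getAttribLocation(shader, "a_Position");
      var colorIndex = gl.getAttribLocation(shader, "a_Color");
      gl.enableVertexAttribArray(positionIndex);
      gl.enableVertexAttribArray(colorIndex);

      // stride is 5 floats; position starts at byte 0, color at byte 2 * FSIZE
      gl.vertexAttribPointer(positionIndex, 2, gl.FLOAT, false, 5 * FSIZE, 0);
      gl.vertexAttribPointer(colorIndex, 3, gl.FLOAT, false, 5 * FSIZE, 2 * FSIZE);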
|
- Read Sections 2.1, 2.2, and 3.1 of the book by Gortler (Foundations of 3D computer graphics). This is just a few pages and summarizes pretty much everything we talked about today. Just log into the ISU library and search for the title. The entire book is available online. You will need to log into the ISU VPN if you are off campus.
- For nice visuals and simple explanations of vectors and linear combinations, see chapters 1 and 2 (the second and third videos) of the Linear Algebra series by 3blue1brown:
https://www.3blue1brown.com/topics/linear-algebra
|
|
Tuesday, August 29 (Week 2) |
- Recap of basic steps in our Hello, World example
- The purpose of an
attribute variable in the vertex shader is to pass per-vertex data from CPU to GPU
- The function
vertexAttribPointer associates the vertex attribute
with the data in your buffer
- Vertex shader must always set the built-in variable
gl_Position
- Fragment shader normally sets the built-in variable
gl_FragColor
- Animation by updating a uniform variable in each frame
- Using
requestAnimationFrame to create animation loop
- There are three kinds of variable modifiers in GLSL
- attribute variables are used to pass per-vertex data from the CPU to GPU
- uniform variables are used to pass uniform data from CPU to GPU
(same value in every vertex/fragment shader instance)
- varying variables are used to pass data from vertex shader to fragment shader (values are interpolated by the rasterizer)
- Using functions such as
uniform1f - set a uniform with one float value
uniform4f - set a vec4 uniform with four floating point values
uniform4fv - set a vec4 uniform with an array of values
- etc...
- See the "WebGL Reference Card" for GLSL types and functions
- Linear interpolation!
- Basic example: Converting Fahrenheit to Celsius
- Suppose we have some Fahrenheit temperature $x$ and we want the Celsius temperature $y$. Then
$$\beta = \frac{x - 32}{212 - 32}$$
tells you "how far" $x$ is along the scale from 32 to 212. We want to go the same fraction of the way along the Celsius scale from 0 to 100, i.e.,
$$y = 0 + \beta(100 - 0)$$
To say that the Fahrenheit and Celsius scales are "linearly related" just means, e.g., that if we are 25% of the way from 32 to 212 in Fahrenheit, we should be 25% of the way from 0 to 100 in Celsius.
More generally, to map the 32-to-212 scale to any range $A$ to $B$, you have $$\begin{eqnarray}
y &=& A + \beta\cdot(B - A) \\
&=& (1 - \beta)\cdot A + \beta\cdot B \\
&=& \alpha\cdot A + \beta\cdot B
\end{eqnarray}$$
where $\alpha = 1 - \beta$.
One way to think of this is that you have two quantities $A$ and $B$ to be "mixed", and the proportion $\beta$ tells you "how much $B$" and $\alpha$ is "how much $A$".
This is nice because it generalizes to interpolating within a triangle using barycentric coordinates.
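- The same idea as a tiny JavaScript function (a generic sketch, not code from the course examples):
      // map a value x from the range [a, b] onto the range [A, B]
      function lerp(x, a, b, A, B) {
        var beta = (x - a) / (b - a);      // "how far" x is along [a, b]
        return (1 - beta) * A + beta * B;  // mix A and B in that proportion
      }
      console.log(lerp(212, 32, 212, 0, 100));  // 100
      console.log(lerp(77, 32, 212, 0, 100));   // 25  (77 F is 25 C)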
|
- Code examples:
- Note: GL_example1a is the same as GL_example1 except that all the
boilerplate helper functions have been moved into ../util/cs336util.js
GL_example1a.html
GL_example1a_uniform_color.html
GL_example1a_with_animation.html
GL_example2_varying_variables.html
|
|
Thursday, August 24 (Week 1) |
- Odds and ends:
- Basic application structure; the
onload event in JS. See
foo.html and foo.js
(output goes to JS console).
- Setting up a WebGL context and clearing the canvas (see GL_example0)
- RGBA encoding of color as four floats
- The graphics context is a state machine (function calls rely on lots of internal state)
- The idea of binding a buffer or shader to become "the one I'm currently talking about"
- Binding points ARRAY_BUFFER, ELEMENT_ARRAY_BUFFER
- Overview of the steps involved in our "Hello, World!" application
- (Initialization)
- Create context
- Load and compile shaders
- Create buffers
- Bind each buffer and fill with data
- (Each frame)
- Bind shader
- For each attribute...
- Find attribute index
- Enable the attribute
- Bind a buffer with the attribute data
- Set attribute pointer to the buffer
- Set uniform variables, if any
- Draw, specifying primitive type
- Options for primitives: TRIANGLES, LINES, LINE_STRIP, etc.
- See ListExample.html for a demo
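- The per-frame steps above, condensed into a sketch of a typical draw function (the names shader, vertexBuffer, and a_Position are assumptions for illustration; see GL_example1 for a working version):
      function draw()
      {
        // bind the shader (the linked program object)
        gl.useProgram(shader);

        // for each attribute: find its index, enable it, bind the buffer
        // holding its data, and point the attribute at that buffer
        var positionIndex = gl.getAttribLocation(shader, "a_Position");
        gl.enableVertexAttribArray(positionIndex);
        gl.bindBuffer(gl.ARRAY_BUFFER, vertexBuffer);
        gl.vertexAttribPointer(positionIndex, 2, gl.FLOAT, false, 0, 0);

        // set uniform variables, if any, then draw, specifying the primitive type
        gl.drawArrays(gl.TRIANGLES, 0, 3);
      }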
|
- Chapters 2 and 3 of the teal book provide a detailed and careful overview of the steps described above.
- Also highly recommended: the first chapter (but only the first chapter!) of
https://webglfundamentals.org/
- Read and experiment with GL_example1 below
- try changing the vertices
- try the commented-out lines in the draw() function
- Code examples:
GL_example0.html
GL_example1.html
foo.html
ListExample.html
(You can view the associated HTML and JavaScript source in the developer tools (Ctrl-Alt-i), or just grab everything directly from the examples/intro/ directory of https://stevekautz.com/cs336f23/.
)
|
|
Tuesday, August 22 (Week 1) |
- Introduction
- This is a course in 3D rendering using OpenGL, not a course in developing GUIs!
- WebGL is a set of browser-based JavaScript bindings for OpenGL ES 2.0, which is essentially OpenGL 2.0 with the deprecated stuff and fancy features removed
- Overview of the GPU pipeline:
- (Model - a set of vertices organized into a "mesh" of triangles)
- -> Vertex processing (*)
- -> Primitive assembly (and clipping)
- -> Rasterization
- -> Fragment processing (*)
- -> (Framebuffer - graphics memory mapped to an actual display window)
- (*) Vertex and fragment processing stages are programmed via "shaders" using GLSL, the OpenGL shading language
|
- Read the syllabus
- See the Resources page for textbook information
- Learn JavaScript (see Resources page for ideas)
|