## WebGL Engine From Scratch Part 2

(–part 1–)

### Introduction

Before moving on to more glamorous aspects of rendering 3D graphics, we must implement a scene graph and some 3D math classes so that we can instantiate meshes and move them around. Doing this will also require adding update and render loops to the engine. All of this will make future extension easier.

### Core Math

Before we can start to tackle transformation matrices, we must have basic 3D math classes to use. These are:

- 3D vector
- Quaternion (rotation)
- Matrix

Note that I am not placing the math classes under `engine.smthsmth`, because these classes are going to be used often and it is more convenient to keep math operations short to write.

```
/* long-winded, inconvenient format */
var v0 = new engine.math.Vector3( 0, 0, 0 );
var v1 = new engine.math.Vector3( 1, 1, 1 );
var v3 = engine.math.Vector3.Add( v0, v1 );

/* shorter, friendlier format */
var v0 = new Vec3( 0, 0, 0 );
var v1 = new Vec3( 1, 1, 1 );
var v3 = Vec3.Add( v0, v1 );
```

Defining the 3D vector is the easiest. We simply need an array of 3 numbers and a few utility methods.

```
class Vec3{
    constructor( x, y, z ){
        this.data = new Float32Array([ x||0, y||0, z||0 ]);
    }
    get x(){ return this.data[0]; } set x(val){ this.data[0] = val; }
    get y(){ return this.data[1]; } set y(val){ this.data[1] = val; }
    get z(){ return this.data[2]; } set z(val){ this.data[2] = val; }

    Set( x, y, z ){
        this.data[0] = x;
        this.data[1] = y;
        this.data[2] = z;
    }

    Length(){
        return Math.sqrt( this.LengthSqr() );
    }
    LengthSqr(){
        // the expression must begin on the same line as "return",
        // otherwise automatic semicolon insertion makes this return undefined
        return this.data[0]*this.data[0] +
               this.data[1]*this.data[1] +
               this.data[2]*this.data[2];
    }
    Normalize(){
        var l = this.Length();
        if( l == 0 ){ return this; }
        this.data[0] /= l;
        this.data[1] /= l;
        this.data[2] /= l;
        return this;
    }
}
```

More utility methods will need to be added in the future, but this will do for the moment. All of this should be quite self-explanatory.
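As a quick sanity check, the class can be exercised like this (the class is repeated in condensed form so the snippet runs on its own):

```javascript
// Condensed copy of the Vec3 class above, so this snippet is standalone.
class Vec3 {
    constructor( x, y, z ){ this.data = new Float32Array([ x||0, y||0, z||0 ]); }
    LengthSqr(){
        return this.data[0]*this.data[0] +
               this.data[1]*this.data[1] +
               this.data[2]*this.data[2];
    }
    Length(){ return Math.sqrt( this.LengthSqr() ); }
    Normalize(){
        var l = this.Length();
        if( l == 0 ){ return this; }
        this.data[0] /= l; this.data[1] /= l; this.data[2] /= l;
        return this;
    }
}

var v = new Vec3( 3, 4, 0 );
console.log( v.Length() );             // 5 (the classic 3-4-5 triangle)
console.log( v.Normalize().Length() ); // ≈ 1 (direction kept, length dropped to one)
```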

A quaternion is a lot trickier concept to get your head around. In my opinion it is easiest to just think of it as an axis-angle rotation, but stored in a normalized 4D vector format. Choosing quaternions over Euler angles or other ways to represent rotations comes down to how simply multiple quaternions can be combined or interpolated, without gimbal lock or other issues. See here for more detail…

```
class Quat{
    constructor(){
        this.data = new Float32Array([ 0.0, 0.0, 0.0, 1.0 ]);
    }
    get x(){ return this.data[0]; } set x(val){ this.data[0] = val; }
    get y(){ return this.data[1]; } set y(val){ this.data[1] = val; }
    get z(){ return this.data[2]; } set z(val){ this.data[2] = val; }
    get w(){ return this.data[3]; } set w(val){ this.data[3] = val; }

    SetEuler( heading, attitude, bank ){
        var c1 = Math.cos(heading);
        var s1 = Math.sin(heading);
        var c2 = Math.cos(attitude);
        var s2 = Math.sin(attitude);
        var c3 = Math.cos(bank);
        var s3 = Math.sin(bank);
        this.data[3] = Math.sqrt(1.0 + c1*c2 + c1*c3 - s1*s2*s3 + c2*c3) / 2.0;
        var w4 = (4.0 * this.data[3]);
        this.data[0] = (c2*s3 + c1*s3 + s1*s2*c3) / w4;
        this.data[1] = (s1*c2 + s1*c3 + c1*s2*s3) / w4;
        this.data[2] = (-s1*s3 + c1*s2*c3 + s2) / w4;
    }

    GetEuler(){
        var test = this.x*this.y + this.z*this.w;
        if (test > 0.499) { // singularity at north pole
            var heading = 2 * Math.atan2(this.x, this.w);
            var attitude = Math.PI/2;
            var bank = 0;
            return [ heading, attitude, bank ];
        }
        if (test < -0.499) { // singularity at south pole
            var heading = -2 * Math.atan2(this.x, this.w);
            var attitude = -Math.PI/2;
            var bank = 0;
            return [ heading, attitude, bank ];
        }
        var sqx = this.x*this.x;
        var sqy = this.y*this.y;
        var sqz = this.z*this.z;
        var heading = Math.atan2(2*this.y*this.w - 2*this.x*this.z, 1 - 2*sqy - 2*sqz);
        var attitude = Math.asin(2*test);
        var bank = Math.atan2(2*this.x*this.w - 2*this.y*this.z, 1 - 2*sqx - 2*sqz);
        return [ heading, attitude, bank ];
    }

    GetMat4(){
        var m = new Mat4();
        var xx = this.data[0] * this.data[0];
        var xy = this.data[0] * this.data[1];
        var xz = this.data[0] * this.data[2];
        var xw = this.data[0] * this.data[3];

        var yy = this.data[1] * this.data[1];
        var yz = this.data[1] * this.data[2];
        var yw = this.data[1] * this.data[3];

        var zz = this.data[2] * this.data[2];
        var zw = this.data[2] * this.data[3];

        m.m00 = 1 - 2 * ( yy + zz );
        m.m01 =     2 * ( xy - zw );
        m.m02 =     2 * ( xz + yw );

        m.m10 =     2 * ( xy + zw );
        m.m11 = 1 - 2 * ( xx + zz );
        m.m12 =     2 * ( yz - xw );

        m.m20 =     2 * ( xz - yw );
        m.m21 =     2 * ( yz + xw );
        m.m22 = 1 - 2 * ( xx + yy );

        m.m33 = 1.0;
        return m;
    }
}
```

All of this is mostly just ported from the examples here. I am not going to claim to have an exact understanding of how these formulas work, but rotations in general come down to sine and cosine relationships on each of the 3 axes.
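To make the axis-angle intuition from above concrete: a rotation of angle θ around a unit axis (ax, ay, az) packs into a quaternion as (ax·sin(θ/2), ay·sin(θ/2), az·sin(θ/2), cos(θ/2)). A small standalone sketch using plain arrays rather than the Quat class:

```javascript
// Build a quaternion [x, y, z, w] from a unit axis and an angle in radians.
function axisAngleToQuat( ax, ay, az, angle ){
    var s = Math.sin( angle * 0.5 );
    return [ ax * s, ay * s, az * s, Math.cos( angle * 0.5 ) ];
}

// A quarter turn around the Y axis.
var q = axisAngleToQuat( 0, 1, 0, Math.PI / 2 );

// A valid rotation quaternion is always unit length,
// which is why it can be thought of as a normalized 4D vector.
var len = Math.sqrt( q[0]*q[0] + q[1]*q[1] + q[2]*q[2] + q[3]*q[3] );
console.log( len ); // ≈ 1
```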

The last element we need is a 4 by 4 matrix. This is arguably the most “core” component of a 3D rendering engine. Representing transformations as matrices makes it easy to combine them, for example to make a child object’s transformation relative to its parent’s transformation. We will get to that in a minute in the Model Matrix section.

```
class Mat4{
    /* column-major format
    +----+----+----+----+
    |  0 |  4 |  8 | 12 |
    +----+----+----+----+
    |  1 |  5 |  9 | 13 |
    +----+----+----+----+
    |  2 |  6 | 10 | 14 |
    +----+----+----+----+
    |  3 |  7 | 11 | 15 |
    +----+----+----+----+

    +-----+-----+-----+-----+
    | m00 | m01 | m02 | m03 |
    +-----+-----+-----+-----+
    | m10 | m11 | m12 | m13 |
    +-----+-----+-----+-----+
    | m20 | m21 | m22 | m23 |
    +-----+-----+-----+-----+
    | m30 | m31 | m32 | m33 |
    +-----+-----+-----+-----+

    +----+----+----+----+
    | Xx | Yx | Zx | Tx |
    +----+----+----+----+
    | Xy | Yy | Zy | Ty |
    +----+----+----+----+
    | Xz | Yz | Zz | Tz |
    +----+----+----+----+
    |    |    |    |    |
    +----+----+----+----+
    */

    constructor(){
        // 1D array, because it can be passed
        // to a WebGL shader as is.
        this.data = new Float32Array(16);
        return this;
    }
    /* getters and setters in "mxx" format for more convenient element access */
    get m00(){ return this.data[ 0]; } set m00(val){ this.data[ 0] = val; }
    get m01(){ return this.data[ 4]; } set m01(val){ this.data[ 4] = val; }
    get m02(){ return this.data[ 8]; } set m02(val){ this.data[ 8] = val; }
    get m03(){ return this.data[12]; } set m03(val){ this.data[12] = val; }

    get m10(){ return this.data[ 1]; } set m10(val){ this.data[ 1] = val; }
    get m11(){ return this.data[ 5]; } set m11(val){ this.data[ 5] = val; }
    get m12(){ return this.data[ 9]; } set m12(val){ this.data[ 9] = val; }
    get m13(){ return this.data[13]; } set m13(val){ this.data[13] = val; }

    get m20(){ return this.data[ 2]; } set m20(val){ this.data[ 2] = val; }
    get m21(){ return this.data[ 6]; } set m21(val){ this.data[ 6] = val; }
    get m22(){ return this.data[10]; } set m22(val){ this.data[10] = val; }
    get m23(){ return this.data[14]; } set m23(val){ this.data[14] = val; }

    get m30(){ return this.data[ 3]; } set m30(val){ this.data[ 3] = val; }
    get m31(){ return this.data[ 7]; } set m31(val){ this.data[ 7] = val; }
    get m32(){ return this.data[11]; } set m32(val){ this.data[11] = val; }
    get m33(){ return this.data[15]; } set m33(val){ this.data[15] = val; }

    Set( data ){
        this.data.set(data);
        return this;
    }

    SetIdentity(){
        this.data.set([1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1]);
        return this;
    }

    Multiply( other ){
        var d = [0,0,0,0, 0,0,0,0, 0,0,0,0, 0,0,0,0];
        /* multiply rows and columns using loops */
        for( var x = 0; x < 4; x++ ){
            for( var y = 0; y < 4; y++ ){
                for( var z = 0; z < 4; z++ ){
                    d[x+y*4] += other.data[x+z*4] * this.data[z+y*4];
                }
            }
        }
        this.data.set(d);
        return this;
    }

    TRS( translation, rotation, scale ){
        var T = new Mat4().Set([
            1             , 0             , 0             , 0,
            0             , 1             , 0             , 0,
            0             , 0             , 1             , 0,
            translation.x , translation.y , translation.z , 1
        ]);
        var R = rotation.GetMat4();
        this.Set([
            scale.x , 0       , 0       , 0 ,
            0       , scale.y , 0       , 0 ,
            0       , 0       , scale.z , 0 ,
            0       , 0       , 0       , 1
        ]);
        this.Multiply( R );
        this.Multiply( T );
        return this;
    }

    Copy( other ){
        this.data.set(other.data);
        return this;
    }

    Perspective( aspect, fov, near, far ){
        // set the basic projection matrix
        this.SetIdentity();
        var scale = 1 / Math.tan(fov * 0.5 * Math.PI / 180);
        this.m00 = scale;          // scale the x coordinates of the projected point
        this.m11 = scale * aspect; // scale the y coordinates of the projected point
        this.m22 = -far / (far - near);        // used to remap z to [0,1]
        this.m23 = -far * near / (far - near); // used to remap z to [0,1]
        this.m32 = -1; // set w = -z
        this.m33 = 0;
        return this;
    }

    Invert(){
        var m = new Mat4();
        var s0 = this.m00 * this.m11 - this.m10 * this.m01;
        var s1 = this.m00 * this.m12 - this.m10 * this.m02;
        var s2 = this.m00 * this.m13 - this.m10 * this.m03;
        var s3 = this.m01 * this.m12 - this.m11 * this.m02;
        var s4 = this.m01 * this.m13 - this.m11 * this.m03;
        var s5 = this.m02 * this.m13 - this.m12 * this.m03;

        var c5 = this.m22 * this.m33 - this.m32 * this.m23;
        var c4 = this.m21 * this.m33 - this.m31 * this.m23;
        var c3 = this.m21 * this.m32 - this.m31 * this.m22;
        var c2 = this.m20 * this.m33 - this.m30 * this.m23;
        var c1 = this.m20 * this.m32 - this.m30 * this.m22;
        var c0 = this.m20 * this.m31 - this.m30 * this.m21;

        var det = s0 * c5 - s1 * c4 + s2 * c3 + s3 * c2 - s4 * c1 + s5 * c0;
        if( det == 0 ){ return m; } // singular matrix, no inverse exists

        var invdet = 1 / det;

        m.m00 = (this.m11 * c5 - this.m12 * c4 + this.m13 * c3) * invdet;
        m.m01 = (-this.m01 * c5 + this.m02 * c4 - this.m03 * c3) * invdet;
        m.m02 = (this.m31 * s5 - this.m32 * s4 + this.m33 * s3) * invdet;
        m.m03 = (-this.m21 * s5 + this.m22 * s4 - this.m23 * s3) * invdet;

        m.m10 = (-this.m10 * c5 + this.m12 * c2 - this.m13 * c1) * invdet;
        m.m11 = (this.m00 * c5 - this.m02 * c2 + this.m03 * c1) * invdet;
        m.m12 = (-this.m30 * s5 + this.m32 * s2 - this.m33 * s1) * invdet;
        m.m13 = (this.m20 * s5 - this.m22 * s2 + this.m23 * s1) * invdet;

        m.m20 = (this.m10 * c4 - this.m11 * c2 + this.m13 * c0) * invdet;
        m.m21 = (-this.m00 * c4 + this.m01 * c2 - this.m03 * c0) * invdet;
        m.m22 = (this.m30 * s4 - this.m31 * s2 + this.m33 * s0) * invdet;
        m.m23 = (-this.m20 * s4 + this.m21 * s2 - this.m23 * s0) * invdet;

        m.m30 = (-this.m10 * c3 + this.m11 * c1 - this.m12 * c0) * invdet;
        m.m31 = (this.m00 * c3 - this.m01 * c1 + this.m02 * c0) * invdet;
        m.m32 = (-this.m30 * s3 + this.m31 * s1 - this.m32 * s0) * invdet;
        m.m33 = (this.m20 * s3 - this.m21 * s1 + this.m22 * s0) * invdet;

        return m;
    }
}
```

With all these transformations it is good to get into the mindset of how they are applied to a mesh. Because vertex shaders operate on a single vertex at a time, we don’t really have information about the whole mesh. So rotations aren’t really “turning” a mesh around some point, but rather applying an offset to each vertex to create the illusion of consistently rotating the whole mesh.
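What that means for a single vertex can be shown with plain numbers. A standalone sketch of a column-major 4x4 matrix applied to one point, mirroring what the vertex shader will do per vertex:

```javascript
// Multiply a column-major 4x4 matrix by a point (x, y, z, 1) --
// the same operation a vertex shader performs on each vertex.
function transformPoint( m, x, y, z ){
    return [
        m[0]*x + m[4]*y + m[ 8]*z + m[12],
        m[1]*x + m[5]*y + m[ 9]*z + m[13],
        m[2]*x + m[6]*y + m[10]*z + m[14]
    ];
}

// A 90 degree rotation around Z, laid out column-major.
var c = Math.cos( Math.PI / 2 ), s = Math.sin( Math.PI / 2 );
var rotZ = [ c,s,0,0,  -s,c,0,0,  0,0,1,0,  0,0,0,1 ];

// Each vertex is offset independently; together the offsets
// read as the whole triangle turning around the origin.
console.log( transformPoint( rotZ, 1, 0, 0 ) ); // ≈ [0, 1, 0]
```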

### Model Matrix

The model matrix is the matrix that represents an object’s transformation relative to world space. In this context, local space is the coordinate system the vertices of a mesh are defined in (the raw vertex positions), and world space coordinates are the “absolute” positions of things in the “world”. This is how we move our mesh vertices around in the world.

*Left triangle: raw vertex coordinates. Right triangle: the same triangle transformed by position and rotation.*

To create the model matrix for an object, we must first create a matrix from the object’s position, rotation and scale. In our `class Mat4{}` there is a method named `TRS(){}` to do this.

```
TRS( translation, rotation, scale ){
    var T = new Mat4().Set([
        1             , 0             , 0             , 0,
        0             , 1             , 0             , 0,
        0             , 0             , 1             , 0,
        translation.x , translation.y , translation.z , 1
    ]);
    var R = rotation.GetMat4();
    this.Set([
        scale.x , 0       , 0       , 0 ,
        0       , scale.y , 0       , 0 ,
        0       , 0       , scale.z , 0 ,
        0       , 0       , 0       , 1
    ]);
    this.Multiply( R );
    this.Multiply( T );
    return this;
}
```

The position, rotation and scale matrices must be multiplied together to create the full transformation matrix from local space to world space. As matrix multiplication is a non-commutative operation, it is important to multiply these matrices in this order; otherwise the resulting transformation is not what we intend. For example, the object could end up rotated around some pivot other than its own origin, or the translation could be multiplied by the scale.
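The order sensitivity is easy to demonstrate with just a translation and a scale. A standalone sketch with a minimal column-major multiply helper (not the Mat4 class above):

```javascript
// Column-major 4x4 multiply: returns a * b as a plain array.
function mul( a, b ){
    var d = new Array(16).fill(0);
    for( var col = 0; col < 4; col++ ){
        for( var row = 0; row < 4; row++ ){
            for( var k = 0; k < 4; k++ ){
                d[col*4 + row] += a[k*4 + row] * b[col*4 + k];
            }
        }
    }
    return d;
}

var scale2    = [ 2,0,0,0,  0,2,0,0,  0,0,2,0,  0,0,0,1 ]; // uniform scale by 2
var translate = [ 1,0,0,0,  0,1,0,0,  0,0,1,0,  5,0,0,1 ]; // move +5 on x

// Scale first, then translate: the translation stays 5.
console.log( mul( translate, scale2 )[12] ); // 5
// Translate first, then scale: the translation gets scaled up to 10.
console.log( mul( scale2, translate )[12] ); // 10
```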

If an object has a parent, we must also multiply our TRS matrix with the parent’s TRS matrix to get the relative transformation. This is where the matrix format really shines, because getting the transformations of parent-child objects any other way would be a real pain.

Now that we have our geometry in the world space coordinates, we must get it into the view space: the transformation relative to the “camera” position.

### View Matrix

The view matrix transforms world positions into view space positions. Or rather, it transforms the world positions so that the new origin point is the view (camera) position.

This makes it possible to move the view around in the world and this step also prepares the coordinates for applying a projection matrix.

It can be created by simply inverting the camera object’s model matrix:

`new Mat4().Copy( modelMatrix ).Invert()`
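For a camera that is only translated, the inverse is just the opposite translation, which makes the idea easy to see in a standalone sketch (for a rotated camera the full `Invert()` above is needed):

```javascript
// Camera sitting at x = 10 in world space (pure translation, column-major).
var cameraModel = [ 1,0,0,0,  0,1,0,0,  0,0,1,0,  10,0,0,1 ];

// The inverse of a pure translation is the negated translation.
// (This shortcut only covers the translation-only case.)
var view = cameraModel.slice();
view[12] = -view[12]; view[13] = -view[13]; view[14] = -view[14];

// A point at world x = 12 ends up 2 units from the camera origin in view space.
var worldX = 12;
var viewX = view[0]*worldX + view[12];
console.log( viewX ); // 2
```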

### Projection Matrix

Lastly we need the projection matrix for the camera. Without it our scene would have no perspective and would only be rendered inside a -1 to 1 cube volume in camera view space. The goal is to “scale our world down” to fit more of it into the NDC (normalized device coordinates, the -1 to 1 cube). In the case of perspective projection, things further away from the camera need to be scaled down more than the things in front. In real life there is merely an illusion of far objects being smaller than near ones, whereas while rendering the scene we are “actually” scaling far things to be smaller than near things 🙂

Keep in mind that after applying our camera’s view matrix, the coordinate origin is the camera position. So we are scaling things around the camera origin rather than the world origin, which makes a projection matrix much easier to understand.

But how much scaling is needed? Let’s analyse the Perspective function defined earlier in the Mat4 class:

```
Perspective( aspect, fov, near, far ){
    this.SetIdentity();
    var scale = 1 / Math.tan(fov * 0.5 * Math.PI / 180);
    this.m00 = scale;
    this.m11 = scale * aspect;
    this.m22 = -far / (far - near);
    this.m23 = -far * near / (far - near);
    this.m32 = -1; // set w = -z
    this.m33 = 0;
}
```

Scaling X and Y (`m00` and `m11`) “fits” the required amount of the world into the NDC. Notice the aspect variable: it ensures that things look square when rendered. Otherwise there would be an effect similar to a 4:3 video stretched to a 16:10 aspect ratio.

Scaling the Z coordinate (`m22` and `m23`) simply fits the required viewing distance into the NDC.

`m32 = -1` is where the asymmetry gets in: it copies the (negated) view space depth into w, so that the perspective divide applies more or less scaling depending on the distance of a position from the view space origin.
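The effect can be seen by pushing two points through the matrix and doing the perspective divide by hand. A standalone sketch using the same layout as `Perspective()` above, with aspect 1 and a 90 degree fov (so the X scale factor is exactly 1):

```javascript
// With fov 90: scale = 1 / tan(45 deg) = 1, so clip.x = x,
// and m32 = -1 puts the view-space depth into w.
function projectX( x, z ){
    var clipX = x;        // m00 * x, with scale 1
    var clipW = -z;       // w = -z
    return clipX / clipW; // the perspective divide done by the GPU
}

// Same x offset, at two different depths in front of the camera (negative z).
console.log( projectX( 1, -5 ) );  // 0.2
console.log( projectX( 1, -10 ) ); // 0.1 -- twice as far, half the on-screen offset
```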

### Engine Architecture

Now is the time to think about the main loop of our engine. Firstly, because the graphics should be rendered interactively (60 frames per second), we need a rendering function that draws our objects 60 times per second. But before drawing we must ensure that the objects have correct transformation matrices, so before each render step we also need an update step. This gives us something like this:

```
at start:
    engine.Init();

every frame:
    engine.Update();
    engine.Render();
```
```
Update:
    every object:
        update transformation matrix
        update other things too in the future

Render:
    get active camera
    every object:
        set matrix values
        bind mesh
        draw mesh
```

We can optimize this logic later to exclude some objects from either updating or rendering based on several factors.

So, the 3D object class must have its transformation matrix, plus the position, rotation and scale used to create that matrix, because manipulating the matrix directly is not impossible, but it isn’t very intuitive. In addition, our object should have variables for a mesh and a program to use when the object is rendered. It is also a good idea to create a placeholder callback that Update() calls, because adding custom functions to give an object some extra behavior is much easier this way.

```
engine.Obj = class Obj{
    constructor(){
        this.localPosition = new Vec3(0,0,0);
        this.localRotation = new Quat();
        this.localScale = new Vec3(1,1,1);
        this.localToWorld = new Mat4();
        this.parent = null;
        this.children = [];
        this.matrixNeedsUpdate = true;
        this.mesh = null;
        this.program = null;

        this.onupdate = function(){};
    }

    SetParent( obj ){
        this.parent = obj;
        obj.children.push(this);
    }

    UpdateMatrix(){
        this.localToWorld.TRS( this.localPosition, this.localRotation, this.localScale );
        if(this.parent != null){
            this.localToWorld.Multiply( this.parent.localToWorld );
        }
        for(var i = 0; i < this.children.length; i++){
            this.children[i].matrixNeedsUpdate = true;
        }
        this.matrixNeedsUpdate = false;
    }

    Update(){
        this.onupdate();
        if(this.matrixNeedsUpdate){
            this.UpdateMatrix();
        }
    }

    Draw(){
        this.program.Use();
        this.program.SetUniform("modelMatrix", this.localToWorld.data, "m4");
        this.program.SetUniform("viewMatrix", engine.activeCamera.viewMatrix.data, "m4");
        this.program.SetUniform("projMatrix", engine.activeCamera.projMatrix.data, "m4");
        this.mesh.Draw();
    }
}
```

Next we need a class for a camera object. Because the camera needs to recreate its view matrix every time it moves, we can simply extend engine.Obj. In addition to the view matrix, the camera also needs to create a projection matrix.

```
engine.Camera = class Camera extends engine.Obj{
    constructor(){
        super();
        this.target = null; /* future... */
        this.fov = 90;
        this.near = 0.1;
        this.far = 1000.0;
        this.aspect = 1;
        this.width = engine.canvas.width;
        this.height = engine.canvas.height;
        this.viewMatrix = new Mat4();
        this.projMatrix = new Mat4();
        this.UpdateProjection();
    }

    UpdateProjection(){
        this.projMatrix.Perspective( this.aspect, this.fov, this.near, this.far );
    }

    Update(){
        super.Update();
        if(this.width != engine.canvas.width || this.height != engine.canvas.height ){
            this.width = engine.canvas.width;
            this.height = engine.canvas.height;
            this.aspect = this.width / this.height;
            this.UpdateProjection();
        }
    }

    UpdateMatrix(){
        super.UpdateMatrix();
        // the view matrix is the inverse of the camera's model matrix;
        // negating just the translation would only work for an unrotated camera
        this.viewMatrix = this.localToWorld.Invert();
    }

    Draw(){
        // camera is invisible
    }

    /*
    Implemented in a way that would enable
    "manual" rendering through a camera.
    This is useful for rendering to
    textures later.
    */
    Render( scene ){
        engine.activeCamera = this;
        gl.viewport( 0, 0, engine.canvas.width, engine.canvas.height );
        gl.clearColor(
            scene.backgroundColor[0],
            scene.backgroundColor[1],
            scene.backgroundColor[2],
            scene.backgroundColor[3]
        );
        gl.clear( gl.COLOR_BUFFER_BIT );
        scene.Draw();
    }
}
```

The last class we are going to need for this part is the class to hold and manage a scene.

```
engine.Scene = class Scene{
    constructor(){
        this.backgroundColor = [ 0.3, 0.3, 0.3, 1.0 ];
        // creating and assigning a default camera
        // for convenience's sake;
        // normally only one scene is used at a time anyway
        this.objects = [ new engine.Camera() ];
        engine.activeCamera = this.objects[0];
        this.objects[0].localPosition.z = 1;
    }

    Add( obj ){
        this.objects.push( obj );
        return obj;
    }

    Update(){
        for(var i = 0; i < this.objects.length; i++){
            this.objects[i].Update();
        }
    }

    Draw(){
        for(var i = 0; i < this.objects.length; i++){
            this.objects[i].Draw();
        }
    }
}
```

Now we can take a moment to create the main loop for our engine to tie all these new things together.

```
var engine = {
    canvas : null,
    activeProgram : null,
    activeCamera : null,
    scene : null,
    time : Date.now()/1000.0
};

var gl = null; // we are going to use "gl" a LOT, so it is best to have a short var name.

engine.Init = function(){
    engine.canvas = document.createElement("canvas");
    gl = engine.canvas.getContext("webgl");
    if(!gl){
        // webgl context failed to initialize,
        // let's display a nice message to let the user know.
        var noglmsg = document.createElement("h3");
        noglmsg.innerHTML = "Your browser / hardware does not seem to support webgl<br>";
        document.body.appendChild(noglmsg);
        return; // all is doomed, abort Init()
    }
    // adding css properties to the canvas
    engine.canvas.style.width = "100%";
    engine.canvas.style.height = "100%";
    engine.canvas.style.position = "fixed";
    engine.canvas.style.left = "0px";
    engine.canvas.style.top = "0px";
    // all is going well, let's put our canvas on the screen
    document.body.appendChild(engine.canvas);
    engine.Resize(); // set canvas to the right size at start

    engine.viewMatrix = new Mat4().SetIdentity();
    engine.projMatrix = new Mat4().SetIdentity();

    engine.scene = new engine.Scene();

    engine.Update(); // starting main loop
};

engine.Update = function(){
    engine.time = Date.now()/1000.0;
    engine.scene.Update();
    if(engine.activeCamera != null){
        engine.activeCamera.Render(engine.scene);
    }
    requestAnimationFrame(engine.Update);
};
```

So now, calling engine.Init() also starts the main engine.Update() loop, which in turn renders the current scene through the active camera object.

Now we can test all of this in our test.html:

```
engine.Init();

// the shader, program and mesh classes are from part 1
var vs = new engine.GLShader(
    gl.VERTEX_SHADER,
    `
    precision lowp float;
    attribute vec3 position;
    uniform mat4 modelMatrix;
    uniform mat4 viewMatrix;
    uniform mat4 projMatrix;
    varying vec3 color;
    void main(){
        color = position*0.5 + 0.5;
        gl_Position = projMatrix * viewMatrix * modelMatrix * vec4( position, 1.0 );
    }
    `,
);
vs.Compile();

var fs = new engine.GLShader(
    gl.FRAGMENT_SHADER,
    `
    precision lowp float;
    varying vec3 color;
    void main(){
        gl_FragColor = vec4( color, 1.0 );
    }
    `,
);
fs.Compile();

var prog = new engine.GLProgram( vs, fs );
prog.Compile();

var mesh = new engine.Mesh({
    "position" : { data: new Float32Array([ -1,-1,0,  0,1,0,  1,-1,0 ]) }
});
mesh.Init();

var obj = new engine.Obj();
obj.mesh = mesh;
obj.program = prog;
obj.localScale.Set( 0.2, 0.2, 0.2 );
obj.onupdate = function(){
    this.localPosition.x = Math.sin(engine.time)*0.1;
    this.localPosition.z = Math.cos(engine.time)*0.1;
    this.matrixNeedsUpdate = true;
};
engine.scene.objects.push( obj ); // register with the scene so it gets updated and drawn

var obj2 = new engine.Obj();
obj2.mesh = mesh;
obj2.program = prog;
obj2.onupdate = function(){
    this.localRotation.SetEuler( engine.time, 0, 0 );
    this.matrixNeedsUpdate = true;
};
obj2.SetParent( obj );
obj2.localPosition.x = 1;
obj2.matrixNeedsUpdate = true;
engine.scene.objects.push( obj2 );
```