Understanding WebGL - sahilshahpatel/fluid-sim GitHub Wiki

This page goes into light detail about the graphics pipeline as it relates to this project. It is the only chapter where we give you the option to simply read and then skip ahead without implementing anything yourself. Why? Because everyone has their own level of interest in the lower-level details of systems.

For complete beginners, reading this page will be very useful, but you may choose whether or not to implement the ideas yourself. Whatever your interest or knowledge level, please read from Section 4: Our Template onward before moving on.

Section 1: Why have a graphics card?

If you've read our extra sources, you may have noticed that the original paper which used this algorithm for fluid simulation was written in C++ and made no mention of GPUs. While not necessary for the algorithm, using a GPU yields much better performance. To understand why, it's important to remember that GPUs excel at parallelism: applying the same operations to many pieces of data at the same time.

In our fluid simulation we have a 2D grid of cells, each representing the fluid data at that point in space, and we want to apply the same physical rules or equations to every point. So while Jos Stam wrote for the CPU in his original paper, we will be writing for the GPU.

Section 2: How do we access the GPU?

The next important question is how to send our program to the GPU. Fortunately, open standards are common in the graphics world (unlike the CPU world). For our purposes we will use WebGL because it works in browsers, which makes it easier to show off our projects!

We initially described GPUs as being great for parallelism. With other graphics APIs we can exploit this parallelism directly via compute shaders. With WebGL, however, all API calls are focused on drawing things rather than on general computation. As you'll see, this won't hold us back, but it is a good thing to keep in mind.

To use WebGL, we'll need an HTML canvas element, e.g. `<canvas id="myCanvas"></canvas>`. We can then get its so-called "rendering context" via the `getContext` JavaScript function. This context object is our window into WebGL, so we'll often name it `gl`. The `gl` object gives us the ability to compile code for the GPU and run it as we choose.
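As a rough sketch (assuming a canvas with id `myCanvas` exists in the page, as above), grabbing the context might look like this. The guard makes the function safe to load outside a browser, where there is no `document`:

```javascript
// Sketch: fetch the WebGL rendering context for a canvas by id.
// Assumes the page contains <canvas id="myCanvas"></canvas>.
function getGLContext(canvasId) {
    // Outside a browser (no DOM) there is nothing to draw on.
    if (typeof document === "undefined") return null;

    const canvas = document.getElementById(canvasId);
    if (canvas === null) return null;

    // "webgl2" requests a WebGL 2 context; fall back to WebGL 1 if needed.
    return canvas.getContext("webgl2") ?? canvas.getContext("webgl");
}

const gl = getGLContext("myCanvas");
console.log(gl === null ? "no WebGL context available" : "got a context");
```

In a browser with WebGL support this returns the context object; everywhere else (or on an unknown id) it returns `null`, which is a convenient signal to show an error message to the user.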

Section 3: Parallels to the CPU

It is hopefully intuitive to think of our grid of fluid data (like velocity) as a 2D array. In a normal program we would use a nested for-loop to iterate over all the data and update it.
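On the CPU, that update might look like the following sketch. The grid size and the decay-style update rule are made up for illustration; the point is the nested loop visiting every cell:

```javascript
// CPU-style update: visit every cell of a 2D grid and apply the same rule to each.
const width = 4, height = 3;

// Store the grid as a flat array, indexed as grid[y * width + x].
let velocity = new Array(width * height).fill(1.0);

function update(grid, dt) {
    const next = new Array(grid.length);
    for (let y = 0; y < height; y++) {
        for (let x = 0; x < width; x++) {
            const i = y * width + x;
            // Placeholder "physics": a simple decay, standing in for the real rule.
            next[i] = grid[i] * (1 - 0.1 * dt);
        }
    }
    return next;
}

velocity = update(velocity, 1.0);
console.log(velocity[0]); // 0.9
```

Each iteration of the loop body is independent of the others, which is exactly why this kind of work maps so well onto a GPU.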

In graphics land we don't have 2D arrays of data. Instead we have textures. Textures can be 1D, 2D, or 3D. They work very much like arrays, but since they are meant for images they are accessed in special ways. For example, an index of 1.5 is impossible for a traditional array, but a texture can have different sampling modes: in one, 1.5 might snap to a single nearby texel (nearest-neighbor filtering); in another, it might return a blend of the values at indices 1 and 2 (linear filtering).
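A minimal sketch of those two sampling modes on a 1D "texture" (plain numbers standing in for texel values; the functions and data here are illustrative, not WebGL API calls):

```javascript
// Two ways to read a 1D "texture" at a fractional index.
const texture = [10, 20, 30, 40];

// Nearest-neighbor: snap to the closest texel (clamped to the valid range).
function sampleNearest(tex, i) {
    const snapped = Math.min(tex.length - 1, Math.max(0, Math.round(i)));
    return tex[snapped];
}

// Linear: blend the two surrounding texels by the fractional part of the index.
function sampleLinear(tex, i) {
    const i0 = Math.floor(i);
    const i1 = Math.min(tex.length - 1, i0 + 1);
    const t = i - i0;
    return tex[i0] * (1 - t) + tex[i1] * t;
}

console.log(sampleNearest(texture, 1.4)); // 20
console.log(sampleLinear(texture, 1.5));  // 25 (halfway between 20 and 30)
```

In WebGL the mode is a property of the texture itself (its filtering parameters), so the shader just samples and the GPU applies whichever behavior was configured.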

A program meant for the GPU is called a shader. Shaders can take textures as input and generally render to the screen. Most of our shaders (besides the very last), however, will render to a texture. In the same way a texture is the analog of a 2D array, a shader is the analog of our nested loop body. That is to say, a shader automatically runs for every pixel in a texture (in parallel!).

That last statement was a slight lie: a shader runs for all covered pixels of a texture. Remember that WebGL was built for displaying 2D and 3D objects. If I wanted to draw a triangle, for example, the shader would only run for the pixels covered by my triangle. For this fluid simulator we have two cases. Most of the time we will run our shaders on all pixels (by drawing a rectangle which covers the whole screen), and to enforce the boundary conditions of our simulation we will sometimes run our shaders only on the edges of the screen (by drawing a few 2D lines).
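To make "covered pixels" concrete, here is a small sketch (plain JavaScript with a made-up 4×4 grid, not actual draw calls) counting which pixels each of our two draw patterns would touch:

```javascript
// Which pixels does each draw pattern "cover" on an N×N grid?
const N = 4;

function coveredByFullScreenRect(x, y) {
    // A rectangle spanning the whole screen covers every pixel.
    return true;
}

function coveredByBoundaryLines(x, y) {
    // Lines along the four edges cover only the outermost ring of pixels.
    return x === 0 || y === 0 || x === N - 1 || y === N - 1;
}

function countCovered(predicate) {
    let count = 0;
    for (let y = 0; y < N; y++)
        for (let x = 0; x < N; x++)
            if (predicate(x, y)) count++;
    return count;
}

console.log(countCovered(coveredByFullScreenRect)); // 16: all pixels
console.log(countCovered(coveredByBoundaryLines));  // 12: just the border
```

The shader itself never loops over pixels; the geometry we draw decides which pixels it runs for.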

Section 4: Our Template

OK, let's take a first look at the code you've been given. Of course, if you don't like how we've organized anything, feel free to make it your own.

`index.html` contains the only HTML page we require. We will edit this page as we go, but only to add UI elements that control parameters of the simulation. This file is the one most open to personalization, so make it your own!

`css/` is a place for any CSS you might need. Since we use Bootstrap in our default HTML, this folder is pretty empty. We include a single CSS class for making your canvas pixelated, which can be useful for debugging.

`glsl/` is where our shader code resides (GLSL is the shader language used by WebGL). For now it is pretty empty, with the exception of a vertex shader. We won't talk about vertex shaders here since they are mostly irrelevant to how we use WebGL. As we implement the algorithm, this folder will fill up with different functions.

`js/` is for all our JavaScript. It contains three files:

- `main.js` is for initialization. It will start up your simulation and bind any UI controls required.
- `util.js` is for utility helper functions. Most of these were taken from UIUC's CS 418 starter code, and others were written as needed for this project. If you find that you need helper functions for non-simulation needs as you develop, this is the place for them.
- `FluidSimRenderer.js` is where the bulk of our project will take place. It is a JavaScript class encapsulating the whole simulation process.

Let's take a closer look at the `FluidSimRenderer` class and its methods:

- `constructor`: Sets up some basic fields of the class.
- `init`: Because we load standalone GLSL files, we can't fully initialize in the constructor. We won't worry about the details of JavaScript promises, but they are used here to load the GLSL files.
- `update`: This function is where our algorithm will take place. It takes in the time step and updates the fluid data accordingly.
- `advect`, `jacobi`, `applyForces`, `computeDivergence`, `removeDivergence`, `computeCurl`, and `enforceBoundary`: These functions are wrappers for running our shaders. They will be called from `update` and are empty for now.
- `render`: The final shader we run each frame, and the one that actually outputs to the screen (rather than computing and rendering to a texture).
- `animate`: The function called each frame. The browser API automatically passes a high-resolution timestamp as an argument. From here we will call `update` and then `render`.
- `play`: Sets up the browser to call `animate` each frame.
- `pause`: Stops `animate` from being called each frame.
- `reset`: Clears the textures to 0 to start from a clean slate.
- `clearTexture`: Helper function to clear a given texture to a given set of values.
- `copyTexture`: Helper function to copy one texture to another.

The main functions to fill in over time are `update` and the shader wrappers (including `render`). You will also have to update the `constructor` and `init` as you add features.
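The `play`/`pause`/`animate` trio follows the standard `requestAnimationFrame` pattern. Here is a simplified, browser-free sketch; the scheduler is injected so the snippet can run anywhere, the method names mirror the class, and the bodies are illustrative rather than the project's actual implementation:

```javascript
// Simplified animation-loop skeleton in the spirit of FluidSimRenderer.
// "schedule" stands in for window.requestAnimationFrame, which passes a
// high-resolution timestamp to its callback once per displayed frame.
class LoopSketch {
    constructor(schedule) {
        this.schedule = schedule;
        this.requestId = null;
        this.frames = 0;
    }

    animate(timestamp) {
        // In the real class: compute dt from the timestamp, then update(dt) and render().
        this.frames++;
        // Re-request ourselves so the loop continues next frame.
        this.requestId = this.schedule((t) => this.animate(t));
    }

    play() {
        if (this.requestId === null) {
            this.requestId = this.schedule((t) => this.animate(t));
        }
    }

    pause() {
        // In a browser you would also call cancelAnimationFrame(this.requestId).
        this.requestId = null;
    }
}

// Drive the loop manually with a fake scheduler that queues callbacks.
const pending = [];
const loop = new LoopSketch((cb) => { pending.push(cb); return pending.length; });
loop.play();
for (let t = 0; t < 3; t++) pending.shift()(t); // simulate 3 frames
console.log(loop.frames); // 3
```

The key design point is that `animate` reschedules itself, so `play` only needs to kick off the first request and `pause` only needs to stop the chain.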

In the next chapter we will render something to the canvas for the first time.
