Real-Time Fluid Shader

As I am currently working on a surfing game for the Oculus Rift, having a high-quality water shader is essential. The problem is that existing water shaders for Unity are mostly designed to work with flat water planes, not with the complex three-dimensional bodies of water seen in breaking waves. So I decided to write a new custom water CG shader from scratch:

Reflection and refraction colors are both sampled from a skymap and an additional directional light. They are then blended according to the Fresnel equations, which depend on the surface normal and viewing direction. The surface normals are modified by multiple combined UV map scales and offsets to simulate wave movement.
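As a standalone numeric sketch of that blend (plain Python rather than shader code, using Schlick's well-known approximation of the Fresnel term; the actual shader may evaluate the full equations):

```python
def schlick_fresnel(cos_theta, n1=1.0, n2=1.33):
    """Schlick's approximation of Fresnel reflectance for an
    air-to-water interface (index of refraction ~1.33)."""
    r0 = ((n1 - n2) / (n1 + n2)) ** 2
    return r0 + (1.0 - r0) * (1.0 - cos_theta) ** 5

def fresnel_blend(reflection, refraction, cos_theta):
    """Blend reflection and refraction colors by the Fresnel factor.
    cos_theta is the dot product of surface normal and view direction."""
    f = schlick_fresnel(cos_theta)
    return tuple(f * a + (1.0 - f) * b for a, b in zip(reflection, refraction))
```

At grazing angles (cos_theta near 0) the factor goes to 1 and the surface becomes a pure mirror; looking straight down it drops to about 2%, letting the refracted color dominate.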

Absorption in the fluid is calculated according to the Beer–Lambert law, with separate red, green and blue absorption coefficients. This determines the perceived color of the fluid and can be adjusted to simulate the dark blue of deep ocean water or the greenish tint of shallow coastal water. The amount of absorption requires the thickness of the object, which is calculated in screen coordinates in a separate pass by first adding up the distances of all the back-facing surfaces and then subtracting the distances of all the front-facing surfaces (Greg James, 2003). Finally, the Henyey–Greenstein phase function is used to approximate the Mie in- and out-scattering in the fluid, depending on viewing and light angle.
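Both formulas are compact enough to sketch in a few lines of plain Python (the RGB absorption coefficients and the anisotropy value here are illustrative, not the shader's tuned values):

```python
import math

def transmittance(thickness, absorption=(0.45, 0.09, 0.06)):
    """Beer-Lambert transmittance per color channel after travelling
    `thickness` metres through the fluid. Red is absorbed fastest,
    which is why deep water shifts toward blue-green."""
    return tuple(math.exp(-a * thickness) for a in absorption)

def henyey_greenstein(cos_theta, g=0.76):
    """Henyey-Greenstein phase function; a positive anisotropy g gives
    the strong forward-scattering lobe typical of Mie scattering."""
    return (1.0 - g * g) / (
        4.0 * math.pi * (1.0 + g * g - 2.0 * g * cos_theta) ** 1.5)
```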

S is the distance traveled through the media and θ is the angle between the ray and the sun. E_sun is the source illumination from the sun, β_ex is the extinction constant composed of light absorption and out-scattering properties, and β_sc is the angular scattering term composed of Rayleigh and Mie scattering properties.
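This caption matches the in-scattering term from Hoffman and Preetham's real-time outdoor light scattering model; assuming that is indeed the source, the equation the image showed would read:

```latex
L_{in}(S, \theta) = \frac{E_{sun}\, \beta_{sc}(\theta)}{\beta_{ex}} \left( 1 - e^{-\beta_{ex} S} \right)
```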

Though not yet optimized, the shader performs reasonably well on an integrated laptop graphics card. As it will be used for a surfing game where the water is not considered a "prop" but is actually the main point of focus, good-looking water is a key factor and is certainly worth a big chunk of the time and performance budget.


Simulated Buoyancy in Unity

The next step in creating my very own personal surfboard simulator for the Oculus Rift is to tackle the missing support for buoyancy forces in Unity. The principle is pretty straightforward:

The buoyancy is equivalent to the weight of the displaced fluid.

So after sending a raycast from the buoyant object in the negative Y direction, we get a surface point on the water mesh.

Calculating the submerged volume and resulting buoyancy for complex meshes would cost too much performance. A simpler solution is to approximate the volume by placing multiple "buoyancy probes" inside the mesh, which are then connected using rigid joints. For each probe, the submersion depth can easily be calculated from its position, radius and the surface point.
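For a spherical probe, that calculation reduces to a clamp. A standalone sketch (plain Python rather than Unity code, with hypothetical names):

```python
def submersion_depth(probe_y, radius, surface_y):
    """Height of the probe sphere below the water surface, clamped
    between 0 (fully above) and the full diameter (fully submerged)."""
    depth = surface_y - (probe_y - radius)
    return max(0.0, min(depth, 2.0 * radius))
```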

void FixedUpdate () {
    Vector3 force = calcBuoyancyForce (submergedDepth);
    rigidbody.AddForce (force);
}

private Vector3 calcBuoyancyForce (float submergedDepth) {
    // Approximate the displaced volume by the probe's cross-section
    // area times its submersion depth.
    float submergedVolume = area * submergedDepth;
    // Archimedes: buoyancy equals the weight of the displaced fluid.
    Vector3 force = Vector3.up * gravity * liquidDensity * submergedVolume;
    return force;
}
Because objects traveling through water are subject to drag forces, we have to incorporate these as well, otherwise the object will just keep happily bobbing up and down forever. As the drag depends on submersion depth and directionality, we can't use Unity's default drag mechanism. Also, because we approximated the actual volume of the submerged mesh using multiple probes, this is the only way to account for the actual shape of the mesh (i.e. large drag resistance perpendicular to broad faces and low drag resistance perpendicular to narrow faces). For each buoyancy probe, its current velocity is decomposed into local X, Y and Z directions. Using the drag equation we can calculate the drag force for each direction.

Cd is the drag coefficient, which can be set for each local axis of each buoyancy probe. Combining the different local-axis drag forces results in an object which can move more easily in one direction, but less easily in another.
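A minimal numeric sketch of the per-axis drag equation (plain Python rather than Unity C#; the density, area and Cd values are made up for illustration):

```python
def local_drag_forces(local_velocity, cd=(1.2, 0.3, 1.2), rho=1000.0, area=0.1):
    """Drag equation applied per local axis: F = -1/2 * rho * Cd * A * v * |v|.
    Writing v * |v| instead of v**2 keeps the force opposing the motion.
    A different Cd per axis makes broad faces resist more than narrow ones."""
    return tuple(-0.5 * rho * c * area * v * abs(v)
                 for v, c in zip(local_velocity, cd))
```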

Finally, because of the discrete time steps of the calculation, it is important to clamp the resulting drag force above a certain threshold. Drag should only ever decrease the velocity; we don't want the drag force to flip the velocity from positive to negative within a single time step. Otherwise we get small-scale jittering instead of smooth motion.

Vector3 clampMaxDragForce (Vector3 dragForce) {
    // The largest force that merely stops the probe within one physics
    // step is F = m * v / dt; anything larger would reverse the velocity.
    float dragForceMax = mass * velocity.magnitude / Time.fixedDeltaTime;
    if (dragForce.magnitude > dragForceMax) {
        dragForce = dragForce.normalized * dragForceMax;
    }
    return dragForce;
}

Position Invariant Texture Mapping

While a finite-element hydrodynamics simulation would be the best approach to create realistic-looking waves, the hardware just isn't there yet to calculate this in real time. Also, the majority of current methods for dynamically generating realistic-looking waves (such as Tessendorf 2001, Simulating Ocean Water, using an inverse FFT and vertex displacement) can only be used for rather shallow waves and cannot be applied to breaking or collapsing waves.

The alternative was to model the large-scale waves as independent meshes and programmatically move them across the water surface, while creating the small-scale details through overlaid normal maps in the fragment shader. As the water particles in a gravity-driven wave generally only move up and down (contrary to common sense and general opinion), moving wave meshes with a translation-invariant texture mapping proved to be a computationally inexpensive and realistic-looking solution, while still giving the correct physical behaviour for the buoyancy physics discussed in the previous blog post.

Unity already provides a technique for translation-invariant texture mapping called "texture projection" (used for cookies, etc.). But as the name states, this is a vertical projection of the texture onto the mesh surface and would result in vertical tearing of the texture on steep slopes of the wave.

By using the size and position of the wave to calculate an adequate texture space transformation, it is possible to map the texture in such a way that it seems to remain stationary on the edges while "wrapping" around the deformation in the center of the UV map. This however requires that the UV map always spans the whole 0.0–1.0 range of texture space in both directions.
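One way to build such a transform, sketched in plain Python with hypothetical parameter names: anchor the texture coordinates in world space by scaling each vertex's UV by the wave's size and offsetting by its current position.

```python
def stationary_uv(u, v, wave_origin, wave_size, texel_scale=0.1):
    """Map a vertex's (u, v) in [0, 1]^2 to world-anchored texture
    coordinates: scale by the wave's size and offset by its world
    position, so the texture stays put while the mesh translates."""
    world_x = wave_origin[0] + u * wave_size[0]
    world_z = wave_origin[1] + v * wave_size[1]
    return (world_x * texel_scale, world_z * texel_scale)
```

A fixed world position then always samples the same texel, no matter how far the wave mesh has moved underneath it.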

Using meshes to represent the wave also makes sense because the vertical movement of a two-dimensional cross-section of a surfable wave (breaking from left to right or the opposite) is basically independent and can be seen as a time-shifted version moving along the horizontal axis. This means the whole animation of a breaking wave could possibly be created by only showing a different horizontal subsection of the whole wave over time.
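The time-shift idea fits in a few lines of plain Python (the profile function and break speed are illustrative):

```python
def section_height(profile, x, t, speed):
    """Every cross-section along the wave runs through the same 1-D
    breaking animation, just shifted in time: h(x, t) = profile(t - x / speed).
    `speed` is the horizontal speed at which the break point travels."""
    return profile(t - x / speed)
```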