### What is Refraction?

Refraction is the bending of a light ray's path when it enters or leaves a material (medium) with a different index of refraction (IOR) than the previous one. Refraction is easy to observe when looking at glass, water, or any other transparent material that distorts the scene behind it. Even air itself can vary in IOR, as seen in heat haze, for example.

### Refraction Rendering

Refraction is an effect that is neither easy nor intuitive to implement in a realtime renderer. Offline renderers can trace actual light rays and compute their absorption, reflection, and deflection accurately because they can afford longer compute times. Realtime renderers have to resort to tricks and illusions (for a few more years at least).

Because refraction is, in essence, a distortion of the background behind a refracting object, it makes sense to focus on creating a convincing distortion shader. There is a good mathematical description of refraction, Snell's law, which can be implemented to get physically accurate direction vectors for refracted rays. But since the resulting vectors are in world space, they are better suited to sampling a cubemap texture. In practice, rendering a sufficiently high-resolution cubemap centered on every refracting object would be too expensive for realtime rendering.
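For reference, Snell's law is available directly in HLSL/Cg as the built-in `refract()` intrinsic, so the cubemap approach mentioned above can be sketched roughly as follows. The `_RefractionMap` and `_IOR` names and the world-space inputs are assumptions for illustration only, not part of the tutorial shader:

``````
// Sketch of the physically based approach: refract() implements
// Snell's law and returns a world-space direction suitable for
// sampling a cubemap. _RefractionMap and _IOR are assumed names.
samplerCUBE _RefractionMap;
float _IOR; // e.g. ~1.33 for water, ~1.5 for glass

fixed4 fragRefract (float3 worldPos : TEXCOORD0,
                    float3 worldNormal : TEXCOORD1) : SV_Target {
    // Direction from the camera to the surface point
    float3 viewDir = normalize(worldPos - _WorldSpaceCameraPos);
    // eta is the ratio of IORs across the boundary (air -> medium)
    float3 refracted = refract(viewDir, normalize(worldNormal), 1.0 / _IOR);
    return texCUBE(_RefractionMap, refracted);
}
``````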

It is much faster to use an already existing resource, the rendered screen texture behind the object, and apply distortion to that. Since only a limited field of view is rendered, using physically accurate vectors would result in lots of weird visuals where the direction vectors would need to sample pixels that are not part of the rendered scene.

Refraction is more of an eye-candy effect than part of the gameplay itself, so using non-physically-accurate methods can be forgiven in exchange for faster and artistically more interesting visuals.

### Creating a Distortion Shader in Unity

For this tutorial we will be creating an unlit Unity shader that warps the background with a sine wave. I assume that you are familiar with Unity's ShaderLab syntax and concepts.

Firstly, we need to get the texture behind our object. Fortunately, Unity's ShaderLab has a convenient way to do this: GrabPass.

``````
...
// Queue is important! This object must be rendered after
// opaque objects.
Tags { "RenderType"="Transparent" "Queue"="Transparent" }
LOD 100
GrabPass {
    // "_BGTex"
    // If a name is assigned to the GrabPass, all objects that use
    // this shader share a single texture grabbed once before any
    // of them are rendered. Otherwise a new _GrabTexture is
    // grabbed for each object.
}
Pass {
    ...
}
}
``````

The next step is to acquire the coordinates to actually sample the texture. Normal UVs will not do, because the screen texture must be sampled with screen-space coordinates. So we must calculate vertex positions in normalized screen space. There are a number of built-in functions to achieve this, as seen below.

``````
GrabPass {
}
Pass {
    CGPROGRAM
    #pragma vertex vert
    #pragma fragment frag
    #pragma multi_compile_fog
    #include "UnityCG.cginc"

    struct appdata {
        float4 vertex : POSITION;
        float2 uv : TEXCOORD0;
    };

    struct v2f {
        float2 uv : TEXCOORD0;
        UNITY_FOG_COORDS(1)
        float4 vertex : SV_POSITION;
        // This is a slot to put our screen coordinates into.
        // It is a float4 instead of a float2 because we need
        // to use tex2Dproj() instead of tex2D().
        // Note: TEXCOORD1 is already taken by UNITY_FOG_COORDS(1),
        // so we use TEXCOORD2 to avoid a semantic collision.
        float4 screenUV : TEXCOORD2;
    };

    sampler2D _MainTex;
    float4 _MainTex_ST;

    // Built-in variable to access the grabbed texture
    // when the GrabPass has no name.
    sampler2D _GrabTexture;

    v2f vert (appdata v) {
        v2f o;
        o.vertex = UnityObjectToClipPos(v.vertex);
        o.uv = TRANSFORM_TEX(v.uv, _MainTex);
        UNITY_TRANSFER_FOG(o, o.vertex);
        // Built-in function to compute screen coordinates for tex2Dproj()
        o.screenUV = ComputeGrabScreenPos(o.vertex);
        return o;
    }

    fixed4 frag (v2f i) : SV_Target {
        fixed4 col = tex2D(_MainTex, i.uv);
        // Sample the grabbed texture
        fixed4 grab = tex2Dproj(_GrabTexture, i.screenUV);
        UNITY_APPLY_FOG(i.fogCoord, col);
        //return col;
        // Visualize the screen coordinates
        return frac(i.screenUV * 16.0);
    }
    ENDCG
}
``````

If all went well, you should see a material with trippy gradients that warp and change according to the object's position on the screen.

Now we can return the grab sample from the fragment shader to see if everything works as it should.

``````
fixed4 frag (v2f i) : SV_Target {
    fixed4 grab = tex2Dproj(_GrabTexture, i.screenUV);
    return grab;
}
``````

Now the material should appear to be “invisible”.

Of course, the object is not actually invisible; it has a perfect representation of the background projected onto its surface: a perfect camouflage.

Now that we have our _GrabTexture and screenUV working, we can start to have some fun. To warp the texture in screen space, we simply have to manipulate the x and y components of screenUV. For example, if we just multiply the coordinates by 0.8, we get a magnifying-lens effect.

``````
fixed4 frag (v2f i) : SV_Target {
    fixed4 grab = tex2Dproj(_GrabTexture, i.screenUV * float4(0.8, 0.8, 1, 1));
    return grab;
}
``````

Next, we can try a sine-wave offset driven by _Time to create an animated warping surface.

``````
fixed4 frag (v2f i) : SV_Target {
    fixed4 grab = tex2Dproj(
        _GrabTexture,
        i.screenUV + float4(sin((_Time.x * 10) + i.screenUV.x * 32.0) * 0.1, 0, 0, 0)
    );
    return grab;
}
``````
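As a variation, the per-pixel offset does not have to come from a sine wave: a texture can drive the distortion instead. Here is a minimal sketch reusing the _MainTex sampler already declared in the shader; the _Strength property is a hypothetical addition, and _MainTex is assumed to hold a noise or normal-style texture whose red/green channels encode offsets:

``````
// Sketch: texture-driven distortion. _Strength is a hypothetical
// material property, not part of the tutorial shader as written.
float _Strength;

fixed4 frag (v2f i) : SV_Target {
    // Remap 0..1 texture values to -1..1 offsets
    float2 offset = (tex2D(_MainTex, i.uv).rg * 2.0 - 1.0) * _Strength;
    fixed4 grab = tex2Dproj(_GrabTexture, i.screenUV + float4(offset, 0, 0));
    return grab;
}
``````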

So this is our result: a simple warped screen texture projected onto an object. I hope this tutorial has been of use to someone. The full shader code is here.

This has been a mere tip of the iceberg, of course. There is a whole new level of complexity when you need to create warping in world space rather than in simple screen space. That, however, is the secret sauce of my refractive shader asset. If you are interested in how to solve the screen-space vs. world-space problem, and also in how to use GrabPass with Unity's Standard shader, you can acquire my shader from the Unity Asset Store.