Wednesday, July 17, 2013

Lessons from Building Virtual Worlds


Building Virtual Worlds (BVW) is a class at the Entertainment Technology Center (ETC). If you haven't heard of either, you might have heard of Randy Pausch, one of the founders of ETC and the BVW class.

The BVW class focuses on rapid prototyping and teamwork. In four months, I worked on and completed six team projects. During the process, we worked with technologies such as the Kinect, PS Move, Eyegaze (an eye-tracking device), and Phidgets. We also worked on custom ETC platforms such as the Jam-O-Drum and the Bridge.

However, the most valuable part of BVW is not the technical skills you learn. It's the lessons you learn from working with other people.

So, here are the most important lessons I've learned from BVW:



1. Keep an open mind about "Game Ideas"

It's hard to explain your cool idea to someone, and it's also hard to understand why someone else thinks an idea is cool. Sometimes things just click and you "get it." A lot of times this doesn't happen.

I can't tell you the number of times I thought an idea was bad, only to realize a few days later that it was "the best idea ever."

So, two pieces of advice: don't stonewall other people's ideas - ask for clarification instead. And if you are explaining your idea to someone else, really try to convey why the idea is cool to you. Don't expect people to get excited about a game if you only describe what it's about.



2. If you know you need more time, lower your scope

When you realize you have less time than you need, you usually try to adapt by working harder. "It's okay. I can pull a few all-nighters."

This works in some cases, but "working harder" can only take you so far. You might barely make it before the deadline - but that's not good enough.

That's because the final stages of development - including polishing and testing - have a huge impact on the end product. Something you finish right before the deadline will have quite a few rough edges, and maybe a few cracks as well. Usually, it's much better to have a polished product with less content.

It usually doesn't take a lot to put the finishing touches on a game - but it's very obvious when you don't.



3. The things you care about are different from what the audience cares about

You know that little problem in your game that really bugs you? That graphical glitch or color scheme that you spent ages tweaking but just couldn't get completely right?

The player isn't going to notice that at all.

Instead, he's going to have strong opinions on things you've already gotten used to - such as the floaty controls or the lack of clear instructions.

As a developer, there are certain things you will obsess about. But spending a lot of time on them isn't the most efficient way to give players the best possible experience.



4. Keep a Priority List

A priority list is basically a list of the tasks you need to do to complete your project, along with a rating of how essential each task is (there's a small example after this list). It works best if the list is shared with everyone - for example, by using Google Docs.

It helps you...
- keep track of everything you need to do, so you won't accidentally overscope
- work on the most important things first
- distribute tasks among team members in a simple way
- stay up to date on everyone's progress
- feel accomplished when you cross something off the list
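
For example, a slice of a priority list might look like this (the tasks and ratings here are made up for illustration):

 [ESSENTIAL] Player can move and jump
 [ESSENTIAL] Win/lose condition triggers correctly
 [IMPORTANT] Sound effects for jumping and landing
 [NICE]      Second level
 [NICE]      Animated menu screen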


 

Monday, June 3, 2013

Getting Started With Custom Post-Processing Shaders in Unity3D


I had some trouble figuring out how to write my own post-processing shaders in Unity3D, so here is a simple guide to help people get started.

Note: Post-processing shaders require the RenderTexture feature, which is only available in Unity Pro. Even shader code like "GrabPass" (which lets you use the previous render pass as a texture) requires you to have Unity Pro.



Learning the Basics


The first step is to learn about the Cg programming language and the Unity shader architecture. While you could write your shaders in GLSL instead of Cg, Unity recommends Cg for compatibility reasons. Also, almost all shader examples are written in Cg.

Cg Tutorial:  http://http.developer.nvidia.com/CgTutorial/cg_tutorial_chapter01.html
Unity Shader Reference: http://docs.unity3d.com/Documentation/Components/SL-Reference.html


Note: The Cg tutorial contains a lot of basic computer graphics knowledge that is good for review. However, you don't need to read about "compiling under different profiles" because Unity handles that internally. For Unity Shader Reference, the most important topics are ShaderLab Syntax, Writing Surface Shaders, and Writing Vertex and Fragment Shaders.



Writing Your First Shader


After going through those tutorials, it's time to write some post-processing shaders!

But first, how does Unity call post-processing shaders?

In Unity, post-processing shaders are different from regular shaders because there is no model to stick a material on. Of course, you could create a plane and stick your post-processing shader on that, but there is a better way to do this.

It turns out that Unity has a function dedicated to post-processing, called OnRenderImage. Unity automatically calls OnRenderImage on scripts attached to a camera, so you just have to fill it out like you would with Update or Start.

In that function, you should use Graphics.Blit with a material. Graphics.Blit renders the source texture using your material (a material is just a shader plus the values passed into it) and saves the result to your destination texture.

So there should be a script on your camera that does something like this:

using UnityEngine;

public class GrayScaleEffect : MonoBehaviour {

 //mat is the material containing your shader - assign it in the Inspector
 public Material mat;

 // Called by the camera to apply the image effect
 void OnRenderImage (RenderTexture source, RenderTexture destination){
  Graphics.Blit(source, destination, mat);
 }
}
Note that in this code, we never explicitly tell the material (and hence the shader) to use the texture in the "source" variable (which contains the rendered image of the scene) as input. This is because Graphics.Blit automatically copies the "source" texture to the material's main texture (_MainTex in the shader code).
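
If your shader needs extra inputs besides the rendered image, you can set them on the material right before calling Graphics.Blit. Here is a minimal sketch; the _Intensity property is a made-up example and would need a matching entry in your shader's Properties block:

 // Hypothetical example: pass a custom value to the shader each frame
 void OnRenderImage (RenderTexture source, RenderTexture destination){
  mat.SetFloat("_Intensity", 0.5f); //assumes the shader declares a float named _Intensity
  Graphics.Blit(source, destination, mat);
 }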

After that, we need code for the actual shader. Below is a simple grayscale post-processing shader. The vertex shader simply transforms the vertex position and texture coordinates and passes them along. The fragment shader uses the texture coordinates to look up the color of the current render (stored in _MainTex) and computes the grayscale color.

Shader "Custom/GrayScale" {
Properties {
 _MainTex ("", 2D) = "white" {}
}

SubShader {

ZTest Always Cull Off ZWrite Off Fog { Mode Off } //Rendering settings

 Pass{
  CGPROGRAM
  #pragma vertex vert
  #pragma fragment frag
  #include "UnityCG.cginc" 
  //we include "UnityCG.cginc" to use the appdata_img struct
   
  struct v2f {
   float4 pos : POSITION;
   half2 uv : TEXCOORD0;
  };
  
  //Our Vertex Shader 
  v2f vert (appdata_img v){
   v2f o;
   o.pos = mul (UNITY_MATRIX_MVP, v.vertex);
   o.uv = MultiplyUV (UNITY_MATRIX_TEXTURE0, v.texcoord.xy);
   return o; 
  }
   
  sampler2D _MainTex; //Declaring _MainTex inside the Pass lets us access the property from the shader code
   
  //Our Fragment Shader
  fixed4 frag (v2f i) : COLOR{
   fixed4 orgCol = tex2D(_MainTex, i.uv); //Get the original rendered color 
    
   //Make changes on the color
   float avg = (orgCol.r + orgCol.g + orgCol.b)/3.0f; //Average the three color channels
   fixed4 col = fixed4(avg, avg, avg, 1);
    
   return col;
  }
  ENDCG
 }
} 
 FallBack "Diffuse"
}
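
One last note on the grayscale math: averaging the three channels weights them equally, which doesn't quite match how bright colors appear to the eye. A common alternative (a standard technique, not part of the shader above) is a weighted sum using the Rec. 601 luma coefficients:

  //Perceptually weighted grayscale, replacing the simple average in frag()
  float avg = dot(orgCol.rgb, float3(0.299, 0.587, 0.114));

UnityCG.cginc also provides a Luminance() helper that computes a similar weighted sum for you.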