Tuesday, January 28, 2014

A Short Rant about Third Person Camera Angles in Video Games

Let's talk about something not related to anything.

Third person camera angles.

There are a lot of video games that use the "default" third person camera. You've seen them: the games that place your character at the center of the screen and use your character as the camera's pivot point.

It looks something like this:





There are some obvious faults with this type of camera system. The most general one is that your character blocks your line of sight. The view angle is also not ideal in a lot of situations. When you move the camera down (so it is almost parallel to the ground), almost half the screen is taken up by a close-up of the ground, the other half is taken up by the sky, and you have a hard time judging depth. On the opposite end of the spectrum, if you rotate the camera up (and view from a higher angle), all you get is the ground and you can't see objects far away.

This happens to be the camera system that a lot of MMOs adopt.

(Notice how some of these problems are not very obvious in Zelda: Ocarina of Time because when you need to look at anything the auto-targeting system takes over.)

However, a lot of games do something that makes the camera angles a lot better: they simply raise the pivot point of the camera a bit.







As you can see, Skyrim's camera is slightly better. The camera looks more towards where the player wants to look instead of directly at the character. It lets you see more and admire the view.

However, there are some practical concerns with this system once you start rotating the camera. Camera systems limit the rotation angle of the camera; for example, some won't let you rotate the camera up too high.

Now, suppose you are on a short inclined hill (as you often are in Skyrim). The ideal angle would be to look over the hill. However, the rotation limit locks your camera so you can only get a good look at the sky and not what's over the hill.






(Ok, so you think I'm probably a bit harsh on Skyrim. It's true that it has 3 camera modes: First-Person, Medium Range Third-Person, and Long Range Third-Person. And the one shown here is probably the least used.)


What I'm saying is, with a simple camera system there's always a bunch of compromises you have to make. But the experience can be so much better with a dynamic camera system that adapts to the player's actions.

Let's take a look at some screenshots from one of my favorite games, Shadow of the Colossus:


Riding




The effects of a better camera system are pretty clear. Almost every screenshot you get while playing Shadow of the Colossus looks like it could be a wallpaper, especially when the camera nails the Rule of Thirds.

Also, this is what the game looks like when you are playing. These screenshots aren't taken from special angles that aren't practical during actual gameplay.

(I'm looking at you, Skyrim)


As you can see, there are plenty of different ways you can improve from the default third person camera.



Here is a list of quick tips:



1. Don't focus on the character. Focus on what the player wants to look at.


2. Don't put the pivot point of the camera on the player.


3. Change the camera distance and angle when you want to show a difference in size and scale.


4. Change the field of view when you want to focus on something.
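Tips 2 and 4 can be sketched in a few lines of Unity script. This is a minimal, illustrative sketch (the component and field names are my own, not from any particular game): the pivot is offset above the character instead of sitting on it, and the field of view eases toward a narrower value when focusing on something.

```
using UnityEngine;

// Sketch of tips 2 and 4: raise the pivot above the character
// and blend the field of view when focusing. Names are illustrative.
public class ThirdPersonCamera : MonoBehaviour {
    public Transform character;
    public Vector3 pivotOffset = new Vector3(0f, 1.5f, 0f); // pivot above the character, not on it
    public float distance = 4f;     // camera distance from the pivot
    public float normalFov = 60f;
    public float focusFov = 40f;    // narrower FOV while focusing
    public bool focusing;

    void LateUpdate () {
        // Tip 2: orbit around a raised pivot point
        Vector3 pivot = character.position + pivotOffset;
        transform.position = pivot - transform.forward * distance;

        // Tip 4: ease the field of view toward the target value
        float targetFov = focusing ? focusFov : normalFov;
        Camera cam = GetComponent<Camera>();
        cam.fieldOfView = Mathf.Lerp(cam.fieldOfView, targetFov, Time.deltaTime * 5f);
    }
}
```

A real dynamic camera would also handle rotation limits and obstacle avoidance, but the pivot offset alone already fixes most of the "staring at the back of the character's head" problem.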






And that's the end of my rant.

Wednesday, July 17, 2013

Lessons from Building Virtual Worlds


Building Virtual Worlds (BVW) is a class at the Entertainment Technology Center (ETC). If you haven't heard of either, you might have heard of Randy Pausch, one of the founders of ETC and the BVW class.

The BVW class focuses on rapid prototyping and teamwork. In 4 months, I worked on and completed 6 team projects. During the process, we worked with technologies such as the Kinect, PS Move, Eyegaze (an eye-tracking device), and Phidgets. We also worked on custom ETC platforms such as the Jam-O-Drum and the Bridge.

However, the most valuable part of BVW is not the technical skills you learn. The important things are the lessons you learn from working with other people.

So, here are the most important lessons I've learned from BVW:



1. Keep an open mind about "Game Ideas"

It's hard to explain your cool idea to someone, and it's also hard to understand why someone else thinks an idea is cool. Sometimes things just click and you "get it." A lot of times this doesn't happen.

I can't tell you the number of times that I thought an idea was a bad idea, only to realize a few days later that it was "the best idea ever."

So, two pieces of advice: don't stonewall other people's ideas - but do ask for clarification. And if you are explaining your idea to someone else, really try to explain why the idea is cool to you. Don't expect people to be excited about an idea if you just describe what the game is about.



2. If you know you need more time, lower your scope

When you realize you have less time than you need, you usually try to adapt by working harder. "It's okay. I can pull a few all-nighters."

This works in a few cases, but "working harder" can only take you so far. You might be able to finish just barely before the deadline - but that's not good enough.

That's because the final stages of development - including polishing and testing - have a huge impact on the end product. Something you finish right before the deadline will have quite a few rough edges, and maybe a few cracks as well. Usually, it's much better to have a polished product with less content.

It usually doesn't take a lot to put the finishing touches on a game - but it's very obvious when you don't.



3. The things you care about are different from what the audience cares about

You know that little problem in your game that really bugs you? That graphical glitch or color scheme that you spent ages tweaking but just couldn't get completely right?

The player isn't going to notice that at all.

Instead, they're going to have strong opinions on other things that you've already gotten used to - such as the floaty controls or the lack of clear instructions.

As a developer, there are certain things you will obsess over. But spending a lot of time on them isn't the most efficient way to give players the best experience they can have.



4. Keep a Priority List

A priority list is basically a list of tasks you need to do to complete your project, with a rating of how essential each task is. It works best if the list is shared with everyone (for example, by using Google Docs).

It helps you...
- Keep track of all the things you need to do - so you won't accidentally overscope
- Make sure you work on the important things first
- Give members a simple way to distribute tasks
- Keep everyone notified of progress
- Feel accomplished when you cross something off the list


 

Monday, June 3, 2013

Getting Started With Custom Post-Processing Shaders in Unity3D


I had some trouble learning about how to make my own post-processing shaders in Unity3D, so here is a simple guide to help people get started.

Note: Post-processing shaders require render textures, which are only available in Unity Pro. Even shader code like "GrabPass" (which lets you use the previous render pass as a texture) requires Unity Pro.



Learning the Basics


The first step is to learn the Cg programming language and the Unity shader architecture. While you could write your shaders in GLSL instead of Cg, Unity recommends Cg for compatibility reasons. Also, almost all shader examples are written in Cg.

Cg Tutorial: http://http.developer.nvidia.com/CgTutorial/cg_tutorial_chapter01.html
Unity Shader Reference: http://docs.unity3d.com/Documentation/Components/SL-Reference.html


Note: The Cg tutorial contains a lot of basic computer graphics knowledge that is good for review. However, you don't need to read about "compiling under different profiles" because Unity handles that internally. For Unity Shader Reference, the most important topics are ShaderLab Syntax, Writing Surface Shaders, and Writing Vertex and Fragment Shaders.



Writing Your First Shader


After going through those tutorials, it's time to write some post-processing shaders!

But first, how does Unity call post-processing shaders?

In Unity, post-processing shaders are different from regular shaders because there is no model to stick a material on. Of course, you could create a plane and stick your post-processing shader on that, but there is a better way to do this.

Turns out that the Camera class has a function dedicated to post-processing, called OnRenderImage. The Camera class will automatically call OnRenderImage, so you just have to fill it out like you do with Update or Start.

In that function, you should call Graphics.Blit with a material. Graphics.Blit renders the source texture using your material (a material is just a shader plus the values passed into it) and saves the result to your dest texture.

So there should be a script on your camera that does something like this:

// Called by the camera to apply the image effect
void OnRenderImage (RenderTexture source, RenderTexture destination){

 //mat is the material containing your shader
 Graphics.Blit(source,destination,mat);
}
Note that in this code, we never explicitly tell the material (and hence the shader) to use the texture in the "source" variable (which contains the rendered image of the scene) as input. This is because Graphics.Blit automatically copies the "source" texture to the material's main texture (_MainTex in the shader code).
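Putting that method in context, the full camera script might look like the sketch below (the class name and the public material field are my own choices; the material should use your post-processing shader):

```
using UnityEngine;

// Attach this script to the camera. Assign a material that uses
// your post-processing shader to "mat" in the Inspector.
[ExecuteInEditMode]
public class GrayScaleEffect : MonoBehaviour {

    public Material mat; // the material containing your shader

    // Called by the camera to apply the image effect
    void OnRenderImage (RenderTexture source, RenderTexture destination){
        Graphics.Blit(source, destination, mat);
    }
}
```

[ExecuteInEditMode] is optional; it just lets you preview the effect in the editor without pressing Play.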

After that, we need the actual shader code. Below is a simple grayscale post-processing shader. The vertex shader simply transforms the vertex position and texture coordinates and passes them along. The fragment shader uses the texture coordinates to look up the color of the current render (stored in _MainTex) and computes the grayscale color.

Shader "Custom/GrayScale" {
Properties {
 _MainTex ("", 2D) = "white" {}
}

SubShader {

ZTest Always Cull Off ZWrite Off Fog { Mode Off } //Rendering settings

 Pass{
  CGPROGRAM
  #pragma vertex vert
  #pragma fragment frag
  #include "UnityCG.cginc" 
  //we include "UnityCG.cginc" to use the appdata_img struct
   
  struct v2f {
   float4 pos : POSITION;
   half2 uv : TEXCOORD0;
  };
  
  //Our Vertex Shader 
  v2f vert (appdata_img v){
   v2f o;
   o.pos = mul (UNITY_MATRIX_MVP, v.vertex);
   o.uv = MultiplyUV (UNITY_MATRIX_TEXTURE0, v.texcoord.xy);
   return o; 
  }
   
  sampler2D _MainTex; //Declaring this inside the Pass is necessary to use the variable in the shader functions
   
  //Our Fragment Shader
  fixed4 frag (v2f i) : COLOR{
   fixed4 orgCol = tex2D(_MainTex, i.uv); //Get the original rendered color 
    
   //Make changes on the color
   float avg = (orgCol.r + orgCol.g + orgCol.b)/3.0;
   fixed4 col = fixed4(avg, avg, avg, 1);
    
   return col;
  }
  ENDCG
 }
} 
 FallBack "Diffuse"
}