Re-Animating Animations

When the player is hovering over an object, we want it to glow or rise or scale up or whatever. We’ll use the glow from here on but it could be applied to any animation. So if the player’s looking at that object then start the glow fade in animation. If the object is glowing and the player isn’t looking at it anymore, play the glow fade out animation.

If the player’s just passing over this object and not trying to focus on it, then it’ll play the fade in animation and, when that stops, immediately play the fade out.

Well, if the player stops looking at the object, let’s cut the fade in animation and play the fade out animation. Now all of our objects flash, which isn’t good either.

So let’s hack our animation to allow us to change the start value of the glow at runtime. Hooray, our objects don’t flash!

But our objects don’t fade out at the same rate.

Since we’re already passing the start value at runtime, we can use this to contract the animation duration.

Now, if the player glances away from an object and quickly back, we have this problem in reverse. So now we can double the number of lines of code spent on exactly the same problem.

Let’s also use this same code for a hint in our tutorial. But now our object is fading in twice, once for the hint and once for the player’s gaze.

We could just skip the start animation during the tutorial. But we have this system for passing the start value through to the animation.

Now the tutorial needs the glow fade in value, so let’s just make that static. Brilliant.

Aren’t we smart.

No.

Ctrl + A, Delete.

Starting over. We can give our glow some states that we can manipulate, and these states manipulate the object’s glow. Intuitively we have four states: Fading in, Fading out, Active, and Hidden. Then, on every update of the render loop, we can use these states to drive the amount of glow we have.

We’ll define some values: a CurrentTime, a Duration and a CurrentState. The CurrentTime will go between 0 and 1, the Duration is a constant time in seconds, and the CurrentState stores the state of the animation.

 
enum AnimationState {
    FadeIn,
    Active,
    FadeOut,
    Hidden
}
float CurrentTime;
const float Duration = 5.f;

AnimationState CurrentState;
 

First, let’s translate the render delta from real seconds into our animation time. This is simply a matter of dividing the render delta by the animation duration. We’ll use this modified delta from now on. Something to note is that this works whether the duration is greater than or less than one second. The only case where it doesn’t work is an instant animation, so if the duration is zero we can hard code our delta to some constant value greater than one to avoid a division by zero (anything of one or more completes the animation in a single update).

 
void UpdateAnimation(float RenderDeltaTime)
{
    float ModifiedDelta = 2.f;
    if(Duration != 0.f)
    {
        ModifiedDelta = RenderDeltaTime / Duration;
    }
}
 

Next we use this modified delta to increment or decrement the CurrentTime based on the current state. This is the core of the state-based algorithm.

 
void UpdateAnimation(float RenderDeltaTime)
{
    float ModifiedDelta = 2.f;
    if(Duration != 0.f)
    {
        ModifiedDelta = RenderDeltaTime / Duration;
    }

    if(CurrentState == AnimationState::FadeIn)
    {
        CurrentTime += ModifiedDelta;
    }
    else if(CurrentState == AnimationState::FadeOut)
    {
        CurrentTime -= ModifiedDelta;
    }
} 

There are some edge cases, two to be precise: zero and one. We can handle these quite simply by clamping the time and changing the state. Once we’ve clamped the CurrentTime, it is, proportionally, how far through the animation we are.

 
float UpdateAnimation(float RenderDeltaTime)
{
    // Convert real seconds into animation time (0 to 1 over Duration seconds).
    float ModifiedDelta = 2.f;
    if(Duration != 0.f)
    {
        ModifiedDelta = RenderDeltaTime / Duration;
    }

    // Move the animation time in the direction the current state dictates.
    if(CurrentState == AnimationState::FadeIn)
    {
        CurrentTime += ModifiedDelta;
    }
    else if(CurrentState == AnimationState::FadeOut)
    {
        CurrentTime -= ModifiedDelta;
    }

    // Clamp at either end and settle into the corresponding steady state.
    if(CurrentTime > 1.f)
    {
        CurrentTime = 1.f;
        CurrentState = AnimationState::Active;
    }
    else if(CurrentTime < 0.f)
    {
        CurrentTime = 0.f;
        CurrentState = AnimationState::Hidden;
    }

    return CurrentTime;
}

Most likely your animations aren’t going to be linear. If you have animation curves, simply feed this number into them. If not, there’s a whole family of mathematical easing functions you can apply to get the usual easing values.
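
As a concrete sketch (written in C#, unlike the pseudocode above), a smoothstep ease applied to CurrentTime before it drives the glow might look like this; SetGlow is an assumed call on the highlighted object, not something defined earlier:

// CurrentTime still moves linearly between 0 and 1; only the value we apply
// to the glow is eased. SetGlow is a hypothetical call that drives the
// material's glow intensity.
float EaseSmoothstep(float t)
{
    // Classic smoothstep: 3t^2 - 2t^3, flat at both ends.
    return t * t * (3f - 2f * t);
}

void ApplyGlow()
{
    SetGlow(EaseSmoothstep(CurrentTime));
}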

Now, when we’re looking at an object, we set the object’s glow state to FadeIn; if we look away, we set the glow state to FadeOut. Importantly, the state can be set at any time and the object’s glow will sort itself out without any strange behaviour. If we pass over an object quickly, the CurrentTime doesn’t get very high before we start fading out again. In the tutorial example, where we set the object’s glow to FadeIn after it’s already Active, the clamping edge case catches it and keeps the CurrentTime at one.
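
As a usage sketch (again in C#, with hypothetical gaze callbacks), the calling code only ever touches the state, and UpdateAnimation, called every frame, does the rest:

// Hypothetical gaze callbacks: all they do is set the state.
void OnGazeEnter()
{
    CurrentState = AnimationState.FadeIn;
}

void OnGazeExit()
{
    CurrentState = AnimationState.FadeOut;
}

// The tutorial hint can reuse exactly the same call; if the glow is already
// Active, the clamp keeps CurrentTime at one and nothing flickers.
void OnTutorialHint()
{
    CurrentState = AnimationState.FadeIn;
}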

Events are Talking to me

We all have event systems; there’s almost no program that doesn’t use events these days, and their proliferation is a testament to how useful they are. They help neaten a whole bunch of code that would otherwise be a hideous nest of coupled code. If you don’t use event systems and I’ve lured you here under the pretence of a funny title: they are a programming practice that separates event triggers from event listeners. This removes a lot of the code where triggers look for listeners, or vice versa, because you make an Event Handler that keeps lists of the things listening for each particular event.

Event systems use these three roles in the following way. The trigger notices something which requires an event, sends the event off to the event handler and thinks no more of it. The event handler goes through the list of things it has listening out for that event, calls the corresponding function on each and thinks no more of it. The event listeners then do whatever they need to for that event, and the chain has ended.

 
class EventTrigger : Object
{
...
    void SomeFunction() 
    {
        if(WeNeedToTriggerThisEvent) 
        {
             EventHandler.TriggerEvent(ThisEvent, EventArguments);
        }
    }
...
} 

static class EventHandler 
{
...
    Array<Array<EventListenerDelegate>> Listeners;

    void TriggerEvent(Event::Type EventType, EventArgs Args)
    {
        foreach(EventListener Listener in Listeners[EventType])
        {
            Listener.Execute(Args);
        }
    }
...
} 

class EventListener : Object 
{ 
    void Init()
    {
        EventHandler.AddListener(OnEvent, EventType);
    }

    void OnEvent(EventArgs Args)
    {
        ...
    }
} 

Event handlers are often part and parcel of modern game engines; each has its own handler that you can add triggers and listeners to. However, also in modern game engines, you are likely to have objects that inherit from a long chain of the engine’s base classes, with yours on top of that. That’s a lot of classes, and within that inheritance list there’s likely to be some event overlap: a base class will want to do something on an event and you want to do something else.

How do you control which response runs? You can call super functions that invoke the inherited event response. You can invoke these at the beginning of the function for bottom-to-top event priority, or at the end for top-to-bottom event priority. If you want even more flexibility you can invoke them inside conditional statements – whatever you wish.

 
class EventListener : Object 
{
    void OnEvent(EventArgs Args)
    {
        Super::OnEvent(Args);
        ...
    }

    void OnOtherEvent(EventArgs Args)
    { 
        ...
        Super::OnOtherEvent(Args);
    }
} 

The makers of one of the oldest game engines have added another neatening tweak. In UE4 the event handler waits for a reply from the event listener. The listener comes back with a reply that is either handled or unhandled. Not only this, but there is some choice of what the handler should do with the reply; for example, you may set the user focus after you have come out of the event listener. Crucially, it’s only after the event listener chain is complete that any action is taken.

 
static class EventHandler 
{
...
    Array<Array<EventListenerDelegate>> Listeners;

    void TriggerEvent(Event::Type EventType, EventArgs Args)
    {
        EventReply Reply;
        foreach(EventListener Listener in Listeners[EventType])
        {
            Reply = Listener.Execute(Args);
            Reply.Execute();
        }
    }
...
} 

class EventListener : Object 
{ 
    void Init()
    {
        EventHandler.AddListener(OnEvent, EventType);
    }

    EventReply OnEvent(EventArgs Args)
    {
        ...
        EventReply Reply = EventReply::Handled().SetUserFocus(this);

        return Reply;
    }
} 

This is an interesting addition to the common event handler pattern. Since you can now add to, overwrite or ignore previous replies, we get added flexibility and power for no added coupling. This system still gives priority to base behaviour, or whichever priority you go with, but it’s not absolute – other classes can override it, giving you more flexible event responses.

The separation of event listeners and event actions is potentially very powerful. Why, then, limit the post-chain actions to a few options? Simply store a delegate that will execute, if bound, after the event listener chain is complete. This gives us full control over how we handle triggered events.

 
static class EventHandler 
{
...
    Array<Array<EventListenerDelegate>> Listeners;

    void TriggerEvent(Event::Type EventType, EventArgs Args)
    {
        EventReply Reply;
        foreach(EventListener Listener in Listeners[EventType])
        {
            Reply = Listener.Execute(Args);
            Reply.Execute();
        }
    }
...
} 

class EventListener : Object 
{ 
    void Init()
    {
        EventHandler.AddListener(OnEvent, EventType);
    }

    EventReply OnEvent(EventArgs Args)
    {
        ...
        EventReply Reply = EventReply::Handled();
        Reply.Action = OnEventAction;
        Reply.Args = ReplyArgs;

        return Reply;
    }

    void OnEventAction(ReplyArgs Args)
    {
        ...
    }
} 

Making Your Game Sell Itself

Marketing is expensive, and a pain to do. A great (and free) way around this is to have your players recommend your game to their friends: word of mouth, or going viral, is one of the most effective marketing strategies around. Great! How to engineer it is somewhat more difficult. Rather than getting into information propagation and the psychology of social media, I’m going to talk about how to make your game’s screen captures look good.

A game that definitely thought a lot about this is Monument Valley. It is clear that their art style was chosen with selling the game in mind; ustwo even confirmed as much in their GDC talk.

In Monument Valley the camera is fixed, allowing the designers to control every pixel of the screen; everything is designed to make the screen look as good as possible. The fixed camera makes it very easy for the designers and artists to think about the composition and the positioning of everything on the screen. How far things should be from the edges of the screen, what colour the background is, the perspective the player is given: all of it is under complete control. The time and effort ustwo put into this aspect of the game is why it could be shared with such nice screenshots. By creating a game that always looks good they got everyone to advertise it for them. Whenever a player tweeted a picture or showed their friends their screen, they were advertising Monument Valley.

Having a fixed camera makes it easy, but what if you give the player control of the perspective and the composition? You might end up like Firewatch. Their game also seemed to wow with every screenshot; every video looked like a promo. If you listen to Campo Santo’s GDC talk you’ll see that they painstakingly designed every environment to yield the maximum impact, each environment designed from the initial sketches to be the best looking environment it could be. The results are plain. No matter what the player chooses to frame, no matter what angle the player is looking at the scene from, the scene performs. No easy task. So if you give the player camera control, you need to do a lot more work to make the screenshots from the game look good.

Well maybe.

People like looking at things that look good. A statement just shy of a tautology. What I mean is that people will usually take screenshots of the parts of a game that look good. It’s similar with photography: photographers don’t blunder around taking pictures at random, they search for something they can frame nicely, something with a good perspective. In this way we can cut some corners with our environment design. Games with environments too large to micromanage rely on this technique: the sprawling landscapes of The Witcher 3 exist in some parallel universe where sunrises and sunsets seem to take up half the day. But a bit of looking and, sure enough, early morning and late evening light is favoured by photographers. In this way the developers at CD Projekt Red have made it easy for players to capture something in their game that they thought was beautiful and share it. Ever wondered why the colours in The Witcher 3 are so vibrant?

 

Session Analytics

So, you’ve finished your game. Or more correctly, you’ve released your game – these things never seem to be finished. And unless you’re some sort of psychopath you want to know how well it’s doing. Anxiously checking the number of downloads can only get you so far. A more accurate measure of how well your game is doing is linked to the player session data. Simply put: the more people playing, the better. In essence, we want to remove the clutter from people who have downloaded the game but are no longer playing it.

There are a number of analytics for measuring how many people are playing your game. You could count the players playing at the moment you look at the analytics, but that kind of data is very hard to draw concrete patterns from. It’s safe to assume you have more players in the day than at night, so let’s count the number of players over the course of 24 hours instead. This is already heaps better than what we had before. Hang on, wouldn’t some players play on weekends and not on weekdays? You’re absolutely right. Maybe we should count players over a week instead of a day. Better, but if the majority of our players play on weekends, we only have two days of seven that contribute to our total, and, as well as playing your game, people like to do other things on weekends. (Crazy, I know.) If our player base varies week on week, maybe we should change the collection window from a week to a month; this helps mitigate the effect of busy weekends. The common measures you’ll see are daily active users (DAU) and monthly active users (MAU).

So, we have a number for players playing over any day, week or month, but how many of them can we count on? How many players are just trying out the game and how many are hooked? Let’s define the question properly: how many players that were playing at time t are still playing after n days? This is your game’s retention, or sticking power. We want to discount the new players that joined after time t and any of the players from time t that don’t play after n days. The resulting percentage represents the chance of your game having a long life span and supporting you in the future. For example, No Man’s Sky has poor retention and probably won’t make much money in a year, whereas Rocket League continues to make money because it has high retention in its player base.

Ok, we’ve talked about what the numbers mean, but how do we get them? All of the stuff we’ve talked about concerns who’s playing and at what time, so let’s store the session data in a database. For each session we’ll have a player id, a start timestamp and an end timestamp. (You can add a unique id if you want, or use a composite key of player id and start timestamp.) With the start and end times of each player’s sessions we can construct all of the numbers for the analytics mentioned above. For example, we can search for players that have a session on a certain day, then find the players that have a session on another day and compare the player ids to get our retention. To get daily active users we search for players that had one or more sessions between two timestamps 24 hours apart.
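
As a rough sketch of those lookups in code (C#, over sessions already pulled into memory; the Session shape and the field names are invented for illustration):

using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical in-memory shape of the session table described above.
class Session
{
    public string PlayerId;
    public DateTime Start;
    public DateTime End;
}

static class SessionAnalytics
{
    // Daily active users: distinct players with a session overlapping the
    // 24-hour window starting at 'day'.
    public static int DailyActiveUsers(IEnumerable<Session> sessions, DateTime day)
    {
        DateTime windowEnd = day.AddHours(24);
        return sessions
            .Where(s => s.Start < windowEnd && s.End >= day)
            .Select(s => s.PlayerId)
            .Distinct()
            .Count();
    }

    // Retention: of the players active on 'day', what fraction are active
    // again n days later?
    public static float Retention(IEnumerable<Session> sessions, DateTime day, int n)
    {
        var cohort = new HashSet<string>(
            sessions.Where(s => s.Start.Date == day.Date).Select(s => s.PlayerId));
        if (cohort.Count == 0) return 0f;

        DateTime later = day.Date.AddDays(n);
        int stillPlaying = sessions
            .Where(s => s.Start.Date == later && cohort.Contains(s.PlayerId))
            .Select(s => s.PlayerId)
            .Distinct()
            .Count();

        return (float)stillPlaying / cohort.Count;
    }
}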

How do we fill out this session data? When the game starts, we want to know which player is playing. The game sends the server a unique id that it either kept from a previous run or is issued by our server. Once our player has logged in, we make a new session record with the player id and the current timestamp. Simple, once logging in is handled.

How do we know when the session ended? By definition, the game can’t send us data any more, so we need to fill the end time in before we know the player has finished. Let’s say our game checks in with the server every minute or so; on each check-in we update the end time of that player’s latest session to the current timestamp. Now our session data has an end time, and even when our player loses their internet connection the session is preserved as best it can be. Note: if the player starts their game offline and then comes online, that should be handled when our game checks in with the server, so it doesn’t join the last session and this session together. Perhaps keep a logged-in flag locally until the game can make contact with the server.
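
A minimal sketch of the client side of this, with hypothetical StartSessionOnServer and UpdateSessionEndOnServer calls standing in for whatever transport you actually use:

using System;

// Client-side sketch: start a session on login, then keep pushing the end
// timestamp forward on every check-in.
class SessionReporter
{
    string playerId;   // kept from a previous run, or issued by the server
    bool sessionOpen;

    public void OnLogin(string id)
    {
        playerId = id;
        StartSessionOnServer(playerId, DateTime.UtcNow); // creates the session record
        sessionOpen = true;
    }

    // Called every minute or so while the game is running.
    public void OnCheckIn()
    {
        if (!sessionOpen) return;
        // The server sets the end timestamp of this player's latest session to "now".
        UpdateSessionEndOnServer(playerId, DateTime.UtcNow);
    }

    // Hypothetical transport stubs.
    void StartSessionOnServer(string id, DateTime start) { /* HTTP call, etc. */ }
    void UpdateSessionEndOnServer(string id, DateTime end) { /* HTTP call, etc. */ }
}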

Now with the data and techniques you can see how popular your game is. Unfortunately nothing described here can tell you why it’s not. Sorry.

Databases

This week I’ve been making the databases for my game: the data storage objects that make my game work. A lot of the features of these classes are similar, and I’ve been wondering what the best way to design them is. I’ll explain how I do it at the moment and talk about why.

My current solution involves inheriting a base database class. This class handles all of the common behaviour, such as loading and saving the data from the server, but the data objects are filled out in the child class. I find myself having to write similar functions for searching and ordering the data objects. I feel like there may be a better way to handle the data objects.

Here is the base class I use:


using UnityEngine;
using System.Xml;
using AdvancedUtility;

public class Database : Object {

    public bool gotData;
    protected delegate void OnGotDataCB();
    protected OnGotDataCB OnGotData; 

    protected string scriptLocation;
    protected int loaderOptions;
    private AdvancedLoader loader;

    public Database() {
        gotData = false;
    }

    public void LoadData() {
        loader = new AdvancedLoader(loaderOptions);

        loader.SetSuccessCB(WebReturnedSomething);
        loader.SetErrorCB(WebError);

        loader.Load(scriptLocation);
    }

    protected void SaveLocalToCache(XmlNode preLoaded) {
        loader.SavePrelodedData(scriptLocation, preLoaded);
    }

    protected void SaveLocalToCache(string preLoaded) {
        loader.SavePrelodedData(scriptLocation, preLoaded);
    }

    protected void SaveLocalToCache(byte[] preLoaded) {
        loader.SavePrelodedData(scriptLocation, preLoaded);
    }

    private void WebReturnedSomething(XmlNode data) {
        ProcessData(data);
        gotData = true;

        if (OnGotData != null)
            OnGotData();
    }

    protected virtual void ProcessData(XmlNode data) {

    }

    protected virtual void WebError(NetError error) {

    }
}

The Object class I inherit from handles object destruction and memory handling. The member variables are either expected by my database manager or are there to pass information from the child class to the loader object. The LoadData method constructs the loader to fetch the data from wherever the loaderOptions direct it. I also have the functionality to override the cache with any local changes that I have. ProcessData and WebError let the child class know the outcome of the loader call.
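
For context, a child class in this design might look roughly like the following; PlayerDatabase, its XML layout and the endpoint are invented for the example, and NetError comes from the base class above:

using System.Collections.Generic;
using System.Xml;
using UnityEngine;

// Hypothetical child class: the base Database handles loading and caching,
// the child only fills out its own data objects from the returned XML.
public class PlayerDatabase : Database {

    public List<string> playerNames = new List<string>();

    public PlayerDatabase() {
        scriptLocation = "http://example.com/players.php";  // made-up endpoint
        loaderOptions = 0;                                   // whatever defaults you use
    }

    protected override void ProcessData(XmlNode data) {
        playerNames.Clear();
        foreach (XmlNode node in data.SelectNodes("player")) {
            playerNames.Add(node.Attributes["name"].Value);
        }
    }

    protected override void WebError(NetError error) {
        // Surface the error however the game expects; here we just log it.
        Debug.LogWarning("PlayerDatabase failed to load: " + error);
    }
}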

If you’ve any suggestions let me know.

Raining on my Parade.

Last week I worked on a shader that mimicked a rain effect; you can see how I did it here. This is a follow-up about the performance of that shader on a variety of devices. First off I created a similar particle effect, then I placed each effect in a separate scene and duplicated it 500 times to exaggerate the difference in performance.

System          Shader FPS    Particle FPS
Unity Editor    137           126
32-bit PC       70            433
64-bit PC       57            446
Android         4             21

The results here show that Unity’s particles are much more optimised than my shader. However, I’ll stress that you should test your options on your own target system before letting anyone else tell you which route is faster or more efficient.

Make it Rain

Here’s a brief tutorial on how I made one of my first shaders. I’ve broken it down and explained every step, but if you’re not a programmer some of the terminology might trip you up. I’d recommend the Makin’ Stuff Look Good YouTube tutorials for a more intuitive introduction.

 Here’s how I did it.

What I wanted was a shader that would take a random noise texture, render only the pixels whose colour values fall below a customizable threshold value, and set their colour to another customizable colour; the rest would be invisible. Then I needed to be able to tile and offset the noise texture to give the impression of movement.

Shader "Rain" {
    Properties{
         
    }

    SubShader{
        Tags{

        }

        Pass{
            CGPROGRAM

            ENDCG
        }
    }
}

This is our empty shader. The CGPROGRAM and ENDCG keywords define the start and end of the actual shader code; the rest is there so Unity’s ShaderLab can interpret it properly.

Shader "Rain"{ tells Unity what this shader’s name is, we can also organise our shaders alongside Unity’s default shaders by mimicking a path name. For example Shader "Custom/Weather Effects/Rain" { will place our shader in a Custom and Weather Effects folder in Unity’s shader drop down list.

The Properties block tells Unity to show the things here in the editor, this is where you add your texture and anything else your shader needs. You might be familiar with these as the material settings, as that’s where they appear in the Unity editor, but they belong to the shader.

The SubShader block allows you to define multiple versions of the shader. If you have a particularly expensive shader, you can define multiple SubShaders and the player’s graphics card will go down the list and use the first SubShader block that works.

The Tags block is used to set the rendering order and some other parameters of a SubShader. It’s a set of key-value pairs; for more info check here.

The Pass block tells Unity to do a rendering run with the shader code inside. Each SubShader can have multiple Pass blocks, but keep in mind that each Pass is a run on the GPU, so if you can achieve what you want from a shader in fewer Passes it’s almost always better to do so.

Before we can start rendering stuff we need to define a few data types in our shader.

CGPROGRAM

struct appdata {
    float4 vertex : POSITION;
    float2 uv : TEXCOORD0;
};

struct v2f {
    float4 vertex : SV_POSITION;
    float2 uv : TEXCOORD0;
};

ENDCG

Why we’ve done this will become clear in a little bit; first I want to explain what we’ve done. The first struct has two members, of type float4 and float2. A float4 is another data structure that holds four float values, whereas a float2, you guessed it, holds two. We’ve also put some other stuff after our variable types and names: POSITION and TEXCOORD0 are semantics that let the computer know what values to fill these with. TEXCOORD0 tells the computer to use the first set of uv coordinates; TEXCOORD1, TEXCOORD2 and TEXCOORD3 represent the others. These can be float2, float3 or float4, but we only want the x and y. POSITION represents the position of the vertex, simple. But why is it a float4? The reason is that it is bundled with an extra value used for clipping, don’t worry about it. The second struct consists of the same data but uses SV_POSITION instead of POSITION. The reason is a boring one: this semantic is there for compatibility with PlayStation and a few other platforms.

#pragma vertex vert
#pragma fragment frag

v2f vert (appdata INvertex) {
    
}

float4 frag (v2f INfragment) : SV_TARGET {
    
}
ENDCG

Ok, this is why we did the stuff we did: so we can use it in these functions. The #pragma directives tell the compiler which of our functions is the vertex shader and which is the fragment shader. The SV_TARGET semantic tells the computer that this function returns the colour for the pixel, so it can stop and move on to the next pixel. The vert function is used to translate a point in the game to a point on the screen, and the frag function chooses what the colour of the pixel should be.

v2f vert (appdata INvertex) {
    v2f output;
    output.vertex = mul(UNITY_MATRIX_MVP, INvertex.vertex);
    output.uv = INvertex.uv;
    return output;
}

float4 frag (v2f INfragment) : SV_TARGET {
    return float4(1,1,1,1);
}
ENDCG

Let’s start with the frag function. It simply returns a white colour. The vert function looks a bit more complex, but isn’t really. It’s just preparing the output: the uv coordinates for the fragment and the vertex are the same, but the position needs to change from local position to screen position. There’s a pre-made matrix that we can use to transform the vertex we get from the computer. UNITY_MATRIX_MVP transforms the coordinates first from local coordinates to world coordinates (model), then to camera coordinates (view), then manipulates them to fit the projection. The mul function applies a multiplication – remember that matrix multiplication is not commutative, so the order of the arguments matters.

Congratulations you’ve just written your first shader. Maybe you’d like to make it a bit more interesting?

Properties {
    _CustomColor("Colour", Color) = (1,1,1,1)
}
float4 _CustomColor;
float4 frag (v2f INfragment) : SV_TARGET {
    return _CustomColor;
}

We’ve done a few things here. We’ve defined an editor field for the variable _CustomColor, with the editor name "Colour" and the type Color, which will default to white. We’ve also linked the variable _CustomColor in our Pass to the one in Properties by declaring it again. And finally we return _CustomColor instead of white in the frag function.

Properties {
    _CustomColor("Colour", Color) = (1,1,1,1)
    _MainTex("Noise Texture", 2D) = "white" { }
}
sampler2D _MainTex;
float4 _CustomColor;
float4 frag (v2f INfragment) : SV_TARGET {
    float4 noiseColor = tex2D(_MainTex, INfragment.uv);
    return noiseColor;
}


Here we’ve added the noise texture; I’m using this one. Again we’ve defined our texture in Properties. We’ve called it _MainTex because that’s the standard name for the main texture a shader uses, and Unity has some functions that rely on this. Again we’ve linked _MainTex in our Pass. The tex2D function takes a texture and a uv coordinate and outputs the colour of the texture at those coordinates. This should look exactly like our noise texture.

Properties {
    _CustomColor("Colour", Color) = (1,1,1,1)
    _MainTex("Noise Texture", 2D) = "white" { }
    _NoiseThreshold("Intensity", Range(0,1)) = 0
}
sampler2D _MainTex;
float4 _CustomColor;
float _NoiseThreshold;
float4 frag (v2f INfragment) : SV_TARGET {
    float4 noiseColor = tex2D(_MainTex, INfragment.uv);
    
    clip(_NoiseThreshold - noiseColor.rgb);

    return _CustomColor;
}

Let’s go through this bit by bit. We’ve added a new variable in Properties: _NoiseThreshold is the proportion of the pixels that we will render. Because it’s a proportion, we’ve constrained its values between 0 and 1 using the Range(0,1) type. We’ve linked the threshold in the Pass and then used it in the clip function. What clip does is discard the pixel if the value passed in is less than 0. You can read the documentation here. We’ve also gone back to returning _CustomColor; this time only the pixels that aren’t clipped will be rendered.

float4 _MainTex_ST;
v2f vert (appdata INvertex) {
    v2f output;
    output.vertex = mul(UNITY_MATRIX_MVP, INvertex.vertex);
    output.uv = INvertex.uv * _MainTex_ST.xy + _MainTex_ST.zw;
    return output;
}

Now we’re getting somewhere! Adding the tiling and offset values from the texture in the material is pretty easy. For every texture given to the shader, Unity makes a float4 that holds the scale and the translation of the texture; they call it [texture name]_ST. We still need to link it in the Pass, but we don’t need to define it in Properties. We can now transform the uv coordinates by multiplying them by the scale (_MainTex_ST.xy) and adding the translation (_MainTex_ST.zw). If the names of the variables are confusing, it’s because they are: xy is the first pair of values as a float2 and zw is the second pair.

So now you can mess around with the values of the shader we’ve defined in Properties including the tiling and offset values. I’ll put the whole shader below along with some examples. But first some links to good resources or tutorials on the subject.

Here’s the shader.

Shader "Custom/Weather Effects/Rain" {

	Properties{
		_MainTex("Noise Texture", 2D) = "white" { }
		_CustomColor("Noise Color", Color) = (1,1,1,1)
		_NoiseThreshold("Intensity", Range(0, 1)) = 0
	}

	SubShader{
		//We didn't use a Tag in this shader
		Pass{
			CGPROGRAM
			
			//define the functions
			#pragma vertex vert
			#pragma fragment frag

			//vertex structure
			struct appdata {
				float4 vertex : POSITION;
				float2 uv : TEXCOORD0;
			};
			
			//fragment structure
			struct v2f {
				float2 uv : TEXCOORD0;
				float4 vertex : SV_POSITION;
			};
	
			//linking definitions
			sampler2D _MainTex;
			float4 _MainTex_ST;
			float4 _CustomColor;
			float _NoiseThreshold;
			
			v2f vert(appdata INvertex) {
				v2f output;
				output.vertex = mul(UNITY_MATRIX_MVP, INvertex.vertex);		//transform to screen
				output.uv = INvertex.uv *_MainTex_ST.xy + _MainTex_ST.zw;	//allow tiling and offset
				return output;
			}
	
			float4 frag(v2f INfragment) : SV_Target{
				float4 noise = tex2D(_MainTex, INfragment.uv);	//get noise value

				clip(_NoiseThreshold - noise.rgb);				//discard pixel if too low
				
				return _CustomColor;							//use uniform colour
			}

			ENDCG
		}
	}
}

And some examples.

Advanced Loader


I spent the first part of this week working on a tool to let my game connect to the internet with less hassle. The Unity engine lets us contact the internet using a WWWForm and either a WWW object or a UnityWebRequest, depending on the version. I found that I was writing a lot of the same code for each request. I wanted a set of options that I could define once and that would be used by all requests – unless specified otherwise. I also wanted to send the request in a couple of lines and have functions that would be called when the request finished.


The AdvancedLoader inherits from UnityEngine.Object so it can be created and destroyed in scripts, unlike a MonoBehaviour, which must be constructed with an AddComponent call. This means that each AdvancedLoader must be able to contact a MonoBehaviour to run its coroutines. As overused as singletons are in Unity, this seems the perfect opportunity: the AdvancedLoader will create a singleton to run its coroutines if needed.
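
The coroutine-runner singleton it creates is a common Unity pattern. A minimal sketch of the idea (the names are assumptions, not the actual AdvancedLoader source):

using System.Collections;
using UnityEngine;

// A hidden MonoBehaviour singleton that plain Objects can borrow to run
// coroutines on; an AdvancedLoader-style class would call CoroutineRunner.Run(...).
public class CoroutineRunner : MonoBehaviour {

    static CoroutineRunner instance;

    static CoroutineRunner Instance {
        get {
            if (instance == null) {
                var go = new GameObject("CoroutineRunner");
                Object.DontDestroyOnLoad(go);
                instance = go.AddComponent<CoroutineRunner>();
            }
            return instance;
        }
    }

    public static Coroutine Run(IEnumerator routine) {
        return Instance.StartCoroutine(routine);
    }
}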

I also found myself wishing that UnityWebRequest’s cache operations allowed an attempt at the server first and, only if an error occurred, fell back to the cache. This was easy enough to implement once the proper error handling was in place. I also added a timeout module for cases on mobile where the cellular data is not fast enough to contact the server. This produces its own error, and the number of attempts and the duration of each attempt are customizable and separate from other loaders.

I also incorporated the post data, used in the same way as the Unity solution, but handled internally in the AdvancedLoader. This just helped to clean up the network code in the game class. It prevented me from having to worry about the post data, knowing that the AdvancedLoader would handle it properly.

I’m using Xml as my format when transmitting data from the server. It frustrated me that I had to convert the text to Xml every time the loader came back, so now the AdvancedLoader will return the data as Xml if it is requested as Xml. My OnGotData function expects an XmlNode instead of a string, letting me perform the game-level operations without having to convert text to Xml.

I have personal data on my game’s server, so it would be best if the sources could be verified. Currently sources are verified using an md5 hash, and can be authorised on a game-by-game basis.

There are some improvements I’ve thought of while deciding to release the AdvancedLoader source. The first is a PreLoad method: one loader fetches a batch of data from the server, and the data is then split up and saved to the cache under given urls using the AdvancedLoader’s save method, meaning that should a call be made to a PreLoaded url, the AdvancedLoader can use the cached version. This is to prevent overloading the network when making lots of AdvancedLoader calls. The second is a concurrent loader manager: to circumvent the same problem, only a set number of AdvancedLoaders will be active at any one time, and the rest will wait in a queue.

While this package is pending approval on the Unity Asset Store you can download it here. If you’ve any other suggestions for improvements, please let me know.

UI in VR

User interfaces in virtual reality are a problem yet to be solved.

VR, as the name suggests, further blends the digital and real worlds. The solution to user interfaces lies in the hinterland between digital and real user interfaces.

I’ve been thinking about this problem for a long time. The first games I played in the current era of VR were small, mini-game-like programs that booted up from the desktop and went straight into the gameplay. These games were great experiments in a booming field, but whether because the UI problem was not considered or because it was too big a problem to solve, these developers chose to ignore it completely. As virtual reality forges its own path in the modern world, its users expect a far more slick and polished virtual environment. This UI problem won’t go away.

As we look back at previous digital user interfaces, we see buttons everywhere. These tools are the most common and effective form of interactivity in the history of digital development. In your browser now, you see the refresh button, the back button, the close button – even the bookmarks highlight themselves when the mouse rolls over them and appear to depress while the mouse button is down. There are two reasons this works. The first is that the mouse is represented on the screen as a pointer, a precise, manoeuvrable digital tool; we don’t have this in virtual reality. The second is that the animation of the button matches the action we are performing: as we press down on the mouse button, the digital button is depressed too. This helps us make the connection that we made this happen, and it is a crucial piece of feedback.

This takes me on to real-world user interfaces. What are digital user interfaces trying to do? Navigate. Forward and back. Start game and settings. They are helping guide the user to the correct piece of the program. How do we do this in the real world? The folder system in computers has converted it to buttons, but its origins are clear: a filing system requires a person to navigate a room by reading signs and understanding their implications, and they must move in a real-world space to do it. Perhaps this holds the solution to our UI problem; perhaps a Stanley Parable-like journey through a series of doors and corridors would provide a simple menu where the user is not required to use buttons. The movement problem in VR is another problem for another post.

As the demand for virtual reality programs increases, many inventive and creative people have taken it upon themselves to solve this problem. Let’s split their solutions into three types: controller solutions, vehicular solutions and staring solutions.

Graffiti Simulator

This is a controller solution: we can see that the player controls the spray can and the UI with their bespoke VR controllers (e.g. Oculus Touch). It’s clear that this developer considered what the player would be doing in the game. This is about painting, so how do painters select which colour they’ll use? They have a wooden palette in their off hand, and on it they have the selection of colours they are using for this painting. The developer has taken a digital painting palette and placed it on the player’s off hand. The player can access it easily, but it’s fixed to their hand and, therefore, anchored in the virtual world. This isn’t nearly as disorientating as having the palette fixed on the screen or having some permanent onscreen button that opens up the palette.

One area in which VR particularly excels is vehicular games: games where the player doesn’t move, but controls a vehicle that moves them. People are already familiar with this kind of interaction from driving cars. A popular game that employs this solution is Elite Dangerous.

Here the UI is still anchored in the game world: it exists on the dashboard of whichever spacecraft the player is piloting. To bring up a menu they look at whichever side of the dashboard has the UI for what they want to do, then the game controls transfer over to UI controls and the player uses the UI as they would a console UI, using the joystick to scroll through buttons and select the one they’d like to press. This is pretty seamless, as people generally look at the thing they’re trying to use. It can be taken too far, however.

This brings me to the final solution. Even if you’ve only played a couple of VR games, it’s likely that you’ve come up against this design. In the history of game development, the gameplay comes first; the UI is made later and fitted around the game. This works for normal digital games, as the computer, console or phone already has good ways to interact with traditional buttons. However, making the UI part of the game world is essential for an immersive UI experience. The staring solution is essentially making a choice by looking directly at your choice for a specified amount of time. This is not how people look at things. People use a combination of moving their heads and moving their eyes, and this is what the controller solutions are doing – they are tracking the player’s eyes by using their hands.

The best UI solution for your game depends on your game, but there are a few general rules. Fix your UI elements in the game world: whether it’s on the vehicle’s dashboard, on the player’s hand or in the player’s lap, designate a game-world space for your UI elements. If you need a main menu, make it similar to the VR world – let players make their choices in a context that fits your VR world. Or perhaps you don’t need to have your main menu in VR at all; players have already navigated to your game without VR, so maybe a ‘Put your 3D glasses on now’ moment isn’t out of place. Just make it easy.

Unity2D

With the rise of the mobile market, 2D games have become more popular than ever, with titles ranging from the casual side of Candy Crush to the esport of Hearthstone. There’s no denying that 2D games are still popular, and so the Unity developers have adapted their engine to smooth the process of making 2D games. What follows is an analysis of Unity2D’s RectTransform.

Engines, how do they work?

When making 3D games, the gamespace is a three dimensional space (obvious, right?), but not quite. It’s actually lots of three dimensional spaces: each object in the game space has its own coordinate system, with itself at the center, and that object’s children have coordinates in their parent’s coordinate system. This lets you move an object (a car, say) in a direction and all of its children (doors, wheels etc.) stay in the same place with respect to the parent.

This already poses a serious problem to those making games: it all needs to be displayed on a 2D screen. The way we solve this is to pick a viewpoint – this is the eye, or the camera, of the game. It takes everything it sees and applies a matrix transformation to bring all of the objects into the root coordinate system, or gamespace, and then it can generate perspective in its 2D rendering of the gamespace.

When it comes to 2D games this process is much simpler: the root gamespace is simply the screen area, so there’s no need for a camera and no need for any fancy perspective. Things are the size they are and that’s that. We still have child and parent coordinate systems, as they’re still useful for moving groups of objects, but because we don’t have perspective we can express the width and height of objects in pixel dimensions on the screen. In addition, it makes sense for parent objects to take the bounds of their children, as all of the display objects have a position, width and height and will be displayed on a quad (rectangle) of the same size. The position, width and height of this quad is useful and easily calculable.

In Unity2D the engine still assumes perspective, as Unity2D is a cross-section of the 3D engine. There are many tools that the Unity team has developed to ease the process of making 2D games; however, there are some pitfalls that I’ve found.

Scrolling Example

Say you wanted to make a list in your game. It might be a leaderboard or a settings menu; in this example we’ll use a settings menu. In an explicit 2D engine this would be as simple as adding each setting in turn to the parent scrolling panel and setting the y coordinate of that setting to be equal to the current height of the scrolling panel. This ensures that each child has enough space to be completely visible without overlapping, whatever the size of the child, because the parent’s height takes its children’s heights into account.
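
To make the contrast concrete, here’s a toy version of that explicit-2D behaviour in C# (Panel is a made-up type, not an engine class):

using System.Collections.Generic;

// A toy "explicit 2D" panel: its height is the sum of its children's heights,
// so the next child is always placed at y = current height.
class Panel
{
    public float y;
    public float height;
    public List<Panel> children = new List<Panel>();

    public void AddChild(Panel child)
    {
        child.y = height;        // place it just below everything added so far
        children.Add(child);
        height += child.height;  // the parent grows to contain the new child
    }
}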

In Unity2D the parent’s height does not reflect the height of its children, so this method becomes far more difficult, especially when, in our settings menu, each child (option) may contain a name, a slider or a checkbox. Unity’s solution is to use a combination of the VerticalLayoutGroup, ContentSizeFitter and LayoutElement components. While this works well for lists of objects whose size is known, it breaks down if things start to change size. Essentially, this solution requires knowing the dimensions of all the settings menu options beforehand.
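
For reference, here’s a sketch of wiring those components up from code; the content object and option prefab are assumptions about how the scroll view is set up:

using UnityEngine;
using UnityEngine.UI;

// Sketch: make the scroll content size itself to its children using Unity's
// layout components. "content" is the RectTransform under a ScrollRect.
public static class SettingsListBuilder
{
    public static void SetUp(RectTransform content)
    {
        var layout = content.gameObject.AddComponent<VerticalLayoutGroup>();
        layout.childForceExpandHeight = false;

        var fitter = content.gameObject.AddComponent<ContentSizeFitter>();
        fitter.verticalFit = ContentSizeFitter.FitMode.PreferredSize;
    }

    // Each option advertises its own height via a LayoutElement; this is the
    // part that has to be known up front.
    public static void AddOption(RectTransform content, GameObject optionPrefab, float height)
    {
        GameObject option = Object.Instantiate(optionPrefab, content);
        var element = option.GetComponent<LayoutElement>();
        if (element == null)
            element = option.AddComponent<LayoutElement>();
        element.preferredHeight = height;
    }
}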

In my game I made sure my options had plenty of space, such that each option would be the same size. But I’ve yet to find a solution that I like as much as the explicit 2D solution. If you’ve found one, let me know.