Databases

This week I’ve been building the databases for my game – the data storage objects that make everything work. A lot of the features of these classes are similar, and I’ve been wondering about the best way to design them. I’ll explain how I do it at the moment and talk about why.

My current solution involves inheriting from a base database class. This class handles all of the common behaviour, such as loading and saving the data from the server, while the data objects are filled out in the child class. I find myself having to write similar functions for searching and ordering the data objects in each child class, so I feel there may be a better way to handle them.

Here is the base class I use:


using UnityEngine;
using System.Xml;
using AdvancedUtility;

public class Database : Object {

    public bool gotData;                        // true once the loader has returned and ProcessData has run
    protected delegate void OnGotDataCB();
    protected OnGotDataCB OnGotData;            // child classes hook this to react when new data arrives

    protected string scriptLocation;            // location of the server script to load from
    protected int loaderOptions;                // options passed through to the AdvancedLoader
    private AdvancedLoader loader;

    public Database() {
        gotData = false;
    }

    public void LoadData() {
        loader = new AdvancedLoader(loaderOptions);

        loader.SetSuccessCB(WebReturnedSomething);
        loader.SetErrorCB(WebError);

        loader.Load(scriptLocation);
    }

    protected void SaveLocalToCache(XmlNode preLoaded) {
        loader.SavePrelodedData(scriptLocation, preLoaded);
    }

    protected void SaveLocalToCache(string preLoaded) {
        loader.SavePrelodedData(scriptLocation, preLoaded);
    }

    protected void SaveLocalToCache(byte[] preLoaded) {
        loader.SavePrelodedData(scriptLocation, preLoaded);
    }

    private void WebReturnedSomething(XmlNode data) {
        ProcessData(data);
        gotData = true;

        if (OnGotData != null)
            OnGotData();
    }

    protected virtual void ProcessData(XmlNode data) {

    }

    protected virtual void WebError(NetError error) {

    }
}

The Object class I inherit from handles object destruction and memory management. The member variables are either expected by my database manager or are there to pass settings from the child class to the loader. The LoadData method constructs the loader and fetches the data from wherever scriptLocation and loaderOptions direct it. The SaveLocalToCache overloads give me the ability to override the cache with any local changes I have. ProcessData and WebError let the child class know the outcome of the loader call.
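
For context, a child class ends up looking something like the sketch below. ItemDatabase and ItemData are placeholder names, not my actual classes: the child sets scriptLocation and loaderOptions, fills out its data objects in ProcessData, and then repeats the same kind of searching and ordering helpers I’d like to factor out.

using System.Collections.Generic;
using System.Linq;
using System.Xml;

// Hypothetical child class, just to illustrate the pattern described above.
public class ItemDatabase : Database {

    private List<ItemData> items = new List<ItemData>();

    public ItemDatabase() {
        scriptLocation = "http://example.com/items.php";    // placeholder location
        loaderOptions = 0;                                   // placeholder options
    }

    // Fill out the data objects from the server's Xml.
    protected override void ProcessData(XmlNode data) {
        items.Clear();
        foreach (XmlNode node in data.SelectNodes("item"))
            items.Add(new ItemData(node));
    }

    // The kind of searching and ordering helpers I keep rewriting per database.
    public ItemData FindById(int id) {
        return items.FirstOrDefault(i => i.id == id);
    }

    public List<ItemData> OrderedByName() {
        return items.OrderBy(i => i.name).ToList();
    }
}

// Simple data object filled from an Xml node.
public class ItemData {
    public int id;
    public string name;

    public ItemData(XmlNode node) {
        id = int.Parse(node.Attributes["id"].Value);
        name = node.Attributes["name"].Value;
    }
}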

If you’ve any suggestions let me know.

Raining on my Parade

Last week I worked on a shader that mimicked a rain effect; you can see how I did this here. This is a follow-up about the performance of that shader on a variety of devices. First off, I created a similar particle effect, then I placed each effect in a separate scene and duplicated it 500 times to exaggerate the difference in performance.

System          Shader FPS    Particle FPS
Unity Editor    137           126
32-bit PC       70            433
64-bit PC       57            446
Android         4             21

The results here show that Unity’s particles are much more optimised than my shader. However, I’ll stress that you should test your options on your own target system before letting anyone else tell you which route is faster or more efficient.
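
If you want to run a similar comparison yourself, a minimal FPS counter like this hypothetical helper (not the profiler I used) is enough to get rough numbers on a device:

using UnityEngine;

// Minimal smoothed FPS readout for rough on-device comparisons.
public class FpsCounter : MonoBehaviour {

    private float smoothedDelta = 0.02f;

    void Update() {
        // Exponentially smooth the frame time so the readout stays stable.
        smoothedDelta += (Time.unscaledDeltaTime - smoothedDelta) * 0.1f;
    }

    void OnGUI() {
        GUI.Label(new Rect(10, 10, 200, 25), "FPS: " + Mathf.RoundToInt(1.0f / smoothedDelta));
    }
}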

Make it Rain

Here’s a brief tutorial about how I made one of my first shaders. I’ve broken it down and explained every step, but if you’re not a programmer some of the terminology might trip you up. I’d recommend the Makin’ Stuff Look Good YouTube tutorials for a more intuitive introduction.

Here’s how I did it.

What I wanted was a shader that would take a random noise texture, render only the pixels whose colour values fall below a customizable threshold, and set their colour to another customizable colour; the rest would be invisible. Then I needed to be able to tile and offset the noise texture to give the impression of movement.

Shader "Rain" {
    Properties{
         
    }

    SubShader{
        Tags{

        }

        Pass{
            CGPROGRAM

            ENDCG
        }
    }
}

This is our empty shader. The CGPROGRAM and ENDCG keywords mark the start and end of the actual shader code; the rest is there so Unity’s ShaderLab can interpret it properly.

Shader "Rain" { tells Unity what this shader’s name is. We can also organise our shaders alongside Unity’s default shaders by mimicking a path name. For example, Shader "Custom/Weather Effects/Rain" { will place our shader in a Custom folder and then a Weather Effects folder in Unity’s shader drop-down list.

The Properties block tells Unity what to expose in the editor; this is where you add your texture and anything else your shader needs. You might be familiar with these as the material settings, as that’s where they appear in the Unity editor, but they belong to the shader.

The SubShader block allows you to define multiple variants of the shader. If you have a particularly expensive shader, you can define multiple SubShaders and the player’s graphics card will go down the list and use the first SubShader block that works.

The Tags block is used to set the rendering order and some other parameters of a SubShader. It’s a set of key-value pairs; for more info check here.

The Pass block tells Unity to do a rendering run with the shader code inside. Each SubShader can have multiple Pass blocks, but keep in mind that each Pass is a run on the GPU; if you can achieve what you want from a shader in fewer Pass-es, it’s almost always better to do so.

Before we can start rendering stuff we need to define a few data types in our shader.

CGPROGRAM

struct appdata {
    float4 vertex : POSITION;
    float2 uv : TEXCOORD0;
};

struct v2f {
    float4 vertex : SV_POSITION;
    float2 uv : TEXCOORD0;
};

ENDCG

Why we’ve done this will become clear in a little bit; first I want to explain what we’ve done. The first struct has two fields, of type float4 and float2. A float4 is another data structure that holds four float values, whereas a float2, you guessed it, holds two. We’ve also put some other things after our variable types and names: POSITION and TEXCOORD0 are semantics that let the computer know what values to fill these with. TEXCOORD0 tells the computer to use the first set of uv coordinates; TEXCOORD1, TEXCOORD2 and TEXCOORD3 represent the others. These can be float2, float3 or float4, but we only want the x,y. POSITION represents the position of the vertex, simple. But why is it a float4? The reason is that the position is bundled with an extra component used for clipping – don’t worry about it. The second struct consists of the same data but uses SV_POSITION instead of POSITION, and the reason is a boring one: this semantic is needed for compatibility with PlayStation and a few other platforms.

#pragma vertex vert
#pragma fragment frag

v2f vert (appdata INvertex) {
    
}

float4 frag (v2f INfragment) : SV_TARGET {
    
}
ENDCG

OK, this is why we did the stuff we did above: so we can use those structs in these functions. The #pragma directives tell the compiler which of the functions we’ve written are the vertex and fragment shaders. The SV_TARGET semantic tells the computer that this function returns the final colour for the pixel, so it can finish with it and move on to the next one. The vert function is used to translate a point in the game to a point on the screen, and the frag function chooses what colour the pixel should be.

v2f vert (appdata INvertex) {
    v2f output;
    output.vertex = mul(UNITY_MATRIX_MVP, INvertex.vertex);
    output.uv = INvertex.uv;
    return output;
}

float4 frag (v2f INfragment) : SV_TARGET {
    return float4(1,1,1,1);
}
ENDCG

Let’s start with the frag function: it simply returns a white colour. The vert function looks a bit more complex, but isn’t really. It’s just preparing the output. The uv coordinates for the fragment and the vertex are the same, but the position needs to change from local position to screen position. There’s a pre-made matrix that we can use to transform the vertex we’re given. UNITY_MATRIX_MVP transforms the coordinates first from local coordinates to world coordinates (model), then to camera coordinates (view), then manipulates them to fit the projection (projection). The mul function applies a matrix multiplication – remember that matrix multiplication is not commutative, so the order of the arguments matters.

Congratulations you’ve just written your first shader. Maybe you’d like to make it a bit more interesting?

Properties {
    _CustomColor("Colour", Color) = (1,1,1,1)
}
float4 _CustomColor;
float4 frag (v2f INfragment) : SV_TARGET {
    return _CustomColor;
}

We’ve done a few things here. We’ve defined an editor field for the variable _CustomColor, with the editor name "Colour" and the type Color, which defaults to white. We’ve also linked the variable _CustomColor in our Pass to the one in Properties by declaring it again. And finally we return _CustomColor instead of white in the frag function.

Properties {
    _CustomColor("Colour", Color) = (1,1,1,1)
    _MainTex("Noise Texture", 2D) = "white" { }
}
sampler2D _MainTex;
float4 _CustomColor;
float4 frag (v2f INfragment) : SV_TARGET {
    float4 noiseColor = tex2D(_MainTex, INfragment.uv);
    return noiseColor;
}


Here we’ve added the noise texture; I’m using this one. Again we’ve defined our texture in Properties. We’ve called it _MainTex because that’s the standard name for the main texture a shader uses, and Unity has some functions that rely on this. Again we’ve linked _MainTex in our Pass. The tex2D function takes a texture and a uv coordinate and outputs the colour of the texture at those coordinates. This should look exactly like our noise texture.

Properties {
    _CustomColor("Colour", Color) = (1,1,1,1)
    _MainTex("Noise Texture", 2D) = "white" { }
    _NoiseThreshold("Intensity", Range(0,1)) = 0
}
sampler2D _MainTex;
float4 _CustomColor;
float _NoiseThreshold;
float4 frag (v2f INfragment) : SV_TARGET {
    float4 noiseColor = tex2D(_MainTex, INfragment.uv);
    
    clip(_NoiseThreshold - noiseColor.rgb);

    return _CustomColor;
}

Let’s go through this bit by bit. We’ve added a new variable in Properties: _NoiseThreshold is the proportion of the pixels that we will render. Because it’s a proportion, we’ve constrained its values between 0 and 1 using the Range(0,1) type. We’ve linked the threshold in the Pass and then we use it in the clip function. What clip does is discard the pixel if the value passed in is less than 0; you can read the documentation here. We’ve also gone back to returning _CustomColor, and this time only the pixels that aren’t clipped will be rendered.

float4 _MainTex_ST;
v2f vert (appdata INvertex) {
    v2f output;
    output.vertex = mul(UNITY_MATRIX_MVP, INvertex.vertex);
    output.uv = INvertex.uv * _MainTex_ST.xy + _MainTex_ST.zw;
    return output;
}

Now we’re getting somewhere! Adding the tiling and offset values from the texture in the material is pretty easy. For every texture given to the shader, Unity makes a float4 which holds the scale and the translation of the texture; they call it [texture name]_ST. We still need to link it in the Pass, but we don’t need to define it in Properties. We can now transform the uv coordinates by multiplying them by the scale (_MainTex_ST.xy) and adding the translation (_MainTex_ST.zw). If the names of the components are confusing, it’s because they are: xy is the first pair of values as a float2 and zw is the second pair.

So now you can mess around with the values of the shader we’ve defined in Properties including the tiling and offset values. I’ll put the whole shader below along with some examples. But first some links to good resources or tutorials on the subject.

Here’s the shader.

Shader "Custom/Weather Effects/Rain" {

	Properties{
		_MainTex("Noise Texture", 2D) = "white" { }
		_CustomColor("Noise Color", Color) = (1,1,1,1)
		_NoiseThreshold("Intensity", Range(0, 1)) = 0
	}

	SubShader{
		//We didn't use a Tag in this shader
		Pass{
			CGPROGRAM
			
			//define the functions
			#pragma vertex vert
			#pragma fragment frag

			//vertex structure
			struct appdata {
				float4 vertex : POSITION;
				float2 uv : TEXCOORD0;
			};
			
			//fragment structure
			struct v2f {
				float2 uv : TEXCOORD0;
				float4 vertex : SV_POSITION;
			};
	
			//linking definitions
			sampler2D _MainTex;
			float4 _MainTex_ST;
			float4 _CustomColor;
			float _NoiseThreshold;
			
			v2f vert(appdata INvertex) {
				v2f output;
				output.vertex = mul(UNITY_MATRIX_MVP, INvertex.vertex);		//transform to screen
				output.uv = INvertex.uv *_MainTex_ST.xy + _MainTex_ST.zw;	//allow tiling and offset
				return output;
			}
	
			float4 frag(v2f INfragment) : SV_Target{
				float4 noise = tex2D(_MainTex, INfragment.uv);	//get noise value

				clip(_NoiseThreshold - noise.rgb);				//discard pixel if too low
				
				return _CustomColor;							//use uniform colour
			}

			ENDCG
		}
	}
}

And some examples.

Advanced Loader


I spent the first part of this week working on a tool to let my game connect to the internet with less hassle. Unity lets us contact the internet using a WWWForm and either a WWW object or a UnityWebRequest, depending on the version. I found that I was writing a lot of the same code for each request. I wanted a set of options that I could define once and that would be used by all requests – unless specified otherwise. I also wanted to be able to send a request in a couple of lines and have functions that would be called when the request finished.
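
In practice a request ends up looking something like this. The callback and class names are placeholders, but the SetSuccessCB, SetErrorCB and Load calls are the same ones the Database class above uses:

using System.Xml;

public class NewsFeed {

    private AdvancedLoader loader;

    public void Fetch() {
        // Construct a loader with whatever default options you've defined.
        loader = new AdvancedLoader(0);

        // Hook up the callbacks, then fire the request.
        loader.SetSuccessCB(OnNewsLoaded);
        loader.SetErrorCB(OnNewsFailed);
        loader.Load("http://example.com/news.php");     // placeholder url
    }

    private void OnNewsLoaded(XmlNode data) {
        // Use the Xml however the game needs.
    }

    private void OnNewsFailed(NetError error) {
        // React to the failure.
    }
}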


The AdvancedLoader inherits from UnityEngine.Object so it can be created and destroyed in scripts, unlike a MonoBehaviour, which must be constructed with an AddComponent call. This means that each AdvancedLoader must be able to contact a MonoBehaviour to run its coroutines. As overused as singletons are in Unity, this seems the perfect opportunity for one: the AdvancedLoader will create a singleton to run its coroutines if needed.
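
The pattern is roughly the sketch below – a hidden MonoBehaviour singleton that any plain C# object can hand coroutines to. CoroutineRunner is a made-up name here, not the actual class in the package:

using System.Collections;
using UnityEngine;

// Hidden MonoBehaviour that plain C# objects can borrow to run coroutines.
public class CoroutineRunner : MonoBehaviour {

    private static CoroutineRunner instance;

    public static CoroutineRunner Instance {
        get {
            if (instance == null) {
                // Create the host GameObject on first use and keep it alive between scenes.
                GameObject host = new GameObject("CoroutineRunner");
                DontDestroyOnLoad(host);
                instance = host.AddComponent<CoroutineRunner>();
            }
            return instance;
        }
    }

    public Coroutine Run(IEnumerator routine) {
        return StartCoroutine(routine);
    }
}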

I also found myself wishing that UnityWebRequest’s cache operations would try the server first and only fall back to the cache if an error occurred. This was easy enough to implement once the proper error handling was in place. I also added a timeout module for cases on mobile where the cellular connection is not fast enough to reach the server. This produces its own error, and the number of attempts and the duration of each attempt are customizable and separate from other loaders.
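
The flow is roughly the coroutine below. This is a sketch against the plain UnityWebRequest API rather than the package’s internals, and the retry settings and the ReadFromCache helper are placeholders:

using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

public class ServerFirstLoader {

    public int attempts = 3;            // placeholder retry settings
    public int timeoutSeconds = 5;

    // Try the server first; fall back to the cache only if every attempt fails.
    public IEnumerator Load(string url, System.Action<string> onSuccess, System.Action onError) {
        for (int i = 0; i < attempts; i++) {
            using (UnityWebRequest request = UnityWebRequest.Get(url)) {
                request.timeout = timeoutSeconds;
                yield return request.SendWebRequest();

                if (!request.isNetworkError && !request.isHttpError) {
                    onSuccess(request.downloadHandler.text);
                    yield break;
                }
            }
        }

        // Server unreachable: use the cached copy if we have one.
        string cached = ReadFromCache(url);     // hypothetical cache helper
        if (cached != null) onSuccess(cached);
        else onError();
    }

    private string ReadFromCache(string url) {
        string path = System.IO.Path.Combine(Application.persistentDataPath, url.GetHashCode().ToString());
        return System.IO.File.Exists(path) ? System.IO.File.ReadAllText(path) : null;
    }
}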

I also incorporated the post data, used in the same way as the Unity solution but handled internally by the AdvancedLoader. This just helped to clean up the network code in the game classes; I no longer have to worry about the post data, knowing the AdvancedLoader will handle it properly.
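
The idea is something like the snippet below – the WWWForm is Unity’s standard class, but SetPostData is a placeholder name invented for illustration, not necessarily the package’s real method:

using UnityEngine;

public class InventoryRequest {

    public void Send(AdvancedLoader loader) {
        // Build the post data exactly as you would for Unity's own classes...
        WWWForm form = new WWWForm();
        form.AddField("playerId", 42);              // placeholder fields
        form.AddField("action", "getInventory");

        // ...then hand it to the loader. SetPostData is a placeholder name for illustration.
        loader.SetPostData(form);
        loader.Load("http://example.com/inventory.php");    // placeholder url
    }
}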

I’m using Xml as my format when transmitting data from the server. It frustrated me that I had to convert the text to Xml every time the loader came back; now the AdvancedLoader will return the data as Xml if it is requested as Xml. My OnGotData function expects an XmlNode instead of a string, letting me perform the game-level operations without having to convert text to Xml.
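
For reference, that conversion is only a couple of lines of the standard System.Xml API – nothing specific to the package:

using System.Xml;

public static class XmlHelper {
    // Parse the raw response text into an XmlNode for game-level code.
    public static XmlNode Parse(string text) {
        XmlDocument doc = new XmlDocument();
        doc.LoadXml(text);
        return doc.DocumentElement;
    }
}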

I have personal data on my game’s server, so it would be best if the sources could be verified. Currently, sources are verified using an MD5 hash, and can be authorised on a game-by-game basis.
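
The hashing itself is just the standard .NET API – a sketch of hashing a payload together with a shared secret, where the secret and the helper name are placeholders rather than the actual scheme:

using System.Security.Cryptography;
using System.Text;

public static class SourceCheck {
    // Hash the payload together with a shared secret; the server does the same
    // and compares the results. The default secret here is a placeholder.
    public static string Md5Of(string payload, string secret = "placeholder-secret") {
        using (MD5 md5 = MD5.Create()) {
            byte[] hash = md5.ComputeHash(Encoding.UTF8.GetBytes(payload + secret));
            StringBuilder builder = new StringBuilder();
            foreach (byte b in hash)
                builder.Append(b.ToString("x2"));
            return builder.ToString();
        }
    }
}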

There are some improvements I’ve thought of while deciding to release the AdvancedLoader source. The first is a PreLoad method: one loader fetches a batch of data from the server, and the data is then split up and saved to the cache under given urls using the AdvancedLoader’s own method, meaning that should a call be made to a PreLoaded url, the AdvancedLoader can use the cached version. This is to prevent overloading the local network when PreLoading lots of AdvancedLoader calls. The second is a concurrent loader manager: to circumvent the same problem, only a set number of AdvancedLoaders will be active at any one time, and the rest will wait in a queue.
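
A rough sketch of what that concurrent manager could look like – a design note rather than code from the package, with names like LoaderQueue and maxActive invented for illustration:

using System;
using System.Collections.Generic;

// Hypothetical manager that keeps only a few loads in flight at once.
public class LoaderQueue {

    private readonly int maxActive;
    private int active;
    private readonly Queue<Action<Action>> pending = new Queue<Action<Action>>();

    public LoaderQueue(int maxActive) {
        this.maxActive = maxActive;
    }

    // Each job receives a 'done' callback that it must invoke when its load finishes.
    public void Enqueue(Action<Action> job) {
        pending.Enqueue(job);
        TryStartNext();
    }

    private void TryStartNext() {
        while (active < maxActive && pending.Count > 0) {
            active++;
            Action<Action> job = pending.Dequeue();
            job(() => { active--; TryStartNext(); });
        }
    }
}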

While this package is pending approval on the Unity Asset Store, you can download it here. If you’ve any other suggestions for improvements, please let me know.

UI in VR

User interfaces in virtual reality are a problem yet to be solved.

VR, as the name suggests, further blends the digital and real worlds. The solution to user interfaces lies in the hinterland between digital and real user interfaces.

I’ve been thinking about this problem for a long time. The first games I played in the current era of VR were small mini-game-like programs that booted up from the desktop and went straight into the gameplay. These games were great experiments in a booming field, but whether the UI problem was not considered or was simply too big to solve, these developers chose to ignore it completely. As virtual reality forges its own path in the modern world, its users expect a far more slick and polished virtual environment. This UI problem won’t go away.

As we look back at previous digital user interfaces, we see buttons everywhere. These tools are the most common and effective form of interactivity in the history of digital development. In your browser now, you see the refresh button, the back button, the close button – even the bookmarks highlight themselves when the mouse rolls over them and appear to depress while the mouse button is down. There are two reasons this works. The first is that the mouse is represented on the screen as a pointer: a precise, manoeuvrable digital tool. We don’t have this in virtual reality. The second is that the animation of the button matches the action we are performing. As we press down on the mouse button, the digital button is depressed too. This helps us make the connection that we made this happen; it is a crucial piece of feedback.

This takes me on to real-world user interfaces. What are digital user interfaces trying to do? Navigate. Forward and back. Start game and settings. They are helping guide the user to the correct piece of the program. How do we do this in the real world? The folder system in computers has converted this to buttons, but its origins are clear: a filing system requires a person to navigate a room by reading signs and understanding their implications. But they must move in real-world space. Perhaps this holds the solution to our UI problem; perhaps a Stanley Parable-like journey through a series of doors and corridors would provide a simple menu where the user is not required to use buttons. The movement problem in VR is another problem for another post.

As the demand for virtual reality programs increases, many inventive and creative people have taken it upon themselves to solve this problem. Let’s split their solutions into three types: controller solutions, vehicular solutions and staring solutions.

Graffiti Simulator

This is a controller solution: the player controls the spray can and the UI with their bespoke VR controllers (e.g. Oculus Touch). It’s clear that this developer considered what the player would be doing in the game. This is a game about painting, so how do painters select which colour they’ll use? They hold a wooden palette in their off hand, and on it is the selection of colours they’re using for this painting. The developer has taken a digital painting palette and placed it on the player’s off hand. The player can access it easily, but it’s fixed to their hand and, therefore, anchored in the virtual world. This isn’t nearly as disorientating as having the palette fixed to the screen or having some permanent on-screen button that opens up the palette.

One area in which VR particularly excels is vehicular games: games where the player doesn’t move, but controls a vehicle that moves them. People are already familiar with this kind of interaction from driving cars. A popular game that employs this solution is Elite Dangerous.

Here the UI is still anchored in the game’s world: it exists on the dashboard of whichever spacecraft the player is piloting. To bring up a menu, they look at whichever side of the dashboard has the UI for what they want to do; the game controls then transfer over to UI controls and the player uses the UI as they would a console UI, using the joystick to scroll through buttons and select the one they’d like to press. This is pretty seamless, as people generally look at the thing they’re trying to use. It can be taken too far, however.

This brings me to the final solution. Even if you’ve only played a couple of VR games, it’s likely that you’ve come up against this design. In the history of game development, what comes first is the gameplay; the UI is made later, to fit around the game. This works for normal digital games because the computer, console or phone already has good ways to interact with traditional buttons. In VR, however, making the UI part of the game world is essential for an immersive UI experience. The staring solution is essentially making a choice by looking directly at your choice for a specified amount of time. This is not how people look at things: people use a combination of moving their heads and their eyes. This is what the controller solutions are really doing – they stand in for the player’s gaze by tracking their hands.
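
For reference, the staring solution usually boils down to a dwell timer like the hypothetical sketch below: a ray cast from the headset, and a choice confirmed only after the player has held their gaze on it for a set time.

using UnityEngine;

// Hypothetical gaze-dwell selector: stare at an object for dwellTime seconds to pick it.
// Attach to the VR camera so the ray follows the player's head.
public class GazeSelector : MonoBehaviour {

    public float dwellTime = 2.0f;      // seconds the player must hold their gaze
    private GameObject current;
    private float timer;

    void Update() {
        RaycastHit hit;
        // Cast a ray straight out of the headset.
        if (Physics.Raycast(transform.position, transform.forward, out hit)) {
            if (hit.collider.gameObject == current) {
                timer += Time.deltaTime;
                if (timer >= dwellTime) {
                    Debug.Log("Selected: " + current.name);
                    timer = 0f;
                }
            } else {
                current = hit.collider.gameObject;
                timer = 0f;
            }
        } else {
            current = null;
            timer = 0f;
        }
    }
}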

The best UI solution for your game depends on your game, but there are a few general rules. Fix your UI elements in the game world: whether it’s on the vehicle’s dashboard, on the player’s hand or in the player’s lap, designate a game-world space for your UI elements. If you need a main menu, make it consistent with the VR world – let players make their choices in a context that fits your VR world. Perhaps you don’t need to have your main menu in VR at all; players have already navigated to your game without VR, so maybe a ‘put your 3D glasses on now’ moment isn’t out of place. Just make it easy.