Provisioning servers for a gaming stack is no small feat. In part one on provisioning, we covered operating systems (Windows vs Linux) and took a trip down the well-trodden path of basic configuration, but that barely scratches the surface!
In part two, we go deeper, getting into the nitty-gritty of configuring your server. We’ll cover this in three steps, to make things easier for you:
- Installing your Operating System (OS)
- Configuring your OS (updates and security)
- Application deployment and configuration
As with part one, we turn to Ben Alton, a DevOps Engineer at Multiplay, for his expertise and wisdom on this important area.
How to configure your server
So, you’ve got your machines and you’ve been told how you can access them. The next question is how you want to configure your server.
Again, choices everywhere and the answer is “it depends”.
One of the biggest factors in your decision will be how much you are willing to do manually and how often you are willing to do it. I will assume that you want to automate as much as possible and give you some options.
Similarly, will you be expecting to quickly spin up a new game server if your game really takes off? If so, how long do you want it to take to get any old server running your game?
Also – just because this isn’t complicated enough already – what will you be configuring? What applications? On what OS? How much did your provider do for you?
So many juicy factors for you to consider and it’s perfectly reasonable for you to change your mind over the lifecycle of your game.
For this section, I’ve split configuration automation into three parts. Depending on your provider and other tools, you might be able to skip the first two configuration steps. Also, just because I’ve described these steps separately does not mean they are distinct, atomic steps – you’ll likely use the same tool to automate configuring the host and the game server.
Step one – Low-level configuration – Installing an OS
Let’s assume your provider didn’t do anything too exotic for you – you asked for a box and you got a box. In which case, you’ll need to install the OS, configure your networking and set up your disks yourself.
Many providers (including cloud providers) will at least give you a machine you can access with SSH or Windows Remote Management (or Remote Desktop), which is enough for the next step.
For those that don’t, you’ll probably be given an IPMI (Intelligent Platform Management Interface) address so you can turn on the machine and “insert” an ISO like a disk.
Otherwise, here are a few technologies you can consider to get you somewhere:
- PXE (Preboot eXecution Environment) – If you are willing to set up a PXE server, you could use it to automatically install an OS for you. If you invest some time, you can create custom images that can run a script on boot (e.g. something that can resize the partitions on the disk automatically).
- Canonical’s MaaS (Metal-as-a-Service) – Allows you to treat a bunch of physical machines as a pseudo-cloud-like environment. It provides a PXE server and everything it needs so that you can quickly get started. Canonical even provides some Ubuntu and CentOS images for you to use. You can configure scripts to be run on boot without touching any OS images and set up the networking and disks through an intuitive web interface.
- Cobbler – In much the same vein as MaaS, it’s a PXE server and more. Again, you can quickly get Cobbler doing something useful. Some important functions of Cobbler are manipulated through the command line, so ensure your team understands how to use a console. On the plus side, Cobbler can be plugged into other management technologies like Terraform, so if you’re looking for a single cohesive stack, it’s definitely something to look at.
There are several others and it’s up to you to investigate what’s right for you and your team.
Often, the PXE-based offerings will require that the PXE server and your new machine be on the same network. As such, you might find yourself with complicated VPNs or multiple satellite PXE servers.
If you’ve only got a handful of servers, it might not be worth investing too much time and effort with automation at this level.
Step two – Initial host configuration – Updates, security and OS tweaks
So, your box is online and you can access it remotely over SSH or Windows Remote Desktop. It’s time to get your machine up-to-date and ensure that only you can log into it.
Again – some providers (including most cloud providers) will do at least some of this for you. Likely, they’ll ask you to upload an SSH key for Linux or specify an Active Directory endpoint for Windows. If so, you can skip to the next step.
If not, you’ll have to start with some basic host configuration…
First up is how you want to securely and conveniently access your server.
If your provider gave you a username and password to a box that’s accessible to the Internet, it’s probably a good idea to log in quickly and get that password changed.
For Linux (and Unix), you’ll want to log in, upload an SSH key or configure SSHD to authenticate by your chosen means. It’s still worth changing the password, just in case some opportunist finds a way to run arbitrary code in your game server and also manages to dig through your provider panel.
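To make that SSH step concrete, here’s a minimal sketch. It hardens a scratch copy of sshd_config so it’s safe to dry-run anywhere; on a real host you’d point it at /etc/ssh/sshd_config and reload sshd. The commented-out commands and the hostname are placeholders for your own setup.

```shell
#!/bin/sh
# Sketch only: we harden a scratch copy of sshd_config so this is safe
# to run as-is. On a real server, target /etc/ssh/sshd_config instead.
set -eu

harden_sshd() {
    cfg="$1"
    # Key-based auth only, and no direct root password logins.
    sed -i \
        -e 's/^#\{0,1\}PasswordAuthentication.*/PasswordAuthentication no/' \
        -e 's/^#\{0,1\}PermitRootLogin.*/PermitRootLogin prohibit-password/' \
        "$cfg"
}

# Dry run against a scratch copy:
cfg=$(mktemp)
printf '#PasswordAuthentication yes\n#PermitRootLogin yes\n' > "$cfg"
harden_sshd "$cfg"

# On a real host, upload your key first, then apply and reload:
#   ssh-copy-id user@your-server
#   ssh user@your-server 'sudo systemctl reload sshd'
```

The order matters: lock out password logins only after you’ve confirmed your key works, or you’ll be straight back to the provider panel.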
If you’re desperate for a desktop environment on your Linux machine, you might want to install and configure a VNC server (which will replicate Windows’ Remote Desktop).
Windows Servers are a bit more fun – you might be expected to log in over RDP (Remote Desktop Protocol). If you have WinRM (Windows Remote Management) available, use that. If you don’t have WinRM, your first step is probably “Configure WinRM”. Almost all provisioning tools that remotely configure Windows will do so over WinRM.
I’m going to gloss over secure authentication with Windows Servers for brevity – there are numerous options and they’re not simple enough to be described here.
Regardless of OS, once you’re happy you can log in, it’s usually worth checking for updates.
Quick note on installing updates on Windows – you can’t (easily) do that over WinRM due to security constraints. But you can use WinRM to schedule a script to start in a few seconds and have that script install updates. Alternatively, there are a few workarounds to this ‘feature’ that you can find with a quick online search.
Here are some initial options for you to think about:
- Hashicorp’s Packer – Packer is an open-source, lightweight tool for connecting to machines and running provisioning tasks. It can run simple scripts or invoke configuration tools like Chef or Puppet. While powerful and convenient, its configuration files can be a tad daunting and verbose. More info on Packer.
- Ansible – Ansible is an open-source provisioning/configuration/deployment tool. You can write a playbook in YAML (YAML Ain’t Markup Language – it’s like JSON, but more human-readable) to connect to a host and run some scripts or perform some setup. Sadly, while you can configure Windows with Ansible, you cannot run Ansible on Windows (or, if you do, it’s not a supported configuration) – so, if you’ve got a lot of Microsoft fans in your team, best look elsewhere. More info on Ansible.
- Shell/PowerShell – Can’t go wrong with the basics, right? Obviously, if you’re using Shell, you’ll have a hard time configuring Windows and, if you’re using PowerShell, you’ll have a hard time configuring anything but Windows. Nonetheless, if you want something lightweight and powerful, this option will tick both boxes. More info on PowerShell.
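If you do go the plain-script route, here’s a sketch of the shape these first-boot scripts tend to take on Linux. The package commands assume Debian/Ubuntu (swap in your distro’s equivalent), and nothing destructive runs unless you call `update_os` yourself; the sysctl value is purely illustrative. The key habit to pick up is idempotence – these scripts get re-run, so every step should be safe to repeat.

```shell
#!/bin/sh
# Sketch of a first-boot configuration script. Package commands assume
# Debian/Ubuntu; adapt for your distro.
set -eu

# Bring the OS up to date, non-interactively.
update_os() {
    export DEBIAN_FRONTEND=noninteractive
    apt-get update -y
    apt-get -y dist-upgrade
}

# Idempotent helper: append a line to a file only if it isn't already there.
ensure_line() {
    line="$1"; file="$2"
    grep -qxF -- "$line" "$file" 2>/dev/null || printf '%s\n' "$line" >> "$file"
}

# Example: pin a (made-up) sysctl tweak. Running it twice writes it once.
ensure_line "net.core.somaxconn=1024" ./sysctl-game.conf
ensure_line "net.core.somaxconn=1024" ./sysctl-game.conf
```

Tools like Packer or Ansible effectively give you `ensure_line` and friends for free, which is a big part of why they’re worth learning.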
At Multiplay, we use Packer a lot. Not just in host configuration, but also to create custom cloud images in Google Compute Engine and Amazon Web Services. As such, we have skills in Packer already and a lot of the things we do to create cloud images are things we want to do to provision a bare-metal server.
We have a couple of Shell/PowerShell scripts to work around Packer’s weaknesses or to achieve a couple of very simple tasks.
Step three – Application deployment and configuration – Making your server a game server
Your machine is secure, up-to-date and ready for action – now it’s time to run some games.
At the same time, it might be worth configuring your machine to plug into your monitoring and logging infrastructure. This is something we explored earlier in the series (part one and part two on Monitoring).
How you install your game and configure it will depend massively on the game and your infrastructure. As such, this section is a splash of things to consider.
As your servers increase in number, you might need to think not only about how to configure them, but how to keep them configured correctly.
Ensuring things stay configured
Just because you configured it once, doesn’t mean it’s going to stay configured. Sometimes, installing a later version of a package will overwrite the configuration file. Sometimes, a human will manually apply some changes to test or debug something. Sometimes, a background task that you didn’t even realize existed will come along and helpfully update something for you. Regardless of how something changed, the question is how you want to handle it.
One solution is to control package updates and access to machines so that nothing happens that you aren’t aware of.
Another solution is to use a Configuration Management Tool (CMT) – for example, Puppet or Chef. Previously, I mentioned Ansible as a way to perform OS-level tweaks, but it can happily dance to the music at the application-level as well.
Many CMTs have a client-server architecture where you set up a master (or multiple masters for resilience and scale) and install an agent onto your game server machines. Generally, in this architecture, the agents report to the master about the state the machine is in and the master tells the agent what to do to bring it in line with some desired configuration.
One benefit of the client-server style is that you have one place to go to update your desired configuration or to see which machines are up-to-date.
One downside to this architecture is that you need to set up a master (or controller or configuration server or whatever it’s called). This will be a machine that’s not hosting games – for security reasons, at least. It could be that multiple machines need to take on this configuration server role, depending on how many machines you have to control.
Then you need to think about how you set up this configuration server, which leads to a nice bootstrapping problem; you want your configuration server to be able to configure your configuration server. This isn’t an impossible problem to solve by any means; many CMTs make servers easy to install to mitigate this conundrum.
One more thing: security – if you’re going to have configuration management servers, make sure only your machines can connect to them. The last thing you need is some opportunist hooking up to your configuration source and getting a free copy of your game.
Another option is to run CMTs in an agentless architecture. While tools like Chef and Puppet are typically client-server, you can just install the agent program and tell it to apply configuration locally. You can even add a script to apply this configuration every few hours.
This does reduce some of your problems, but it adds its own: you no longer have one place to see how up-to-date your machines actually are. Also – how do the machines get any new configuration?
Again, not hard problems to solve – especially with adequate monitoring and a file server (or cloud bucket) – but problems to be aware of nonetheless.
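A scheduled check-and-apply script of the kind described above might be sketched like this. The file-server URL, the paths and the `puppet apply` call are all placeholders for whatever your own setup uses; the live part of the sketch is just the “has the bundle changed?” check.

```shell
#!/bin/sh
# Sketch of a scheduled check-and-apply script for agentless configuration
# management. The URL, paths and `puppet apply` call are placeholders.
set -eu

# Succeeds (exit 0) when the freshly fetched bundle differs from the one
# we applied last time, i.e. when a re-apply is needed.
needs_update() {
    fetched="$1"; current="$2"
    [ ! -f "$current" ] || ! cmp -s "$fetched" "$current"
}

# What a real run could look like (commented out so the sketch is inert):
#   tmp=$(mktemp)
#   curl -fsSL "https://files.example.com/puppet-config.tar.gz" -o "$tmp"
#   if needs_update "$tmp" /opt/puppet/current.tar.gz; then
#       mv "$tmp" /opt/puppet/current.tar.gz
#       tar -xzf /opt/puppet/current.tar.gz -C /opt/puppet
#       puppet apply /opt/puppet/manifests/site.pp
#   fi
```

Pair this with a line of reporting to your logging infrastructure after each apply and you’ve recovered most of the visibility a master would have given you.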
Going deeper down the rabbit hole, you could push your configuration to your machine and apply it immediately. If this interests you, then Ansible might be your salvation – instead of using long-running configuration servers, you run Ansible on a ‘controller’ (which can be your local machine) that decides what to do to the machine you’re deploying.
One of Ansible’s strengths is that you can safely (and securely) deploy a machine from within a private network (as long as it can make a TCP connection to the target machine) – nothing goes onto the new box that doesn’t need to be there.
Just like Puppet can be run agentless, you can also run Ansible in a client-server architecture – if that sounds like fun, check out Ansible Tower.
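To give a flavour of the push-style Ansible approach, here’s a hypothetical minimal playbook. The host group, paths and file names are all made up – treat it as a sketch of the shape, not a drop-in config:

```yaml
# site.yml – hypothetical example; host group, paths and files are made up.
- hosts: game_servers
  become: true
  tasks:
    - name: Ensure the game directory exists
      ansible.builtin.file:
        path: /opt/mygame
        state: directory
        mode: "0755"

    - name: Deploy the game server binary
      ansible.builtin.copy:
        src: files/mygame-server
        dest: /opt/mygame/mygame-server
        mode: "0755"

    - name: Template the server configuration
      ansible.builtin.template:
        src: templates/server.cfg.j2
        dest: /opt/mygame/server.cfg
```

You’d run it from your controller with something like `ansible-playbook -i inventory site.yml` – and because the connection is plain SSH, nothing Ansible-specific has to live on the game server itself.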
So there you are – many tools exist to handle this sort of problem. Even within a single tool, you can stick to the recommended architecture or forge your own path.
At Multiplay, we use Puppet, but we don’t bother with Puppet Masters (the configuration servers). We deal with both Linux and Windows game servers, so a tool that works well on both keeps things simple for us. We have a scheduled script that runs every few hours and checks a file server for a later version of the Puppet configuration, downloading it if it doesn’t match what’s on the local disk. Another script applies Puppet and reports the outcome (including a timestamp of when the Puppet configuration was built) to our logging infrastructure.
We have had issues with Puppet Masters at scale, so we avoid them. Another reason why configuration servers aren’t too useful for us is due to our flexibility – we spin up or tear down machines within minutes; it’s perfectly possible to see our inventory shift by hundreds over a few hours. Keeping a configuration server happy with many new nodes and keeping it up-to-date can be a headache.
Of course, Puppet Masters can scale to several thousand nodes, but it’s far easier for us to not bother.
Configuring servers isn’t easy and the fact that there are numerous methods, tools and services for it is evidence of that.
Even if you find what seems to be a perfect solution for you, I would recommend you keep your eyes open for new technologies. When new team members join or we source new hardware providers, we usually spend some time re-investigating our stack.
What I hope you take away from this is that you have options; do whatever you’re comfortable with – there’s no single “correct” way to tackle this problem. And of course, it’d be remiss of me not to say that, if you get stuck with this stuff, give your friendly game server specialists Multiplay a call!
This is part eight of our ongoing Essential Guide to Game Servers, which includes:
- Patching a live game server
- Game server player density: tips and tricks to keep costs down
- Monitoring game servers – part one
- Monitoring game servers – part two
- Matchmaking – part one
- Matchmaking – part two
- Provisioning – part one
- Provisioning – part two