Insight into our Infrastructure

The reason for this post is to give interested readers some insight into the infrastructure our gameservers run on. I have tried to keep it as easy to understand as possible, which turned out to be harder than I expected. I hope you gain some basic understanding, even if you don’t have much experience.

A visualized overview of our hosts running gameservers and other applications.

All our gameservers and other services run on dedicated (bare-metal) servers (our hosts). The reasons for this are performance and pricing. Hosted solutions like the ones offered by Nitrado are expensive, and while there are cheap virtual servers out there, in many cases they don’t offer enough performance to run gameservers at higher tick rates or with many players. Think of a virtual server as a server with shared resources: you share the hardware with other users, and if their load is high, your gameserver application might not get enough resources, which harms your gaming experience. All of our servers run some Linux distribution.

How I started running gameservers

When I started hosting the first Crinis servers, I manually set up a dedicated server and installed and configured everything necessary to secure it. Then I installed the Counter-Strike gameserver applications and their dependencies. This was a time-consuming task. Just think of updating every gameserver regularly as they got more and more numerous. Updating a plugin or a configuration meant updating it for every single gameserver. I also had to configure every new host I added, and as the number of hosts grew, I had to keep their configuration in sync by hand. Another major flaw was the use of hard-drive space: every gameserver brought quite a lot of files with it, taking up multiple gigabytes.

Trying to automate everything

I wrote a LOT of scripts to automate as many tasks around the hosts and gameservers as possible. I also reduced the amount of hard-drive space used by employing hard links, which let software operate on the same files from different locations. Still, it was far less automated and fail-proof than I wanted it to be. A website requires a webserver running and responding to requests from users; this has to be configured, and an environment tuned for running game servers might not be well suited to also run a web server. Another major flaw was security: all applications ran on the same host, so if one was hacked, the others were vulnerable. This is usually solved by running every application inside a virtual machine, but a virtual machine is a full environment that has to be configured and consumes a lot of hard-drive space.
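To give you an idea of the hard-link trick: both directory entries below point at the same data on disk, so the multi-gigabyte game files are stored only once. The paths are made up for illustration.

    # Link a large shared game file into a second server directory
    # instead of copying it; the data exists on disk only once.
    ln /opt/csgo-base/csgo/pak01_000.vpk /opt/csgo-server1/csgo/pak01_000.vpk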

Where we are now

Ansible

So I continued looking for better solutions, as my goal is to automate as much as possible. The first thing to automate was the deployment and configuration of the hosts themselves. I use Ansible for that, which allows me to write a configuration file (called a Playbook) and a few scripts as a one-time job. Ansible then executes them on every host I add to it, and when I change the host configuration, my changes get applied to every host. My Playbook takes care of configuring security, automated package updates and logging.
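Here is a minimal sketch of what such a Playbook could look like. The host group, package and file names are made up for illustration; my actual Playbook is more extensive.

    # playbook.yml - applied to every host in the "gameservers" group
    - hosts: gameservers
      become: true
      tasks:
        - name: Install automatic security updates
          apt:
            name: unattended-upgrades
            state: present
        - name: Deploy hardened SSH configuration
          template:
            src: sshd_config.j2
            dest: /etc/ssh/sshd_config
          notify: restart sshd
      handlers:
        - name: restart sshd
          service:
            name: sshd
            state: restarted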

Docker

This is nice, but it doesn’t really help me organize the game- and webservers. Docker enables users to run a whole application in a so-called container. To the application, the container simulates a full environment including the operating system. This environment is defined in a Dockerfile that is used to build something called a Docker image. All dependencies, the environment and the gameserver application itself are installed and configured once and provided as a Docker image. This image is persisted in something called a registry and can be pulled (downloaded) anywhere you want. When I update an image, I can pull the updated image on all servers, and old containers are recreated using the new image. A container also mitigates security issues: if an application inside a container is hacked, the attacker still has to break out of it to affect other containers running on the host.
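The whole lifecycle boils down to a few commands; the image and registry names here are placeholders.

    # Build the image once from a Dockerfile and push it to the registry...
    docker build -t registry.example.com/csgo-base:latest .
    docker push registry.example.com/csgo-base:latest

    # ...then pull and run it on any host.
    docker pull registry.example.com/csgo-base:latest
    docker run -d --name csgo-1 registry.example.com/csgo-base:latest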

CS:GO servers using Docker

Docker images can be used by multiple containers at once. This removes all my hard-drive space problems, as I can run 10 CS:GO gameserver containers from a single image. Images can also build on each other: if I edit a file in one image, that file is also available in every container using an image that is built on top of the changed one. I created multiple Docker images that build on top of each other, so a change to one image is reflected in every image that builds from it. The first image contains all the files for the CS:GO server itself and its environment. A second one adds SourceMod. Further images add game-mode specific configuration and plugins. Every gameserver obviously needs some kind of unique configuration like its hostname, port, etc. To accomplish this, environment variables are added to every container on its deployment.
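A sketch of how one of those layered images and a deployment could look; the image names, paths and variables are hypothetical.

    # Dockerfile of a game-mode image: it builds FROM the SourceMod image,
    # which in turn builds FROM the base CS:GO image.
    FROM registry.example.com/csgo-sourcemod:latest
    COPY surf-cfg/ /home/csgo/server/csgo/cfg/

    # Deploying one unique server from the shared image via environment variables:
    docker run -d \
      -e SERVER_HOSTNAME="Crinis Surf #1" \
      -e SERVER_PORT=27015 \
      registry.example.com/csgo-surf:latest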

There are already images available containing all kinds of software. Other people have created images for webservers and even for very complex software like Elasticsearch, which used to be quite difficult to set up. Images are either maintained by yourself or by other Docker users around the world.
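Running such a community-maintained image is a single command; the version tag below is chosen for illustration.

    # Pull and start an Elasticsearch container from the public Docker Hub image.
    docker run -d --name search -p 9200:9200 elasticsearch:5.6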

Jenkins helps me automate tasks like building Docker images and deploying containers.

Now everything is packaged inside containers. But images still have to be maintained, and containers using an updated image have to be redeployed. I use Jenkins to automate these tasks: if there is a CS:GO update, new images get built at night and containers get recreated on all hosts using the updated images. Jenkins itself also runs inside a container.
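The nightly job is essentially a scheduled build-and-push. A sketch of what Jenkins could execute; the names are illustrative.

    # Triggered by Jenkins every night: rebuild the base image so it
    # contains the latest CS:GO update, then push it to the registry.
    docker build --no-cache -t registry.example.com/csgo-base:latest .
    docker push registry.example.com/csgo-base:latest
    # Afterwards every host pulls the new image and recreates its containers.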

Networking

There are still some important problems left:

  • deploying/recreating containers requires me to connect to each host
  • sometimes applications need to communicate with each other (gameserver -> database)
  • I have to keep track of which host specific containers are deployed on and teach them how to reach each other
  • traffic between applications is usually not properly secured

Orchestration

This is where orchestration comes into play. There are multiple orchestration solutions available; I’m currently using Cattle, but I also have experience with Kubernetes and Swarm. Orchestration tells our applications and their containers where other containers live and how they can communicate. It also enables secure, encrypted communication between containers and their applications, as it sets up a network around them and encrypts the traffic in it. Now I can just tell my gameserver application what the database application is called (DNS gets resolved) and it will find the database anywhere. The orchestration also makes sure that all my containers are healthy and running, and recreates them if necessary. There are a lot more tasks that orchestration can solve.
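Plain Docker already illustrates the naming idea on a single host; the orchestration extends it across hosts and encrypts the traffic. A sketch with placeholder names and credentials:

    # Containers on the same network resolve each other by name via DNS.
    docker network create game-net
    docker run -d --network game-net --name db \
      -e MYSQL_ROOT_PASSWORD=secret mariadb:10
    docker run -d --network game-net \
      -e DB_HOST=db registry.example.com/csgo-surf:latest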

A visualized overview of how the containers are connected to each other.

A neat UI (Rancher) on top of the orchestration. It shows where my containers are running and if they are healthy.

This is only a brief and incomplete overview of what is running in my small cloud; more specific posts about other topics might follow. I hope it has given you some more insight into what is running behind the gameservers.
