AWS for Games Blog

Implementing a Build Pipeline for Unity Mobile Apps

Game engines are a common choice for creating interactive content today. Unity, one of the most popular game engines, is widely used for mobile applications that run on smartphones. It is also used for console and PC games, as well as in the metaverse, for example for reality shows and megacity simulations.

Unity is used for applications beyond games, and these new use cases may not run on traditional game platforms, so being able to build for a wide variety of platforms is critical. To build multiple Unity applications efficiently, many developers have automated their workflows and reduced build times by creating their own build pipelines.

Benefits of Unity Build on AWS

An application build pipeline needs to run without interruption and to flexibly increase or decrease capacity across development phases. AWS provides the flexibility and scalability to meet these requirements, allowing customers to scale capacity to match their needs and compute demands. In addition, security services such as AWS Config and AWS CloudTrail can be used alongside the pipeline to increase the reliability of the system and protect your products.

AWS for Games has published a reference architecture describing a Unity build pipeline. By following the workshop “Creating a cost-effective iOS Unity build pipeline”, you can build an iOS application build pipeline automated by Jenkins, utilizing Amazon Elastic Compute Cloud (Amazon EC2) Spot Instances and EC2 Mac instances.

We have also created an AWS Cloud Development Kit (CDK) sample that reduces the manual work included in the workshop as much as possible and promotes automation. Additional features have been added to deploy more production-oriented services, such as Unity’s cache server.

Architecture

A sample implementing this architecture has been published on GitHub: Unity Build Pipeline with Jenkins and EC2 Mac.

This sample includes:

  • Jenkins controller on AWS Fargate
  • Jenkins agents on EC2 Linux Spot Instances and EC2 Mac instances
    • Docker agent is also available on Linux
  • Unity Accelerator (on Docker) for EC2 Linux
  • A mechanism for continuously warming up the build cache (described later)
  • Automated deployment by AWS CDK

You can try it out right away by deploying it to your AWS account. If you’d like to read along and see the concepts in action, this is a good opportunity to deploy the sample.

Architecture diagram.
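To give a feel for how these pieces fit together in CDK code, here is a minimal TypeScript sketch of just the Jenkins controller running on AWS Fargate behind an Application Load Balancer. This is not the published sample itself: the construct names, container image, sizing, and port are illustrative assumptions, and concerns such as persistent storage for JENKINS_HOME and agent connectivity are omitted.

```typescript
import { Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as ecs from 'aws-cdk-lib/aws-ecs';
import * as ecsPatterns from 'aws-cdk-lib/aws-ecs-patterns';

// Minimal sketch: a Jenkins controller as a load-balanced Fargate service.
export class JenkinsControllerStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    const vpc = new ec2.Vpc(this, 'Vpc', { maxAzs: 2 });
    const cluster = new ecs.Cluster(this, 'Cluster', { vpc });

    new ecsPatterns.ApplicationLoadBalancedFargateService(this, 'Jenkins', {
      cluster,
      cpu: 1024,
      memoryLimitMiB: 2048,
      taskImageOptions: {
        // Public Jenkins LTS image; the published sample may use a customized image.
        image: ecs.ContainerImage.fromRegistry('jenkins/jenkins:lts'),
        containerPort: 8080, // Jenkins web UI
      },
    });
  }
}
```

The agents, Unity Accelerator, and cache-warming mechanism listed above are deployed alongside the controller in the published sample.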

Practically speaking, a mechanism for managing Unity licenses may be necessary. Therefore, we have released an implementation and build procedure for a floating license management server for Unity on AWS. See the Unity Build Server with AWS CDK sample for details.

Design and implementation considerations

There are several considerations in implementing this architecture.

Optimizing EC2 instance costs

EC2 Mac instances are billed per second with a 24-hour minimum allocation period (as of February 2024), which makes them a good fit for workloads with a static load. If the load fluctuates dynamically, further cost optimization can be achieved by combining them with EC2 Linux instances. Below are some ideas implemented in this solution.

EC2 Linux instances can be used flexibly and inexpensively because they have only a 1-minute minimum charge and are available as Spot Instances. Since much of the processing associated with Unity build jobs can also run on Linux, total costs can be reduced by offloading that work from Mac instances.

Linux Spot Instances are inexpensive, so use many of them and distribute the work. Mac instances are pricier, so keep their number small and dedicate them to Xcode builds.

For example, when building an iOS client, the Xcode build must run on a Mac, but asset imports and Xcode project generation can be performed by Unity on Linux. By offloading such processing to Linux instances and optimizing capacity through dynamic scaling in and out, it is possible to keep build times short while keeping total costs down.

The figures below illustrate the time required for an all-Mac build (Case 1) and a combined Linux and Mac build (Case 2). With the same number of Mac instances, Case 2 completes faster because some processing is parallelized on the Linux instances. Achieving the same speed in Case 1 would require adding more Mac instances and parallelizing across them, which is not ideal because it increases costs. Linux instances, on the other hand, can scale capacity up and down more efficiently and cheaply, optimizing both build speed and cost.

Reduce total cost and build time by using EC2 Linux and EC2 Mac together.

Maintaining a large build cache

When EC2 instances are managed by Auto Scaling, they are stateless, which means data is not shared between build jobs. A cache produced by a build is kept temporarily on the instance, but if the instance is terminated by scale-in or a Spot interruption, the cache disappears. For this reason, simply building on stateless EC2 makes it harder to benefit from caching than conventional on-premises infrastructure, and build speed may suffer.

For example, in a Unity build, there are three typical examples of files that should be cached:

  1. Repository: Contains source code and raw asset files managed by a version control system
  2. Library directory: Contains a cache of assets imported by Unity
  3. Build output directory: Contains the generated build output

All of these are stored on the local file system and are referenced when pulling from the repository and running Unity builds. In each case, only file differences are processed, so if files from the previous build are still present, the build completes faster. This is why the cache is so important.

The problem of losing the cache on stateless instances can be solved by leveraging AWS features. We will introduce two methods in this article.

  1. Share and update cache with Amazon Machine Images (AMI)

The first method is to utilize an Amazon Machine Image.

An Amazon Machine Image (AMI) is the template specified when launching a new Amazon EC2 instance, and it includes snapshots of the instance’s Amazon Elastic Block Store (Amazon EBS) volumes. You can create an AMI from an instance that has already been launched: the instance’s EBS volumes are snapshotted, copied, and saved as the AMI. By continuously updating this AMI to carry the latest cache and reusing it on each build agent, the stateless instance group always holds the cache required to speed up the build.

Diagram of AMI concept.

In this sample it is assumed that all Unity builds run on Linux agents, so cache management is only considered for Linux agents. If for some reason you also need a cache on Mac instances, the same AMI-based method works.

In this system, the EC2 instances for the Linux agents are managed by an Auto Scaling group (ASG). The ASG automatically adds or removes instances according to the desired capacity, and each new instance is launched from the AMI specified in the ASG’s launch template. In other words, to change the AMI used by the Linux agents, you update the launch template and ASG settings.

EC2 instances managed by the ASG are launched from the AMI specified in the launch template.
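To make the relationship concrete, here is a minimal CDK (TypeScript) sketch of such a fleet: a launch template that points at a specific cache-bearing AMI, referenced by an Auto Scaling group of Spot Instances. The function name, AMI ID, region, instance type, and capacities are placeholders, not the sample’s actual values.

```typescript
import { Construct } from 'constructs';
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as autoscaling from 'aws-cdk-lib/aws-autoscaling';

// Minimal sketch: a Linux agent fleet launched by an ASG from a launch
// template that references a specific, cache-bearing AMI.
export function addLinuxAgents(scope: Construct, vpc: ec2.IVpc): autoscaling.AutoScalingGroup {
  const launchTemplate = new ec2.LaunchTemplate(scope, 'LinuxAgentTemplate', {
    machineImage: ec2.MachineImage.genericLinux({
      'ap-northeast-1': 'ami-0123456789abcdef0', // placeholder: AMI that already contains the build cache
    }),
    instanceType: new ec2.InstanceType('c5.2xlarge'),
    spotOptions: {}, // request Spot capacity with default options
  });

  return new autoscaling.AutoScalingGroup(scope, 'LinuxAgents', {
    vpc,
    launchTemplate,
    minCapacity: 0,
    maxCapacity: 10,
  });
}
```

Swapping in a freshly baked cache AMI then amounts to publishing a new launch template version and updating the ASG, which is exactly the process described next.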

To launch a new Linux agent with the local cache preserved, the following process should be performed:

  1. Create a new AMI from an instance with a cache in the file system
  2. Update the launch template to use the AMI that was created
  3. Update the Auto Scaling group to use the updated launch template

Flow of sharing cache between build jobs using AMI.

Furthermore, you can keep the cache fresh by running this series of steps on a regular schedule.
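As an illustration, the three steps could be scripted with the AWS SDK for JavaScript (v3) roughly as follows. The function name and parameters are placeholders, and the published sample’s actual implementation may differ.

```typescript
import {
  EC2Client,
  CreateImageCommand,
  CreateLaunchTemplateVersionCommand,
  waitUntilImageAvailable,
} from '@aws-sdk/client-ec2';
import {
  AutoScalingClient,
  UpdateAutoScalingGroupCommand,
} from '@aws-sdk/client-auto-scaling';

const ec2 = new EC2Client({});
const asg = new AutoScalingClient({});

// Steps 1-3 above: bake the cache into a new AMI, publish it as a new
// launch template version, and point the ASG at that version.
export async function rotateCacheAmi(
  cacheInstanceId: string,        // instance holding a warm cache (outside the ASG)
  launchTemplateName: string,
  autoScalingGroupName: string,
): Promise<string> {
  // 1. Create a new AMI from the cache instance (reboots the instance).
  const { ImageId } = await ec2.send(new CreateImageCommand({
    InstanceId: cacheInstanceId,
    Name: `unity-build-cache-${Date.now()}`,
    NoReboot: false,
  }));

  // Wait until the AMI is available before referencing it.
  await waitUntilImageAvailable(
    { client: ec2, maxWaitTime: 3600 },
    { ImageIds: [ImageId!] },
  );

  // 2. Create a new launch template version that uses the new AMI.
  await ec2.send(new CreateLaunchTemplateVersionCommand({
    LaunchTemplateName: launchTemplateName,
    SourceVersion: '$Latest',
    LaunchTemplateData: { ImageId },
  }));

  // 3. Point the Auto Scaling group at the latest launch template version.
  await asg.send(new UpdateAutoScalingGroupCommand({
    AutoScalingGroupName: autoScalingGroupName,
    LaunchTemplate: { LaunchTemplateName: launchTemplateName, Version: '$Latest' },
  }));

  return ImageId!;
}
```

Running a routine like this on a schedule (for example from an Amazon EventBridge rule) keeps the AMI, and therefore the cache, fresh.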

When implementing the above policy in practice, the following points should be considered:

  1. When creating an AMI, the target EC2 instance is restarted during creation by default. You can create an AMI without restarting, but this is not recommended because it doesn’t guarantee the integrity of the snapshot.
  2. You should create the AMI from an instance that doesn’t belong to the ASG. Otherwise, when the instance is restarted for the reason above, the ASG may treat it as unhealthy and terminate it.
  3. Instances used to create AMIs should not have build jobs running during creation. Creating an AMI while a Unity build job is reading or writing the cache may capture an inconsistent cache in the snapshot.
  4. A Spot Instance may be interrupted while an AMI is being created, causing the creation to fail. For this reason, a retry mechanism is essential (see the sketch after this list).
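Regarding the last point, the retry can be as simple as a generic wrapper around the whole create-and-rotate routine. The sketch below assumes the hypothetical rotateCacheAmi function from the earlier sketch; a Spot interruption during AMI creation simply surfaces as a rejected promise that triggers another attempt.

```typescript
// Generic retry wrapper; rotateCacheAmi is the hypothetical function from the
// earlier sketch, and the identifiers in the usage example are placeholders.
async function withRetry<T>(fn: () => Promise<T>, attempts = 3): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // Back off before retrying, e.g. to give a replacement Spot Instance
      // time to come up with a warm cache.
      await new Promise((resolve) => setTimeout(resolve, 30_000 * (i + 1)));
    }
  }
  throw lastError;
}

// Usage:
// await withRetry(() =>
//   rotateCacheAmi('i-0123456789abcdef0', 'linux-agent-template', 'linux-agents'),
// );
```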

The sample we have released includes an implementation that takes the points above into account. Please examine the actual implementation as well.

However, there is an issue with this approach: when an instance is started from an AMI, the blocks of its EBS volume are loaded lazily from the snapshot. As a result, increased I/O latency may be observed during builds immediately after startup. This can be addressed with the Fast Snapshot Restore (FSR) feature, which allows volumes created from a snapshot to deliver full performance immediately after startup. The blog article Addressing I/O latency when restoring Amazon EBS volumes from EBS Snapshots has details.
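If you do adopt FSR, enabling it on the snapshots behind the cache AMI can be automated as well. The following is a minimal sketch using the AWS SDK for JavaScript (v3); the function name and Availability Zones are placeholders. Note that FSR is billed per snapshot per Availability Zone for as long as it is enabled.

```typescript
import {
  EC2Client,
  DescribeImagesCommand,
  EnableFastSnapshotRestoresCommand,
} from '@aws-sdk/client-ec2';

const ec2 = new EC2Client({});

// Enable Fast Snapshot Restore on the EBS snapshots behind a cache AMI,
// in the Availability Zones where the build agents run.
export async function enableFsrForAmi(imageId: string, availabilityZones: string[]) {
  const { Images } = await ec2.send(new DescribeImagesCommand({ ImageIds: [imageId] }));
  const snapshotIds = (Images?.[0]?.BlockDeviceMappings ?? [])
    .map((mapping) => mapping.Ebs?.SnapshotId)
    .filter((id): id is string => id !== undefined);

  await ec2.send(new EnableFastSnapshotRestoresCommand({
    AvailabilityZones: availabilityZones, // e.g. ['ap-northeast-1a']
    SourceSnapshotIds: snapshotIds,
  }));
}
```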

  2. Use a pool of EBS volumes

With the AMI-based method described above, the decline in I/O performance immediately after instance startup cannot be ignored. FSR can be a solution, but it raises new concerns, such as increased cost and management of FSR credits.

Alternatively, you can create a pool of EBS volumes.

First, create multiple Amazon EBS volumes. Then, every time a Jenkins agent EC2 instance managed by the ASG starts, attach one of these volumes to it, and when the instance is terminated, detach the volume (being careful not to delete it). Place the Jenkins workspace on this volume; the workspace includes the Git repository, Unity’s Library directory, and the build output directory.

With this process, the EBS volume holds all the data you want to cache, and a newly launched EC2 instance starts with the cache already at hand. Unlike EBS snapshots, the data always resides on the EBS volume rather than in Amazon S3, so I/O latency does not increase immediately after startup. Additionally, there is no need to manage FSR.

Diagram of the EBS volume pool concept.
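As a sketch of what the attach step might look like when an agent instance boots, the following picks a random unattached volume from the pool and attaches it, using the AWS SDK for JavaScript (v3). The pool tag, device name, and function name are assumptions for illustration rather than the sample’s actual code.

```typescript
import {
  EC2Client,
  DescribeVolumesCommand,
  AttachVolumeCommand,
  waitUntilVolumeInUse,
} from '@aws-sdk/client-ec2';

const ec2 = new EC2Client({});

// On agent startup: pick a random available volume from the pool
// (assumed here to be tagged Purpose=jenkins-agent-cache) and attach it.
// Picking at random keeps the caches on all pool volumes reasonably fresh.
export async function attachPooledVolume(instanceId: string, az: string) {
  const { Volumes } = await ec2.send(new DescribeVolumesCommand({
    Filters: [
      { Name: 'tag:Purpose', Values: ['jenkins-agent-cache'] },
      { Name: 'status', Values: ['available'] },
      { Name: 'availability-zone', Values: [az] }, // volume must be in the instance's AZ
    ],
  }));
  if (!Volumes || Volumes.length === 0) {
    throw new Error('No available cache volume in the pool');
  }

  const volume = Volumes[Math.floor(Math.random() * Volumes.length)];
  await ec2.send(new AttachVolumeCommand({
    VolumeId: volume.VolumeId!,
    InstanceId: instanceId,
    Device: '/dev/xvdf', // mounted as the Jenkins workspace afterwards
  }));
  await waitUntilVolumeInUse(
    { client: ec2, maxWaitTime: 300 },
    { VolumeIds: [volume.VolumeId!] },
  );
  return volume.VolumeId!;
}
```

A real implementation also needs to handle races between concurrently launching instances (two agents choosing the same volume) and perform the detach when the instance terminates, for example with an Auto Scaling lifecycle hook.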

How should you determine the number of EBS volumes in the pool? A good starting point is to match the maximum capacity of the Jenkins agent fleet (the ASG). The ASG’s maximum capacity, in turn, must be configured so that the build queue is processed quickly enough to avoid jobs piling up and to meet your studio’s delivery timelines. Increasing the maximum capacity reduces the risk of queue congestion, but the trade-off is that the fixed cost of keeping EBS volumes provisioned increases. Consider this trade-off and set the optimal capacity for your use case.

Instructions for creating an EBS volume pool are also included in the sample implementation. Note that in the preceding figure, instances take volumes in order from the top, which may cause the cache on particular volumes to become skewed or outdated. To avoid this, it is a good idea to have each instance randomly select which volume to attach.

Using either of the two methods above, you can maintain and share the build cache across stateless instances. Alternatively, uploading and downloading the cache to Amazon S3 or Amazon EFS, as appropriate, can also be considered. Since each method has benefits and drawbacks, compare them according to your use case.

Summary

In this article, we have introduced an example of building a Unity build pipeline on AWS. This CDK sample will enable you to quickly create a Unity build environment on AWS.

Even when an iOS/Mac build is required, you can improve cost performance by utilizing Linux Spot Instances. When building on Spot Instances, there is a potential issue where unnecessary time is spent re-importing and rebuilding assets because the local cache is erased; in this sample we have proposed mechanisms to solve that issue.

One of the strengths of AWS is that it enables you to build systems with higher availability by combining Unity features with AWS features. We hope you will try the published samples and provide us with feedback.

Special thanks to Game Solutions Architects Nagata-san and Fujiwara-san, who cooperated in writing this article.

Masashi Tomooka
