Designing game content architectures

As game budgets expand once more, the success of a title often depends on producing large amounts of high quality content. This is not a trivial task. Mistakes setting up your content plans can easily result in panic, shipping delays, scope cuts, rework and crunch. Modern developers live on the content treadmill so we might as well embrace it. 

For a long time I’ve been interested in content architectures, the tooling and data structures behind what content we make. This somewhat obscure topic drives many of the production efficiencies available to a team. A poor content architecture can easily result in an equivalent player experience costing 10 times as much time and labor. That’s the difference in output between a 30 person team and a 300 person team; a lot of money and human life to naively misspend.

Who this is for

  • Producers: Anyone above the level of associate producer should know this topic down cold. To paraphrase the words of designer Crystin Cox, “I want to be able to ask a producer whether I should use a placeholder or a vertical slice when building an experience.” To a large degree this is your job, since these early decisions drive much of the team’s ability to deliver on a schedule and adapt to unexpected changes.
  • Designers: If you decide what the team makes, you owe it to them to also understand the best possible methods of building the desired outcome. Design leaders maximize the impact of the experience they deliver while working within a fixed budget. 
  • Engineers: You’ll be building many of these tools and pipelines. Wouldn’t it be nice if they were useful? Wouldn’t it be nice if other disciplines could communicate their needs? Knowing how to think about serving content authors improves the game, improves your work and results in happier cross-team relationships.

What we’ll cover

Content architectures are a broad topic best approached holistically. Existing content architecture experts are usually veteran developers who have multiple games and dozens of failures under their belt. Unfortunately that means this essay needs to spend time introducing the basics before we get to the more advanced considerations. Apologies for the slow build!

  1. Terminology: Basic definitions of what we are manipulating in a content architecture.
  2. Concepts: Key concepts that help us think about our content.
  3. Constraints: What are the specific content choices for a given project that shape our architecture?
  4. Basic architectural patterns: How might we organize our content?
  5. Advanced patterns – Manual composition: How do we manage rigidities in the content pipeline?
  6. Advanced patterns – Automated composition: How do we reduce rigidities with automation?
  7. Meta – Tool authoring: How do we build tools that multiply our authoring efforts?

Terminology

Let’s start off with some basic definitions that work for most forms of content you’ll run into. I’m abstracting the discussion away from specific content (levels, characters, textures) so we have the building blocks to think conceptually about any content in our game. We want to get to “content algebra”, instead of always asking “how many apples does Bob have?”

Content: Content is an authored set of data intended to be displayed in some broader game system and consumed by the player to create a meaningful experience. More traditional forms of content include things like a chapter of a book, or a painting in a museum. 

We often think of game content as generic media like 3D models or text. And that’s definitely where we spend a lot of effort. However, each game also contains data files like loot tables, level progressions or powerups. These need to be designed with care.

Chunks: Traditionally, content comes in the form of content chunks. A chunk is a piece of the player experience that is standardized and reproducible. Examples of chunks include:

  • Level: A game like Super Mario Bros has discrete levels. Each level is self-contained and consists of a set of platforms, enemies and win conditions. A game is then composed of multiple levels.
  • Player character skin: A bundle composed of 3D model, UVs, textures, shaders, animation rigs and state machines. The player has a choice between multiple skins. 
  • Weapon: A set of properties for weapon damage, fire rate and cooldown, as well as associated art, economic costs, etc.
  • Player buff: A set of modifications applied to an external set of properties, along with constraints on when and how the buff is triggered.

The contents of each chunk differ. Weapon A has different data than Weapon B. But the data structure and how that data feeds into other systems is shared. 
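As a minimal sketch of that idea, here is what a weapon chunk might look like as data in Python. The specific fields (damage, fire_rate, icon and so on) are hypothetical, not taken from any particular engine:

```python
from dataclasses import dataclass

@dataclass
class WeaponChunk:
    """A standardized chunk: every weapon in the set shares this structure."""
    name: str
    damage: int
    fire_rate: float  # shots per second
    cooldown: float   # seconds between bursts
    icon: str         # path to associated art

# The data in each chunk differs, but the structure the game systems consume does not.
rusty_sword = WeaponChunk(name="Rusty Sword", damage=5, fire_rate=1.0, cooldown=0.5, icon="art/sword.png")
short_bow = WeaponChunk(name="Short Bow", damage=3, fire_rate=2.0, cooldown=0.2, icon="art/bow.png")
```

Downstream systems (combat, UI, economy) are written once against the shared structure, which is what makes the set of weapons cheap to grow.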

Standards: Standards are rules and constraints that define a content chunk. They help reduce risk by removing unexpected variability and associated thrash. They help improve quality by focusing an author on excelling at a particular well-defined space. They help improve efficiency by eliminating common blockers and streamlining workflow. They help teams scale, by allowing multiple authors to work coherently on the same project. 

Sets: These chunks are organized into sets. You might have 20 levels in a game. That’s your game’s set of levels. Or 500 barks. That’s your game’s set of barks.

Composition: Chunks can be assembled together into new composite chunks. A level chunk is a composite of enemies, level tilesets, powerups and other modular elements. The level designer likely did not create any of these sub-components, but they put them together to form a unique player experience. 

Composition is a creative act of authoring. Someone needs to make deliberate choices on what is included and its relations with the other elements. Even a writer composes words they did not create on the page. A painter composes color they did not create on a canvas. 

Dependencies: When we split content up into chunks and string them together in a content architecture, we create dependencies. In order for content to work or have meaning, it requires that other content or systems are functioning exactly as expected. The act of creating chunks always creates dependencies, since there’s a fuzzy line for where content wants to reside. Standards help catalog and isolate dependencies. Later on, we’ll see that many of the tools for managing content architectures are about structuring dependencies in a useful manner.

Questions worth asking about your game: We often don’t take the time to think of ‘words’ or ‘color’ as standardized chunks. They are just the invisible air we breathe. The first step is eliminating our blindness to the ‘intentionality of the default’. As a content architect you need to expand your perspective and see these elements as explicit design choices.

  • What are your chunks?
  • What are their standards?
  • What are their sets?
  • How are they composed?
  • What are their dependencies?

Concepts

In order to design a content system, it helps to have a mental model of how content ‘works’. Here are some of the big picture rules of content authoring.

Content delivers value: We author and deliver works of art to players in order to provide them with meaningful experiences. We can build content that harms or wastes a player’s life. Or we can build content that enriches their life. 

Content is consumed: Content can be experienced a certain number of times before players feel like they understand the experience and are ready for something different. Some content becomes a touchstone for an ongoing socio-economic player ritual, but most is used and then put aside. The player exhausts their motivation to return to the content. 

Consumption is iterative. Players experience a chunk of content 1 to N times before they move on. Chunks that are experienced once and then discarded are seen as Highly Consumable. Ones that can be experienced many times without being discarded are seen as Evergreen.

Authoring is iterative. How does an author deal with the uncertainty inherent in a diverse audience’s consumption of the content?  You iterate. You deploy the content and observe the reaction of those consuming it. Then you revise the content and test once again. 

At the most basic level, authors do this with themselves in a process called ‘self playtesting’. They switch between a creation state and a consumption state. With writing and painting this happens moment-by-moment in a tight iterative loop. For example when writing, the following happens thousands of times:

  • I write a word
  • Then immediately read what I wrote and react. 
  • Then I revise. 

Games have longer feedback loops than many forms of media. As we author, we can imagine in our minds how it might play out, but our existing skills and understanding of the game systems pollute our empathy. Some systems, like multiplayer, economies or long-term progressions, have large play surfaces and yield surprising results. Self-playtesting ends up being unreliable. So we need to rely on much less frequent cycles of playtesting with others.

Authored content exhibits varying degrees of leverage: Leverage is a measure of efficiency, how much value the content delivers relative to its cost. 

  • Leverage = Meaningful contribution to the player experience / Sum of total authoring and tooling costs (a small worked sketch follows this list)
  • High leverage: An evergreen piece of content (such as a National Anthem that took hours to write and is used millions of times over hundreds of years) is high leverage.
  • Low leverage: A comic in a book that took days to draw, but is viewed only once and then forgotten is considered low leverage.
  • Other factors: The full cost/benefit structure includes the cost of setting up the toolchain, the amount of content you make and the content pipeline everything needs to flow through. We’ll talk about these more in the Constraints section below.
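To make the ratio above concrete, here is a tiny worked sketch. The numbers are invented purely for illustration; in practice both sides of the ratio are rough estimates rather than precise measurements:

```python
def leverage(player_value_hours: float, authoring_hours: float, tooling_hours: float) -> float:
    """Leverage = meaningful contribution to the player experience / total authoring and tooling cost."""
    return player_value_hours / (authoring_hours + tooling_hours)

# An evergreen chunk: cheap to author, experienced over and over.
anthem = leverage(player_value_hours=100_000, authoring_hours=8, tooling_hours=0)

# A one-off set piece: expensive to author, consumed once and forgotten.
single_use_comic = leverage(player_value_hours=0.5, authoring_hours=24, tooling_hours=8)

print(round(anthem), round(single_use_comic, 3))  # 12500 vs 0.016: several orders of magnitude apart
```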

Leverage is a useful concept used in planning, but understand that it is inexact. Once content hits an audience, they may choose to elevate what the author thought of as a minor element to evergreen status. There are scenes from a comic like Calvin and Hobbes that were just as expensive to create as any other scene, but their resonance with the audience turns them into a much greater experience. 

Building content architectures involves an upfront cost: You need to pay for tooling. And for learning the tools. And for iterating on standards for your content. This is all before the team has authored any shippable content chunks.

Traditional marginal media content costs are mostly linear. Once you’ve standardized on a chunk of writing, video or imagery, there are few meaningful economies of scale. The cost to create one comic panel is roughly the same as the cost to create a similar panel 100 pages later. Most efficiencies occur by descoping standard chunks and cleverly interweaving low cost chunks with high cost chunks. 

In games, we can create non-linear content architectures: Content architectures can introduce non-linear leverage into the process of content creation, such that each additional hour of author labor yields a richer player experience than if we had naively been making traditional content.

Diagram 1: Stages and costs of content chunk creation

This graph helps visualize trade offs. 

  • A – Tooling Complete
  • B – Initial learning and prototyping cost paid, first content chunk created. 
  • C – Break even on your fancy content pipeline. This is the first time all your work has a net benefit relative to just manually creating content piecemeal.
  • D – Exhaustion sets in. Additional meaningful content is expensive because the player gains less value from each additional chunk of this type of experience. 

Constraints

Let’s say your goal is to create a high leverage content architecture. The first place to start is by understanding your constraints. I couch these primarily as questions a team needs to answer, since the answer will vary substantially based on the project. You’ll need to consider both sides of the leverage equation: 

  • What is the cost of designing, building and testing the content?
  • What is the effectiveness of the content?

Cost – Prototyping: The goal here is answering the question “Would this imagined content deliver the experience we desire and how?” 

  • What are your goals for this type of content?
  • How long will it take you to establish and explore the playspace limits for a particular class of content chunk? 
  • What is the risk that this prototyping effort won’t pay off?
  • What are lower risk fallbacks if the prototyping fails to pay off? 

Cost – Standardization: You need to create standards that eliminate edge cases and prevent the creation of weak content. This step is not free and is often ignored.

  • How long will it take to create easy-to-communicate standards for the prototyped content? 
  • How does the content fit into the content pipeline? 
  • What tools are required to achieve desired efficiencies? 

Cost – Iteration count on each chunk during production: Iteration is also not free and is commonly ignored. 

  • How many implementation->playtesting->feedback iterations are required before the content chunk is polished and ready for the player?

Cost – Iteration speed: The speed of iteration typically determines how many iterations you can fit in. In my experience, the quality of content is directly correlated with the number and frequency of polishing iterations.

  • How long does it take to iterate on a chunk? Consider author iterations, where the author is testing based off their own playtesting perceptions. And also consider external iterations. 
  • How can tooling be improved to speed up iteration?

Cost – Human resources: Each iteration heavy process needs to be designed, tested, optimized and mastered by living human beings operating at human-speed not computer-speed. 

  • How many people across various disciplines does it take to make this chunk?
  • How long does it take people to master the creation of a chunk?

Cost – Technology: All that data only works because it hooks up into code. 

  • What is the cost of the tech that supports the content?
  • Can you reuse or extend existing code when you add a new content use case?
  • What sort of dependencies and rigidities do certain tech choices create?

Cost – Game systems: Game play is a complex interlocking system of game mechanics and associated feedback loops. The content expresses and explores the playspace created by these systems. 

  • What is the base cost of the game mechanics the content feeds?
  • How much content, and of what types, do the game systems need in order to be fun?
  • How many game systems need to be in place before you can test the validity of the content?
  • How long does it take to balance the content across the various systems in order to test it?

Cost – Communication: As you add more people, their interdependence often increases the need to talk through design intent and issues. Hand-offs can be expensive or sources of blockage. 

  • What are the hand-offs?
  • How do you make the hand-offs as efficient as possible? Where are blockages, delays or backlogs occurring?

Cost – Risk of failure: No creative undertaking has a certain outcome. Risk is converted directly into a cost in the form of rework or needing to implement an alternative design. For any specific class of content, you might not pay the cost, but over time the project as a whole will pay a higher cost for higher risk content.

What is higher risk content? Time and resources are two factors that have a giant impact. But the factor I’ve found most predictive is the past experience of the individual or the team. An experienced team will often know how much time and resources they need. An inexperienced team will be too busy exploring what they don’t know to budget effectively. 

In order from lowest risk to highest risk:

  1. Content you’ve successfully made many times. 
  2. Content you’ve made 1 to 2 times. 
  3. Content similar to something you’ve made before.  
  4. Content that has clear playable examples in another game, where you seek to copy identical functionality.
  5. Content inspired by something someone has made, but that has not been demonstrated.
  6. New content that has no direct analogue. 

Note that there is both individual risk and team risk when talking about experience. If a task involves lots of people and they have not worked together before, they have a much higher risk of failure even if an individual contributor successfully worked on a similar project in the past. 

One might think this sort of risk spectrum results in cookie cutter content. But that is not necessarily so, especially with smaller teams. A style of content produced by someone who has spent years working on an uncommon set of skills will often be lower risk than that same person trying to copy a popular style of content. Always consider the fit between creative skills and content, not just popularity or examples. 

  • What is your team good at? What are they experienced at?
  • What content standards can you borrow from other projects?
  • What is the risk of failure for this chunk? Does it fit in a portfolio of risk?
  • Are your fallbacks if a prototype fails lower risk?

Cost – Late Revision: Only at the end of the project does the team start getting high volumes of quality player feedback. With live games, the bulk of the critical feedback will happen long after launch. So now you’ll need to update key load bearing chunks of content. What was ‘finished’ needs to be opened up, rebalanced, revised or completely redone. 

Late revision is particularly problematic for games-as-a-service. If your initial launch is even slightly successful, the title will spend the majority of its life undergoing constant revision. The rigidity that you bake into content becomes a major constraint on the cost of future updates and whether or not your team can sustain the project. You live with it forever. Teams who only know single player games struggle here and need to reevaluate most of their assumptions. 

  • What does it cost to change a chunk of content after it is finished and tied into all other dependencies? What does it cost to replace it?
  • What does it cost to change a set of content? What does it cost to replace it?

Diagram 2: Design insights happen throughout the schedule not just at the beginning. 

Diagram 3: If your content pipeline is not amenable to late stage changes, you’ll fail to capitalize on most of your design insights.

Total marginal chunk cost: So there are lots of costs that go into making a chunk of content. Be sure to honestly measure and summarize these. Blindly insisting on an optimistic fantasy helps no one. 

  • After paying prototyping and standardization, what does it really take to call one additional chunk of content ‘finished’? Include iteration cost, human resource cost, communication cost. 
  • In the cursed wail of every team edging like Zeno towards the finish line, what is the true cost of calling content “done”? 

Effectiveness – Load bearing: We now can talk about the other side of the leverage equation. Let’s start with how some content is more important than other content. A game has pillars made of key experiences that it needs to deliver in order for it to be successful. This is the heavy weight of player, publisher and market expectations. Various mechanical systems and content support those pillars. Those that bear the most weight and would hurt the game most if they failed are considered “load bearing”. 

It is also worth identifying content that is “non-load bearing”. These are places where you can use lower cost content. You might reuse existing content or apply generic purchased assets. Alternatively, you can use the fact that non-load bearing content is low risk in order to experiment and be playful. I often find some non-load bearing chunks like item descriptions and inject them with my quirkiest writing. Or give authoring of this content to someone who is learning. If this content fails, the game won’t fail.  

  • What are the pillars of your game? 
  • What content is critical to supporting those pillars? 
  • Is a particular type of content load bearing? Or is it non-load bearing?
  • What is the fallback if this content doesn’t deliver on its promise?

Effectiveness – Optimal set size: No practical system is scale free. On one hand, you want the set size to be as high as possible in order to maximize the prototyping investment. On the other hand, standardized content chunks fade in effectiveness over time. There is often less marginal utility to a player as they experience the 200th level compared to the 1st level. And if you are crazy enough to make a 5000th level, the utility can turn negative. Some players start to see the patterns behind your standardization and will ignore or resent non-meaningful variation.

  • What is the size of the playspace this content addresses? Is it small? Is it large?
  • What is the sweet spot for set size where each chunk of content remains distinct and meaningful to the player? 

Effectiveness – Resonance with real player motivation: This should fall out of the exercise of determining if content is load bearing, but it is worth treating as its own thing. The best content helps players fulfill their deepest intrinsic motivations. When content and systems support the various factors of self-determination theory, we see increased retention, engagement and player satisfaction.

  • Does the content facilitate competence? Does it help the player learn skills? Or feel a sense of growth?
  • Does the content facilitate autonomy? Does it help the player feel like they’ve chosen their path? Does it help them express their identity?
  • Does the content facilitate relatedness? Does the content connect the player with others who support them? Does it enable reciprocation loops that deepen relationships?

Basic Architectural Patterns

Now that you’ve got a bunch of knowledge about what type of content you need to make, you need to build the system that helps you make that content. Here are some techniques I think about when building high leverage content architectures. 

Take these with a grain of salt. I find that as a team gains experience in a domain, they develop new tools and vocabulary custom tailored to the tasks at hand. So I encourage you to set strong constraints and then deliberately grow your team’s ability to experiment with and iterate on more efficient tools. 

Each of the following tools will likely take your team a full game or two to start to understand and master.

Lego blocks: Embrace composition by building player facing experiences out of highly reusable standardized content chunks. Consider a non-lego block design like early graphical adventure games. Every pixel on the screen was hand placed. Every interactive puzzle was hand-scripted. Deep in the code there were common structures, but there was very little modularity or reuse. 

Consider a game like Super Mario Bros. The world is composed out of standard block types, standard enemy types and standard player moves. Tiles are placed on a grid so their relationship to one another is highly predictable. The cost to create a screen of a Mario game is much less than the cost to create a screen of an adventure game. (Thankfully, no one measures gameplay by the screen any longer!)

Modular blocks intended to be composed together are not limited to tiles. In the puzzle game Road Not Taken, each object was built out of a stack of standardized behaviors. A block might have the ability to be pushed. Or it might have another ability to slide if pushed. Or it could break. Or duplicate itself. Or move on its own. And by mixing and matching a relatively small number of these lego-like behaviors, we built out dozens of distinct objects. 

  • What are the legos of your game?
  • What pieces of your games are not standardized building blocks? How might you turn them into reusable legos?
  • How do your legos snap together to build interesting compositions?

References: Lego blocks usually use referencing, where there is a master object stored in some central location and instances of that content are used in the composition.

You may store instance-specific properties. There’s a trade-off here. In general you want to specify as few instanced properties as possible, since global late revisions that touch 1,000 instances are expensive. It is better to store the bulk of the behavior on the master, so that you can make a change in one central location and the change happens everywhere. However, some instanced properties let you adapt the instance to the current context.

  • What properties should be on the master?
  • What should be on the instances?
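Here is a minimal sketch of the master/instance split, using a hypothetical enemy chunk (none of these names come from a real engine). The bulk of the behavior lives on the master; instances store only their placement and a deliberately small set of overrides:

```python
from dataclasses import dataclass, field

@dataclass
class EnemyMaster:
    """The master chunk, stored in one central location and edited in one place."""
    name: str
    max_health: int
    move_speed: float
    attack_damage: int

@dataclass
class EnemyInstance:
    """An instance placed into a composition; only context-specific overrides live here."""
    master: EnemyMaster
    position: tuple
    overrides: dict = field(default_factory=dict)  # keep this as small as possible

    def get(self, prop: str):
        # Instance overrides win; everything else resolves back to the master.
        return self.overrides.get(prop, getattr(self.master, prop))

goblin = EnemyMaster("Goblin", max_health=30, move_speed=2.0, attack_damage=4)
boss_room_goblin = EnemyInstance(goblin, position=(12.0, 4.5), overrides={"max_health": 60})

# A late revision to goblin.attack_damage now propagates to every placed instance automatically.
```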

Templates: As you compose structures using your reusable chunks, you discover that there are some patterns you repeat again and again. Certain sub-elements might shift around, but there’s a recognizable boilerplate structure you keep needing to rebuild. To minimize work, use templates: reusable structures with blanks where the author fills in details.

Consider rooms in a Diablo-like game. There was a set of templates that defined each room. During level generation instances of the rooms would be plunked down and connected with hallways. However, inside each room a subset of different objects or enemies might appear. So even though there were standardized, reusable templates, each instance of the room felt different. 
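A minimal sketch of the idea, with an invented room template. The boilerplate structure is fixed; the blanks are filled per instance, so the same template yields rooms that feel different:

```python
import random

# A reusable room template: fixed structure plus "blanks" to fill in per instance.
ROOM_TEMPLATE = {
    "size": (9, 9),
    "doors": ["north", "south"],
    "enemy_pool": ["skeleton", "zombie", "rat_swarm"],
    "treasure_pool": ["gold_pile", "potion", "nothing"],
}

def instantiate_room(template: dict, rng: random.Random) -> dict:
    """Stamp out one room: shared boilerplate, per-instance variation."""
    return {
        "size": template["size"],
        "doors": template["doors"],
        "enemies": rng.sample(template["enemy_pool"], k=2),
        "treasure": rng.choice(template["treasure_pool"]),
    }

rng = random.Random(42)
rooms = [instantiate_room(ROOM_TEMPLATE, rng) for _ in range(3)]  # same template, three different rooms
```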

  • What are your common reusable patterns? Can you turn those into templates?
  • Which elements in those patterns can be varied in order to provide players with meaningfully different experiences?

Decoupling: As we’ve discussed, splitting content into chunks and assembling them into compositions creates dependencies. Dependencies aren’t always bad. References are a form of dependency where instances depend on the existence of their master. However there are many dependencies that increase both initial content creation cost and future iteration costs. 

For example, recently we built a quest that required you to purchase an ingredient (onions!) from the store. The contents of the store were defined in chunks of data. While the quest asking for store items was defined in a totally different chunk. If the store didn’t have onions, the quest was not completable. Which just so happened to break the entire game. 

This showcases some common issues with dependencies. 

  • Difficult to spot: It wasn’t obvious looking at the quest that there was a dependency on the store. The quest config said nothing at all about where you get an onion and it was only by sorting through the entire config system that we found the connection. I call this content pattern “Chunnel Design” after the famous tunnel that goes under the English Channel. They dug the tunnel from both the French side and the English side with plans to meet up blindly in the middle. If either effort had been off, the tunnel wouldn’t have connected.
  • Expensive to fix: Instead of making a change in one location, we needed to make a change in multiple locations. With tangled dependencies, this can get quite expensive. In one project, we had to update 5 separate locations to get an item to show up in the store. A five tunnel chunnel. 🙂 
  • Ambiguous ownership: The quest wasn’t able to specify anything about how a player gets the onion. And the store had no idea that someone might want the onion. Neither piece of content was responsible for making sure that the desired experience was delivered to the player. Even if we did fix the issue, it wasn’t clear we fixed it in the right spot. And the next time we fixed a similar issue, we might make a different decision. Which leads to edge cases and more unexpected problems later on. 

Decoupling at the most basic level is the process of eliminating unnecessary problematic dependencies. 

  • What dependencies are helping speed up authoring?
  • What dependencies are slowing down iteration?
  • Can you remove these costly dependencies?
  • Can you explicitly state dependencies in your data so they are obvious upon inspection?
  • Can you give ownership of the experience to fewer chunks, instead of spreading it across multiple chunks?
  • Can you add automated validation so you are instantly alerted when dependencies break?
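As one sketch of that last question, here is what an automated check for the onion problem above might look like. The config shapes are invented for illustration; a real project would run something like this on every data export or in CI:

```python
# Hypothetical configs: quests request purchasable items, the store declares what it sells.
STORE_CONFIG = {"sells": ["carrot", "potato", "radish"]}
QUEST_CONFIG = [
    {"id": "soup_quest", "requires_purchasable": ["onion"]},
    {"id": "stew_quest", "requires_purchasable": ["potato"]},
]

def validate_purchasable_dependencies(quests, store) -> list:
    """Return a readable error for every quest item the store cannot actually supply."""
    errors = []
    for quest in quests:
        for item in quest["requires_purchasable"]:
            if item not in store["sells"]:
                errors.append(f"Quest '{quest['id']}' needs '{item}', but the store does not sell it.")
    return errors

for error in validate_purchasable_dependencies(QUEST_CONFIG, STORE_CONFIG):
    print(error)  # catches the missing onion before a player ever does
```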

Content pipelines: As you start to engage with both composition and decoupling, you start splitting complex content into stages of work. Early stages of work, composed of templates and referenced masters, feed into later stages of composed instances. Each stage has its own required tools, processes for ingesting data from previous stages and processes for exporting data to subsequent stages. Put it all together and you’ve got a directed graph called a content pipeline.

Diagram 4: Sample content pipeline

A content pipeline might involve the following three sub-pipelines of character art, terrain art and behavior code feeding into a finished level. Notice that various content chunks pass through multiple stages in a fixed order across many tools in order to create the final output. 

Directed pipelines have some interesting properties

  • Stages are composed in a fixed order: This ensures reproducibility of results. Selecting the right order is a big design choice that impacts your content production schedule. I often think of this as “up pipeline” and “down pipeline”. Changes at base stages cause ripple effects down the pipeline (see the sketch after this list). Changes down the pipeline have fewer later-stage dependencies, but have a linear cost per change, which can be very expensive if the surface area of content at the end of the pipeline is large.
  • Manual composition: Order matters so much because often the earliest pipeline stages are created and locked down. Then subsequent stages are built on top and the earlier stage is never changed. In platformers, designers build a chunk of player movement with locked jump distances. And then the layer of level design is built on top of this. Manual composition creates strong dependencies. Changing or replacing a locked stage invalidates the later stages. If those stages (such as hand-crafted levels) took time to build, naive changes to earlier stages can cause immense project thrash. Managing the scheduling of locked stages is one big reason why we have producers and those miserable gantt charts. There are tricks to get around this issue, such as using stubbed-in dummy data or placeholders. We’ll talk more about that below.
  • Automated composition. One way of reducing these dependencies is to automate the composition process. Procedural generation is one form of this. The rooms in a rogue-like are placed via an algorithm. If the rooms get bigger, that constraint is passed up to the next layer and the hallways connecting the rooms adapt accordingly. Unlike manual composition, the author can then make a change on almost any stage and the end content is rebuilt automatically. (Photoshop was so transformative because it pioneered automated layer composition in the visual arts) 
  • Content at each stage can be referenced: Each layer is defined in a master chunk and instanced. 
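To make “up pipeline” and “down pipeline” concrete, here is a minimal sketch of a pipeline as a directed graph. The stage names are invented; the point is simply that a change to an early stage dirties everything downstream of it, while a change to a late stage dirties nothing but must be repeated per chunk:

```python
# Hypothetical pipeline stages mapped to their upstream dependencies.
PIPELINE = {
    "character_art": [],
    "terrain_art": [],
    "behavior_code": [],
    "enemy_chunks": ["character_art", "behavior_code"],
    "tilesets": ["terrain_art"],
    "levels": ["enemy_chunks", "tilesets"],
}

def downstream_of(stage: str, pipeline: dict) -> set:
    """Every stage that must be rebuilt (or manually reworked) if `stage` changes."""
    dirty, frontier = set(), [stage]
    while frontier:
        current = frontier.pop()
        for candidate, deps in pipeline.items():
            if current in deps and candidate not in dirty:
                dirty.add(candidate)
                frontier.append(candidate)
    return dirty

print(downstream_of("behavior_code", PIPELINE))  # {'enemy_chunks', 'levels'}: base changes ripple down
print(downstream_of("levels", PIPELINE))         # set(): late changes ripple nowhere, but cost is paid per level
```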

Automated composition + referenced chunks offers immense leverage by reducing the cost of authoring iteration. A content author can compose multiple layered compositions. And if late changes need to be made to even base layers, it is less of an issue.

Observation – Non-linear leverage appears in how you build the pipeline: What we are seeing here is a key truth. Non-linear leverage in your content architecture rarely comes from how you structure your base chunks. Instead it appears in how you build the composition of those chunks. In my experience, the more you can move into hierarchies of composition, the more leverage is available. This introduces its own complexity and cost so it isn’t a silver bullet.

Advanced Patterns – Manual composition

Sadly, it is rare that we can apply automated composition to every composition process in the pipeline. Anywhere there is manual composition, the order in which elements are created matters. This presents some challenges:

  • How do you schedule work so the right stuff is complete before the next stage needs it? 
  • How do you reduce the cost of making mistakes?

There are some common strategies. Any or all of these can be mixed and matched. 

Vertical Slice: Build out a representative segment of the final content at full fidelity, test it to verify validity. Then meticulously lock down standards for each pipeline stage. In production, build content to these standards and trust that the end result will deliver on the promise of the vertical slice. 

Issue – Slow iteration: However, building the vertical slice is expensive and leads to slow iterations. Imagine building out a whole level with complete mechanics and final art, discovering it doesn’t work and then throwing that away. I think of it as “building the game five times.” More often than not, teams get into the second or third iteration and are canceled.

Issue – Bureaucracy: Another issue with vertical slices is that they put immense pressure on the standards. The standards must be perfect and they rarely are. The answer is often more documentation, a tendency of large bureaucracies and large teams where waste is common. Due to rigidities in the system, change — when it does occur — is often a destructive coup or pogrom. Vertical slices are very common in AAA.

Bottoms up design: Identify the most core “up pipeline” stages. Prototype them. Test them. Ensure they are fun. Polish them to a high degree of fidelity.

Now lock down that element of the design. Then move onto the next stage of the pipeline that builds on the locked down stage and repeat.

For example, if you are building a platformer, build, polish and lock down the most perfect jumping you can create. Then build a small level with blocks based off jumping so your game grows like an onion from the innermost layers. When you hear the advice “Focus on a fun core mechanic” it is usually a sign of bottoms up design. 

Issue – Highly systemic games: An issue here is that many games require multiple interlocking systems to be in place before you know the game is fun. Consider a game like Animal Crossing. It certainly has central mechanics like chopping trees and running around. But (having just worked on a game in this genre) until the economy, narrative, pacing, affordances, inventory and other minigames are all in place, the game is desperately unfun.

Issue – Late stage changes: The other issue is again one of managing late stage changes. If you discover that you screwed up an aspect of the core gameplay early on, it can be expensive to pay the cost of that change rippling out across all the dependent layers of the content pipeline. An MMO (Age of Conan) baked the timing of their attacks into their female character animations. When community playtesting suggested they needed to speed these up, it was an expensive fix. The early assumptions baked into the content architecture bit them. 

Placeholders: Build a vertical slice of your game, but fill it with low fidelity placeholder content. This lets you test the game quickly and identify issues. And since the placeholder content is relatively cheap to make, throwing it away doesn’t destroy your budget. As you become more confident of the validity of the work, you start refining and polishing. 

This pattern shows up in all sorts of areas

  • Paper prototyping: Mechanical content
  • Grayboxing: Spatial content
  • Wireframes / Storyboards / Animatics: Sequential content
  • Concept Art: Visual content

Placeholders can be used with either vertical slices or bottoms up design and they inherit most of the same issues. Bottoms up design often results in piecemeal prototypes that don’t really tell you how the final game will play. Vertical slices still result in a lot of throw away work, but since you are using placeholders, iteration is much less expensive. 

A version of the vertical slice + placeholder that I’m intrigued by is the “playable skeleton”. With this strategy, you create a full version of the game that is playable end-to-end as inexpensively as possible. And then you perform subsequent polishing passes until the game reaches a shippable state. Thimbleweed Park was built using a similar technique with a full playable version of all game rooms complete and iterated on before final art was added. 

Issue – ignorant stakeholders: A common issue with placeholders is that stakeholders do not have the critical sophistication to understand what is placeholder and what is final. Games have been canceled when an executive looked at a graybox level and wondered why this game they are spending millions on is so obviously ugly. Many teams end up with a secret rule to only show their publishers near final art and claim it is placeholder. The risk of getting that one ignorant person is too high for honesty. And education can be an impossible lift. 

Issue – weak player affordances and feedback: Players also don’t always understand placeholders. There’s an art to picking comprehensible placeholders that work well in a playtest; abstract boxes and colors are almost never the right answer. Instead go for lower fidelity content that is still thematically and symbolically representative. If you are supposed to be petting a dog, use a picture of a dog. You’ll learn important lessons iterating on the right affordances and feedback even in a prototype.

Scaffolding systems via value anchors: The challenge of cheaply validating systemic designs is unsolved. It is common, even when using vertical slices or playable skeletons, to spend months (or years!) in the dark valley of faith as various systems slowly come online.

For example, in order to test a crafting system, you need to build the crafting system, a UI, add the crafting content, add sources for that content, balance the sources, balance the crafting costs and finally anchor the crafted items to a functional purpose within the broader game. Even if you build the base crafting functionality quickly, the other elements take a lot of time and effort to coalesce. 

One approach is to stub in value anchors early in production. This is usually a large sink that’s easy to build but still gives purpose to the various content systems. By building the anchor first, you have something to judge the activities against. Later you can still add secondary activities and more nuanced anchors.

Some examples: 

  • When you prototype an RPG, you can create a player level that is fed with XP. Then you can have various activities like combat feed into XP. Player levels feed back into power, which in turn allows tackling of harder monsters. Later you can add additional skills, enemies and resources that expand the system. But you’ll always have something playable from early on (sketched after this list).
  • Animal Crossing has a large sink in the form of paying bells to upgrade your house. Whatever activity you do results in items that can be sold to generate bells. This creates a simple skeleton to slowly add more activities, more resources and ultimately more player goals. 
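Here is a minimal sketch of the RPG example above: a single XP/level sink that any prototype activity can feed, so there is always something playable to judge new systems against. All numbers and names are invented:

```python
class PlayerProgress:
    """A stub value anchor: every prototyped activity feeds XP into one central sink."""

    def __init__(self):
        self.xp = 0
        self.level = 1

    def grant_xp(self, amount: int, source: str):
        self.xp += amount
        # Crude level curve (100 XP per level): just enough to give other systems a purpose.
        while self.xp >= self.level * 100:
            self.xp -= self.level * 100
            self.level += 1
        print(f"{source} granted {amount} XP -> level {self.level}")

progress = PlayerProgress()
progress.grant_xp(40, "combat")    # early activities hook into the anchor immediately...
progress.grant_xp(80, "crafting")  # ...so each new system can be judged against it as it comes online
```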

Anchors are a bit tricky to get right because they aren’t purely mechanical. They are about setting up systems of value and tie into deep player motivations. The reason upgrading the house in Animal Crossing is interesting is not because of the mechanics of upgrading! It is because the house holds your decorations and items, which in turn act as a signal of identity, progress and status. In our Animal Crossing-like Cozy Grove, we built a prototype that had upgrading your ‘house’ without the decorating aspects. It didn’t anchor player value at all.

Advanced Patterns – Automated Composition

There are also content architectures that open up when you enable automated composition. This is an exciting open area ripe for additional experimentation and research. I expect over the next decade or two, we’ll see a steady adoption of content architectures with various forms of automated composition. Here are a few ideas that I’ve found helpful to get you started. 

Thinking of procedural generation as an authoring helper: Broadly, many of our existing tools in this space are termed “procedural generation”. But this field has problematic roots. 

Researchers and new proc gen developers look for magical algorithms that provide fountains of surprising new content. Like old cranks searching for perpetual motion machines, they hope to one day crack the problem of an infinite experience generator. It is very much the perspective of an engineer who is not an artist but still wants to magically create without learning art. Though certain machine-learning efforts show promise, I personally have no interest in this particular philosophical approach.

Instead, I look at procedural generation entirely as a tool for high leverage content:

  • How does it make the content creator more efficient? 
  • Does your content author understand the tool?
  • How can they create richer content that resonates with players? 
  • How can they reduce iteration time?
  • How can they decrease the pain of late changes?

It is these last two areas where procedural generation techniques shine. A good automated composition pipeline allows designers to make changes at most stages and have those changes flow through into the end experience with little to no manual rework.

However, procedural generation has a very real upfront cost. You need to think abstractly about your content and how it is assembled. And build all the tooling for those specialized chunks. And then build the automation that assembles them. This can cost many multiples of just building a single content chunk manually. Long term, you accumulate benefits in terms of cheaper iterations, but it is rarely clear that the initial investment was worth it.

Technique – Combinatorics: Do you need 1000 chunks of content in a set? If so, the cost of making that content is often high. And the post-release cost of changing that set is likely high as well. 

One technique is to split your desired content into sub-chunks that are arranged in orthogonal sets. And then use combinatorics to generate an expanded set of final content that covers a wider surface.

For example, in our game Cozy Grove, we have shells on the beach. This is split up as follows:

  • Shell type: This is a small set of 6 basic types like clam, conch, whelk, starfish, cowrie, coral. Each of these chunks contains a set of properties for image, price, chance of spawning. 
  • Shell season: This set contains 4 seasons and color variations across those seasons. It also contains filtering information so shells don’t spawn in the wrong season. 
  • Shell rarity: A set of five rarities. Each contains modifications to chance of spawning and price, plus information about which bitmap to use.
  • Master shell definition: This tells how these 3 orthogonal sets are to be combined. It also contains any properties shared across all shells, like behaviors or dusting value. 

Once each of those is defined, there’s an automated composition step that combines them all together to generate 120 (6 * 4 * 5) expanded variants. This also provides us with non-linear leverage where adding one new shell type adds 20 new shells to collect. 
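A minimal sketch of that expansion step, with simplified stand-ins for the real Cozy Grove data (the property names, prices and modifiers are invented):

```python
from itertools import product

SHELL_TYPES = ["clam", "conch", "whelk", "starfish", "cowrie", "coral"]
SEASONS = ["spring", "summer", "fall", "winter"]
RARITIES = {"common": 1.0, "uncommon": 1.5, "rare": 2.5, "epic": 4.0, "legendary": 8.0}
BASE_PRICE = {"clam": 10, "conch": 14, "whelk": 12, "starfish": 18, "cowrie": 25, "coral": 30}

def expand_shells() -> list:
    """Automated composition: three small orthogonal sets expand into the full shell set."""
    shells = []
    for shell_type, season, (rarity, price_mult) in product(SHELL_TYPES, SEASONS, RARITIES.items()):
        shells.append({
            "id": f"{season}_{shell_type}_{rarity}",
            "price": round(BASE_PRICE[shell_type] * price_mult),
            "spawn_season": season,
        })
    return shells

shells = expand_shells()
print(len(shells))  # 6 * 4 * 5 = 120 variants; adding one new shell type adds 20 more
```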

Issue – Bowl of Oatmeal: Combinatorics make it trivial to create what Kate Compton calls Bowls of Oatmeal: vast amounts of content that are neither perceptually unique nor differentiated. Players will tend to latch onto patterns shared across your spread of content and filter out non-meaningful variation. The infinite yet weakly differentiated worlds of No Man’s Sky are one example.

There are a few techniques I’ve found useful here. 

  • Choose smaller set sizes that don’t trigger player exhaustion. Small, highly differentiated sets are often much better than large undifferentiated sets. If you split your playspace up too finely, you get oatmeal.
  • Use cheaper content like names to obscure the rote nature of combinatorial expansion. One thing we do for shells is give every combination of season and type a unique name. That’s only 24 names and took very little time. And concatenating “rarity + 24 unique names” results in strings that feel unique. 

Technique – Chocolate Chip Cookies: Another composition pattern is to mix high fidelity setpieces into a low cost substrate. You can think of your templated setpieces as chocolate chips. Players love them, but if they are repeated too often, players burn out on consuming them. So they must be used sparingly. And the substrate they are embedded in is the cookie dough. Pleasant, filling, endlessly edible. But not very unique or interesting.

Individually, these two types of content have flaws. The dough is low cost, but also results in bland experiences. The chocolate chips are high cost and overly consumable. But they provide great peak moments. By creating a pacing structure so that just as players are getting bored of the dough, they encounter a chip, the value of both can be extended. 

In rogue-likes, you author setpieces in the form of rooms and boss encounters. And then you embed those in levels composed of randomly generated hallways and generic rooms. Just when you are getting tired of slogging through endless corridors, you see a magical unique room that changes the rest of your run. 
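A minimal sketch of that pacing structure: interleave a handful of expensive setpieces into cheap substrate at a fixed spacing. The names and ratio are invented; the real design work is in tuning that spacing against player boredom:

```python
import random

def pace_level(substrate_rooms: int, setpieces: list, rooms_per_chip: int, rng: random.Random) -> list:
    """Lay down generic substrate, dropping one hand-authored setpiece every `rooms_per_chip` rooms."""
    layout, remaining = [], list(setpieces)
    for i in range(substrate_rooms):
        layout.append("generic_corridor")
        # Just as the dough is getting boring, serve a chip (if any are left).
        if (i + 1) % rooms_per_chip == 0 and remaining:
            layout.append(remaining.pop(rng.randrange(len(remaining))))
    return layout

rng = random.Random(7)
print(pace_level(substrate_rooms=12, setpieces=["shrine_room", "boss_arena", "treasure_vault"],
                 rooms_per_chip=4, rng=rng))
```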

  • Imagining the final experience, what aspects deserve to be meticulously authored? What aspects are filler?
  • What are your set pieces? What are their prototyping, standardization and production processes and costs? How often can each one be used before players consume it?
  • What is your substrate? 
  • What is the ratio of set pieces to substrate?
  • What is the pacing of setpieces? 

Advanced Patterns – User content

There’s also a set of more volatile patterns that involve leveraging your players. You give up control and risk quality, but sometimes gain new sources of content far beyond the resources of your team. 

Player sourced testing: If you have a strong pre-release community, you can ask them to test the game. This is perhaps obvious, but in the language of the model we’ve been discussing, it facilitates getting back rapid and rich feedback on your iterations. This path also includes analytics. 

Player sourced game content: You can go further and source actual content chunks. The most common example of this is crowdsourced localization, but it can be extended to other types of content. 

In Realm of the Mad God, we crowdsourced much of the pixel art. Some important lessons from this and crowdsourcing localization: 

  • User friendly tools: Players don’t have the patience to learn typical developer tools. 
  • Robust standards: You need explicit, heavily validated standards. Developers need a path to creating the content right. Players need to be prevented from creating the content wrong. These sound similar, but the latter is a much harder requirement.
  • Credit: Acknowledge their contributions. This goes a long way towards encouraging them to help out. We held contests that were very effective. 

Mods: Post launch you can open your game up to mods. It is quite common for long lived popular games to source entire expansion packs or members of the ongoing dev team from the mod community. It is a gift that keeps giving. 

In-game social content: You can also build tools inside of your game and incentivize players to create content for other players. There are many variations of this, but the main thing to note is that good UGC systems require you to design your game around them. Not a simple add-on, but something at the heart of the core loop. Examples: 

  • PvP: Players act as enemies for other players. Counterstrike, Chess. 
  • Base builders: Players create bases for other players to destroy. Clash of Clans
  • Building games: Players cooperatively build in a space together. Minecraft, Factorio
  • Design games: Players create levels for other players to play. Super Mario Maker, Dreams

Meta: Designing tools

So far we’ve been mostly talking about how you design your data and the structure it lives within. But don’t forget that authoring this content is a human process; someone needs to create by hand the work feeding these magnificent pipelines. And for that you need great tools.

The goal of tools: Tools multiply the efforts of content authors. They help create:

  • Richer content: Tools unlock the ability to make types of content that were otherwise impossible or too time intensive to consider.
  • Cheaper content: Tools enable an author to create a chunk of content of a desired quality level more quickly.  
  • More polished content: By reducing iteration time and improving feedback, an author is able to quickly polish their poor rough drafts into something that delivers on its promise.

Unless you get into generative systems, tools tend not to be used to create large quantities of new content from a few base seeds. That’s more the role of combinatorics or other proc gen techniques.

All game tools are custom designed: The first and most critical lesson you should learn is there are no standard tools. Every tool needs to be custom tailored to best fit the following constraints

  • Skills of author: What level of abstraction does the author work best in? What affordances help them do their job? Game tools generally target intermediate and expert users.
  • Requirements of the content chunks: What is the minimal set of data that should be hand authored to make an effective chunk?
  • Ingestion of the content: What is the efficient process by which authored content is connected up with the rest of the game?
  • Iteration requirements: How do the tools enable the author to make and see changes rapidly?

I suspect some of you are thinking, “But I have Photoshop! I have Maya! I have Unreal! Those are standard tools.” Sweet summer child. 

Modern commercial tools are powerful enough to do almost anything. Without identifying and serving the previous constraints, you will flail. So like it or not, you still need to establish standard practices, procedures, naming conventions and automation scripts in order to use even something as ‘standard’ as Photoshop to efficiently build your specific game. There will always be a tool design process for each game, even if it is built on top of an existing tool chain. 

A process for designing your tools

  1. Constraints: Identify the four constraints for a particular type of content: Author Skills, Content requirements, Ingestion Pipeline, and Iteration requirements. 
  2. Initial Sample: Create an example of the type of content you are making. Get feedback from stakeholders if this is what you want to build. 
  3. Brainstorm building the sample: Talk to a real author. Not an imaginary one, but an actual person who is going to be creating these things. How would they build this? Is there anything that exists that could be leveraged? What are problems and workflows they imagine will come up? Small, cross functional strike teams are very effective if multiple people are involved. 
  4. Build a first version: Try for the 20% of features that gets you 80% of the functionality. Test the pipeline of creating, ingesting and seeing the content in the game end to end.
  5. Get an author to use the first version as soon as possible: Have them make real content that is expected to be in the game. Listen to their complaints and dreams. 
  6. Fix issues: Fix as many easy issues as possible immediately. Prioritize one or two big asks for the next rev. Repeat these last two steps until the tool converges on something ‘good enough’; it will never be perfect.

Mistake – Not basing the tool features off real content needs: The most common pitfall that plagues tool creation is that feedback and iteration steps (2, 3, 5 and 6) simply never happen. An engineer makes a tool. They (or antsy producers) declare the tool finished and the rest of the team is told to use it. 

  • Often this first pass contains the wrong features. 
  • Or weeks are wasted over engineering aspects that are unimportant. 
  • Or they’ll have built in major workflow problems that are invisible to them because they don’t understand that X is an operation you need to do 300 times in an hour, not once per week. 

In the best case, content authors don’t even use the tool and find cheaper workarounds that get the job done. You just lose the engineering effort. In the worst case, content authors use the tool but they spend truly enormous amounts of wasted time jumping through avoidable hoops. The result is typically bad, hacky content that was expensive to create. And often needs to be thrown away. 

Mistake – Delays building real content: The next most common pitfall is that there is a large time gap between the first version being built and an author using it to create real content. In addition to the general problems of skipping iteration, waiting too long has the following negative effects.

  • Change becomes expensive. Code and processes petrify over time. When an engineer still has the code in their brain, feedback from the author is much easier to implement. Small tweaks happen quickly. 
  • Authors are never taught how the tool works. An immediate dialogue between the creator of a tool and the content author inevitably results in knowledge transfer. So many times I’ve realized that there was a keyboard shortcut already implemented for a laborious task. But the conversation happened a month after the tool was built and the engineer had forgotten.

Tip – Shadowing: Content authors infest old tools like fungus in a moist, fecund jungle. Strange content will seep out of every crevice in the toolchain. Wait long enough and you’ll see workarounds built off hacks forming the foundation of huge swaths of your content. Authors learn, adapt and push tools in ways many find horrifying. In the process, inefficiencies creep in as the tool ends up being used in ways it was never intended.

This is normal. And it is a good thing. Clever content creators are discovering new opportunities and new requirements that couldn’t be predicted until they put a few hundred (or thousand) hours into actually building the desired content. 

The first step in supporting your fungal creators is to understand how the tool is used in the real world. Shadowing is when a toolmaker watches a content creator build something. It is like playtesting for your tools. 

  • Share a screen as a content creator builds something. If they start doing something strange, ask them why. The answers are delightful. 
  • Record how long things take. Is anything surprising? A fun exercise is predicting how long you think things will take, and then comparing it to reality.
  • Brainstorm ways of reducing iteration time. Can steps be removed or automated? Can automated steps be sped up? How would you make this process 10X faster?
  • Review standards: Do they need updating? Can edge cases or expensive exceptions be avoided going forward?

Final notes

We’ve only managed to cover the most basic aspects of game content architectures. I hope you find enough here of interest to explore further. Observe your own projects with a critical eye, experiment when possible and share notes with others. For deep skills that cross multiple disciplines, a document alone will never be enough. 

Be humble. Content architectures are not a magical silver bullet for making more meaningful content with less effort. They can be a huge pain in the ass that introduces immense complexity, costs and risk into your game. Because of the effort it takes to build and tune them, they often delay your ability to start playing the game. 

Diagram 5: When each incremental chunk is expensive and you need a lot of them, a higher leverage content pipeline might be worth your time. 

Learning curve: Any content architecture and toolset has a substantial learning cost. The specific team using the system needs to understand and practice building great content with the tools. I don’t mean to frighten anyone, but this can take years. A level designer who has been using Unreal for 3 years will generally be a lot more effective than one who has been using it for 6 months. A team that has been building content for a specific genre on a specific engine will be much the same. 

Often the best tools and processes are the tools you know: I’m regularly amazed at how simple tools and simple content in the hands of experienced, talented teams results in world-class experiences. The content architecture of a novel isn’t complicated. Just a series of chapters composed of a few hundred pages. Authored using bog-standard text editors. Yet we give that to writers with years of experience under their belts and amazing work emerges. 

A lot of times, you can just throw talent at a problem. And if you need to scale up, throw more bodies into the pipeline. This path is always an option as long as you’ve kept your content modular and highly decoupled. 

The final constraint: Hand-authoring is our ultimate pinch point. Humans can only work as fast as humans work. They need to dream, experiment, clumsily and slowly make mistakes. It takes “human time” to have moments of insight and creative breakthroughs.

Naturally, as beancounters and producers, we want to multiply those efforts. To stretch out that costly thing and increase efficiency. 

But this hand-authored content is also the soul of our games. Dilute it too much and you destroy the very thing that provides value. “More, Faster” is not better if you are churning out garbage. 

Your content architecture is a delicate balancing act. Where do you put all your limited, beautiful, messy, human effort in order to provide the highest quality experience for the player? A worthy design challenge.
