Building Alignment with Code Pillars

By Jack Davidson

One of the joys of programming is that there are so many different ways to tackle a problem. You may lean into specific paradigms like object-oriented programming or data-oriented design, or make heavy use of particular design patterns because they provide a framework that eases development and testing, or offers some form of modularity. All of these paths have their pros and cons: some we can measure and be objective about, and some we have to let our experience guide us on.

Unfortunately, if you're working on a team, it is not a given that each member will come to the same conclusion on how a problem should be solved. At the end of the day we have all accrued different experiences and lessons from our time honing our craft, and this is especially true for those who come from different teams or projects and may not have worked together as long.

This can result in a lack of cohesion in a shared codebase, where different regions of the code are built using contrasting methodologies. At first this may not be much of an issue while those regions are isolated from each other, but at some point they come into contact, and methodologies that make sense on their own can create complex code where they meet. A programmer from one region might be exposed to a new part of the codebase when tracking down a performance issue or fixing a bug, and find themselves lost, having to stop and build a new mental map of how the code is structured before they can make progress, slowing down development.

We can minimise these risks by taking proactive steps to build alignment in the team. Processes such as reviewing technical designs before implementation and reviewing code after the fact will help create that alignment over time, module by module. However, we think it is valuable to have the discussion of how we unify as developers on the current project up front. By doing so we can surface misalignment early, understand the compromises we can make, and commit to how we as a team work and solve problems. Done right, these discussions can speed up the process of reviewing technical designs or code by reducing the number of surprises that come from misalignment.

A tool we have used to have that discussion is what we refer to as Code Pillars. The process of creating these pillars provides a space for the team to think about alignment early and, more importantly, to get communicating. I will not pretend this is some magic bullet that can solve all our woes; building alignment in a team requires constant attention and communication to maintain. However, code pillars can help get the ball rolling as well as providing a constant reference point to compare against, anchoring your team to an agreed approach.

What are Code Pillars? 

Code pillars, to us, are an umbrella of core ideals that we agree as a team to work under and to use when planning, building and reviewing code. While they can help alignment during development, they can also provide an insight into the mindset of the team for anyone joining, or for future engineers if the project is ever revisited.


It is important that code pillars are driven by the team, describing what the team prioritises when solving problems; by focussing on their priorities we can get the buy-in from the team necessary to reach alignment. These priorities may include but are not limited to: 

  • How to think about the problem.

  • How we structure our code and data.

  • How the team wants to personally grow and learn as individuals.

  • How the codebase can best support the team.

Code Pillars are separate from coding standards. They are not hard rules carved in stone that focus on the grammar of what we do; instead they tend to be more high level, helping guide us toward a solution that is expected by our team, a solution that our colleagues would have also arrived at, more or less, given the same problem. It should be noted that the pillars should be flexible and ready to change when the priorities of the team change.

While Code Pillars are driven by the team and can encapsulate a lot of the team's personal ambitions for development of software, it is important that the team always keeps in mind that code is a means to an end, and that we do not work in isolation. We have to consider the needs of the product, the other disciplines and the eventual users at each step.

With that overview, let’s step through each point and provide some examples of code pillars we have used in different projects to provide some context.
You may agree with some, you may disagree with others. That's fine: your team's code pillars should be a reflection of what's important to your team and the problems you're tackling in your project, not ours. My goal here is to get you thinking about what a code pillar can be.

How to think about the problem

We have used the pillar “Solve actual problems - no more, no less” to keep the team focussed on what we know with reasonable certainty, to minimise unnecessary speculation, and to avoid complicated general-purpose solutions to simple problems. As a team we all know that desire to create a solution that works for as many cases as we can think of; it can be a challenging and interesting problem, much more fun than a simple function. However, we wanted to keep in mind how likely it is that we actually need a solution for those cases, and consider what we are sacrificing in terms of readability and performance to have that one-size-fits-all system.

Another pillar in this category is “Different problems require different solutions”, acknowledging that large design changes should be reflected in the code. Our code should be tailored to the problem we are trying to solve, and in doing so we can benefit from optimising to its constraints.

How we structure our code and data

A good example was “Don’t squirrel away data”. We wanted to avoid systems owning too much data, and instead push data down into systems as needed. We admitted as a team that we didn’t know where data would be needed later in development, and that getting it wrong can impact how easy the codebase is to work with. In the past we tried having each system own its own data, which in turn led us to build bridges between systems and couple them just to access that data later. By agreeing up front that the accessibility of data was important to managing the relationship between systems, we did not have to worry about how we could access data when the design of a system changed and required something new, or when we needed to strip a system out of the pipeline.

A favourite of mine is that code should focus on “Debuggability”, meaning we avoid patterns that make it difficult to know what is happening when something breaks. For example, opt for polling over callbacks, as polling lets you walk through the code and see why a block isn’t executing. You don’t always know where a bug has come from, and you can’t always fall back on the original author to find the problem.
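To make the distinction concrete, here is a minimal C# sketch (the type and member names are mine, not from any real project). The callback version fires from wherever the event happens to be raised, which can be hard to trace when the warning never appears; the polling version keeps the decision in one place you can step through each frame.

    // Hypothetical example: deciding when to show a "low health" warning.
    // Callback style: the logic fires from wherever the event is raised.
    public class HealthComponent
    {
        public event System.Action<int> HealthChanged;
        public int Current { get; private set; } = 100;

        public void ApplyDamage(int amount)
        {
            Current -= amount;
            HealthChanged?.Invoke(Current); // who subscribed? when? from where?
        }
    }

    // Polling style: read the state every frame and decide in one place.
    // If the warning is missing, put a breakpoint here and watch the condition.
    public class LowHealthWarning
    {
        private const int Threshold = 25;

        public bool ShouldShow(HealthComponent health)
        {
            return health.Current <= Threshold;
        }
    }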

The pillar “Code to last” was a lesson hard learned during prototyping. Eager to move quickly and test out as many ideas as we could, myself and a few other junior engineers took shortcuts and hacked things together. “This is fine,” we told ourselves, “this project will only last a month.” Oh, how naive we were. That project lasted three months as new ideas were generated that built on what came before, and we soon ended up in a position where the hacks we had used to speed up development were crippling our velocity in later months. With the next project we wanted to manage that velocity better, to maintain a pace that might be slower than that initial run of hacks but was consistent and reliable, so we used “Code to last” as our guiding light while prototyping. This doesn’t mean writing prototypes to a production standard, but acknowledging the balance required between the flexibility needed to iterate and creating a stable foundation that we can continue to build on, as our code will live longer than we expect.

How the team wants to personally grow and learn as individuals

In a previous project I worked on, a few of the engineers were keen to learn about data-oriented design and how to accomplish it through the use of Unity’s Job system. By surfacing this through the code pillars process, the team were able to have that conversation up front and, as a group, evaluate the potential and the risks, and where it made sense for the project, make space to drive that learning forward.

In this case, while Unity Jobs was a good fit for the type of game we were making, the timeline of the project and our unfamiliarity with the Jobs system and data-oriented design led us to the pillar “Open to using Unity.Jobs - Group data where it’s used most”. We didn’t commit to using Unity Jobs yet, but wanted to take steps towards learning about these concepts by making sure we as a group structured our code, and more importantly our data, in an appropriate manner. In doing so we could get a feel for the challenges and benefits of data-oriented design, while leaving the door open to slot Unity.Jobs in later down the line and investigate its use and impact on performance when time allowed.
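As a rough illustration of what “group data where it’s used most” can mean in practice (purely a sketch, with invented names, not the project’s actual code): keep the data a movement update touches in flat arrays rather than scattered across many component objects. A plain loop works today, and the same layout could later be handed to a Unity job if we decide to adopt Unity.Jobs.

    using UnityEngine;

    // Illustrative only: group the data the movement update touches together.
    public class MovementData
    {
        public Vector3[] Positions;
        public Vector3[] Velocities;
    }

    public static class MovementStep
    {
        public static void Integrate(MovementData data, float deltaTime)
        {
            for (int i = 0; i < data.Positions.Length; i++)
            {
                data.Positions[i] += data.Velocities[i] * deltaTime;
            }
        }
    }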

How the codebase can best support the team

This example was a direct result of the code team coming from a project where we felt we had dropped the ball, making things unnecessarily difficult or time consuming for other disciplines and ourselves when working in the editor. So we agreed as a group on “Code decisions should empower other disciplines”: we wanted to focus on fast game startup and on minimising iteration time when working with data and assets. The more we as creatives can iterate, the better the final outcome.

How we define our code pillars!

Hopefully by this point you see some of the value in creating your own pillars to help align and guide the team - maybe you even have a couple of ideas. Let’s discuss how we go about actually constructing our pillars to ensure they are memorable, useful and usable.

This process should start with each new project or change in direction. Take your previous code pillars with you as a reference, but you don’t have to stick with them. Your new project or direction might have new priorities, and as you and your colleagues have grown, the pillars you want to work toward may have changed. Take the time to understand what is important to your team now and going forward.

As I’ve said already, it’s vital that we get everyone involved; if we all have a stake in the pillars’ creation, everyone is invested in making them work. I’ve enjoyed projects where this process was easy: we may have started with some misalignment, but through the discussion we got excited by the potential ahead and were forged into a much stronger team. I’ve also delighted in projects where this process of building alignment was filled with more lively debate; while we were all made better by these discussions, there came a point where we needed to agree to disagree, and commit. It’s at this stage you need to rely on the critical mass and the key figures in your team being on board, otherwise you lose any buy-in and the effort is doomed to fail.

What this looks like at scale in a much larger team, I’m not sure yet, but I look forward to tackling the problem and learning how this applies.

When it comes to constructing your code pillars, it can be useful to look back on your previous project. Consider what worked and what problems you ran into: can your code pillars help mitigate those problems while shoring up what was successful?

Ask your team a few questions to get the conversation going. A good starting point is: how many pillars should we have? Too many and they become hard to remember; too few and you might miss what gets people excited. The likelihood that everything can be covered by one set of pillars is slim, so some tough choices may lie ahead to determine what the priorities are for the team and the project. For actually understanding the team’s priorities, I’ve tried asking:

  • What do we want to achieve as programmers?

  • What do we want to learn?

  • How do we want to structure our code and data?

  • How can we best support other disciplines?

  • What are the key product risks we are looking to mitigate?

While these are more generic questions that could apply to any project, it is important to remember your current project may have unique problems that need to be a focus of the team. For example, in a project targeting console platforms we also asked “Should certification drive some of our pillars?”. These questions should set us up to concentrate on what’s important for this project and the team.

We opt to start by brainstorming to discover what the team finds important, collecting similar themes together before voting on our highest priorities. We’ve found techniques such as dot voting to be particularly useful in narrowing down to a short list, you may even wish to give weight to votes from senior members of the team. 

The next step is to format these pillars into something that is easy and quick to reference, so as to refresh our memories: a snappy headline that, when repeated in a document or a meeting, the team knows what it means. However, a headline isn’t enough; we all forget, and new members of the team will have missed any conversations that provided context. Follow through with some bullet points or paragraphs to flesh out the concept. Essentially it is important to capture the necessary information for the reader to understand what the pillar is, why it’s important and how they can achieve its goal.

With everything formatted, some gorgeous posters lining the office walls, the team is aligned! … If only. At the end of the day Code Pillars are a jumping-off point, to get the team thinking about how we stay aligned. It is important that the team refers back to them regularly. When discussing solutions to a problem they can be used as evaluation criteria to decide the best solution for the project. When reviewing code they can be used as a marker to understand whether the code meets the expectations of the team. By keeping them in the conversation at each stage in the code development lifecycle, they will piece by piece be internalised in how we develop software.

As a team it's necessary to keep checking whether the pillars are still relevant as time goes on. Understanding whether the code the team is creating has started to diverge from the pillars can be a rough measure of the team's alignment. It's worth highlighting any disparity with the team; the result may be that they want to continue with the existing pillars, or they may decide that there are new priorities that should take precedence.

It may be that the team has stuck to the pillars to the point they have become second nature, and the conversation turns to whether there are any new pillars the team can strive towards to continue to improve. It is also likely that the context of the project has changed as time has progressed: do the team's priorities still allow the project to achieve its potential, or do they run counter to the current direction?

Whatever the situation, checking in and having these conversations is an important part of maintaining that alignment.

Building alignment in a team is no easy task, especially as teams grow larger and deadlines get closer. We’ve certainly not perfected it. Code pillars are not a silver bullet that's going to do all the work for you; they can only ever be high-level guidelines. There is potential for misalignment over how to achieve the pillars, which will only show as code is developed. When this happens it's a sign to circle back and have that discussion of what the pillar means, how it applies, and whether this is a case to bend the rules. As with any disagreement, it’s a learning opportunity for everyone involved, and we should all aim to come in with an open mind to how others frame the problem and its constraints, and how those, paired with their interpretation of the pillars, lead to a solution.

This process and its ability to make stronger teams is something I am passionate about, so I am very keen to hear if anyone goes through this process or something similar - what were your pillars and are they helping?



Tag Games is now a Scopely Studio!

We are excited to announce that Tag Games has joined the Scopely ecosystem!

From the Scopely.com blog

”One of our ongoing objectives at Scopely is to explore acquisition opportunities to bring additional best-in-class teams into our operating system, and the talent at Tag Games is perfectly aligned with this ambition.

Scopely is home to a dynamic ecosystem of world-class gamemakers, creating exceptional game experiences around the world. Scopely game teams include internal Scopely Studios and/or external partner studios that collaborate with a unique single team approach, where the traditional boundaries between developer and publisher are eliminated. Scopely is highly committed to building game teams with incredible talent density, inviting the world’s best gamemakers to create together! Both internal Scopely Studios and partner studios benefit from Scopely’s publishing infrastructure, operating system, and proprietary technology platform Playgami™, which offers a range of products, enabling teams to create games players love and grow them as great businesses.

In 2021, we partnered with Tag Games, a fantastic team based in Dundee, Scotland. Tag, who recently celebrated its 17th anniversary, has worked with leading publishers on more than 60 games since its founding in 2006.

To collaborate even more deeply, both Scopely and Tag Games leadership felt it was the ideal time to fully join forces, and believe this combined entity will make us all even stronger. Tag shares our vision to create extremely meaningful, dynamic experiences for players and represents outstanding passion and expertise in game making. We can’t wait to see what more we can do together.

With this acquisition, more than 60 employees join Scopely in Scotland.

Welcome to the Scopely adventure, Tag Games!”


Architecting #1 - Responding to change

By Scott Downie

I’ve read countless books and blog posts that cover specific aspects of development, and many other articles that talk at length about modularity, coupling, dependency inversion, SOLID, etc.; but I’ve found it difficult to find many sources that discuss architecture in concrete rather than abstract terms, so I thought I would share some of my own experiences. Architecture is more than just the modules of your application (although that is a big part of it); architecture is the relationship between those modules, and particularly the data that those modules consume and produce in order to achieve the product vision. Architecture is also not something that should be rigid, but something that should change as our understanding of the product space changes. It is unreasonable to expect to plan and deliver a game on the first iteration without encountering challenges, feedback and mistaken assumptions that ultimately force us to change our approach.

At Tag I’ve been fortunate enough to work on a number of different greenfield projects, each providing me and the teams with an opportunity to try something different and attempt to learn the lessons of the past. In the last couple of years I’ve kind of settled on a general approach that we’ve used in our last 3 or 4 projects (although it may change in future again - who knows). Tag isn’t a hive-mind, so there are people I work with day to day who I’m sure (and in some cases know) would approach things differently. But equally there are a number of people who are philosophically aligned with me. So all of that is just to say this is a reflection of my thoughts and experiences that I wanted to get down on paper (others on the team will also have a chance to post about things that are important to them).

This post will focus primarily on architecture through the lens of building a codebase that is “responsive to change”, so that we can accommodate the inevitable pivots and bumps in the road with as little additional impact as possible. There are, of course, other important facets of designing a codebase - like how it supports performance, or how we utilise and support a larger team - that I’ll perhaps cover in future, but for now the focus is on adaptability. (Worth noting that I don’t believe performance, readability and adaptability are mutually exclusive - but that’s for another day.) The aim of writing this was both to help set out my approach for those who might be curious and hopefully to prompt some discussion about what has and hasn’t worked for others. So I’m interested in hearing others' experiences, thoughts and feedback.

Something worth noting is that I will try to avoid using shorthands to refer to patterns and instead build from first principles to avoid any misunderstanding. I find it hard enough to build alignment using shorthands within a team working together day to day, let alone have shorthands that translate to a global audience. I tend to think of good shorthands as something that emerge over time once there is alignment on the foundational knowledge.

Responding to change

Humans tend to be inherently anxious about change, and I’ve found that to be particularly true of programmers (myself included), who seek order and often see change as chaotic. Sometimes we find ourselves in situations where the design is changing and we lament the fact we didn’t do more to catch it up front and plan for it. But deep down we know that change is inevitable and often outwith our control, whether it is coming up against something unexpected in an implementation, pivoting the design in response to user feedback, or sometimes having to change to support platform or legal directives. The requirements that our codebase supports at the start of a project will not be the same requirements it needs to support at the end - and that is ok.

So what does it mean to be responsive to change? Or rather what does it not mean? In my opinion being responsive to change does not mean our code can magically handle changes in design. I’m a strong believer that non-trivial changes in the design domain should result in similar scope changes in the actual codebase. Being responsive to change is about reducing the time and impact of making those changes. In essence, how reconfigurable our code is and how easy it is to remove parts that no longer meet the requirements and replace them with parts that do.

This is sometimes in conflict with the (often preferred) approach of writing more “generic” solutions upfront. While there are valid cases for writing more general purpose solutions, I’m a big believer that these tend to be more exceptional cases and for most problems it is better to focus on solutions that are a tight fit for the concrete requirements at hand. The reasons being that “generic” solutions tend to:

  • Have larger upfront and ongoing maintenance costs

  • Be more complex than a tight-fitting solution (not always, and when the generic solution is genuinely simpler that can be a good razor for choosing it)

  • Miss the mark (when applied speculatively), introducing unhelpful coupling or not actually handling the evolving requirements.

So I’ve kind of developed a heuristic that if non-trivial changes in design don’t require changes in the code, that is a bit of a red flag that we might be solving problems more generally than we need to, and are perhaps falling foul of confirmation bias for the handful of general purpose solutions that have worked out. All in all I tend to follow the “rule of three” with regard to generic code: if we have 3 concrete examples that point to a more general purpose solution, we can then build the more general purpose solution.

In terms of architecture that is responsive to change, to me this means we build something that doesn’t try and guess at how the product will evolve, it supports the product as the product is now - but crucially reduces the cost of changing the modules and the relationships between modules as we learn more about the design domain.

So how do we measure how responsive to change our codebase is? I could probably fill a whole blog post with a discussion of the importance of objective measurements but I’ll just say that it is important that where possible we try to approach these things in terms of measurable outcomes. While it is very difficult to say whether one architecture is “better” than another, we can absolutely say whether our architecture meets the measurable outcomes. So let’s define the measurable outcomes for a codebase that is responsive to change:

Responsive to change

  • Minimises time taken for changes in the design to be reflected in actual working software

  • Minimises the amount of bugs and technical debt taken on to achieve that change

  • Maintains the product KPIs despite changes

We should measure these periodically and if these KPIs get worse as the design changes, then chances are our structure isn’t adaptable enough…yet. But it can and should evolve.

The 5 principles

We need to split our codebase up so that it is easier to rationalise about, can support multiple people working on it and ultimately, so that it is easier to make changes to the codebase in response to changes in the design domain.

Now we can talk about “modularity” and the role that it plays in helping us respond to change. After all, architecture is primarily about designing the relationships between modules. Essentially the idea is to make it so modules can be easily changed or replaced, and crucially for the relationships between modules to change, without the ripples of that change impacting the wider codebase (or the wider team).

We achieve this through these main principles/guidelines:

  1. Organise the application into “phases” and “layers”, where the “phases” encapsulate systems and data relating to that phase of the application, and the dependencies between layers are unidirectional

  2. Centralise “global” data outside of any one system

  3. Minimise inter-system dependencies using “data contracts” and have them communicate through a “glue layer” / coordinator (which we call Flow States BTW)

  4. Ensure “global” state changes are enacted centrally and not through satellite systems

  5. Model the behaviours and not the design domain

Here’s a simple, hopefully very recognisable scenario. We have a player controlled character (controlled via some input device) that moves around the world and the camera follows its position. Here’s a diagram showing the architecture:

Let’s walk through how the principles are applied.

#1 Organise applications into “phases” and “layers”

Let’s start by thinking about our application in terms of discrete phases; perhaps there is a main menu and then the game itself (the above diagram captures a system operating in the “game” phase). You could imagine each of those phases having a “coordinator” that drives the systems and the data flow for each phase, as well as holding onto the relevant core data for the phase. You can think of the “coordinator” as being the conductor of an orchestra; not making any of the music but making sure everything else is happening at the right time. If we just have a single coordinator it would soon become giant and unwieldy with systems running that aren’t even needed at that point in time. We will likely want to split some of these phases into smaller finite states - perhaps “game” is actually made up of an “attacking” mode and a “defending” mode. Each mode runs different systems, has different data and has different UI views, but there are still some “game” phase wide systems (like listening for a “pause” event) that we want to run in both modes. So now we have something of a hierarchy where “attack” and “defend” coordinators are actually children of the “game” coordinator and each system within the coordinator is also a bit like a child. 
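As a rough C# sketch of that hierarchy (all names here are illustrative, not from a real codebase): the “game” coordinator runs its phase-wide systems and delegates the rest to whichever child mode is active.

    public abstract class Coordinator
    {
        public abstract void Update(float deltaTime);
    }

    public class PauseSystem
    {
        public void Update(float deltaTime) { /* listen for a pause request */ }
    }

    public class AttackCoordinator : Coordinator
    {
        public override void Update(float deltaTime) { /* attack-mode systems and data */ }
    }

    public class DefendCoordinator : Coordinator
    {
        public override void Update(float deltaTime) { /* defend-mode systems and data */ }
    }

    public class GameCoordinator : Coordinator
    {
        private readonly PauseSystem _pauseSystem = new PauseSystem();
        private readonly Coordinator _attackMode = new AttackCoordinator();
        private readonly Coordinator _defendMode = new DefendCoordinator();
        private Coordinator _activeMode;

        public GameCoordinator()
        {
            _activeMode = _attackMode;
        }

        public override void Update(float deltaTime)
        {
            _pauseSystem.Update(deltaTime); // phase-wide, runs in both modes
            _activeMode.Update(deltaTime);  // only the active child mode runs
        }

        // The parent decides when to switch; children never reach up the hierarchy.
        public void SwitchToDefend()
        {
            _activeMode = _defendMode;
        }
    }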

Firstly, we want to avoid children having dependencies on parents or parent systems. Such dependencies make it more difficult for something like a UIView or other system to be used in multiple phases, because it will be coupled to a specific phase. This coupling impacts adaptability, because there is more work to be done if we want to move the system or view to another phase of the application. So it’s fine for a coordinator to call a function on a system (say to update it each frame), but typically we want to avoid explicit dependencies the other way around. These boundaries can be quite obvious at a library level, where it would be fine for our “MainMenu” to depend on a JSON parser but certainly not fine for the JSON parser to depend on our “MainMenu”. (FYI we touch on handling this dependency inversion a little later, but it can be achieved in multiple ways.)

Secondly, we want to ensure that the “coordinator” has all the information it needs to make any required global state changes (#4). If a coordinator is too low down the hierarchy to make an effective decision, then the decision needs to be escalated up the hierarchy until a coordinator has the context to act on the information. So for example if the “attacking” coordinator wants to show a UI screen but we know that the parent “game” coordinator manages the pause screen - it probably makes sense to promote the management of the screen state to the “game” coordinator. In practice this means as the application evolves data stores are promoted and demoted up and down the hierarchy to make sure that coordinators have all the information they need for their systems.

#2 Centralise “global” data

I’ll start by trying to define “global” data. It’s not a reference to global variables or anything like that; it's more that we have data that is important or central to our application and that is often required by multiple systems. A good example is the position of the player. In our small scenario above, the player position is important to the CameraSystem and the MovementSystem, but it will be equally important to a collision detection system and to UI systems (for displaying markers, etc). If we squirrel that data away inside systems - especially satellite systems - then we will likely introduce dependencies between different systems (see #3), but equally, as the design of the application shifts, you might find that the data needed by systems only exists in places there isn’t a direct connection to. The solution is then often to convert those systems to singletons to provide easy access to the data, but this can lead to the “big ball of mud” pattern, where the convenience of having access to systems anywhere leads to a loss of structure as systems start to call into each other. Another solution that emerges to solve the lack of direct access to data is the introduction of complex dependency inversion through message passing. Message passing has its uses (which I touch on later) but not as a sticking plaster for poorly designed communication lines; ultimately it adds a layer of cognitive load in tracking where the messages come from. For me it is better to accept that there is data in our application that doesn’t really belong to any one system, but instead belongs more centrally in some form of data storage container. (BTW, I’m also not saying systems can’t be stateful - they will have internal working/scratch data that only they care about.)

If we have the data stored centrally in the coordinators (not necessarily as literal variables, but in some data container), then when we add a new system (or change the responsibilities of an existing system) we should easily be able to feed it the data it needs. The coordinator can even aggregate/package/split data into the format needed by the systems (not too dissimilar to the controller in a Model-View-Controller approach), so if required there can be a level of indirection between the stored data layout and the layout needed by systems. Typically we will want the data stored in batches and formatted ready to hand to the systems in bulk without any transformations - but different systems might require the data in different formats, and we will probably want to lay out the data around the critical path. Ideally we will want to group data together so that it has high cohesion (i.e. 90%+ of the data elements are used by any given system that uses that data container).

In many respects this sort of database approach is what would happen in a typical server application, where logic and data are very much decoupled. It also has the advantage of being easier to serialise, since we aren’t spreading the responsibility of serialisation across multiple otherwise unrelated systems.
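To make that a bit more concrete, here is a rough sketch of what those central containers might look like (all type names are hypothetical):

    using UnityEngine;

    // "Global" data lives in plain containers owned by the coordinator rather than
    // inside any one system. Grouped for cohesion: systems that take PlayerStateData
    // tend to use most of its fields.
    public class PlayerStateData
    {
        public Vector3 Position;
        public Vector3 Velocity;
    }

    public class ScoreData
    {
        public int Score;
        public int Lives;
    }

    // If a system wants the data in a different shape, the coordinator can
    // repackage it rather than the system reaching into another system.
    public readonly struct CameraTargetData
    {
        public readonly Vector3 TargetPosition;

        public CameraTargetData(Vector3 targetPosition)
        {
            TargetPosition = targetPosition;
        }
    }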

#3 Minimise inter-system dependencies with “data contracts”

Notice in the diagram above that the systems do not have any direct dependencies on each other. Sometimes I’ve seen this achieved using interfaces at a system level, to avoid creating explicit type dependencies between two systems, but that approach doesn’t really solve the main problem as we still have the dependencies on the actual interface via the method calls. The systems in our example above essentially just take in some data and output some data (our result). This means we can change the internals and even the API of the InputSystem without having to touch the MovementSystem. The MovementSystem just needs the previous frame’s velocity and position, it doesn’t care how that data is provided. So if we need to change the wider application in response to some design change - as long as we still provide velocity and position, the MovementSystem is quite happy. This is important because there is a correlation between the amount of churn in a system (frequency of changes) and the number of bugs. By reducing the reasons for the MovementSystem to change, we hopefully also reduce the number of bugs.
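Here is a rough sketch of what that boundary might look like for the MovementSystem (the names and the maths are invented for illustration). The system takes the previous frame’s position and velocity plus a movement axis, and hands back new values; it neither knows nor cares where that data came from.

    using UnityEngine;

    public readonly struct MovementInput
    {
        public readonly Vector3 Position;
        public readonly Vector3 Velocity;
        public readonly Vector2 MoveAxis; // device, AI bot, replay or test data

        public MovementInput(Vector3 position, Vector3 velocity, Vector2 moveAxis)
        {
            Position = position;
            Velocity = velocity;
            MoveAxis = moveAxis;
        }
    }

    public readonly struct MovementResult
    {
        public readonly Vector3 Position;
        public readonly Vector3 Velocity;

        public MovementResult(Vector3 position, Vector3 velocity)
        {
            Position = position;
            Velocity = velocity;
        }
    }

    public class MovementSystem
    {
        private const float Speed = 5f;

        // Data in, data out: the contract can stay stable while the rest of the
        // application changes around it.
        public MovementResult Step(in MovementInput input, float deltaTime)
        {
            Vector3 desired = new Vector3(input.MoveAxis.x, 0f, input.MoveAxis.y) * Speed;
            Vector3 velocity = Vector3.Lerp(input.Velocity, desired, 0.5f); // ease toward the desired velocity
            return new MovementResult(input.Position + velocity * deltaTime, velocity);
        }
    }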

I refer to this boundary between systems as “data contracts”, by which I mean there is an exchange of data under an agreed format. Another benefit in establishing these “data contracts” is that we can iterate on the internal workings of a system without those changes rippling out. This supports one of our key philosophies at Tag  of “make it work and then make it good”. We can build a prototype, that meets the data contract, to test assumptions, get feedback and then we can improve that prototype into production ready code (or shelve it if it isn’t meeting the needs). Similarly this approach can also front load dependencies and allow more parallel work (like server and client being able to work independently once the data is designed).

The approach of passing in data to systems allows us to more easily change the source of the data without having to change the internal workings of the system itself. In the MovementSystem this could mean that the InputData fed to it comes from an AI Bot rather than an input device. Equally it could be we have dummy data that we create to enable testing - it’s much easier to mock data than it is to mock a full system.

An additional key benefit, from my perspective, is that I find it easier to read and understand code that flows linearly (while again acknowledging that readability is hard to measure directly). Humans are well conditioned to read text top to bottom (in most modern cultures at least), so having explicitly ordered system calls with explicit data flow allows me to read the coordinator like a story; “we get input data from the InputSystem, we pass that to the MovementSystem to calculate an updated position, we pass the updated position to the CameraSystem so the camera can frame the player”. With something like an event driven, or heavily interface driven, architecture I can find it hard to follow the flow without stepping through the debugger because I have to keep many more things in my head (e.g. what is the actual type of this interface).
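Here is a sketch of that “story” in the glue layer, reusing the MovementSystem contract from the earlier sketch; InputSystem and CameraSystem are stubbed here so the example stands on its own, and again all names are mine.

    using UnityEngine;

    public class InputSystem
    {
        public Vector2 ReadMoveAxis()
        {
            return new Vector2(Input.GetAxis("Horizontal"), Input.GetAxis("Vertical"));
        }
    }

    public class CameraSystem
    {
        public void Frame(Vector3 target) { /* position the camera to frame the target */ }
    }

    public class GameplayCoordinator
    {
        private readonly InputSystem _input = new InputSystem();
        private readonly MovementSystem _movement = new MovementSystem();
        private readonly CameraSystem _camera = new CameraSystem();

        private Vector3 _playerPosition;
        private Vector3 _playerVelocity;

        public void Update(float deltaTime)
        {
            // 1. Get input data from the InputSystem.
            Vector2 moveAxis = _input.ReadMoveAxis();

            // 2. Pass it to the MovementSystem to calculate an updated position.
            MovementResult result = _movement.Step(
                new MovementInput(_playerPosition, _playerVelocity, moveAxis), deltaTime);
            _playerPosition = result.Position;
            _playerVelocity = result.Velocity;

            // 3. Pass the updated position to the CameraSystem so it can frame the player.
            _camera.Frame(_playerPosition);
        }
    }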

Something to be careful of in this approach is that the coordinators don’t end up doing too much heavy lifting. Naturally the coordinators tend to be high traffic areas (often with multiple developers contributing) so we try and keep the logic inside of systems (and also design the data to minimise the transforms required to feed the data into systems). If you keep coordinators light on logic then conflicts tend to be trivial and easily resolved. You can reduce the chances of conflicts with a little bit of indirection, if you wish, by having the coordinator not deal in concrete data types but instead pass an object or interface that allows the systems to grab the data (meaning when the data structure changes only the provider and the using systems need to change and not the glue layer). But I don’t tend to make heavy use of that pattern simply because it can be harder to read at a glance.

#4 Enact global state changes centrally

If you’ve ever worked in Unity you will be familiar with how UI buttons operate. Essentially we attach a callback function to a button that is called in response to the button being pressed. If that callback is responsible for making a global state change such as showing a new screen, we might find if a player presses two buttons at the same time, two screens show up. Basically we need to have some system with access to all the information that is empowered to make those types of changes. Satellite systems (like a UI view controller) almost certainly don’t have all the information they need to avoid breaking the global state.

Instead of having satellite systems enact the changes, they pass information to a central system (in our case the coordinator), which leverages its access to the broader context to make the state changes correctly. In our UI case that means knowing whether there are other screens currently showing. If we think of the application as layers (#1, organise into layers and phases), information can flow bi-directionally but control flow should come from the top down. There are multiple ways to achieve this sort of information flow without having too many circular dependencies. My preferred approach is to have the coordinator poll its systems for data (like in the diagram) and then have it use that data to trigger other appropriate systems - it’s more explicit and easy to follow, and we avoid systems having dependencies on the coordinator. However, we can also use a message passing or event driven approach for this (we also use a message queue in the coordinators in our codebases).
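A minimal sketch of that polling flavour, with hypothetical names: views only record what the player asked for, and the coordinator, which can see the whole screen state, decides what actually happens.

    using System.Collections.Generic;

    public enum ScreenRequest { None, OpenSettings, OpenShop }

    public class ButtonView
    {
        private ScreenRequest _pending = ScreenRequest.None;

        // Called by the UI toolkit when the button is pressed.
        public void OnPressed(ScreenRequest request)
        {
            _pending = request;
        }

        // The coordinator polls the view rather than the view pushing a global change.
        public ScreenRequest ConsumeRequest()
        {
            ScreenRequest request = _pending;
            _pending = ScreenRequest.None;
            return request;
        }
    }

    public class MenuCoordinator
    {
        private readonly List<ButtonView> _buttons = new List<ButtonView>();
        private bool _screenOpen;

        public void Update()
        {
            foreach (ButtonView button in _buttons)
            {
                ScreenRequest request = button.ConsumeRequest();

                // Only the coordinator knows whether a screen is already showing,
                // so two simultaneous presses can't open two screens.
                if (request != ScreenRequest.None && !_screenOpen)
                {
                    _screenOpen = true;
                    // ...show the requested screen...
                }
            }
        }
    }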

Tying this back to our theme of “responding to change”, centralising control flow and global state allows us to more safely expand the systems and their remits while still ensuring overall correctness of the app. Really we can think about this as moving the responsibility for resolving state changes into a central system, which then allows many more contributing systems to be added over time. I often sum this principle up as “avoid having the tail wag the dog”.

#5 Model the behaviours and not the design domain

The “design domain” in our example above is “We have a player controlled character (controlled via some input device) that moves around the world and the camera follows its position”, but would usually be a brief or design doc. This scenario outlines how the designer/user/product manager might think about the problem space. In my experience it isn’t always useful to model this 1:1 in our codebase (tempting though it may be). In the synopsis we described a “player controlled character”, but you’ll see from the diagram there is no concept of a “player” mentioned. Let’s imagine for a minute we tried to more closely represent the design domain by adding a “Player” class and associating the behaviours of the player with that class (which in this case would be movement, but in a more realistic example might include combat, levelling up, etc). Modelling the player as a type would result in a large class containing lots of complex (and probably unrelated) logic. When you have a file/class that is large and lacks cohesion, it tends to have multiple developers working in it and a higher churn/change rate; this in turn can lead to more conflicts and more bugs, meaning there is more risk and resistance to making changes.

Instead of adding a “Player” class, we look to model along different boundaries that are better suited to our codebase. The first main guideline for splitting up the player is to create systems for each behaviour type. By splitting into smaller systems we reduce the responsibilities that any one system has, and as a result reduce the amount of people, churn and low-cohesion coupling within a system - which makes it easier to change, remove, or add new systems. Another advantage is that it makes it easy to have different entity types (players, enemies, NPCs) leverage the behaviours, because the systems are decoupled from the types themselves; as long as the data required by a system can be provided, any type can use it. ECS frameworks (like Unity Entities) are an example of this system-based approach, however they come with a host of extra features that we haven’t needed so far, so we tend to go with something simpler.
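As an illustrative sketch (names invented): the behaviour lives in a system that operates on plain data, so players, enemies and NPCs can all use it as long as the data can be provided.

    using System.Collections.Generic;
    using UnityEngine;

    public struct MoverData
    {
        public Vector3 Position;
        public Vector3 Direction;
        public float Speed;
    }

    public class MovementBehaviourSystem
    {
        // The system never asks "is this a player?"; it just advances movers.
        public void Step(List<MoverData> movers, float deltaTime)
        {
            for (int i = 0; i < movers.Count; i++)
            {
                MoverData mover = movers[i];
                mover.Position += mover.Direction * mover.Speed * deltaTime;
                movers[i] = mover;
            }
        }
    }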

As well as splitting systems along behaviours, it is often advantageous to further divide systems based on churn/uncertainty and authors. Again it is about managing change and by isolating change we can iterate without having unintentional impacts on other areas. So if we have a system that has a couple of responsibilities that make sense to include together conceptually - but we find that one area is very stable (never changes) and one is changing frequently; it probably makes sense to split again. In doing so we isolate the churn and mitigate the risk of that churn polluting the stable part of the system. Same for authors. If we have a system where certain teams or individuals are changing one part and a different group is changing another - it probably makes sense to split to avoid conflicts and bugs that crop up in the communication gaps between teams. In terms of uncertainty - if we have experimental parts of a module, or things we are unsure of, it can be wise to isolate those as uncertainty tends to lead to a lot of churn and change. In certain cases it may even make sense to introduce a little bit of code duplication if it allows this sort of separation - particularly things like boilerplate code. When both systems are stable - we can always look to recombine and remove the duplicate code.

The dependency injection problem

Dependency injection is another term which has taken on lots of different meanings. I use it in the simplest sense of how we pass dependencies into systems that need them. 

One of the things I pushed really hard on in past projects was having very explicit dependency management - so literally we push all dependencies into systems and we don’t use any “pull models” like DI frameworks or singletons. The feedback (and by my own admission) is that we took that model too far. In my head it was supposed to improve readability by showing dependencies explicitly, and to avoid an explosion of dependencies by introducing a little bit of friction whenever we needed to add one. In practice, because of our hierarchical finite state machine of coordinators, it meant routing dependencies into states just so those states could pass the baton on to another state; and it became quite rigid in terms of moving “phases” around, as we had to reconfigure the plumbing.

One of the teams more recently made a proposal that I think addresses this issue while retaining the original spirit of the initiative. That is, we can use the “pull model” via a service manager/DI framework (or even singletons, shudder) inside a coordinator, but we inject explicit dependencies (or containers of dependencies) into systems. So the coordinator can easily grab the dependencies needed by the systems and then forward them in (after all, the coordinators are the glue layer, so we expect them to have a number of dependencies). The advantage of this is that we can reconfigure the “states”/”phases” of the application without having to rebuild the dependency chains, but we still have enough friction/explicitness at the system level that systems don’t start to talk to each other directly - keeping their behaviours and impact more isolated.

So in effect this adds rule #6 (or maybe just #1a) - “We can pull within coordinators but push into systems”.
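A sketch of rule #6, with a hypothetical static service locator standing in for whatever “pull model” (service manager, DI framework) a project might use:

    public class SaveSystem { /* ... */ }
    public class AudioSystem { /* ... */ }

    public static class Services
    {
        public static SaveSystem Save { get; } = new SaveSystem();
        public static AudioSystem Audio { get; } = new AudioSystem();
    }

    // Systems only ever receive their dependencies explicitly...
    public class ResultsScreenSystem
    {
        private readonly SaveSystem _save;
        private readonly AudioSystem _audio;

        public ResultsScreenSystem(SaveSystem save, AudioSystem audio)
        {
            _save = save;
            _audio = audio;
        }
    }

    // ...while the coordinator is allowed to pull from the locator, so reshuffling
    // phases doesn't mean re-plumbing long dependency chains.
    public class ResultsCoordinator
    {
        private readonly ResultsScreenSystem _results =
            new ResultsScreenSystem(Services.Save, Services.Audio);
    }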

The case study

One of the examples that jumps to mind when I think about reconfigurability (and it was a pretty extreme example) was on a project that Tag was working on around 2016. We had designed the codebase around encapsulating data to make sure that only systems that needed the data had access to it (under the premise that it would be safer because other systems couldn’t accidentally change the data or cause conflicts). We had very tight ownership between systems and the data and the systems were pretty much modelled on the design domain. It looked a little bit like this (it was actually a lot bigger than this but just pruned to what is relevant for the example):

And then came the day when the product manager delivered us the telemetry spec. This spec outlined the analytics events that we wanted to track and the data that needed to be supplied with each event; and the spec cut right across all our encapsulation boundaries.

For example, here was one of the events:

Select Character Event

  • Selected character

  • Level being played

  • Current score

  • Current lives

As you can see from the diagram, all this data was encapsulated (hidden) in different systems. The player controller was the obvious candidate to trigger the event (as it was responsible for responding to character selection), but it only had access to a fraction of the data that was needed by the event. It’s perfectly understandable why we didn’t think this data would have any relation to each other, but equally understandable why the PM wanted this data bundled together: to understand the context in which players picked particular characters and help inform future product decisions. It would have been difficult for the programming team to have anticipated these requirements during initial planning without that telemetry spec existing.

The solution we came up with was a little bit of a hack, and actually turned out to be quite expensive performance-wise. It was two-fold. Firstly, we ended up exposing lots of systems to the analytics system so that it could pull much of the data it needed (which meant turning a lot of systems into singletons, and then we had the dependency explosion once it was easy for people to access systems anywhere). Secondly, we created the concept of micro events that could be used to send small pieces of data to the analytics system. Each event would listen for the data and build up its own package; once it had all the data it needed, including the trigger, it would be sent to the analytics backend. So for example we had a SelectCharacterEvent that listened for data: the LevelSelectController would send a micro event with the selected level, the PlayerController would send a micro event for the selected character (and the trigger), and the SelectCharacterEvent would then pull the score and lives from the newly created singletons. This was pretty slow, as all events were listening for all data all the time to figure out what they needed! It was also prone to out-of-order issues if the micro event orders were changed - meaning events sometimes wouldn’t be sent, as not all the data had been received before the trigger.

It was a really brittle system (and perhaps there were better workarounds) but fundamentally the design of what we needed the application to do had changed and our codebase was not equipped to respond. If I were doing that game again today I might structure it like this instead:

In the revised diagram above, the analytics events would typically be sent from the coordinators and they would leverage their access to the central databases to provide the relevant data. If you imagine the analytics system not existing at first, you should hopefully be able to see how trivial it would be to add and indeed to add more analytics events. The RoundData could be injected into the coordinator (which makes sense in this case as the phases are linear) but could equally be available globally or exist in a root coordinator. (Also in real life I’d probably split round data up once I better understood the data usage patterns).
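A hypothetical sketch of that revised approach (the analytics interface and type names are mine): the coordinator is handed the central RoundData, so assembling the event payload is trivial and no gameplay system needs to be exposed to analytics.

    public class RoundData
    {
        public string SelectedCharacter;
        public int Level;
        public int Score;
        public int Lives;
    }

    public interface IAnalytics
    {
        void Send(string eventName, object payload);
    }

    public class GameRoundCoordinator
    {
        private readonly RoundData _round;
        private readonly IAnalytics _analytics;

        public GameRoundCoordinator(RoundData round, IAnalytics analytics)
        {
            _round = round;
            _analytics = analytics;
        }

        // Called when the character-select system reports a selection; everything
        // the "Select Character" event needs is already to hand.
        public void OnCharacterSelected(string character)
        {
            _round.SelectedCharacter = character;
            _analytics.Send("select_character", new
            {
                character,
                level = _round.Level,
                score = _round.Score,
                lives = _round.Lives
            });
        }
    }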

Final thoughts

I’m certain there are dozens of details that I’ve missed from this overview that would make it tricky to apply the guidelines wholesale to your own specific applications; but I hope the overall gist came across and that you can see some of the positives and some of the tradeoffs that are made in this approach. Everything in programming is about tradeoffs; there are very few ideal scenarios, so it’s important to make sure we understand what those tradeoffs are and make very considered and deliberate decisions about which tradeoffs we are willing to make based on the objective needs of our product.

Fundamentally, architecture is not something that is done on paper, up front, and then never changes. Architecture cannot be rigid if we want to support the product and its changing goals. We need the actual feedback of writing code, and we need to accept that change is inevitable as we uncover unknowns and surprises, and as we get feedback that helps us inspect and adapt. Being responsive to change in architecture is about reducing the cost not only of changing the internals of a module, but also of changing the relationships between modules. And that is what the principles above try to do.

I would love to hear about architectural approaches that have worked for you and your teams and the guidelines your team has in place to help (and of course the tradeoffs you’ve been willing to make in service of them).

Further reading / watching

  • Write Code that is Easy to Delete - A great blog post that proposes the best way to have an adaptable codebase is to make it so modules are easy to remove and replace. Has some concrete advice for how best to split up a codebase.

  • The Architecture of Uncertainty - A video presentation that does a great job of articulating the fear of change and that architecture does not mean rigid plans we do upfront that never change. Talks about architecting around the cost of change and the impact of dependencies.

Meet the Manager - Brek

Tag Games is full of some incredibly talented people and in our Meet the Manager series we take a look at the people who lead and inspire the team’s development and growth.

Next up, is Brek Carr, our Head of Design.

What do you do at Tag?

As Head of Design, I help to define the design processes we use, support and maintain the well-being of our design team and spread design awareness throughout the studio.

How did you get into Games?

Waaay back in early childhood with the Spectrum ZX and BBCB Micro. I’d spend hours on end rummaging through boxes of cassette tapes, popping them into the tape recorder, pressing play and listening to the screech of the tape as it slowly loaded the splash art line by line… fingers crossed in the fervent hope that it’d actually work this time.

Then along came cartridge consoles like the NES and AMIGA computers with their discs. Load times were reduced from an uncertain 5-15mins depending on how much fluff was on the tape to seconds. I was hooked!

Fast forward to 18 years old and choosing my University course. As I was filling out my application form in the front room for Computer Science degrees, I saw the Queen opening the first video games course at Abertay in Dundee on the TV. After a hurried alteration to my application, I got in and it has been games ever since.


Favourite thing about working at Tag?

The people! The work! The passion! Everyone cares about making amazing games.


What about your work at Tag do you find particularly rewarding?

When a design just feels “right” and you see players’ faces light up as they play. I look at the faces of my designers smiling at the players’ response. It’s great to see people’s pride in that.

And what about the more challenging parts of your work? What really pushes you?

One of the more challenging parts of work I find can be to convince others of the importance of the small things in design. Things that are hard to quantify but are important, especially when it's several of these seemingly small things that combine to build a more cohesive experience.

It's compounded by there not being a standard way of communicating these things, meaning there isn’t a common understanding or consensus on their importance.

Someone once said… Trifles make perfection, but perfection is no trifle.

Also… Trifle is delicious!

Favourite thing about Dundee?

It's big enough to have a variety of things going on and small enough to be able to walk to most places you need.

What game character do you think you’re most like?

  • Taz the Tasmanian devil (he’s starred in several games). Why? Because I have a tendency to get over-excited when talking about design and descend into using hand motions and sound effects.

  • Kirby because I’m pink and inhale cakes!

  • Jigglypuff because my singing voice is considered a weapon!

National Pet Day

We love to celebrate a national day here at Tag so when we found out today was National Pet Day, we just had to share some of our favourite furry friends with everyone!

Egg.

This is Egg; he sleeps like this sometimes.

Pumpkin.

Elderly Lady. Very shouty and fierce, her tail often stars in zoom meetings.

Fluffy.

The youngest in the family (at 16). Also quite demanding.

Sushi.

This is Sushi. Half retriever / Half alligator. Loves sticks, plushies and will run through a kid to catch their ball.

Chaos.

This is Chaos (his brother's name is Trouble). He's 11 and suffers from anxiety, but he's on anti-anxiety meds now and getting more comfortable every day. His hobbies are working out and sleeping in weird positions on top of dumbbells.

Scout.

This is Scout. Her hobbies include dressing up, blowing bubbles and stealing any cheese she can find.

Polly.

This is Polly, before her cat tower fell apart.

Soot.

This is Soot. He had a near death experience when he decided it would be a good idea to swallow his rubber smurf toy, which then lodged in his gut. Since the surgery to remove it, he's decided he can't be arsed to cat anymore and just eats and lounges on the sofa. He is afraid of the outdoors and shows no enthusiasm for life except when pestering me to fill his bowl and when bullying his sister and asserting his position as the household alpha. He only gets away with this because he is now the size of a dog. He can often be heard in my zoom meetings, because he's shouty and attention seeking (particularly of me) but never seen, because he is too fat now to be able to get up onto my desk.

Sweetie.

This is Sweetie (the kids named them!) She is Soot's sister. She stays out all night and comes in to sleep it off during the day. She's an apex predator in the garden and timid in the house. She tries to find places to sleep where Soot can't reach her, which is easier than it sounds because he needs a run up to get onto the sofa (!!). She knows how to cat.

Indiana.

Meet Indiana! Her name is a tribute to Henry Jones Jr's dog.
She is 9 years old and she smiles when she sees someone she really likes. We try to put together a band where she sings while I play the harmonica.

Meet the Manager - David

Tag Games is full of some incredibly talented people and in our Meet the Manager series we take a look at the people who lead and inspire the team’s development and growth.

Next up, is David Murdoch, our Head of QA.

What do you do at Tag?

I’m the Head of QA. I currently manage QA on our projects.

How did you get into Games?

I’ve been a keen gamer since playing on the NES in the eighties (my favourite game at the time was Super Mario Bros 3) but it really started when I got an Atari STE for my Christmas, where games like Zak McKracken and Dizzy got me hooked. Career-wise, I briefly started a Games Technology course at University but decided to take time off, and during that time I landed a temporary contract as a QA Tester, testing F1 2000 on the PlayStation. As a side note, I’m also a keen Formula 1 fan, so this was a great opportunity for me.

 

Favourite thing about working at Tag?

The people, the culture and the drive to succeed as a team. Everyone’s opinion is welcome and it doesn’t feel like there is any seclusion. All disciplines work together, which is great.

What about your work at Tag do you find particularly rewarding?

Implementing new processes or strategies which have a positive impact on the company and projects. As a bonus, it always feels rewarding finding a defect which is particularly elaborate in its nature. I would imagine this would be the case for a lot of people who work in QA.

And what about the more challenging parts of your work? What really pushes you?

Understanding new systems or parts of the game that are complex to test, with lots of scenarios to think about. This can at times feel quite daunting, but it just takes time and teamwork to work through the process and break things down into smaller, manageable pieces. The drive comes from wanting to succeed as a team and learning new things.

Favourite thing about Dundee?

The general scenery, and walking or running next to the River Tay; it is very calming and serene. I also like that it is quite hilly, which is certainly a challenge when cycling.

What game character do you think you’re most like?

This is a tough question! Maybe Isaac Clarke from the Dead Space series. I have a very slight resemblance (maybe), but it's one of my favourite franchises so I'll go with that (I spent way too much time getting all the trophies in this game).

International Women's Day 2023!

To celebrate International Women's Day, we decided to bring the women of Tag together to discuss women who are inspirational to us personally and professionally. We talked about what advice we would give to our younger selves and to the younger generations that follow us.

Ali, Studio Support Lead

First of all, a shout out to all the incredible women out there who are paving the way for future generations of women in the world and workplace. Your resilience, strength, and determination are truly inspiring! One woman in particular who has been instrumental to my development throughout my career is Jacqui Henderson. Jacqui cultivated my natural curiosity to always keep learning, growing and developing as an individual and as a professional in my field. She believed in me. She helped me push boundaries. I will forever be grateful for her commitment to seeing me succeed. Thank you, Jacqui.

To my younger self and all the young women out there, I would offer this advice:

Believe in yourself - You are capable of achieving anything you set your mind to. You got this!

Build your network - Network as much as you can in your industry and beyond. You never know where a connection could lead.

Find a mentor - Find a mentor who can provide guidance, support, and advice as you navigate your career and life.

Keep learning - Always be curious and open to learning new things. This will help you stay ahead of the curve and adapt to changes that come your way.

Speak up - Your ideas and opinions matter. Don't be afraid to share them, even if you're the only woman in the room.

Don't be afraid to chase your dreams and aim high. With hard work and determination, you can achieve anything you set your mind to.


Lauren, Producer

I often forget the challenges that being a woman in a male-dominated industry brings; I genuinely don't pay attention to it 90% of the time. There have been a few occasions where I have seen the women around me doing so well, and felt an absolute sense of pride at how far we have come to be represented, present, and listened to. I have heard stories of difficulties, and continue to see the prejudice on social media, but I am so fortunate to be in a workplace where I haven't felt that struggle myself.

I'm extremely lucky that the person who has inspired my career and helped me grow as a Producer is someone I get to work with every day: Carol Clark.

Carol ensured I was given every opportunity at Tag Games, and dedicated time to help me understand game production. She encouraged my learning as an individual and with peers. Having come from a different industry, I found it a whirlwind of information. Since then, she has made exceptional efforts to help me understand the importance of culture, and how to work best with the people I engage with on a day-to-day basis. I feel so unbelievably lucky to have had this amazing support system directly for over 3.5 years. Carol has taken intentional steps to bring female Producers in the Dundee area together, to help create a support system that is not just contained within Tag Games. She inspires me every day with her strength, consideration and, at times, vulnerability. In truth, I wouldn't be who I am today without her support and push to empower me to be a better Producer, and person.

To my younger self and all the women out there, I would offer this advice:

Try not to doubt yourself or your ability, or compare yourself to other women. Everyone has their own journey, and yours will be completely unique to you. Remember to look up to others, admire them, and seek advice, community, mentorship and so on, but don't beat yourself up over not getting to the same place as other people in the same timeframe. It will work out and you will learn so, so much.


Maria, Finance Assistant

There have been so many inspiring women in both my personal and professional life, too many to mention. I believe that things are improving year on year and that the next generation will hopefully not need to make gender inequality a talking point.

Women can achieve anything they want to if they are hard-working and tenacious. Women often lack the confidence of their male counterparts; this is the biggest challenge we need to work on, because if a woman believes she can do something, she is normally right!

To my younger self and all the young women out there, I would offer this advice:

“Nothing is impossible. The word itself says I’m Possible”.


Indie, HR Administration Assistant


Joining a company like Tag, where women are automatically respected and listened to despite being in a male-dominated industry, makes it easy to forget it isn't like this everywhere. For me, multiple inspiring women have helped me, all in different ways, to become the person I am. Anyone who talks to me for more than five minutes will quickly learn about my love of football and most likely hear me rant about the need for better visibility of sportswomen in general. One person who quickly comes to mind when thinking of inspirational people is former Scotland player Jen Beattie, purely for her determination to play through her cancer diagnosis, as well as for continually being unequivocally herself at all times.

To my younger self, I would say, don't be afraid to speak up for yourself and advocate for what you want, even if you feel that others may disagree. It is just as important for others to hear what you're thinking as it is for you to hear their ideas.


Mina, Mid-Level Programmer

I have been fortunate enough so far in my career that I have had very few issues with being a woman in a male-dominated industry, which I know is not always the case, especially in programming. I am very grateful that I know that I can speak up about anything and be heard at Tag.

While I don't have a particular person to mention, many women who are close friends have been an incredible source of inspiration and encouragement for me. Even seemingly minor things, like words of encouragement and offers of advice and support, go a long way, and I will never forget it.

To my younger self and all women I would say: don't give up. However difficult the road might seem, it is not insurmountable.


Fru, UI Artist

It’s funny—or ironic, or maybe sad—how difficult I found adding to the document that was meant to gather inspirational thoughts from all of Tag’s women. It’s not as if I have no thoughts on the matter: I could talk about the maths teacher in my high school I barely knew, who offered me remedial classes when my parents wanted to take me out of the arts program for almost failing maths. (I know she will never see this, but it is worth saying anyway: Kézér tanárnő, I owe you more than my awkward 16-year-old self could ever express.) I could also talk about my own complicated relationship with being a woman, the comparisons that naturally emerge in conversation when discussing working in male-dominated industries, or my experiences with other women in academia. But no matter how much I thought about it, there was really only one thing I wanted to say on this day. 


To me, age 16, sitting in 8th period remedial maths: you're going to be okay. Not soon, not all at once, but you will be. Don't give up. It will be worth it.

Meet the Manager: Jaid

Tag Games is full of some incredibly talented people and in our Meet the Manager series we take a look at the people who lead and inspire the team’s development and growth.

First up is Jaid Mindang, our Head of Art.

What do you do at Tag?

I’m the Head of Art, which means in a nutshell that I’m responsible for the wellbeing and performance of the artists. 

How did you get into Games?

Sort of by accident, really. I'm of the age that I had a ZX81 and a Commodore 64 and loved computer games, but the career I had chosen for myself when picking a degree course was traditional animation, with a view to animating on feature films. But when Domark Ltd. (the company that later rebranded as Eidos) approached my art college to procure traditionally trained animators to help them raise the standard of their game graphics, the course staff pointed them towards me, since I continued to spend more time than was wise maintaining my high score on the coin-op cabinets in the Student Union. So I freelanced for Domark during the summer holidays of my 2nd and 3rd year, learned how to use Dpaint, and they offered me a job when I graduated. I was still aiming at a career in feature film animation but hadn't sorted things out to do anything about it, so I accepted, thinking I would get around to applying formally to Amblimation in Acton, where they had a studio at the time… it never happened. That was 30 years ago.

Favourite thing about working at Tag?

I really like that Tag puts the wellbeing of its staff very high on the agenda. In our post-Covid society, I feel that employee wellbeing is now a serious conversation topic in the industry, but Tag really does a whole lot more than just pay lip service to that. It’s really a shared priority in the studio leadership.


What about your work at Tag do you find particularly rewarding?

Working with a wonderful, welcoming and inclusive team is the best part. I feel this is an immediate upside of spending so much time on staff wellbeing: everyone is genuinely enthusiastic, positive and collaborative.

And what about the more challenging parts of your work? What really pushes you?

Keeping up with modern production methodology. There are plenty of principles of project management and production I'm just not aware of. Sometimes I find myself having to google acronyms in meetings to follow the conversation. Most of my production experience was learned through trial and error. Carol, our COO, was amused by my allusion to the existence of a “Producer School”, where I feel I should enrol myself in order to catch up with these new-fangled processes. At least I am still learning new stuff.


Favourite thing about Dundee?

I’m working remotely from North Yorkshire, but I’ve been up to Dundee a few times now. I like its proximity to the water, that the cost of living is so much less expensive than what I’m used to. A night at the pub doesn’t break the bank, and I feel more energised from the days I’ve spent working in the Dundee HQ alongside real people rather than just looking at everyone’s faces on Zoom. I guess that last one isn’t unique to Dundee, but that’s one of the things I look forward to when I’m going up there.

What game character do you think you’re most like?

I’ve been cast as background game characters in a couple of guises - a villain in a Scooby Doo SNES game, the lowest tier of security guard in Stolen (PS2/Xbox) - you know the fat, lazy type who ignores the security monitors while reading his newspaper and eating doughnuts. I’d like to think the similarity was purely visual, though - I did carry a bit more weight in those days - HAHA! I’ve been likened to Wario more frequently than any other game character by other people, which is weird because I always play Donkey Kong.