The real value of tokens. Part 1
And why I believe we are on the way towards a completely tokenized design future!
This article is part 1 of a two-part series about design tokens.
Link to Part 2
Tokens have been all the rage in the design world for the last few years, especially since Figma announced their new ‘variables’ capability at Config. Before that announcement, none of the big tools natively supported design token functionality, so people had to hack it together with plugins, verbal agreements with the dev teams (which are fragile), spreadsheets and other dark magic. As a result, it was mainly the big companies that actually made tokens part of their design systems (and even then, mostly for colors and typography, and only rarely for dimension tokens like size, space and corner radius).
For many other designers, it all feels overwhelmingly complicated, and above all, it's unclear what the actual value is in going all-in on tokens instead of doing what they already do: use hard-coded values for some things and styles for others.
In part 1 of this guide I’ll answer two main questions: why tokens, and why now? In part 2 I’ll dive more practically into the recipe for creating a token system that works.
But first - quick intro to set the stage for my perspective:
I’ve been climbing the ropes of web and product design specifically, and graphic design more broadly, for the last 13 years. I come at this with some scar tissue, as every year I have this moment when I look at how I worked on previous projects and want to facepalm myself in disbelief: “How primitive and needlessly laborious my process was!” And this holy ceremony repeats itself every year. Knowing my design tools inside out became a passion of mine and very quickly I started teaching design tools to other designers, students and professionals who needed an extra edge. Nowadays, I mainly do workshops for product design teams teaching them how to get the best out of Figma with their processes and design system. I’m a total design systems and Figma geek, so you and I probably have much in common.
So, why tokens?
To answer this question thoroughly we first need to talk about why the move towards greater abstraction took place in both programming languages and user interface design tools.
Abstraction in programming languages
In short: the increase in abstraction was a natural evolution. It let developers express their intent with greater speed and efficiency.
In more detail: computer languages have followed an evolutionary trend for the last ~70 years, starting from machine language (zeros and ones) to Assembly, to languages like C and C++, to higher-level languages like JavaScript, Ruby and Python. You can think of all these languages as steps in a tower, where machine language is at the bottom and languages like JavaScript and Ruby are closer to the top. Usually, the higher up the abstraction tower you go, the more expressive, effective and reusable the language is.
The tower was, for the most part, built from the bottom up, and for good reason: it takes a lot of knowledge to write machine language and Assembly, so the less time developers spent tinkering with specific micro-instructions for the computer and figuring out memory allocation, the more time they had to declare, in human-readable language, what they wanted to achieve. The more abstract you go, the easier the code is to understand, and the more effort you can spend thinking about ‘what do I want to happen’ and less about ‘how would the machine do it’.
It's worth noting that developers pay a price for more convenient high-level languages: a reduced ability to ensure performance is optimized. In a high-level language you're not going deep enough to control the actual instructions sent to the machine, and what you can't control, you can't optimize. This is one of the reasons why, in industries where every drop of efficiency is paramount, you'll see devs use lower-level languages.
Abstraction in user interface tools
In the design tools space, abstraction took a slightly more straightforward path, mainly grouping hard-coded design choices into a ‘sort of variables’: you define something once, reuse it in many places, and can change the definition in one place and watch the ramifications play out across all the designs. These are your typical paragraph and character styles (still love you, InDesign), global color swatches (Illustrator), symbols (Flash and Sketch), components, grid styles, page layouts, margins and smart objects (remember Photoshop?).
The abstraction in most of these cases was very simple: the lower part of the tower is the hard-coded value (a HEX color, a font family, a stroke style, etc.) and the higher level is a style that points to those specific values. In more advanced and technical tools like InDesign you could point paragraph styles to other paragraph styles (using the ‘based on’ feature) and include distinct character styles and color swatches in them, but that's an exception to the general rule of a rather simple, single level of indirection.
It really helped: instead of memorizing the HEX values of your ten brand color shades, you just needed to find your way through much more human-friendly names like ‘primary-5’, ‘heading-1’ and so on.
Unlike in the code tower, the only ‘tax’ for using the more abstract styles was the time it took to create and manage them. That cost pales in comparison to the benefits: making sweeping changes across many screens or pages becomes a huge pain if all the values are manual, and it's very easy to lose track of how many distinct shades of gray or sizes of text you've used that way. The same goes for the benefits of using components.
Ok, so all of these are variations on ‘styles’ with one level of abstraction: a hard-coded value is named and saved somewhere once and can be used over and over. So what do we need tokens for?
We need them for two main reasons:
Reason #1
Styles are idiosyncratic to the design tool in use. By which I mean that they are particular to that tool and correlate neither to other design tools nor to coding languages, especially across platforms (vanilla web, React, Android and iOS, for example). This is true both of how they are written and organized into folders and of what each style represents.
For example, in Sketch typography styles include properties like text color and alignment, while in Figma they don't. Both understand some level of foldering, and both represent font family, font weight, font size, line height and letter spacing. So there's some overlap, but not perfect overlap. In coding languages there isn't even a unified idea of a bundle such as a ‘typography style’; each language needs to unbundle it into the specifics of font family, font weight and so on. This is a recipe for misunderstandings about where changes reside, huge time sinks when moving from one tool to another, and trouble finding a common language for communicating designs to developers.
Design tokens alleviate the vast majority of that problem, because they are platform-agnostic. Usually written and saved as a JSON file (a structured text file), they can both be used by a design tool and read by developers, who only need to specify the mapping between each token and the actual CSS/SCSS/React/Java/Swift visual property once, and can then use the tokens as shorthand to tell components how to look.
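To make the ‘map once’ idea concrete, here's a minimal sketch, not any real pipeline: the token names, file contents and output format are all illustrative, and the JSON loosely follows the W3C Design Tokens Community Group draft ($type/$value). A few lines of Python flatten the token file into CSS custom properties, after which developers never touch raw HEX values again:

```python
import json

# An illustrative token file in the style of the W3C DTCG draft format.
# "{color.gray-900}" is an alias reference to another token.
TOKENS_JSON = """
{
  "color": {
    "gray-900": { "$type": "color", "$value": "#141414" },
    "primary-text": { "$type": "color", "$value": "{color.gray-900}" }
  }
}
"""

def to_css_variables(tokens_json: str) -> str:
    """Flatten a token file into CSS custom properties.

    Alias references become var() lookups, so changing a core token
    automatically updates every semantic token that points at it.
    """
    tokens = json.loads(tokens_json)
    lines = [":root {"]
    for group, entries in tokens.items():
        for name, token in entries.items():
            value = token["$value"]
            if value.startswith("{") and value.endswith("}"):
                # "{color.gray-900}" -> var(--color-gray-900)
                ref = value[1:-1].replace(".", "-")
                value = f"var(--{ref})"
            lines.append(f"  --{group}-{name}: {value};")
    lines.append("}")
    return "\n".join(lines)

print(to_css_variables(TOKENS_JSON))
```

The same token file could just as easily be mapped to SCSS variables or a React theme object; that single mapping step is the whole cost on the dev side.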
So basically tokens provide a unified, generic language, with strictly defined properties, to communicate design decisions.
In the high-paced, multi-faceted digital jungle we live in - it’s no small gift indeed.
Reason #2
Modes (light/dark, loose/dense, etc.). Themes (brand A and brand B for white-label or multi-sub-brand products). Generally, any global style setting an app can have (and these have become more and more common in the last couple of years) necessitates a mechanism better than manually adjusting every design decision for every mode, which is both hugely time-consuming and very hard to maintain and develop.
So we need some form of two-layered architecture of ‘styles’. One layer records local design decisions so they can be recalled later (as styles already do). The other layer, and here's the most important part, stays persistent across multi-mode environments and points to the relevant first-level style for each mode. This is achieved through a referencing mechanism called ‘aliasing’, and tokens, unlike regular ‘styles’, build aliasing in as an integral part of how they work. Here's a quick example of how aliasing works with tokens:
Let's say we want to allow light and dark modes in our UI. That means we need a set of core color tokens, say gray-100 (#F5F5F5) through gray-900 (#141414) in increments of 100:
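In a token file, that core scale might look something like this (a minimal sketch in the style of the W3C Design Tokens Community Group draft; the middle value is illustrative):

```json
{
  "gray-100": { "$type": "color", "$value": "#F5F5F5" },
  "gray-500": { "$type": "color", "$value": "#737373" },
  "gray-900": { "$type": "color", "$value": "#141414" }
}
```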
and a set of 2nd level alias tokens, which have semantic meanings like ‘primary_text’ or ‘secondary_text’:
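The W3C draft doesn't yet standardize modes, so tools express them in their own ways. Here's a hypothetical sketch with one token set per mode, using brace references (in the style of Tokens Studio) that point back at the core gray tokens:

```json
{
  "light": {
    "primary_text": { "$type": "color", "$value": "{gray-900}" },
    "secondary_text": { "$type": "color", "$value": "{gray-500}" }
  },
  "dark": {
    "primary_text": { "$type": "color", "$value": "{gray-100}" },
    "secondary_text": { "$type": "color", "$value": "{gray-500}" }
  }
}
```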
These semantic tokens don't point straight to specific HEX values; rather, they point to two different core color tokens, depending on the mode. In light mode ‘primary_text’ can point to gray-900, and in dark mode it can point to gray-100. So you assign a semantic token to a piece of text and it'll work perfectly in each of the modes.
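The resolution logic can be sketched in a few lines of Python. This is a toy model, not any tool's actual implementation; the token names and hex values mirror the example above:

```python
# Core tokens: human-friendly names for hard-coded values.
CORE = {
    "gray-100": "#F5F5F5",
    "gray-900": "#141414",
}

# Semantic tokens: one core-token target per mode (the alias layer).
SEMANTIC = {
    "primary_text": {"light": "gray-900", "dark": "gray-100"},
}

def resolve(token: str, mode: str) -> str:
    """Follow a semantic token to its core token, then to a HEX value."""
    return CORE[SEMANTIC[token][mode]]

# Same semantic token, different result per mode: dark text on light
# backgrounds, light text on dark backgrounds.
print(resolve("primary_text", "light"))
print(resolve("primary_text", "dark"))
```

Note that switching modes touches only the lookup, never the design itself: that's the whole point of the second layer.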
This is going up the tower of abstraction we mentioned before, because with time you don't even bother looking at where the semantic tokens point (the actual HEX values, or even the names of the core color tokens). You just ask yourself: ‘Is this a text object that needs higher prominence over other pieces of text in this part of the screen? Great, let's assign it the primary_text token’, and you can be sure it'll work perfectly in both light and dark modes.
Design tokens are the first opportunity designers have had in many years to step up the abstraction tower and think in terms of meaning and purpose rather than concrete hard-coded values. This is a big cultural shift, but an inevitable one.
Since modes (like light and dark) only recently became a common property of modern applications, there wasn't much pressure on design tools to embrace tokens. When there are no modes in your app, there's less need for second-layer referencing (styles pointing to other styles), and the first reason alone wasn't enough to make tokens worth embracing for most tools. Now that modes are all around us, the pressure is way up.
Which raises the question…
Why now?
As mentioned before, multi-mode ability became a commodity in applications. Pressure built up, and people started implementing tokens even without support from the popular design tools, using plugins (like the awesome Tokens Studio plugin for Figma), third-party web tools, and even spreadsheets mapping styles to tokens. These were generally very uncomfortable solutions, because playing around with styles and saving, naming and mapping them should be one unified, cohesive experience, and the only places that can provide that are the product design tools themselves.
Which brings me to the latest Config, and Figma’s release of ‘Variables’. Now we finally have the ability to start thinking about design decisions as tokens, with aliasing ability and modes that affect components that use tokens within them.
Now, the Variables spec is far from complete. There are basically only two types of tokens available: color and dimension. (Figma calls the latter ‘number’, which is needlessly confusing: ‘number’ and ‘dimension’ are different things in the W3C community draft, and in Figma the same type is sometimes used as an actual number (in prototypes) and sometimes as a dimension for height, width, corner radii, etc.) There are also no composite tokens yet, such as typography, shadow, border and gradient, which are badly needed (I hope they'll arrive sometime this year). But it's the right direction, so kudos to Figma for being the first big design tool to implement it.
As we've seen with plenty of other features in the past, this is going to be copied quickly and become a standard. After that, tools are going to race to support the full set of token types the W3C community group agrees upon, and probably expand upon it in some ways. We are accelerating towards maximum tokenization, which will plateau when all the leading tools get there. Yes, I believe token systems, instead of the classic ‘styles’, are going to be ubiquitous in all the design tools that want to stay relevant, and either Figma or some other tool is going to lead the way to complete support for every stylistic decision that can be defined as a token. This is definitely the future of how we make, communicate and document design decisions.
If you weren't convinced already, the two reasons above should persuade you that the right time to switch to tokens is now. If you don't switch, other designers will, and you'll be at a competitive disadvantage for product design jobs pretty soon.
This concludes the first part of the series. If I've been successful, you've come away understanding what tokens actually are, why they're needed, and why now more than ever.
In the second part I’m going to offer a recipe for how to organize a token system that is cohesive and makes sense, step by step.