Now that my clickbait-y headline has your attention, I should qualify my inflammatory statement a little: “WYSIWYG (What You See Is What You Get) in the context of producing interactive user interfaces Must Die.”
The idea that you could visually design your app or website in a tool and then, at the click of a button, have it export a fully functional, interactive user interface (UI) is a seductive one. Why wouldn’t you want that? Non-technical people could create simple apps by themselves without ever needing to write a line of code. Even if the app needed more complex logic behind the scenes, you could at least unburden developers from having to code the UI portion of it.
Not only that – you’d also eliminate any misunderstandings and discrepancies that might creep in when developers translate your pristine, pixel-perfect UI design into code!
This is precisely the kind of promise that many WYSIWYG tools make. Back in the day, Microsoft FrontPage would let anyone build a website with a few clicks. Then it was Dreamweaver. Now it’s Wix, or Squarespace, or one of countless others.
These tools vary in complexity and the amount of creative freedom they afford the designers, but their basic premise is always the same: you just focus on how you want things to look, and the tools will take care of that code stuff automatically.
Sounds lovely. So what’s the problem?
Warning! Iceberg metaphor ahoy!
In a nutshell, how a (piece of a) UI looks is only the tip of the iceberg. Any UI (component) should meet a number of non-visual hygiene factors:
It needs to be device agnostic.
Whether you’re making websites or apps, you can’t escape the reality of different screen sizes, different pixel densities and different input mechanisms. In terms of UI design, that translates to fluid and responsive layouts — i.e. designs that will intelligently stretch, squash or adapt to the dimensions of any screen size.
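As a concrete (if simplified) illustration, here is what “fluid and responsive” means in CSS terms; the class names and breakpoint are invented for the example:

```html
<style>
  /* A fluid column: widths in relative units, not fixed pixels */
  .content {
    max-width: 40em;  /* cap line length for readability */
    width: 90%;
    margin: 0 auto;
  }

  /* Adapt the layout once the viewport is wide enough */
  @media (min-width: 48em) {
    .sidebar { float: left;  width: 30%; }
    .content { float: right; width: 65%; }
  }
</style>
```

The point is that this adaptive behaviour is a design decision in its own right, not something a static picture of one screen size can express.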
Likewise, user input might be via any of the following:
- touch (which means there is no hover state!),
- mouse (meaning multi-touch gestures are not available),
- keyboard (do your interactive UI elements visually indicate focus when you tab through them?) or,
- combinations of those (Windows “convertible” laptops, anyone?).

Your UI must work with all of them.
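The keyboard point in particular is easy to get wrong. A minimal sketch of the fix: give keyboard focus the same visual affordance you give hover (the colour here is arbitrary):

```html
<style>
  /* Mouse users get hover feedback... */
  button:hover { background: #eee; }

  /* ...so keyboard users should get equivalent focus feedback */
  button:focus-visible { outline: 2px solid #0057b8; }
</style>
```

A drawing of a button in its resting state tells a code generator nothing about any of these states.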
It needs to be performant.
We’ve long known that the speed at which a UI reacts to user input is critical. In the context of a web experience, that includes the perceived page load speed. Studies have shown that 79% of web users will abandon a site if it hasn’t loaded in 3 seconds. While the choice of visual styling is most definitely a factor in this instance, a lot of what influences performance comes down to how things are coded.
It needs to be accessible.
Whether it’s to avoid getting sued, wanting to address the largest possible market, or (hopefully) just because you don’t want to be an arsehole, it’s critical to make your UI as inclusive as possible. For example, some aspects like the sizing and colouring of text and UI elements will affect how accessible your UI is to people with various visual impairments. Yet how accessible is it to someone with no eyesight at all? Or one of countless other impairments (hearing, motor, neurological, etc.)? This will be highly dependent on how the site has been coded. Does your UI code hook into relevant accessibility APIs? Does it use them appropriately?
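To make that concrete, compare two ways of coding the same visual button. Only the second one hooks into the platform’s accessibility APIs: it is focusable, keyboard-operable, and announced as a button by screen readers. (The `save()` handler is a placeholder for the example.)

```html
<!-- Looks like a button, but is invisible to assistive technology
     and unreachable by keyboard -->
<div class="btn" onclick="save()">Save</div>

<!-- A real button: same pixels, vastly better semantics -->
<button type="button" onclick="save()">Save</button>
```

A WYSIWYG tool that only knows what the element looks like has no basis for choosing between the two.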
It needs to be robust.
It’s relatively rare that your UI will only ever contain fixed content that is fully known at design time. Far more commonly, much of its content will be dynamic: displaying a user’s name and photo, embedding user-generated content, rendering a chart based on some downloaded data, and more. These are all examples of the kind of things your UI will need to cope with, and all of them will have edge cases that need addressing. What happens when your user has a very long name? What if the user-generated content is written in a right-to-left language? What if an uploaded image is not in the aspect ratio you need? What if the downloaded data requires more bars in your chart than you have screen space for? Besides dynamic content, you also have user settings to contend with. Will your UI hold up if some users have configured their systems to have a larger default font size?
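A few of these edge cases can be handled defensively in CSS alone. A sketch, with invented class names:

```html
<style>
  /* Very long user names: truncate rather than break the layout */
  .user-name {
    max-width: 12em;
    white-space: nowrap;
    overflow: hidden;
    text-overflow: ellipsis;
  }

  /* Right-to-left user-generated content: let the browser infer direction */
  .user-content { unicode-bidi: plaintext; }

  /* Uploaded images in an unexpected aspect ratio: crop, don't distort */
  .avatar { width: 4em; height: 4em; object-fit: cover; }
</style>
```

Each of these is a deliberate decision about what should happen when reality diverges from the mock-up, and none of it is visible in a picture of the happy path.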
It needs to integrate well into its target environment.
A web experience should be SEO-friendly, mindful of its URLs (to enable bookmarking, sharing, and more) and aware of other considerations that are unique to the web.
An iOS app should heed Apple’s Human Interface Guidelines not only for the visuals but also for how users navigate in the app — it’s not enough to look right, it needs to feel right too. It should probably consider (and where appropriate make use of) iOS-specific features like App Extensions, Apple Watch integration and more.
Similarly, an Android app should aim to follow Google’s Material Design while being mindful of Android-specific UI conventions, which sometimes differ from those of the web or iOS. By taking advantage of how an Android app can expose different tasks (which can potentially be invoked by other apps), you can provide a more seamless and useful experience. Also consider integration with Android Wear and/or Android Auto.
These factors share the following commonalities:
- They have a very tangible effect on the user experience (UX).
- Achieving them demands close collaboration between design and development disciplines.
- A visual picture of a UI (component) does not provide enough information to answer all the questions that will arise.
This last point is crucial in the context of this article.
If I’m using a WYSIWYG tool to design a UI and I style some text to be bold and have a larger size, how can this tool know why I did that? Was it because that text is a heading (and if so, a heading of what exactly — where exactly does its section begin and end?). Or was it perhaps simply a phrase I intended to emphasise?
If that tool were to then generate code, it would probably apply the correct styling. The font would be bold and the size would be just so. It would look right. However, if the output was HTML, the tool couldn’t know whether that text should be marked up as <h1>, <h2> or maybe something entirely different like a <blockquote> — to play it safe it might just make it a <div>. That would be a pity, because the choice of mark-up will have an effect on the accessibility (e.g. a screen reader might read out an outline of all the headings on the page to aid navigation) and SEO-friendliness (a search engine might ascribe more relevance to text inside an <h1> than to text in a <div>) of the resulting code.
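In other words, the same pixels can carry very different meanings in HTML, and a tool that only sees the picture cannot choose between them (assume a stylesheet makes all three render identically):

```html
<!-- Identical styling, three different meanings -->
<h2>Pricing</h2>                      <!-- a section heading -->
<p><strong>Pricing</strong></p>       <!-- emphasised body text -->
<div class="big-bold">Pricing</div>   <!-- no meaning at all -->
```

Only the author knows which of these was intended, and that intent is precisely what the visual design fails to capture.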
How about something more complex, like an interactive pie chart? If I draw a circle divided into segments in my WYSIWYG tool, how does it know that my intent was for that drawing to be a pie chart, as opposed to a picture of a fancy bicycle wheel or a stylised pizza? Even if it did know my intent, how would it know where the data was coming from or what colours to use if a different dataset required more segments? How about accessibility — what should the text alternative to this pie chart be for an audio browser or braille display? Most WYSIWYG tools will throw their hands up in despair at this point and simply output your pie chart as a static PNG image.
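For contrast, here is roughly the kind of output that would actually satisfy those requirements: a chart drawn as SVG with a text alternative and a caption. The data and labels are hypothetical, and the segment geometry is elided for brevity:

```html
<figure>
  <svg viewBox="0 0 100 100" role="img"
       aria-label="Browser share: Chrome 60%, Firefox 25%, other 15%">
    <!-- segment <path> elements would be generated from the data -->
  </svg>
  <figcaption>Browser market share, Q1</figcaption>
</figure>
```

Producing this requires knowing the data source, the labelling, and the text alternative, which is exactly the information a drawing of a segmented circle does not contain.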
WYSIWYGBWYGIG: What You See Is What You Get But What You Get Is Garbage
Clearly, there’s a lot more information that the designer would need to provide to a WYSIWYG tool in order for it to generate UI code that satisfies all the hygiene factors listed above.
Of course, most WYSIWYG tools acknowledge this to some degree. They’ll often let you assemble your design from a set of common UI widgets (buttons, inputs, checkboxes and so on), which behind the scenes apply some of the necessary semantics. For instance, by dragging and dropping a button widget into your design, the tool knows that it is indeed a button (which, in turn, is a thing that can be clicked and has various states) as opposed to a generic rectangle with some text inside it. It can then output more appropriate code.
Taking that to its logical conclusion though, if there were a hypothetical WYSIWYG tool that allowed you to specify all the nuances of your UI component (its responsive layout behaviour, its interactive behaviours, its accessibility attributes, what coding conventions the output should follow, etc.), it would inevitably be quite complex and have a very steep learning curve. In effect, designers would be programming the UI design because they would need to tackle the same questions that developers would.
It seems to me that this would pretty much defeat WYSIWYG’s whole raison d’être, which is to avoid programming altogether. The thing is, though, the actual tools themselves are making a valiant effort. From a technical perspective, they are often ingenious bits of software and it’s amazing just how much they can do. The problem is the unattainable dream they are chasing after. WYSIWYG code generation was never a sensible expectation to have, it still isn’t, and I highly doubt it ever will be.
What to do?
Great. The promise of WYSIWYG tools generating decent, production quality code is fundamentally flawed. What now?
Let’s take a step back and think of what we’re actually trying to achieve. We want to craft great, interactive user experiences. Interactive prototypes can be an invaluable tool towards that end — they enable more lifelike usability testing and they can convey the UI’s intended behaviour to developers and other stakeholders more accurately than other means.
I’d therefore say that using a WYSIWYG tool to mock up such prototypes can be useful, as long as everyone acknowledges that they are throwaway prototypes. The implications of that are:
- Each prototype only needs to be good enough for its intended purpose. For example, if the intent is to convey or test navigation through an app, there’s rarely any need to have the visual aesthetics fully branded and fleshed out.
- There can be many little prototypes. It’s OK not to have a single, all-encompassing master prototype. Instead, you can have many simple prototypes that each only demonstrate one area of interest.
- Not every prototype needs to be created in the same tool. Even if you use a WYSIWYG tool to generate some of your prototypes, it might not be the right tool for the job every time. Sometimes a few code snippets in CodePen will do the job. Other times a paper prototype will suffice. Do whatever’s fastest and cheapest.
- Once a prototype has served its purpose, it gets retired. It will have been set up to test or demonstrate something. Once the outcome is known, the real product can be built or updated accordingly. There’s no need to maintain the prototype afterwards. Over time, this might mean that what is in the real code differs in some way from the prototypes that went before (perhaps the visuals change, or things are altered due to technical reasons), but that’s absolutely fine! If you need a “single source of truth”, make it the final software itself, not one of your design artefacts.
With respect to the last point, it is perhaps worth noting that many organisations are obsessed with having some “high fidelity” visual design that shows exactly how the finished software should appear on screen. Often, presumably as a misguided attempt at combining detailed visual design with documentation of interactive behaviour in the same artefact, WYSIWYG tools are used to create them. Other times, visual design tools like Photoshop or Sketch will be used to create non-interactive visual mock-ups. In either case, what then tends to happen is that this visual prototype or mock-up becomes the thing that clients and other stakeholders review. It becomes the yardstick by which everything else that follows is measured.
This attitude leads designers down an unproductive path of trying to refine and polish the visual mock-ups to perfection before they feed into development. It essentially forces a waterfall process onto the project and does not lend itself to a leaner, more agile way of working. Furthermore, as highlighted in the list of hygiene factors above, the output of WYSIWYG tools (or, worse, static pictures) fails to capture and communicate many important aspects of the UI.
Conversely, adopting leaner, more granular prototypes and visual artefacts not only improves efficiency, but also helps everyone involved in the project better understand the nature of what they are trying to create.
So, in summary: WYSIWYG Must Die. 😜