When we talk with ambitious product builders, it’s easy to discuss how prototyping could fit into their product roadmap as a tool. What’s surprisingly hard is agreeing on what a prototype actually is.
Most people have an intuitive sense of a prototype as an early or rough version of the final work, but agreeing on how rough, or even what “rough” means, isn’t easy. Whether a prototype should have functioning code is a common point of discussion, but conversations often break down when we disagree on whether a “prototype” should (or could) be feature-complete, or whether it should look and feel like the final product. Each of these questions has merit, but it’s challenging to have that discussion if we can’t double-click into the word “prototype” and open up its dimensions.
Frustratingly, the current models of language for talking about prototypes fall short. Connected’s model, illustrated in our playbook, draws a line in the sand between high-fidelity and low-fidelity along the code-vs-no-code distinction. An example I personally like better is Marty Cagan’s Flavors, which defines multiple goals and methods. With either model, it’s not intuitively clear how to talk about combinations of goals (say, a High-Fidelity User Prototype that is also a Feasibility Prototype) or how closely to approximate the final form. Implied but not explicit is a hierarchy of effort, although more effort may not translate into a better tool. A High-Fidelity User Prototype is more work than a Low-Fidelity User Prototype, but it won’t replace the need for one, and it isn’t better in all circumstances.
Too often, a discussion of prototype goals turns into a discussion of who’s doing the work or which tool is appropriate. Extracting the valuable conversation about the prototype’s intentions then hinges on a set of shared assumptions about why a tool was chosen. Tool-based language gives us terms like “an InVision prototype,” from which we can make reasonable inferences about goals. Unfortunately, this language increasingly breaks down as tools get better and their outputs blur the line between functional and non-functional: InVision Craft, for example, has ample means to let designers pull in live data without building coded services.
To have better discussions about prototypes, we need to start talking about what we want the prototype to do or what elements we want it to have, not how it’s built. I posit that there are three main characteristics we can use to describe a prototype—each dimension is a spectrum, not a binary choice, and each is independently interesting.
How much does the prototype respond like the final product (to external data or interaction)?
Connected’s existing language around prototyping gives us a first dimension, with Fixed Prototypes on one end (hard-coded data or flows) and Dynamic Prototypes on the other (using real-world data or reacting to the user’s input). I call this dimension interaction or functional fidelity. Traditionally, this has required coding a business-logic and presentation layer (an app), and the distinction between engineered and designer prototypes has served as a proxy. Increasingly, this skillset distinction is less of a barrier: designer-focused tools now include data integration and complex navigation flows, while software prototyping tools, and even production-grade toolkits like Flutter, let engineers move faster than ever. Still, the decision on how much interactivity a prototype needs should be made based on its goals, not the skills or availability of the team.
How realistic does the product look or feel (including mirroring real-world constraints)?
Will the prototype “feel” or “look” like a real product? The experiential or visual fidelity dimension covers not just the visual (or physical) polish of the prototype but also how much the realities of implementation should constrain it. Staying strictly on the non-interactive side of the spectrum, it’s easy to see the difference between napkin sketches of a UI and a designer’s final Zeplin files ready for slicing and integration. Both have very low functional fidelity (they don’t do anything), but one conveys the broad strokes of the experience while the other looks the same (and has the same elements) as the real product. Separating the dimensions of functional and visual fidelity lets us be more precise about high-functional, low-experiential goals, like early software validation prototypes that have 100% of the logic written but extremely primitive interfaces.
How complete is the scope of functionality?
A final dimension is the spectrum of feature or complexity completeness. The traditional demo experience of deeply exploring one aspect in a fully interactive, visually complete experience, while other functions or areas are entirely locked out or static, exemplifies low complexity fidelity in an otherwise high-fidelity prototype. Conversely, a service blueprint or user flow with a focus on interactions, exception flows, and error recovery would have high complexity fidelity with very low functional and experiential fidelity.
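To make the three spectrums concrete, here is a minimal sketch in Python. The `PrototypeProfile` record and the numeric scores are my own illustrative assumptions, not values from Connected’s playbook or Cagan’s Flavors; they simply place the examples from the text onto the three axes.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PrototypeProfile:
    """Each dimension is a spectrum from 0.0 (lowest fidelity) to 1.0 (final product)."""
    functional: float    # responds like the final product (live data, interaction)
    experiential: float  # looks and feels like the final product
    complexity: float    # how complete the scope of features is

# Illustrative (assumed) placements of the examples discussed in the text:
napkin_sketch     = PrototypeProfile(functional=0.0, experiential=0.1, complexity=0.3)
zeplin_files      = PrototypeProfile(functional=0.0, experiential=0.9, complexity=0.7)
demo_build        = PrototypeProfile(functional=0.9, experiential=0.9, complexity=0.2)
service_blueprint = PrototypeProfile(functional=0.1, experiential=0.1, complexity=0.9)
```

Framed this way, “high fidelity” stops being a single number: the Zeplin files and the demo build are both high fidelity, but on different axes, which is exactly the distinction a one-dimensional high/low label loses.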
Finally, for orientation, it’s helpful to overlay the existing language we’ve picked up about prototypes on top of this three-dimensional framework.
With a more nuanced and descriptive language for prototyping, we can expand our toolkit to bridge between sketching and building. More importantly, this language lets us talk about the intention of the prototype rather than its construction. Setting explicit goals allows teams to do whatever it takes to be most impactful in early-stage projects, and being clear about intentions ensures the prototype helps measure the expected risks instead of merely being built the expected way.