CSS min() All The Things

Did you see the article Chris Coyier published back in August? He experimented with CSS container query units, going all in and using them for every single numeric value in a demo he put together. And it was… not too bad, actually.

See the Pen by .

What I found interesting about this is how it demonstrates the complexity of sizing things. We’re constrained to absolute and relative units in CSS, so we’re either stuck at a specific size (e.g., px) or computing the size based on sizing declared on another element (e.g., %, em, rem, vw, vh, and so on). Both come with compromises, so it’s not like there is a “correct” way to go about things — it’s about the element’s context — and leaning heavily in any one direction doesn’t remedy that.

I thought I’d try my own experiment but with the CSS min() function instead of container query units. Why? Well, first off, we can supply the function with any type of length unit we want, which makes the approach a little more flexible than working with one type of unit. But the real reason I wanted to do this is personal interest more than anything else.

The Demo

I won’t make you wait for the end to see how my min() experiment went:

Taking website responsiveness to a whole new level 🌐

— Vayo (@vayospot)

We’ll talk about that more after we walk through the details.

A Little About min()

The min() function takes two values and applies the smallest one, given the element's context. For example, we can say we want an element to be as wide as 50% of whatever container it is in. And if 50% is greater than, say, 200px, cap the width at 200px instead.

See the Pen by .

So, min() is sort of like container query units in the sense that it is aware of how much available space it has in its container. But it’s different in that min() isn’t querying its container dimensions to compute the final value. We supply it with two acceptable lengths, and it determines which is best given the context. That makes min() (and max() for that matter) a useful tool for responsive layouts that adapt to the viewport’s size. It uses conditional logic to determine the “best” match, which means it can help adapt layouts without needing to reach for CSS media queries.

.element {
  width: min(200px, 50%);
}

/* Close to this: */
.element {
  width: 200px;

  @media (min-width: 600px) {
    width: 50%;
  }
}

The difference between min() and @media in that example is that we’re telling the browser to set the element’s width to 50% at a specific breakpoint of 600px. With min(), it switches things up automatically as the amount of available space changes, whatever viewport size that happens to be.

When I use min(), I think of it as having the ability to make smart decisions based on context. We don't have to do the thinking or calculations to determine which value is used. However, coupling min() with just any CSS unit isn't enough. For instance, relative units work better for responsiveness than absolute units. You might even think of min() as a boundary in that the result scales with the flexible value but never exceeds the fixed one.

I mentioned earlier that we could use any type of unit in min(). Let’s take the same approach that Chris did and lean heavily into a type of unit to see how min() behaves when it is used exclusively for a responsive layout. Specifically, we’ll use viewport units as they are directly relative to the size of the viewport.

Now, there are different flavors of viewport units. We can use the viewport’s width (vw) and height (vh). We also have the vmin and vmax units that are slightly more intelligent in that they evaluate the viewport’s width and height and apply either the smaller (vmin) or larger (vmax) of the two. So, if we declare 100vmax on an element, and the viewport is 500px wide by 250px tall, the unit computes to 500px.
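
To make that concrete, here’s a quick sketch, assuming that same viewport that is 500px wide by 250px tall:

.box {
  width: 50vmax;  /* 50% of the larger viewport edge (500px) → 250px */
  height: 50vmin; /* 50% of the smaller viewport edge (250px) → 125px */
}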

That is how I am approaching this experiment. What happens if we eschew media queries in favor of only using min() to establish a responsive layout and lean into viewport units to make it happen? We’ll take it one piece at a time.

Font Sizing

There are various approaches for responsive type. Media queries are quickly becoming the “old school” way of doing it:

p { font-size: 1.1rem; }

@media (min-width: 1200px) {
  p { font-size: 1.2rem; }
}

@media (max-width: 350px) {
  p { font-size: 0.9rem; }
}

Sure, this works, but what happens when the user is on a 4K monitor? Or a foldable phone? There are other tried and true approaches for fluid typography, too. But we’re leaning all-in on min(). As it happens, just one line of code is all we need to wipe out all of those media queries, substantially reducing our code:

p { font-size: min(6vmin, calc(1rem + 0.23vmax)); }

I’ll walk you through those values…

  1. 6vmin is essentially 6% of the browser’s width or height, whichever is smallest. This allows the font size to shrink as much as needed for smaller contexts.
  2. For calc(1rem + 0.23vmax), 1rem is the base font size, and 0.23vmax is a tiny fraction of the viewport‘s width or height, whichever happens to be the largest.
  3. The calc() function adds those two values together. Since 0.23vmax is evaluated differently depending on which viewport edge is the largest, it’s crucial when it comes to scaling the font size between the two arguments. I’ve tweaked it into something that scales gradually one way or the other rather than blowing things up as the viewport size increases.
  4. Finally, the min() returns the smallest value suitable for the font size of the current screen size.

And speaking of how flexible the min() approach is, it can restrict how far the text grows. For example, we can cap this at a maximum font-size equal to 2rem as a third function parameter:

p { font-size: min(6vmin, calc(1rem + 0.23vmax), 2rem); }

This isn’t a silver bullet tactic. I’d say it’s probably best used for body text, like paragraphs. We might want to adjust things a smidge for headings, e.g., <h1>:

h1 { font-size: min(7.5vmin, calc(2rem + 1.2vmax)); }

We’ve bumped up the minimum size from 6vmin to 7.5vmin so that it stays larger than the body text at any viewport size. Also, in the calc(), the base size is now 2rem, which is smaller than the default UA styles for <h1>. We’re using 1.2vmax as the multiplier this time, meaning it grows more than the body text, which is multiplied by a smaller value, 0.23vmax.

This works for me. You can always tweak these values and see which works best for your use. Whatever the case, the font-size for this experiment is completely fluid and completely based on the min() function, adhering to my self-imposed constraint.

Margin And Padding

Spacing is a big part of layout, responsive or not. We need margin and padding to properly situate elements alongside other elements and give them breathing room, both inside and outside their box.

We’re going all-in with min() for this, too. We could use absolute units, like pixels, but those aren’t exactly responsive.

min() can combine relative and absolute units so they are more effective. Let’s pair vmin with px this time:

div { margin: min(10vmin, 30px); }

10vmin is likely to be smaller than 30px when viewed on a small viewport. That’s why I’m allowing the margin to shrink dynamically this time around. As the viewport size increases and 10vmin exceeds 30px, min() caps the value at 30px, going no higher than that.

Notice, too, that I didn’t reach for calc() this time. Margins don’t really need to grow indefinitely with screen size, as too much spacing between containers or elements generally looks awkward on larger screens. This concept also works extremely well for padding, but we don’t have to go there. Instead, it might be better to stick with a single unit, preferably em, since it is relative to the element’s font-size. We can essentially “pass” the work that min() is doing on the font-size to the margin and padding properties because of that.

.card-info {
  font-size: min(6vmin, calc(1rem + 0.12vmax));
  padding: 1.2em;
}

Now, padding scales with the font-size, which is powered by min().

Widths

Setting width for a responsive design doesn’t have to be complicated, right? We could simply use a single percentage or viewport unit value to specify how much available horizontal space we want to take up, and the element will adjust accordingly. Though, container query units could be a happy path outside of this experiment.

But we’re min() all the way!

min() comes in handy when setting constraints on how much an element responds to changes. We can set an upper limit of 650px and, if the computed width tries to go larger, have the element settle at a full width of 100%:

.container { width: min(100%, 650px); }

Things get interesting with text width. When a text column is too wide, it becomes uncomfortable to read. There are competing theories about how many characters per line of text make for an optimal reading experience. For the sake of argument, let’s say that number should be between 50 and 75 characters. In other words, we ought to pack no more than 75 characters on a line, and we can do that with the ch unit, which is based on the width of the 0 character for whatever font is in use.

p {
  width: min(100%, 75ch);
}

This code basically says: get as wide as needed but never wider than 75 characters.

Sizing Recipes Based On min()

Over time, with a lot of tweaking and modifying of values, I have drafted a list of pre-defined values that I find work well for responsively styling different properties:

:root {
  --font-size-6x: min(7.5vmin, calc(2rem + 1.2vmax));
  --font-size-5x: min(6.5vmin, calc(1.1rem + 1.2vmax));
  --font-size-4x: min(4vmin, calc(0.8rem + 1.2vmax));
  --font-size-3x: min(6vmin, calc(1rem + 0.12vmax));
  --font-size-2x: min(4vmin, calc(0.85rem + 0.12vmax));
  --font-size-1x: min(2vmin, calc(0.65rem + 0.12vmax));
  --width-2x: min(100vw, 1300px);
  --width-1x: min(100%, 1200px);
  --gap-3x: min(5vmin, 1.5rem);
  --gap-2x: min(4.5vmin, 1rem);
  --size-10x: min(15vmin, 5.5rem);
  --size-9x: min(10vmin, 5rem);
  --size-8x: min(10vmin, 4rem);
  --size-7x: min(10vmin, 3rem);
  --size-6x: min(8.5vmin, 2.5rem);
  --size-5x: min(8vmin, 2rem);
  --size-4x: min(8vmin, 1.5rem);
  --size-3x: min(7vmin, 1rem);
  --size-2x: min(5vmin, 1rem);
  --size-1x: min(2.5vmin, 0.5rem);
}

This is how I approached my experiment because it helps me know what to reach for in a given situation:

h1 { font-size: var(--font-size-6x); }

.container {
  width: var(--width-2x);
  margin: var(--size-2x);
}

.card-grid { gap: var(--gap-3x); }

There we go! We have a heading that scales flawlessly, a container that’s responsive and never too wide, and a grid with dynamic spacing — all without a single media query. The --size- properties declared in the variable list are the most versatile, as they can be used for properties that require scaling, e.g., margins, paddings, and so on.

The Final Result, Again

I shared a video of the result, but here’s a link to the demo.

See the Pen by .

So, is min() the be-all, end-all for responsiveness? Absolutely not. Neither is a diet consisting entirely of container query units. I mean, it’s cool that we can scale an entire webpage like this, but the web is never a one-size-fits-all beanie.

If anything, I think this and what Chris demoed are warnings against dogmatic approaches to web design as a whole, not solely unique to responsive design. CSS features, including length units and functions, are tools in a larger virtual toolshed. Rather than getting too cozy with one feature or technique, explore the shed because you might find a better tool for the job.

It’s Here! How To Measure UX & Design Impact, With Vitaly Friedman

Finally! After so many years, we’re very happy to launch How To Measure UX & Design Impact, our new practical guide for designers and managers on how to set up and track design success in your company — with UX scorecards, UX metrics, the entire workflow, and Design KPI trees. Neatly put together by yours truly, Vitaly Friedman.



Video + UX Training

$495.00 (regular price: $799.00)

Get Video + UX Training

25 video lessons (8h) + live UX training.
100 days money-back-guarantee.

Video only

$250.00 (regular price: $395.00)

Get the video course

25 video lessons (8h). Updated yearly.

The Backstory

In many companies, designers are perceived as disruptors, rather than enablers. Designers challenge established ways of working. They ask a lot of questions — much needed ones but also uncomfortable ones. They focus “too much” on user needs, pushing revenue projections back, often with long-winded commitment to testing and research and planning and scoping.

Almost every department in almost every company has their own clearly defined objectives, metrics and KPIs. In fact, most departments — from finance to marketing to HR to sales — are remarkably good at visualizing their impact and making it visible throughout the entire organization.

Designing a KPI tree, an example of how to connect business objectives with design initiatives through the lens of design KPIs.

But as designers, we rarely have a set of established Design KPIs that we regularly report to senior management. We don’t have a clear definition of design success. And we rarely measure the impact of our work once it’s launched. So it’s not surprising that most parts of the business barely know what we actually do all day long.

Business wants results. It also wants to do more of what has worked in the past. But it doesn’t want to be disrupted — it wants to disrupt. It wants to reduce time to market and minimize expenses, increase revenue and existing business, and find new markets. This requires fast delivery and good execution.

And that’s what we are often supposed to be — good “executors”. Or, to put it differently, “pixel pushers”.

Over the years, I’ve been searching for a way to change that. This brought me to Design KPIs and UX scorecards, and a workflow to translate business goals into actionable and measurable design initiatives. I had to find a way to explain, visualize, and track the incredible impact that designers have on all parts of business — from revenue to loyalty to support to delivery.

The results of that journey are now public in our new video course, How To Measure UX & Design Impact — a practical guide for designers, researchers, and UX leads to measure and visualize UX impact on business.

About The Course

The course dives deep into establishing team-specific design KPIs, how to track them effectively, and how to set up ownership and integrate metrics into the design process. You’ll discover how to translate ambiguous objectives into practical design goals, and how to measure design systems and UX research.

Also, we’ll make sense of OKRs, Top Task Analysis, SUS, UMUX-Lite, UEQ, TPI, KPI trees, feedback scoring, gap analysis, and the Kano model — and what UX research methods to choose to get better results.

The setup for the video recordings. Once all content is in place, it’s about time to set up the recording.

Jump to the video course →

A practical guide to UX metrics and Design KPIs
8h-video course + live UX training.

  • 25 chapters (8h), with videos added/updated yearly
  • Free preview, examples, templates, workflows
  • No subscription: get once, access forever
  • Lifetime access to all videos, slides, and checklists
  • Add-on: live UX training, running 2× a year
  • Use the code SMASHING for a friendly discount

Table of Contents

25 chapters, 8 hours, with practical examples, exercises, and everything you need to master the art of measuring UX and design impact. Don’t worry, even if it might seem overwhelming at first, we’ll explore things slowly and thoroughly. Taking 1–2 sessions per week is a perfectly good goal to aim for.



We can’t improve without measuring. That’s why our new video course gives you the tools you need to make sense of it all: user needs, just like business needs.

1. Welcome


So, how do we measure UX? Well, let’s find out! This friendly welcome message to the video course outlines all the fine details we’ll be going through: design impact, business metrics, design metrics, surveys, target times and states, measuring UX in B2B and enterprises, design KPI trees, the Kano model, event storming, choosing metrics, reporting design success — and how to measure design systems and UX research efforts.

Keywords:
Design impact, UX metrics, business goals, articulating design value, real-world examples, showcasing impact, evidence-driven design.

2. Design Impact


In this segment, we’ll explore how and where we, as UX designers, make an impact within organizations. We’ll explore where we fit in the company structure, how to build strong relationships with colleagues, and how to communicate design value in business terms.

Keywords:
Design impact, design ROI, orgcharts, stakeholder engagement, business language vs. UX language, Double Diamond vs. Reverse Double Diamond, risk mitigation.

3. OKRs and Business Metrics


We’ll explore the key business terms and concepts related to measuring business performance. We’ll dive into business strategy and tactics, and unpack the components of OKRs (Objectives and Key Results), KPIs, SMART goals, and metrics.

Keywords:
OKRs, objectives, key results, initiatives, SMART goals, measurable goals, time-bound metrics, goal-setting framework, business objectives.

4. Leading And Lagging Indicators


Businesses often speak of leading and lagging indicators — predictive and retrospective measures of success. Let’s explore what they are and how they are different — and how we can use them to understand the immediate and long-term impact of our UX work.

Keywords:
Leading vs. lagging indicators, cause-and-effect relationship, backwards-looking and forward-looking indicators, signals for future success.

5. Business Metrics, NPS


We dive into the world of business metrics, from Monthly Active Users (MAU) to Monthly Recurring Revenue (MRR) to Customer Lifetime Value (CLV), and many other metrics that often find their way to dashboards of senior management.

Also, almost every business measures NPS. Yet NPS has many limitations, requires a large sample size to be statistically reliable, and what people say and what people do are often very different things. Let’s see what we as designers can do with NPS, and how it relates to our UX work.

Keywords:
Business metrics, MAU, MRR, ARR, CLV, ACV, Net Promoter Score, customer loyalty.

6. Business Metrics, CSAT, CES

+

We’ll explore the broader context of business metrics, including revenue-related measures like Monthly Recurring Revenue (MRR) and Annual Recurring Revenue (ARR), Customer Lifetime Value (CLV), and churn rate.

We’ll also dive into Customer Satisfaction Score (CSAT) and Customer Effort Score (CES). We’ll discuss how these metrics are calculated, their importance in measuring customer experience, and how they complement other well-known (but not necessarily helpful) business metrics like NPS.

Keywords:
Customer Lifetime Value (CLV), churn rate, Customer Satisfaction Score (CSAT), Customer Effort Score (CES), Net Promoter Score (NPS), Monthly Recurring Revenue (MRR), Annual Recurring Revenue (ARR).

7. Feedback Scoring and Gap Analysis


If you are looking for a simple alternative to NPS, feedback scoring and gap analysis might be a neat little helper. It transforms qualitative user feedback into quantifiable data, allowing us to track UX improvements over time. Unlike NPS, which focuses on future behavior, feedback scoring looks at past actions and current perceptions.

Keywords:
Feedback scoring, gap analysis, qualitative feedback, quantitative data.

8. Design Metrics (TPI, SUS, SUPR-Q)


We’ll explore the landscape of established and reliable design metrics for tracking and capturing UX in digital products. From task success rate and time on task to System Usability Scale (SUS) to Standardized User Experience Percentile Rank Questionnaire (SUPR-Q) to Accessible Usability Scale (AUS), with an overview of when and how to use each, the drawbacks, and things to keep in mind.

Keywords:
UX metrics, KPIs, task success rate, time on task, error rates, error recovery, SUS, SUPR-Q.

9. Design Metrics (UMUX-Lite, SEQ, UEQ)


We’ll continue with slightly shorter alternatives to SUS and SUPR-Q that could be used in a quick email survey or an in-app prompt — UMUX-Lite and Single Ease Question (SEQ). We’ll also explore the “big behemoths” of UX measurements — User Experience Questionnaire (UEQ), Google’s HEART framework, and custom UX measurement surveys — and how to bring key metrics together in one simple UX scorecard tailored to your product’s unique needs.

Keywords:
UX metrics, UMUX-Lite, Single Ease Question (SEQ), User Experience Questionnaire (UEQ), HEART framework, UX scorecards.

10. Top Tasks Analysis


The most impactful way to measure UX is to study how successful users are in completing their tasks in their common customer journeys. With top tasks analysis, we focus on what matters, and explore task success rates and time on task. We need to identify representative tasks and bring 15–18 users in for testing. Let’s dive into how it all works and some of the important gotchas and takeaways to consider.

Keywords:
Top task analysis, UX metrics, task success rate, time on task, qualitative testing, 80% success, statistical reliability, baseline testing.

11. Surveys and Sample Sizes


Designing good surveys is hard! We need to be careful about how we shape our questions to avoid biases, how to find the right audience segment and a large enough sample size, and how to ensure high confidence levels and low margins of error. In this chapter, we review best practices and a cheat sheet for better survey design — along with do’s and don’ts on question types, rating scales, and survey pre-testing.

Keywords:
Survey design, question types, rating scales, survey length, pre-testing, response rates, statistical significance, sample quality, mean vs. median scores.

12. Measuring UX in B2B and Enterprise


The best measurements come from testing with actual users. But what if you don’t have access to any users? Perhaps because of NDAs, IP concerns, lack of clearance, poor availability, or the high cost of reaching customers? Let’s explore how we can find a way around such restrictive environments, how to engage with stakeholders, and how we can measure efficiency and failures — and set up UX KPI programs.

Keywords:
B2B, enterprise UX, limited access to users, compliance, legacy systems, desk research, stakeholder engagement, testing proxies, employees’ UX.

13. Design KPI Trees


To visualize design impact, we need to connect high-level business objectives with specific design initiatives. To do that, we can build up and present Design KPI trees. From the bottom up, the tree captures user needs, pain points, and insights from research, which inform design initiatives. For each, we define UX metrics to track the impact of these initiatives, and they roll up to higher-level design and business KPIs. Let’s explore how it all works in action and how you can use it in your work.

Keywords:
User needs, UX metrics, KPI trees, sub-trees, design initiatives, setting up metrics, measuring and reporting design impact, design workflow, UX metrics graphs, UX planes.

14. Event Storming


How do we choose the right metrics? Well, we don’t start with metrics. We start by identifying the most critical user needs and assessing the impact of meeting those needs well. To do that, we apply event storming, mapping critical user success moments as they interact with a digital product. Our job, then, is to maximize success, remove frustrations, and pave a clear path towards a successful outcome.

Keywords:
UX mapping, customer journey maps, service blueprints, event storming, stakeholder alignment, collaborative mapping, UX lanes, critical events, user needs vs. business goals.

15. Kano Model and Survey


Once we have a business objective in front of us, we need to choose design initiatives that are most likely to drive the impact that we need to enable with our UX work. To test how effective our design ideas are, we can map them against a Kano model and run a concept testing survey. It gives us the user’s sentiment, which we then need to weigh against business priorities. Let’s see how to do just that.

Keywords:
Feature prioritization, threshold attributes, performance attributes, excitement attributes, user’s sentiment, mapping design ideas, boosting user’s satisfaction.

16. Design Process


How do we design a KPI tree from scratch? We start by running a collaborative event storming session to identify key success moments. Then we prioritize key events and explore how we can amplify and streamline them. Then we ideate and come up with design initiatives. These initiatives are stress-tested in an impact-effort matrix for viability and impact. Eventually, we define and assign metrics and KPIs and pull them together in a KPI tree. Here’s how it works from start to finish.

Keywords:
Uncovering user needs, impact-effort matrix, concept testing, event storming, stakeholder collaboration, traversing the KPI tree.

17. Choosing The Right Metrics


Should we rely on established UX metrics such as SUS, UMUX-Lite, and SUPR-Q, or should we define custom metrics tailored to product and user needs? We need to find a balance between the two. It depends on what we want to measure, what we actually can measure, and whether we want to track local impact for a specific change or global impact for the entire customer journey. Let’s figure out how to define and establish metrics that actually will help us track our UX success.

Keywords:
Local vs. global KPIs, time spans, percentage vs. absolute values, A/B testing, mapping between metrics and KPIs, task breakdown, UX lanes, naming design KPIs.

18. Design KPIs Examples


Different contexts will require different design KPIs. In this chapter, we explore a diverse set of UX metrics related to search quality (quality of search for top 100 search queries), form design (error frequency, accuracy), e-commerce (time to final price), subscription-based services (time to tier boundaries), customer support (service desk inquiries) and many others. This should give you a good starting point to build upon for your own product and user needs.

Keywords:
Time to first success, search results quality, form error recovery, password recovery rate, accessibility coverage, time to tier boundaries, service desk inquiries, fake email frequency, early drop-off rate, carbon emissions per page view, presets and templates usage, default settings adoption, design system health.

19. UX Strategy


Establishing UX metrics doesn’t happen overnight. You need to discuss and decide what you want to measure and how often it should happen — but also how to integrate metrics, evaluate data, report findings, and embed it all into an existing design workflow. For that, you will need time — and green lights from your stakeholders and managers. To achieve that, we need to tap into the uncharted waters of UX strategy. Let’s see what it involves for us and how to make progress there.

Keywords:
Stakeholder engagement, UX maturity, governance, risk mitigation, integration, ownership, accountability, viability.

20. Reporting Design Success


Once you’ve established UX metrics, you will need to report them repeatedly to senior management. How exactly would you do that? In this chapter, we explore the process of selecting representative tasks, recruiting participants, facilitating testing sessions, and analyzing the resulting data to create a compelling report and presentation that will highlight the value of your UX efforts to stakeholders.

Keywords:
Data analysis, reporting, facilitation, observation notes, video clips, guidelines and recommendations, definition of design success, targets, alignment, and stakeholder’s buy-in.

21. Target Times and States


To show the impact of our design work, we need to track UX snapshots. Basically, it’s four states, mapped against touch points in a customer journey: baseline (threshold not to cross), current state (how we are currently doing), target state (objective we are aiming for), and industry benchmark (to stay competitive). Let’s see how it would work in an actual project.

Keywords:
Competitive benchmarking, baseline measurement, local and global design KPIs, cross-teams metrics, setting realistic goals.

22. Measuring Design Systems


How do we measure the health of a design system? Surely it’s not just the roll-out speed of newly designed UI components or flows. Most teams track productivity and coverage, but we can also go beyond that by measuring relative adoption and efficiency gains (time saved, faster time-to-market, satisfaction scores, and product quality). But the best metric is how early designers involve the design system in a conversation during their design work.

Keywords:
Component coverage, decision trees, adoption, efficiency, time to market, user satisfaction, usage analytics, design system ROI, relative adoption.

23. Measuring UX Research


Research insights often end up gathering dust in PDF reports stored on the remote fringes of SharePoint. To track the impact of UX research, we need to track outcomes and research-specific metrics. The way to do that is to track UX research impact for UX and business, through organizational learning and engagement, and through the make-up of research efforts and their reach. And most importantly: amplifying research where we expect the most significant impact. Let’s see what it involves.

Keywords:
Outcome metrics, organizational influence, research-specific metrics, research references, study observers, research formalization, tracking research-initiated product changes.

24. Getting Started


So you’ve made it this far! Now, how do you get your UX metrics initiative off the ground? By taking small steps in the right direction. Small commitments, pilot projects, and design guilds will support and enable your efforts. We just need to define realistic goals and turn UX metrics into a culture of measurement, or simply a way of working. Let’s see how we can do just that.

Keywords:
Pilot projects, UX integration, resource assessment, evidence-driven design, establishing a baseline, culture of measurement.

25. Next Steps


Let’s wrap up our journey into UX metrics and Design KPIs and reflect on what we have learned. What remains is the first next step: starting where you are and growing incrementally, by continuously visualizing and explaining your UX impact — however limited it might be — to your stakeholders. This is the last chapter of the course, but the first chapter of the incredible journey ahead of you.

Keywords:
Stakeholder engagement, incremental growth, risk mitigation, user satisfaction, business success.

Who Is The Course For?

This course is tailored for advanced UX practitioners, design leaders, product managers, and UX researchers who are looking for a practical guide to define, establish and track design KPIs, translate business goals into actionable design tasks, and connect business needs with user needs.

What You’ll Learn

By the end of the video course, you’ll have a packed toolbox of practical techniques and strategies on how to define, establish, sell, and measure design KPIs from start to finish — and how to make sure that your design work is always on the right trajectory. You’ll learn:

  • How to translate business goals to UX initiatives,
  • The difference between OKRs, KPIs, and metrics,
  • How to define design success for your company,
  • Metrics and KPIs that businesses typically measure,
  • How to choose the right set of metrics and KPIs,
  • How to establish design KPIs focused on user needs,
  • How to build a comprehensive design KPI tree,
  • How to combine qualitative and quantitative insights,
  • How to choose and prioritize design work,
  • How to track the impact of design work on business goals,
  • How to explain, visualize, and defend design work,
  • How companies define and track design KPIs,
  • How to make a strong case for UX metrics.

Community Matters ❤️

Producing a video course takes quite a bit of time, and we couldn’t pull it off without the support of our wonderful community. So thank you from the bottom of our hearts! We hope you’ll find the course useful for your work. Happy watching, everyone! 🎉🥳




Using Multimodal AI Models For Your Applications (Part 3)

In this third and final part of a series, we’re taking a more streamlined approach to an application that supports vision-language (VLM) and text-to-speech (TTS) capabilities. This time, we’ll use different models that are designed for all three modalities — images or videos, text, and audio (including speech-to-text) — in one model. These “any-to-any” models make things easier by allowing us to avoid switching between models.

Specifically, we’ll focus on two powerful models: Reka and Gemini 1.5 Pro.

Both models take things to the next level compared to the tools we used earlier. They eliminate the need for separate speech recognition models, providing a unified solution for multimodal tasks. With this in mind, our goal in this article is to explore how Reka and Gemini simplify building advanced applications that handle images, text, and audio all at once.

Overview Of Multimodal AI Models

The architecture of multimodal models has evolved to enable seamless handling of various inputs, including text, images, and audio, among others. Traditional models often require separate components for each modality, but recent advancements in “any-to-any” models allow developers to build systems that process multiple modalities within a unified architecture.

One such model, for instance, utilizes a 1.2 billion parameter decoder-only transformer architecture with 24 layers, embedding sizes of 2048, and a hidden size of 8196 in its feed-forward layers. This structure is optimized for general tasks across various inputs, but it still relies on extensive task-specific fine-tuning.

Another, on the other hand, takes a different approach, training on multiple media types within a single architecture. This means it’s a single model trained to handle a variety of inputs (e.g., text, images, code) without the need for separate systems for each. This training method allows for smoother task-switching and better generalization across tasks.

Similarly, CoDi employs a multistage training scheme to handle a linear number of tasks while supporting input-output combinations across different modalities. CoDi’s architecture builds a shared multimodal space, enabling synchronized generation for intertwined modalities like video and audio, making it ideal for more dynamic multimedia tasks.

Most “any-to-any” models, including the ones we’ve discussed, rely on a few key concepts to handle different tasks and inputs smoothly:

  • Shared representation space
    These models convert different types of inputs — text, images, audio — into a common feature space. Text is encoded into vectors, images into feature maps, and audio into spectrograms or embeddings. This shared space allows the model to process various inputs in a unified way.
  • Attention mechanisms
    Attention layers help the model focus on the most relevant parts of each input, whether it’s understanding the text, generating captions from images, or interpreting audio.
  • Cross-modal interaction
    In many models, inputs from one modality (e.g., text) can guide the generation or interpretation of another modality (e.g., images), allowing for more integrated and cohesive outputs.
  • Pre-training and fine-tuning
    Models are typically pre-trained on large datasets across different types of data and then fine-tuned for specific tasks, enhancing their performance in real-world applications.
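
To make the “shared representation space” idea from this list more tangible, here is a small sketch using CLIP — a two-modality model that predates the ones above but demonstrates the concept nicely — via the Hugging Face transformers library (the image file name is a placeholder):

import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# CLIP embeds text and images into the same feature space,
# so the two modalities can be compared directly.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("cats.jpg")  # placeholder image
texts = ["two cats on a couch", "a plate of spaghetti"]

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# Higher scores mean a text is a better match for the image.
print(outputs.logits_per_image)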

Reka Models

Reka is an AI research company that helps developers build powerful applications by offering models for a range of tasks. These tasks include generating text from videos and images, translating speech, and answering complex questions from long multimodal documents. Reka’s models can even write and execute code, providing flexible, real-world solutions for developers.

These are the three main models Reka offers:

  1. Reka Core
     A 67-billion-parameter multimodal language model designed for complex tasks. It supports inputs like images, videos, and texts while excelling in advanced reasoning and coding.
  2. Reka Flash
     A faster model with 21 billion parameters, designed for flexibility and rapid performance in multimodal settings.
  3. Reka Edge (PDF)
     A smaller, 7-billion-parameter model built for on-device use, making it efficient for local or latency-sensitive applications.

Reka’s models can be fine-tuned and deployed securely, whether on the cloud, on-premises, or even on-device. Let’s start by testing Reka’s capabilities directly through its online playground. This allows us to experiment with its multimodal features without writing any code, providing a hands-on way to see how the models handle various tasks, such as image and video comprehension.

Alright, we’ll kick things off by uploading an image of a diagram outline of the PaliGemma architecture and ask Reka for a detailed explanation.

Can you provide a more detailed explanation of this image?

Here’s what we get from Reka Core:

“This image is a schematic representation of a machine-learning model named PaliGemma. The model appears to be designed for multimodal input, combining both image and text data. At the top, there are two input sources: ‘Image Input’ and ‘Text Input’. The image input is processed by a ‘SigLIP Image Encoder’, which likely stands for a specific type of image encoding technique. The output of this encoder is then subjected to a ‘Linear Projection’, which is a common step in machine learning to transform data into a lower-dimensional space. The text input is directly fed into the model. Both the projected image data and the text input are then concatenated into ‘Concatenated Tokens’, which are then processed by a component labeled ‘Gemma’. The final output of the model is ‘Text Output’, which suggests that the model is designed to generate text based on the input data.”

It’s a clear and detailed breakdown of the PaliGemma architecture. The model accurately identifies the image and text inputs, the SigLIP image encoder, and the linear projection step. It also grasps how the image and text inputs are combined and processed to generate text output. I should admit that the response is quite neat, offering a solid explanation of the architecture.

Let’s try a video input instead. We’ll upload a video about Genmy, a video editing software, and ask a question to see how well Reka Core handles video comprehension.

What is this video about?

“This video is a screen recording of a group of friends discussing their plans for the weekend.”

Well, that’s not a good answer — the video is actually about a piece of software, not a group of friends. Now let’s try a longer video, but with the Reka Flash model. The video is about a TTS model called Parler-TTS. Instead of asking what the video is about, let’s prompt it like this:

I’m an AI developer, and I’m not sure how this TTS model can help me in my projects. Can you explain how I can use it or what it’s useful for?

Reka Flash generates a solid answer this time:

“Of course! This TTS (Text-to-Speech) model can be incredibly useful for developers, especially those working on projects that involve creating more human-like interactions or providing audio feedback to users. Here are some ways you can leverage this TTS model in your projects:

  1. Accessibility: TTS models can greatly enhance the accessibility of your applications for users with visual impairments or reading difficulties. By providing spoken feedback or narrating content, you can make your applications more inclusive
  2. Voice Assistants: You can integrate this TTS model into voice assistants or chatbots, allowing them to generate more natural-sounding responses. This can improve user experience and make interactions with your bots more engaging.
  3. Multimedia Applications: This TTS model can be used in multimedia applications, such as e-books, educational tools, or games, to provide spoken narration or instructions. This can make the content more engaging and easier to understand.
  4. Audio Notifications: You can use the TTS model to generate audio notifications for your applications, providing users with spoken alerts or reminders. This can be particularly useful for users who may not always have their screens visible.
  5. Language Learning: The TTS model can be used in language learning applications to provide users with correct pronunciation and intonation. It can also generate spoken feedback or narrate lessons, making the learning process more immersive.
  6. Audio Books: You can use the TTS model to generate audio books, allowing users to listen to written content instead of reading it. This can be especially helpful for users who prefer auditory learning or have difficulty reading.
  7. Explainer Videos: The TTS model can be used to generate voiceovers for explainer videos, providing users with spoken narration that explains the content on the screen. This can make the content more engaging and easier to understand.
  8. Audio Feedback: You can use the TTS model to generate audio feedback for users in your applications, such as confirming actions, providing instructions, or giving feedback on their performance. This can make the interactions more natural and intuitive.

In summary, the TTS model can be a valuable tool for developers looking to create more human-like interactions, improve accessibility, and enhance the user experience in various applications.”

It’s a good answer that’s informative and quite practical for developers. Let’s compare what happens when we try the Reka Edge model with a PDF input titled “Meta AI — Responsible Use Guide”.

As an AI developer, how can I ensure that I am following the responsible AI guidelines outlined in this document?

“To ensure that your use of AI aligns with the principles outlined in this guide, you can follow these steps and best practices:

  1. Familiarize yourself with the principles of responsible AI, such as transparency, accountability, and user empowerment, as outlined in the guide.
  2. Understand the potential impacts of AI on users and society, and be aware of the risks and challenges associated with the development and deployment of AI systems.
  3. Embrace an iterative model-development mindset, continuously striving for improvement and learning from feedback and errors.
  4. Build a responsible AI ecosystem by working with partners, experts, and communities to develop and share best practices, tools, and benchmarking resources.
  5. Follow the guidelines and best practices outlined in the guide for each stage of model development, from data collection and preparation to fine-tuning, evaluation, and deployment.
  6. Consider the specific use case and requirements for your AI system, and tailor your approach accordingly.
  7. Regularly monitor and evaluate the performance and safety of your AI system and make adjustments as needed to ensure it aligns with the principles outlined in the guide.
  8. Be transparent and accountable in your interactions with users and stakeholders, and be open to feedback and suggestions for improvement.
  9. Stay informed about the latest developments and research in the field of AI, and be prepared to adapt your approach as needed to address new challenges and opportunities.
  10. Follow the specific guidelines and best practices outlined in the addendum for Code Llama, such as defining content policies, evaluating and benchmarking models, and addressing input- and output-level risks.”

Reka Edge’s answer is quite thorough and detailed, too!

If you want to dive into coding, start by grabbing an API key from Reka. Then you can follow along with the official documentation: try out their Python SDK, and make sure to explore the different models. It’s a great way to see how these tools work under the hood!
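
For orientation, a minimal chat call might look something like the sketch below — treat the package import, model name, and response shape as assumptions to verify against Reka’s current docs:

import os
from reka.client import Reka  # assumes the reka-api package

client = Reka(api_key=os.environ["REKA_API_KEY"])

# A simple text-only request; the API also accepts multimodal inputs.
response = client.chat.create(
    model="reka-core",  # hypothetical model identifier; check the docs
    messages=[{"role": "user", "content": "Summarize what a VLM does in one sentence."}],
)

print(response.responses[0].message.content)  # response shape is an assumption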

Gemini Models

Gemini 1.5, developed by Google, is designed to handle complex tasks efficiently, thanks to the new Mixture-of-Experts (MoE) system. MoE is like packing several specialized models inside one big model. Instead of using the whole network for every task, Gemini 1.5 activates only the parts (or what they call “experts”) that are most relevant. This means Gemini 1.5 can tackle complex tasks with less processing power than a typical monolithic model.

We can explore Gemini’s features in Google AI Studio since the model is already baked right in. We’ll begin by uploading the following image illustrating VLM capabilities.

We’ll ask Gemini to analyze the image and extract details or concepts:

Explain this image. What are the specific details or concepts it represents?

“The image is a flowchart illustrating different types of image understanding tasks with a Vision Language Model.

  • Object Localization: This task aims to identify the precise location of objects within an image. The example question asks, “Is one cat behind another?”
  • Zero-shot Segmentation: This task involves segmenting an image into different regions based on a given label, even if the model has never seen that specific label before. The example question asks to segment “striped cat.”
  • Zero-shot Visual QA: This task requires answering questions about an image, even if the model has not been specifically trained to answer that type of question. The example question asks, “What is the breed of these cats?”
  • One-shot Learning with Instructions: This task involves providing the model with a single example and instruction to learn a new concept and apply it to other images. The example question asks, “Striped cats are called tabby cats. What is the breed of the cats in the image?”

Well, Gemini nailed it — it clearly understood the image and recognized the tasks a VLM handles. This is exactly what you’d expect from a model designed for advanced image analysis.

Now, we’ll upload another image — this time of an Italian dish — and ask Gemini if it can recognize the dishes and list the ingredients needed for them. This type of “food recognition task” will help us test its ability to identify foods and ingredients.

List the different dishes in this image, also from which country, and give me the ingredients I need to make them.

“The image shows an Italian feast with the following dishes:

  1. Seafood Stew: A rich and flavorful stew with seafood like clams, mussels, shrimp, fish, and vegetables in a tomato broth.
  2. Spaghetti with Tomato Sauce: Classic spaghetti pasta tossed in a simple yet flavorful tomato sauce.
  3. Meatballs in Tomato Sauce: Tender and juicy meatballs simmered in a rich tomato sauce.
  4. Lasagna: Layered pasta with meat sauce, cheese, and tomato sauce.
  5. Side Salad: A refreshing side salad with mixed greens, tomatoes, and other vegetables.
  6. Garlic Bread: Toasted bread slices rubbed with garlic and olive oil.
  7. Marinara Sauce: A simple tomato-based sauce.

This is a typical Italian meal with various dishes showcasing the flavors of Italian cuisine.

As for the ingredients, you’ll need tomatoes for many of these dishes, specifically for the sauces and the side salad.”

Well, the model did a good job recognizing the dishes, but it didn’t give me all the necessary ingredients. Let’s see how it does with videos instead, using the same video we used earlier with the Reka models. This time, the task will focus on action recognition.

What is the purpose of this video, and what key actions or processes are being demonstrated?

“The video is a tutorial about the importance of friendship and how it can impact our lives. It demonstrates the use of a video editing software called “Genmy” to create a video and add text subtitles. The video creator is using Genmy to:

  • Record their voice: This is seen at the top of the screen with the mic icon.
  • Add text to speech: You see the “Text to Speech” option, which is likely being used to generate subtitles or captions for the video.
  • Edit the timeline: They’re working with a video timeline, adding and arranging different elements.
  • Add voice-over: The speaker is adding commentary to the video.
  • Create a visually appealing presentation: They’re using a combination of video footage, text, and voice-over to create an engaging tutorial.”

Well, Gemini gave a clear breakdown of the video, identifying it as a tutorial demonstrating the Genmy software and highlighting how it’s used for recording voice, adding text-to-speech, editing video timelines, and adding voice-overs. Impressive!

Let’s upload one more video, the same one we used for Reka about the Parler TTS model, and ask Gemini to analyze and summarize the contents.

Summarize this video and explain how the contents might be relevant or helpful for software engineers.

“This video showcases different voice samples generated by a text-to-speech (TTS) system called “Parler-TTS.” The video highlights various voice characteristics like gender, pace, pitch, and even the simulated environment. For software engineers, this video demonstrates the progress made in TTS technology. These are some insights:

  • Voice Variety: The video shows that Parler-TTS can create diverse voices, potentially useful for applications needing different personas or tones.
  • Fine-grained Control: The video emphasizes the ability to control voice parameters like pace and pitch, showcasing the potential for customizability.
  • Quality and Naturalness: The video focuses on the quality of the generated speech, demonstrating the advancements made in speech synthesis, which is crucial for user experience.”

Nicely done! I can go with that answer. Gemini explains adjusting voice settings, like pitch and speed, and how having different voices can be useful. Gemini also emphasizes the importance of natural, high-quality speech, which is handy for developers working with TTS systems!

Alright, for coding, you can grab the code from Google AI Studio by clicking the Get Code button. You can choose between formatting the code in Python, Swift, and Java, among other languages.
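
For reference, the Python version is close to this minimal sketch using the google-generativeai package (the image file name is a stand-in for your own):

import google.generativeai as genai
import PIL.Image

genai.configure(api_key="YOUR_API_KEY")

# The same multimodal prompt we ran in AI Studio, now in code.
model = genai.GenerativeModel("gemini-1.5-pro")
image = PIL.Image.open("italian-feast.jpg")  # placeholder image

response = model.generate_content(
    ["List the different dishes in this image and the ingredients I need to make them.", image]
)
print(response.text)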

Conclusion

Both Reka and Gemini are strong multimodal models for AI applications, but there are key differences between them to consider. Here’s a table that breaks those down:

| Feature | Reka | Gemini 1.5 |
| --- | --- | --- |
| Multimodal Capabilities | Image, video, and text processing | Image, video, and text, with extended token context |
| Efficiency | Optimized for multimodal tasks | Built with MoE for efficiency |
| Context Window | Standard token window | Up to two million tokens (with Flash variant) |
| Architecture | Focused on multimodal task flow | MoE improves specialization |
| Training/Serving | High performance with efficient model switching | More efficient training with MoE architecture |
| Deployment | Supports on-device deployment | Primarily cloud-based, with Vertex AI integration |
| Use Cases | Interactive apps, edge deployment | Suited for large-scale, long-context applications |
| Languages Supported | Multiple languages | Supports many languages with long context windows |

Reka stands out for on-device deployment, which is super useful for apps requiring offline capabilities or low-latency processing.

On the other hand, Gemini 1.5 Pro shines with its long context windows, making it a great option for handling large documents or complex queries in the cloud.

Build A Static RSS Reader To Fight Your Inner FOMO

In a fast-paced industry like tech, it can be hard to deal with the fear of missing out on important news. But, as many of us know, there’s an absolutely huge amount of information coming in daily, and finding the right time and balance to keep up can be difficult, if not stressful. A classic piece of technology like an RSS feed is a delightful way of taking back ownership of our own time. In this article, we will create a static Really Simple Syndication (RSS) reader that will bring you the latest curated news only once (yes: once) a day.

We’ll obviously work with RSS technology in the process, but we’re also going to combine it with some things that maybe you haven’t tried before, including Astro (the static site framework), TypeScript (for JavaScript goodies), a package called rss-parser (for connecting things together), as well as scheduled functions and build hooks provided by Netlify (although there are other services that do this).

I chose these technologies purely because I really, really enjoy them! There may be other solutions out there that are more performant, come with more features, or are simply more comfortable to you — and in those cases, I encourage you to swap in whatever you’d like. The most important thing is getting the end result!

The Plan

Here’s how this will go. Astro generates the website. I made the intentional decision to use a static site because I want the different RSS feeds to be fetched only once during build time, and that’s something we can control each time the site is “rebuilt” and redeployed with updates. That’s where Netlify’s scheduled functions come into play, as they let us trigger rebuilds automatically at specific times. There is no need to manually check for updates and deploy them! Cron jobs can just as readily do this if you prefer a server-side solution.
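
We’ll wire that up properly later, but as a rough sketch of the idea (the build hook ID below is a placeholder, and the file layout follows Netlify’s scheduled functions conventions):

// netlify/functions/daily-rebuild.mts
import type { Config } from '@netlify/functions';

// POST to a Netlify build hook to kick off a fresh build and deploy.
export default async () => {
  await fetch('https://api.netlify.com/build_hooks/YOUR_BUILD_HOOK_ID', {
    method: 'POST',
  });
};

// Ask Netlify to run this function once a day.
export const config: Config = {
  schedule: '@daily',
};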

During the triggered rebuild, we’ll let the rss-parser package do exactly what it says it does: parse a list of RSS feeds that are contained in an array. The package also allows us to set a filter for the fetched results so that we only get ones from the past day, week, and so on. Personally, I only render the news from the last seven days to prevent content overload. We’ll get there!

But first…

What Is RSS?

RSS is a web feed technology that you can feed into a reader or news aggregator. Because RSS is standardized, you know what to expect when it comes to the feed’s format. That means we have a ton of fun possibilities when it comes to handling the data that the feed provides. Most news websites have their own RSS feed that you can subscribe to (this is Smashing Magazine’s RSS feed: https://www.smashingmagazine.com/feed/). An RSS feed is capable of updating every time a site publishes new content, which means it can be a quick source of the latest news, but we can tailor that frequency as well.

RSS feeds are written in an Extensible Markup Language (XML) format and have specific elements that can be used within it. Instead of focusing too much on the technicalities here, I’ll point you to the RSS specification. Don’t worry; that page should be scannable enough for you to find the most pertinent information you need, like the kinds of elements that are supported and what they represent. For this tutorial, we’re only using the following elements: <title>, <link>, <description>, <item>, and <pubDate>. We’ll also let our RSS parser package do some of the work for us.
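
For reference, here’s a trimmed-down sketch of what a feed using only those elements might look like (the URLs are placeholders):

<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>Example Site</title>
    <link>https://example.com</link>
    <description>The latest articles from Example Site</description>
    <item>
      <title>Hello, RSS!</title>
      <link>https://example.com/hello-rss</link>
      <description>A short summary of the article.</description>
      <pubDate>Mon, 21 Oct 2024 08:00:00 GMT</pubDate>
    </item>
  </channel>
</rss>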

Creating The Static Site

We’ll start by creating our Astro site! In your terminal, run pnpm create astro@latest. You can use any package manager you want — I’m simply trying out pnpm for myself.

After running the command, Astro’s chat-based helper, Houston, walks through some setup questions to get things started.

 astro   Launch sequence initiated.

   dir   Where should we create your new project?
         ./rss-buddy

  tmpl   How would you like to start your new project?
         Include sample files

    ts   Do you plan to write TypeScript?
         Yes

   use   How strict should TypeScript be?
         Strict

  deps   Install dependencies?
         Yes

   git   Initialize a new git repository?
         Yes

I like to use Astro’s sample files so I can get started quickly, but we’re going to clean them up a bit in the process. Let’s clean up the src/pages/index.astro file by removing everything inside of the <main></main> tags. Then we’re good to go!

From there, we can spin things up by running pnpm start. Your terminal will tell you which localhost address you can find your site at.

Pulling Information From RSS feeds

The src/pages/index.astro file is where we will make an array of RSS feeds we want to follow. We will be using Astro’s component script (frontmatter) for this, so between the two code fences (---), create an array of feedSources and add some feeds. If you need inspiration, you can copy this:

const feedSources = [
  'https://www.smashingmagazine.com/feed/',
  'https://developer.mozilla.org/en-US/blog/rss.xml',
  // etc.
]

Now we’ll install the rss-parser package in our project by running pnpm install rss-parser. This package is a small library that turns the XML that we get from fetching an RSS feed into JavaScript objects. This makes it easy for us to read our RSS feeds and manipulate the data any way we want.

Once the package is installed, open the src/pages/index.astro file, and at the top, we’ll import the rss-parser and instantiate the Parser class.

import Parser from 'rss-parser';
const parser = new Parser();

We use this parser to read our RSS feeds and (surprise!) parse them to JavaScript. We’re going to be dealing with a list of promises here. Normally, I would probably use Promise.all(), but the thing is, this is supposed to be an uncomplicated experience. If one of the feeds doesn’t work for some reason, I’d prefer to simply ignore it.

Why? Well, because Promise.all() rejects everything even if only one of its promises is rejected. That might mean that if one feed doesn’t behave the way I’d expect it to, my entire page would be blank when I grab my hot beverage to read the news in the morning. I do not want to start my day confronted by an error.

Instead, I’ll opt to use Promise.allSettled(). This method will actually let all promises complete even if one of them fails. In our case, this means any feed that errors will just be ignored, which is perfect.
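If you haven’t used it before, here is a tiny, self-contained sketch (with made-up promises) of how Promise.allSettled() behaves. Each promise gets its own status instead of one rejection failing the whole batch:

// Hypothetical illustration: one failing promise doesn't sink the rest.
const results = await Promise.allSettled([
  Promise.resolve('feed A parsed'),
  Promise.reject(new Error('feed B is down')),
]);

results.forEach((result) => {
  if (result.status === 'fulfilled') {
    console.log(result.value); // "feed A parsed"
  } else {
    console.warn(result.reason.message); // "feed B is down"
  }
});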

Let’s add this to the src/pages/index.astro file:

interface FeedItem {
  feed?: string;
  title?: string;
  link?: string;
  date?: Date;
}

const feedItems: FeedItem[] = [];

await Promise.allSettled(
  feedSources.map(async (source) => {
    try {
      const feed = await parser.parseURL(source);
      feed.items.forEach((item) => {
        const date = item.pubDate ? new Date(item.pubDate) : undefined;

        feedItems.push({
          feed: feed.title,
          title: item.title,
          link: item.link,
          date,
        });
      });
    } catch (error) {
      console.error(`Error fetching feed from ${source}:`, error);
    }
  })
);

This creates an array named feedItems. For each URL in the feedSources array we created earlier, the rss-parser retrieves the items and, yes, parses them into JavaScript. Then, we push whatever data we want into that array! We’ll keep it simple for now and only store the following:

  • The feed title,
  • The title of the feed item,
  • The link to the item,
  • And the item’s published date.

The next step is to ensure that all items are sorted by date so we’ll truly get the “latest” news. Add this small piece of code to our work:

const sortedFeedItems = feedItems.sort(
  (a, b) => (b.date ?? new Date()).getTime() - (a.date ?? new Date()).getTime()
);

Oh, and… remember when I said I didn’t want this RSS reader to render anything older than seven days? Let’s tackle that right now since we’re already in this code.

We’ll make a new variable called sevenDaysAgo and assign it a date. We’ll then set that date to seven days ago and use that logic before we add a new item to our feedItems array.

This is what the src/pages/index.astro file should now look like at this point:

---
import Layout from '../layouts/Layout.astro';
import Parser from 'rss-parser';
const parser = new Parser();

const sevenDaysAgo = new Date();
sevenDaysAgo.setDate(sevenDaysAgo.getDate() - 7);

const feedSources = [
  'https://www.smashingmagazine.com/feed/',
  'https://developer.mozilla.org/en-US/blog/rss.xml',
]

interface FeedItem {
  feed?: string;
  title?: string;
  link?: string;
  date?: Date;
}

const feedItems: FeedItem[] = [];

await Promise.allSettled(
  feedSources.map(async (source) => {
    try {
      const feed = await parser.parseURL(source);
      feed.items.forEach((item) => {
        const date = item.pubDate ? new Date(item.pubDate) : undefined;
        if (date && date >= sevenDaysAgo) {
          feedItems.push({
            feed: feed.title,
            title: item.title,
            link: item.link,
            date,
          });
        }
      });
    } catch (error) {
      console.error(`Error fetching feed from ${source}:`, error);
    }
  })
);

const sortedFeedItems = feedItems.sort(
  (a, b) => (b.date ?? new Date()).getTime() - (a.date ?? new Date()).getTime()
);

---

<Layout title="Welcome to Astro.">
  <main>
  </main>
</Layout>

Rendering XML Data

It’s time to show our news articles on the Astro site! To keep this simple, we’ll format the items in an unordered list rather than some other fancy layout.

All we need to do is update the <Layout> element in the file with the parsed feed data sprinkled in for each item’s title, URL, and publish date. Note that the <ul> sits outside of the map() call so that we render a single unordered list containing all of the items:

<Layout title="Welcome to Astro.">
  <main>
    <ul>
      {sortedFeedItems.map(item => (
        <li>
          <a href={item.link}>{item.title}</a>
          <p>{item.feed}</p>
          <p>{item.date}</p>
        </li>
      ))}
    </ul>
  </main>
</Layout>

Go ahead and run pnpm start from the terminal. The page should display an unordered list of feed items. Of course, everything is unstyled at the moment, but luckily for you, you can make it look exactly like you want with CSS!
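One optional polish: item.date is a Date object, so it renders with JavaScript’s verbose default string. If you’d like something shorter, you could format it right in the template. A small sketch, assuming a hard-coded locale is acceptable:

<p>{item.date?.toLocaleDateString('en-US', { dateStyle: 'medium' })}</p>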

And remember that there are even more fields available for each item if you want to display more information. If you log each item as it is parsed, you’ll see all of the fields you have at your disposal in the terminal running the dev server:

feed.items.forEach((item) => {
  console.log(item);
});

Scheduling Daily Static Site Builds

We’re nearly done! The feeds are being fetched, and they return data to us in JavaScript for use in our Astro page template. Since feeds are updated whenever new content is published, we need a way to fetch the latest items from them.

We want to avoid doing any of this manually. So, let’s set this site up on Netlify to gain access to their scheduled functions that trigger a rebuild and their build hooks that do the building. Again, other services do this, and you’re welcome to do this work with another provider; I’m just partial to Netlify since I work there. In any case, you can follow Netlify’s documentation for getting a new site up and running.

Once your site is hosted and live, you are ready to schedule your rebuilds. A build hook gives you a URL to use to trigger the new build, looking something like this:

https://api.netlify.com/build_hooks/your-build-hook-id

Let’s trigger builds every day at midnight. We’ll use Netlify’s scheduled functions. That’s really why I’m using Netlify to host this in the first place. Having them at the ready via the host greatly simplifies things since there’s no server work or complicated configuration to get this going. Set it and forget it!

We’ll install @netlify/functions in the project and then create the following file in the project’s root directory: netlify/functions/deploy.ts.

This is what we want to add to that file:

// netlify/functions/deploy.ts

import type { Config } from '@netlify/functions';

const BUILD_HOOK =
  'https://api.netlify.com/build_hooks/your-build-hook-id'; // replace me!

export default async (req: Request) => {
  await fetch(BUILD_HOOK, {
    method: 'POST',
  })
};

export const config: Config = {
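  // Cron syntax (minute hour day-of-month month day-of-week): run every day at 00:00 UTC.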
  schedule: '0 0 * * *',
};

If you commit your code and push it, your site should re-deploy automatically. From that point on, it follows a schedule that rebuilds the site every day at midnight, ready for you to grab your morning brew and catch up on everything you think is important.

How A Bottom-Up Design Approach Enhances Site Accessibility

Accessibility is key in modern web design. A site that doesn’t consider how its user experience may differ for various audiences — especially those with disabilities — will fail to engage and serve everyone equally. One of the best ways to prevent this is to approach your site from a bottom-up perspective.

Understanding Bottom-Up Design

Conventional, top-down design approaches start with the big picture before breaking these goals and concepts into smaller details. Bottom-up philosophies, by contrast, consider the minute details first, eventually achieving the broader goal piece by piece.

This alternative way of thinking is important for accessibility in general because it reflects how neurodivergent people commonly think. While non-autistic people tend to process information from the top down, those with autism often employ a bottom-up way of thinking.

Of course, there is considerable variation, and researchers have identified three kinds of thinkers within the autism spectrum:

  • Visual thinkers who think in images;
  • Pattern thinkers who think of concepts in terms of patterns and relationships;
  • Verbal thinkers who think only in word detail.

Still, research shows that people with autism and ADHD often favor bottom-up processing rather than the top-down approach you often see in neurotypical users. Consequently, a top-down strategy means you may miss details your audience will notice, and your site may not feel easily usable for all users.

As a real-world example, consider the task of writing an essay. Many students are instructed to start an essay assignment by thinking about the main point they want to convey and then create an outline with points that support the main argument. This is top-down thinking — starting with the big picture of the topic and then gradually breaking down the topic into points and then later into words that articulate these points.

In contrast, someone who uses a bottom-up approach might start an essay with no outline, freely jotting down every idea as it comes to mind, perhaps beginning with one particular idea or example the writer finds interesting and branching off from there. Then, once all the ideas have been written out, the writer goes back to group related ideas together and arrange them into a logical outline. This writer starts with the small details of the essay and then works these details into the big picture of the final form.

In web design, in particular, a bottom-up approach means starting with the specifics of the user experience instead of the desired effect. You may determine a readable layout for a single blog post, then ask how that page relates to others and slowly build on these decisions until you have several well-organized website categories.

You may even get more granular. Say you start your site design by placing a menu at the bottom of a mobile site to make it easier to tap with one hand, improving ease of use. Then, you build a drop-down menu around that choice — placing the most popular or needed options at the bottom instead of the top for easy tapping. From there, you may have to rethink larger-scale layouts to work around those interactive elements being lower on the screen, slowly addressing larger categories until you have a finished site design.

In either case, the idea of bottom-up design is to begin with the most specific problems someone might have. You then address them in sequence instead of determining the big picture first.

Benefits Of A Bottom-Up Approach For Accessible Design

While neither bottom-up nor top-down approaches dominate the industry, some web engineers prefer the bottom-up approach due to the various accessibility benefits this process provides. This strategy has several accessibility benefits.

Putting User Needs First

The biggest benefit of bottom-up methods is that they prioritize the user’s needs.

Top-down approaches seem organized, but they often result in a site that reflects the designer’s choices and beliefs more than it serves your audience.

Consider some of the complaints that social media users have made over the years related to usability and accessibility for the everyday user. For example, many users complain that their Facebook feed randomly refreshes as they scroll for the sake of serving the most up-to-date content, which makes it virtually impossible to get back to a post they viewed but didn’t think to save. Likewise, TikTok’s watch history feature is still difficult for many users to find without viewing an outside tutorial on the subject.

This is a common problem: the vast majority of website home pages have Web Content Accessibility Guidelines (WCAG) errors. While a bottom-up alternative doesn’t mean you won’t make any mistakes, it may make them less likely, as bottom-up thinking increases awareness of new stimuli so you can catch things you’d otherwise overlook. It’s easier to meet users’ needs when you build your entire site around their experience instead of treating UX as an afterthought.

Consider the website of one multi-billion-dollar holding company. The overall design philosophy is understandable: it’s simple and direct, choosing to focus on information instead of fancy aesthetics that may not suit the company image. However, you could argue it loses itself in this big picture.

While it is simple, the lack of menus or color contrast and the small font make it harder to read and a little overwhelming. This confusion can counteract any accessibility benefits of its simplicity.

Alternatively, even a simple website redesign could include intuitive menus, additional contrast, and an accessible font for easy navigation across the site.

Other sites offer a better example of web design centered around users’ needs. Concise, clear menus line the top of the page to aid quicker, easier navigation. The color scheme is simple enough to avoid confusion but provides enough contrast to make everything easy to read, something a sans-serif font further helps.

Ensuring Accessibility From The Start

A top-down method also makes catering to a diverse audience difficult because you may need to shoehorn features into an existing design.

For example, say a local government agency creates a website focused on providing information and services to a general audience of residents. The site originally features high-resolution images, bright colors, and interactive charts.

However, they realize the images are not accessible to people navigating the site with screen readers, while multiple layers of submenus are difficult for keyboard-only users. Further, the bright colors make it hard for visually impaired users to read the site’s information.

The agency, realizing these accessibility concerns, adds captions to each image. However, the captions disrupt the originally intended visual aesthetics and user flow. Further, adjusting the bright colors would involve completely rethinking the site’s entire color palette. If these considerations had been made before the site was built, the site build could have specifically accommodated these elements while still creating an aesthetically pleasing and user-friendly result.

Alternatively, a site initially built with high contrast, a calm color scheme, clear typography, simple menus, and reduced imagery would make this site much more accessible to a wide user base from the get-go.

As a real-world example, consider the Awwwards website. There are plenty of menus to condense information and ease navigation without overloading the screen, a solid accessibility choice. However, there does not seem to be consistent thought in these menus’ placement or organization.

There are far too many menus; some are at the top while others are at the bottom, and a scrolling top bar adds further distraction. It seems like Awwwards may have added extra menus as an afterthought to improve navigation, which leads to inconsistencies and crowding because navigation wasn’t part of the design from the beginning.

In contrast, bottom-up alternatives address usability issues from the beginning, which results in a more functional, accessible website.

Redesigning a system to address a usability issue it didn’t originally make room for is challenging. It can lead to errors like broken links and other unintended consequences that may hinder access for other visitors. Some sites have even claimed to lose traffic after a redesign. While bottom-up approaches won’t eliminate those possibilities, they make them less likely by centering everything around usage from the start.

The website of one museum in Stockholm, Sweden, showcases a more cohesive approach to ensuring accessibility. Like Awwwards, it uses menus to aid navigation and organization, but there seems to be more forethought in these features. All menus are at the top, and there are fewer of them, resulting in less clutter and a faster way to find what you’re looking for. The overall design complements this by keeping things simple and neat so that the menus stand out.

Increasing Awareness

Similarly, bottom-up design ensures you don’t miss as many accessibility concerns. When you start at the top, you may not consider all the factors that influence the big picture before determining what details fit within it. Beginning with the specifics instead makes it easier to spot and address problems you would’ve missed otherwise.

This awareness is particularly important for serving a diverse population. An estimated 1.6 billion people worldwide have a significant disability. That means there’s a huge range of varying needs to account for, but most people lack firsthand experience living with these conditions. Consequently, it’s easy to miss things impacting others’ UX. You can overcome that knowledge gap by asking how everyone can use your site first.

Bottom-Up vs. Top-Down: Which Is Best for You?

As these benefits show, a bottom-up design philosophy can be helpful when building an accessible site. Still, top-down methods can be advantageous at times, too. Which is best depends on your situation.

Top-down approaches are a good way to ensure a consistent brand image, as you start with the overall idea and base future decisions on this concept. They also make it easier to create a design hierarchy to facilitate decision-making within your team. When anyone has a question, they can turn to whoever is above them or refer to the broader goals. Such organization can also mean faster design processes.

Bottom-up methods, by contrast, are better when accessibility for a diverse audience is your main concern. It may be harder to keep everyone on the same overall design philosophy page, but it usually produces a more functional website. You can catch and solve problems early and pay great attention to detail. However, this can mean longer design cycles, which can incur extra costs.

It may come down to what your team is most comfortable with. People think and work differently, with some preferring a top-down approach while others find bottom-up more natural. Combining the two — starting with a top-down model before tackling updates from a bottom-up perspective — can be beneficial, too.

Strategies For Implementing A Bottom-Up Design Model

Should you decide a bottom-up design method is best for your goals, here are some ways you can embrace this philosophy.

Talk To Your Existing User Base

One of the most important factors in bottom-up web design is to center everything around your users. As a result, your existing user base — whether from a separate part of your business or another site you run — is the perfect place to start.

Survey customers and web visitors about their experience on your sites and others. Ask what pain points they have and what features they’d appreciate. Any commonalities between responses deserve attention. You can also review other accessible sites for inspiration on accessible functionality, but first-hand user feedback should form the core of your mission.

Look To Past Projects For Accessibility Gaps

Past sites and business projects can also reveal what specifics you should start with. Look for any accessibility gaps by combing through old customer feedback and update histories and using these sites yourself to find issues. Take note of any barriers or usability concerns to address in your next website.

Remember to document everything you find as you go. Much of the data organizations collect is unstructured, making it difficult to analyze later. Reversing that trend by organizing your findings and recording your accessible design process will streamline future accessibility optimization efforts.

Divide Tasks But Communicate Often

Keep in mind that a bottom-up strategy can be time-consuming. One of the reasons why top-down alternatives are popular is because they’re efficient. You can overcome this gap by splitting tasks between smaller teams. However, these groups must communicate frequently to ensure separate design considerations work as a cohesive whole.

A DevOps approach is helpful here. Many teams report that DevOps has helped them achieve a faster time to market, and 61% report higher-quality deliverables. It also includes space for both detailed work and team-wide meetings to keep everyone on track. Such benefits ensure you can remain productive in a bottom-up strategy.

Accessible Websites Need A Bottom-Up Design Approach

You can’t overstate the importance of accessible website design. By the same token, bottom-up philosophies are crucial in modern site-building. A detail-oriented approach makes it easier to serve a more diverse audience along several fronts. Making the most of this opportunity will both extend your reach to new niches and make the web a more equitable place.

The Web Content Accessibility Guidelines (WCAG) are a good place to start. While these guidelines don’t necessarily describe how to apply a bottom-up approach, they do outline critical user needs and accessibility concerns your design should consider. The W3C site also offers free, comprehensive resources for designers and developers.

Familiarizing yourself with these standards and best practices will make it easier to understand your audience before you begin designing your site. You can then build a more accessible platform from the ground up.

Additionally, the following are some valuable related reads that can act as inspiration in accessibility-centered and user-centric design.

  • Inclusive Design for a Digital World by Regine M. Gilbert
  • Practical Web Inclusion and Accessibility by Ashley Firth
  • Don’t Make Me Think by Steve Krug

By employing bottom-up thinking as well as resources like these in your design approach, you can create websites that not only meet current accessibility standards but actively encourage site use among users of all backgrounds and abilities.

Further Reading On SmashingMag

  • “,” Eric Bailey
  • “,” Vitaly Friedman
  • “,” Sandrina Pereira
  • “,” Brecht De Ruyte

Interview With Björn Ottosson, Creator Of The Oklab Color Space

Oklab is a new perceptual color space created by the Swedish engineer Björn Ottosson and now supported in all major browsers. In this interview, Philip Jägenstedt explores how and why Björn created Oklab and how it spread across the ecosystem.

Note: The original interview was conducted in Swedish; this is a translated version.

About Björn

Philip Jägenstedt: Tell me a little about yourself, Björn.

Björn Ottosson: I worked for many years in the game industry on game engines and games like FIFA, Battlefield, and Need for Speed. I’ve always been interested in technology and its interaction with the arts. I’m an engineer, but I’ve always held both of these interests.

On Working With Color

Philip: For someone who hasn’t dug into colors much, what’s so hard about working with them?

Björn: Intuitively, colors can seem quite simple. A color can be lighter or darker, it can be more blue or more green, and so on. Everyone with typical color vision has a fairly similar experience of color, and this can be modeled.

However, the way we manipulate colors in software usually doesn’t align with human perception of colors. The most common color space is sRGB. There’s also HSL, which is common for choosing colors, but it’s also based on sRGB.

One problem with sRGB is that in a gradient between blue and white, it becomes a bit purple in the middle of the transition. That’s because sRGB really isn’t created to mimic how the eye sees colors; rather, it is based on how CRT monitors work. That means it works with certain frequencies of red, green, and blue, and also a non-linear gamma encoding. It’s a miracle it works as well as it does, but it’s not connected to color perception. When using those tools, you sometimes get surprising results, like purple in the gradient.
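Note: Modern browsers can interpolate gradients in different color spaces, so you can see the difference Björn describes with two lines of standard CSS:

/* Interpolated in sRGB (the default for these colors): drifts purple in the middle. */
.gradient-srgb { background: linear-gradient(to right, blue, white); }

/* Interpolated in Oklab: a perceptually smoother transition. */
.gradient-oklab { background: linear-gradient(to right in oklab, blue, white); }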

On Color Perception

Philip: How do humans perceive color?

Björn: When light enters the eye and hits the retina, it’s processed in many layers of neurons and creates a mental impression. It’s unlikely that the process would be simple and linear, and it’s not. But incredibly enough, most people still perceive colors similarly.

People have been trying to understand colors and have created color wheels and similar visualizations for hundreds of years. During the 20th century, a lot of research and modeling went into color vision. For example, the CIE XYZ model is based on how sensitive our photoreceptor cells are to different frequencies of light. CIE XYZ is still a foundational color space on which all other color spaces are based.

There were also attempts to create simple models matching human perception based on XYZ, but as it turned out, it’s not possible to model all color vision that way. Perception of color is incredibly complex and depends, among other things, on whether it is dark or light in the room and the background color it is against. When you look at a photograph, it also depends on what you think the color of the light source is. Optical illusions are a typical example of color vision being very context-dependent. It is almost impossible to model this perfectly.

Models that try to take all of this complexity into account are called color appearance models. Although they have many applications, they’re not that useful if you don’t know whether the viewer is in a dark or bright room, or the other viewing conditions.

The odd thing is that there’s a gap between the tools we typically use — such as sRGB and HSL — and the findings of this much older research. To an extent, this makes sense because when HSL was developed in the 1970s, we didn’t have much computing power, so it’s a fairly simple translation of RGB. However, not much has changed since then.

We have a lot more processing power now, but we’ve settled for fairly simple tools for handling colors in software.

Display technology has also improved. Many displays now have different RGB primaries, i.e., a redder red, greener green, or bluer blue. sRGB cannot reach all colors available on these displays. The new P3 color space can, but it’s very similar to sRGB, just a little wider.

On Creating Oklab

Philip: What, then, is Oklab, and how did you create it?

Björn: When working in the game industry, sometimes I wanted to do simple color manipulations like making a color darker or changing the hue. I researched existing color spaces and how good they are at these simple tasks and concluded that all of them are problematic in some way.

Many people know about CIE Lab. It’s quite close to human perception of color, but the handling of hue is not great. For example, a gradient between blue and white turns out purple in CIE Lab, similar to the same gradient in sRGB. Some color spaces handle hue well but have other issues to consider.

When I left my job in gaming to pursue education and consulting, I had a bit of time to tackle this problem. Oklab is my attempt to find a better balance, something Lab-like but “okay”.

I based Oklab on two other color spaces, CIECAM16 and IPT. I used the lightness and saturation predictions from CIECAM16, which is a color appearance model, as a target. I actually wanted to use the datasets used to create CIECAM16, but I couldn’t find them.

IPT was designed to have better hue uniformity. In experiments, people were asked to match light and dark colors, as well as saturated and unsaturated colors, which resulted in a dataset describing which colors, subjectively, have the same hue. IPT has a few other issues but is the basis for hue in Oklab.

Using these three datasets, I set out to create a simple color space that would be “okay”. I used an approach quite similar to IPT but combined it with the lightness and saturation estimates from CIECAM16. The resulting Oklab still has good hue uniformity but also handles lightness and saturation well.

Philip: How about the name Oklab? Why is it just okay?

Björn: This is a bit tongue-in-cheek, with some amount of humility in it.

For the tasks I had in mind, existing color spaces weren’t okay, and my goal was to make one that is. At the same time, it is possible to delve deeper. If a university had worked on this, they could have run studies with many participants. For a color space intended mainly for use on computer and phone screens, you could run studies in typical environments where they are used. It’s possible to go deeper.

Nevertheless, I took the datasets I could find and made the best of what I had. The objective was to make a very simple model that’s okay to use. And I think it is okay, and I couldn’t come up with anything better. I didn’t want to call it Björn Ottosson Lab or something like that, so I went with Oklab.

Philip: Does the name follow a tradition of calling things okay? I know there’s also a file format with a similar name.

Björn: No, I didn’t follow any tradition here. Oklab was just the name I came up with.

On Oklab Adoption

Philip: I discovered Oklab when it suddenly appeared in all browsers. Things often move slowly on the web, but in this case, things moved very quickly. How did it happen?

Björn: I was surprised, too! I wrote a blog post and shared it on Twitter.

I have a lot of contacts in the gaming industry and some contacts in the Visual Effects (VFX) industry. I expected that people working with shaders or visual effects might try this out, and maybe it would be used in some games, perhaps as an effect for a smooth color transition.

But the blog post was spread much more widely than I thought. It was on Hacker News, and many people read it.

The code for Oklab is only 10 lines long, so many open-source libraries have adopted it. This all happened very quickly.

Someone from the W3C got in touch and asked me some questions about Oklab. We discussed it a bit, and I explained how it works and why I created it. He gave a talk at a conference about it, and then he pushed for it to be added to CSS.

Photoshop also changed its gradients to use Oklab. All of this happened organically without me having to cheer it on.

On Okhsl

Philip: In a later blog post, you introduced two other color spaces, Okhsv and Okhsl. You’ve already talked about HSL, so what is Okhsl?

Björn: When picking colors, HSL has a big advantage, which is that the parameter space is simple. Any value 0-360 for hue (H) together with any values 0-1 for saturation (S) and lightness (L) are valid combinations and result in different colors on screen. The geometry of HSL is a cylinder, and there’s no way to end up outside that cylinder accidentally.

By contrast, Oklab contains all physically possible colors, but there are combinations of values that don’t work because they point at colors that don’t exist on your display. For example, starting from a light and saturated yellow in Oklab and rotating the hue to blue, that blue color does not exist in sRGB; there are only darker and less saturated blues. That’s because sRGB has a strange shape in Oklab, so it’s easy to end up outside it. This makes it difficult to select and manipulate colors with Oklab or Oklch.

Okhsl was an attempt at compromise. It maintains Oklab’s behavior for colors that are not very saturated, close to gray, and beyond that, stretches out to a cylinder that contains all of sRGB. Another way to put it is that the strange shape of sRGB in Oklab has been stretched into a cylinder with reasonably smooth transitions.

The result is similar to HSL, where all parameters can be changed independently without ending up outside sRGB. It also makes Okhsl more complicated than Oklab. There are unavoidable compromises to get something with the characteristics that HSL has.

Everything with color is about compromises. Color vision is so complex that it’s about making practical compromises.

This is an area where I wish there were more research. If I have a white background and want to pick some nice colors to put on it, then you can make a lot of assumptions. Okhsl solves many things, but is it possible to do even better?

On Color Compromises

Philip: Some people who have tried Oklab say there are too many dark shades. You changed that in Okhsl with a new lightness estimate.

Björn: This is because Oklab is exposure invariant and doesn’t account for viewing conditions, such as the background color. On the web, there’s usually a white background, which makes it harder to see the difference between black and other dark colors. But if you look at the same gradient on a black background, the difference is more apparent.

CIE Lab handles this, and I tried to handle it in Okhsl, too. So, gradients in Okhsl look better on a white background, but there will be other issues on a black background. It’s always a compromise.

And, Finally…

Philip: Final question: What’s your favorite color?

Björn: I would have to say burgundy. Burgundy, dark greens, and navy blues are favorites.

Philip: Thank you for your time, Björn. I hope our readers have learned something, and I’ll point them to your blog posts, where you go into more depth about Oklab and Okhsl.

Björn: Thank you!