THAT'S A PICKER!
Where the story begins
This page explores user experience and design patterns at scale, focusing on how seemingly simple elements like pickers (the HTML select) behave in large applications. While many frontend developers are familiar with these patterns, the discussion also touches on API design, prototyping, and other concerns relevant to large-scale projects.
For example, my team once encountered a situation where an endpoint used by an HTML select component was responsible for nearly 2% of our total database time. The issue was that the data was prefetched every time the page loaded, without caching or pagination. By introducing a cache for recently selected options, we reduced average session times by 370ms.
Designing a simple HTML select: how hard can it be?
This page walks through various UX and design patterns, such as prefetching, virtualization, pagination, infinite loading, debouncing, lazy loading, memoization, and caching, and explains what works well at scale and why certain approaches fall short.
Prefetching
Prefetching data before it's needed can seem like good UX practice, but in reality most selects deal with frequently changing data, like a list of users. That kind of data may not be cacheable, since a stale cache means showing an outdated list. Prefetching is only practical for critical paths with datasets that rarely change. There is also the question of how many entries to show: loading 1000+ items into a dropdown can cause noticeable lag, especially on slower machines.
Usage
Critical, small datasets
Pros
- Data available after initial render
- Great for critical data
Cons
- Risk of overfetching
- Large datasets can cause lag
- Impacts time to interact
- Only suitable for small datasets
- Filtering must happen on the frontend
Example
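A minimal React/TypeScript sketch of prefetching, assuming a hypothetical /api/users endpoint that returns the full list and a simple User shape: everything is fetched once on mount, before the user ever opens the select.

```tsx
import { useEffect, useState } from "react";

type User = { id: string; name: string };

export function PrefetchedUserSelect() {
  const [users, setUsers] = useState<User[]>([]);

  useEffect(() => {
    // Fetch everything up front; fine for small, rarely changing datasets,
    // but risks overfetching and slows time-to-interactive for large ones.
    fetch("/api/users") // hypothetical endpoint
      .then((res) => res.json() as Promise<User[]>)
      .then(setUsers)
      .catch(() => setUsers([]));
  }, []);

  return (
    <select>
      {users.map((u) => (
        <option key={u.id} value={u.id}>
          {u.name}
        </option>
      ))}
    </select>
  );
}
```

Because the data is already on the client, any filtering or searching has to happen on the frontend as well.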
Virtualization
Virtualization optimizes the rendering of huge datasets by rendering only the items currently visible in the scroll window instead of the whole list at once. It helps rendering performance, but it isn't ideal on its own when datasets aren't limited, as it can still lead to overfetching.
Usage
Critical, small datasets. Useful for complex, custom select options.
Pros
- Perceptible performance improvement for complex lists
- Data available after initial render
- Great for critical data
Cons
- Risk of overfetching
- Large datasets can still cause lag
- Impacts time to interact
- Only suitable for small datasets
- Filtering must happen on the frontend
- May require an external library, increasing bundle size
Example
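A hand-rolled windowing sketch (libraries such as react-window implement the same idea): only the rows inside the visible scroll window are rendered, while a tall spacer keeps the scrollbar sized for the full list. The fixed row height and viewport height are assumptions.

```tsx
import { useState } from "react";

const ITEM_HEIGHT = 32;     // fixed row height in px (assumption)
const VIEWPORT_HEIGHT = 320; // visible list height in px (assumption)

export function VirtualizedOptionList({ items, onPick }: {
  items: string[];
  onPick: (value: string) => void;
}) {
  const [scrollTop, setScrollTop] = useState(0);

  // Compute which slice of the list is currently visible.
  const start = Math.floor(scrollTop / ITEM_HEIGHT);
  const visibleCount = Math.ceil(VIEWPORT_HEIGHT / ITEM_HEIGHT) + 1;
  const visible = items.slice(start, start + visibleCount);

  return (
    <div
      style={{ height: VIEWPORT_HEIGHT, overflowY: "auto" }}
      onScroll={(e) => setScrollTop(e.currentTarget.scrollTop)}
    >
      {/* The spacer keeps the scrollbar sized for the full list. */}
      <div style={{ height: items.length * ITEM_HEIGHT, position: "relative" }}>
        {visible.map((item, i) => (
          <div
            key={start + i}
            style={{
              position: "absolute",
              top: (start + i) * ITEM_HEIGHT,
              height: ITEM_HEIGHT,
            }}
            onClick={() => onPick(item)}
          >
            {item}
          </div>
        ))}
      </div>
    </div>
  );
}
```

Note that the native select cannot be virtualized, so this only applies to custom dropdowns.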
Pagination
Pagination is a must-have at scale to avoid overfetching. Smaller payloads keep the UI responsive, but they also mean more requests overall. One challenge is handling searches that return many similar results while only a limited page of them is shown.
Usage
Best for large datasets, especially if initial data is 'smart', like recently modified or visited items.
Pros
- Ideal for large datasets
- Faster than fetching one big payload
- Improves perceived performance
- Partial data available after the initial render
Cons
- Adds extra server load
- Problematic when matching results exceed the page limit
Example
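A paginated lookup sketch, assuming a hypothetical /api/users endpoint that accepts search, page, and limit query parameters and returns a single page of matches.

```tsx
import { useEffect, useState } from "react";

type User = { id: string; name: string };

export function PaginatedUserSelect() {
  const [search, setSearch] = useState("");
  const [page, setPage] = useState(1);
  const [users, setUsers] = useState<User[]>([]);

  useEffect(() => {
    // Only one page of matches is requested at a time.
    const params = new URLSearchParams({ search, page: String(page), limit: "20" });
    fetch(`/api/users?${params}`) // hypothetical endpoint
      .then((res) => res.json() as Promise<User[]>)
      .then(setUsers)
      .catch(() => setUsers([]));
  }, [search, page]);

  return (
    <div>
      <input
        value={search}
        onChange={(e) => {
          setSearch(e.target.value);
          setPage(1); // a new search always starts at the first page
        }}
        placeholder="Search users…"
      />
      <ul>
        {users.map((u) => (
          <li key={u.id}>{u.name}</li>
        ))}
      </ul>
      <button onClick={() => setPage((p) => p + 1)}>Next page</button>
    </div>
  );
}
```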
Infinite Loading
Infinite loading helps users who might not remember exact search terms. Just scroll down, and additional requests will be made automatically. However, keep in mind that at scale, the number of requests matters.
Usage
Best for large datasets with similar results. Great when the initial data is 'smart', like recent items.
Pros
- Ideal for large datasets
- Faster than fetching one big payload
- Improves perceived performance
- Partial data available after the initial render
- Better for similar results
Cons
- Adds extra server load
Example
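An infinite-loading sketch built on the same assumed paginated endpoint: when the user scrolls near the bottom of the list, the next page is requested and appended to what is already rendered.

```tsx
import { useEffect, useState } from "react";

type User = { id: string; name: string };

export function InfiniteUserList() {
  const [users, setUsers] = useState<User[]>([]);
  const [page, setPage] = useState(1);
  const [loading, setLoading] = useState(false);

  useEffect(() => {
    setLoading(true);
    fetch(`/api/users?page=${page}&limit=20`) // hypothetical endpoint
      .then((res) => res.json() as Promise<User[]>)
      // Append instead of replace, so earlier pages stay in the list.
      .then((next) => setUsers((prev) => [...prev, ...next]))
      .catch(() => {})
      .finally(() => setLoading(false));
  }, [page]);

  return (
    <ul
      style={{ maxHeight: 320, overflowY: "auto" }}
      onScroll={(e) => {
        const el = e.currentTarget;
        const nearBottom = el.scrollTop + el.clientHeight >= el.scrollHeight - 40;
        // Request the next page only when idle, to avoid duplicate fetches.
        if (nearBottom && !loading) setPage((p) => p + 1);
      }}
    >
      {users.map((u) => (
        <li key={u.id}>{u.name}</li>
      ))}
    </ul>
  );
}
```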
Debouncing
Debouncing is essential at scale because it reduces the number of requests. Instead of fetching data on every keystroke, the request is delayed until the user pauses typing, saving server resources. Fewer requests mean less CPU load.
Usage
Best for large datasets, frequently used selects. Useful when the initial data is 'smart', like recently modified items.
Pros
- Ideal for large datasets
- Faster than fetching one big payload
- Improves perceived performance
- Partial data available after the initial render
- Reduces traffic by avoiding fetches on every keystroke
Cons
- Adds extra server load
Example
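A debouncing sketch: the request fires only after the user has stopped typing for 300ms (an arbitrary delay), and every new keystroke cancels the pending timer. The search endpoint is again an assumption.

```tsx
import { useEffect, useState } from "react";

type User = { id: string; name: string };

export function DebouncedUserSearch() {
  const [search, setSearch] = useState("");
  const [users, setUsers] = useState<User[]>([]);

  useEffect(() => {
    // Wait 300ms after the last keystroke before hitting the server.
    const timer = setTimeout(() => {
      fetch(`/api/users?search=${encodeURIComponent(search)}`) // hypothetical endpoint
        .then((res) => res.json() as Promise<User[]>)
        .then(setUsers)
        .catch(() => setUsers([]));
    }, 300);

    // Each new keystroke cancels the pending timer, so only the final
    // value of a typing burst triggers a request.
    return () => clearTimeout(timer);
  }, [search]);

  return (
    <div>
      <input value={search} onChange={(e) => setSearch(e.target.value)} />
      <ul>
        {users.map((u) => (
          <li key={u.id}>{u.name}</li>
        ))}
      </ul>
    </div>
  );
}
```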
Lazy Loading
Lazy loading makes sense when the data isn't critical. For example, at checkout, you would prefetch delivery options, but for optional fields, lazy loading can reduce initial load times.
Usage
Non-critical datasets or when prioritizing critical paths.
Pros
- Ideal for large datasets
- Faster than fetching one big payload
- Improves perceived performance
- Non-critical datasets
- Unblocks critical paths
Cons
- Data not available after the initial render
- Impacts interaction time
Example
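A lazy-loading sketch: the options are fetched the first time the select receives focus, keeping the request off the critical path of the initial render. The /api/carriers endpoint is hypothetical.

```tsx
import { useState } from "react";

type Carrier = { id: string; name: string };

export function LazyCarrierSelect() {
  // null means "not loaded yet", so we can tell apart "empty" and "untouched".
  const [options, setOptions] = useState<Carrier[] | null>(null);

  const loadOnce = () => {
    if (options !== null) return; // already loaded, skip the request
    fetch("/api/carriers") // hypothetical endpoint
      .then((res) => res.json() as Promise<Carrier[]>)
      .then(setOptions)
      .catch(() => setOptions([]));
  };

  return (
    <select onFocus={loadOnce}>
      <option value="">Select a carrier…</option>
      {(options ?? []).map((c) => (
        <option key={c.id} value={c.id}>
          {c.name}
        </option>
      ))}
    </select>
  );
}
```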
Memoization
Memoization helps avoid redundant requests. For example, after searching for 'Henry', clearing the input, and searching for 'Henry' again, the previously cached results can be reused instead of fetching the data a second time, improving performance.
Usage
Useful for selects with frequently changing props, where the user might need to re-select the same value multiple times.
Pros
- Great for slow endpoints
- Reduces traffic
- Improves perceived performance for repeated searches
Cons
- Data not available after the initial render
- Affects initial interaction time
Example
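A memoization sketch: responses are cached in memory per search term for the lifetime of the component, so repeating an earlier search (for example, 'Henry') reuses the stored result instead of hitting the server again. The endpoint is hypothetical.

```tsx
import { useEffect, useRef, useState } from "react";

type User = { id: string; name: string };

export function MemoizedUserSearch() {
  const [search, setSearch] = useState("");
  const [users, setUsers] = useState<User[]>([]);
  // In-memory cache keyed by the search term.
  const cache = useRef(new Map<string, User[]>());

  useEffect(() => {
    const cached = cache.current.get(search);
    if (cached) {
      setUsers(cached); // cache hit: no network request
      return;
    }
    fetch(`/api/users?search=${encodeURIComponent(search)}`) // hypothetical endpoint
      .then((res) => res.json() as Promise<User[]>)
      .then((data) => {
        cache.current.set(search, data);
        setUsers(data);
      })
      .catch(() => setUsers([]));
  }, [search]);

  return (
    <div>
      <input value={search} onChange={(e) => setSearch(e.target.value)} />
      <ul>
        {users.map((u) => (
          <li key={u.id}>{u.name}</li>
        ))}
      </ul>
    </div>
  );
}
```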
Recent Items Cache
Caching recent items boosts both user experience and performance. If the data isn’t sensitive, storing it on the client side lets users quickly access recently used options without making additional requests; a sketch of this approach follows the list of trade-offs below.
Usage
Best for frequently used selects.
Pros
- Great for slow endpoints
- Reduces traffic
- Improves perceived performance
- Data available after the initial render
- Smart select behavior
Cons
- Requires caching logic
- Not suitable for sensitive data
- Cache-clearing mechanism needed
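A recent-items cache sketch: the last few selections are kept in localStorage (so only use it for non-sensitive data) and can be rendered before any request is made. The storage key and the five-item limit are arbitrary choices.

```tsx
import { useState } from "react";

type User = { id: string; name: string };

const STORAGE_KEY = "recent-users"; // arbitrary key name
const MAX_RECENT = 5;               // arbitrary cap

function readRecent(): User[] {
  try {
    return JSON.parse(localStorage.getItem(STORAGE_KEY) ?? "[]");
  } catch {
    return []; // corrupted or missing cache falls back to an empty list
  }
}

function rememberSelection(user: User): User[] {
  // Most recent first, de-duplicated, capped at MAX_RECENT entries.
  const next = [user, ...readRecent().filter((u) => u.id !== user.id)].slice(0, MAX_RECENT);
  localStorage.setItem(STORAGE_KEY, JSON.stringify(next));
  return next;
}

export function RecentUsersList({ onPick }: { onPick: (user: User) => void }) {
  const [recent, setRecent] = useState<User[]>(readRecent);

  const handlePick = (user: User) => {
    setRecent(rememberSelection(user));
    onPick(user);
  };

  return (
    <ul>
      {recent.map((u) => (
        <li key={u.id}>
          <button onClick={() => handlePick(u)}>{u.name}</button>
        </li>
      ))}
    </ul>
  );
}
```

In a real select this list would typically be shown above the regular search results, and a cache-clearing mechanism would be needed when the underlying data changes.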