If you have ever dug into frontend performance, you have probably run into the acronyms LCP, FCP, CLS, and TTFB.
These metrics are useful for what they measure; the problem is that in many modern apps they are no longer enough. It is quite common to have acceptable metrics while the actual user experience is worse than they portray.
The bigger issue is that most RUM (Real User Monitoring) analytics tools still focus primarily on these metrics. That model no longer matches how modern web applications are built, so teams get a view that is technically correct but incomplete.
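For context, here is roughly how these numbers are collected in the browser. This is a minimal sketch using the standard PerformanceObserver and Navigation Timing APIs, not any particular RUM vendor's implementation:

```ts
// Largest Contentful Paint: the observer fires once per new LCP candidate;
// the last candidate before the user interacts or the page is hidden wins.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log("LCP candidate:", entry.startTime, "ms");
  }
}).observe({ type: "largest-contentful-paint", buffered: true });

// First Contentful Paint arrives through the "paint" entry type.
new PerformanceObserver((list) => {
  const fcp = list.getEntriesByName("first-contentful-paint")[0];
  if (fcp) console.log("FCP:", fcp.startTime, "ms");
}).observe({ type: "paint", buffered: true });

// Time To First Byte, derived from the Navigation Timing entry.
const [nav] = performance.getEntriesByType(
  "navigation"
) as PerformanceNavigationTiming[];
console.log("TTFB:", nav.responseStart, "ms");
```

Notice that every one of these is anchored to the initial document load; nothing here knows anything about what the app does afterwards.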
Old versus New
The model many of us grew up with is simple: the server responds, the browser paints, and the page becomes usable. This is the classic MPA (multi-page application) flow, where each URL starts a new document lifecycle, and it is the model the metrics and RUM tools above were designed around.
Today we mostly ship SPAs, where route transitions do not always trigger a full reload, or hybrids, where each route is server-rendered but still boots a JavaScript-heavy application with progressive rendering patterns.
In practice this splits the experience into two timelines: the page-load timeline and the JavaScript bundle completion timeline. FCP, TTFB, and the rest only cover the first half.
Page Is Painted, But Not Really Ready
Skeletons, spinners, route-level fetches after the initial render, and partial hydration are all modern patterns that improve development and delivery, but they skew FCP/LCP through no fault of the metrics themselves: a fast-painting skeleton can register as a good LCP while the real content is still loading. So if the analysis stops at “the biggest element appeared quickly”, the statement is true, yet it is a false positive in terms of the user’s experience.
This is the core distinction: page load tells you when the bytes arrived, while real JS load tells you when the page became functionally usable. The gap between the two is where much of the perceived slowness lives.
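One way to make that gap visible is to let the app mark the moment it considers itself usable. The mark name and the readiness condition below are hypothetical and app-specific; a minimal sketch using the User Timing API:

```ts
// Call this once the route's data has rendered and the UI is interactive.
// What counts as "usable" is an app-specific, hypothetical condition here.
export function markUsable(): void {
  const usable = performance.mark("app:usable");

  const [nav] = performance.getEntriesByType(
    "navigation"
  ) as PerformanceNavigationTiming[];

  // Both values are milliseconds since the navigation's time origin.
  console.log("bytes arrived (responseEnd):", nav.responseEnd, "ms");
  console.log("functionally usable:", usable.startTime, "ms");
}
```

The difference between those two numbers is the gap this section is about.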
MPA vs SPA: Same URL Change, Different Reality
The key difference is this: an MPA navigation gives you a whole new browser lifecycle, while in an SPA navigation the URL changes, the document stays alive, and the work is handed off to the JavaScript application. Because of this substantial difference, we can no longer use the same metrics for both.
If we evaluate SPA transitions with MPA assumptions, we under-measure the time users actually spend waiting and conclude that everything is fine.
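To make this concrete, here is a sketch of timing a soft navigation end to end with the User Timing API. The function names are hypothetical; in a real app they would be wired into your router's transition hooks:

```ts
// Wire these into your router's transition hooks (framework-specific,
// so the wiring itself is left out of this sketch).
export function onRouteChangeStart(): void {
  performance.mark("spa-nav:start");
}

export function onRouteRendered(path: string): void {
  performance.mark("spa-nav:end");
  const m = performance.measure("spa-nav:" + path, "spa-nav:start", "spa-nav:end");
  // An MPA-minded tool sees nothing here: no new document, no load event,
  // no navigation timing entry. This duration is the user's actual wait.
  console.log("soft navigation to", path, "took", m.duration, "ms");
}
```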
Why SPAs Are Hard To Track Correctly
Tracking SPAs is a different world of complexity compared to MPAs, because they introduce measurement problems of their own:
- There is no full document reset on route change, so per-document tracking assumptions break
- The usual browser lifecycle events (load, DOMContentLoaded) have no soft-navigation equivalent
- Route transitions are triggered programmatically by the app, not by the browser
- Async work (lazy chunks, deferred fetches) can hide the true completion time
Personally, I believe the hardest part is identifying an SPA navigation in the first place, given how fragmented routing approaches are across frameworks.
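A common baseline, absent framework-specific hooks, is to patch the History API and listen for popstate. This is a sketch of the general technique, not an exhaustive detector (hash routing and the experimental Navigation API are not covered):

```ts
type SoftNavListener = (url: string) => void;

// Programmatic route changes go through pushState/replaceState, so patching
// them is the lowest-common-denominator way to observe soft navigations.
export function onSoftNavigation(listener: SoftNavListener): void {
  const origPushState = history.pushState.bind(history);
  history.pushState = (data, unused, url) => {
    origPushState(data, unused, url);
    listener(location.href);
  };

  const origReplaceState = history.replaceState.bind(history);
  history.replaceState = (data, unused, url) => {
    origReplaceState(data, unused, url);
    listener(location.href);
  };

  // Back/forward buttons fire popstate instead of pushState.
  window.addEventListener("popstate", () => listener(location.href));
}

onSoftNavigation((url) => console.log("soft navigation to", url));
```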
We Need A Simpler Way To Express User Experience
We currently have a broad but poorly aligned set of frontend metrics, and I will include Core Web Vitals in that; in my opinion they are for purists and do not reflect real-world scenarios. What we need is not more acronyms but a clearer expression of user experience: “Was this navigation good or bad, and why?”
When debugging real user pain, we need to inspect:
- navigation type (hard load vs soft nav)
- long-task pressure and jank windows
- request waterfall in navigation context
- visual readiness vs functional readiness
Without that context, teams can produce clean dashboards while staying blind to the actual problem.
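Some of this context is already observable through standard browser APIs. Long-task pressure, for example, can be tallied via the Long Tasks API, as in this minimal sketch (attaching the result to a navigation record is left out):

```ts
// Every "longtask" entry is a main-thread task that blocked for 50 ms or more.
let blockedMs = 0;

new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    blockedMs += entry.duration;
    console.log(`long task: ${entry.duration} ms at ${entry.startTime} ms`);
  }
}).observe({ type: "longtask", buffered: true });

// Report main-thread blocking alongside whatever navigation is in flight.
export function jankSummary(): { blockedMs: number } {
  return { blockedMs };
}
```

The hard part is not collecting these entries; it is attributing them to the right navigation, which is exactly the context most dashboards drop.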
Why This Matters Operationally
Performance is usually treated as a technical concern, and monitoring it is left to developers; but just as analytics has shifted from a developer topic to an organizational one, so should performance monitoring. When the metrics do not align with what users feel, we see a rise in support tickets, a loss of credibility, and misattributed blame.
Where Witnes Fits
Witnes was built because we faced all of these problems first-hand. We do not discard the traditional metrics, but they are only part of the truth now. We aim to see the whole picture: we track hard and soft navigations, preserve navigation context, capture the request waterfall and jank as evidence, and feed all of it into our interpretation layer.
The point is simple: when a customer says a page is slow, any team should be able to open that flow and understand what happened, without doing percentile math or googling acronyms.