Performance of large applications is often an afterthought. I suspect that isn’t because it doesn’t matter, so much as that it is a hard problem to solve, and at least part of what makes it hard to solve is that it is hard to define. There are at least a few major types of performance that can be objectively measured:
- app load (and potentially within that)
- cold start vs warm start
- app load at different routes if app is a SPA where routes are used
- time to first paint vs time to first interactivity
- component load/mount
- component state changes
- app memory use
- app size - more relevant if using bundled packages - but fewer bytes shipped is almost always more performant
- js bytes vs html vs css
- efficient use or caching of API calls when possible
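Several of these objective numbers are already recorded by the browser. As a rough sketch (run it in a browser console; the entry names come from the Navigation Timing and Paint Timing APIs):

```javascript
// Pull load and first-paint numbers the browser already records.
// All values are milliseconds since navigation start.
function loadMetrics() {
  const [nav] = performance.getEntriesByType('navigation');
  const paints = Object.fromEntries(
    performance.getEntriesByType('paint').map((p) => [p.name, p.startTime])
  );
  return {
    domContentLoaded: nav && nav.domContentLoadedEventEnd,
    loadEvent: nav && nav.loadEventEnd,
    firstPaint: paints['first-paint'],
    firstContentfulPaint: paints['first-contentful-paint'],
  };
}
```

Outside a browser (say, in Node) the entry lists are empty and the fields come back undefined; in a page they reflect the current load.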
Then there is the more subjective side.
- It feels fast.
- I can get my task done quickly.
- I can find what I need easily.
It is actually not all that difficult to get metrics on any one of these things, subjective or objective. The trick in any case is how much you can trust them, and even if you can trust them, how much they really inform which vectors are the right ones to tackle for the overly broad goal of “improving the performance of your application.”
So should that be the end of this post? Do we just throw our hands up and say “too hard”?
I would say that in one sense this problem as stated may be unsolvable. But that doesn’t mean nothing can be done. Broadly speaking, you can follow three steps: define a concrete goal, build a system for measuring progress toward it, and then do the improvement work.
In other words, break the problem down and define a concrete goal. How this breaks down will probably depend on the perspective of the person doing it: a product manager may care most about some of those subjective metrics, a front-end developer might care about app load times or component rendering speed, and DevOps types might care about bytes over the wire and efficient caching strategies to reduce server load. There isn’t a right answer here, but an answer is needed if you want any hope of defining a reachable goal.
If this is a large, actively developed application, and maintaining a certain performance threshold over time is a long-term priority, then defining a strategy for gathering and measuring the metrics that matter for your goal is crucial. We’ll get to what that might look like soon. The rub is that a system for gathering and measuring those metrics in a trustworthy manner can involve a lot of up-front effort that will provide zero short-term payoff in terms of the performance improvement that is the ultimate goal.
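One hedged sketch of what that gathering might look like: a PerformanceObserver collecting samples as the app runs, beaconed out when the tab is hidden. The `/perf-metrics` endpoint is a made-up name, and the collection backend it implies is where the real up-front effort lives.

```javascript
// Collect performance entries as the app runs; flush on tab hide.
const samples = [];

function recordEntry(entry) {
  samples.push({
    type: entry.entryType,
    name: entry.name,
    start: entry.startTime,
    duration: entry.duration,
  });
}

// Browser-only wiring; skipped outside a window context.
if (typeof window !== 'undefined' && 'PerformanceObserver' in window) {
  const observer = new PerformanceObserver((list) => {
    list.getEntries().forEach(recordEntry);
  });
  observer.observe({ entryTypes: ['navigation', 'paint', 'longtask'] });

  document.addEventListener('visibilitychange', () => {
    if (document.visibilityState === 'hidden' && samples.length > 0) {
      // sendBeacon survives page unload, unlike a normal fetch.
      navigator.sendBeacon('/perf-metrics', JSON.stringify(samples));
      samples.length = 0;
    }
  });
}
```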
This step will involve research and coding. The research part could mean reading up on how your libraries work to understand their suggested paths for performance optimization, mixed with making changes and using your measurement system to test the effects.
In my mind there is a perfectly justified case for skipping step two and making performance improvement a one-off effort: pick your approach or angle on performance, understand the theory of what will make it better/faster, and then just do that work. This will be controversial to say, but I think skipping metrics entirely is preferable in this case, because whatever metric you would gather is only likely to make you feel better about the work you did. It’s not real, or it has a high probability of being inaccurate at best and incorrect at worst - if not now, then in the very near future, say when a browser update is released. Because of this it will also not be meaningful to report to your users or your boss. That isn’t to say it wouldn’t have the potential to give them the same warm fuzzies it gives you to say something like “I improved the performance of the app by 22%” - it just wouldn’t necessarily be very truthful. “Lies, damn lies, and statistics” kind of stuff.
Get to the good stuff, right?
First off: it is interesting how hard it turns out to be to measure the micro performance of JS functions within larger apps. For example, I could get some metrics one day - run the performance tool four or five times, get a mean time to render/run a particular function, make a change, repeat, and get new numbers showing I had won. But the next day, after pulling a new commit to dev and getting a browser update, I couldn’t replicate my gains - on the order of: was it a 2000% boost or a -20% regression? Seemingly random GC events were one of a few things that threw off the results of the micro benchmarks. These might shake out at some sampling size, but getting there means automating a test runner and reporter - who has time for that?! We’ve got stuff to build, amiright!
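For what it’s worth, the closest I got to usable micro numbers was repeating the function many times and looking at the median rather than the mean, since GC pauses land in the tail. A sketch:

```javascript
// Time fn over many runs; report median/min/max in milliseconds.
// The median is far less sensitive to one-off GC pauses than the mean.
function benchmark(fn, runs = 100) {
  const times = [];
  for (let i = 0; i < runs; i++) {
    const start = performance.now();
    fn();
    times.push(performance.now() - start);
  }
  times.sort((a, b) => a - b);
  return {
    median: times[Math.floor(runs / 2)],
    min: times[0],
    max: times[runs - 1],
  };
}
```

Even then, treat the numbers as relative clues within one session, not absolute truths that will survive a browser update.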
What about macro performance? If you are using something like Cypress (which I recommend, BTW!) you can see time measurements there. Same story, really. Sorry - just too many factors in play for this to be a reliable performance-metric-gathering tool.
Still, micro measures and macro measures can offer clues and insight. Use the browser profiler and React dev tools to do some performance benchmarking. Just see what seems relatively slow on the flame charts for various common workflows, or where the big spikes are - this will depend on which views you’re exercising. In large part I am not suggesting anything novel beyond the approach outlined in the React docs.
Some of my observations were:
- Saw a lot of small calls to styled wrappers - and these piled up.
- Some small components that are hit a LOT! Small wins on these could really add up.
- Some components really thrash through their render functions - there is potential to trim down the content of the render function, or to explore memoization.
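One way to put numbers on observations like these is React’s `<Profiler>` component. The aggregation below is plain JS (so it runs anywhere); the JSX wiring in the comment is where React comes in, and the component names are made up for illustration.

```javascript
// Aggregate render timings per component id.
const renderStats = new Map();

// Matches the signature of React's Profiler onRender callback
// (later arguments such as baseDuration are ignored here).
function onRender(id, phase, actualDuration) {
  const stat = renderStats.get(id) || { renders: 0, totalMs: 0 };
  stat.renders += 1;
  stat.totalMs += actualDuration;
  renderStats.set(id, stat);
}

// In the app (JSX, illustrative):
//   <Profiler id="OptionList" onRender={onRender}>
//     <OptionList />
//   </Profiler>
```

Dumping `renderStats` after a common workflow makes the “small components hit a LOT” pattern visible as a high render count with a meaningful total.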
Then, research common approaches to performance optimization - read up on the theory a bit. Here is some of what I gleaned:
- Functional components should be a bit faster, and even if not, they tend to be less code.
- Reduce bundle size / bytes shipped
- Feels like an obvious one, but we found a few dependencies with overlapping functionality and were able to cut those down - take the easy wins.
- Leverage React’s strengths (make many small functional components vs. generic generators inside of a react component class)
- Take stuff out of the render function where possible. For example, we had option-list generation that would happen with `someBigStaticList.map(item => /* ...option elements... */)` or some such, and moving that out of render so it happened just once helped on some components that rerendered with any frequency.
- Improve the ratio of CSS/HTML to JS - not all bytes are created equal. The browser doesn’t need to wait for all CSS to load before it starts rendering, but JS is typically blocking, etc.
- A few good utility classes, and moving styles into static CSS for components that render often, seemed to have a significant impact.
- I get the appeal of CSS-in-JS, but in React, until the library you use for this can produce statically compiled classes with no runtime part, moving to plain CSS is a performance win.
- Can a bit more of your layout live in the HTML? If so, do that. Layout really isn’t a strong point of React, but HTML and CSS do it very well these days. Maybe your headers and menu bars don’t need to be React components at all. Consider it, at least.
- Good client side caching with service workers.
- Big potential payoff, especially for warm startup time, but a bit hard to do well; it requires a fair amount of testing, especially on iOS where service workers have some limitations. And there is the whole issue of how and when to invalidate what is being stored in the cache… I punted on this for now, but will definitely be coming back to it someday. Soon, hopefully.
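To make the service-worker idea concrete, here is a minimal cache-first sketch. The cache name, the precache list, and the `shouldCache` policy are all my own assumptions for illustration, not a standard recipe.

```javascript
// sw.js - minimal cache-first service worker sketch.
const CACHE_NAME = 'app-cache-v1'; // bump this to invalidate old caches

// Hypothetical policy: only cache GET requests for static-looking assets.
function shouldCache(request) {
  const url = new URL(request.url);
  return (
    request.method === 'GET' &&
    /\.(js|css|html|png|svg|woff2?)$/.test(url.pathname)
  );
}

// Worker wiring; the guard keeps this inert outside a worker context.
if (typeof self !== 'undefined' && typeof caches !== 'undefined') {
  self.addEventListener('install', (event) => {
    // Precache the app shell; the list here is illustrative.
    event.waitUntil(
      caches.open(CACHE_NAME).then((cache) => cache.addAll(['/', '/index.html']))
    );
  });

  self.addEventListener('fetch', (event) => {
    if (!shouldCache(event.request)) return;
    // Cache-first: serve a hit if we have one, else go to the network.
    event.respondWith(
      caches.match(event.request).then((hit) => hit || fetch(event.request))
    );
  });
}
```

Invalidation is the hard part alluded to above; versioning `CACHE_NAME` and deleting stale caches in an `activate` handler is the usual baseline.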
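And to make the earlier “take stuff out of the render function” point concrete, a before/after sketch. Plain objects stand in for JSX elements so it runs without React, and all the names are made up:

```javascript
// Stand-in for the big static option data described in the post.
const someBigStaticList = Array.from({ length: 1000 }, (_, i) => ({
  id: i,
  label: `Option ${i}`,
}));

// Before: this map ran inside render(), rebuilding 1000 objects on
// every render. After: hoisted to module scope, so it runs once.
const optionElements = someBigStaticList.map((item) => ({
  type: 'option',
  key: item.id,
  children: item.label,
}));

// render() now just references the precomputed array.
function render() {
  return { type: 'select', children: optionElements };
}
```

The win is the same reference being reused across renders, instead of a fresh 1000-element array every time a parent re-renders.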
If you were hoping for easy-to-implement advice, you are probably disappointed at this point. Me too. Sorry. Performance is hard. JS performance is harder. React performance is hard in some new ways. We do what we can.