React App Performance


Defining your goal

Performance of large applications is often an afterthought. I suspect that isn’t because it doesn’t matter, so much as that it is a hard problem to solve, and at least part of what makes it hard to solve is that it is hard to define. There are at least a few major types of performance that can be objectively measured: app load time, component rendering speed, bytes over the wire, server load.

Then there is the more subjective side: how fast the app feels to the people using it.

It is actually not all that difficult to get metrics on any of these things, the subjective ones and the objective ones. The trick in any case is how much you can trust them, and even if you can trust them, how much they really inform which vectors are the right ones to tackle for the overly broad goal of “improving the performance of your application.”

So should that be the end of this post? Do we just throw our hands up and say “too hard”?

I would say that in one sense this problem, as stated, may be unsolvable. But that doesn’t mean nothing can be done. Broadly speaking, you can follow these steps.

Step one: narrow the scope.

In other words, break the problem down and define a concrete goal. How it breaks down will probably depend on the perspective of the person doing it. A product manager may care most about some of those subjective metrics, a front-end developer might care about app load times or component rendering speed, and DevOps types might care about bytes over the wire and efficient caching strategies to reduce server load. There isn’t a right answer here, but an answer is needed if you want any hope of defining a reachable goal.

Step two: develop a metric measurement system.

If this is a large, actively developed application, and maintaining a certain performance threshold over time is a long-term priority, then defining a strategy for gathering and measuring the metrics that matter for your goal over time is crucial. We’ll get to what that might look like soon. The rub is that a system for gathering and measuring those metrics in a trustworthy manner can involve a lot of up-front effort that provides zero short-term payoff toward the performance improvement that is the ultimate goal.
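As a rough sketch of what such a system might look like, here is a tiny metric collector in plain JS. It records named duration samples and summarizes them as percentiles, which are far more trustworthy than any single reading. All the names here (`record`, `percentile`, `'workload'`) are hypothetical, not from any particular library.

```javascript
// Minimal sketch of a metric collector: record named duration samples
// and summarize them as percentiles rather than trusting one reading.
const samples = new Map();

function record(name, durationMs) {
  if (!samples.has(name)) samples.set(name, []);
  samples.get(name).push(durationMs);
}

// Nearest-rank percentile over the recorded samples for a metric.
function percentile(name, p) {
  const sorted = [...(samples.get(name) || [])].sort((a, b) => a - b);
  if (sorted.length === 0) return NaN;
  const idx = Math.min(sorted.length - 1, Math.ceil((p / 100) * sorted.length) - 1);
  return sorted[Math.max(0, idx)];
}

// Example: time a stand-in workload several times and report p50/p95.
for (let i = 0; i < 20; i++) {
  const start = performance.now();
  for (let j = 0; j < 100000; j++) Math.sqrt(j); // the operation under test
  record('workload', performance.now() - start);
}

console.log('p50:', percentile('workload', 50), 'p95:', percentile('workload', 95));
```

In a real system the recorded samples would be shipped somewhere (analytics endpoint, logging pipeline) so trends can be watched over time; the summarizing idea is the same.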

Step three: do the application dev work.

This step involves research and coding. The research could mean reading up on, or digging deeper into, the ways your libraries work to understand their suggested paths for performance optimization, mixed with making changes and using your measurement system to test their effects.

In my mind there is a perfectly justified case for skipping step two and making performance improvement a one-off effort: pick your approach or angle on performance, understand the theory of what will make it better or faster, and then just do that work. This may be controversial to say, but I think skipping metrics entirely is preferable in this case, because whatever metric you would gather is only likely to make you feel better about the work you did. It’s not real, or it has a high probability of being inaccurate at best and incorrect at worst, if not now then in the very near future, say when a browser update is released. Because of this it will also not be meaningful to report to either your users or your boss. That isn’t to say it wouldn’t have the potential to give them the same warm fuzzies it gives you to say something like “I improved the performance of the app by 22%”; it just wouldn’t necessarily be very truthful. “Lies, damned lies, and statistics” kind of stuff.

OK OK OK - React app runtime performance

Get to the good stuff, right!

First off: it turns out to be surprisingly hard to measure the micro performance of JS functions within larger apps. For example, one day I could get some metrics: run the performance tool four or five times, get a mean on the time to render or run a particular function, make a change, repeat, and get new numbers that showed I won. But the next day, after pulling a new commit to dev and getting a browser update, I couldn’t replicate my gains. Was it a 2000% boost or a -20% regression? Seemingly random GC events were one of a few things that threw off the results of the micro benchmarks. These might shake out at some sampling size, but getting there means automating a test runner and reporter, and who has time for that?! We’ve got stuff to build, amiright!
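To see why a single run misleads, here is a small sketch of a repeated-run harness: it times the same function many times and reports the spread, not just one number. The helper name `bench` and the workload are made up for illustration.

```javascript
// Why single-run micro-benchmarks mislead: run the same function many
// times and look at the spread. GC pauses and JIT warm-up show up as
// outliers in min/max, and a large stdDev means the mean isn't stable.
function bench(fn, runs = 30) {
  const times = [];
  for (let i = 0; i < runs; i++) {
    const start = performance.now();
    fn();
    times.push(performance.now() - start);
  }
  const mean = times.reduce((a, b) => a + b, 0) / times.length;
  const variance = times.reduce((a, b) => a + (b - mean) ** 2, 0) / times.length;
  return {
    mean,
    stdDev: Math.sqrt(variance),
    min: Math.min(...times),
    max: Math.max(...times),
  };
}

const stats = bench(() => {
  // Hypothetical workload standing in for the function under test.
  const arr = Array.from({ length: 5000 }, (_, i) => i);
  arr.map((x) => x * 2).filter((x) => x % 3 === 0);
});

// If stdDev is a large fraction of mean, before/after comparisons
// of this workload can't be trusted.
console.log(stats);
```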

What about macro performance? If you are using something like Cypress (which I recommend, BTW!) you can see time measurements there. Same story, really. Sorry, there are just too many factors in play for this to be a reliable performance metric gathering tool.

Still, micro measures and macro measures can offer clues and insight. Use the browser profiler and the React DevTools profiler to do some performance benchmarking. See what seems relatively slow on the flame charts for various common workflows, or where the big spikes are; what you find will depend on which views you exercise. For the most part I am not suggesting anything novel beyond the approach outlined in the React docs.
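One trick that helps when reading those flame charts is the User Timing API: marks and measures you add yourself show up in Chrome's Performance panel alongside the flame chart, so you can attribute spikes to your own workflow steps. The mark names below are hypothetical.

```javascript
// User Timing marks bracket the workflow you care about; browser
// profilers display them in the Timings track next to the flame chart.
performance.mark('checkout-render-start');

// ...the render work you want to attribute goes here...
for (let i = 0; i < 50000; i++) Math.sqrt(i);

performance.mark('checkout-render-end');
performance.measure('checkout-render', 'checkout-render-start', 'checkout-render-end');

const [entry] = performance.getEntriesByName('checkout-render');
console.log(`${entry.name}: ${entry.duration.toFixed(1)}ms`);
```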

Some of my observations were:

Then, research common approaches to performance optimization and read up on the theory a bit. Here is some of what I gleaned.
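One approach that comes up constantly in that reading is memoization, the idea behind React.memo and useMemo: skip recomputation when the inputs haven't changed. Here is the theory sketched in plain JS; the helper `memoizeLast` is my own illustration, not a React API, though it caches only the most recent call the way useMemo keeps one value per hook.

```javascript
// Memoization sketch: skip recomputation when arguments are unchanged.
// Caches only the most recent call, loosely like React's useMemo.
function memoizeLast(fn) {
  let lastArgs = null;
  let lastResult;
  return (...args) => {
    const same =
      lastArgs !== null &&
      lastArgs.length === args.length &&
      lastArgs.every((a, i) => Object.is(a, args[i]));
    if (!same) {
      lastResult = fn(...args);
      lastArgs = args;
    }
    return lastResult;
  };
}

let calls = 0;
const slowSum = memoizeLast((n) => {
  calls++;
  let total = 0;
  for (let i = 0; i <= n; i++) total += i;
  return total;
});

slowSum(1000); // computes
slowSum(1000); // cached: same argument, fn not called again
slowSum(2000); // recomputes: argument changed
console.log('calls:', calls); // 2
```

React.memo applies the same comparison to a component's props to skip a re-render, which is why it only pays off when the props are actually referentially stable between renders.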

If you were hoping for easy-to-implement advice, you are probably disappointed at this point. Me too. Sorry. Performance is hard. JS performance is harder. React performance is hard in some new ways. We do what we can.