Optimizing a Nuxt app (part 1)
Recently I was tasked with optimizing a few websites. Most of them were simple apps with static content, but they also contained dynamic pages visible only behind authentication.
Luckily, these apps were built with Nuxt 3, so I was sure the task would be feasible. I already knew Nuxt is built with performance in mind and contains everything you need to make a website fast.
Here I would like to address the issues we faced and the ideas we had to solve them.
Part 2 is out, check it here!
Infrastructure
Starting with an easy one. These websites were hosted on a single dedicated server, running pm2 as a process manager. Most of the time this setup was just fine, until it wasn't…
During major traffic bursts, the server would slow down or even become unresponsive, and as a result all of the apps would face downtime. Deployments were a problem too: there was a short window during which the app was down until the new version came up.
We first looked at Vercel, which ticks most of the boxes and comes with Nuxt support out of the box. Long story short, Vercel worked just fine, except for some major price increases on features we absolutely needed.
Netlify, on the other hand, also provides everything we need, but gives us the freedom to opt out of some pricey features we could DIY.
CI/CD
Now with a fresh hosting provider, we also needed something better than pm2 and bash scripts!
Netlify offers the flexibility of just pointing to a Github repo and letting the platform do the rest. An awesome feature that comes with a price tag. Alternatively, you can handle builds and deployments on your own. With cost-effectiveness in mind, we went the DIY way.
I was always a fan of GitHub Actions but never had the chance to explore them. After reading the documentation and some trial and error, we managed to set up the workflows that build and deploy our apps.
Setting up the different environments wasn't that hard either:
Preview environment:
- Triggers: Any PR that targets the staging branch
- Actions: Build, Test, Deploy
- Deploys at a random subdomain for quick preview by any team member
Staging environment:
- Triggers: Any push on the staging branch
- Actions: Build, Test, Deploy
- Deploys at the staging subdomain
Production environment:
- Triggers: Any push on the main branch
- Actions: Build, Test, Deploy
- Deploys at the production domain
Tip: These workflows are the same for every repo. It's better to create a reusable workflow and reference it with `uses:` from the actual repos.
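As a sketch, the production workflow looked roughly like this. The org/repo path, input names, and secret handling below are assumptions, not the exact setup:

```yaml
# .github/workflows/production.yaml
name: Netlify Production Deployment

on:
  push:
    branches: [main]

jobs:
  deploy:
    # Reuse the shared workflow so preview/staging/production stay in sync
    uses: our-org/workflows/.github/workflows/netlify-deploy.yaml@main
    with:
      environment: production
    secrets: inherit
```

The staging and preview workflows differ only in their trigger and the `environment` input.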
Rendering method
After successfully migrating the apps to Netlify, we immediately saw improvements, especially in latency and in effortless deployments of new versions. There were no more slowdowns from traffic spikes or server restarts, since Netlify uses a serverless architecture to run our code. Simply put, each request runs in a standalone lambda function.
After some moments of happiness, we quickly realized changing the infrastructure wouldn’t solve everything. We had to take more drastic actions and decisions.
These websites were fetching content out of a CMS, and more or less the same fetching logic appeared on every page.
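A sketch of what those pages looked like (the CMS endpoint and the article fields are hypothetical):

```vue
<!-- pages/articles/index.vue -->
<script setup>
// Fetch the article list from the CMS; with SSR this runs on every request
const { data: articles } = await useFetch('https://cms.example.com/api/articles')
</script>

<template>
  <article v-for="article in articles" :key="article.id">
    {{ article.title }}
  </article>
</template>
```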
Nuxt was still in SSR mode, which meant:
User visits a page → Netlify spawns a lambda function → the function hits the CMS and returns the HTML
This happens for each request!
Even worse, we weren't always using the `useFetch` utility. As a result, we also made another call to the CMS on the client during hydration.
`useFetch` makes the call on the server and passes the data to the client as part of the payload; during hydration, the client reuses that data instead of fetching it again.
Nuxt’s rendering methods to the rescue! Specifically prerendering.
Nuxt offers the option to prerender a few (or all) pages during the build and deploy them as static HTML files. In the example above, that means we call the CMS once during the build and upload only the final HTML to Netlify.
This can easily be done with a small addition to `nuxt.config`.
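A minimal sketch, assuming the route list below (Nitro's `prerender.routes` option is the real API; the paths are made up):

```js
// nuxt.config.js
export default defineNuxtConfig({
  nitro: {
    prerender: {
      // Render these pages to static HTML at build time
      routes: ['/', '/articles'],
    },
  },
})
```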
If we also need to prerender every article, without knowing the specific URLs, we could enable the crawler during build.
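With `crawlLinks` enabled, Nitro follows the links it finds in prerendered pages and prerenders those too, so individual article URLs don't need to be listed (the entry route here is an assumption):

```js
// nuxt.config.js
export default defineNuxtConfig({
  nitro: {
    prerender: {
      crawlLinks: true, // discover /articles/:slug pages via their links
      routes: ['/'],    // entry point(s) for the crawler
    },
  },
})
```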
After running `yarn build` we can see those pages being prerendered and placed inside the `dist/` folder as plain HTML files. When a user visits such a page, Netlify serves it from the static files before ever running the server. As a result, we now serve those pages from the CDN and access the CMS only at build time.
But those articles are now cached… What if an article changes?
In our case, those articles didn't change often. And when they did, it was because a writer made a change. In order to purge the cache and show the new articles, we had to trigger another build and deployment. Since we were using GitHub Actions, we could either manually trigger the action from GitHub or create a webhook that runs after the writer publishes the changes.
To make a workflow triggerable through an HTTP call, GitHub offers the `repository_dispatch` event.
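A sketch of the trigger section (the event type name `cms-publish` is an assumption):

```yaml
# .github/workflows/production.yaml
on:
  push:
    branches: [main]
  # Allow triggering via the GitHub REST API
  repository_dispatch:
    types: [cms-publish]
```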
We can then fire that event from JavaScript through GitHub's REST API.
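The original script used axios; here is a sketch using Node's built-in `fetch` instead. The owner/repo names, event type, and token env var are placeholders:

```javascript
// Trigger the GitHub workflow via a repository_dispatch event.
const GITHUB_API = 'https://api.github.com';

// Build the request separately so it is easy to inspect and test
function buildDispatchRequest(owner, repo, eventType, token) {
  return {
    url: `${GITHUB_API}/repos/${owner}/${repo}/dispatches`,
    options: {
      method: 'POST',
      headers: {
        Accept: 'application/vnd.github+json',
        Authorization: `Bearer ${token}`,
      },
      body: JSON.stringify({ event_type: eventType }),
    },
  };
}

// Node 18+ ships fetch; axios works just as well here
async function triggerDeploy() {
  const { url, options } = buildDispatchRequest(
    'our-org', 'our-site', 'cms-publish', process.env.GITHUB_TOKEN
  );
  const res = await fetch(url, options);
  // GitHub answers 204 No Content on success
  if (res.status !== 204) throw new Error(`Dispatch failed: ${res.status}`);
}
```

The CMS webhook simply calls `triggerDeploy()` whenever a writer hits publish.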
Prerendering is not a magic bullet
There are cases where prerendering works beautifully and probably offers the best performance but it’s not always the best solution.
As described before, we also support authentication on those sites. The most obvious example is a navbar that shows a "Sign In" button for guests or an avatar for signed-in users. If we were to prerender such a page, we would capture the guest version of the navbar at build time.
In reality, we would serve the guest version as static HTML, and during hydration Nuxt would rerender the navbar depending on whether the visitor is a guest or a user. For guests it would look identical, but registered users would see a huge layout shift: first the "Sign In" button, then their avatar.
For these pages we kept server-side rendering but put a cache in front of it. Imagine you visit a page as a guest: if you are the first to view it, the server executes the code, serves the result, and stores it in the cache. Any guest after you gets the cached result.
Now a registered user visits the page. The first time, the server runs as above; the next time the same user visits, they get their own cached result.
But how can we differentiate between guests and users? Cookies of course!
We already use JWTs for authentication, so the token was the obvious choice for a cache key.
To better visualize how this cache works, imagine a plain object keyed by the visitor's auth cookie.
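A sketch of the idea (the keys and HTML snippets are made up):

```javascript
// Conceptually, the cache is a map from cache key to rendered HTML.
const cache = {
  // All guests (no auth cookie) share a single entry
  guest: '<html><!-- navbar with "Sign In" button --></html>',
  // Each authenticated user gets an entry keyed by their JWT
  'jwt-user-a': '<html><!-- navbar with avatar of user A --></html>',
  'jwt-user-b': '<html><!-- navbar with avatar of user B --></html>',
};

// Pick the cache key for an incoming request: the JWT cookie if
// present, otherwise the shared guest entry.
function cacheKeyFor(cookies) {
  return cookies.token ?? 'guest';
}
```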
Luckily, with Nuxt, implementing this technique is easier than describing how it works!
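A sketch using Nitro route rules. The option names follow Nitro's cache options (`swr`, `maxAge`, `varies`); the path and durations are assumptions:

```js
// nuxt.config.js
export default defineNuxtConfig({
  routeRules: {
    '/articles/**': {
      cache: {
        swr: true,          // serve stale responses while revalidating in the background
        maxAge: 60,         // seconds a cached entry is considered fresh (assumption)
        varies: ['cookie'], // separate cache entries per Cookie header (guest vs. each JWT)
      },
    },
  },
})
```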
Double fetching
Until now we touched on how the server responds to user requests and tried to optimize it. But what about client-side navigation? Remember, Nuxt is an isomorphic framework: it runs our code both on the server and in the client.
In a nutshell, we have optimized the user's first impression: what happens when the user first visits or reloads a page. But when the user clicks a link to navigate to another page of our app, it's pure client-side Vue; the Netlify cache or SWR can't help us there, right?
Let's take the articles page example again.
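The same hypothetical page as before (endpoint made up):

```vue
<!-- pages/articles/index.vue -->
<script setup>
// Runs on the server for the first load, and again on every client-side navigation
const { data: articles } = await useFetch('https://cms.example.com/api/articles')
</script>
```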
We can prerender this page or cache it somehow and avoid hitting our API more than necessary, but say the user reaches it from a link on the homepage. In that case, as it's a client-side navigation, Nuxt needs to run the code no matter how we cached things on the server.
And to make things worse, if the user goes back and forth again (without a full page reload), this code executes again. By default, Nuxt only reuses the cached data during hydration. That's a sensible default, but in our case an unwanted one.
The drawback of this implementation is that, since we await the data before rendering the page, Nuxt blocks the navigation until the data has resolved.
A slightly modified `getCachedData` lets us reuse already-fetched data on client-side navigations too.
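A sketch of the pattern: `getCachedData` is a real `useFetch` option that decides where cached data comes from; returning the payload (or prerendered static data) entry makes navigations resolve instantly instead of re-fetching. The endpoint is made up:

```vue
<!-- pages/articles/index.vue -->
<script setup>
// Reuse data already present in the payload (or in prerendered static data)
// instead of re-fetching on every client-side navigation.
const { data: articles } = await useFetch('https://cms.example.com/api/articles', {
  getCachedData: (key, nuxtApp) =>
    nuxtApp.payload.data[key] ?? nuxtApp.static.data[key],
})
</script>
```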
There are more ways to solve this, like `useLazyFetch` (more on that later), but in our case working with cached data was the preferred one.
In the end, we managed to get that snappy feel both for hard refreshes and client-side navigations.