Recently I was tasked with optimising a few websites. Most of them were simple apps with static content, but they also contained dynamic pages visible only behind authentication.
Luckily these apps were built with Nuxt 3, so I was sure the task would be feasible. I already knew Nuxt is built with performance in mind and ships with everything you need to make a website fast.

Here I would like to address the issues we faced and the ideas we had to solve them.

Part 2 is out, check it here!

Table of Contents

  1. Infrastructure
  2. CI/CD
  3. Rendering method
    1. Prerendering is not a magic bullet
  4. Double fetching

Infrastructure

Starting with an easy one. These websites were hosted on a single dedicated server, running pm2 as a process manager. Most of the time this setup was just fine, until it wasn’t…

During major traffic bursts the server would slow down, or even become unresponsive, and as a result all of the apps would face downtime. Also, during deployments there would be a small window where the app was down until the new version came up.

We first looked at Vercel, which ticks most of the boxes and comes with Nuxt support out of the box. Long story short, Vercel worked just fine, except for some major price increases on features we absolutely needed.

Like Vercel, Netlify provides everything we need, but it also gives us the freedom to opt out of some pricey features we could DIY.

CI/CD

Now with a fresh hosting provider, we also needed something better than pm2 and bash scripts!

Netlify offers the flexibility of just pointing to a Github repo and letting the platform do the rest. An awesome feature that comes with a price tag. Alternatively, you can handle builds and deployments on your own. With cost-effectiveness in mind, we went the DIY way.

GitHub Actions

I was always a fan of GitHub Actions but never had the chance to explore them. After reading the documentation and some trial and error, we managed to set up the workflows that would build and deploy our apps.

At the same time, setting up environments wasn’t that hard.

Preview environment:

  • Triggers: Any PR that targets the staging branch
  • Actions: Build, Test, Deploy
  • Deploys to a random subdomain for quick preview by any team member

Staging environment:

  • Triggers: Any push on the staging branch
  • Actions: Build, Test, Deploy
  • Deploys to the staging subdomain

Production environment:

  • Triggers: Any push on the main branch
  • Actions: Build, Test, Deploy
  • Deploys to the production domain

Tip: These workflows are the same for each repo. It’s better to create a shared workflow and “use” it in the actual repos.

.github/workflows/production.yaml
name: Netlify Production Deployment
on:
  push:
    branches: # Trigger on push to the designated branches
      - main
      - master
jobs:
  # Instead of copying the same steps everywhere we can import them
  # from a shared repository.
  Deploy-Production:
    uses: my-org/shared-gh-actions/.github/workflows/production.yaml@main
    # We want to pass the secrets from this repo to the shared one.
    secrets: inherit
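
For reference, here’s a minimal sketch of what such a shared workflow might look like. The Node version, build commands, output directory, and the NETLIFY_AUTH_TOKEN / NETLIFY_SITE_ID secret names are assumptions, not our exact setup.

# my-org/shared-gh-actions/.github/workflows/production.yaml (sketch)
name: Shared Production Deployment
on:
  workflow_call: # Lets other workflows "use" this one
jobs:
  Build-And-Deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: yarn install --frozen-lockfile
      - run: yarn build
      # Deploy the build output with the Netlify CLI.
      # The secrets come from the calling repo thanks to `secrets: inherit`.
      - run: npx netlify-cli deploy --prod --dir=dist
        env:
          NETLIFY_AUTH_TOKEN: ${{ secrets.NETLIFY_AUTH_TOKEN }}
          NETLIFY_SITE_ID: ${{ secrets.NETLIFY_SITE_ID }}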


Rendering method

After successfully migrating the apps to Netlify, we immediately saw improvements, especially in latency, and deployments of new versions became effortless. There were no more slowdowns due to traffic spikes or server restarts, since Netlify uses a serverless architecture to run our code. Simply put, each request runs in a standalone lambda function.

After some moments of happiness, we quickly realized that changing the infrastructure wouldn’t solve everything. We had to take more drastic actions and make some tougher decisions.

These websites were fetching content out of a CMS. The following logic was everywhere…

// pages/articles/index.vue
const { data } = await useFetch("my-api.com/articles")
//
<Article v-for="article in data" />

Nuxt was still in SSR mode, which meant:
User visits a page → Netlify spawns a lambda function → Server hits the CMS and returns the HTML

This happens for each request!

Even worse, we weren’t always using the useFetch utility. As a result, we also made another call to the CMS from the client during hydration.

useFetch makes the call on the server and passes the data to the client as part of the payload. During hydration, the client re-uses that data instead of fetching it again.
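
To illustrate the difference (a simplified sketch, not our actual code): a plain $fetch inside a component’s setup runs once on the server to produce the HTML and then again in the browser during hydration, while useFetch serializes the server result into the payload and re-uses it on the client.

// pages/articles/index.vue

// Double fetching: this runs on the server for the initial HTML
// AND again in the client during hydration, hitting the CMS twice.
const articles = await $fetch("my-api.com/articles")

// Single fetch: the server response is shipped to the client in the payload
// and re-used during hydration instead of being fetched again.
const { data } = await useFetch("my-api.com/articles")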

Nuxt’s rendering methods to the rescue! Specifically prerendering.

Nuxt offers the option to prerender a few (or all) pages during the build and deploy them as static HTML files. In the example above, it would mean we call the CMS once during the build and only upload the final HTML to Netlify.

This can easily be done with the following config:

// nuxt.config.js
export default defineNuxtConfig({
  // ...
  routeRules: {
    '/articles': { prerender: true },
  }
})

If we also need to prerender every article, without knowing the specific URLs, we could enable the crawler during build.

// nuxt.config.js
export default defineNuxtConfig({
  // ...
  nitro: {
    prerender: {
      crawlLinks: true, // Enable the crawler
      // Start from "/articles" and look for other pages to prerender
      routes: ['/articles']
    }
  },
  routeRules: {
    // If the crawler finds routes matching this pattern we want them prerendered
    '/articles/**': { prerender: true },
  }
})

After running yarn build we can see those pages being prerendered and ending up inside the dist/ folder as plain HTML files. When a user visits such a page, Netlify will try to serve it from the static files before running the server. As a result, we now serve those pages from the CDN and hit the CMS only at build time.

But those articles are now cached… What if an article changes?

In our case, those articles didn’t change often, and when they did it was because a writer published an update. In order to purge the cache and show the new articles, we had to trigger another build and deployment. Since we were using GitHub Actions, we could either trigger the workflow manually from GitHub or create a webhook that runs after the writer publishes the changes.

In order to trigger a workflow through an HTTP call we add the following to the workflow:

# .github/workflows/production.yaml
name: Netlify Production Deployment
on:
  push: # Trigger on push to the designated branches
    branches:
      - main
      - master
  repository_dispatch: # Trigger on a "repository dispatch" (HTTP call)
    types: [deploy_production]

And we can trigger it through JS with the following:

const axios = require('axios');

const data = JSON.stringify({
  "event_type": "deploy_production" // OR deploy_staging
});

const config = {
  method: 'post',
  url: 'https://api.github.com/repos/my-org/my-repo/dispatches',
  headers: {
    'Content-Type': 'application/json',
    // A github token that can dispatch actions (with "repo" access scope)
    // https://kontent.ai/blog/how-to-trigger-github-action-using-webhook-with-no-code/
    'Authorization': 'Bearer YOUR_GITHUB_TOKEN'
  },
  data: data
};

axios.request(config)
  .then((response) => {
    console.log(JSON.stringify(response.data));
  })
  .catch((error) => {
    console.log(error);
  });

Prerendering is not a magic bullet

There are cases where prerendering works beautifully and probably offers the best performance, but it’s not always the best solution.

As described before, we also support authentication on those sites. The most obvious example is a navbar showing a “Sign In” button for guests or an avatar for logged-in users. If we were to prerender such a page, we would capture the guest version of the navbar during the build.

In practice, we would serve the guest version as static HTML, and during hydration Nuxt would re-render the navbar depending on whether the visitor is a guest or a user. For guests the result would be identical, but registered users would experience a huge layout shift: they first see the “Sign In” button and only then their avatar.
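
As a rough illustration (the useAuth composable and this component are hypothetical, not our actual code), the navbar logic looks something like the following. Prerendering runs with no logged-in user, so the static HTML always contains the guest branch.

<!-- components/NavBar.vue (hypothetical) -->
<script setup>
// Hypothetical composable exposing the current user (null for guests)
const { user } = useAuth()
</script>

<template>
  <nav>
    <!-- At build time there is no user, so the prerendered HTML
         always renders the "Sign In" link -->
    <NuxtLink v-if="!user" to="/login">Sign In</NuxtLink>
    <img v-else :src="user.avatar" alt="User avatar" />
  </nav>
</template>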

[GIF showing the hydration issue]


For these pages we turned to another of Nuxt’s route rules: swr, i.e. cached server-side rendering with stale-while-revalidate. Imagine you visit a page as a guest: if you are the first to view it, the server executes the code, serves the result, and stores it in the cache. Any guest after you gets the cached result.

Now a registered user visits the page. If that’s the first time, the server runs as above. The next time the same user visits the page they get a cached result.

But how can we differentiate between guests and users? Cookies, of course!

We already use JWT tokens for authentication, so that was the obvious choice for a cache key.

To better visualize how this cache works, imagine the following object:

const cache = {
  "jwt-for-user-alex": "<HTML>...",
  "jwt-for-user-mary": "<HTML>...",
  // If the jwt is undefined, then it's a guest
  "undefined": "<HTML>...",
}

Luckily, with Nuxt it’s easier to implement this technique than to describe how it works!

// nuxt.config.js
export default defineNuxtConfig({
  // ...
  routeRules: {
    '/': {
      swr: 60 * 60, // 1-hour cache expiration
      headers: {
        // Tell Netlify to cache based on the jwt cookie
        'Netlify-Vary': 'cookie=jwt'
      },
      cache: {
        // Tell the server to cache based on the cookie
        // A different cookie will make the server run again
        varies: ['Cookie']
      },
    },
  }
})

Double fetching

Until now we have touched on how the server responds to user requests and tried to optimize that. But what about client navigation? Remember, Nuxt is an isomorphic framework: it runs our code both on the server and in the client.

In a nutshell, we have optimized the user’s first impression: what happens when the user visits or reloads a page. But when the user clicks a link to navigate to another page of our app, it’s pure client-side Vue; the Netlify cache or SWR can’t help us there, right?

Let’s take this example again

// pages/articles/index.vue
const { data } = await useFetch("my-api.com/articles")
//
<Article v-for="article in data" />

We can prerender this page or cache it somehow and avoid hitting our API more than necessary, but say the user reaches it from a link on the homepage. In that case, since it’s a client-side navigation, Nuxt needs to run the code no matter how we cached it on the server.

And to make things worse, if the user goes back and forth again (without a full page reload), this code will execute again: by default, Nuxt only re-uses the fetched data during hydration. That’s a sensible default, but in our case an unwanted one.

Another drawback of this implementation is that, since we await the data before rendering the page, Nuxt blocks the navigation until the data has resolved.

A slightly modified getCachedData option on useFetch does the trick:

// pages/articles/index.vue
const nuxtApp = useNuxtApp()
const { data } = await useFetch("my-api.com/articles", {
  getCachedData (key) {
    // nuxtApp in the server is a fresh object on each request
    // nuxtApp in the client is a global object remaining alive during a user session

    // Once useFetch resolves data, it will store them in nuxtApp.payload.data
    // Think of it as some kind of in-memory cache.
    // So during a user session, we will only make this call once.
    return nuxtApp.payload.data[key] || nuxtApp.static.data[key]
  }
})
//
<Article v-for="article in data" />

There are more ways to solve this, like useLazyFetch (more on that later), but in our case working with cached data was the preferred approach.
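
For reference, the lazy variant would look roughly like this: useLazyFetch does not block the navigation, so the page renders immediately and we handle the loading state ourselves.

// pages/articles/index.vue
// Navigation isn't blocked; data starts empty and the component
// re-renders once the request resolves.
const { data, pending } = await useLazyFetch("my-api.com/articles")
//
<p v-if="pending">Loading articles…</p>
<template v-else>
  <Article v-for="article in data" />
</template>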

In the end, we managed to get that snappy feel both for hard refreshes and client-side navigations.