VueConf US 2022

Talks

1. Opening Keynote

Evan You

2. How we migrated our HUGE app to Vue 3

Alex Van Liew

3. Maintainable & Resilient Projects Through Internal UI Libraries

Austin Gil

4. Unit Testing Vue Apps: Tips, Tricks, and Best Practices

Beth Qiang

5. Debugging Vue Applications

Cecelia Martinez

6. Dissecting the Pinia Source Code

Daniel Kelly

7. What's coming in Nuxt 3

Dan Vega

8. Building Accessible Components

Homer Gaines

9. Animating Vue with GSAP

J.D. Hillen

10. Deploy, Release, CI/CD, oh my! DevOps for the rest of us

Jeremy Meiss

11. Stress-free Testing for Vue 3

Jessica Sachs

12. Nuxt.js and Chrome

Kara Erickson

13. Modern Mobile Development

Mike Hartington

14. Create a Custom Component Library with Vue!

Paige Kelley

15. Improving Pagespeed Performance with Vue 3

Tom Dale

16. Why do we even test?

Bart Ledoux

17. Component Testing with Playwright

Debbie O'Brien

18. Know your Components

Lukas Stracke

19. What's 'this'?

Colin DeCarlo

20. Vue Traffic Light Chrome Extension

Adam Frank

21. Getting Started: Amplify Authenticator

Erik Hanchett

22. (Share) Point of Vue

Scott Hickerson

23. Building Docker Extensions using Vue & Nuxt

Evan Harris

24. Build a Community

Ricardo Vargas

25. Using Vue for your IoT Solution

Bill Baker

Nuxt.js and Chrome

Kara Erickson

Transcript:

  • Hi, I’m Kara Erickson. I’m a senior software engineer and TL at Google working on the Chrome team. And today I’m here to talk about the collaboration between the Chrome team and the Nuxt framework. So I’ll start by giving a little background on the Aurora project and what it is that we do. I’ll share some of the web performance problems that the Aurora team has been exploring recently, the features that the Nuxt framework has shipped to address these issues, and some suggested best practices for how to use these features. And then, we’ll finish out with a performance roadmap that we’ve collaborated with the Nuxt team to create so that we keep moving the needle on web performance. But first, let me explain what the Aurora project is. So if you’re not familiar, it’s an initiative where Chrome engineers collaborate with open source framework and tooling authors to make the web better. So it’s a pretty small team, as you can see on the slide. It’s eight engineers, including me, the TL, and my manager Addy Osmani. So you might be wondering why would Chrome invest in frameworks? Well, as a browser vendor, we want the web to succeed. We care about user experience and a thriving web ecosystem, and we want websites to incorporate best practices for developer experience, especially in the realm of web performance, so that everyone can have a great time on the web. The problem is best practices are constantly changing, and it can be really hard to keep up with the latest recommendations on what you’re even supposed to be doing, and even if you know what they are, to balance them with your other business priorities. And this is where frameworks like Vue and Nuxt really shine. They’re in a unique position to influence the whole web ecosystem. So according to the Stack Overflow Developer Survey in 2020, 75% of the 65,000-plus developers they surveyed report using a framework. So clearly frameworks are widely used.
So if we can build best practices into these frameworks, then all the developers that are already using the frameworks will get the best practices for free, and they can focus their time on what matters the most to them, which is shipping a great product. So Aurora’s directive is to collaborate with frameworks to make sure these best practices get added. So how does that work? Well, functionally, we start by running a lot of performance traces. And based on these traces that we run on websites across the web and based on queries that we do for gigantic datasets like the HTTP Archive, we start to boil down the common causes of performance issues for framework developers. Once we’ve figured out what these issues are, we start hypothesizing what potential features could be in frameworks to mitigate these issues. And once we’ve identified the feature and put it through some basic R&D, then we start to work with the framework team to either build the feature ourselves, or in some other cases we consult and provide general support to the framework team that actually builds it. Which brings us to Nuxt. So Nuxt is one of our framework partners, and they’re a partner with whom we have a consultation model. So we meet regularly to connect about the latest performance topics and the latest in Nuxt. And generally the way it works is that our team identifies and researches performance optimizations. And then we check in with Nuxt to see, okay, what’s the status of these optimizations in Nuxt? And then we share information and design docs with the Nuxt team as necessary, but then the Nuxt team actually builds the feature. So now that you know what the Aurora team is and what it is that we do, I’d like to talk a little bit about the performance issues that we’ve addressed as part of this collaboration. But before we jump into specific performance topics, it’s important to get on the same page about how we measure performance.
You may have heard of Core Web Vitals, but we’re gonna go through a quick review. So these are the three metrics that we use to determine whether a page is performing well. So if you’re not familiar, they’re Largest Contentful Paint or LCP, Cumulative Layout Shift or CLS, and First Input Delay or FID. So starting with Largest Contentful Paint, this is a metric that measures the render time of the largest image or text element that’s visible in the viewport relative to when the page started to load. So this is the point when the user has enough visual information to figure out if the page is useful to them. CLS is the stability metric. So it measures how much the layout’s jumping around while you’re looking at it. And FID is the interactivity metric. So that’s measuring the lagginess that a user might experience when they’re clicking on something or typing something into a form. So when we first started collaborating with Nuxt in early 2021, we looked at these three metrics across all of the Nuxt sites that we could find across the internet. And what we found was that Nuxt sites were doing really great on First Input Delay, but like a lot of frameworks at the time and frankly, most websites at the time, they were struggling a little bit more with CLS and especially with LCP. So we decided to focus on performance areas that would improve LCP for websites, which brings us to our first performance area, which is images. So anecdotally, in performance traces, what we see again and again as a cause of bottlenecks is image handling. And this anecdotal data is supported by quantitative data. So the HTTP Archive tells us that 79% of all desktop pages have an LCP element that’s an image. So what that means is the LCP is being delayed by an image in 79% of cases. And note that this stat is for all pages, not necessarily just for Nuxt, but it really illustrates how impactful images can be for LCP.
So you might be asking yourself, “Okay, so how do I find out which image is blocking my LCP?” Or even if it’s an image at all, maybe I actually have a header that’s super-large. Well, there’s many ways that you can do this. Probably the gold standard is running your production website through WebPageTest.org. This is a great option. You could also use the PerformanceObserver API, which is available through JavaScript. But the way that I usually do it is through Chrome DevTools. So if you open the Performance tab, it has this Timings track that has the LCP event labeled. So you can see that where the arrow is. So if you click on that, it should bring up a little window at the bottom that lists the related node. And if you click on that, it’ll take you to the LCP element in the Elements panel, so it’s really easy to figure out which one it is. One quick gotcha on this, make sure that you’re testing on a mobile device or a series of devices in addition to your desktop, because the LCP element could definitely change depending on your viewport size. Okay, so for right now, let’s say that your LCP element is an image, but if you look at this and you notice that your element is a header or some other type of text, stay tuned for the second half of the talk where I’ll be talking about fonts, which are kind of text-related LCP concerns. But okay, so you found the image that’s critical to your loading, how do you make sure that it’s optimized? Well, the first thing to check is something called resource load delay. And this is the delay before the browser starts loading the LCP image. And this has a lot to do with when the LCP image is actually discovered in the DOM. So the sooner the browser knows about it, obviously, the sooner it can connect and start downloading, great.
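As a rough sketch of the PerformanceObserver approach mentioned above, something like the following can be run in the page or in the DevTools console (browser-only code; the logged element is the current LCP candidate):

```javascript
// Observe largest-contentful-paint entries. The browser may report
// several candidates as the page loads; the last entry in the list
// is the current LCP element. `buffered: true` replays entries that
// fired before this observer was registered.
new PerformanceObserver((entryList) => {
  const entries = entryList.getEntries();
  const lcpEntry = entries[entries.length - 1];
  console.log('LCP candidate:', lcpEntry.element, 'at', lcpEntry.startTime, 'ms');
}).observe({ type: 'largest-contentful-paint', buffered: true });
```

Note that the final LCP value is only settled once the user interacts with the page or the page is backgrounded, so the last logged candidate is what you want.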
So in this example waterfall, the light colors represent when the browser becomes aware of a resource, and the darker color regions represent when the browser has enough bandwidth to actually load that resource. So you can see from this that the LCP image has been discovered when the light green bar starts. So that’s kind of later on in the waterfall, right? It’s offset a bit from when the document has completed downloading and parsing. And that delay between when the document’s kind of parsed and ready to go, and when the LCP resource can start downloading, that’s the resource load delay. So why might there be this unfortunate delay? Well, there’s a variety of reasons, but maybe like in this example here, your LCP image happens to be a background image that’s defined in your CSS. In this case, your image element isn’t in the DOM, right? It’s in your CSS. So parsing the document’s not gonna help you. You have to download and parse the style sheet before the browser even knows that there’s an image there. Or maybe your app uses client-side rendering. In this case, the JavaScript would have to download and execute before the image element is even added to the DOM. So there’s a bunch of different ways this could happen, but regardless of how you get into the situation, the best practice for avoiding it, as I think somebody covered earlier, is to add resource hints. So you can add resource hints to preload your LCP image. A preload hint is a link tag that you can add to the head of your document. And it basically just tells the browser to start loading a particular resource earlier than it would have otherwise. So that way, even if your image is in your CSS and would normally be discovered when the style sheet is parsed, its download wouldn’t be blocked by that style sheet, it’s not waiting for it. So as we’re investigating these performance optimizations, we have to ask ourselves, these are the best practices, but is the community already doing this?
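As a sketch, the preload hint described above looks like this in the document head (the image path here is made up for illustration):

```html
<head>
  <!-- Tell the browser to start fetching the LCP image early, even if
       it's referenced from CSS or added to the DOM later by JavaScript -->
  <link rel="preload" as="image" href="/images/hero.jpg">
</head>
```

The `as="image"` attribute is what lets the browser assign the right priority and apply the right content-security rules to the fetch.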
Or is this something where a framework could step in and help? And so, looking at the data in 2021 from the HTTP Archive, only 22% of all mobile origins actually use preload hints. So this is a ripe area for frameworks to help. And this is where nuxt/image comes in. So this is a core module built by Nuxt meant to facilitate best practices for image optimization, which includes preloads. So I’m gonna quickly show you how to preload an image using nuxt/image. So let’s say you’ve already npm or yarn installed your nuxt/image package, I’m sure you know how to do that, and you’ve added the package to your modules. The next step would be to look through your markup and find your image tags. So to convert an image to nuxt/image, you can change the element name from img to nuxt-img. It’s a drop-in replacement. So it’s fairly straightforward. But if you wanted to add a preload, normally what you would do is you’d add a link preload to your head, but with nuxt/image, you can add a preload attribute, which makes it a lot easier to do. So that’s one way that Nuxt makes implementing best practices easy. One quick note is that you don’t wanna preload all of your images. This is an optimization that has diminishing returns. So you really wanna save it for your most critical resources and just preload your LCP image. Okay, so we talked about ensuring early discovery for images, but there is another concern that can lead to loading delay, and that’s the priority of your LCP image in relation to other resources. So you want your LCP image, which is your biggest, splashiest image, to be loaded before anything else. So you don’t want other resources to be competing for the same bandwidth. However, it’s pretty common to see other images or resources delaying the loading of your LCP image. So in this example, the LCP resource is in line 91, but you can see that there’s a ton of images from line 81 on that are loading before the LCP image.
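Putting the conversion together, a hypothetical before/after might look like this (assuming the nuxt/image module is installed; the src path is illustrative):

```html
<!-- Before: a plain img tag for the LCP hero image -->
<img src="/images/hero.jpg" alt="Hero banner">

<!-- After: the drop-in nuxt-img replacement, with the preload
     attribute so the module emits a <link rel="preload"> for this
     image in the document head -->
<nuxt-img src="/images/hero.jpg" alt="Hero banner" preload />
```

Again, reserve the `preload` attribute for the LCP image only; preloading everything dilutes the benefit.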
So even though it’s discovered not long after those other images, which is why the light purple bars aren’t that far offset, you can see that it doesn’t actually start downloading until 600 milliseconds later, where those dark purple lines start. And that’s because all of those other images are delaying the loading. So there’s two things you can do to rectify this type of issue. So one, there is a new API out in Chrome called priority hints. And these hints allow you to tell the browser explicitly about the relative priority of resources. So it can start loading the resources that you mark highest priority first. So to use this, it’s a native HTML attribute. So you add fetchpriority and set it to high, and then that’s it. But know that this is an API that just came out. So unfortunately, it’s not yet supported by other modern browsers. However, because it’s an attribute, it’ll just be ignored where it’s not supported. So it’s safe to try it, and it will just make your Chrome experience better. Another thing that we suggest is lazy loading noncritical images. So if you have images that are below the fold or images that are not visible for any reason, the canonical example here is if you have one of those image carousels, all the images that aren’t shown right away probably should be lazy loaded. So you can easily achieve this with the native loading attribute. So you set loading to lazy, and it was just recently added to Safari in the last few months. So it’s actually supported by all modern browsers, Firefox, Safari, Chrome, not IE, unfortunately, but even in IE, it’ll just be ignored. So again, it’s one of those progressive enhancements where it won’t break for browsers where it doesn’t have support. So like with preloading, you wanna be deliberate about which images you’re lazy loading. Experimentation has generally found that lazy loading images that are in the viewport can actually degrade performance.
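Both attributes described above are plain HTML, so a minimal sketch (filenames made up for illustration) is:

```html
<!-- LCP hero image: hint the browser to fetch this one first.
     Browsers without priority hints support simply ignore the attribute. -->
<img src="/images/hero.jpg" alt="Hero banner" fetchpriority="high">

<!-- Below-the-fold carousel image: defer the fetch until the image
     is near the viewport. Browsers without native lazy loading
     also just ignore the attribute. -->
<img src="/images/slide-2.jpg" alt="Second carousel slide" loading="lazy">
```

Because both are progressive enhancements, there is no fallback code to write; unsupporting browsers behave exactly as before.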
So the best practice, as I said, is to just lazy load things that you’re not gonna see right away. So how do we apply these principles to nuxt/image? So if you go back to our LCP image, we have the preload from before, you can just go ahead and slap that fetchpriority attribute right on there, and it’ll just fall through to the image tag. Similarly, for our below-the-fold images, we can just add a loading attribute, and it will fall through to the image tag. So there’s actually no difference in the API. So pretty straightforward. Okay, so the last big concept I wanna talk about for image loading is resource load time. So if we return to our example waterfall from before, resource load time is the time it actually takes for the resource to download. So when we talked about discovery, we were trying to optimize the gap between the document parsing and the resource loading, right? When we were talking about resource priority, we were trying to optimize the light green part of the bar, right? And now we’re trying to optimize the dark green part of the bar, where you’re actually loading the resource. So how can we do this? Well, one way is to serve smaller images. That’s fewer bytes downloading. So if you switch to newer image formats like WebP and AVIF, that’s already a huge savings, you can save 25 to 50% of your total bytes just by switching to a more modern format. And one nice thing is that WebP, again, is supported in all modern browsers, even in Safari. And you can also try AVIF. AVIF images are even smaller than WebP, generally speaking, but its browser support is not quite as wide, but these are both things to try. So if you wanna ensure that you’re serving modern image formats where they are supported, but fall back to older formats where they’re not, the picture tag is a really useful element to know about. So one thing that’s cool about the picture tag is that you can list multiple sources.
So you can list a source that’s in AVIF, you can list a source that’s in WebP. And the way that it works is the browsers that can load AVIF will load that version, and browsers that don’t really understand picture tags or don’t understand AVIF or WebP will just render that image tag with the JPEG. So once again, looking at these modern image formats, is this something that people are already doing, or is this something where a framework should step in? Regrettably, we know that most websites aren’t using modern image formats. In 2021 HTTP Archive data, only 7% of images were in WebP format. So how can you make sure that your images are using WebP? If you’re not using a CDN, I’d recommend switching from nuxt/image to its corresponding picture tag variant, nuxt-picture. And it’s really similar to nuxt/image. The only real difference is that instead of outputting an image tag, it outputs a picture tag, but if it’s outputting a picture tag, then it can also support things like WebP. And so, what it creates is something that looks exactly like the last slide. So I’d highly recommend switching to nuxt-picture if you’re not using a CDN. If you do use a CDN, which is a great option, CDNs will often perform content negotiation for you. And so, they can figure out whether the user’s browser supports an image format or not. And that means that you don’t actually have to do much on the client side yourself. And nuxt/image has a number of built-in providers for CDNs that make it easy for you to use them. So, for example, you can see this example from imgix, you would set up the base URL for your image CDN in the config. And then in your markup, you just put the name of the specific image that you wanna look at. So I wanna note this particular line that adds some configuration options for imgix. So some CDNs will serve modern formats by default, others require you to turn on this behavior with an option.
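The picture element described above can be sketched like this, with made-up filenames; the browser walks the sources in order and takes the first type it supports, falling back to the plain img tag:

```html
<picture>
  <!-- Browsers pick the first <source> whose type they can decode -->
  <source srcset="/images/hero.avif" type="image/avif">
  <source srcset="/images/hero.webp" type="image/webp">
  <!-- Fallback for browsers that support neither format,
       or that don't understand <picture> at all -->
  <img src="/images/hero.jpg" alt="Hero banner">
</picture>
```

This is also roughly the markup that nuxt-picture generates for you, which is why switching to it gets you modern formats without hand-writing the sources.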
So make sure that you read your CDN docs and know what the default behavior is, or if you need to be doing anything special. One great resource for this is images.tooling.report. It has a bunch of image optimization best practices, and it lays them out by CDN, so it’s a great way to see what you need to be doing. Okay, so regardless of format, you also wanna check that your image is being served at the right size compared to the size that it’s being rendered at. So let’s say you have a fixed-size image like a logo, and the intrinsic size of the asset is 800 pixels. But when it’s rendered on your website, you usually use CSS to make it 200 pixels. So in this case, you wanna make sure that you’re requesting an image that’s 200 pixels wide and not the intrinsic 800 pixels wide. So with Nuxt, if you set the explicit dimensions of a fixed-size image with the width and height attributes, by default, Nuxt will request the image at that width and height. So it’ll save you the bytes. So even if your intrinsic size is enormous, it’ll request the correct size for you, which is pretty cool. And this has the added benefit of preventing layout shift, because if you size your images, then the browser doesn’t have to wait for the whole image to load to know how much space to reserve, which is another best practice. Okay, so what about responsive images? If your image is not fixed in size, but you’re using CSS to shrink or grow it based on your device, you have a few more considerations. So if your user’s on a mobile device that’s only about 400 pixels wide, you shouldn’t be serving that mobile device the same 3,000-pixel-wide image that you want to show on a desktop monitor, right? So the recommendation here is always use srcset and sizes where you can. If you’re not familiar with this API, we’ll just do a quick review. So in order to make this work, first, you wanna add the srcset attribute, and you can think of the srcset as a set of sources.
So it’s a list of all the available images that you could potentially request and their corresponding sizes. So at the least, you probably wanna include a size that works well on mobile and a size that works well on desktop. So in this example, you have two URLs, one that serves the image at 400 pixels and one that’s at 800 pixels. And you give the browser the option to request either one. Then you wanna add the sizes attribute. And the sizes attribute tells the browser how to choose an image from that list that you provided in the srcset. So here, the 100vw basically says that the browser should choose to load the image from the srcset that’s closest in size to 100% of the viewport width in pixels. So if it’s a mobile device, the image that’s closest to 100% is probably gonna be the 400-pixel variant. But if you’re on a desktop, the image that’s closest in size to 100% is probably gonna be the 800-pixel one. So that’s kind of how that works. There’s also a slightly more advanced syntax that you might have seen where you can designate a specific width in pixels for both the viewport size and the image size. So in this example, for a width of the viewport that is 600 pixels or less, it’s gonna select the 400-pixel variant, otherwise, for everything else, it’s gonna choose the 800-pixel variant. So again, are people doing this? Well, according to the HTTP Archive, only 26% of sites are using the srcset attribute, and of this 26%, only 35% are also using the sizes attribute to tell the browser which image to pick. And then, even if you’re using the sizes attribute, 25% of sizes attributes are wrong enough that the browser is still selecting the wrong image. So this is a great opportunity for a framework to step in, because these APIs are complicated, and so, people just aren’t using them. And so, nuxt/image does provide a pretty cool API for this. It offers a simplified syntax.
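Both srcset/sizes variants walked through above can be sketched like this (image URLs are illustrative):

```html
<!-- Basic form: two candidate widths, and sizes says the image is
     rendered at the full viewport width, so the browser picks the
     candidate closest to the device's width -->
<img
  srcset="/images/photo-400.jpg 400w, /images/photo-800.jpg 800w"
  sizes="100vw"
  src="/images/photo-800.jpg"
  alt="Responsive photo">

<!-- Advanced form: at viewports 600px wide or less the image renders
     at 400px, otherwise at 800px, so the browser selects accordingly -->
<img
  srcset="/images/photo-400.jpg 400w, /images/photo-800.jpg 800w"
  sizes="(max-width: 600px) 400px, 800px"
  src="/images/photo-800.jpg"
  alt="Responsive photo">
```

The plain `src` attribute stays as a fallback for browsers that don't understand srcset.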
So rather than having to provide both a srcset and the sizes, it has a default srcset that’s based on common viewport sizes, and then as the developer, all you have to do is define the desired size of your image for each viewport size. So in this example, for a small viewport, please size my image at 300 pixels, for a medium viewport, please size it at 500 pixels, and so on and so forth. Okay, so that’s it for images. So now that we’ve covered that in some depth, I wanna move on to another LCP focus area that we looked at this year, which was web fonts. So to give a little context to start. So this is a graph from the Web Almanac, and on it, you can see the top 15 third-party requests and how render-blocking they are. So you can see here that fonts tend to be more render-blocking than other types of resources. So this is a characteristic of fonts in general and not any specific font library, and this is because fonts are critical to loading and they can block text display. So if your LCP element is text and your web font hasn’t loaded, that’s gonna push back your LCP, 'cause that’s gonna cause a flash of invisible text. So for example, let’s pretend you’re adding a Google font. Typically, you’d paste something like this from the Google Fonts website. There’s a style sheet here at the bottom. There’s a few preconnect tags. So I guess, first off, it’s important that you don’t edit this snippet when you paste it from Google Fonts. It can be tempting to just paste in the style sheet, 'cause that will work, but the preconnects are very critical, and they will give you an early start on connecting to both the font domain and the style sheet domain. And also note that we really don’t recommend preloading the font anymore. One reason, as I mentioned, is that preloading is best used on only a few resources at a time. And this isn’t the best use of that bandwidth typically.
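The Google Fonts snippet being described typically looks something like this (Roboto is just an example family); note the two preconnects that should be pasted intact, with no font preload:

```html
<!-- Early connections to both the stylesheet domain and the
     separate domain that serves the font files themselves -->
<link rel="preconnect" href="https://fonts.googleapis.com">
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
<!-- The font stylesheet; the actual font request is triggered
     only once this stylesheet is downloaded and parsed -->
<link href="https://fonts.googleapis.com/css2?family=Roboto&display=swap" rel="stylesheet">
```

The `crossorigin` attribute on the second preconnect matters: font files are fetched in anonymous CORS mode, so without it the warmed-up connection can't be reused.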
Preloading also means that you’re starting the download in a different mechanism than you typically are with fonts. And this mechanism doesn’t support Unicode ranges. So if you’re not familiar with Unicode ranges, they’re something that you’d usually see in a font style sheet, and they can be used to define which characters should trigger the request of a font. So if you have some sort of international site and you have pages that are in Greek and you have fonts that are for those Greek pages, you don’t wanna request those fonts with your English character pages. So if you preload, you kind of go around that whole triggering mechanism, and you might start downloading fonts that you don’t wanna download for all your pages. So just like with nuxt/image, Nuxt makes this somewhat easier with the Google Fonts package. So again, you install the package, you add it to your modules. And then, it’s as simple as adding a googleFonts object that has a list of the font families that you wanna add. Doing this right here is enough to add the font style sheet link and both those preconnect links, and it also does not add a preload, as we recommend. So just adding Google fonts this way is a way to guarantee that you’re using the best practices. Okay, so what else can we do? So let’s take a closer look at this font style sheet that’s being added to the head. So note very quickly that this is not a font. It looks like a font 'cause of the fonts.googleapis.com domain, but it’s actually a style sheet. So if you look at the actual response to this request, you’ll see that it’s a bunch of font declarations. And inside the font declarations is the URL of the actual font, which actually has a slightly different domain, which is why you have two different preconnect tags. So the important thing to note here is that there are two requests necessary to get the font. There’s one request that’s triggered by the link tag for the style sheet that goes and requests the style sheet from the fonts API.
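As a sketch, the module setup described above might look like this in the Nuxt config (assuming the @nuxtjs/google-fonts package; the family name is illustrative):

```typescript
// nuxt.config.ts — a minimal sketch, assuming @nuxtjs/google-fonts
export default defineNuxtConfig({
  modules: ['@nuxtjs/google-fonts'],
  googleFonts: {
    // Families to add; this alone is enough for the module to emit
    // the stylesheet link and both preconnect links, with no preload
    families: {
      Roboto: true,
    },
  },
})
```

Check the module's own documentation for the exact option names in your installed version; the point is that the best-practice markup is generated for you from a short declarative config.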
And then as soon as this comes back and it’s parsed, there’s a second request that goes out for the font file itself. And only when that file comes back can the font render. So you can remove one of these requests by inlining the font styles in the document head. So if you add a style tag to the head of your document and you just copy-paste in those font declarations, then you save yourself one round trip. And note here also that there is a style that uses the web font in the inline styles. And that’s important, because it’s not the font declaration that will trigger the request, it’s the use of the font somewhere in your styles. Okay, so that takes us from two requests to just one. Is there any way we can optimize this last request? Well, it’s a cross-origin request. So one simple thing we can do is download the font file and serve it from our own server to make a same-origin request instead of a cross-origin request. And in Nuxt, it’s really easy to make both of these optimizations. So if you add the download true option, it will both inline your font style sheets as I just showed, and it will also download the font and self-host it, which is pretty cool. And up until recently, this was opt-in, but in the latest major version, now it’s on by default, which is pretty cool. And that just came out today, so please, upgrade your modules. And the last thing I wanna talk about is flash of invisible text and trying to avoid that. So some browsers will show invisible text until your web font loads, which means that your LCP’s gonna be delayed until your web font can load. So you can avoid this with the font-display property. If the web font is important to the design of the site, swap is generally the best choice, because it will show a fallback system font right away, and then it’ll just switch to the web font whenever it’s ready. Optional is another good choice, but the problem is that it only waits 100 milliseconds.
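In a raw @font-face declaration, the font-display property sits alongside the font source, something like this sketch (the font URL is illustrative, as for a self-hosted file):

```css
@font-face {
  font-family: 'Roboto';
  src: url('/fonts/roboto.woff2') format('woff2');
  /* swap: show a fallback system font immediately, then switch to
     the web font whenever it finishes loading — no invisible text */
  font-display: swap;
}

/* Remember: it's the *use* of the font in a rule like this,
   not the declaration above, that actually triggers the request */
body {
  font-family: 'Roboto', sans-serif;
}
```

With `font-display: optional` instead, the browser gives the font only a very short window before committing to the fallback for that page view, which is why users may rarely see it.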
So functionally, this means that the web font will never load unless it loads in that small window, and so, there’s a good chance that your users won’t see it. And again, with Google Fonts, you can set the font-display property by setting display to swap. But as of the major version that came out today, it’s actually also on by default. So you get all of these best practices kind of for free. Okay, so finishing up here with all of these optimizations, what kind of results have we seen? Well, there’s been a 106% increase in the number of Nuxt origins that meet good LCP thresholds. And obviously, there’s a lot of factors going on here. It’s not just us, but it feels like that shows that things are moving in the right direction, but we think things could be higher if more people adopted these modules, like nuxt/image and nuxt/google-fonts. So I’d highly recommend trying them out in your apps if you haven’t already. And we’re also looking for production partners willing to test their apps with these modules to help us measure more performance outcomes. So please @ me on Twitter if you’re interested in that. I don’t think we have time to go through the roadmap, but I’ll share the slides on Twitter. So thank you for your time.