Speeding Up Next.js: Shrink Your Bundle, Improve Your Vitals

Date: 2/20/2026

You open the DevTools, click on Network, reload your page, and get a little shock. There aren’t just two or three JavaScript files, but twenty, thirty, or even more. And although Next.js is supposed to be a performance framework, the page feels sluggish on first load, the hero appears late, and if you try to click right away, everything seems frozen for a moment. When your page loads slower, it feels worse. When it feels worse, more users bounce. And when more users bounce, rankings and conversions quickly lose ground.

The important thought right at the start: many JavaScript files don’t automatically mean your setup is broken. Next.js splits code into chunks so not every user has to download everything every time. Multiple files can even be a sign of sensible splitting. The problem starts when the sum of file size, parsing, compiling, and execution gets too high, or when the wrong files are loaded too early. Then your Largest Contentful Paint suffers because important content appears later. Then your Interaction to Next Paint suffers because the main thread is too busy. And then your SEO suffers, because Core Web Vitals can hit Next.js projects just as hard as any other React app.

This article tackles the topic the way you would in a real project. First, we measure properly, then look at bundle analysis, then defuse third-party scripts, then use dynamic imports and code splitting strategically, then leverage App Router and React Server Components for big gains, and finally, we address images and fonts, since they’re often the hidden LCP killers. At the end, you’ll have a practical target in mind: getting a typical marketing or SaaS landing page down to under ten truly relevant requests at the start, without over-optimizing your whole site into oblivion.

Train Your Dev Brain

If you’re currently tweaking your Next.js performance, you know the feeling when your brain gets mushy after an hour of debugging. That’s exactly what Cyberskamp Quiz is perfect for. You get short, snappy code snippets and have to guess the console output. If you’re interested, give it a try!

Check out Cyberskamp Quiz

When Many JS Files Are Actually a Problem

A common objection is that many requests aren’t so bad today because HTTP/2 and HTTP/3 can transfer multiple files in parallel. That’s partly true, but it doesn’t solve the core problem. The core problem isn’t just downloading. The core problem is that JavaScript creates work in the browser. Every file must be fetched, decompressed, parsed, compiled, and executed. If you deliver a lot of client JavaScript, you block the main thread and thus exactly the things users perceive as fast: visibility and interaction.

This is especially relevant for Core Web Vitals. LCP is often determined by rendering a large element, often a hero image, a big text block, or a product image. If your browser has to process a lot of JavaScript first, or if rendering is delayed by scripts and layout calculations, this element appears later. INP is even more directly tied to JavaScript. If your main thread is busy with long tasks, every click feels sluggish because inputs aren’t processed immediately. That’s why simply reducing bytes sometimes helps less than moving work away from the client.

Many small files can also cause an indirect problem. If you have lots of chunks that depend on each other through a chain of imports, your browser can end up in a kind of waterfall. Then it’s not the number itself that’s the killer, but the order. First comes runtime, then a shared chunk, then a page chunk, then a feature chunk, then finally the component. You see this in Network as a sequence that surprises you because you thought everything loads in parallel. In practice, dependencies often prevent parallel loading.

It’s also worth briefly mentioning hydration, because this is where Next.js projects often lose time without it being immediately obvious in the UI. If you server-render a page, the user quickly sees HTML. That’s good. But as soon as the JavaScript arrives, React has to connect this HTML with interactive event handlers. This takes time. The bigger your client bundle, the longer hydration takes. If you mark many components as client components, the page may be visually there, but it only becomes interactive later. It then feels like everything hangs for a moment, even though you rendered server-side quickly.

Measure First, Then Optimize

Before you sink into refactoring, you need a clear picture of what’s really happening. You don’t just want to know that 20 files are loaded, but why. You want to know which are critical, which come later, which you control, and which come from third parties. And you want to distinguish whether you have a download problem, a CPU problem, or a layout problem.

In Chrome DevTools, a simple process helps. Load the page with cache disabled, ideally in incognito mode, and look at the network waterfall. Look for large files, but also for late files that come after the first render but are still important. Then switch to the Performance tab and record a profile. If you see long yellow blocks, that’s JavaScript execution. Long purple blocks are rendering. Long green blocks are paint and composite. This color logic is simple but extremely helpful so you’re not blindly tweaking bytes.

Lighthouse is useful, but you should see it as a direction, not absolute truth. More important is that you have a before and after. Measure once, take an action, then measure again. If you want to be really reliable, test at least once with mobile throttling, because many performance problems only show up there. In practice, a setup that’s fine on your dev machine can suddenly fall apart on a mid-range Android. That’s where Core Web Vitals lose their innocence.

If you use real user monitoring, you get an even better picture. Many teams wonder why Lighthouse looks good but users still complain. That’s because the real world has slower devices, less stable networks, and more third-party overhead. Analytics, consent, and ads especially show their ugly side more in the field than in the lab.

Using Bundle Analyzer Properly

If you want to reduce your Next.js bundle size, the first hard step is to kill your gut feeling. In almost every project, there’s a library that was added for convenience but now eats up most of the client bundle. And as long as you don’t make that visible, you’re arguing in the dark.

The Next Bundle Analyzer gives you exactly this visibility. You build your app and get a visualization of the chunks. You see which dependencies land in which chunk and how big they are. More importantly, you see if a package only lands on one page or in shared chunks that almost every user loads. If a heavy library is in the shared chunk, it’s almost always poison for performance because it worsens the entry for all visitors.

The typical installation is quick, but interpretation is more important. Imagine the visualization as a map. Big blocks are big packages. If you see a huge icon set there, it’s usually not a Next.js problem but an import problem. Many icon libraries are set up so that a wrong import suddenly pulls in everything. Or you import from an index file that has side effects and tree shaking doesn’t work as you expect.

Date libraries are also a classic. In old projects, you often still see Moment, which is notorious for its size. In newer projects, it’s often a charts package or a rich text editor. Editors are heavy because they bring a lot of code for cursor, selection, plugins, and rendering. If you load something like that in the public marketing area, it’s almost always a mistake. That belongs in admin areas and even there, often only dynamically.

When you have the map in front of you, the next step isn’t to throw everything out immediately, but to ask the right question. Does the homepage really need this package on first paint, or is it enough if it comes on scroll or interaction? Can you limit it to a route so only users of that route get it? Can you replace it with a smaller alternative? Or can you solve it server-side so the client only gets HTML?

An underrated lever is how you structure data and logic. If you do data formatting, filtering, or sorting on the client, you often pull in utility libraries you wouldn’t need server-side. If you move that into a server component, you often save not just your own logic but also the dependencies introduced for it.

To activate the analyzer, run the build once with the analysis flag:

ANALYZE=true npm run build

Keep in mind that a flag like ANALYZE=true is just the switch, not the core of the optimization. The core is that you have a map afterward and can make decisions.
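For the flag to do anything, the analyzer has to be wired into your config first. With the official @next/bundle-analyzer package, a minimal setup looks roughly like this (sketch; adjust it to your existing config format):

```javascript
// next.config.js — assumes you've installed the analyzer:
//   npm install --save-dev @next/bundle-analyzer
const withBundleAnalyzer = require("@next/bundle-analyzer")({
  // Only run the analysis when ANALYZE=true is set on the build
  enabled: process.env.ANALYZE === "true",
})

module.exports = withBundleAnalyzer({
  // ...your existing Next.js config options
})
```

After `ANALYZE=true npm run build`, the analyzer opens treemap reports for your client and server bundles.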

Imports That Bloat Your Bundle

Many Next.js projects load too many JavaScript files because they unconsciously use import patterns that undermine the bundler logic. One example is the difference between targeted import and bulk import. If you always import from a package’s main entry, the bundler may pull in more than necessary. If you import specifically, tree shaking can work better. This is especially true for libraries not cleanly built as ESM or with side effects in their entry files.
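As a sketch of the difference, here is the classic lodash example (lodash stands in for any large utility package; the same idea applies to icon sets and similar libraries):

```javascript
// Instead of a bulk import from the main entry, which can pull in
// far more than you use if the package isn't cleanly tree-shakable:
//
//   import { debounce } from "lodash"
//
// ...import the specific module you actually need:
import debounce from "lodash/debounce"

// Only the debounce module (and its internal dependencies) ends up
// in the bundle, not the whole library.
export const onResize = debounce(() => {
  console.log("resized")
}, 200)
```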

A second pattern is reusing components. That sounds good at first, but if you move a component that needs a heavy dependency into a global layout area, that dependency quickly lands in shared chunks. In Next.js App Router, the layout area is extremely powerful but also dangerous. Everything you include there acts like a base load. This applies to UI libraries, animations, icons, and even client state. So if you want to reduce JavaScript files, part of the answer isn’t a tool but architecture. What’s global, what’s local?

A third pattern is accidentally mixing server and client. As soon as you need interactivity in a component, you mark it as a client component. And then something happens that many don’t see. Everything that this client component imports potentially has to go into the client bundle. If you suddenly import a large utility library, or an editor, or a chart library, it’s in. If you instead set a clear boundary and only build a small interactive piece as a client, while the rest stays server-side, the bundle size drops by itself.

This is where understanding use client plays a huge role. Many set it high up, like in the layout or a page file, because then everything just works. Performance-wise, though, that’s often the worst case. You want to set use client as deep as possible so only the components that really need JavaScript get it.
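A minimal sketch of this boundary might look like the following (file paths are hypothetical; the point is that only the button carries use client):

```javascript
// app/post/page.js — a server component by default; ships no JS itself
import LikeButton from "./LikeButton"

export default function Page() {
  return (
    <article>
      <h1>Server-rendered article</h1>
      <p>All of this content stays on the server.</p>
      <LikeButton />
    </article>
  )
}
```

```javascript
// app/post/LikeButton.js — the only client island on the page
"use client"

import { useState } from "react"

export default function LikeButton() {
  const [likes, setLikes] = useState(0)
  // Only this tiny component is hydrated in the browser
  return <button onClick={() => setLikes(likes + 1)}>Likes: {likes}</button>
}
```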

Defusing Third-Party Scripts

There’s a pattern I see in almost every performance audit. The own bundle is okay, but the page is still slow. Then you look in Network and see requests to analytics, consent, heatmaps, chat widgets, AB testing, and sometimes five different pixels. These scripts often come from external domains, bring extra requests, and run at inconvenient times.

The trick isn’t just to use less third party. The trick is to load them differently. Next.js gives you the Script component, letting you define when a script loads. Strategies like afterInteractive and lazyOnload are gold here, giving you a simple language to protect the critical path. The critical path is what’s needed for your user to quickly see something meaningful and interact immediately.

The biggest trap is that third-party scripts often seem important for your business because they provide tracking. But if they worsen LCP and INP, they cost you the conversions you wanted to measure. You then track very precisely how users bounce because you slowed them down. That’s no joke. It happens daily.

A good way is to load scripts after the first render. You can include an analytics script with Next Script and have it load only once the page is interactive:

import Script from "next/script"

export function Analytics() {
  return (
    <Script
      src="https://www.googletagmanager.com/gtag/js?id=G-XXXXXXX"
      strategy="afterInteractive"
    />
  )
}

This pattern protects your first impression. The user gets the page, and only then does tracking load. If you want to go further, load certain tools only after an interaction. A chat widget doesn’t need to be there at page start. It’s often enough if it loads after clicking a chat button. You can treat AB testing tools the same way if they’re not needed for the first paint.
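A sketch of the interaction-gated variant could look like this (the widget URL is made up; swap in your chat provider's loader):

```javascript
"use client"

import { useState } from "react"
import Script from "next/script"

// The chat script only loads after the user asks for the chat,
// keeping it entirely off the critical path.
export function ChatLauncher() {
  const [open, setOpen] = useState(false)
  return (
    <>
      <button onClick={() => setOpen(true)}>Open chat</button>
      {open && (
        <Script
          src="https://widget.example-chat.com/loader.js"
          strategy="lazyOnload"
        />
      )}
    </>
  )
}
```

Users who never open the chat never pay for it, and everyone else pays only after they've signaled intent.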

It’s also worth auditing third-party scripts the way you audit your own code. Don’t just check if the tool is cool, but if it blocks your main thread. Many of these tools execute work right on load, attach event listeners to scroll and click, collect information, and trigger extra requests. All this increases INP risks. And if you’re unlucky, a script causes layout shifts by injecting elements into the page after the fact. That’s a direct CLS killer.

If you use consent management, it gets even more interesting. Many consent tools load large scripts themselves and block rendering. Here, it’s worth asking a radical question: do you need the heaviest solution, or can you do it leaner without ignoring legal requirements? In many cases, a cleanly implemented consent flow is possible, loading tools only after consent and doing nothing before. That helps not just privacy but also performance.

Dynamic Imports and Code Splitting

Next.js already does route-based code splitting. That means each route gets its own chunks. Still, many features end up in the start bundle because they’re imported somewhere in a global component. That’s when dynamic imports get really fun, because you can pull the weight of features out of the initial bundle.

The mindset is simple: what really needs to be there on first visit? Everything else loads later. This especially applies to things like charts, syntax highlighting, big animations, editors, maps, social embeds, and sometimes entire UI areas that only become relevant after scrolling.

In Next.js, you can use dynamic imports for this. The term next/dynamic is the key here. You don’t import a component directly, but dynamically, and Next.js creates its own chunk for it, which is only loaded when the component actually needs to be rendered. A minimal example without options looks like this:

import dynamic from "next/dynamic"

const Chart = dynamic(() => import("./Chart"))

export default function Page() {
  return (
    <div>
      <h2>Results</h2>
      <Chart />
    </div>
  )
}

This pattern gets really powerful when you combine it with real user flows. Imagine you have a landing page with a testimonials section and below that an interactive demo. If the demo needs a heavy package, only load it when the user scrolls there or clicks a button. You can even put the demo in a separate segment so the rest of the page is super fast and only users who are really interested get the extra chunk.
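An interaction-gated version might look like this (component name and path are hypothetical; ssr: false and the loading option are standard next/dynamic features):

```javascript
"use client"

import { useState } from "react"
import dynamic from "next/dynamic"

// The demo chunk is only fetched once the component is actually
// rendered; ssr: false also skips server rendering for this widget.
const InteractiveDemo = dynamic(() => import("./InteractiveDemo"), {
  ssr: false,
  loading: () => <p>Loading demo…</p>,
})

export function DemoSection() {
  const [show, setShow] = useState(false)
  if (!show) {
    return <button onClick={() => setShow(true)}>Try the demo</button>
  }
  return <InteractiveDemo />
}
```

The same gating works with an IntersectionObserver if you want the load to trigger on scroll instead of a click.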

This has two effects. First, your initial download and parsing time drop. That helps LCP and often TBT (the time the main thread is blocked). Second, the risk of early interactions being sluggish drops, because less work happens in the browser at once. This is where lazy loading in practice applies not just to images but to logic.

A common question is whether many smaller chunks just lead to lots of requests again. That can happen if you overdo it. But if you do it sensibly, you shift requests away from the start to later. That’s often exactly right for Core Web Vitals, since the vitals focus heavily on the early user moment. You want to win the first few seconds.

An added benefit is that you can build AB testing and feature flags more cleanly. If a feature is only active for some users, you can load it only for those users. That’s a kind of performance personalization.

App Router and Server Components

If you really want to make a difference, there’s almost no way around taking App Router and React Server Components seriously. Many teams use Next.js but code it like a classic client React app, just with SSR on top. That works, but it wastes what makes Next.js so strong today.

The idea of server components is that components run on the server by default. They can load data, generate HTML, and don’t automatically send JavaScript to the client. That’s a radical change, because it means you deliver large parts of your UI as pure output. The browser doesn’t have to compute everything itself. It gets pre-rendered content that appears quickly, and only small islands of interactivity need client code.

This is where use client is the boundary. As soon as you set it, you’re saying this component must run in the browser. That’s sometimes necessary because you need state, effects, event handlers, and browser APIs. But if you set use client too early, your client bundle gets big. If you set it late, your client bundle stays small. A good App Router style is to use server components for structure, data, and presentation, and client components only for interaction.

Take a practical scenario. You have a blog page with an article, a table of contents, a newsletter box, and a like button. The article text is static or comes from a CMS API. That can all be server-rendered. The table of contents can be generated server-side from the article. The newsletter box can be server-rendered; only the form needs interactivity. The like button needs interactivity. If you separate this cleanly, the client code might just be a form handler and a like handler. That’s tiny. The result is a much smaller bundle, less hydration work, and often better INP.

Many performance problems in Next.js arise because everything ends up in one huge client component. Then the whole layout is hydrated, even if only one button is interactive. That’s like driving a truck to buy bread. It works, but it’s absurd.

The App Router also brings streaming. That means the server can send HTML in parts. The user sees something early while more parts arrive. You can use Suspense to reveal areas with data later without blocking the rest. This feels fast to users because the page shows structure immediately. And it helps LCP if the largest content area comes early.
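A minimal sketch of that shape (component names are hypothetical; SlowStats stands in for any async server component that fetches data):

```javascript
// app/page.js — the hero streams immediately; the data-heavy
// section arrives later without blocking the rest of the page.
import { Suspense } from "react"
import Hero from "./Hero"
import SlowStats from "./SlowStats"

export default function Page() {
  return (
    <main>
      <Hero />
      <Suspense fallback={<p>Loading stats…</p>}>
        <SlowStats />
      </Suspense>
    </main>
  )
}
```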

Another point is caching. If you load data server-side, you can also cache it server-side. You can use static generation or ISR, depending on how fresh the content needs to be. That reduces TTFB and makes the page more stable. And more stable TTFB often leads to better LCP because the browser can start earlier.
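In the App Router, both levers are a few lines (the API URL is a placeholder; revalidate and the fetch cache option are standard Next.js features):

```javascript
// app/posts/page.js — statically generated, refreshed at most
// once per hour (route-level ISR).
export const revalidate = 3600

export default async function Page() {
  // Per-request caching: this fetch result is also cached
  // server-side and revalidated on the same schedule.
  const res = await fetch("https://api.example.com/posts", {
    next: { revalidate: 3600 },
  })
  const posts = await res.json()
  return (
    <ul>
      {posts.map((post) => (
        <li key={post.id}>{post.title}</li>
      ))}
    </ul>
  )
}
```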

When you put this all together, you see why Core Web Vitals aren’t just about minify and gzip. It’s an architecture topic. How much work happens in the browser, and how much on the server?

Understanding Hydration and INP

INP is hard for many teams to grasp because you can’t see it as easily as a big image. But it’s often why a page feels sluggish. And JavaScript is the main suspect here.

When the page loads, a lot happens in the browser. It loads the HTML, builds the DOM, loads CSS, calculates layout, paints the first paint. Then scripts come. These scripts must be parsed and executed. React must hydrate and connect event handlers. And while that’s happening, the main thread can be blocked. If the user scrolls or clicks at that moment, it takes longer for a reaction to appear. That’s INP.

When you reduce bundles, you don’t just reduce download. You reduce CPU work. And CPU work is often the real bottleneck in modern apps. You notice it less on fast devices, brutally on weaker ones.

You can reduce hydration by having fewer client components. You can also activate interactivity later. One pattern is to first render a static version of a component and only load the interactive version after an interaction or after scrolling. That’s like progressive enhancement, just modern.

Dynamic imports help here again. If a widget isn’t needed immediately, it doesn’t need to be hydrated immediately. And server components help even more because they don’t need hydration at all. This often makes INP much better without micro-optimizing every event handler.

Optimizing Images for LCP

Many people think of reducing JavaScript files right away when it comes to bundles. But LCP is very often determined by an image. The classic example is the hero image on a landing page. If this image is large, uncompressed, or in an unfavorable format, it appears late. If it appears late, LCP is bad, no matter how small your JavaScript is.

Next.js gives you a powerful tool with the Image component. It helps you deliver the right size, use modern formats, and do lazy loading. It’s important to give the browser clear dimensions so it can reserve space. Otherwise, layout shifts occur and CLS gets worse.

An example of a hero image you can really use looks like this:

import Image from "next/image"

export function Hero() {
  return (
    <Image
      src="/hero.jpg"
      alt="Hero image"
      width={1400}
      height={800}
      priority
    />
  )
}

The priority attribute is often useful for the most important image above the fold. You’re telling the browser to load this image early. That can noticeably improve your LCP. But you shouldn’t set everything to priority, or you’ll lose the benefit.

Another lever is not embedding images at huge sizes and then scaling them down with CSS. If you load a 4000-pixel-wide file and display it at 1200 pixels, you waste bandwidth and time. Next.js can deliver responsive variants, but you also need to use layout sensibly.
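The sizes attribute is the tool for this (image path and breakpoint are illustrative): it tells the browser how wide the image will actually render, so it can pick a matching variant instead of always downloading the largest file.

```javascript
import Image from "next/image"

export function ProductImage() {
  return (
    <Image
      src="/product.jpg"
      alt="Product"
      width={1200}
      height={800}
      // Full viewport width on small screens, capped at 1200px otherwise
      sizes="(max-width: 768px) 100vw, 1200px"
    />
  )
}
```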

Background images are also tricky. Many hero sections use CSS background images. These aren’t always prioritized like a real image element. If your LCP element is a background image, it can worsen LCP. In such cases, it’s often better to use a real image element and position it with CSS.

If you have very visual pages, it’s also worth looking at rendering. Large blur effects, filters, and overlays can make paint expensive. That’s not JavaScript, but it blocks the main perception. Performance is always a combination.

Fonts Without CLS and Delay

Fonts seem harmless, but they can affect two vitals at once. First, they can delay text rendering if the browser waits for the font file. Second, they can cause layout shifts if a fallback font is rendered first and later a different font comes with different metrics.

The good news is that Next.js has solid options here. The font module lets you embed fonts efficiently and use self-hosting. The term next/font is the starting point. Self-hosting reduces external DNS and TLS costs and gives you more control over caching. Plus, modern font setups often automatically use swap so text is visible immediately.
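A typical setup could look like this (Inter is just an example family; in the App Router, the font is usually wired into the root layout):

```javascript
// app/layout.js — self-hosted Google font via next/font,
// loading only the subset and weights that are actually used.
import { Inter } from "next/font/google"

const inter = Inter({
  subsets: ["latin"],
  weight: ["400", "600"],
  display: "swap", // text stays visible while the font loads
})

export default function RootLayout({ children }) {
  return (
    <html lang="en" className={inter.className}>
      <body>{children}</body>
    </html>
  )
}
```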

It’s also important to only load what you really need. Many load multiple font weights, italic, bold, extra bold, and then two families, even though maybe only regular and semibold are used. Every extra file is a request and a render risk.

If you can use variable fonts, you often get multiple weights in one file. That can reduce requests. At the same time, you need to make sure variable fonts aren’t huge. Again, measuring helps.

A practical approach is to treat your fonts as a performance budget. Decide consciously how many files you’ll allow for them. And test whether the aesthetics are worth the price. If your site is optimized for conversions, speed is often more important than a perfect custom font.

Target: Under 10 Requests

Now we come to the part many want as a checklist, but without bullet points. Just imagine the target: a user loads your landing page and the browser only needs to fetch a few things to deliver the first meaningful impression. An HTML document, a CSS bundle or a small amount of CSS, one or two JavaScript chunks, a hero image, and maybe a font file. That’s the ideal.

The first step toward this target is to consistently remove third party from the critical path. That doesn’t mean you have to give up analytics. It means you load it later. As soon as you do that, several requests often disappear from the first seconds, and that’s exactly what Core Web Vitals rewards.

The second step is to reduce global dependencies. Using a UI library isn’t bad per se. But if it blows up your shared chunk, it’s a problem. Often you can use components selectively or build a lighter setup. Especially with icons, it’s worth not importing the whole world. A few SVG icons directly or a well tree-shakable library make a huge difference.

The third step is to isolate big features. Charts, editors, maps, syntax highlighting, video players. These are typical feature blocks you should load dynamically. If you remove these from the start, not only does the bundle size drop, but you also reduce CPU load in the first seconds, improving INP.

The fourth step is to really use App Router and server components. If your marketing area is largely server-side and only small islands are interactive, you need less JavaScript at the start. That’s the cleanest way to leverage Next.js. You’ll notice you suddenly have to think less about reducing JavaScript files because it happens automatically.

The fifth step is to treat images and fonts as first-class performance assets. Properly size and prioritize the hero image, keep fonts lean, and avoid layout shifts. If you ignore this, you can optimize bundles all you want and LCP will still be bad.

The sixth step is not to forget caching. Many requests are unavoidable on the first visit, but on the second visit, you want to load almost nothing. Next.js hashes assets, which is good. Make sure your hosting and headers take advantage of that. If a user returns and still loads everything anew, you lose part of the performance advantage that should be free.
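Next.js already serves its hashed assets under /_next/static with immutable caching, but your own static files may need help. A sketch for self-hosted fonts (the path is illustrative):

```javascript
// next.config.js — long-lived caching for files you serve yourself.
module.exports = {
  async headers() {
    return [
      {
        source: "/fonts/:path*",
        headers: [
          {
            key: "Cache-Control",
            // Safe for files whose names never change content silently
            value: "public, max-age=31536000, immutable",
          },
        ],
      },
    ]
  },
}
```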

If you keep these six steps as a story in your mind, you’ll almost automatically get close to under ten truly relevant start requests. And if you go over, that’s not a disaster. What matters is that the critical requests are early and the main thread stays free.

Practical Workflow in a Project

In practice, you don’t want to optimize forever. You want quick wins and no regressions. A good workflow is to first measure the status quo and take screenshots of the waterfall and performance trace. Then do bundle analysis and identify the two or three biggest chunks. Then decide for each chunk whether to remove, replace, or isolate it.

In parallel, do a third-party audit. Mentally list all scripts and ask yourself which really needs to be in the start. In most projects, the answer is almost none. Change the loading strategy and measure again.

Then tackle the architecture boundary. Look for large client boundaries created by use client and pull them apart. Make the server part big and the client part small. This is often the biggest lever for Next.js bundle size and Core Web Vitals, because it doesn’t just treat symptoms but reduces the cause.

Finally, address the visible LCP elements. Is it an image, a font, a big text block, a slider? Optimize the element so it appears early and stays stable. Check CLS and make sure dimensions are reserved. If you do this cleanly, the page suddenly feels much faster, even if it’s not completely minimalistic.

And then comes the most important step. Build a small routine so it doesn’t break again. You can define a performance budget, quickly check in pull requests if new libraries are added, and regularly run Lighthouse. Performance isn’t a one-time project; performance is hygiene.

Conclusion and Next Steps

The next time you see Next.js loading 20 or more JavaScript files, don’t just think in files. Think in work. What work are you forcing the browser to do in the first seconds? What work can you defer? What work can you remove from the client entirely? That’s how you reduce bundles, that’s how you improve Core Web Vitals, and that’s how you ultimately achieve better rankings and conversions.

The fastest way to more speed is almost always a combination of bundle analysis, smart third-party integration, consistent lazy loading in Next.js via dynamic imports, clean use of App Router and server components, and an honest look at images and fonts. Once you understand these levers, reducing Next.js JavaScript files goes from a frustrating symptom to a solvable task you can tackle in clear steps.

If you like practical performance guides like this and want regular, concrete Next.js tips, sign up for my newsletter. You’ll get updates, checklists, and real project patterns to help you build and rank faster, without starting from scratch every time.
