TL;DR: we replaced Webpack with Vite. We’ve been happier ever since and our users got some value out of it as well. We built and open-sourced our own Flask-Vite integration; you can find it on PyPI: canonicalwebteam.flask-vite.
This article is the first in a two-part series that covers why we did it and how. If you’re curious about the reasons behind this choice, keep reading; otherwise, if you’re here for the juicy technical details, you can find them in part two.
Fixing what isn’t broken
Being a developer means working with tools; whether it’s a text editor, an IDE, a task runner, a testing suite, or a framework, it’s still a tool. It’s tools all the way down, and selecting the right set of tools for building something is one of the most important skills a developer should have. Choosing a tool means separating what’s possible from what isn’t, and what is trivial from what is complex, so tooling is often what makes or breaks a new project.
But what happens when the project is already up and running, with all of its tools already configured and unchanged in years? Why fix what isn’t broken?
A brief history of snapcraft.io
The project we’re talking about is snapcraft.io, obviously. It started in August 2017 as a humble Flask application with the Vanilla framework for styling and some simple npm build scripts for compiling the stylesheet. A couple of months later, a few client-side scripts started to appear; then, at the beginning of 2018, the first package publisher features were added, implemented as a React single-page application (SPA) built using Webpack.
That’s pretty much where the project’s tooling stopped evolving. New features were added, JavaScript code was migrated to TypeScript, the React class components were converted to functional components, state management moved from Context to Redux, then to Recoil, then to Jotai, but despite everything changing, Webpack remained the same. Sure, it was receiving updates and, sure, new features came with every major version, but as the scripts multiplied, the configuration files ballooned in complexity and the build times skyrocketed.
Now in 2025, snapcraft.io’s tooling felt like it came from the 2010s, and for good reason. Webpack is a tool born in 2014, at a time when many of today’s conventions and standards didn’t exist. Upon release it was a revolutionary tool that defined new standards for the developer experience (DX) in the web ecosystem, cementing its market share and reputation as a beloved tool. Unfortunately for Webpack, this isn’t true anymore, and it hasn’t been for a long time. Just look at the State of JS surveys, where it has ranked as the build tool with the lowest satisfaction score four years in a row. This dissatisfaction isn’t rooted in the tool being bad in a vacuum; rather, it’s a result of the web community’s work on a newer generation of web platform tools. Building on top of Webpack’s ideas, they iterated on its strengths and addressed its shortcomings, redefining the standards of web DX.
We believe snapcraft.io’s DX deserves to be aligned to these standards, so back in August we decided it was finally time to bring the project’s tooling into the 2020s. Coincidentally, this lined up with the 8th anniversary of the project… so happy birthday, I guess?
The meaning of “good DX”
At the heart of our decision is the conviction that good DX leads to good software. This isn’t some abstract philosophical stance; it’s a real, measurable phenomenon: a positive and frictionless DX increases productivity and quality, no matter how you define or measure them. There’s no need to set up KPIs or metrics; the idea makes sense intuitively. If developers don’t have to fight their tools, they can focus their energy on solving the actual product challenges.
Unless strictly necessary or directly contributing to unique product requirements, mental bandwidth shouldn’t be spent on tooling, but rather on creating product value. Complex, quirky tooling imposes a mental tax and often transforms what should be routine operations into needlessly complicated energy-draining tasks. A tool with good DX, on the other hand, fades into the background and helps the developer enter and remain in a state of flow.
The speed and responsiveness of the tool are crucial from this point of view: clear and immediate feedback is one of the necessary conditions to achieve flow. All it takes to disrupt the fragile mental equilibrium is one long wait time to apply a code change; imagine how you would feel if you had to wait multiple seconds for your keyboard to register inputs – “frustrated” would be an understatement.
This makes it obvious that good tooling must focus on providing good performance by default. What isn’t so obvious is that it should do so not only during development, but also in production if the tool’s output affects the product’s performance (build systems are a prime example of this). Doing it by default is crucial: if achieving good performance in production requires extensive, non-trivial configuration, the tool is effectively forcing developers to implement such configuration, because otherwise the user experience (UX) would be degraded. This doesn’t mean that tooling should never expose low-level configuration options, far from it, but it should strive to remove unneeded complexity by shipping a default configuration that covers the majority of use cases, hiding the advanced options under the hood when not in use.
Ok, and?
You might wonder what all this has to do with snapcraft.io. Now that we know the meaning of “good DX”, we can talk about what the project’s tooling looked like a few months ago: intertwined startup and build scripts stashed inside package.json, slow Webpack and Sass builds before starting the Flask app, a convoluted Webpack configuration that was split across multiple files but was actually monolithic, full page reloads for every single small change to any of the static resources. Sounds fun, right?
Just to further convince you, here’s what creating a new .ts client script, bundling it, and linking its .js output in a page looked like:
- run yarn start and wait about 40 seconds for Webpack and Sass to build static resources before Flask starts;
- then wait another 15 or so seconds for Webpack to enter watch mode;
- create the new .ts script and add its path to webpack.config.entry.js;
- realize that Webpack doesn’t reload when the config files update, so you need to stop the process and restart it;
- wait 40 more seconds for the build scripts to run, then another 15 to enter watch mode again;
- finally, link the bundled script in the HTML template.
- Bonus step: does your script export values that you want to make available inside the browser’s global window object? Then you have to configure expose-loader through a module rule in webpack.config.rules.js. Enjoy the additional wait time!
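To make the ceremony concrete, here’s a sketch of what those two config edits might look like. The file names webpack.config.entry.js and webpack.config.rules.js come from the project, but their internal shape and the my-script.ts entry are assumptions for illustration; expose-loader’s exposes option is the standard way to attach a module’s exports to window.

```javascript
// webpack.config.entry.js (sketch; entry name and path are hypothetical)
module.exports = {
  // Each key becomes a separate bundle emitted by Webpack
  "my-script": "./static/js/my-script.ts",
};

// webpack.config.rules.js (sketch)
module.exports = [
  {
    // Match only the new entry file
    test: /my-script\.ts$/,
    use: {
      loader: "expose-loader",
      options: {
        // Makes the module's exports available as window.myScript
        exposes: "myScript",
      },
    },
  },
];
```

And every edit to either file meant a full stop-and-restart of the build.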
Does this look like “good DX”?
A corollary to “good DX leads to good software” is that the opposite is just as true. The “definitely-not-good DX” had some impact: our production bundles were huge due to duplicated dependencies and dead code that wasn’t being tree-shaken, so page load times were often slow. Addressing the issue through Webpack would have meant diving into the low-level configuration options, setting performance budgets, analyzing the bundle outputs, and iteratively applying changes while hoping for the best; this doesn’t look fun when you consider that each change to the configuration took about a minute to apply… so the team chose a more pragmatic solution instead. To improve performance in the React SPA, since most users don’t have access to the enterprise-specific features, the publisher and enterprise views were unceremoniously separated into two independent bundles, each containing a full copy of react, react-dom and @canonical/react-components. Despite the application layout being the same, moving between views across bundles – sometimes even within the same bundle! – meant a full page refresh. The single-page application had somehow become a multi-page single-page application.
Better tooling doesn’t magically fix problems – I wish it did – but it can help solve them. Tools define the constraints that shape what a solution can look like, so switching to a different tool can offer new possibilities.
The only issue is that the abundance of great tools makes it incredibly difficult to pick the right one.
Picking the right tool
I lied: it’s actually really easy. The right tool for the job is Vite, and it’s such an obvious choice. Great performance in development, automatic bundle optimization and splitting in production, a simple default configuration, and access to low-level internals if needed. Even better, it supports everything we use in our app (TypeScript, JSX, SCSS, ES modules) out of the box, and it can be integrated with non-JavaScript backends. Many don’t know about this last part, but using Vite alongside a “traditional backend” – as the docs call it – is a widespread and well-documented use case. Some backend frameworks have official first-party integrations with Vite, but most rely on community-built ones. In Flask’s case, the only community-built extension available proved to be a bad fit for our project: for one, the extension requires a specific codebase structure that doesn’t match ours, but most importantly it can only output a single application bundle, which made it a no-go for us. This meant we had to roll up our sleeves and get our hands dirty.
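For a taste of what “traditional backend” integration involves, here is a minimal vite.config.js sketch following Vite’s backend integration guide; the entry path and dev-server port are assumptions, not snapcraft.io’s actual configuration.

```javascript
// vite.config.js (sketch; entry path and port are hypothetical)
import { defineConfig } from "vite";

export default defineConfig({
  build: {
    // Emit a manifest.json mapping source files to hashed bundles,
    // so the backend can render the correct <script> tags
    manifest: true,
    rollupOptions: {
      // Use a JS/TS entry point instead of an index.html page
      input: "static/js/main.ts",
    },
  },
  server: {
    // Absolute asset URLs, so pages served by Flask can
    // load assets from the Vite dev server
    origin: "http://localhost:5173",
  },
});
```

In development the backend links scripts straight from the dev server; in production it reads the manifest to resolve the hashed file names. That resolution step is exactly what a Flask extension has to provide.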
Are you curious about how we built our Flask backend integration? Read more about it in part two.