Thanks for reading and giving feedback!
You stated, "Don't trust benchmarks" but then you asked the reader to trust the results of your isolated use case.
I stated "Don't trust benchmarks" and I did not ask the reader to trust my benchmarks. On the contrary, I gave my numbers, and right after that I said that after trying this on a real project, those numbers are worth nothing. You should try it on your real project rather than trusting random articles on the internet, and you should be suspicious even of your own numbers.
Your library size doesn't make it a perfect real-world scenario for comparing the speed.
I agree. All the benchmarks you can find measure just one particular scenario. It's possible that you have a very similar but slightly different case and get totally different results.
In the case of Jest, it's not officially documented that you can use it with swc or esbuild, so people go with the default settings (Jest transpiles your code with Babel by default), and it ends up terribly slow. But if you try it with swc, it's highly likely to become more performant than Vitest, at least in my experience. There is no setting in Vitest that I can change to make it faster than Jest on my particular codebase.
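For reference, switching Jest's transformer to swc is a small config change. A minimal sketch, assuming the `@swc/jest` package is installed as a dev dependency (the exact transform pattern is an assumption and may need adjusting for your file extensions):

```javascript
// jest.config.js — route TS/JS files through @swc/jest instead of babel-jest
module.exports = {
  transform: {
    // match .js, .jsx, .ts, .tsx files (adjust the pattern for your codebase)
    '^.+\\.(t|j)sx?$': '@swc/jest',
  },
};
```

swc reads its own options from an `.swcrc` file if present, so decorator or JSX settings that Babel handled may need to be re-declared there.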
And I doubt that Vitest can become faster anytime soon, because it's based on a full-blown bundler that does a lot of things internally, Vite, while Jest + swc is just a test runner plus a transpiler and nothing more.
The sad truth is that there is no really good unit test framework so far.
Jest can't be considered good because of its ESM issues, lack of support, and not-so-good docs, but Vitest is really good: excellent docs, and it has everything you may need. Can you share what's missing from Vitest for you? What are the questionable choices that you mentioned?
Thanks for the effort of running the benchmarks. However, it can be misleading for some people. You stated, "Don't trust benchmarks" but then you asked the reader to trust the results of your isolated use case. How is that different from benchmarks? Your library size doesn't make it a perfect real-world scenario for comparing the speed. There are cases like big node monoliths where Jest shows atrocious results. I had to apply a bunch of hacky solutions so we at least wouldn't have to turn off unit tests on CI, and still had to use 8 parallel containers for minutes to finish the suite.
Vitest, at the same time, showed much, much better performance. My main problem with Jest is that it's slow not because of some underlying architectural choice that's hard to change. No, it's just a conscious decision not to give devs the freedom to configure some of the options. Vitest, at the same time, allows you to change many settings, tailoring it specifically to your use case.
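As an illustration of the kind of tuning Vitest exposes, here is a minimal config sketch. The specific values are assumptions, not recommendations; `pool` and `isolate` are documented Vitest options, and which combination is fastest depends on the codebase:

```javascript
// vitest.config.js — a hedged example of performance-related knobs
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    // worker pool implementation: 'threads' or 'forks'; relative speed varies by project
    pool: 'threads',
    // skip per-file environment isolation; faster, but unsafe if tests leak global state
    isolate: false,
  },
});
```

Disabling isolation in particular can cut suite time dramatically on large monoliths, at the cost of tests being able to affect each other.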
And I don't want to say that Vitest is perfect. As a drop-in replacement for Jest, it had to copy some of the questionable choices from it.
The sad truth is that there is no really good unit test framework so far. You have to choose between multiple half-good options and hope it won't become an issue at some point.