Thanks for taking the time to run these benchmarks. Still, the results can be misleading. You state "Don't trust benchmarks", but then ask the reader to trust the results of your isolated use case. How is that different from any other benchmark? A library of this size isn't a representative real-world scenario for comparing speed. There are cases, like large Node monoliths, where Jest performs atrociously. I had to apply a bunch of hacky workarounds just so we wouldn't have to disable unit tests on CI, and even then it took 8 parallel containers several minutes to finish the suite. Vitest, on the same codebase, performed much, much better.

My main problem with Jest is that it's slow not because of some underlying architectural choice that would be hard to change; it's a conscious decision not to give developers the freedom to configure certain options. Vitest, by contrast, lets you change many settings and tailor them to your use case.

That's not to say Vitest is perfect. As a drop-in replacement for Jest, it had to copy some of Jest's questionable choices. The sad truth is that there is no really good unit test framework yet: you have to choose between several half-good options and hope your choice won't become a problem down the road.
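For context, here is a sketch of the kind of tuning Vitest exposes. The option names come from Vitest's config reference, but verify them against the version you're running, and treat the specific values as illustrative rather than recommendations:

```typescript
// vitest.config.ts — illustrative settings only; profile your own suite before adopting any of these
import { defineConfig } from 'vitest/config'

export default defineConfig({
  test: {
    // Run tests in worker threads instead of forked child processes
    pool: 'threads',
    // Reuse the test environment across files instead of re-creating
    // it per file; often a large speedup on big suites, but unsafe
    // if tests mutate shared global state
    isolate: false,
    poolOptions: {
      threads: {
        // Cap worker count to match the CI container's CPU allotment
        maxThreads: 4,
        minThreads: 1,
      },
    },
  },
})
```

Jest, by comparison, lets you set `maxWorkers` or pass `--runInBand`, but it has no equivalent of disabling per-file isolation, which is exactly the kind of knob I mean.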