All languages eventually execute assembly / machine code (I'll use the terms interchangeably here, even though they're not strictly the same).
Sometimes the language compiles directly to machine code ahead of time (C, C++, Go, Swift, ...). Sometimes it compiles to an intermediate language first, which makes generating machine code easier and more efficient. That's where you get JIT (Just In Time) compilation: code is compiled on the fly from the intermediate language to machine code, and that machine code is re-optimized as the program runs (Java, Scala, Kotlin, Clojure, Groovy, etc. on the JVM all compile to bytecode, which then gets compiled to machine code). Some engines JIT-compile source more or less straight to machine code without a separate bytecode artifact (V8, the engine behind NodeJS, historically worked this way, though modern versions do use an internal bytecode step). And then you get interpreted languages that read your code and execute pre-built routines that are already compiled to machine code (CPython, PHP, Perl, ...). Python, for what it's worth, compiles to bytecode too, but by default interprets it rather than JIT-compiling it.
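To make the intermediate-language step concrete, here's a small sketch using CPython's standard `dis` module, which shows the bytecode a function gets compiled to; that bytecode is what the interpreter loop actually executes (the exact opcode names vary between Python versions):

```python
import dis

def add(a, b):
    return a + b

# CPython compiles the function body to bytecode; dis exposes that
# intermediate representation, which the interpreter loop executes.
instructions = [ins.opname for ins in dis.get_instructions(add)]
print(instructions)
```

On one version of Python you'll see opcodes like LOAD_FAST and RETURN_VALUE; on another the arithmetic opcode differs, which is exactly why bytecode is an implementation detail rather than part of the language.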
It all boils down to how quickly that machine code executes. Java can match or even beat C++ in some benchmarks despite having to compile on the fly: because Java compiles to bytecode, the JVM can change the generated machine code at runtime. If it figures out that a certain bit of code can run faster by making certain assumptions, it will re-optimize the generated machine code on the fly. C++, on the other hand, compiles to machine code once and then it's done; if you wrote crappy or non-optimal code, it will execute in a non-optimal way.
With interpreted languages you get the latency of interpreting the source code and mapping it to pre-built routines in machine code, and there's much less scope for optimization, since you're effectively interfacing with a program (the interpreter) that's already written in C / C++ and already compiled to static machine code.
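A minimal sketch of that interpreter model: a toy dispatch loop where every "instruction" maps to a routine that is already compiled (Python builtins stand in here for the interpreter's pre-built C functions; the instruction names and structure are made up for illustration):

```python
import operator

# Toy instruction set: each name maps to a pre-built, already-compiled
# routine (builtins standing in for the interpreter's C code).
OPS = {"add": operator.add, "mul": operator.mul}

def interpret(program, value):
    # Walk the instruction list and dispatch to the pre-built handlers.
    # The lookup-and-call on every step is the interpretation overhead
    # that compiled code doesn't pay.
    for op, arg in program:
        value = OPS[op](value, arg)
    return value

print(interpret([("add", 2), ("mul", 3)], 1))  # (1 + 2) * 3 = 9
```

The program itself never becomes machine code; only the handlers are machine code, and the loop in between is pure overhead, which is the latency described above.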
The JVM that Java runs on has had 20 years of optimization work: engineers have literally spent the last two decades figuring out how to generate better machine code from your source code. GCC and G++, which compile C and C++, get these kinds of optimizations too, but as I said earlier, once you've compiled the code it's static machine code, whereas on the JVM the machine code that gets executed can change if the JVM believes it can optimize your code for you.
NodeJS has these capabilities as well, but since it's JITing JavaScript, it has a much harder job figuring out how to generate optimal machine code. Having types in the code lets the compiler make certain assumptions, which often leads to speed improvements, although a type system can also add overhead when things have to be type-checked at runtime.
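A tiny illustration of why missing types make the JIT's job harder. In a dynamic language the same operator can mean completely different machine-level operations, so the runtime (or the JIT) has to check or speculate on the types at every call site (function name here is just for illustration):

```python
def combine(a, b):
    # Without type information, + could mean integer addition, float
    # addition, string concatenation, list concatenation, ... so the
    # runtime must inspect the operands' types on each call before it
    # knows which operation to perform.
    return a + b

print(combine(1, 2))      # 3
print(combine("a", "b"))  # ab
```

A JIT for a typed language can compile `a + b` straight to an integer add instruction; a JavaScript JIT has to guess, emit a fast path for the guessed type, and fall back (deoptimize) when the guess turns out wrong.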
Dart, for example, uses the types at compile time to check that things are sound, then uses the type system to generate better JavaScript, after which it no longer cares about the types. With server-side Dart you potentially get the benefit of types for making certain assumptions, but the type information is generally thrown away after compile time, which cuts down on the runtime type-checking overhead that other languages have to deal with.
TL;DR: all languages eventually run machine code one way or another. The speed of a language depends on how well it can generate optimal machine code, how quickly it can generate that machine code if it's doing JIT, and, in the case of interpreted languages, how efficient the existing machine code your code executes against is.