We often hear that X is a faster language than Y. What are the factors that determine the speed of a programming language?
There are a lot of factors at a lot of levels.
A purely interpreted language will generally be slower than a compiled language, which in turn will often be slower in common cases than a language that compiles to an intermediate representation and applies runtime optimizations such as JIT compilation.
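A rough way to feel the interpretation overhead, sketched in Python (an illustrative micro-comparison, not a benchmark): the same arithmetic done step-by-step under interpreter dispatch versus in a single call into a pre-compiled C routine.

```python
# Sketch: per-step interpreter dispatch vs. one call into compiled code.
import timeit

N = 100_000

def summed_in_python(n):
    # Every +=, comparison, and loop step is dispatched by the interpreter.
    total = 0
    for i in range(n):
        total += i
    return total

def summed_in_c(n):
    # sum() over a range runs the whole loop inside CPython's C implementation.
    return sum(range(n))

assert summed_in_python(N) == summed_in_c(N)

t_py = timeit.timeit(lambda: summed_in_python(N), number=10)
t_c = timeit.timeit(lambda: summed_in_c(N), number=10)
print(f"interpreted loop: {t_py:.3f}s, C-backed sum: {t_c:.3f}s")
```

Same result, very different cost per iteration; that gap is roughly what compilation buys you.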
At the same time, a language that is too bare-bones may be very fast, but if it takes a lot of effort to write anything in it, there will be less time and energy left over for optimization. A bare-bones language with a lot of libraries will often accumulate inefficiencies at the seams between those libraries, from translating data formats back and forth or from incompatible locking schemes.
A language that is too abstract will pile up layers and layers of classes or constructs until eventually the compiler finds a reason not to optimize them, and the reason will be buried too deep for mere mortals to fathom. Or, probably even more common, it will bury locking mechanisms so deep inside the abstractions that the program spends most of its time waiting for other parts of itself to do something.
A language with a garbage collector will be faster than a reference-counting language, except for the 100 ms every second or so where it takes a pit stop and does nothing at all, which then forces special intricacies and workarounds for real-time tasks.
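The "pit stop" can be made visible even in Python, whose cyclic collector serves here as a hedged stand-in for a tracing GC (the object counts and timings are illustrative only):

```python
# Sketch (CPython): let cyclic garbage pile up, then time the collection pause.
import gc
import time

gc.disable()  # stop automatic collection, as if between GC cycles

class Node:
    def __init__(self):
        self.ref = self  # a reference cycle that refcounting alone can't free

garbage = [Node() for _ in range(200_000)]
del garbage  # the cycles are now unreachable but not yet collected

start = time.perf_counter()
collected = gc.collect()  # the "pit stop": everything else waits
pause = time.perf_counter() - start
gc.enable()

print(f"collected {collected} objects in {pause * 1000:.1f} ms")
```

The more garbage accumulates between collections, the longer that single uninterruptible pause gets.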
A functional or declarative language will usually stress immutability, and will burn through memory and cache if not coded with care or if the compiler is not up to snuff, and that will slow it down. An imperative/procedural language is more likely to encourage mutable objects, and then has to introduce more locking or copy-on-write machinery, which is more CPU-intensive.
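Python's immutable strings give a small, concrete taste of the memory-churn problem (a sketch, not a claim about any particular functional language):

```python
# Sketch: immutable values force copies. Repeated += on an immutable str
# conceptually copies the whole buffer each time (CPython sometimes
# optimizes this in place, but that is not guaranteed); ''.join builds once.

def concat_immutable(parts):
    out = ""
    for p in parts:
        out += p  # each step may allocate a brand-new string
    return out

def concat_buffered(parts):
    return "".join(parts)  # one sizing pass, one allocation

parts = ["x"] * 1000
assert concat_immutable(parts) == concat_buffered(parts)
```

Careless immutability turns an O(n) build into O(n²) copying; a good compiler or the right idiom avoids it.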
So, I'll give you a much simpler answer: good documentation makes a language fast, because it tells you how to write fast code, gives you a clear idea of how and why things work, points you to educational material that helps you write better algorithms or avoid mistakes, and lets you find wheels before reinventing them.
It really depends on the task/job at hand...
If you need heavy computational loops, compiled code will work best...
If you need a simple set of services that do generalized work, a bytecode/JIT language will work better...
If you're I/O constrained, it's usually not the language but the structure: an async event loop is better, regardless of language.
For speed of development, or one-off, rarely used tools, reach for scripting languages...
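The I/O point can be sketched in a few lines of Python, with `asyncio.sleep` standing in for a network call (names and timings here are illustrative):

```python
# Sketch: for I/O-bound work, one event loop overlaps all the waiting.
import asyncio
import time

async def fake_request(i):
    await asyncio.sleep(0.1)  # pretend this is waiting on a socket
    return i

async def main():
    # Three "requests" run concurrently on a single thread.
    return await asyncio.gather(*(fake_request(i) for i in range(3)))

start = time.perf_counter()
results = asyncio.run(main())
elapsed = time.perf_counter() - start

print(results, f"in {elapsed:.2f}s")  # roughly 0.1s total, not 0.3s
```

Run sequentially, the same three waits would take the sum of their latencies; the event loop pays only the longest one, and that holds in any language with an async runtime.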
For the most part, if you don't need to service many thousands of simultaneous users... Use what you're comfortable with.
Most of it comes down to the skill set of the individual programmer. Use of OS calls, algorithms, and data structures is largely transferable between languages, for similar gains.
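A tiny example of that transferable skill, sketched in Python: picking the right data structure for membership tests pays off roughly the same way in any language.

```python
# Sketch: O(n) scan of a list vs O(1) average hash lookup in a set.

def count_hits_list(items, queries):
    return sum(1 for q in queries if q in items)   # scans the list per query

def count_hits_set(items, queries):
    lookup = set(items)                            # build the index once
    return sum(1 for q in queries if q in lookup)  # hash lookup per query

items = list(range(10_000))
queries = [1, 9_999, 20_000, 5]
assert count_hits_list(items, queries) == count_hits_set(items, queries) == 3
```

The same switch (array scan to hash table) gives the same asymptotic win in C, Java, or JavaScript; no compiler is needed to make it happen.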
Jan Vladimir Mostert
Idea Incubator
All languages eventually execute assembler / machine code (I'm going to use the terms interchangeably here even though they're not strictly the same).
Sometimes the language compiles directly to machine code (C, C++, Go, Swift, ...). Sometimes it compiles to an intermediate language to make compiling to machine code easier and more efficient; that's where you get JIT (Just-In-Time) compilation, which compiles code on the fly from the intermediate language to machine code and optimizes that machine code as it runs (Java, Scala, Kotlin, Clojure, Groovy, etc. on the JVM all compile to bytecode, which then gets compiled to machine code). Then you get languages that JIT-compile on the fly straight from source (NodeJS; Python implementations like PyPy can do it as well). And then you get interpreted languages that read your code and execute pre-built routines that are already compiled to machine code (PHP, Perl, ...).
It all boils down to how quickly that machine code executes. Java beats C++ in some benchmarks even though C++ doesn't have to compile on the fly: because Java compiles to bytecode, the JVM can change the generated machine code at runtime, so if it figures out that a certain bit of code can run faster by making certain assumptions, it will re-optimize the generated machine code on the fly. C++, on the other hand, compiles to machine code once and then it's done; if you wrote crappy or non-optimal code, it will execute in a non-optimal way.
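The shape of that "make an assumption, specialize, fall back if wrong" loop can be sketched in a few lines. This is a hypothetical toy in Python for readability, not how any real JIT is implemented:

```python
# Sketch: speculative specialization. Profile the types seen on the hot
# path, take a fast path while the assumption holds, and "deoptimize"
# back to the generic path when it breaks.

def make_adder():
    seen_type = None  # what the profiling tier has observed so far

    def add(a, b):
        nonlocal seen_type
        if seen_type is int and type(a) is int and type(b) is int:
            return a + b  # specialized fast path: no generic dispatch needed
        # slow path: record what we saw, like a JIT's profiling tier
        seen_type = type(a) if type(a) is type(b) else None
        return a + b

    return add

add = make_adder()
assert add(1, 2) == 3         # profiling pass: records int
assert add(3, 4) == 7         # now taking the specialized path
assert add("a", "b") == "ab"  # assumption broken: back to the generic path
```

A real JIT does this with generated machine code and guard instructions rather than an `if`, but the trade-off is the same: speed from assumptions, plus a fallback for when they fail.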
With interpreted languages you pay the latency of interpreting the source code and mapping it to pre-built routines in machine code, and there is much less scope for optimization, since you're effectively interfacing with a program that's already written in C/C++ and already compiled to static machine code.
The JVM that Java uses has had more than 20 years of optimization work; engineers have literally spent two decades figuring out how to generate better machine code from your source code. GCC and G++, which compile C and C++, also get these optimizations, but as I said earlier, once the code is compiled it's static machine code, whereas on the JVM the machine code that gets executed can change if the JVM believes it can optimize your code for you.
NodeJS has these capabilities as well, but since it's JITing JavaScript, it has a much harder job figuring out how to generate optimal machine code. Having types in the code lets the compiler make certain assumptions, which often leads to speed improvements, although a type system can also add runtime overhead if things need to be type-checked.
Dart, for example, uses the types at compile time to check that things are sound, then uses the type system to generate better JavaScript, after which it no longer cares about the types. With server-side Dart you potentially get the benefit of types for making certain assumptions, but the type information is generally thrown away after compile time to avoid the runtime type-checking overhead that other languages have to deal with.
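Python's type hints behave a bit like the Dart story described above, and make a convenient sketch of "check statically, erase at runtime": a static checker such as mypy uses the annotations, but the interpreter never enforces them, so there is no per-call checking cost.

```python
# Sketch: type hints are checked statically (if at all) and ignored at runtime.

def double(x: int) -> int:
    return x * 2

# A static checker would flag this call, but the interpreter never checks:
assert double("ab") == "abab"
assert double(21) == 42

# The hints survive only as inert metadata, not as runtime guards:
assert double.__annotations__["x"] is int
```

All the soundness work happens before the program runs; at runtime the types cost nothing because nothing looks at them.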
TL;DR: all languages eventually run machine code one way or another. The speed of a language depends on how well it can generate optimal machine code, how quickly it can generate that machine code if it's doing JIT, and, in the case of interpreted languages, how well the pre-built machine code your code executes against performs.