We constantly see benchmarks claiming that certain programming languages perform better than others. How is that possible?
Some languages are interpreted, which means your statements are parsed and evaluated at runtime. This makes your code run slower, but interpreted languages offer advantages when you don't need raw speed: portability, code clarity, easy debugging, and more.
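A minimal sketch of that runtime parsing, assuming CPython 3.x: source code arriving as a plain string is only parsed and compiled to bytecode when the interpreter is asked to run it, and the bytecode is then executed by the interpreter loop rather than directly as machine code.

```python
# Sketch (assumes CPython 3.x): source is parsed and compiled
# to bytecode at runtime, then executed by the interpreter.
import dis
import timeit

# Source that exists only as a string until runtime.
src = "sum(i * i for i in range(100))"

# Parsing and bytecode compilation happen here, at runtime.
code = compile(src, "<runtime>", "eval")
print(eval(code))  # 328350

# The interpreter runs bytecode, not native machine code:
dis.dis(code)

# That indirection is part of what a benchmark ends up measuring.
elapsed = timeit.timeit(lambda: eval(code), number=1000)
print(f"1000 evaluations: {elapsed:.4f}s")
```

The flip side of this overhead is flexibility: the ability to parse and run code on the fly is exactly what makes REPLs and rapid debugging cycles possible.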
Denny Trebbin
Lead Fullstack Developer. Experimenting with bleeding-edge tech. Occasional DJ. Hobby drone pilot. Amateur photographer.
The performance impact a programming language can have is rarely shown correctly by benchmarks. Benchmarks usually only demonstrate the performance of different execution contexts.
What if the compiler/interpreter/linker/transpiler (CILT) infrastructure fails to detect certain CPU features when executing code in language X, while the CILT infrastructure for language Y fully identifies the CPU's capabilities and therefore executes its code faster? What if this behavior doesn't occur on other machines? Can we trust any benchmark that tries to showcase code-execution performance across programming languages? I don't think we can.
To me, it is more important to have a good developer experience and a fast feedback loop. If my unoptimized code performs well, that's a bonus, but not a hard fact to base decisions on.