Has anyone solved this issue yet? I don't think so.
We use more or less the same deploy scripts to push new versions to different environments.
We haven't found a good way to strip development assets out of test environments and test assets out of production environments.
Our Java staff ships packages that contain all the test artifacts: test classes and testing libraries. They say it's too difficult to maintain Maven POM files for different environments; a 1 MB XML file is hard to maintain. They also ship the code almost untouched, exposing private attributes only to make them accessible to unit tests. I smell security issues, but Java developers tend to ignore ex-Java and non-Java developers.
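For what it's worth, Maven can keep testing libraries out of the shipped artifact without per-environment POMs: dependencies declared with `test` scope are available on the test classpath only and are excluded from the packaged JAR/WAR, and sources under `src/test/java` are never packaged by default. A minimal sketch (the JUnit coordinates are just an example):

```xml
<dependencies>
  <!-- test scope: compiled and run with the tests, but left out of the
       packaged artifact, so it never reaches production -->
  <dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.13.2</version>
    <scope>test</scope>
  </dependency>
</dependencies>
```

If test libraries are ending up in production packages, it's usually because they were declared with the default `compile` scope.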
Our Python team is almost good, but that's more or less cheating, because they run integration tests only. That's why their production deployments are clean.
In our Node.js team, we have a few minor issues. CSS files aren't minified in the development environment. DB migrations are done differently depending on the NODE_ENV variable; in production, we never want to drop any data. In production, our GitLab CI runs npm install --production, while in all other environments we run plain npm install.
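The "never drop data in production" rule is easier to enforce if the guard lives in one place instead of being scattered across migration scripts. A minimal sketch of that pattern; the environment names and the dropTable step are invented for illustration:

```javascript
// migration-guard.js: refuse destructive migration steps outside of
// environments where data loss is acceptable (names are assumptions;
// adjust to your own NODE_ENV values).
const DESTRUCTIVE_ALLOWED = new Set(['development', 'test']);

function assertDestructiveAllowed(step) {
  const env = process.env.NODE_ENV || 'development';
  if (!DESTRUCTIVE_ALLOWED.has(env)) {
    throw new Error(`Refusing destructive migration step "${step}" in ${env}`);
  }
}

// Example: a hypothetical destructive step wrapped by the guard.
function dropTable(name) {
  assertDestructiveAllowed(`dropTable(${name})`);
  console.log(`dropping table ${name}`);
}

module.exports = { assertDestructiveAllowed, dropTable };
```

With this, a migration that drops a table fails loudly in production instead of silently depending on each script checking NODE_ENV correctly.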
In some projects, we can't establish a test system either: mocking an entire B2B platform is impossible, and we've signed a contract not to reverse engineer any of its functionality. In such situations, our scripts differ greatly.
I personally use Docker with CoreOS to solve the infrastructure problem (although I still develop against an embedded Jetty instance and then deploy to test and prod against a dockerized Jetty; I've never had issues doing it that way), and Liquibase to keep my dev and prod database structures in sync. For configs, I built my own solution to make sure they're sane: if a property exists in the dev config, it must at least be present in the prod config, and I get notified if a property isn't used anywhere, in case I made a typo.
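The core of that config-sanity check is small: parse two flat key=value config files and report keys present in dev but missing in prod. A toy sketch of the pattern, not the commenter's actual tool:

```javascript
// Parse a flat .properties-style file into an object, skipping
// blank lines and # comments.
function parseProperties(text) {
  const props = {};
  for (const line of text.split('\n')) {
    const trimmed = line.trim();
    if (!trimmed || trimmed.startsWith('#')) continue;
    const eq = trimmed.indexOf('=');
    if (eq === -1) continue;
    props[trimmed.slice(0, eq).trim()] = trimmed.slice(eq + 1).trim();
  }
  return props;
}

// Every key in the dev config should at least exist in the prod config.
function missingInProd(devText, prodText) {
  const dev = parseProperties(devText);
  const prod = parseProperties(prodText);
  return Object.keys(dev).filter((key) => !(key in prod));
}

const dev = 'db.url=localhost\ncache.size=100\n# comment\n';
const prod = 'db.url=db.internal\n';
console.log(missingInProd(dev, prod)); // → [ 'cache.size' ]
```

Wire a check like this into CI and a dev-only property can never silently reach a prod deploy unconfigured.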
Since I'm mostly building on the JVM, it's fairly easy to keep things consistent. I do remember the PHP, Perl, and Python days, when even a minor version difference would cause endless problems in prod that couldn't be reproduced in dev, but I reckon even those issues can be minimised these days by developing against Docker images and then deploying those same images.
Well, ain't this a hard one?
This depends a lot on your stack. If you have a huge ecosystem with lots of different nodes, you simply can't, and shouldn't. If you have memory- or CPU-intensive services, you also can't. Otherwise, just install all the software you use, optionally skipping the VM/container layer (i.e. install everything directly on your machine).
Last time, I had to test a piece of code that communicates with a Cassandra database. Cassandra, like much Java-based software, is a real memory hog, but that can be reduced by changing its config. Obviously this comes with a performance downside, but at least I could make it work.
Also, the library we use to communicate with Cassandra has a well-defined interface. For unit tests, I wrote another library with the same interface that stores everything in memory. For local testing, I wrote yet another one that talks to MongoDB instead, as it's a bit less memory-intensive.
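The pattern above is: program against one interface, swap implementations per environment. A minimal sketch, assuming an invented get/put/delete key-value interface (the real Cassandra client's API will differ):

```javascript
// In-memory implementation of a key-value store interface, for unit tests.
// A real CassandraStore exposing the same async methods would back prod.
class InMemoryStore {
  constructor() {
    this.data = new Map();
  }
  async put(key, value) {
    this.data.set(key, value);
  }
  async get(key) {
    return this.data.has(key) ? this.data.get(key) : null;
  }
  async delete(key) {
    this.data.delete(key);
  }
}

// Code under test depends only on the interface, not the backing store,
// so the same function runs against Cassandra, MongoDB, or memory.
async function cacheUser(store, id, user) {
  await store.put(`user:${id}`, user);
  return store.get(`user:${id}`);
}

module.exports = { InMemoryStore, cacheUser };
```

The MongoDB-backed variant for local testing would be a third class with the same three methods, chosen by the environment's wiring code.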
There are a lot of possibilities, but it all depends on your stack.
Dong Nguyen
Web Developer
I think Docker and Vagrant are good solutions for this situation. In addition, a system with automated deployment and CI, and developers who follow a standard Git process, would help you minimize the differences between environments.