I have multiple microservices in a single GitHub project. I want to deploy some services to the same server and some to a different server. Has anyone done this kind of deployment with capistrano?
If you are using Docker, then the answer by @spsiddarthan is ideal. If not, I would recommend trying it out before reading the rest of this post.
But if, for whatever reason, Docker is not an option for you, you can use Capistrano to accomplish what you want, though it will require some effort on your part:
Essentially you will have to implement a custom SCM adapter. The SCM adapter is the only part of Capistrano that makes a strong assumption about a single repository. Assuming you are using Git, you can use a relatively new Git feature, sparse checkout, to check out a single directory of a repo.
You can extend the Git SCM adapter to set up a sparsely populated Git repo for each service in the clone_repo method.
With this setup, you will have a sparsely populated repository on your server for each of your services, containing only the code for that service. The rest of the deployment process relies only on this local repository, so you will be able to deploy and restart individual services independently.
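A rough sketch of what that could look like, assuming Capistrano 3.7+ (where Git support is the Capistrano::SCM::Git plugin) and git >= 2.25. The task wiring and the `:service_dir` setting are my own placeholders, not something you can copy verbatim:

```ruby
# Sketch of a clone step that produces a sparse checkout containing
# only one service's subdirectory. :service_dir is a hypothetical
# setting naming that subdirectory for the current service.
namespace :git do
  desc "Clone only this service's subdirectory via git sparse-checkout"
  task :sparse_clone do
    on release_roles :all do
      # Clone without populating the work tree.
      execute :git, :clone, "--no-checkout", repo_url, repo_path
      within repo_path do
        # Restrict the work tree to the service's directory...
        execute :git, "sparse-checkout", "set", fetch(:service_dir)
        # ...then materialize it at the deployed branch.
        execute :git, :checkout, fetch(:branch)
      end
    end
  end
end
```

You would then hook this task in place of the plugin's normal clone step for each service's configuration.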
I've done this with Fabric and rsync. The setup:
- a YAML file defining all machines and their roles
- a deploy user on all machines
- packages built on the deployment server, with a new timestamp version written to the "version_file"
- rsync runs and copies all files; the symlink on the host systems is then switched to the new version, and if something fails you can always go back one version based on the file name
It took me two weeks to write it that way, but it was still Fabric (Python) and not Capistrano (Ruby). And I deliberately stuck to standard Linux tooling for the whole syncing and so on, so the ops guys could debug and easily trace those things in their own domain.
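The timestamped-release-plus-symlink pattern described above can be sketched in Ruby (the Fabric original isn't shown, so the paths, the `current` symlink name, and the local copy standing in for rsync are all illustrative):

```ruby
require "fileutils"

# Copy a built package into a new timestamped release directory and
# point the "current" symlink at it. In the real setup the copy would
# be an rsync to the remote host.
def deploy(build_dir, deploy_root)
  version = Time.now.strftime("%Y%m%d%H%M%S%L")
  release = File.join(deploy_root, "releases", version)
  FileUtils.mkdir_p(release)
  FileUtils.cp_r(Dir.glob(File.join(build_dir, "*")), release)
  switch_current(deploy_root, release)
  release
end

# Go back one version based on the (timestamp) directory name.
def rollback(deploy_root)
  releases = Dir.children(File.join(deploy_root, "releases")).sort
  raise "nothing to roll back to" if releases.size < 2
  previous = File.join(deploy_root, "releases", releases[-2])
  switch_current(deploy_root, previous)
  previous
end

# Repoint the symlink atomically: create alongside, then rename over.
def switch_current(deploy_root, release)
  current = File.join(deploy_root, "current")
  tmp = "#{current}.tmp"
  FileUtils.rm_f(tmp)
  File.symlink(release, tmp)
  File.rename(tmp, current)
end
```

The rename-over-symlink trick is what makes the cutover atomic: readers see either the old release or the new one, never a half-copied tree.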
An update: I did not have to write the custom Git SCM plugin. There was already a config param available, repo_path, that suited my purposes.
To summarize: use CapHub to generate the skeleton configuration files. It internally uses capistrano-multiconfig. I just set repo_path to the subdirectory name in Git, so that only that directory gets deployed.
In my case the microservice name and the Git subdirectory name matched, so setting repo_path to the microservice name worked.
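For illustration, a per-service config along those lines might look like this (the file path, service name, and URLs are placeholders, and capistrano-multiconfig nests one such config per service):

```ruby
# config/deploy/my_service/production.rb — a sketch only.
set :application, "my_service"
set :repo_url, "git@github.com:example/monorepo.git"
# Point repo_path at this service's subdirectory of the monorepo,
# so only that directory gets deployed.
set :repo_path, "my_service"
set :deploy_to, "/var/www/my_service"
```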
Here is what I'm looking to do: CapHub
This is an extension to Capistrano to support multiple sub-projects.
CapHub lets you create a nested set of directories and configurations for the sub-projects, or microservices in my case. With that, I'm thinking of writing a custom SCM adapter with git sparse checkout, as suggested by @lorefnon.
Looks like that will address my requirement as of now. Hope I get that to work!
Siddarthan Sarumathi Pandian
I think Docker images would be one way to do this; we had the exact same requirement at my previous job.
Say you have one repository and each directory (let's say the directories are A, B, C and D) represents a microservice of its own. Each of them has its own Docker image. When you check in code and make changes only in microservice B's directory, only the Docker image for B gets built. A webhook then fires, indicating that the Docker image for microservice B has changed, and only that microservice gets deployed.
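The "build only what changed" decision can be sketched in a few lines of Ruby. Assume the CI step is given the paths changed in a push (e.g. from `git diff --name-only BASE HEAD`); the service names are the directory names from the example above:

```ruby
# Directories in the monorepo, one per microservice (from the
# example above; adapt to your repo layout).
SERVICES = %w[A B C D]

# Return the services whose Docker images need rebuilding, i.e.
# those whose directory contains at least one changed file.
def services_to_build(changed_paths)
  SERVICES.select do |svc|
    changed_paths.any? { |path| path.start_with?("#{svc}/") }
  end
end
```

The CI pipeline would then build and push an image only for each returned service, and the webhook deploys just those.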