The debugging methods depend on the service. Each service should be developed independently, and therefore debugged separately. Here automated tests are your friend: if you have an automated test for each part of the system, you can then test the collaborative behaviour of the whole, which means system-wide testing. Since you have separate tests for each service, it should be fairly easy to create new tests that exercise the integration. Separate environments are nice (development, testing, upgrade assurance and production), but you can do without them if you have some special handling for tests. For example, send email to a dummy mail server that dumps the received messages instead of delivering them, or reserve a range of IDs that are never allowed in production use. This automatically means you write code you will not test because it's only used in production, so the aforementioned designated test 'arena' is highly recommended.
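The reserved-ID idea above can be sketched in a few lines. This is a minimal illustration, not the author's actual implementation; the range boundaries, the environment string, and the function names are all assumptions for the example.

```python
# Sketch of the "reserved test ID range" idea: IDs in a designated range
# are only valid for testing, so integration tests can run against
# production-like services without ever touching real records.
# The range and the helper names below are illustrative assumptions.

TEST_ID_MIN = 900_000_000  # hypothetical reserved range
TEST_ID_MAX = 999_999_999


def is_test_id(record_id: int) -> bool:
    """Return True if the ID falls in the reserved test range."""
    return TEST_ID_MIN <= record_id <= TEST_ID_MAX


def guard_production_write(record_id: int, environment: str) -> None:
    """Refuse to persist a reserved test ID in production."""
    if environment == "production" and is_test_id(record_id):
        raise ValueError(f"test ID {record_id} rejected in production")
```

A guard like this is cheap to call in every write path, which is exactly why the author's warning applies: the production-only branch is the one your tests never exercise.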
I made some tests for this purpose a while ago and use custom software for it (when I get around to writing the tests in the first place; mostly it's not the main focus because, hey, there are only 24 hours in a day and I am my whole dev team). Google "enterprise application integration testing" and you'll get some hints. This approach is great because it also works with services you haven't developed yourself.
Similar to the above idea is to make the tests available as a microservice themselves. I love continuous testing like this, combined with internal tests per microservice (test yourself: you know how your internals work and whether all the requirements are met). If you build it this way, you get your own integrated continuous testing, and it can double as a monitoring solution as a bonus.
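The "test yourself" part could look like a small registry of internal checks inside each service. This is a hedged sketch, assuming a decorator-based registry; the check names and the placeholder check body are made up for illustration.

```python
# Sketch of a per-service internal self-test: each service registers
# small checks of its own internals and can run them all on demand.
# The registry pattern and check names here are illustrative assumptions.

from typing import Callable

_checks: dict[str, Callable[[], bool]] = {}


def register_check(name: str):
    """Decorator that registers an internal health check under a name."""
    def wrap(fn: Callable[[], bool]):
        _checks[name] = fn
        return fn
    return wrap


@register_check("config_loaded")
def _config_loaded() -> bool:
    # Placeholder: a real service would verify its actual configuration.
    return True


def self_test() -> dict[str, bool]:
    """Run every registered check and report pass/fail per check."""
    return {name: fn() for name, fn in _checks.items()}
```

Exposing `self_test()` over the service's API is what turns it into the continuous-testing-plus-monitoring combination described above.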
I've noticed it greatly helps to use central logging. I wrote a microservice for that sole purpose: it collects log data from everywhere and warns me on Slack if things go bad. Works like a charm. Mind you, this will have some performance impact (because of the extra logging), but you would want central logging anyway. It also requires a connection most of the time; what if the connection to the logging or warning service drops?
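One answer to that dropped-connection question is to buffer locally and retry. Here is a minimal sketch using Python's standard `logging` module; the `send_to_central` callable is a stand-in for whatever real transport (HTTP, socket, message queue) the central logging service uses, and the buffer size is an arbitrary assumption.

```python
# Sketch of a log handler that forwards records to a central logging
# service but buffers them locally while the connection is down.

import logging
from collections import deque


class BufferedCentralHandler(logging.Handler):
    def __init__(self, send_to_central, max_buffer=1000):
        super().__init__()
        self.send = send_to_central             # callable; may raise on failure
        self.buffer = deque(maxlen=max_buffer)  # oldest entries drop when full

    def emit(self, record):
        # Queue the formatted record, then try to drain the queue.
        self.buffer.append(self.format(record))
        self.flush_buffer()

    def flush_buffer(self):
        while self.buffer:
            try:
                self.send(self.buffer[0])
            except ConnectionError:
                return  # central service unreachable; keep entries for later
            self.buffer.popleft()
```

A bounded buffer means a long outage costs you the oldest entries rather than your service's memory, which is the trade-off you usually want for logs.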
The most basic option I have found, and one that is very easily implemented, is a you_okay() call. It can do some testing and return a status along with extra data you might want to see (performance alerts and warnings are useful). Now have a process (your testing microservice or EAI testing suite) query each service out there once in a while with a you_okay() call and see if things are going well; you test connectivity while you're at it. You could include the functionality of a dependencies_okay() call, but that can be separate, depending on what you want. If you have microservices with lots of machines starting and stopping automatically, having these testing services and some registration of each service is very useful. Which is a topic of its own, as you probably know.
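The polling side of you_okay() can be sketched like this. The service names are invented, and each `you_okay` here is a plain callable; in a real system it would be an HTTP call to the service's health endpoint, possibly looked up via the service registry mentioned above.

```python
# Sketch of the you_okay() polling loop: query every registered service,
# collect its status and extra data, and treat an unreachable service as
# down -- which tests connectivity at the same time.


def poll_services(services):
    """services maps name -> you_okay callable; returns a status report."""
    report = {}
    for name, you_okay in services.items():
        try:
            report[name] = you_okay()
        except Exception as exc:
            # Failure to answer counts as not okay.
            report[name] = {"ok": False, "error": str(exc)}
    return report
```

Run this on a timer from the testing microservice and feed the report into the central logging/Slack warning pipeline, and the pieces described above compose into one monitoring loop.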
Mind you, this is a work in progress and I've automated parts of it. It's what I have in mind for the next big thing I'm working on. Looking forward to all the great contributions to your question that'll help me enhance my testing as well :)
Remco Boerma
CTO@NPO, python dev, dba
Hope this helps!