Right now we haven't settled on such a complete solution for our current use case: containers running in ECS. Since we rely extensively on AWS, our current plan is to wait for the updated ECS-optimized AMI with Docker 1.9, which features the awslogs driver for shipping `docker logs` output to CloudWatch Logs.
Since we don't really need it right now (we're soft-launching our product), we decided we'd better spend our energy on more meaningful topics than adding another layer of complexity to get the logs out of Docker and push them to CloudWatch... not that complex, but it means another agent or container, another thing to maintain. Another option would have been to build our own AMI, but so far we've been happy with the ECS-optimized AMIs provided by Amazon. Building our own would be easy (we tested the Docker 1.9 to CloudWatch Logs flow successfully), but it implies maintaining that new AMI. We could do it, but with the updated AMI coming in a few weeks (maybe sooner), a few weeks of extra freedom would have come with a maintenance burden that outlasts the gain.
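For reference, the Docker 1.9 to CloudWatch Logs flow we tested boils down to the awslogs log driver; a rough sketch (region, log group, and image name are placeholders, and the log group must already exist):

```shell
# Ship a container's stdout/stderr to CloudWatch Logs using the
# awslogs log driver introduced in Docker 1.9.
docker run --log-driver=awslogs \
  --log-opt awslogs-region=eu-west-1 \
  --log-opt awslogs-group=my-app-logs \
  my-app-image
```

On a custom AMI you could also set this driver daemon-wide instead of per container, which is part of what the updated ECS-optimized AMI is expected to handle for us.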
Once the logs are in CloudWatch Logs (which collects the many sources), we investigated a few options using Lambdas and/or Kinesis to then push the logs to Elasticsearch (ELK) and/or Redshift, depending on the source and use of the logs.
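For the Lambda option, CloudWatch Logs delivers log batches to a subscribed function as a base64-encoded, gzip-compressed JSON payload. A minimal sketch of the decoding step (the function name and the forwarding logic are ours; only the envelope format is CloudWatch's):

```python
import base64
import gzip
import json

def decode_cloudwatch_event(event):
    """Unpack the payload CloudWatch Logs sends to a subscribed Lambda.

    The event carries base64-encoded, gzipped JSON under
    event["awslogs"]["data"]; the decoded document contains the log
    group, stream, and a list of logEvents.
    """
    payload = base64.b64decode(event["awslogs"]["data"])
    data = json.loads(gzip.decompress(payload))
    # Each entry has an id, a timestamp, and the raw message; from here
    # one would index into Elasticsearch or stage for a Redshift COPY.
    return data["logEvents"]
```

From the returned list of log events, the handler would then bulk-index into Elasticsearch or write batches to S3 for loading into Redshift, depending on the log source.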
I would love to hear more from @JanVladimirMostert about the RabbitMQ (and consumers) part. In our case we would have both system logs (like nginx access and error logs) and custom logs on which we'd like to run complex queries (hence the Redshift option).