I have a basic Docker image containing a Python script that comes in under 100 MB. I'm not sure which distro I'm going to use, but preferably one that results in the smallest possible image size.
The goal is to deploy the Docker image on a t2.nano EC2 instance, but it must meet the following conditions:
From the time a customer requests access via a URL, it should respond as quickly as possible, preferably within a few seconds. The latency between the customer and the newly deployed EC2 instance should be as small as possible, meaning the EC2 instance should run in the closest region. Is this possible?
Any EC2 instance, whether EC2 Classic or running under ECS, lives in an Availability Zone, meaning a given data center in a given region. If your real goal is to run the code in the nearest region, then you'll need to deploy in multiple regions. If your goal is to terminate the connection closer to the client, CloudFront can work for APIs too: GET requests will be cached depending on the cache headers, whereas PUT/POST/DELETE/HEAD/OPTIONS will not. The access point will therefore be close to your user, but the request will still (except for cached GET requests) be routed to your deployment region.
If it's a simple single-container app, I would say ECS is the way to go, because creating a task definition is a very simple piece of JSON and will do the job for you.
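To give a feel for how simple that JSON is, here is a rough sketch of a task definition for a single small Python container, expressed as the Python dict you would hand to boto3. The family name, image URI, ports, and resource figures are all placeholders, not anything from your setup:

```python
# Sketch of an ECS task definition for one small Python container.
# Every name, URI, and number below is a placeholder -- substitute your own.
task_definition = {
    "family": "my-python-app",
    "containerDefinitions": [
        {
            "name": "app",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-python-app:latest",
            "memory": 128,      # hard memory limit in MiB -- modest, t2.nano territory
            "cpu": 128,         # CPU units (1024 = one vCPU)
            "essential": True,  # if this container stops, the task stops
            "portMappings": [{"containerPort": 8080, "hostPort": 80}],
            "environment": [{"name": "APP_ENV", "value": "production"}],
        }
    ],
}

# With AWS credentials configured, registering it would look like:
#   import boto3
#   boto3.client("ecs").register_task_definition(**task_definition)
```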
That said, since the workload sounds low (you mention a nano instance) and infrequent (you mentioned 'from time to time'), you might also consider Lambda: Python is supported, you're charged only for execution time (i.e. no charge when no requests come in), and there's even a generous free tier that might end up making it free if it's really "from time to time". (Keep in mind that even though you don't manage any server, you still choose the region in which the Lambda is managed and run.)
And finally (though it's recent and I haven't used it yet) there's Lambda@Edge, which lets you run Lambda code at CloudFront edge locations. That could be exactly what you're looking for: running code close to the user.
First things first: there's no such thing as a "docker instance". You have a Docker container, and that can be deployed on EC2 (and other) instances.
There are two things you can do if you have your mind set on using AWS.
Use AWS's EC2 Container Service (ECS). It has a straightforward wizard to create a cluster, and you pretty much just fill in the container configuration (image name, ports to expose, environment variables, etc.) to have your container deployed. The wizard also ensures that the machines that get booted come with the necessary packages installed (Docker, for instance). Security groups, load balancing, etc. can be managed from the EC2 console. ECS is available in a decent number of regions across the world; use the one that's closest to your customers. With ECS, you can have your container live in a few seconds.
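What the wizard sets up can also be scripted. As a rough sketch, the service half of it boils down to parameters like these (cluster, service, and task-definition names are placeholders; only the commented-out call would actually hit AWS):

```python
# Placeholder parameters for an ECS service that keeps one copy of the
# container running on the cluster's EC2 instances.
service_params = {
    "cluster": "my-cluster",
    "serviceName": "my-python-app-service",
    "taskDefinition": "my-python-app",  # the task-definition family registered earlier
    "desiredCount": 1,                  # one container is plenty for a nano-sized workload
    "launchType": "EC2",
}

# With credentials configured, the actual call would be:
#   import boto3
#   boto3.client("ecs").create_service(**service_params)
```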
The second way is to boot your own instance, install Docker (and other dependencies), manually set inbound and outbound traffic rules, pull your image, and start your container. If I were you, I'd definitely go the ECS route from the first option.
Also, have you checked out Google Container Engine?