As requested, here is the comment as an answer, a short summary of the idea:
There is one "account database" containing the system information (for example, that Account A lives on Machines A-N) and other non-user-specific settings; you could call it the infrastructure coordination layer.
These are basically your ops/key-account concerns. You can replicate them down if you want, or create a read-only view for the customer, but this is nothing he should be able to access with anything more than read permissions.
-> If someone hacks this database, he only learns which database to use, not the passwords, the users, or their data.
Now we have one database holding only system-relevant information.
-> A login request comes in: we switch to that customer's database, and he can access the data of his account.
This step is mainly for security purposes and can of course be simplified.
This simple mechanism separates the user database from the infrastructure database, which adds an extra layer of security and lets you migrate the databases between machines.
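The lookup-then-route step can be sketched roughly like this (all names such as `INFRA_DB` and `route_login` are made up for illustration, and a plain dict stands in for the infrastructure database):

```python
# Stand-in for the read-only infrastructure database: account -> location.
# In a real setup this mapping would live in its own database on its own machine.
INFRA_DB = {
    "account_a": {"host": "machine-3.internal", "db": "customer_a"},
    "account_b": {"host": "machine-7.internal", "db": "customer_b"},
}

def route_login(account_id: str) -> dict:
    """Return the connection info for the customer's own database.

    Note: no passwords or user data live here. An attacker who reads this
    mapping only learns *where* the data is, not the data itself.
    """
    location = INFRA_DB.get(account_id)
    if location is None:
        raise KeyError(f"unknown account: {account_id}")
    return location

# The application would then open a connection to location["host"] /
# location["db"] and serve all further requests from the customer's database.
location = route_login("account_a")
```

Because the application only ever holds the mapping, moving a customer to another machine is just an update of one row in the infrastructure database.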
This infrastructure also allows you to stay normalized inside each customer database. It pushes some initial complexity onto the system, but it increases security and lets you scale orthogonally.
If you use PostgreSQL you could go even further with roles and permissions; I would consult an expert on this.
Since only one customer writes to each database, you have fewer open connections per database, which reduces the amount of table locking.
Edit: please be aware that every open connection costs a certain amount of memory; in MySQL, for example, each connection gets its own thread and per-connection buffers.
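A toy way to see the isolation, using SQLite in-memory databases as stand-ins for the per-customer databases (the `items` schema and the tenant names are invented for the example):

```python
import sqlite3

def tenant_connection(tenant_id: str) -> sqlite3.Connection:
    # In production this would be a pooled connection to that tenant's own
    # MySQL/PostgreSQL database; here each ":memory:" DB stands in for one
    # tenant's private database with its own independent locks.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, body TEXT)")
    return conn

a = tenant_connection("tenant_a")
b = tenant_connection("tenant_b")

# A write (and any lock it takes) for tenant A never touches tenant B.
a.execute("INSERT INTO items (body) VALUES ('only visible to tenant A')")
a.commit()
```

Tenant B's `items` table stays empty: separate database, separate locks, no cross-tenant contention.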
And if you take the time to create a decent build setup using containers and Puppet / Fabric, you can scale up fast and easily, since you don't have to scale for all customers at once.
Edit: you can even think about offering premium servers for certain customers, because you're not bound to a single machine. Or you can look into serverless architecture if you use Amazon, for mass image cropping and other tasks that are mainly CPU-bound and can be streamed to your API endpoints. (That leads too far for now; I'm just pointing out the benefits of such an architecture.)
About the choice...
MongoDB should beat MySQL on writes, but I would build the infrastructure database (and maybe the local-system API) with MySQL, for example. You could also use the TokuDB engine, which uses fractal-tree buffers to speed up writes on indexes.
If you shard and scale a lot, maybe Mongo will be better. But note that replication is asynchronous: a write is acknowledged before every replica has applied it, so a query against a replica can return the previous result, which can lead to state confusion because the system is eventually consistent.
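The read-after-write hazard can be shown with a tiny simulation (no real MongoDB involved; the `Replicated` class is purely illustrative):

```python
class Replicated:
    """Toy model of a primary that ACKs writes before replication finishes."""

    def __init__(self):
        self.primary = {}
        self.replica = {}
        self._pending = []

    def write(self, key, value):
        self.primary[key] = value           # acknowledged immediately
        self._pending.append((key, value))  # replication happens later

    def read_from_replica(self, key):
        return self.replica.get(key)

    def replicate(self):
        # The replica eventually catches up with the primary.
        for key, value in self._pending:
            self.replica[key] = value
        self._pending.clear()

db = Replicated()
db.write("profile", "new")
stale = db.read_from_replica("profile")  # replica hasn't caught up yet
db.replicate()
fresh = db.read_from_replica("profile")  # now the write is visible
```

Between the write and the replication step, `stale` is still the old value (here `None`); only after `replicate()` does the replica return `"new"`. That gap is exactly the "previous result" problem described above.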
MySQL, on the other hand, uses MVCC and can lock tables in atomic transactions, which increases consistency. Uber, for example, just switched back from PostgreSQL to MySQL because of certain behaviours; Instagram, on the other hand, runs on PostgreSQL and is quite happy with it.
Craigslist uses Mongo to handle its massive writes at scale. It's hard for me to say what you need; maybe you could use Cassandra, or Neo4j? I've read a lot about databases over the last few years, even things like big data with Treasure and other services, and how they work.
It's too complex for me to tell you exactly what you should do; take what you know and treat the software as a sacrificial design, built to be replaced as you learn.
I hope that helps.