I wrote a large system on Google's NoSQL database to see if it could be done. It turns out things aren't transactional and writes are delayed: sometimes you'd insert two related items, one would be written while the other wasn't yet, so an immediate read of that second item would often find nothing.
Sometimes the requirements change and you've designed your NoSQL implementation around a certain schema; now, all of a sudden, you need to add a new index and write code that walks every entity to build a searchable index for it. Tomorrow the client wants the results sorted, so once again you go create an index for the same results and wait, sometimes up to 24 hours, for indices to be created on a large dataset. The day after, the client wants descending order, so you go create yet another index and write and run more code to update those indices.
Stuff got out of sync too: I'd insert a new object based on the new requirements, but the old data still used the old object schema, and reading it back with the new model would simply crash the app, unless you went through each of the old objects and migrated them using code written specially for that purpose.
One of the requirements, which I only got later, was that entities needed incrementing numbers. Initially that was fine with Google's NoSQL implementation, but then Google decided to drop incremental IDs and hand back random ones instead. The client wasn't very happy that I was all of a sudden returning these enormous, out-of-order IDs (there are 3 test entities in the system, and this is the ID I got back for entity #3: 368796015, which at the time I converted to a Base36 number just to make it readable: 63KKTR).
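That Base36 trick is just repeated division by 36; a minimal sketch (function name is my own):

```python
def to_base36(n: int) -> str:
    """Render a non-negative integer in base 36 (0-9, A-Z) for readability."""
    digits = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"
    out = ""
    while n:
        n, r = divmod(n, 36)  # peel off the least significant base-36 digit
        out = digits[r] + out
    return out or "0"

print(to_base36(368796015))  # 63KKTR
```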
Seeing as Google's NoSQL was not fully transactional, I couldn't reliably generate sequential numbers by doing a count to find the next one, so I eventually had to write an external service running on MySQL. All this service did was take a table name and return an ID, purely to get around the problem above. So now I had to maintain two systems, and two databases, where only one was needed.
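The core of such an ID service is tiny: one transactional read-modify-write per table name. A minimal sketch, using SQLite in place of MySQL (the table and function names here are my own invention, not from the original service):

```python
import sqlite3

# In-memory SQLite stands in for the MySQL instance; the real service
# exposed this over the network, but the heart of it is one transaction.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE sequences (name TEXT PRIMARY KEY, next_id INTEGER NOT NULL)"
)

def next_id(table_name: str) -> int:
    """Atomically reserve and return the next sequential ID for a table."""
    with conn:  # one transaction: the read-modify-write cannot interleave
        conn.execute(
            "INSERT OR IGNORE INTO sequences (name, next_id) VALUES (?, 0)",
            (table_name,),
        )
        conn.execute(
            "UPDATE sequences SET next_id = next_id + 1 WHERE name = ?",
            (table_name,),
        )
        row = conn.execute(
            "SELECT next_id FROM sequences WHERE name = ?", (table_name,)
        ).fetchone()
        return row[0]

print(next_id("orders"), next_id("orders"), next_id("invoices"))  # 1 2 1
```

Because the counter lives in a transactional store, two concurrent callers can never be handed the same number, which is exactly what the count-based approach on the NoSQL side couldn't guarantee.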
Concurrency was another issue: one person would book out stock, then another person searching for available stock would still find the just-booked stock and book it out again. About a minute later, long after the user got a success message, I would get a concurrent-write exception in the background, and two people had managed to book out the same stock. Using the version+1 trick to prevent two people from overwriting the same entity didn't work either ... because there's no decent transaction support.
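For contrast, here is roughly what the version+1 trick (optimistic locking) looks like on a store that does honour transactions; SQLite stands in for a relational database, and the schema and function names are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE stock (id INTEGER PRIMARY KEY, booked_by TEXT, "
    "version INTEGER NOT NULL)"
)
conn.execute("INSERT INTO stock (id, booked_by, version) VALUES (1, NULL, 0)")

def book_stock(stock_id: int, user: str, expected_version: int) -> bool:
    """Book an item only if nobody changed it since we read it."""
    with conn:
        cur = conn.execute(
            "UPDATE stock SET booked_by = ?, version = version + 1 "
            "WHERE id = ? AND version = ? AND booked_by IS NULL",
            (user, stock_id, expected_version),
        )
        # 0 rows touched means someone else bumped the version first.
        return cur.rowcount == 1

# Both users read the row at version 0; only the first booking succeeds.
print(book_stock(1, "alice", 0))  # True
print(book_stock(1, "bob", 0))    # False: version is now 1
```

The trick only works because the UPDATE's version check and increment happen atomically; with delayed, non-transactional writes, both bookings can pass the check before either write lands.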
Running queries was a nightmare. Initially the client only wanted website visits and email opens stored somewhere, so obviously I did an insert per event and things worked really well; the client could browse through the data as they pleased. The trouble came when they wanted super complicated queries run against the data ... there's a limit on how many parameters you can add to a query, and stuff like SUM is not supported. So I had to write a script that pages through all the data, loads each individual record, pulls the data out of it, adds it to the relevant variable, and moves on to the next record; once I got close to the 10-minute mark, I had to snapshot the cursor, abandon the process, and continue from that cursor later.
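The shape of that paging loop can be sketched as follows; a plain list and an integer offset stand in for the datastore and its opaque query cursor, and the field name and deadline margin are my own assumptions:

```python
import time

# Fake dataset standing in for the entity table being aggregated.
records = [{"opens": i % 3} for i in range(10_000)]

def aggregate_from(cursor: int, deadline_s: float = 600.0):
    """Sum one field, paging from `cursor` in batches; bail out before the
    request deadline and hand back the cursor so a follow-up run can resume."""
    start = time.monotonic()
    total = 0
    page_size = 1000
    while cursor < len(records):
        page = records[cursor:cursor + page_size]
        total += sum(r["opens"] for r in page)  # the "SUM" the store won't do
        cursor += len(page)
        # Snapshot and abandon before the 10-minute limit kills the process;
        # the margin (30s here) is a guess at a safe buffer.
        if time.monotonic() - start >= deadline_s - 30:
            return total, cursor  # partial total + resume point
    return total, None  # done: no cursor to resume from

print(aggregate_from(0))  # (9999, None)
```

A real version would persist the partial total alongside the cursor, since the next run starts in a fresh process.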
I couldn't just go and update a counter record, because the update was not transactional and you would lose data. Somebody on StackOverflow suggested creating about 25 different counter entities, picking a random number from 1-25, and writing count = count + 1 to that random record. This mostly worked at low traffic; when traffic went up, I had to raise that 25 to several hundred, and on some occasions three people would still hit the same random number, causing three concurrent writes of which only one increment survived. Not a great solution.
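That suggestion is the classic sharded-counter pattern. A minimal sketch in plain Python, where a dict stands in for the datastore and each (counter, shard) key would really be its own entity:

```python
import random
from collections import defaultdict

NUM_SHARDS = 25  # had to grow to several hundred under real traffic

# Each shard would be a separate entity; a dict stands in here.
shards = defaultdict(int)

def increment(counter_name: str) -> None:
    """Spread writes over shards so concurrent increments rarely collide."""
    shard = random.randrange(NUM_SHARDS)
    # In the datastore this is a read-modify-write on one small entity.
    shards[(counter_name, shard)] += 1

def read(counter_name: str) -> int:
    """Reading the counter means summing every shard."""
    return sum(v for (name, _), v in shards.items() if name == counter_name)

for _ in range(1000):
    increment("page_views")
print(read("page_views"))  # 1000
```

More shards means fewer collisions per shard, at the cost of a more expensive read; and as described above, it only narrows the collision window rather than closing it.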
Google eventually added some sort of concurrent-write support, but with a limit on how many such writes per second you could do, so the 25-to-1000-entity workaround above was still needed. Even then I would occasionally hit my write limit and the data would get lost unless I queued it and retried later, and that solution had problems of its own.
Then the client wanted near-realtime numbers, so I had to start writing parallel scheduled code that loops through each company's data, adds it all up in a variable, and produces the figures every 5 minutes, so that a few hours later, when the previous process finally finished, I at least had the "realtime results" of the previous query.
So now I was seeing spikes in my system load simply because these queries ran every 5 minutes over huge sets of data.
I eventually rewrote the whole system with RabbitMQ, MySQL and Memcached, serving the same clients under the same load. Only now I have transactional support and can run any query even if I don't have indices for it. Using RabbitMQ, I queue all the email opens and website reads, then every minute consume the whole queue and do one insert: instead of an insert for every read/open, I do a single insert saying 110 people viewed this page in the last minute. Every hour I run a query summing the previous 60 one-minute inserts, and at the end of the day the last 24 one-hour inserts. Realtime feedback is then just a query over the previous hours plus the latest minutes, which gives me realtime figures with about a minute's delay.
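The drain-and-aggregate step at the heart of that design can be sketched like this; a plain Python list stands in for the RabbitMQ queue, and the table mentioned in the comment is hypothetical:

```python
from collections import Counter

# Stand-in for the RabbitMQ queue: each message is a page that was viewed.
queue = ["home", "pricing", "home", "home", "pricing", "about"]

def consume_and_aggregate(messages: list) -> dict:
    """Drain the queue and collapse it to one row per page: instead of one
    INSERT per view, one INSERT saying 'N views of this page this minute'."""
    counts = Counter(messages)
    messages.clear()  # the real consumer acks the messages here
    # The real code would now do one INSERT per page, e.g.:
    #   INSERT INTO views_per_minute (page, views, minute) VALUES (...)
    return dict(counts)

print(consume_and_aggregate(queue))  # {'home': 3, 'pricing': 2, 'about': 1}
```

The hourly and daily rollups then work the same way one level up: sum the 60 minute-rows into an hour-row, and the 24 hour-rows into a day-row.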
I can imagine that with document-based NoSQL implementations, changing requirements can bite even harder; in my case I could at least somehow add more indices to make things work.
All in all, my experience with NoSQL was not so great. I can see where it can be useful, but only after you've written the system in something more flexible first: where you want to store log data, every email/SMS sent, or every API call made along with its SOAP/JSON message, you can dump that kind of data in a NoSQL database. For anything else, I will be sticking to SQL plus Memcache/Redis for caching for now.
On a side note, the project could have been done in 3 months had I used MySQL; it ended up taking more than double that time due to having to build workarounds all the while.