I know many answers will start with "it depends (on the type of application, etc.)"; that's why I mentioned the dating/social-networking part. I think requests in a dating/social-networking app will come more frequently than on some blog or news site, but how many requests per second?
I'd love to get answers from someone who has already run this kind of app, or who knows the real statistics of some app of that kind. I'd also like to get answers from everyone else who has a rough idea, or even wants to start a discussion about this topic.
Depends on even more factors than that. The quality and the methodologies by which the site is built can have an impact on this. Still, at the end of this I'll give you a rough way of eyeballing it that has worked for me.
Remember, requests are more than just the HTML served; it's every single file. How often are you sending them different files? How many separate files are being sent on first load? How likely are they to visit more than one page on the site? What percentage of the traffic is bounce? Is the site so scripting- and image-heavy that the browser lacks the storage needed and/or flat-out refuses to cache anything, no matter how much you play with the silly cache-control headers?
There's a reason when I see pages built from a dozen separate CSS files and several dozen separate scripts, with dozens of separate presentational images adding nothing of value to the page, I assume developer -- here come those words I'm always overusing again -- ignorance and/or incompetence.
It's just part of why recombining your scripts is important, why using a monolithic stylesheet with full separation of presentation from your code is important, and why techniques like webfonts as vector image containers and CSS sprites came into being.
Likewise, the length of each request is also important. The raw number of connections/requests is one thing; you also shouldn't waste any more time on them than you have to. A codebase that outputs 8k of semantic markup, or answers a user request with 5k of REST data where the lion's share of it is actual data, is going to create a lot less load than one that vomits up 60-100k of markup and/or an equally bloated REST response to convey the exact same content, simply because the connection sits there open longer, reducing the number of requests per second the server can even handle.
Hence why when people scoff at my saying that they're wasting 60k+ of HTML on doing 12k or less' job, using "but everyone has broadband now" as an excuse, I generally assume they have no idea what they're talking about!
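To put rough numbers on the payload point, here's a quick sketch (the payload sizes and the 5 Mbps figure are purely illustrative assumptions, and it ignores latency, headers, and handshakes):

```python
def transfer_seconds(payload_kb: float, bandwidth_mbps: float) -> float:
    """Seconds a connection is held open just transferring payload_kb
    kilobytes over a bandwidth_mbps megabits-per-second link."""
    bits = payload_kb * 1024 * 8
    return bits / (bandwidth_mbps * 1_000_000)

# 8k of lean markup vs. 80k of bloat, over an assumed 5 Mbps connection:
lean = transfer_seconds(8, 5)
bloat = transfer_seconds(80, 5)
print(f"lean: {lean:.3f}s, bloated: {bloat:.3f}s, {bloat / lean:.0f}x longer")
```

Ten times the payload means roughly ten times as long holding the connection open, which directly eats into the requests per second the server can sustain.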
The scale of the codebase can be the difference between 200 page-views a second being handled flawlessly by a $10/mo VPS, and the same number of requests bringing a $500/mo dedicated server to its knees.
There is no simple answer to this until you have a site built or a site-building methodology and general idea of the data selected.
Though a good rule of thumb? Assume 30 seconds per page view, and five pages per user per visit. That means you take those 6000 users and divide by 30 to get page views per second, then take the number of separate files they'd load cache-empty, divide it by five to account for the repetitive views, and multiply by that result.
So if cache-empty they'd be loading 12 files, that's:
6000 / 30 * (12 / 5) == 480 requests a second.
Whilst if the page is slopped together any old way with frameworks, using 100 separate files:
6000 / 30 * (100 / 5) == 4000 requests a second
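The rule of thumb above is easy to put into a small Python helper (the function name and parameter defaults are my own, not any standard formula):

```python
def estimated_requests_per_second(concurrent_users: int,
                                  cache_empty_files: int,
                                  seconds_per_view: float = 30,
                                  views_per_visit: float = 5) -> float:
    """Users / seconds-per-view gives page views per second; scale that by
    the average files fetched per view (cache-empty files / views per visit)."""
    page_views_per_second = concurrent_users / seconds_per_view
    files_per_view = cache_empty_files / views_per_visit
    return page_views_per_second * files_per_view

print(estimated_requests_per_second(6000, 12))   # lean build: 480.0
print(estimated_requests_per_second(6000, 100))  # bloated build: 4000.0
```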
Naturally if you had a real live site you would/should be able to glean from your server logs just how well that's working out for you and what the real numbers are, but this is a decent starting place. Knowing how long the user will actually spend on each page on average is the hardest part. In my experience 30 seconds is a good conservative estimate as most people have a nasty case of the "twitter generation TLDR" illiteracy, but again it really depends on the content.
Likewise the separate files number is still way too high and ridiculously oversimplified; real traffic would/should be way underneath those numbers... but, well...
You want numbers higher than 'real world' because you should be planning for the worst-case scenario, not dancing on the razor's edge. The old engineer's adage: figure out how much you really need, then double it as a safety margin. It's called "overprovisioning".
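In code form the adage is a one-liner (the name is mine; the 480 figure is the lean estimate from the worked example above):

```python
def provisioned_capacity(estimated_rps: float, safety_factor: float = 2.0) -> float:
    """Capacity to actually plan for: the estimate times a safety margin."""
    return estimated_rps * safety_factor

print(provisioned_capacity(480))  # plan for 960 requests/second, not 480
```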
Of course, that assumes you don't have any pull/push going on, which can add up really quick.
It also doesn't take into account leveraging things like a secondary server/domain or CDN for static files, which can greatly reduce the requests that need to be served by the server doing the actual construction of the HTML and/or REST data.
So yeah, this is a hard thing to ballpark and the results can be all over the place depending on code quality, methodology, and the supporting infrastructure.