Why does my website sometimes take 1-2 seconds to load and show data, and sometimes take a very long time, from 1 to 1.5 minutes?

Anonymous · Aug 6, 2019

Hi everyone, I am trying to figure out why my website sometimes loads quickly and sometimes takes a long time to fetch data, showing a loading screen in the meantime.

The tech stack is React for the front-end, Node for the backend, and Mongoose as the ODM. The front-end and backend are deployed on the same instance, and the database is on a different instance.

I have an Nginx setup that serves the React build on port 80/443. The Node backend runs on a different port, so a proxy is configured in the Nginx conf for the backend's base URL.

All requests first go through Nginx and are then proxied to the backend on port 8000.
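For reference, the relevant part of the Nginx conf looks roughly like this (the /api/ prefix, paths, and domain below are placeholders, not my exact values):

```nginx
server {
    listen 80;                      # 443/SSL config omitted for brevity
    server_name example.com;        # placeholder domain

    # Serve the static React build
    root /var/www/app/build;        # placeholder path
    index index.html;

    location / {
        try_files $uri /index.html; # client-side routing falls back to index.html
    }

    # Proxy API calls to the Node backend on port 8000
    location /api/ {                # placeholder base URL
        proxy_pass http://127.0.0.1:8000;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```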

I have used clustering in Node to utilize both cores.
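The clustering is the standard Node cluster-module pattern, roughly like this (simplified; the real app is larger, and the health route below is just for illustration):

```javascript
const cluster = require('cluster');
const os = require('os');

if (cluster.isMaster) {
  // Fork one worker per CPU core (2 on this instance)
  for (let i = 0; i < os.cpus().length; i++) {
    cluster.fork();
  }
  // Replace any worker that dies
  cluster.on('exit', (worker) => {
    console.log(`Worker ${worker.process.pid} exited, forking a new one`);
    cluster.fork();
  });
} else {
  const express = require('express');
  const app = express();

  app.get('/api/health', (req, res) => res.send('ok')); // illustrative route

  app.listen(8000, () => {
    console.log(`Worker ${process.pid} listening on port 8000`);
  });
}
```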

This is the architecture and tech stack that the project is using.

The issue is that sometimes the website loads data (API calls) in a split second, but sometimes it just keeps loading.

If, say, the API code were unoptimized and struggling to handle so many requests at peak hours, then the CPU should spike, which is not happening. We are not dealing with large data records here.

I have tried the following things:

1) If the front-end were rendering slowly, the UI would freeze and there would be some delay in animations on button clicks or other events, but the front-end is smooth, so it's definitely not a front-end issue.

2) In the Network tab of Chrome DevTools, I can see the API request stay pending and only get a response after 1-1.5 minutes, so I believe it could be a backend issue.

3) Some requests were getting timed out by Nginx, so I added proxy_read_timeout and proxy_send_timeout values to the Nginx conf (the directives I mean are sketched after this list). That solved the 504 timeouts, but the requests still take a very long time to get a response.

4) I do not have much experience with Nginx, but the Nginx error logs show messages like "upstream prematurely closed connection while reading response header from upstream" and "connect() failed (111: Connection refused) while connecting to upstream".

5) There is no spike in CPU utilization on the backend or database server at peak hours when lots of customers are using the website. I checked with the top command as well as CloudWatch monitoring, and it does not go above 35-40%.
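For context, the timeout directives mentioned in point 3 are of this form (the 120s value is illustrative, not necessarily what I'm running):

```nginx
location /api/ {
    proxy_pass http://127.0.0.1:8000;
    proxy_read_timeout 120s;   # wait longer for the backend to send a response
    proxy_send_timeout 120s;   # wait longer when sending the request to the backend
}
```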

I'm not sure about this, so correct me if I am wrong: could it be that after a certain number of connections/requests, something (Nginx?) is queueing or blocking the rest of the requests?

Let me know what you think the bottleneck could be here. Thank you for reading; I appreciate your help.