I am building an IoT application which sends me data packets every 15 seconds. I am using a data structure to store the relevant info from the packets in my database (MongoDB) and use it subsequently in functions. The problem I am facing is that when the packets arrive very fast (every few milliseconds, or even every 1-2 seconds), my program crashes, because the value required for processing the next packet depends on the result of the previous packet. This is aggravated further when I use more cores, as the latest packets run on different cores with different cycles. Has anybody faced a similar kind of problem? I am ready for a Hangouts session if somebody is interested in looking at the code and deciding what the real issue is.
Siddarthan Sarumathi Pandian
Full Stack Dev at Agentdesks | Ex Hashnode | Ex Shippable | Ex Altair Engineering
There are a couple of ways I know to do this; there are probably more :) But these are my two cents.
The first way is to use a document-level lock in MongoDB. But locking is extremely detrimental to performance and should never be used in a production environment.
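To make the idea concrete, here is a minimal sketch of the read-compute-write pattern this implies, using an in-memory dict in place of a MongoDB document (all names here are hypothetical). With MongoDB itself you would typically lean on the fact that single-document updates are atomic, e.g. by filtering on a version field inside `find_one_and_update` and retrying on a miss, rather than holding an explicit lock:

```python
import threading

# In-memory stand-in for a MongoDB document (hypothetical fields).
# The "version" field is what lets us detect a conflicting update.
doc = {"_id": "sensor-1", "value": 0, "version": 0}
lock = threading.Lock()  # simulates the atomicity MongoDB gives a single-document update

def read_snapshot():
    """Read a consistent (version, value) pair."""
    with lock:
        return doc["version"], doc["value"]

def update_if_unchanged(expected_version, new_value):
    """Apply the update only if no other packet got there first."""
    with lock:
        if doc["version"] != expected_version:
            return False  # another packet won the race; caller must retry
        doc["value"] = new_value
        doc["version"] += 1
        return True

def process_packet(delta):
    """Retry loop: each packet's result depends on the previous one."""
    while True:
        version, value = read_snapshot()
        new_value = value + delta  # computation that needs the previous result
        if update_if_unchanged(version, new_value):
            return

# Fire 50 "packets" concurrently; none of the updates is lost.
threads = [threading.Thread(target=process_packet, args=(1,)) for _ in range(50)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(doc["value"])  # 50: every packet's update was applied exactly once
```

This avoids a long-held lock (each critical section is tiny), but under a fast packet stream the retry loop can still spin, which is part of why a queue-based approach is usually the better fit here.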
The second way is to use a messaging system like RabbitMQ (https://www.rabbitmq.com/). Each packet can be treated as a single message, and with a single consumer on the queue they will be delivered and processed sequentially. So you can have a microservice that consumes the messages from your IoT application and posts them sequentially to your API endpoint.
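The key property is that one consumer reading from one queue turns a burst of packets into a strictly ordered stream. Here is a sketch of that pattern using Python's standard-library `queue.Queue` in place of a real broker (RabbitMQ with a single consumer, e.g. via pika's `basic_consume`, gives the same FIFO guarantee; the packet values below are made up):

```python
import queue
import threading

# Stand-in for the broker queue; producers and the consumer share it.
packet_queue = queue.Queue()
results = []  # each result feeds the computation for the next packet

def consumer():
    """One consumer = one ordered stream, no matter how fast packets arrive."""
    previous = 0
    while True:
        packet = packet_queue.get()
        if packet is None:  # sentinel meaning "no more packets"
            break
        # Processing that depends on the previous packet's result is safe here,
        # because packets are handled strictly one at a time, in arrival order.
        previous = previous + packet
        results.append(previous)
        packet_queue.task_done()

worker = threading.Thread(target=consumer)
worker.start()

# The producer can fire packets as fast as it likes; the queue absorbs the burst.
for value in range(1, 6):
    packet_queue.put(value)
packet_queue.put(None)  # signal shutdown
worker.join()

print(results)  # [1, 3, 6, 10, 15]: running totals, always in packet order
```

The design choice that matters is the single consumer: adding more consumers on the same queue would reintroduce exactly the out-of-order processing you are seeing across cores.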
I recently worked on a sample application where, given a textbox on the UI, I had to sync the text entered in the textbox to the database every time the user pauses typing. So, every time the user pauses typing, a message is fired to the microservice (the pause is roughly analogous to receiving a packet from your IoT source). This is an example implementation - https://github.com/spsiddarthan/comment-syncer-api
Please let me know if you have any questions!