j : Why do you need those objects in memory? Because the solution we need is time-critical.
And why can't some of the filtering be pushed to SQL? Filtering can be pushed to SQL, but loading the filtered data back into memory would still take time, since the number of objects is on the order of 10 million.
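To make the trade-off concrete, here is a minimal sketch of pushing a filter into SQL, using an in-memory SQLite database; the `items` table, the `score` column, and the threshold are all made up for illustration. The point is that only the matching rows are ever materialised in application memory:

```python
import sqlite3

# Hypothetical schema: an "items" table with a numeric "score" column.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, score REAL)")
conn.executemany(
    "INSERT INTO items (score) VALUES (?)",
    [(s,) for s in (0.1, 0.5, 0.9, 0.3, 0.7)],
)

# Filter in the database rather than in application code:
# only rows passing the predicate cross into Python.
rows = conn.execute(
    "SELECT id, score FROM items WHERE score > ?", (0.5,)
).fetchall()
print(len(rows))  # only the matching rows were loaded
```

Even with the filter pushed down, transferring millions of matching rows back per request is the cost the reply above is pointing at.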
Does it happen once or 200 times? It happens in bursts, i.e., in a day the requests come back to back, at least 10 times per user.
Couldn't you just serialize it on Apache Spark for a distributed map/reduce? We have not tried this; it could be a solution.
Problems we face now
- Loading of objects into memory on a restart of the software takes more time than is acceptable.
- Our current implementation was part of the prototype; now we are facing scaling issues.
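For the slow restart specifically, one common mitigation (not something the thread says has been tried; the file path and object shape below are made up) is to snapshot the in-memory objects to a single binary blob, so a restart deserialises one file instead of rebuilding every object from the database:

```python
import os
import pickle
import tempfile

# Hypothetical in-memory objects (in practice ~10 million of them).
objects = [{"id": i, "score": i * 0.1} for i in range(1000)]

# On shutdown (or periodically): snapshot the whole set to disk.
path = os.path.join(tempfile.gettempdir(), "objects.snapshot")
with open(path, "wb") as f:
    pickle.dump(objects, f, protocol=pickle.HIGHEST_PROTOCOL)

# On restart: load the blob instead of re-querying and re-building.
with open(path, "rb") as f:
    restored = pickle.load(f)
print(len(restored))
```

Whether this beats rebuilding from SQL depends on object complexity and disk speed, so it would need benchmarking at the real 10-million scale.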