ScaleArc's auto cache invalidation feature uses transparent NoSQL technology: it extracts metadata from each query and tags the cached objects, associating cache entries with the queries that invalidate them. With this invalidation method, ScaleArc can guarantee that its cache will not serve stale data.

Anyone who's worked with Magento knows that even relatively quiet sites can experience significant slowdowns that hurt sales figures and so on, especially once you have more than just a few thousand products. If you dig deep enough into Magento's core, you will eventually find the beating heart of the Zend Framework, on which Magento has been built.

ScaleArc supports string, boolean, long, double, short, byte, binary, decimal, byte array, date, time, timestamp, and character stream column types.
Inserting data into the cache is always fairly easy, but determining when and how to clear invalid caches requires some problem solving to make sure users always see up-to-date content as it becomes available.
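The tag-based invalidation described above can be sketched in a few lines. This is a minimal illustration, not ScaleArc's actual internals: the class and method names are assumptions, and it simply records which tables each cached query reads, so that a write to any of those tables drops every dependent entry instead of serving stale data.

```python
# Sketch of tag-based cache invalidation (assumed names, not ScaleArc's API):
# each cached result is tagged with the tables it reads, and a write to a
# table invalidates every entry tagged with it.

class TaggedQueryCache:
    def __init__(self):
        self._entries = {}   # query text -> cached result
        self._tags = {}      # table name -> set of queries that read it

    def put(self, query, tables, result):
        """Cache a result and record which tables it depends on."""
        self._entries[query] = result
        for table in tables:
            self._tags.setdefault(table, set()).add(query)

    def get(self, query):
        return self._entries.get(query)

    def invalidate(self, table):
        """A write hit `table`: drop every cached query that read from it."""
        for query in self._tags.pop(table, set()):
            self._entries.pop(query, None)

cache = TaggedQueryCache()
cache.put("SELECT * FROM products", ["products"], [{"sku": "A1"}])
cache.invalidate("products")   # an UPDATE on products arrives
assert cache.get("SELECT * FROM products") is None
```

The trade-off is the usual one: tagging by table is coarse (one write flushes every query touching that table) but is what makes the "never serve stale data" guarantee cheap to enforce.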
, mostly INSERT statements, at around 5-10 rows per second. In 5-10 seconds, when it is time to process new data again, the same lock-up will happen. Query #2 is the update of the post-processed data.
A PHP-based application running on the server reads the freshly replicated data every 5-10 seconds, processes it, and stores the results (via INSERT ... ON DUPLICATE KEY UPDATE) in a separate database. A web application displays the post-processed data to the user. Query #3 streams (unbuffered) the newly replicated data for processing.
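The store step above is an upsert: insert a result row, or update it in place if the key already exists. A minimal sketch of that pattern follows; the `results` table and its columns are assumptions for illustration. SQLite's `ON CONFLICT ... DO UPDATE` is used so the example runs anywhere; on MySQL the equivalent statement is `INSERT ... ON DUPLICATE KEY UPDATE total = VALUES(total)`.

```python
# Upsert sketch: each processing pass writes its results with an
# insert-or-update, so re-processing the same key overwrites the old value.
# Schema and column names are illustrative, not from the original question.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE results (id INTEGER PRIMARY KEY, total INTEGER)")

def upsert(conn, row_id, total):
    conn.execute(
        "INSERT INTO results (id, total) VALUES (?, ?) "
        "ON CONFLICT(id) DO UPDATE SET total = excluded.total",
        (row_id, total),
    )

upsert(conn, 1, 10)
upsert(conn, 1, 25)   # the next 5-10 second pass re-processes the same key
row = conn.execute("SELECT total FROM results WHERE id = 1").fetchone()
assert row == (25,)
```

Because the upsert touches each key at most once per pass, the write load stays bounded by the number of distinct keys rather than the number of replicated rows.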
I committed a branch that removes keys while expiring them to ensure they don't get orphaned.
It will probably be considerably slower, though, so please test it and let me know whether it fixes the issue before I merge it into master. Also let me know if you run into excessive memory usage (building the strings for the Redis commands can consume a lot of memory), locking, or other performance issues.
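The cleanup idea in that branch can be sketched as follows. This is a simplified model, not the backend's actual code: plain dicts and sets stand in for Redis keys and tag sets, and the function names are assumptions. The point is that expiring a cache entry must also prune its id from every tag set that references it, otherwise the tag sets accumulate orphaned ids.

```python
# Sketch of orphan cleanup on expiry: deleting a cache key alone leaves its
# id behind in the tag sets (the "orphaned keys" problem); the extra pass
# below prunes those references too. Dicts/sets model Redis here so the
# example is self-contained; names are illustrative.

def expire_keys(store, tag_index, expired_ids):
    """Delete expired cache entries and prune them from every tag set."""
    for key in expired_ids:
        store.pop(key, None)
    for members in tag_index.values():
        # In real Redis this would be SREM calls; building those command
        # strings for many keys is where the memory cost mentioned above
        # comes from.
        members.difference_update(expired_ids)

store = {"k1": "a", "k2": "b"}
tag_index = {"products": {"k1", "k2"}, "cms": {"k2"}}
expire_keys(store, tag_index, {"k2"})
assert store == {"k1": "a"}
assert tag_index == {"products": {"k1"}, "cms": set()}
```

This also shows why the patched version is slower: every expiry now costs one pass over the tag index instead of a single delete.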