Tips for tracking limits while processing queued jobs

This is more of a generic development question, but I am curious how others are tracking their endpoint usage while processing background jobs. In my setup, I have a Redis instance that I hit synchronously, updating the count every time I make a successful request. I run all of this in a background job manager that can process up to 100 jobs at a time, which naturally introduces issues: I could have 100 jobs writing to Redis simultaneously and overwriting each other's counts.

I found a Ruby wrapper that gave me some inspiration for caching the results in Redis, but it's kind of tricky to get a list count in Redis (at least with what I've found in .NET Core). Looking for inspiration on how I can improve in this area. :grinning:
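To make the concurrency problem above concrete, here is a minimal sketch in Python that simulates it with an in-memory dict standing in for Redis (all names are illustrative, not from the post). It contrasts the non-atomic read-modify-write update described above with an atomic increment, which is what a server-side Redis `INCR` gives you (modeled here with a lock):

```python
import threading

# In-memory stand-in for a Redis key; the point is the access pattern,
# not the storage.
store = {"count": 0}
lock = threading.Lock()

def racy_increment(times):
    """GET the value, add 1 in app code, SET it back -- not atomic.
    A concurrent job can write between the read and the write, and
    that job's update is then silently overwritten."""
    for _ in range(times):
        current = store["count"]        # read
        store["count"] = current + 1    # write-back; may clobber another job

def atomic_increment(times):
    """One indivisible operation, like Redis INCR does server-side."""
    for _ in range(times):
        with lock:
            store["count"] += 1

def run(worker, jobs=50, times=1000):
    """Run `jobs` concurrent workers and return the final count."""
    store["count"] = 0
    threads = [threading.Thread(target=worker, args=(times,)) for _ in range(jobs)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return store["count"]

print("atomic:", run(atomic_increment))  # always jobs * times
print("racy:  ", run(racy_increment))    # may be less under contention
```

The racy version can lose updates whenever two workers interleave between the read and the write; the atomic version never can, regardless of how many jobs run at once.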

If you use the right Redis features, you don't have to worry about how many concurrent jobs you have. You should NEVER increment a key in your own code; always use the https://redis.io/commands/INCR command, which increments the counter atomically on the server.
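For the original use case (tracking usage against an endpoint limit), the usual pattern built on top of `INCR` is one counter key per time window, expired when the window rolls over; with real Redis that is `INCR` on a per-window key plus `EXPIRE` when the key is first created. Below is a hedged, in-memory Python model of that pattern, just to show the shape of it (the class and names are illustrative assumptions, not a real client API):

```python
import threading
import time

class WindowedCounter:
    """In-memory model of the Redis INCR + EXPIRE rate-limit pattern:
    one counter per time window, with old windows discarded."""

    def __init__(self, window_seconds):
        self.window = window_seconds
        self.lock = threading.Lock()
        self.counts = {}  # window bucket -> count

    def incr(self, now=None):
        now = time.time() if now is None else now
        bucket = int(now // self.window)  # which window this request falls in
        with self.lock:  # Redis performs INCR atomically server-side
            # Keep only the current window (EXPIRE handles this in Redis).
            self.counts = {bucket: self.counts.get(bucket, 0)}
            self.counts[bucket] += 1
            return self.counts[bucket]

limiter = WindowedCounter(window_seconds=60)

def allowed(limit=100):
    """True while this window's request count is within the limit."""
    return limiter.incr() <= limit
```

Because the increment returns the new value in the same atomic step, each job can check its own request against the limit without any coordination with the other 99 jobs.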


Woah! I didn’t know this was a thing. Well - that basically takes care of nearly all of my issues then. It’s always the simple things :sweat_smile:

There are lots of features like lists, hashes, etc. Really worth taking a moment to explore the options.
