Hello,
Having thought about it, there are a few ways to go, but my concern is that using an entire database as a sort of intermediary marshaller is a bit wasteful.
Regardless, the options:
1) As you suggest, use the upstream API to fetch the records, sort and filter them, and write them to individual tables in the database. That way each server is locked to, and identified by, a corresponding table name.
2) Do some form of ETL on the fly. For example, if we know to expect a mix of server names in the payload, we can run one single query to fetch the payload and put it in a cache (memory, Redis, disk, somewhere), then have a corresponding API that, when queried by your program, exposes the parameters and abilities you need to filter that data: server name, date range, and all the other usual things. This removes the need to manage a database, and as a non-persistent source the cache simply gets overwritten, or expires via TTL in the case of Redis.
3) Possibly fix the upstream API. It is obviously getting the data from somewhere (a central log server, maybe?), so it should be possible to inject the required intelligence at that point and enhance its functionality.
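To make option 2 a bit more concrete, here is a rough sketch in plain Python. I've stood in for Redis with an in-memory dict and TTL check, and the function names, field names, and sample records are all illustrative, not from any real system; with Redis you'd get the same effect with SETEX and a JSON-encoded payload.

```python
import time

CACHE_TTL = 300  # seconds before the cached payload is considered stale

# Stand-in for Redis: one cached payload plus its expiry time.
_cache = {"data": None, "expires": 0.0}

def fetch_payload():
    # Placeholder for the single upstream query; real code would call the API.
    return [
        {"server": "web01", "date": "2024-05-01", "msg": "ok"},
        {"server": "db01",  "date": "2024-05-02", "msg": "slow query"},
        {"server": "web01", "date": "2024-05-03", "msg": "restart"},
    ]

def get_payload():
    """Return the cached payload, refreshing it once the TTL lapses."""
    now = time.time()
    if _cache["data"] is None or now >= _cache["expires"]:
        _cache["data"] = fetch_payload()
        _cache["expires"] = now + CACHE_TTL
    return _cache["data"]

def query(server=None, date_from=None, date_to=None):
    """The filtering API your program would call: server name, date range, etc."""
    rows = get_payload()
    if server is not None:
        rows = [r for r in rows if r["server"] == server]
    if date_from is not None:
        rows = [r for r in rows if r["date"] >= date_from]
    if date_to is not None:
        rows = [r for r in rows if r["date"] <= date_to]
    return rows
```

The point is just that one upstream fetch serves many filtered queries, and expiry handles freshness without any schema or table management.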
Well, those are my top three, but I would love to explore the options with you in greater detail.
Warmest Regards,
Stephen Lombard