Is there any limit on how many files can be monitored by one indexer at the same time, provided it doesn't hit the OS file descriptor limits or hardware capacity?
There is no limit on how many files Splunk can monitor at the same time.
On local disk, you may see delays once the tailing processor is exposed to several hundred thousand or millions of files, but eventually they will all be indexed.
This issue occurs on the input side, so you won't see any blocked queues.
At the end of the day, this really depends on the hardware resources available. A 16-CPU machine with 24 GB of memory will be able to process far more files in a minute than a single-core 386 with 512 MB.
To put it another way, you'll hit the limits of your hardware before you hit the limits of the software. I would always recommend benchmarking any application (Splunk or otherwise) on as close to the hardware you are planning to use as you can get.
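Since the question mentions OS file descriptor limits, a quick way to see where those ceilings sit on a Linux host is sketched below. The commands are standard Linux/bash; nothing here is Splunk-specific, and the exact values will of course vary per system.

```shell
# Per-process open-file limits for the current shell:
# soft limit (can be raised by the user up to the hard limit)
ulimit -Sn
# hard limit (ceiling set by the administrator)
ulimit -Hn

# System-wide maximum number of open file handles (Linux)
cat /proc/sys/fs/file-max
```

If the soft limit is low (e.g. 1024), raising it for the Splunk user is usually one of the first tuning steps before monitoring very large numbers of files.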
Note that if the monitored files are hosted on a distributed filesystem such as NFS, which incurs high latency for file access, you will start to see delays in data acquisition well before 100,000 files are exposed to the tailing processor.
Using inputs.conf parameters such as ignoreOlderThan can help reduce the tailing processor's scope and keep it up to date with the files that matter.
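As a sketch of the ignoreOlderThan approach, an inputs.conf monitor stanza might look like the following. The path and the 14-day cutoff are hypothetical examples, not values from the original answer; ignoreOlderThan itself is a real inputs.conf setting that excludes files whose modification time is older than the given span.

```ini
# inputs.conf (example path and value are illustrative)
[monitor:///var/log/app]
index = main
sourcetype = app_logs
# Skip files not modified in the last 14 days, shrinking
# the set of files the tailing processor must track
ignoreOlderThan = 14d
```

Note that once a file is excluded this way it is no longer monitored even if it is later updated, so choose the span with care.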