Thanks for the hints.

In terms of data retention, all the sections will have a similar policy. Access grants, however, can be an issue. In my use case the dashboards will be monitored both by section personnel and by the SOC. So in terms of access, the SOC should be able to see DMZ, ZoneA and ZoneB, while the members of each section should only be able to see their own zone (need-to-know policy).

At the moment I am using a different index per zone so I can apply zone-specific transforms, since the syslog sending formats differ because each zone uses a different log aggregator. With the separate indexes on the heavy forwarder, I can apply SEDCMD rewrites for particular log sources, plus host and source overrides, on the HF. I remember that access can be limited based on indexes, but I guess that is not possible with data models, so will this be a concern? If I put them all into one data model, is it still possible to restrict access? For example, it would still be OK if users could only interact with views on the dashboards and could not run searches themselves.

Pros and cons in my mind:

Separate data models:
- Pros: I can easily segregate the tstats queries per zone.
- Cons: It might be difficult to get overview stats; I would need to use appends and maintain each additional new zone. Each new data model would also need to run its acceleration periodically, increasing the number of scheduled accelerations.

Integrated data model:
- Pros: Easier to maintain, as I just need to add new indexes to the data model's index whitelist. It also limits the number of scheduled runs.
- Cons: It might be harder to filter, e.g. between ZoneA, ZoneB and DMZ. It seems I can only filter on the few fields in the model, e.g. source, host.
- And, as mentioned, the point on data access: will it still be possible to restrict?

I am still quite new to Splunk, so some of my thoughts might be wrong. Open to any advice; still in a conundrum.
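To make the filtering concern concrete, this is roughly the kind of query I have in mind for the integrated option. Just a sketch: the data model name Zone_Logs and the index name netops_zonea are placeholders I made up for illustration, not my real names.

```
| tstats summariesonly=true count
    from datamodel=Zone_Logs
    where index=netops_zonea
    by _time span=1h, Zone_Logs.host
```

My understanding is that index remains available as a where-clause filter in tstats even against an accelerated data model, and that a role's allowed-indexes setting would still constrain what each zone's users can see, but I would appreciate confirmation that this holds for accelerated summaries too.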