Hi guys,
I'm trying to isolate what is responsible for most of the data usage on Phantom. My data/db/base folder is huge and keeps growing, even though the logging level is set low and I rarely use the vault.
/opt/phantom/data]$ du -hsx * | sort -rh | head -10 | grep db
2.6T db
Is there any way for me to query and see what is consuming so much space, and maybe delete some old data?
I know that Phantom has scripts to remove containers and so on, but I don't think containers are the culprit in this case, and as things stand I don't have roughly double the space free to run a VACUUM if I delete them all.
Thanks!
@victor_menezes there are lots of things that can make the DB large; usually the action_run, playbook_run, artifact, and audit tables are the biggest.
First, you should set up data retention policies if you haven't already (docs link), as these keep the database trimmed.
I have seen DBs of similar and larger sizes on heavily used installations. What's your setup, and how many events do you currently have on the platform?
The only other option is to interact with the DB directly to trim tables, but of course I would 100% recommend engaging support before deviating from the established ways of managing DB capacity.
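If it helps to narrow things down before involving support, one read-only way to see how many rows the tables mentioned above hold is Postgres's own statistics views. This is just a sketch: the DB name `phantom`, the exact table names, and running `psql` as the `postgres` user are assumptions about a default install, so verify against your environment first.

```shell
#!/bin/sh
# Read-only sketch: estimated row counts for the tables that are often
# largest on a Phantom install. Table names, the "phantom" DB name, and
# local postgres auth are assumptions; adjust for your setup.
QUERY="
SELECT relname, n_live_tup AS approx_rows
FROM pg_stat_user_tables
WHERE relname IN ('action_run', 'playbook_run', 'artifact', 'audit')
ORDER BY n_live_tup DESC;"

# Print the query so it can be reviewed first:
echo "$QUERY"

# Then run it against the live DB, e.g. (uncomment):
# sudo -u postgres psql phantom -c "$QUERY"
```

Note that `n_live_tup` is an estimate maintained by the statistics collector, not an exact count, but it is cheap to read and good enough to spot a runaway table.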
-- Hope this helps! If so please mark as a solution! Happy SOARing! --
Would ingestion summary help? I haven't used Phantom in a long time so I may be totally off base.
System Health > Ingestion Summary.
Not really. That's just an overall count of ingested events.
I'm looking more at the DB level, because I suspect something is "stuck" on the DB side, so I'd like to see things like table sizes, row counts, etc.
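For table sizes specifically, Postgres exposes size functions you can query directly. A hedged sketch, again assuming a default install where the DB is named `phantom` and `psql` runs locally as the `postgres` user; `pg_total_relation_size` includes indexes and TOAST data, which is usually what matters for disk usage:

```shell
#!/bin/sh
# Read-only sketch: the 15 largest tables by total on-disk size
# (table + indexes + TOAST). DB name and auth are assumptions.
QUERY="
SELECT relname,
       pg_size_pretty(pg_total_relation_size(relid)) AS total_size
FROM pg_catalog.pg_statio_user_tables
ORDER BY pg_total_relation_size(relid) DESC
LIMIT 15;"

# Print the query for review first:
echo "$QUERY"

# Then run it against the live DB, e.g. (uncomment):
# sudo -u postgres psql phantom -c "$QUERY"
```

If one table dominates the 2.6T, that should show up immediately at the top of this list, which would give support something concrete to work from.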