Looking for Guidance on TFD Workflow Optimisation for Complex Simulations

Hello Everyone :hugs:,

I’ve been using TurbulenceFD (TFD) for a while now and really like what it offers. But as my simulations get more complex, I’m finding it harder and harder to strike a balance between performance and high-quality results.

The project I’m working on right now involves large-scale fire and smoke simulations, and I’m realising my workflow needs some tuning to handle these bigger scenes efficiently.

Here are the specific areas where I could use some guidance:

Cache Management: Cache files grow very quickly with high-resolution simulations. What are some practical ways to keep cache size under control without sacrificing too much quality? :thinking: Are there particular TFD settings or compression approaches you’d recommend? :thinking:
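
For context, this is roughly what I do at the moment: once a shot is signed off, I gzip the older cache frames from outside the app with a small Python script. The folder path, the `.bcf` extension and the frame count to keep are just placeholders for my setup, not anything TFD-specific:

```python
import gzip
import shutil
from pathlib import Path

# Placeholder path and extension -- adjust to your own TFD cache folder
CACHE_DIR = Path("D:/sim_caches/shot_010")
FRAME_EXT = ".bcf"          # assumed cache-file extension; change if yours differs
KEEP_UNCOMPRESSED = 50      # keep the most recent N frames readily playable

frames = sorted(CACHE_DIR.glob(f"*{FRAME_EXT}"))
total_gb = sum(f.stat().st_size for f in frames) / 1024**3
print(f"{len(frames)} frames, {total_gb:.1f} GB on disk")

# Compress everything except the most recent frames
for frame in frames[:-KEEP_UNCOMPRESSED]:
    gz_path = frame.with_suffix(frame.suffix + ".gz")
    if gz_path.exists():
        continue
    with frame.open("rb") as src, gzip.open(gz_path, "wb") as dst:
        shutil.copyfileobj(src, dst)
    frame.unlink()  # remove the original once the compressed copy exists
    print(f"compressed {frame.name}")
```

Obviously the frames have to be decompressed again before TFD can read them, so I only do this for shots I’m not actively iterating on. If there’s a smarter way to handle this inside TFD itself, I’d love to hear it.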

Voxel Grid Optimisation: I’ve read that voxel grid settings have a big impact on performance. How do you find the right balance between simulation detail and grid resolution? :thinking: Any advice on setting up adaptive grids properly? :thinking:
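
What keeps surprising me is how fast memory blows up when I shrink the voxel size, so before committing to a resolution I run a quick back-of-the-envelope estimate like the one below. The channel list and byte counts are my own assumptions, not values taken from TFD:

```python
# Rough per-frame memory estimate for a given voxel size.
container_cm = (400.0, 300.0, 400.0)   # container dimensions (X, Y, Z), placeholder values
voxel_size_cm = 2.0
channels = {"density": 4, "temperature": 4, "velocity": 12}  # assumed bytes per voxel

nx, ny, nz = (int(d / voxel_size_cm) for d in container_cm)
voxels = nx * ny * nz
bytes_per_voxel = sum(channels.values())
print(f"grid: {nx} x {ny} x {nz} = {voxels:,} voxels")
print(f"~{voxels * bytes_per_voxel / 1024**2:.0f} MB per frame (uncompressed)")
```

Halving the voxel size multiplies the voxel count by roughly 8x, which is why I’m so keen to hear how you decide where extra resolution is actually worth it.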

Hardware Considerations: My system is fairly powerful, but I’m wondering whether there are particular hardware choices or upgrades that would push performance further. When choosing GPUs, CPUs, or storage for TFD, is there anything specific I should look for? :thinking:
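
On my machine the GPU simulation seems to be bounded mainly by available VRAM, so I keep an eye on it with a quick check like this (assumes an NVIDIA card with nvidia-smi on the PATH):

```python
import subprocess

# Print each GPU's name, total VRAM and current usage
out = subprocess.run(
    ["nvidia-smi", "--query-gpu=name,memory.total,memory.used",
     "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
)
for line in out.stdout.strip().splitlines():
    print(line)
```

If VRAM really is the main constraint, I’d love to know whether more video memory beats a faster GPU for large containers, and how much CPU and disk speed matter in practice.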

General Workflow Advice: Are there any other guidelines or best practices you’ve found helpful when working on complex TFD simulations? :thinking: I’m open to recommendations for settings, plugins, or external tools.

Thank you :pray: in advance.