Why does your OpenVDB conversion never finish?

If you have a very large scene with a heavy simulation, rendering it directly is a poor option, so converting to VDB is a much better solution. VDB is faster, more stable, and works in virtually any render engine, not just a narrow range like Arnold, Redshift, or Octane. However, if your simulation is large, for example 100 bcf frames where each frame file is 20 GB (4 GB for temperature, 4 GB for density, 4 x 3 = 12 GB for velocity), the conversion will never finish. Why? Because the bcf2vdb command does not convert frame by frame; it tries to convert nearly all the bcf files at THE SAME TIME. With files that large, it feels like it will take 100 years to finish.

My usual workaround is to copy one or two bcf files to a separate folder, convert them, delete those bcf files, then copy the next one or two, and so on until everything is done. It took me 4 days to convert 104 bcf files to VDB this way, but that is still better than waiting a whole week only to find the process still stuck.
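The copy-convert-delete loop above can be automated. This is only a sketch of my workaround, not anything official: it assumes the converter is invoked as `bcf2vdb <folder>` and that it processes every .bcf file it finds in that folder (check your actual command line before using it). The `CONVERTER` variable is a hypothetical override I added so the loop can be tested with a stand-in command.

```shell
# Stage one .bcf frame at a time into a scratch folder, convert it,
# collect the resulting .vdb, and clear the scratch folder for the next frame.
convert_one_by_one() {
    local src_dir=$1
    local converter=${CONVERTER:-bcf2vdb}   # assumed invocation: bcf2vdb <folder>
    local work_dir
    work_dir=$(mktemp -d)                   # scratch folder the converter sees

    for bcf in "$src_dir"/*.bcf; do
        cp "$bcf" "$work_dir"/              # stage exactly one frame
        "$converter" "$work_dir"            # converter only ever sees that frame
        mv "$work_dir"/*.vdb "$src_dir"/    # keep the result next to the source
        rm "$work_dir"/*.bcf                # empty the scratch folder again
    done
    rmdir "$work_dir"
}
```

Usage: `convert_one_by_one /path/to/bcf/caches`. The point is simply that the converter never sees more than one frame at once, so memory stays bounded no matter how many frames the sequence has.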

It would be even better if Jascha changed the way the conversion works: frame by frame instead of trying to juggle all the frames at the same time. Thanks :sparkling_heart: