Redshift Bails Out On A Large Sim

Been experimenting with RS and have hit innumerable issues. The first, and most worrying: RS fails to render a fairly simple TFD image, claiming it does not have enough VRAM. The .bcf file is about 1 GB and I am using a 12 GB Titan and an 8 GB GTX 1070. Load times for that frame are around 6 minutes, then it fails. In fact, load times are generally very long.
The other issue is noise. I can crank my settings way up (4096+ on all settings) but there is still noise, and I just cannot get the sharpness I am used to in Advanced Render. I enclose a scene file for testing. You may have to, or want to, lower the resolution; it just fits in the Titan's 12 GB. I got AR to render the last frame in 9 minutes on an i7 3930K at 3.2 GHz with 64 GB RAM. hohoo.zip (77.3 KB)

I suggest posting such issues in the Redshift forums. TFD is involved in the rendering process only as a data source.


OK, will do. I just hope I can learn to love RS!

Read this on a forum: "Yep; for some operations it will push to system memory if it runs out of VRAM, but it gets hell slow, so you want to avoid it where possible. Things like volumes can't render out-of-core either.
If the scene itself is too heavy to even load onto one card, it will crash the render completely. If you work smart you shouldn't run into too many issues; make sure you're using instances/proxies rather than copies, etc.

Here's a doc the devs made that covers the hardware side of things:

https://docs.google.com/document/d/1rP5nKyPQbPm-5tLvdeLgCt93rJGh4VeXyhzsqHR1xrI/edit?usp=sharing "

RS is only using one GPU to render a TFD sim. I think because of the slot they are in, RS picks my 1070 with 8 GB RAM, so to use the Titan's 12 GB I have had to manually uncheck the 1070 in Preferences. When I have done this test render, I will reactivate it and see if the 1070 can be used to render the scene while the Titan renders the TFD.

I think what happens is that RS loads the .bcf file onto a single GPU and cannot share that data across cards, which seems fair enough. So multiple GPUs will not help the TFD renders. I may be wrong, but that seems logical. I would have to sell a kidney and get a 24 GB card to go with a Threadripper... no, for a kidney I would want a 4-CPU Xeon beast with 2 x 24 GB cards.

And sorry, though I think RS is a wonderful render engine and its material system is superb, as far as TFD goes I am not convinced. I think it better to invest in a good graphics card for sims and a beast of a machine for CPU renders. Render farms may be an answer, but I am not sure how they would cope with hundreds of GB of .bcf files!
I simmed out a scene to max out the Titan and then did some animations. Here is an example frame (the last frame). The lack of detail is because I went right up close; the actual sim is high resolution at 220 MV for this frame (yes, I could go way higher, but not on the GPU).
AR renders in 1 min 56 secs; Redshift renders (on low settings) in 3 min 27 secs.
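As a rough sanity check on why a sim this size squeezes VRAM, here is a back-of-the-envelope sketch. It assumes "220 MV" means 220 million voxels and that each voxel stores four uncompressed float32 channels; TFD's actual on-disk/in-memory format is not documented here, so treat the numbers as illustrative only:

```python
# Naive VRAM estimate for a dense voxel grid. Assumptions (NOT TFD's
# confirmed storage format): 4 channels per voxel, float32, no
# compression or sparse tiles.
def volume_vram_gib(voxels, channels=4, bytes_per_channel=4):
    """Return the uncompressed in-memory footprint of the grid in GiB."""
    return voxels * channels * bytes_per_channel / 2**30

# A 220-million-voxel frame under these assumptions:
print(round(volume_vram_gib(220e6), 2))  # -> 3.28 (GiB), before geometry, lights, textures
```

Even this simplified figure leaves only a few GB of headroom on a 12 GB card once the rest of the scene is loaded, which is consistent with the failures described above.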

Here is the scene file stresstest.zip (88.3 KB)

RS won’t split volume data across GPUs. It will still reduce render times significantly if you use more GPUs to render, because the image is split across the GPUs.
However, when using multiple GPUs with different amounts of memory, it will have to abort as soon as the smallest GPU’s memory is exhausted. You can select which GPUs it uses to render in the main Preferences. If you render only on the largest GPU, you might be able to render larger volumes.
This is true for all GPU renderers today, and it would be interesting to see how the maximum volume sizes that Arnold GPU, Octane, and Redshift can handle compare on the same hardware.
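The constraint described above can be sketched in a few lines: since the volume data is duplicated on every enabled GPU rather than split across them, each enabled card must hold the whole volume, and the smallest enabled card sets the ceiling. The 10 GiB scene footprint below is a made-up example, not a measured value:

```python
# Sketch of the multi-GPU memory constraint: every enabled GPU gets a
# full copy of the volume, so only cards with enough VRAM for the whole
# scene can participate without aborting the render.
def cards_that_can_render(gpu_vram_gib, scene_gib):
    """Return the subset of GPUs whose memory can hold the entire scene."""
    return [vram for vram in gpu_vram_gib if vram >= scene_gib]

my_cards = [12, 8]  # the poster's Titan (12 GB) and GTX 1070 (8 GB)
print(cards_that_can_render(my_cards, 10))  # -> [12]: only the Titan qualifies
```

This is why unchecking the 8 GB card in Preferences lets a larger volume render: the ceiling rises from 8 GB to 12 GB, at the cost of losing the second card's contribution to the image split.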
