Virtual memory and GPU simulation?


Post by WYoder » 28 Jun 2017, 02:59

I am new to TurbulenceFD, and I'm running into a couple of issues that I have been unable to fix via the forums or Google in general.

First, my workstation is a Mac with an NVIDIA GeForce GTX 980 Ti, yet when I select the card for GPU simulation, it almost immediately fails with the error "The simulation had to abort because of an unexpected error." The scene simulates on the CPU with no real issue, but since I have this hardware available to potentially speed up the simulation, I'd like to use it if possible. As it stands, the scenes take 16+ hours to finish simulating. Not a major issue if it can't be fixed, but like I said, if I can make it faster, that would be awesome.

Second, I've now hit a wall with simulating my scene. The first half of the simulation, where my emitters were barely moving, ran with no problems, but as soon as the real motion starts, my RAM usage jumps from ~8 GB to 35-40 GB. I only have 32 GB in the computer, so as soon as usage climbs past that, the simulation crashes. Is there any way to force LightWave or TFD to use virtual memory? I have many terabytes of disk it can gladly use if I can just figure out how to get at it. I only recently came back to using a Mac for work after ten years as a primarily Windows user, so I'm pretty rusty, but I thought macOS had virtual memory enabled by default, since there's no way to toggle it in the system settings, so I had hoped LW would automatically use it if necessary. I toggled "Use less memory but more time" on to see if that would help, but it still crashed, so I'm assuming any difference it made was negligible.
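An aside for anyone comparing setups: macOS keeps swap enabled at all times with no user-facing toggle, so the question is really how much physical RAM the simulation can claim before paging (or an allocation failure inside the plugin) kills it. A minimal, portable sketch for logging installed RAM before kicking off a long simulation — the helper name is mine, not part of TFD or LightWave:

```python
import os

def total_ram_bytes():
    # POSIX: physical memory = page size * number of physical pages.
    # Works on both macOS and Linux.
    return os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES")

if __name__ == "__main__":
    print(f"Installed RAM: {total_ram_bytes() / 2**30:.1f} GB")
```

If the printed figure is well below the cache's peak working set (35-40 GB here), the run will hit swap or die on allocation regardless of any "use virtual memory" option.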

Any assistance is greatly appreciated, and thanks in advance!

Return to “TurbulenceFD for LightWave 3D”
