I’ve kinda just had a thought. I don’t know if it’s horrible or not, so you tell me.

  • 3D terrain in video games commonly starts with a fractal Perlin noise function (rough sketch of what I mean just after this list).
  • That doesn’t look very good on its own, so additional passes are applied to improve it, such as changing the scale.
  • One such technique is hydraulic erosion, a heavy GPU simulation that carves riverbeds and folds into the terrain.
  • However, hydraulic erosion is VERY slow, so it’s not viable for a game like Minecraft that generates terrain in real time. It also doesn’t chunk well.
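
For reference, by “fractal Perlin noise” I just mean summing octaves of noise like this (a toy Python sketch; I’m assuming the third-party `noise` package for the base Perlin octave, and the parameter names and values are just my own picks):

```python
# Toy fractal-noise (fBm) heightmap: sum several octaves of 2D Perlin noise,
# each octave at a higher frequency and lower amplitude than the last.
import numpy as np
import noise  # pip install noise

def fbm_heightmap(size=128, octaves=5, lacunarity=2.0, gain=0.5, scale=0.01, seed=0):
    height = np.zeros((size, size), dtype=np.float32)
    freq, amp = 1.0, 1.0
    for _ in range(octaves):
        for y in range(size):
            for x in range(size):
                height[y, x] += amp * noise.pnoise2(x * scale * freq,
                                                    y * scale * freq,
                                                    base=seed)
        freq *= lacunarity  # next octave: finer detail...
        amp *= gain         # ...contributing less height
    return height

hm = fbm_heightmap()
print(hm.min(), hm.max())
```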

But what if it didn’t have to? Why not train something like a diffusion image model on thousands of pre-rendered, high-quality simulations, and then have it transform a function like fractal Perlin noise? Basically “baking” a terrain pass inside a neural network. This’d still be slow, but would it really be slower than simulating thousands of rain droplets? It could easily be made deterministic so it joins up across chunk borders too. You could even train it on real-world GIS data.
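
Something like this is roughly what I’m picturing (a toy PyTorch sketch, not an actual diffusion model, just a small conv net regressing raw noise heightmaps onto eroded ones; the network, shapes, and the placeholder random data are all made up):

```python
# Sketch of the "baked erosion pass" idea: train a small conv net to map raw
# fBm heightmaps to eroded heightmaps produced offline by a slow hydraulic
# erosion sim, then run one cheap forward pass per chunk at generation time.
import torch
import torch.nn as nn

class ErosionNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x):
        # Predict a residual so the net only has to learn the "erosion delta".
        return x + self.net(x)

model = ErosionNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Placeholder batch of (raw noise heightmap, eroded heightmap) pairs.
# In the real thing these would come from thousands of pre-rendered sims.
raw = torch.rand(8, 1, 128, 128)
eroded = torch.rand(8, 1, 128, 128)

for step in range(100):
    pred = model(raw)
    loss = loss_fn(pred, eroded)
    opt.zero_grad()
    loss.backward()
    opt.step()

# At generation time: one deterministic forward pass per chunk heightmap.
with torch.no_grad():
    chunk = torch.rand(1, 1, 128, 128)  # would be the chunk's fBm heightmap
    baked = model(chunk)
```

Predicting a residual instead of the full heightmap is just a guess at making the erosion delta easier to learn; a proper diffusion or U-Net setup would obviously be more involved.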

Has this been tried before?

  • Munkisquisher@lemmy.nz · 5 points · 6 hours ago

    The current leader in this space, which we use in the film industry (and also heavily in games), is Gaea (https://quadspinner.com/). It’s a node-based erosion engine that lets you start with any height field: GIS data, something you’ve sculpted, or even just a few shapes mashed together.

    It’s waaaay beyond just layering a few noises together, and it’s so much fun to use. It’s designed to be procedural but also art-directable, since we have to match artwork and reference from real-world locations.

    There are probably some nodes that would benefit from being encoded in an ML model, but there’s no single step that would do it all.

  • klankin@piefed.ca · 4 points · 7 hours ago

    We don’t yet have proof that AI can “imagine” new things; it just interpolates between existing ones. For complex relationships such as realistic fluid/particle dynamics, it also requires billions of inputs before it approximates reasonable outputs, so the cost against a potentially nonexistent ROI timeline just doesn’t add up. It’s made even worse if you’re already running billions of viable simulations just to generate thousands of usable ones.

    This is why most modern techbro AI depends on massive internet piracy: without training data that’s already readily available (rather than something you can efficiently simulate), the algorithms aren’t worth much.

    Tangentially, this is why such algorithms have many applications in the medical field: they generally have access to large datasets of human-annotated diagnoses that can’t readily be created by a computer.

  • 474D · 2 points · 7 hours ago

    It most likely could, but it would also be extremely expensive and power-hungry to train for such a specific purpose. You would need to develop the training framework before anything else; creating that groundwork is the hurdle, and a huge one at that.

  • droning_in_my_ears · 1 point · 6 hours ago

    Sounds like a good project idea for you. I do know there are diffusion models for meshes, voxels, and other things.

    Of course it wouldn’t be perfect, but the goal is just to look good enough. You’re not after some high-fidelity physics sim.