
Confused about the concept of "applying raytracing" to objects that don't exist

Discussion in 'Shaders' started by asperatology, Apr 23, 2019.

  1. asperatology

    Joined:
    Mar 10, 2015
    Posts:
    981
    I followed an online tutorial on creating a basic raytracing shader, then played around with it for a bit to get this result:

    [screenshot of the raytraced spheres]

    In essence, the shader from the tutorial casts rays, calculates their reflections, and simulates light reflecting off spheres.

    But in reality, there is nothing in the Unity scene, only a main camera that renders the shader's output to the target destination (the screen viewport, via a RenderTexture), and a directional light.
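
    In other words, the whole "scene" lives inside the shader, and the camera script just does a full-screen blit, roughly like this (a sketch for the built-in render pipeline; the class and material names are placeholders for whatever the tutorial uses):

    Code (CSharp):
    using UnityEngine;

    // Attach to the main camera (built-in render pipeline).
    // Replaces the camera's (empty) scene render with the shader's output.
    [RequireComponent(typeof(Camera))]
    public class RayTracingBlit : MonoBehaviour
    {
        public Material rayTracingMaterial; // material using the raytracing shader

        // Unity calls this after the camera finishes rendering the scene.
        void OnRenderImage(RenderTexture source, RenderTexture destination)
        {
            // Ignore the (empty) scene render and draw the shader's output
            // as a full-screen pass instead.
            Graphics.Blit(source, destination, rayTracingMaterial);
        }
    }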

    So, I'm confused.

    My goal is to apply the raytracing concepts shown above to actual GameObject meshes, in a scene that is filled with meshes and models.

    But the tutorial I read applies raytracing to objects that don't exist. How does one render reflections on actual Unity primitive Sphere meshes in the scene, each with a customized Material applied, instead of simulating light being reflected out of thin air?

    If anyone knows where I should start, thanks for the help.
     
  2. neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
    Technically, no objects exist in a Unity scene; they are all data. In games the primitives are usually triangles, which are grouped into meshes and then linked to other data inside a GameObject. In order to trace objects in the scene, you must access their mesh data and use that. So you either cycle through all objects, and then through all their triangles, as a naive tracer, or you build an acceleration structure like a BVH, store the data elsewhere, and trace inside that. Of course, for every scene update you will need to update the acceleration structure too.
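
    The naive "cycle through all objects, then through all their triangles" step looks roughly like this (a sketch, not optimized; it flattens everything into world space so a tracer only has to deal with triangles):

    Code (CSharp):
    using System.Collections.Generic;
    using UnityEngine;

    public static class SceneTriangles
    {
        public struct Triangle { public Vector3 a, b, c; }

        // Flatten every mesh in the scene into one world-space triangle list
        // that a tracer (CPU or GPU) can iterate over.
        public static List<Triangle> Gather()
        {
            var tris = new List<Triangle>();
            foreach (var mf in Object.FindObjectsOfType<MeshFilter>())
            {
                Vector3[] verts = mf.sharedMesh.vertices; // object-space positions
                int[] indices = mf.sharedMesh.triangles;  // three indices per triangle
                Transform t = mf.transform;

                for (int i = 0; i < indices.Length; i += 3)
                {
                    tris.Add(new Triangle
                    {
                        // transform to world space so rays and triangles agree
                        a = t.TransformPoint(verts[indices[i]]),
                        b = t.TransformPoint(verts[indices[i + 1]]),
                        c = t.TransformPoint(verts[indices[i + 2]]),
                    });
                }
            }
            return tris;
        }
    }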
     
  3. asperatology

    Joined:
    Mar 10, 2015
    Posts:
    981
    Is that a BVH (bounding volume hierarchy), or is this something new? Thanks.

    Would you happen to know of good reading materials and/or YouTube videos covering that process: iterating through the objects in the scene, accessing their mesh data, using that data, and then tracing through it all to get the final RenderTexture to apply?
     
  4. bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,352
    Let’s step back a moment.

    Traditionally, when you’re rendering a scene in Unity, there’s a bunch of systems in play: handling materials, meshes, and textures, passing those off to the GPU, culling objects that are out of view or occluded, then having the GPU render the remaining objects in multiple passes, using material data and shaders to produce the necessary data or apply lighting and shading.

    When you’re doing raytracing in a shader, you’re doing all of that yourself in a single shader. You have to handle uploading the mesh data in a form the compute or fragment shader has access to, as traditionally it does not. You have to handle each object having its own material type and data assigned to it, as traditionally a shader doesn’t need to know about multiple different materials. You need to handle culling or some kind of space partitioning so every ray doesn’t have to iterate over every object and every triangle; and you can’t simply cull to the objects within the frustum, since rays can bounce in any direction. Etc. etc. etc.

    Basically, you have to write a complete, custom rendering system on your own. There’s no cheap and easy way to do any of this for arbitrary scene data. Usually it’s just giant lists of data packed into structured buffers, plus other buffers that index into them.
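
    As a rough sketch of what that packing can look like (struct fields and shader property names here are made up; the layout has to match the StructuredBuffer declarations in your shader):

    Code (CSharp):
    using System.Runtime.InteropServices;
    using UnityEngine;

    // One entry per object: where its triangles live in the shared index
    // buffer, plus an index into a separate material buffer.
    [StructLayout(LayoutKind.Sequential)]
    public struct MeshInfo
    {
        public int indexOffset;   // where this object's indices start
        public int indexCount;    // how many indices it owns
        public int materialIndex; // which material entry to shade with
    }

    public class RayTracingBuffers : MonoBehaviour
    {
        ComputeBuffer vertexBuffer, indexBuffer, meshInfoBuffer;

        public void Upload(Vector3[] vertices, int[] indices, MeshInfo[] meshes, Material mat)
        {
            vertexBuffer = new ComputeBuffer(vertices.Length, sizeof(float) * 3);
            vertexBuffer.SetData(vertices);

            indexBuffer = new ComputeBuffer(indices.Length, sizeof(int));
            indexBuffer.SetData(indices);

            meshInfoBuffer = new ComputeBuffer(meshes.Length, sizeof(int) * 3);
            meshInfoBuffer.SetData(meshes);

            // Property names must match the shader's StructuredBuffer<> names.
            mat.SetBuffer("_Vertices", vertexBuffer);
            mat.SetBuffer("_Indices", indexBuffer);
            mat.SetBuffer("_MeshInfos", meshInfoBuffer);
        }

        void OnDestroy()
        {
            vertexBuffer?.Release();
            indexBuffer?.Release();
            meshInfoBuffer?.Release();
        }
    }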
     
  5. asperatology

    Joined:
    Mar 10, 2015
    Posts:
    981
    I see... it looks like what I had in mind was too abstract, too high-level for what this actually involves.