Need help with term search and feasibility questions.

Discussion in 'Getting Started' started by Amuscaria, Aug 24, 2017.

  1. Amuscaria

    Amuscaria

    Joined:
    May 30, 2017
    Posts:
    55
Hello Unity Forums. I'm completely new to Unity and am looking for help finding the proper terms to search for, so I can look up the correct topics and tutorials online for a project I'll be working on for class. I'm also looking for opinions on whether the project I have in mind is feasible, from those with much more experience than me. Refer to the reference image below. ProjectRef.png

1) I'm trying to make an ultrasound simulation for class. I basically need a probe object to interact with a number of tissue objects based on in-game pressure applied. I need the probe to deform the skin object (the square tile), and to a lesser extent the artery object (the tube). The skin and tube need to deform when the probe presses down on the surface, but need to rebound when the probe is removed, and the amount of deformation should be proportional to the in-game pressure. Is this doable? If so, what is the proper terminology for this in-game behavior? I've looked up mesh deformation, but that's always done for a 2D plane, not a 3D flattened cube. Or is the concept the same?

2) I want a simulated cross-section plane attached to the tip of the probe that renders out the cross-section of the intersected objects depending on the probe angle and depth. I need this to be rendered on a separate UI element. I've seen something similar done with cross-section shaders, but I've never seen the cross-section itself rendered on a different UI element or window. Is this doable? If so, might someone point me in the proper direction?

3) This one is not important, but I would like a shadow to be cast on the cross-section render, so that objects above cast a shadow effect on those below, just to give it some extra realism. Low priority, really, but is this also doable? Do I need to make 2 or more different shaders to achieve this?

    All of your help is much appreciated. :)


    ProjectRef.png
     
  2. JoeStrout

    JoeStrout

    Joined:
    Jan 14, 2011
    Posts:
    9,796
    This sounds like a really hard project. All doable, of course, but not easy.

    Mesh deformation is mesh deformation. You will need to find the collision with the probe (in a more thorough way than Unity's standard intersection tests), and deform the top surface of your box accordingly. Then you'll need to also adjust the bottom surface of the box so as to maintain a fixed volume.

    As a possible good-enough shortcut, you could skip the volume calculations, and just deform the lower surface in the same way as the upper surface.

    If you take the end of the probe to be a sphere, then calculating the mesh points in this sphere — and where you need to move them to get them out of the sphere — is relatively easy. So, item 1 isn't too bad really.
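The "move them to get them out of the sphere" step can be sketched with plain vector math. This is a standalone illustration, not code from the thread: the `ProbePush` name is made up, and it uses `System.Numerics.Vector3` so it runs outside Unity (inside Unity you would use `UnityEngine.Vector3`, which has the same operations).

```csharp
using System.Numerics;

static class ProbePush
{
    // If a vertex lies inside the probe sphere, move it radially
    // outward (away from the sphere's center) until it sits exactly
    // on the sphere's surface. Vertices outside are left alone.
    public static Vector3 PushOut(Vector3 vertex, Vector3 probeCenter, float probeRadius)
    {
        Vector3 offset = vertex - probeCenter;
        float dist = offset.Length();
        if (dist >= probeRadius || dist == 0f)
            return vertex;   // outside the sphere (or exactly at its center): no change
        return probeCenter + offset * (probeRadius / dist);
    }
}
```

Running this over every vertex of the top surface each frame gives the basic dent shape; the rebound behavior would come from springing the vertices back toward their original positions afterwards.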

    Item 2 is a bit harder. It'll take some math to calculate where the various meshes intercept the cross-section plane. (If you can restrict this cross-section plane to be aligned with the coordinate system — say, to have a constant Z value — then the math gets a little easier.) Then for displaying it... uh... I don't have a lot of great ideas. Maybe you could draw the intersected mesh surfaces with something like Vectrosity. But if you want filled areas, as you probably do, then that won't work very well.

    Probably what you'll end up doing there is making another, 2D mesh, just for this cross-section display. Then you deform this mesh in the same way as the 3D ones below.

    For item 3... uh... maybe that could be done with a shader effect. More likely, you adjust the vertex colors in your 2D cross-section mesh, according to how much stuff is "above" each one.

    All this is pretty advanced stuff for someone who is not a professional developer, and also new to Unity. But it is doable!
     
  3. Amuscaria

    Amuscaria

    Joined:
    May 30, 2017
    Posts:
    55
I'll look into these things. As for number 2, I'm talking about something like this:


    I know that's an animation, but I thought maybe something like that is doable in real-time in Unity. And yeah, I think it's a bit advanced for a beginner, but I think I can get some help from a professional Unity person at work for some guidance. I just didn't want to ask him how to do something that might not even be doable.

    Your help is much appreciated! :)
    -----------------------------------------------
    EDIT #1:

I think I know how I might be able to approach the problem. I'll use something like an in-game security camera that renders on half of the in-game display.

    1) I'll have a second identical but invisible model for the tissues that would mirror all the deformations of the first somewhere else in the scene, not visible to the main in-game camera.

    2) I'll have the cross-section plane in the second model that mirrors the movements of the probe. Even though the model is invisible, I think I can make the cross-section shader render the cross-section.

3) I'll attach a camera that is always normal to the cross-section plane, set at a distance that allows the entire cross-section to be rendered. This camera will render the cross-section on the second half of the main display.

4) For the shadow effect, I'll probably add some random effects in the shader. It's not really that important, though.

This works in my head, but I would love some feedback from people with experience on whether it works in reality. Thanks. :)
     
    Last edited: Aug 25, 2017
    JoeStrout likes this.
  4. Schneider21

    Schneider21

    Joined:
    Feb 6, 2014
    Posts:
    3,498
    Most of this is over my head, but mesh deformation may also be referred to as "soft body physics" if you're looking around for things. Definitely complex stuff, but it's like eating an elephant: you just gotta take it one bite at a time.
     
    Amuscaria likes this.
  5. Amuscaria

    Amuscaria

    Joined:
    May 30, 2017
    Posts:
    55
I've been looking up soft body physics and found no tutorials online on how to approach the problem. I have found a video that is pretty close to what I want:


Anyone got ideas on how to achieve what is shown in the video above?

I've downloaded the Bullet Physics asset from the store. Not sure if that's what I need for this project, though. I've also found the VertEXMotion development kit for soft bodies, but it's pretty expensive and I'm not sure it will even be what I need.
     
  6. JoeStrout

    JoeStrout

    Joined:
    Jan 14, 2011
    Posts:
    9,796
    I don't think you need to get so fancy as to try pushing the soft-body physics onto the GPU. It should be fine to just do it in regular C# code in this case.

    I don't know of any tutorial that will cover exactly what you want. But learn about procedural mesh generation, and you'll have pretty much everything you need to know to deform a mesh in response to the probe.
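As a rough illustration of what "deform a mesh in response to the probe" means with Unity's Mesh API, here is a minimal, untested sketch (everything in it is an assumption for illustration: the class name, the `probeTip` transform, and the linear falloff; a real version would use a smoother falloff and rebound logic):

```csharp
using UnityEngine;

// Sketch: every frame, push vertices near the probe tip straight down,
// with a linear falloff toward the edge of the affected radius.
[RequireComponent(typeof(MeshFilter))]
public class SimpleProbeDent : MonoBehaviour
{
    public Transform probeTip;   // assumed: assigned in the Inspector
    public float radius = 0.5f;
    public float depth = 0.1f;

    Mesh mesh;
    Vector3[] original;

    void Start()
    {
        mesh = GetComponent<MeshFilter>().mesh;   // instance copy, safe to modify
        original = mesh.vertices;
    }

    void Update()
    {
        // Work in the mesh's local space, like the vertices themselves.
        Vector3 tip = transform.InverseTransformPoint(probeTip.position);
        var verts = new Vector3[original.Length];
        for (int i = 0; i < original.Length; i++)
        {
            verts[i] = original[i];
            float d = Vector3.Distance(original[i], tip);
            if (d < radius)
                verts[i].y -= depth * (1f - d / radius);   // deeper near the tip
        }
        mesh.vertices = verts;
        mesh.RecalculateNormals();
    }
}
```

Because it always starts from the stored `original` vertices, the surface snaps back as soon as the probe moves away, which is the rebound behavior described earlier in the thread.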
     
  7. Amuscaria

    Amuscaria

    Joined:
    May 30, 2017
    Posts:
    55
I'm currently looking into using a bitmap to generate the deformation. If I could render a bitmap (or something like it) in real time depending on the pressure and coordinates of the probe, I might be able to do that. But how do I make the bitmap in real time?

Also, I know I can do that for the surface, but I'm unsure how to apply that to the tubes below :/
     
  8. JoeStrout

    JoeStrout

    Joined:
    Jan 14, 2011
    Posts:
    9,796
    I don't quite get what you mean by "using a bitmap to generate the deformation". It's easy enough to render a bitmap (you could use PixelSurface for example), but I don't see how that helps deform a mesh.

    As for the tubes below, you'll deform them in the same way as the surface above, I would think. Possibly less if you want the tissue to compress a bit.
     
  9. Amuscaria

    Amuscaria

    Joined:
    May 30, 2017
    Posts:
    55
I mean how people use Perlin noise with bitmaps to control the surface topology of a plane. I was thinking about having a separate, hidden part of the scene where the probe is really just moving a black circle around on a white background. Thirty times per second, I'd have Unity capture that internally as a bitmap, and use the image of the black circle on the white background to control the topology of the surface plane. The background is reset to white shortly before the next capture, so the probe can update its position and the surface plane height resets wherever the probe is no longer present, updating the topology in real time. This way the surface plane tracks the probe. Maybe there is a simpler way.

One problem that comes to mind, though, is that this only works on a flat plane due to the coordinate system. If the surface is curved irregularly (which I'll need eventually), the Unity game coordinates would be off and the surface depression would no longer be normal to the surface of the model.

As for the tubes, do I just transfer the Z-coordinate offset of the plane to the vertices of the tube below? How do I check where the probe is relative to the tubes?
     
  10. JoeStrout

    JoeStrout

    Joined:
    Jan 14, 2011
    Posts:
    9,796
People don't use Perlin noise with bitmaps to control surface topology. They just use the noise function to directly set the position of each vertex. Bitmaps are not involved.

Similarly, there is no point in using code to draw a black circle on a white background, and then using this image of a circle to set your vertex positions. Just set your vertex positions directly from the position of the circle.
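The "noise drives the vertices directly, no image in between" point can be illustrated with a short Unity sketch (the class name and the `scale`/`height` values are made up for illustration):

```csharp
using UnityEngine;

// Sketch: set each vertex's height straight from the noise function.
// No bitmap is rendered or sampled anywhere.
[RequireComponent(typeof(MeshFilter))]
public class NoiseSurface : MonoBehaviour
{
    public float scale = 2f;     // horizontal frequency of the noise
    public float height = 0.3f;  // vertical amplitude

    void Start()
    {
        Mesh mesh = GetComponent<MeshFilter>().mesh;
        Vector3[] verts = mesh.vertices;
        for (int i = 0; i < verts.Length; i++)
        {
            // Sample the noise at the vertex's XZ position and write
            // the result directly into its Y coordinate.
            verts[i].y = height * Mathf.PerlinNoise(verts[i].x * scale,
                                                    verts[i].z * scale);
        }
        mesh.vertices = verts;
        mesh.RecalculateNormals();
    }
}
```

The probe-driven version works the same way, except the "function" being sampled is the distance to the probe instead of `Mathf.PerlinNoise`.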

    As for the tubes, I can see that you're getting hung up on perceived differences that aren't really there. It's too much to grasp all at once. So, forget about them for now. Get some mesh deformation code working for a simple plane first, and then I think it will be much easier to see how to apply it to the tubes.
     
  11. Bill_Martini

    Bill_Martini

    Joined:
    Apr 19, 2016
    Posts:
    445
I think the OP is referring to grayscale heightmaps when he says 'bitmaps'. Perlin noise can be used with a heightmap to provide uneven topology. It can also provide animation by repeatedly offsetting the X and Y parameters, giving a bubbling or agitation effect.

My first impression was that a heightmap is a good idea. A grayscale footprint of the probe could be created in Photoshop (or ???); you would change the brightness of the footprint image and then apply it as a heightmap. The problem with that is it's just a simulation and can only be as good as the footprint image; it doesn't accurately represent the actual probe.

@JoeStrout had the idea earlier in this thread to just use colliders. Colliders can register multiple points of contact, so you'd simply adjust each contact point inward a small amount. Normals will have to be recalculated, but this should work fine for depressing the probe. I'm not really sure how to restore the mesh as the probe is lifted; constantly raising each vertex slowly back to a normal state while lowering collision points seems like a lot of ping-ponging. Maybe recalculate only when the probe's Y position changes. Either way, I think colliders are your answer, and they are not difficult to work with as they contain a lot of information about the collisions.
     
  12. Amuscaria

    Amuscaria

    Joined:
    May 30, 2017
    Posts:
    55
@Bill_Martini I'm also worried about how the heightmap will work on a curved surface, especially one that is irregularly curved, like, say, a human neck or arm. How do I make it so that the probe always depresses in the direction normal to the surface, rather than just in the negative-Z direction? Of course, this is mostly just a visual effect for the surface layer, but I would need the underlying structure to be compressed in the right direction.
     
  13. Bill_Martini

    Bill_Martini

    Joined:
    Apr 19, 2016
    Posts:
    445
    I've done more thinking on this and my conclusion is, this is damn hard! Deforming the tissue mesh will not work properly using points of a circle or colliders because they only act on the vertices that would overlap. Tissue bends and a cross section looks similar to a sine wave. The peaks of the wave are the 'normal' tissue and the trough is the probe depression point. I wish my math skills were good enough to provide exactly how to use sin / cos to adjust the vertices. Ultimately you want to adjust for probe depth and tissue density. Softer tissue will have a larger depression dispersion while harder will be smaller.

    There are some softbody models in the asset store (search jelly). There might be some answers by inspecting these. Possibly the authors would be willing to assist you as they would know more about the subject.

    At some point you're going to have to actually start making attempts at this as seeing the results will guide you to the proper solution.
     
  14. JoeStrout

    JoeStrout

    Joined:
    Jan 14, 2011
    Posts:
    9,796
    It doesn't have to be that hard. Assuming, of course, that it only needs to look good. (If it needs to be actually realistic, then you've moved into the realm of simulation and life gets much more complicated.)

    So for example, I'd start with this: measure the distance of each point to the end of the probe. Pass that distance through a sigmoid function (which you could even set up as an Animation Curve, so you can tweak it visually right within Unity), to get a value that indicates how far the point should move. Move each point away from the probe accordingly.

    There are some refinements you'll probably want, such as making sure that no point ends up closer than some minimum distance to the probe (this represents the physical extent of the probe; the tissue should never intersect that). Just check the distance after doing the above procedure, and if it's too close, move it away to that minimum radius.

    If this proves insufficient for some reason, the next step would probably be a ball-and-springs model, where you treat the edges in the mesh as springs connecting little masses at each vertex. This sounds hard but is actually not that bad. This can result in neat effects like the tissue jiggling a bit when you move the probe around (though I suspect that in reality, the amount of jiggle is so minor that it's not really needed).
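The distance-through-a-sigmoid idea above can be sketched as plain math. This is a standalone illustration (the `SigmoidFalloff` name and the parameter values are made up); in Unity you could replace the function with an AnimationCurve, as suggested, to tune the shape visually.

```csharp
using System;

static class SigmoidFalloff
{
    // Maps distance-from-probe to a displacement amount:
    // points near the probe get close to maxDisplacement,
    // points far away get close to 0, with a smooth transition
    // centered at falloffCenter whose steepness is `sharpness`.
    public static float Displacement(float distance, float maxDisplacement,
                                     float falloffCenter, float sharpness)
    {
        float s = 1f / (1f + (float)Math.Exp(sharpness * (distance - falloffCenter)));
        return maxDisplacement * s;
    }
}
```

Each vertex would then be moved away from the probe tip by `Displacement(distanceToProbe, ...)`, followed by the minimum-radius check described above so no vertex ends up inside the probe.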
     
    Bill_Martini likes this.
  15. Bill_Martini

    Bill_Martini

    Joined:
    Apr 19, 2016
    Posts:
    445
Yes, sigmoid! I couldn't for the life of me think of that. I like the ball-and-springs idea too; it might be overkill, but with proper damping it might be a good choice as well.
     
  16. Amuscaria

    Amuscaria

    Joined:
    May 30, 2017
    Posts:
    55
The deformation is indeed only for the looks. Tissue accuracy isn't necessary as long as it roughly emulates what it looks like in real life.

    Does anyone know of any open-source example with interactive deformation where I can take a look at their code to see how they approached the problem? Doesn't need to be precisely what I need.
     
  17. JoeStrout

    JoeStrout

    Joined:
    Jan 14, 2011
    Posts:
    9,796
  18. Amuscaria

    Amuscaria

    Joined:
    May 30, 2017
    Posts:
    55
    Thanks! :D That's an excellent start for me. Much appreciated.

EDIT: Got a C# question. What does the "10f" in this code mean? Rather, what does the "f" stand for:
public float force = 10f;
     
    Last edited: Aug 30, 2017
    Bill_Martini and JoeStrout like this.
  19. Bill_Martini

    Bill_Martini

    Joined:
    Apr 19, 2016
    Posts:
    445
In C#, float literals require a trailing 'f'; without it, a literal like 10.0 is treated as a double, which won't implicitly convert to float. 10f is shorthand for 10.0f.
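To make the distinction concrete, here is a standalone C# snippet (nothing Unity-specific; the class name is made up for illustration):

```csharp
static class FloatLiterals
{
    public const float A = 10f;      // 'f' marks a float literal (shorthand for 10.0f)
    // const float Bad = 10.0;       // would NOT compile: 10.0 is a double literal
    public const float FromInt = 10; // fine: integer literals convert implicitly to float
}
```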
     
  20. Amuscaria

    Amuscaria

    Joined:
    May 30, 2017
    Posts:
    55
So I used the tutorial example provided here: http://catlikecoding.com/unity/tutorials/mesh-deformation/ that JoeStrout gave me, and it works quite well. There is just one problem: it works with all the default Unity meshes (spheres, planes, cubes, etc.), but not with any mesh imported from 3DSMax as an OBJ. What might be the cause of this? The mesh looks fine and has all the subdivisions I need, but it won't deform at all.
     
  21. JoeStrout

    JoeStrout

    Joined:
    Jan 14, 2011
    Posts:
    9,796
    It certainly should work. Keep in mind that an imported model is often a hierarchy of game objects. The actual mesh might be a couple layers down. Twist things open in the Hierarchy window until you find the object with an actual MeshFilter on it, and then point your deformation script at that.
     
  22. Amuscaria

    Amuscaria

    Joined:
    May 30, 2017
    Posts:
    55
Capture1.PNG I think I've attached the script to the right component; see the screen capture. I added the script as a component to Box001 under the Skintest OBJ, but it doesn't work. I did the same for the in-Unity models and it worked fine. Not sure why it doesn't work on the imported things, though.
     
  23. JoeStrout

    JoeStrout

    Joined:
    Jan 14, 2011
    Posts:
    9,796
    Hmm, yeah, that looks right. Confirm that your mesh is what you think it is by changing the mode in the Scene view from "Shaded" to "Shaded Wireframe".

    If that doesn't lead to an a-ha moment, then I think you'll have to dig into the script code, littering it with Debug.Log statements (and studying the output) until you figure out what's different in this case.
     
  24. Amuscaria

    Amuscaria

    Joined:
    May 30, 2017
    Posts:
    55
Capture2.PNG I don't see any apparent problems. The vertexes are all welded, and there are plenty of them to make the movement apparent. Unless I have to UV unwrap the model before I export, I'm at a loss as to what the problem could be.
     
  25. JoeStrout

    JoeStrout

    Joined:
    Jan 14, 2011
    Posts:
    9,796
    I agree, it looks fine. So it's down to instrumenting the code (i.e. inserting lots of Debug.Logs) so you can see what's going on.
     
  26. Amuscaria

    Amuscaria

    Joined:
    May 30, 2017
    Posts:
    55
    My co-worker figured it out. Turns out it was obvious - I didn't have a collider. Problem solved! :D Thanks.

    EDIT:
Just a question for the near future. If I use the tutorial that JoeStrout provided, modify the code a bit for my own uses, and use that for my research project (and future education purposes), is that considered plagiarizing? I'm not familiar with the general consensus when it comes to computer code.
     
    Last edited: Sep 5, 2017
  27. Amuscaria

    Amuscaria

    Joined:
    May 30, 2017
    Posts:
    55
    Vessels-2.png

Here's my progress so far. The overall effect is there, but I still need to find a way to make the raycasts go through the models so that they affect the things below. I looked up "Physics.RaycastAll", which I understand can ignore models based on layers. Would that be the right way to go?

Another issue: although the deformation is close to what I want, it's not quite there. When you push down on the mesh, it deforms outwards; I want the vertexes to deform inwards. Refer to the figure below:

    DeformerChange.png

    The part of the code that handles the deformation offset is here:

void HandleInput ()
{
    Ray inputRay = Camera.main.ScreenPointToRay(Input.mousePosition);
    RaycastHit hit;

    if (Physics.Raycast(inputRay, out hit))
    {
        MeshDeformer deformer = hit.collider.GetComponent<MeshDeformer>();
        if (deformer)
        {
            Vector3 point = hit.point;
            point += hit.normal * forceOffset;
            deformer.AddDeformingForce(point, force);
        }
    }
}

I'm quite sure I just need to multiply the Vector3 hit.point by a negative value for 2 of its axes, but I'm not sure how to express that in code. Putting a negative in front of "forceOffset" doesn't give what I want, since it inverts all 3 axes.

I mean, say I have a Vector3 variable "Axes" with [X, Y, Z] values, and I only want to multiply the X and Y values by -1, but not the Z value. How would I write that? Or is it as simple as Axes * [-1,-1,1]?

    Thank you all in advance again for the help. :D
     
    Last edited: Sep 6, 2017
  28. Bill_Martini

    Bill_Martini

    Joined:
    Apr 19, 2016
    Posts:
    445
I don't see any initialization of forceOffset, but I assume it's a Vector3, and that somewhere you have Vector3 forceOffset = new Vector3(x, y, z);. Just change that to Vector3 forceOffset = new Vector3(-x, -y, z);.

    Hope this helps.
     
  29. Amuscaria

    Amuscaria

    Joined:
    May 30, 2017
    Posts:
    55
    Apologies, here's the entire code for this function:

    Code (csharp):
    1.  
    2. using UnityEngine;
    3.  
    4. public class MeshDeformerInput : MonoBehaviour {
    5.  
    6.     public float force = 10f;
    7.     public float forceOffset = 0.1f;
    8.  
    9.     void Update () {
    10.         if (Input.GetMouseButton(0)) {
    11.             HandleInput();
    12.         }
    13.     }
    14.  
    15.     void HandleInput ()
    16.     {
    17.         Ray inputRay = Camera.main.ScreenPointToRay(Input.mousePosition);
    18.         RaycastHit hit;
    19.      
    20.         if (Physics.Raycast(inputRay, out hit))
    21.         {
    22.             MeshDeformer deformer = hit.collider.GetComponent<MeshDeformer>();
    23.             if (deformer)
    24.             {
    25.                 Vector3 point = hit.point;
    26.                 point += hit.normal * forceOffset;
    27.                 deformer.AddDeformingForce(point, force);
    28.             }
    29.         }
    30.     }
    31. }
    32.  
    forceOffset is just a modifier that you can change while the game is running.
     
    Last edited: Sep 7, 2017
  30. Bill_Martini

    Bill_Martini

    Joined:
    Apr 19, 2016
    Posts:
    445
    Oh, it's a float...

    Try this.

    Code (CSharp):
    1. using UnityEngine;
    2.  
    3. public class MeshDeformerInput : MonoBehaviour {
    4.  
    5. public float force = 10f;
    6. public float forceOffset = 0.1f;
    7.  
    8. private Vector3 reverseVector = new Vector3(-1,-1,1) // Reverse X and Y direction only!
    9.  
    10. void Update () {
    11. if (Input.GetMouseButton(0)) {
    12. HandleInput();
    13. }
    14. }
    15.  
    16. void HandleInput ()
    17. {
    18. Ray inputRay = Camera.main.ScreenPointToRay(Input.mousePosition);
    19. RaycastHit hit;
    20.  
    21. if (Physics.Raycast(inputRay, out hit))
    22. {
    23. MeshDeformer deformer = hit.collider.GetComponent<MeshDeformer>();
    24. if (deformer)
    25. {
    26. Vector3 point = hit.point;
    27. point += (hit.normal * forceOffset) * reverseVector; // do reversal here
    28. deformer.AddDeformingForce(point, force);
    29. }
    30. }
    31. }
    32. }
    33.  
    I added a line and modified a line. That should be all you need.

I should have mentioned earlier: please use code tags when posting code. It makes the code easier to read and issues easier to spot.
     
  31. Amuscaria

    Amuscaria

    Joined:
    May 30, 2017
    Posts:
    55
    Thanks! Much appreciated. How do I code-tag? Is it the same as other forums where it's
    Code (csharp):
    1.  blah
    ?

    EDIT: I guess it is. :p

EDIT 2: Getting an error: "error CS0019: Operator '*' cannot be applied to operands of type 'UnityEngine.Vector3' and 'UnityEngine.Vector3'"

    It's pointing to this line:
    Code (csharp):
    1. point += (hit.normal * forceOffset) * reverseVector;
     
    Last edited: Sep 6, 2017
    Bill_Martini likes this.
  32. Bill_Martini

    Bill_Martini

    Joined:
    Apr 19, 2016
    Posts:
    445
    Yes, check the manual. You can set a raycast to ignore layers.

I'm wondering if you can add your deformer component to your other objects, and then do:

    Code (CSharp):
    1. deformerV1.AddDeformingForce(point + vecOffset, force * 0.5f);
Where vecOffset is a specified distance from the 'skin' mesh, and the force is reduced to compensate for tissue deformation. This way you raycast once and trigger the deformation of each mesh consecutively. The vecOffset can be calculated in the Start function by getting the difference between the tissue mesh and vein positions. You may need a separate offset for each mesh below the skin mesh.

    At some point you are going to have to kill the mouse control and add your probe model. At that point I don't think you need a raycast at all. You just need to substitute the Vector3 point to the bottom center of your probe position.
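For the layer question raised above, a layer-filtered raycast might look like this. It's an untested sketch: the layer name "Skin" and the class name are assumptions, and Unity would need that layer set up in the project.

```csharp
using UnityEngine;

public class ProbeRaycaster : MonoBehaviour
{
    void Update()
    {
        // Build a mask of every layer EXCEPT "Skin", so the ray
        // passes through the skin mesh and hits what lies beneath it.
        int mask = ~LayerMask.GetMask("Skin");

        Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
        RaycastHit hit;
        if (Physics.Raycast(ray, out hit, Mathf.Infinity, mask))
        {
            Debug.Log("Hit below the skin: " + hit.collider.name);
        }

        // Alternatively, Physics.RaycastAll returns every hit along the
        // ray regardless of order, which suits deforming several
        // tissue layers from a single cast.
    }
}
```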
     
    Last edited: Sep 7, 2017
    Amuscaria likes this.
  33. Amuscaria

    Amuscaria

    Joined:
    May 30, 2017
    Posts:
    55
Yes. I plan on replacing the camera raycast soon, when I make a simple probe. I just need to figure out how I want the control scheme to work first. I'm thinking about just using a graphics tablet, like a Wacom Intuos, as the controller, since it already has pen pressure and pen tilt. I could use its rotation wheel to control the orientation of the probe.

I think this is similar enough to an ultrasound probe in real life. I'm unsure if Unity has something that lets you detect a graphics tablet, though. I'm looking through the forums and online for answers right now. :)

I'll also need to figure out a way to 'glue' the probe to the surface of the skin so it's always pointing normal to the surface when the pen isn't tilted, and then adjust the tilt with the pen. I'm looking for code that lets an object stick to the surface of another object, like the Metroid Prime 2 Spider Ball. :p
     
    Bill_Martini likes this.
  34. Bill_Martini

    Bill_Martini

    Joined:
    Apr 19, 2016
    Posts:
    445
Yeah, that's wrong... Clearly I'm giving untested code here.

    Code (CSharp):
    1. using UnityEngine;
    2. public class MeshDeformerInput : MonoBehaviour {
    3. public float force = 10f;
    4. public float forceOffset = 0.1f;
    5. private Vector3 reverseVector = new Vector3(-1,-1,1) // Reverse X and Y direction only!
    6. void Update () {
    7. if (Input.GetMouseButton(0)) {
    8. HandleInput();
    9. }
    10. }
    11. void HandleInput ()
    12. {
    13. Ray inputRay = Camera.main.ScreenPointToRay(Input.mousePosition);
    14. RaycastHit hit;
    15. if (Physics.Raycast(inputRay, out hit))
    16. {
    17. MeshDeformer deformer = hit.collider.GetComponent<MeshDeformer>();
    18. if (deformer)
    19. {
    20. Vector3 point = Vector3.Scale(hit.point, reverseVector); // do reversal here
    21. point += hit.normal * forceOffset;
    22. deformer.AddDeformingForce(point, force);
    23. }
    24. }
    25. }
    26. }
    27.  
    Still untested beware.

Try not to add questions or statements in edits; use a new reply. Editing is for correcting/clarifying your posts, not for adding new content.
     
    Last edited: Sep 6, 2017
    Amuscaria likes this.
  35. Bill_Martini

    Bill_Martini

    Joined:
    Apr 19, 2016
    Posts:
    445
Well, the tablet should provide X, Y, Z coordinates. I'm thinking Unity should be able to read generic output. See the Input Manager section of the Project Settings.

A mouse should work fine too. The mouse will provide the X, Y coordinates, and the right/left mouse buttons could raise and lower the probe. This could also be done with the keyboard. I don't think finding a solution for this is difficult, as you have many options and may want to incorporate more than one.

You are going to be defining how the probe moves. If you don't want tilt, then don't change the rotation of the probe; you would have to add code to make it tilt if you wanted that. Simply omitting code is the easiest way to avoid the problem.
     
    Amuscaria likes this.
  36. Amuscaria

    Amuscaria

    Joined:
    May 30, 2017
    Posts:
    55
Duly noted. I'm just used to forums that frown upon double-posting and shameless self-bumping.
     
    Bill_Martini likes this.
  37. Bill_Martini

    Bill_Martini

    Joined:
    Apr 19, 2016
    Posts:
    445
I'm probably being a little pedantic about it, but there are good reasons to use a new post each time. When I click the link to this thread, I'm taken to the bottom (last post). If I hadn't scrolled up for some reason, I'd never have seen the second edit. Secondly, this is a conversation, and having questions and their answers scattered throughout the thread makes it very confusing to follow.

One of the largest benefits of these forums is that they live on far past the issue at hand and are searchable. What is said here now can help others in the future, that is, if I can stop providing incorrect code :( Everything you say can and will be used against you o_O
     
  38. Amuscaria

    Amuscaria

    Joined:
    May 30, 2017
    Posts:
    55
Ah, OK. I have a question regarding syntax. I'm not too familiar with C#, and it's been many years since I learned basic C++ in engineering school. What does this syntax denote?

Vector3.Scale(hit.point, reverseVector): I know the class is "Vector3" and Scale is a member of that class, but what does the stuff in the parentheses mean? Are they parameters?

    Thanks in advance. :)
     
  39. Amuscaria

    Amuscaria

    Joined:
    May 30, 2017
    Posts:
    55
Hm, the code isn't working. Any negative value in reverseVector stops it from doing anything, and I'm unsure why. Putting them all back to positive 1s restores the original behavior.

    Maybe I also need to change something in the script that does the actual deformation, but I don't see any shared variables.

    Code (csharp):
    using UnityEngine;

    [RequireComponent(typeof(MeshFilter))]
    public class MeshDeformer : MonoBehaviour
    {
        public float springForce = 20f; // pulls displaced vertices back toward their rest positions
        public float damping = 5f;      // bleeds off vertex velocity so the mesh settles

        Mesh deformingMesh;
        Vector3[] originalVertices, displacedVertices;
        Vector3[] vertexVelocities;

        float uniformScale = 1f;

        void Start ()
        {
            deformingMesh = GetComponent<MeshFilter>().mesh;
            originalVertices = deformingMesh.vertices;
            displacedVertices = new Vector3[originalVertices.Length];
            for (int i = 0; i < originalVertices.Length; i++)
            {
                displacedVertices[i] = originalVertices[i];
            }
            vertexVelocities = new Vector3[originalVertices.Length];
        }

        void Update ()
        {
            uniformScale = transform.localScale.x;
            for (int i = 0; i < displacedVertices.Length; i++)
            {
                UpdateVertex(i);
            }
            deformingMesh.vertices = displacedVertices;
            deformingMesh.RecalculateNormals();
        }

        // Spring-damper integration: each frame a vertex is pulled back toward
        // its original position and its velocity is damped so the mesh settles.
        void UpdateVertex (int i)
        {
            Vector3 velocity = vertexVelocities[i];
            Vector3 displacement = displacedVertices[i] - originalVertices[i];
            displacement *= uniformScale;
            velocity -= displacement * springForce * Time.deltaTime;
            velocity *= 1f - damping * Time.deltaTime;
            vertexVelocities[i] = velocity;
            displacedVertices[i] += velocity * (Time.deltaTime / uniformScale);
        }

        public void AddDeformingForce (Vector3 point, float force)
        {
            point = transform.InverseTransformPoint(point);
            for (int i = 0; i < displacedVertices.Length; i++)
            {
                AddForceToVertex(i, point, force);
            }
        }

        // The applied force falls off with the squared distance from the contact point.
        void AddForceToVertex (int i, Vector3 point, float force)
        {
            Vector3 pointToVertex = displacedVertices[i] - point;
            pointToVertex *= uniformScale;
            float attenuatedForce = force / (1f + pointToVertex.sqrMagnitude);
            float velocity = attenuatedForce * Time.deltaTime;
            vertexVelocities[i] += pointToVertex.normalized * velocity;
        }
    }
     
  40. Bill_Martini (Joined: Apr 19, 2016, Posts: 445)
    It should do a component-wise multiplication of the two Vector3s provided.
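    For reference, assuming the call under discussion is Unity's Vector3.Scale, the component-wise multiply behaves like this (a minimal illustration; the variable name is made up):

    Code (csharp):
    using UnityEngine;

    // Vector3.Scale multiplies the two vectors component by component,
    // so scaling by (-1, 1, 1) flips only the X component:
    // Scale((1, 2, 3), (-1, 1, 1)) gives (-1, 2, 3).
    Vector3 reversed = Vector3.Scale(new Vector3(1f, 2f, 3f), new Vector3(-1f, 1f, 1f));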
     
  41. Bill_Martini (Joined: Apr 19, 2016, Posts: 445)
    Hmmm, we need to back up a bit here. Earlier you requested a way to invert the values of a Vector3 and I kindly obliged. I was so busy looking for a solution that I failed to question the results. Altering hit.point changes where the force is applied; I'm guessing making the X and Y values negative puts the hit point off the mesh completely.

    So remove the reverseVector variable and take it out of the line defining Vector3 point (put that back as you had it originally). Then change this:

    Code (CSharp):
    deformer.AddDeformingForce(point, -force);
    This is just a guess and I hope it works; altering the Vector3 point was definitely wrong. I'll add your code to a project later so I can test before offering more suggestions.
     
  42. Amuscaria (Joined: May 30, 2017, Posts: 55)
    I think I tried -force before I asked how to reverse the X and Y in a Vector3. It ends up pulling things up, but it does pull the neighboring vertices closer together. I'm pretty sure it has something to do with how the forceOffset distributes the neighboring vertex velocities. Maybe I'll use a debug readout to see what changing the X and Y values is actually doing. I'll play with it some more and report back in a bit.

    I do really appreciate all the help you've provided, though. Thank you for that. :)
     
  43. Bill_Martini (Joined: Apr 19, 2016, Posts: 445)
    No! hit.point and the variable point are positions in 3D space relative to the skin mesh. That's what the raycast is doing: finding a point on the skin mesh. Force is applied at that point. Altering the deformation point will only change the location; you want to change the direction of deformation, not the point of deformation. Make sure -force really doesn't work, since that should reverse the deformation direction. I'm talking about force, not forceOffset; -force and force should give you different results.

    I see a few areas in the deformation code you provided that might be changed to correct the problem. If -force doesn't pan out, try changing this line in the AddForceToVertex function of the deformation code:

    Code (CSharp):
    vertexVelocities[i] -= pointToVertex.normalized * velocity;
     
  44. Amuscaria (Joined: May 30, 2017, Posts: 55)
    Untitled-2.png
    The left image is the default; the right is with -force. Changing the "+=" to "-=" in "vertexVelocities" doesn't seem to do anything that I can tell.

    NegForceandOffset.png

    Here's what I originally had with the default code, just setting force and forceOffset to negative. It works well for the skin but messes up the tubes. I'll probably make two versions of the deformer script and have the tubes use one and the skin use the other, with just the force reversed.

    But I think I'm bogging myself down too much with the visual things and not focusing on the main functionality, like getting the probe to do the deformation and having it orient itself on the skin surface. I'm currently looking at this forum post for details on how to do that: https://forum.unity3d.com/threads/character-align-to-surface-normal.33987/

    Thank you for the help thus far. I'm sure to have more questions soon.
     
  45. Bill_Martini (Joined: Apr 19, 2016, Posts: 445)
    Sorry for not replying right away; real life is raining down on me right now and I haven't had much keyboard time lately.

    I'm not quite understanding what is happening in the images you provided. Clearly the vein object is being deformed on an incorrect axis. I assumed you were working on the skin issue only. I'm not going to be of much help without the actual model you're working with. Are the skin, artery, vein, and nerve a single mesh? Are the artery, vein, and nerve submeshes, or is each a separate mesh? I'm guessing your two-script approach is the way you will have to do it, as the native orientations seem to be different.

    Yes, you need to start incorporating the probe into the mix. The reason is that it's going to introduce a whole new set of problems you'll have to work out; whatever you fix now, you'll be fixing again with the probe added.

    The normal way to keep an object perpendicular to another is to get the contact-point normal of the second object and align the first to that normal. Since you are deforming the very area you want to obtain the normal from, that presents a bit of a problem. I'm not sure how to go about this other than a second, off-screen mesh that you get the normal from.

    I can't tell from your sketches what your probe actually looks like. Is it more like the video? If so, you will have to make multiple deform calls along a line. You're currently working with a single point. Multiple deform calls will overlap and add additional deforming to a previous deformation. Probe rotation in itself is very easy and rotation can be applied to a specific axis only.

    Get your probe working properly as to position and orientation, then hammer out the rest. The probe is going to cause you more grief than you expect.

    I don't know how much longer I'm going to be dealing with family issues; I'll pop back as often as possible.
     
  46. Amuscaria (Joined: May 30, 2017, Posts: 55)
    Oh, no worries about not replying right away. I was away on the weekend myself. And real life is more important. Do what you need to do first. :)

    The entire scene is exported as a single mesh with each component as a submesh. I imported the OBJ and assigned the script and materials to each submesh (skin, artery, vein, and nerve); right now they're all running the same MeshDeformer script. I was actually mistaken that the -force method works on the skin; it turns out it doesn't either. It only works on the surface the camera is looking at, while the back face gets pushed out way too far. It's just more apparent on the vein because I can see the other side due to its small size.

    And I hadn't thought about the alignment issue with deforming meshes. Maybe I can have an invisible mesh under the skin that does the alignment; it would also let me set a maximum depth the probe can go. Also, for this experiment the probe is just a rounded cylinder, so a single point is enough for now. I just wanted to see if the process is possible before I proceed to the final product with anatomically realistic models.
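    One way that invisible-mesh idea could double as a depth limit, sketched very roughly (everything here, from the class name to the layer field to the ray setup, is a made-up assumption, not code from this thread):

    Code (csharp):
    using UnityEngine;

    // Hypothetical sketch: clamp how deep the probe tip may sink below the
    // hidden alignment mesh. Assumes the probe presses along its own -up axis.
    public class ProbeDepthLimit : MonoBehaviour
    {
        public LayerMask alignmentMask; // layer of the invisible alignment mesh
        public float maxDepth = 0.5f;   // deepest allowed penetration

        void LateUpdate ()
        {
            RaycastHit hit;
            // Cast upward from well below the probe to find the rest surface.
            Vector3 origin = transform.position - transform.up * 10f;
            if (Physics.Raycast(origin, transform.up, out hit, 20f, alignmentMask))
            {
                // Positive depth means the probe tip is below the surface point.
                float depth = Vector3.Dot(hit.point - transform.position, transform.up);
                if (depth > maxDepth)
                {
                    transform.position = hit.point - transform.up * maxDepth;
                }
            }
        }
    }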
     
  47. Amuscaria (Joined: May 30, 2017, Posts: 55)
    Update: Got the probe surface alignment to work with the deformation after some tries. :D Still need to change the deformation to RaycastAll so that the stuff under the skin is deformed too. Otherwise I'm off to the shaders section of the project. Thanks all for the help; I'll be back with more questions soon. :)

    ProbeDeform.png
     
  48. Amuscaria (Joined: May 30, 2017, Posts: 55)
    I'm having trouble figuring out why this code doesn't do anything.

    Here is the original raycasting function. It detects whether or not the hit mesh has the MeshDeformer script component; if it does, the mesh is deformed wherever the ray hits.

    Code (csharp):
    using UnityEngine;

    public class MeshDeformerInput : MonoBehaviour
    {
        public float force = 10f;
        public float forceOffset = 0.1f;

        void Update ()
        {
            if (Input.GetMouseButton(0))
            {
                HandleInput();
            }
        }

        void HandleInput ()
        {
            Ray inputRay = Camera.main.ScreenPointToRay(Input.mousePosition);
            RaycastHit hit;

            if (Physics.Raycast(inputRay, out hit))
            {
                MeshDeformer deformer = hit.collider.GetComponent<MeshDeformer>();
                if (deformer)
                {
                    Vector3 point = hit.point;
                    point += hit.normal * forceOffset;
                    deformer.AddDeformingForce(point, force);
                }
            }
        }
    }
    I tried converting this to a RaycastAll so that the underlying structures are also deformed. I altered the code like this:

    Code (csharp):
    using UnityEngine;

    public class MeshDeformerInput2 : MonoBehaviour
    {
        public float force = 10f;
        public float forceOffset = 0.1f;
        public float maxRayDistance = 20f;
        public GameObject probe;
        public bool debugRay = true;

        void Update ()
        {
            if (Input.GetMouseButton(0))
            {
                HandleInputV2();
            }
        }

        void HandleInputV2 () // uses RaycastAll and originates from the probe
        {
            Ray inputRay = new Ray(probe.transform.position, -probe.transform.up);
            RaycastHit[] hits = Physics.RaycastAll(inputRay, maxRayDistance);

            foreach (RaycastHit hit in hits)
            {
                MeshDeformer deformer = hit.collider.GetComponent<MeshDeformer>();
                if (deformer)
                {
                    Vector3 point = hit.point;
                    point += hit.normal * forceOffset;
                    deformer.AddDeformingForce(point, force);
                }

                if (debugRay)
                {
                    Debug.DrawLine(probe.transform.position, inputRay.origin + (-probe.transform.up) * maxRayDistance, Color.green);
                    //Debug.Log("Debug raycast is working");
                }
            }
        }
    }
    The thing runs without any errors, and the debug DrawLine works fine. I've also assigned the proper "probe" object, but nothing happens; the mesh isn't deformed at all. What mistake am I not seeing?

    Thank you all in advance. :)
     
  49. Amuscaria (Joined: May 30, 2017, Posts: 55)
    OMG, I'm an idiot. I didn't set the force for my new script properly; I left it at the default 10 instead of the 4000 it needs to work. Lol. It works now.
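    For anyone following along, the magnitudes make sense given the attenuation in MeshDeformer's AddForceToVertex: the force is divided by (1 + squared distance) and then scaled by Time.deltaTime, so a small force adds almost no velocity per frame. A rough back-of-the-envelope check in plain C# (the sample distance and frame time are made-up illustrative values):

    Code (csharp):
    // Rough illustration of why force = 10 appears to do nothing.
    // MeshDeformer attenuates: force / (1 + d*d), then scales by deltaTime.
    class ForceScaleCheck
    {
        static void Main ()
        {
            float deltaTime = 1f / 60f; // roughly one 60 fps frame
            float d = 2f;               // sample distance from the contact point

            float weak   = 10f   / (1f + d * d) * deltaTime; // ~0.03 velocity added per frame
            float strong = 4000f / (1f + d * d) * deltaTime; // ~13.3 velocity added per frame

            System.Console.WriteLine(weak + " vs " + strong);
        }
    }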
     
  50. Amuscaria (Joined: May 30, 2017, Posts: 55)
    Got a question regarding if statements limiting the rotation of objects. I'm trying to make the W and S keys rotate the object along the Z-axis as long as the game object is within +/- 36 degrees of the parent object that orients it to the surface of the skin. I wrote the following:

    Code (csharp):
    void TiltFunction1 () // tilts the probe along the Z-axis
    {
        if (transform.eulerAngles.z <= transform.parent.eulerAngles.z + 36f || transform.eulerAngles.z >= transform.parent.eulerAngles.z - 36f) // limits the tilt to +/- 36 degrees
        {
            if (Input.GetKey("w"))
            {
                transform.Rotate(0, 0, tiltSpeed);
            }
            if (Input.GetKey("s"))
            {
                transform.Rotate(0, 0, -tiltSpeed);
            }
        }
        Debug.Log("Parent rotation is " + transform.parent.eulerAngles.z);
        Debug.Log("Probe rotation is " + transform.eulerAngles.z);
    }
    Everything works except the if statement: I can rotate the object endlessly, and I'm not sure why. What am I doing wrong?
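    An editorial aside on the snippet above: joined with ||, the two comparisons can never both fail (every angle is either below parent + 36 or above parent - 36), so the guard never blocks rotation, and eulerAngles values also wrap to the 0-360 range, which makes raw comparisons fragile. One common alternative is to track the tilt as a signed offset and clamp it, sketched here as a guess (the class name and maxTilt field are invented; tiltSpeed mirrors the field from the original script):

    Code (csharp):
    using UnityEngine;

    // Hypothetical sketch: keep the tilt as a signed offset from the parent
    // and clamp it, instead of comparing wrapped eulerAngles values.
    public class ProbeTilt : MonoBehaviour
    {
        public float tiltSpeed = 1f;
        public float maxTilt = 36f;

        float tilt; // signed tilt relative to the parent, in degrees

        void Update ()
        {
            if (Input.GetKey("w")) tilt += tiltSpeed;
            if (Input.GetKey("s")) tilt -= tiltSpeed;
            tilt = Mathf.Clamp(tilt, -maxTilt, maxTilt);

            // Apply the clamped tilt on top of the parent's orientation.
            transform.localRotation = Quaternion.Euler(0f, 0f, tilt);
        }
    }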
     