Need help with term search and feasibility questions.

Hello Unity Forums. I’m completely new to Unity and am looking for help finding the proper terms to search for, so I can look up the right topics and tutorials online for a project I’ll be working on for class. I’m also looking for opinions on whether the project is feasible from those with much more experience than me. Refer to the reference image below.

  1. I’m trying to make an ultrasound simulation for class. I basically need a probe object to interact with a number of tissue objects based on in-game pressure applied. I need the probe to deform the skin object (the square tile) and, to a lesser extent, the artery object (the tube). The skin and tube need to deform when the probe presses down on the surface, but rebound when the probe is removed, and the amount of deformation should be proportional to the in-game pressure. Is this doable? If so, what is the proper terminology for this in-game behavior? I’ve looked up mesh deformation, but that’s always done for a 2D plane, not a 3D flattened cube. Or is the concept the same?

  2. I want a simulated cross-section plane attached to the tip of the probe that renders the cross-section of the intersected objects depending on the probe angle and depth. I need this to be rendered on a separate UI element. I’ve seen something similar done with cross-section shaders, but I’ve never seen the cross-section itself rendered on a different UI element or window. Is this doable? If so, might someone point me in the proper direction?

  3. This one is not important, but I would like a shadow to be cast on the cross-section render, so that objects above cast a shadow on those below, just to give it some extra realism. Low priority, really, but is this also doable? Do I need to make two or more different shaders to achieve this?

All of your help is much appreciated. :slight_smile:

This sounds like a really hard project. All doable, of course, but not easy.

Mesh deformation is mesh deformation. You will need to find the collision with the probe (in a more thorough way than Unity’s standard intersection tests), and deform the top surface of your box accordingly. Then you’ll need to also adjust the bottom surface of the box so as to maintain a fixed volume.

As a possible good-enough shortcut, you could skip the volume calculations, and just deform the lower surface in the same way as the upper surface.

If you take the end of the probe to be a sphere, then calculating the mesh points in this sphere — and where you need to move them to get them out of the sphere — is relatively easy. So, item 1 isn’t too bad really.
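To make the sphere idea concrete, here’s a rough sketch (untested; the field names, radius, and the assumption of unit scale on the object are all illustrative):

```csharp
using UnityEngine;

// Hypothetical sketch: any rest-position vertex that falls inside a
// spherical probe tip gets pushed out to the sphere's surface, along the
// line from the sphere's center to the vertex. Vertices outside the
// sphere snap back to their rest positions, which gives the rebound.
[RequireComponent(typeof(MeshFilter))]
public class ProbeDeformer : MonoBehaviour
{
    public Transform probeTip;     // assumed: an empty at the probe's spherical end
    public float probeRadius = 0.1f;

    Mesh mesh;
    Vector3[] restVertices;        // undeformed positions, cached once

    void Start()
    {
        mesh = GetComponent<MeshFilter>().mesh;
        restVertices = mesh.vertices;
    }

    void Update()
    {
        // Work in local space so the test is independent of the object's transform
        // (assumes uniform unit scale).
        Vector3 center = transform.InverseTransformPoint(probeTip.position);
        var verts = new Vector3[restVertices.Length];

        for (int i = 0; i < restVertices.Length; i++)
        {
            Vector3 offset = restVertices[i] - center;
            if (offset.magnitude < probeRadius)
                verts[i] = center + offset.normalized * probeRadius; // eject from sphere
            else
                verts[i] = restVertices[i];                          // rebound
        }

        mesh.vertices = verts;
        mesh.RecalculateNormals();
    }
}
```

Reassigning `mesh.vertices` each frame and calling `RecalculateNormals()` is the standard Unity pattern for this kind of runtime deformation.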

Item 2 is a bit harder. It’ll take some math to calculate where the various meshes intercept the cross-section plane. (If you can restrict this cross-section plane to be aligned with the coordinate system — say, to have a constant Z value — then the math gets a little easier.) Then for displaying it… uh… I don’t have a lot of great ideas. Maybe you could draw the intersected mesh surfaces with something like Vectrosity. But if you want filled areas, as you probably do, then that won’t work very well.

Probably what you’ll end up doing there is making another, 2D mesh, just for this cross-section display. Then you deform this mesh in the same way as the 3D ones below.

For item 3… uh… maybe that could be done with a shader effect. More likely, you adjust the vertex colors in your 2D cross-section mesh, according to how much stuff is “above” each one.

All this is pretty advanced stuff for someone who is not a professional developer, and also new to Unity. But it is doable!

I’ll look into these things. As for number 2, I’m talking about something like this:

I know that’s an animation, but I thought maybe something like that is doable in real-time in Unity. And yeah, I think it’s a bit advanced for a beginner, but I think I can get some help from a professional Unity person at work for some guidance. I just didn’t want to ask him how to do something that might not even be doable.

Your help is much appreciated! :slight_smile:

EDIT #1:

I think I know how I might be able to approach the problem. I’ll use something like an in-game security camera that renders on half of the in-game display.

  1. I’ll have a second, identical but invisible model of the tissues that mirrors all the deformations of the first, placed somewhere else in the scene and not visible to the main in-game camera.

  2. I’ll have the cross-section plane in the second model that mirrors the movements of the probe. Even though the model is invisible, I think I can make the cross-section shader render the cross-section.

  3. I’ll attach a camera that is always normal to the cross-section plane, set at a distance that allows the entire cross-section to be rendered. This camera will render the cross-section on the second half of the main display.

  4. For the shadow effect, I’ll probably add some random effects in the shader. It’s not really that important, though.
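A rough sketch of how the two-camera part of this could be wired up with layers and camera viewport rects (untested; the "CrossSection" layer name is made up, and you’d still need the duplicate model and cross-section shader):

```csharp
using UnityEngine;

// Hypothetical setup: the main camera ignores the hidden duplicate model,
// while a second camera sees only that model and draws onto the right
// half of the screen via its viewport rect.
public class CrossSectionCamera : MonoBehaviour
{
    public Camera mainCamera;
    public Camera sectionCamera;   // assumed: parented to the probe, facing the section plane

    void Start()
    {
        // Assumes a user-defined layer named "CrossSection" exists in the project.
        int sectionLayer = LayerMask.NameToLayer("CrossSection");

        // Main camera renders everything except the hidden duplicate model,
        // on the left half of the screen.
        mainCamera.cullingMask &= ~(1 << sectionLayer);
        mainCamera.rect = new Rect(0f, 0f, 0.5f, 1f);

        // Section camera renders only the duplicate model, flat and map-like,
        // on the right half of the screen.
        sectionCamera.cullingMask = 1 << sectionLayer;
        sectionCamera.orthographic = true;
        sectionCamera.rect = new Rect(0.5f, 0f, 0.5f, 1f);
    }
}
```

If you’d rather put the view on a UI element than split the screen, the same second camera can render into a `RenderTexture` (via `Camera.targetTexture`) displayed on a UI `RawImage` instead of setting `rect`.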

This works in my head, but I would love feedback on whether it works in reality from people with experience. Thanks. :slight_smile:


Most of this is over my head, but mesh deformation may also be referred to as “soft body physics” if you’re looking around for things. Definitely complex stuff, but it’s like eating an elephant: you just gotta take it one bite at a time.


I’ve been looking up soft body physics and found no tutorials online on how to approach the problem. I did find a video that is pretty close to what I want:

Anyone got ideas on how to achieve what is shown in the video above?

I’ve downloaded the Bullet Physics asset from the store. Not sure if that’s what I need for this project, though. I’ve also found the VertEXMotion development kit for soft bodies, but it’s pretty expensive and I’m not sure it’s even what I need.

I don’t think you need to get so fancy as to try pushing the soft-body physics onto the GPU. It should be fine to just do it in regular C# code in this case.

I don’t know of any tutorial that will cover exactly what you want. But learn about procedural mesh generation, and you’ll have pretty much everything you need to know to deform a mesh in response to the probe.
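For instance, a minimal procedural grid in the spirit of the usual tutorials (resolution and size here are arbitrary):

```csharp
using UnityEngine;

// Minimal procedural mesh example: build a subdivided plane from scratch.
// Once you can generate a vertex array like this, deforming the mesh is
// just modifying that same array and reassigning it each frame.
[RequireComponent(typeof(MeshFilter))]
public class ProceduralPlane : MonoBehaviour
{
    public int resolution = 10;    // quads per side
    public float size = 1f;        // world-space width of the plane

    void Start()
    {
        var verts = new Vector3[(resolution + 1) * (resolution + 1)];
        for (int y = 0, i = 0; y <= resolution; y++)
            for (int x = 0; x <= resolution; x++, i++)
                verts[i] = new Vector3(x * size / resolution, 0f, y * size / resolution);

        // Two triangles per quad, wound so the faces point up (+Y).
        var tris = new int[resolution * resolution * 6];
        for (int y = 0, t = 0; y < resolution; y++)
            for (int x = 0; x < resolution; x++)
            {
                int i = y * (resolution + 1) + x;
                tris[t++] = i;     tris[t++] = i + resolution + 1; tris[t++] = i + 1;
                tris[t++] = i + 1; tris[t++] = i + resolution + 1; tris[t++] = i + resolution + 2;
            }

        var mesh = new Mesh { vertices = verts, triangles = tris };
        mesh.RecalculateNormals();
        GetComponent<MeshFilter>().mesh = mesh;
    }
}
```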

I’m currently looking into using a bitmap to generate the deformation. If I could render a bitmap (or something like it) in real time depending on the pressure and coordinates of the probe, I might be able to do that. But I’m not sure how to generate the bitmap in real time.

Also, I know I can do that for the surface, but I’m unsure how to apply it to the tubes below :confused:

I don’t quite get what you mean by “using a bitmap to generate the deformation”. It’s easy enough to render a bitmap (you could use PixelSurface for example), but I don’t see how that helps deform a mesh.

As for the tubes below, you’ll deform them in the same way as the surface above, I would think. Possibly less if you want the tissue to compress a bit.

I mean how people use Perlin noise bitmaps to control the surface topology of a plane. I was thinking about having a separate setup that isn’t visible to the user, where the probe is actually just moving a black circle around on a white background. Thirty times per second, I’d have Unity internally capture an image of the black circle on the white background as a bitmap, and use that to control the topology of the surface plane. The background is refreshed back to white shortly before the next capture, so the probe can update its position and the surface plane height can reset if the probe has moved, updating the terrain topology in real time. This way the surface plane updates in real time with the probe. Maybe there is a simpler way.

One problem that comes to mind, though, is that this only works on a flat plane due to the coordinate system. If the surface is curved irregularly (which I’ll need eventually), the Unity game coordinates would be off and the surface depression would no longer be normal to the surface of the model.

As for the tubes, do I just transfer the Z-coordinate offset of the plane to the vertices of the tube below? How do I check where the probe is relative to the tubes?

People don’t use Perlin noise with bitmaps to control a surface topology. They just use the noise function to directly set the position of each vertex. Bitmaps are not involved.

Similarly, there is no point in using code to draw a black circle on a white background, and then using this image of a circle to set your vertex positions. Just set your vertex positions directly from the position of the circle.

As for the tubes, I can see that you’re getting hung up on perceived differences that aren’t really there. It’s too much to grasp all at once. So, forget about them for now. Get some mesh deformation code working for a simple plane first, and then I think it will be much easier to see how to apply it to the tubes.

I think the OP is referring to grayscale heightmaps when he says ‘bitmaps’. Perlin noise can be used with a heightmap to produce uneven topology. It can also provide animation by repeatedly offsetting the X and Y parameters, giving a bubbling or agitation effect.
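For what it’s worth, that bubbling effect can be done without any bitmap at all by sampling `Mathf.PerlinNoise` per vertex. A hedged sketch (the amplitude, scale, and speed values are just guesses):

```csharp
using UnityEngine;

// Sketch: animate a mesh by sampling Perlin noise directly per vertex.
// Scrolling the noise coordinates over time produces the "bubbling"
// agitation effect described above, with no intermediate heightmap image.
[RequireComponent(typeof(MeshFilter))]
public class PerlinJiggle : MonoBehaviour
{
    public float amplitude = 0.05f;   // vertical displacement range
    public float scale = 2f;          // noise frequency
    public float speed = 1f;          // how fast the field scrolls

    Mesh mesh;
    Vector3[] rest;                   // undeformed vertex positions

    void Start()
    {
        mesh = GetComponent<MeshFilter>().mesh;
        rest = mesh.vertices;
    }

    void Update()
    {
        float t = Time.time * speed;
        var verts = new Vector3[rest.Length];
        for (int i = 0; i < rest.Length; i++)
        {
            // PerlinNoise returns a value in roughly [0, 1].
            float h = Mathf.PerlinNoise(rest[i].x * scale + t, rest[i].z * scale + t);
            verts[i] = rest[i] + Vector3.up * (h * amplitude);
        }
        mesh.vertices = verts;
        mesh.RecalculateNormals();
    }
}
```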

My first impression was that a heightmap is a good idea. A grayscale footprint of the probe could be created in Photoshop (or ???), then you’d vary the brightness of the footprint image and apply it as a heightmap. The problem is that this is only an approximation and can only be as good as the footprint image; it does not accurately represent the actual probe.

@JoeStrout had the idea earlier in this thread to just use colliders. Colliders can register multiple points of contact, so you’d simply adjust each contact point inward a small amount. Normals will have to be recalculated, but this should work fine for depressing the mesh. I’m not really sure how to restore the mesh as the probe is lifted; constantly raising each vertex slowly back to its normal state while lowering collision points seems like a lot of ping-ponging. Maybe recalculate only when the probe’s Y position changes. Either way, I think colliders are your answer, and they are not difficult to work with, as they contain a lot of information about the collisions.

@Bill_Martini I’m also worried about how the heightmap will work on a curved surface, especially one that is irregularly curved like, say, a human neck or arm. How do I make it so that the probe always depresses in the direction normal to the surface rather than just in the negative-Z direction? Of course, this is mostly just a visual effect for the surface layer, but I would need the underlying structure to be compressed in the right direction.

I’ve done more thinking on this and my conclusion is: this is damn hard! Deforming the tissue mesh will not work properly using points of a circle or colliders, because they only act on the vertices that overlap. Tissue bends, and a cross-section looks similar to a sine wave: the peaks of the wave are the ‘normal’ tissue and the trough is the probe depression point. I wish my math skills were good enough to say exactly how to use sin/cos to adjust the vertices. Ultimately you want to adjust for probe depth and tissue density; softer tissue will have a larger depression dispersion, while harder tissue will have a smaller one.

There are some soft-body models in the Asset Store (search “jelly”). There might be some answers in inspecting these, and possibly the authors would be willing to assist you, as they know more about the subject.

At some point you’re going to have to actually start making attempts at this as seeing the results will guide you to the proper solution.

It doesn’t have to be that hard. Assuming, of course, that it only needs to look good. (If it needs to be actually realistic, then you’ve moved into the realm of simulation and life gets much more complicated.)

So for example, I’d start with this: measure the distance of each point to the end of the probe. Pass that distance through a sigmoid function (which you could even set up as an Animation Curve, so you can tweak it visually right within Unity), to get a value that indicates how far the point should move. Move each point away from the probe accordingly.

There are some refinements you’ll probably want, such as making sure that no point ends up closer than some minimum distance to the probe (this represents the physical extent of the probe; the tissue should never intersect that). Just check the distance after doing the above procedure, and if it’s too close, move it away to that minimum radius.
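Put together, that might look something like the following (an untested sketch; the field names, radii, and curve shape are all illustrative, and the falloff is the sigmoid-style curve mentioned above expressed as an AnimationCurve):

```csharp
using UnityEngine;

// Sketch: displacement as a function of distance to the probe tip.
// The curve maps normalized distance (0..1 over the influence radius)
// to push strength (1..0), and is tweakable in the inspector.
[RequireComponent(typeof(MeshFilter))]
public class FalloffDeformer : MonoBehaviour
{
    public Transform probeTip;
    public float probeRadius = 0.1f;       // physical extent of the probe tip
    public float influenceRadius = 0.5f;   // how far the deformation reaches
    public AnimationCurve falloff = AnimationCurve.EaseInOut(0f, 1f, 1f, 0f);

    Mesh mesh;
    Vector3[] rest;

    void Start()
    {
        mesh = GetComponent<MeshFilter>().mesh;
        rest = mesh.vertices;
    }

    void Update()
    {
        Vector3 tip = transform.InverseTransformPoint(probeTip.position);
        var verts = new Vector3[rest.Length];
        for (int i = 0; i < rest.Length; i++)
        {
            Vector3 away = rest[i] - tip;
            float d = away.magnitude;

            // Push each point away from the probe by a curve-shaped amount.
            float push = falloff.Evaluate(Mathf.Clamp01(d / influenceRadius)) * probeRadius;
            Vector3 v = rest[i] + away.normalized * push;

            // Refinement: never let tissue end up inside the probe itself.
            if ((v - tip).magnitude < probeRadius)
                v = tip + (v - tip).normalized * probeRadius;

            verts[i] = v;
        }
        mesh.vertices = verts;
        mesh.RecalculateNormals();
    }
}
```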

If this proves insufficient for some reason, the next step would probably be a ball-and-springs model, where you treat the edges in the mesh as springs connecting little masses at each vertex. This sounds hard but is actually not that bad. This can result in neat effects like the tissue jiggling a bit when you move the probe around (though I suspect that in reality, the amount of jiggle is so minor that it’s not really needed).
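The core of such a model is small. Here is roughly what one integration step could look like (a sketch under the stated assumptions, not tuned; the stiffness and damping values would need experimentation):

```csharp
using UnityEngine;

// Bare-bones mass-and-springs step: each mesh edge is a spring that pulls
// its two endpoint "masses" back toward the edge's rest length (Hooke's
// law). Calling Step once per frame, with damping, gives the jiggle
// described above.
public struct Spring
{
    public int a, b;           // vertex indices of the edge's endpoints
    public float restLength;   // edge length in the undeformed mesh
}

public static class SpringSolver
{
    // positions/velocities are per-vertex arrays; damping is e.g. 0.95f.
    public static void Step(Vector3[] pos, Vector3[] vel, Spring[] springs,
                            float stiffness, float damping, float dt)
    {
        foreach (var s in springs)
        {
            Vector3 delta = pos[s.b] - pos[s.a];
            float stretch = delta.magnitude - s.restLength;
            Vector3 force = delta.normalized * (stretch * stiffness);
            vel[s.a] += force * dt;   // pulled toward b when stretched
            vel[s.b] -= force * dt;   // pulled toward a when stretched
        }
        for (int i = 0; i < pos.Length; i++)
        {
            vel[i] *= damping;        // bleed off energy so it settles
            pos[i] += vel[i] * dt;
        }
    }
}
```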


Yes, sigmoid! I couldn’t for the life of me think of that. I like the ball-and-springs idea too; it might be overkill, but with proper damping it might be a good choice as well.

The deformation is indeed only for looks. Tissue accuracy isn’t necessary as long as it roughly emulates what it looks like in real life.

Does anyone know of any open-source example with interactive deformation where I can take a look at their code to see how they approached the problem? Doesn’t need to be precisely what I need.

Try this one.

Thanks! :smile: That’s an excellent start for me. Much appreciated.

EDIT: Got a C# question. What does the “10f” in this code mean? Rather, what does the “f” stand for?
public float force = 10f;


Float literals need a trailing ‘f’; without it, a decimal literal like 10.0 is a double, which won’t implicitly convert to float. 10f is shorthand for 10.0f.
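For example:

```csharp
float a = 10f;     // OK: 'f' marks the literal as a float
float b = 10.0f;   // same value, written out in full
// float c = 10.0; // compile error: 10.0 is a double, and doubles
                   // don't implicitly narrow to float
float d = 10;      // also OK: the int literal 10 converts implicitly
```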

So I used the tutorial provided here: Mesh Deformation, a Unity C# Tutorial, which JoeStrout gave me, and it works quite well. There is just one problem: it works with all the default Unity meshes (spheres, planes, cubes, etc.), but not with any mesh imported from 3DS Max as an OBJ. What might be the cause of this? The mesh looks fine and has all the subdivisions I need, but it won’t deform at all.