As you can see from the title, I'm trying to set up a simple project where the user can draw a simple object at runtime. When they lift their finger (I am developing for mobile), the object should be given a collider and a rigidbody so it can interact dynamically with the in-game world (for example, be thrown somewhere).
I am familiar with the concept of Line Renderers in Unity, so I at least know how to visualize what the user has drawn.
But how would I make a simple 3d object from this drawing?
I was curious about this myself so I took a quick try at it. There are multiple problems to solve:
- screen (finger) space to world space
- accumulate the points in a buffer
- show the points-in-progress with a line renderer
- when the user's finger comes up:
  - create 3D geometry in the Z == 0 plane (including useful UVs)
  - create a PolygonCollider2D with those same points
  - put a Rigidbody2D on it so it falls
  - PROFIT! (I haven't figured this step out yet)
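A minimal sketch of that flow might look something like this (illustrative Unity C# only, not the actual MakeGeo code; it assumes a 2D physics setup with the camera looking down +Z at the Z == 0 plane, and the helper names are made up):

```csharp
using System.Collections.Generic;
using UnityEngine;

// Illustrative sketch: capture a stroke, preview it with a LineRenderer,
// then turn it into a mesh + 2D physics object when the finger comes up.
public class DrawToObject : MonoBehaviour
{
    public LineRenderer previewLine;              // shows the stroke in progress
    readonly List<Vector2> points = new List<Vector2>();

    void Update()
    {
        if (Input.GetMouseButton(0))              // mouse / first touch held down
        {
            // screen (finger) space to world space in the Z == 0 plane
            float depth = -Camera.main.transform.position.z;
            Vector3 world = Camera.main.ScreenToWorldPoint(
                new Vector3(Input.mousePosition.x, Input.mousePosition.y, depth));

            // accumulate the points in a buffer, skipping near-duplicates
            if (points.Count == 0 ||
                Vector2.Distance(points[points.Count - 1], world) > 0.05f)
            {
                points.Add(world);

                // show the points-in-progress with a line renderer
                previewLine.positionCount = points.Count;
                previewLine.SetPosition(points.Count - 1, world);
            }
        }
        else if (Input.GetMouseButtonUp(0) && points.Count >= 3)
        {
            CreateDrawnObject();
            points.Clear();
            previewLine.positionCount = 0;
        }
    }

    void CreateDrawnObject()
    {
        var go = new GameObject("DrawnShape");

        // create 3D geometry in the Z == 0 plane (including useful UVs)
        go.AddComponent<MeshFilter>().mesh = BuildMeshFromOutline(points);
        go.AddComponent<MeshRenderer>();          // assign a material here

        // create a PolygonCollider2D with those same points
        go.AddComponent<PolygonCollider2D>().points = points.ToArray();

        // put a Rigidbody2D on it so it falls
        go.AddComponent<Rigidbody2D>();
    }

    // Placeholder triangulation: a simple fan, which is only correct for
    // convex outlines. MakeGeo shows one way to handle the general case.
    static Mesh BuildMeshFromOutline(List<Vector2> outline)
    {
        var verts = new Vector3[outline.Count];
        var uvs = new Vector2[outline.Count];

        // bounding box so the UVs span roughly 0..1 across the shape
        var bounds = new Bounds(outline[0], Vector3.zero);
        foreach (var p in outline) bounds.Encapsulate(p);

        for (int i = 0; i < outline.Count; i++)
        {
            verts[i] = outline[i];                // Z stays 0
            uvs[i] = new Vector2(
                (outline[i].x - bounds.min.x) / Mathf.Max(bounds.size.x, 0.001f),
                (outline[i].y - bounds.min.y) / Mathf.Max(bounds.size.y, 0.001f));
        }

        var tris = new List<int>();
        for (int i = 1; i < outline.Count - 1; i++)
        {
            // flip to (0, i + 1, i) if the shape renders facing away from you
            tris.Add(0); tris.Add(i); tris.Add(i + 1);
        }

        var mesh = new Mesh { vertices = verts, uv = uvs, triangles = tris.ToArray() };
        mesh.RecalculateNormals();
        mesh.RecalculateBounds();
        return mesh;
    }
}
```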
I figured I’d play with it in my MakeGeo repository, which is where I have done other procedural geometry experiments and examples, so it’s in there right now, if you’d care to cheat and see one possible way of doing it.
The code is pretty straightforward. Hope it helps you in some way. LMK if you have any questions or see any errors.
I’m not sure what you mean: most of MakeGeo is actually 3D objects.
If you mean to change the 2D stuff I did yesterday into 3D, it wouldn’t take much:
- use a MeshCollider instead of a PolygonCollider2D
- make sure to supply the Mesh you created to the MeshCollider
You probably want to also extrude the given shape a bit to make it reasonable in 3D, which just means:
- choose a thickness
- make the front and back sides (there is already code) separated by that much
- go around the perimeter with each set of front and back edge chunks and make extra polys to bridge the front and back
You’d just have to decide how to UV-map the perimeter… or not!
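To make that concrete, here is a rough sketch of the extrusion step, assuming you already have the outline points (illustrative only; the caps are fan-triangulated, so it only really handles convex shapes, and you may need to flip the winding depending on which direction the shape was drawn):

```csharp
using System.Collections.Generic;
using UnityEngine;

public static class OutlineExtruder
{
    // Extrude a flat outline into a slab of the given thickness:
    // a front copy, a back copy, and bridging quads around the perimeter.
    public static Mesh Extrude(IList<Vector2> outline, float thickness)
    {
        int n = outline.Count;
        float half = thickness * 0.5f;

        var verts = new List<Vector3>();
        var tris = new List<int>();

        // front (z = -half) and back (z = +half) copies of the outline
        for (int i = 0; i < n; i++) verts.Add(new Vector3(outline[i].x, outline[i].y, -half));
        for (int i = 0; i < n; i++) verts.Add(new Vector3(outline[i].x, outline[i].y, +half));

        // front and back caps as triangle fans (back cap wound the other way)
        for (int i = 1; i < n - 1; i++)
        {
            tris.Add(0); tris.Add(i); tris.Add(i + 1);           // front
            tris.Add(n); tris.Add(n + i + 1); tris.Add(n + i);   // back
        }

        // go around the perimeter and bridge front and back with quads
        for (int i = 0; i < n; i++)
        {
            int j = (i + 1) % n;
            int f0 = i, f1 = j, b0 = n + i, b1 = n + j;

            tris.Add(f0); tris.Add(b1); tris.Add(f1);
            tris.Add(f0); tris.Add(b0); tris.Add(b1);
        }

        var mesh = new Mesh();
        mesh.SetVertices(verts);
        mesh.SetTriangles(tris, 0);
        mesh.RecalculateNormals();   // shared verts give soft edges; split them for crisp ones
        mesh.RecalculateBounds();
        return mesh;
    }
}
```

Then, instead of the 2D components:

```csharp
var mc = go.AddComponent<MeshCollider>();
mc.sharedMesh = mesh;       // supply the Mesh you created to the MeshCollider
mc.convex = true;           // required for a non-kinematic Rigidbody
go.AddComponent<Rigidbody>();
```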
To make the raycasts work to a planar point in arbitrary 3D space, just make a Plane object where you want your stuff to be made and raycast from the Camera to that Plane, and make your world points that way.
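In code that boils down to something like this (the class and method names are made up):

```csharp
using UnityEngine;

public static class DrawPlaneHelper
{
    // Project a screen/touch position onto an arbitrary drawing plane,
    // defined by a point on the plane and its normal.
    public static bool ScreenToPlanePoint(
        Camera cam, Vector2 screenPos, Vector3 planePoint, Vector3 planeNormal,
        out Vector3 worldPoint)
    {
        var plane = new Plane(planeNormal, planePoint);
        Ray ray = cam.ScreenPointToRay(screenPos);

        if (plane.Raycast(ray, out float enter))
        {
            worldPoint = ray.GetPoint(enter);
            return true;
        }

        worldPoint = planePoint;   // ray was parallel to (or pointing away from) the plane
        return false;
    }
}
```

Feed the returned worldPoint into the same point buffer as before, and the rest of the pipeline doesn't care where the plane is.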
You could use that to make one of those “paint in 3D VR” type things that you can see videos of, such as this one I found just now:
Hey Kurt,
yes, exactly what you wrote above. The only problem I have is that I can imagine/understand what you mean, but I don't know how to solve this programmatically.
Could you maybe give me a hand on this? Would be really appreciated!
Hey Kurt, thanks again for spending time on my problems.
So I basically want to be able to draw three-dimensional objects in a 3D space. It's basically exactly what your code already does; I only need the generated objects to be real 3D objects. So, for example, if I draw a circle with my finger, it would create a sphere.
If I drew a rectangular shape, it would create a box.
If I painted a shape with multiple edges, it would just take that shape and "extrude" it, so it becomes three-dimensional.
Does this make sense?
Unfortunately, I am completely new to procedural mesh generation, so there is no code to show yet.
So the difficulty is specifically how to approach that mesh generation. I have basic knowledge of how meshes are constructed and how they work, but it's not deep enough to programmatically implement what you wrote here:
If you mean to change the 2D stuff I did yesterday into 3D, it wouldn't take much:
- use a MeshCollider instead of a PolygonCollider2D
- make sure to supply the Mesh you created to the MeshCollider
You probably want to also extrude the given shape a bit to make it reasonable in 3D, which just means:
- choose a thickness
- make the front and back sides (there is already code) separated by that much
- go around the perimeter with each set of front and back edge chunks and make extra polys to bridge the front and back
What is going to be very difficult is recognizing a gesture and categorizing it appropriately.
How “perfect” does it have to be to be a circle versus a square? Do imperfect circles get drawn too? What happens to their depth? Same thing with boxes: what if you draw a non-parallel box?
It sounds like you need to make a complete design document that spells out what happens in each case, such as: what if you draw two sides of a box with a rounded bottom? The answer isn't simple, and it is up to you to define it well enough that you can take a crack at it yourself. You can google up gesture recognition and shape recognition to see lots of discussions about it, but that does not mean it is an easy problem.
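Just to give a flavour of what even the simplest case involves, here is one naive way you might test whether a stroke is "roughly a circle" (purely illustrative, with an arbitrary tolerance; real gesture recognizers are considerably more involved):

```csharp
using System.Collections.Generic;
using UnityEngine;

public static class StrokeClassifier
{
    // Naive circle test: a stroke is "roughly a circle" if every point sits
    // at about the same distance from the centroid of the stroke.
    public static bool LooksLikeCircle(IList<Vector2> points, float tolerance = 0.15f)
    {
        if (points.Count < 8) return false;

        // centroid of the stroke
        Vector2 centroid = Vector2.zero;
        foreach (var p in points) centroid += p;
        centroid /= points.Count;

        // mean distance from the centroid
        float meanRadius = 0f;
        foreach (var p in points) meanRadius += Vector2.Distance(p, centroid);
        meanRadius /= points.Count;

        // reject if any point deviates from the mean radius by more than
        // tolerance * meanRadius
        foreach (var p in points)
        {
            if (Mathf.Abs(Vector2.Distance(p, centroid) - meanRadius) > tolerance * meanRadius)
                return false;
        }
        return true;
    }
}
```

Even this toy example immediately raises the design questions above: a squashed oval or a rounded square will pass or fail depending entirely on the tolerance you pick, and only you can decide what the "right" answer is.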