Picture to Unwrapped Texture to Object

Hello!

I am kinda new to this topic and seem to be a little bit stuck.

I am currently trying to build a mobile app where I can load one or more pictures and use them as a texture for a 3D object. I created the object in Blender and unwrapped it.

What I am trying to do is manage those pictures kind of like ‘layers’ in Photoshop, and when I click a button, I want them to be flattened onto a single ‘layer’ (which would then be a texture) and used as the texture of the object.

This works fine for a single picture, because all I have to do is load the picture as a texture and place it on my object. But for multiple layers I don’t know how to do this.

I tried different things.

My first approach: I created a shader that is capable of handling multiple textures and loads the layers in by a naming convention. This works okay, but it doesn’t really respect the layer hierarchy because it just overlays the textures.
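Roughly, the idea on the C# side is something like this (simplified sketch, not my actual code; the _Layer0, _Layer1, … property names are just placeholders for the naming convention and have to match whatever the shader declares):

```csharp
using UnityEngine;

// Simplified sketch: push a list of layer textures into a material whose
// shader declares matching texture properties (e.g. _Layer0, _Layer1, ...).
// The property names are placeholders for the naming convention.
public class LayerBinder : MonoBehaviour
{
    public Material targetMaterial;
    public Texture2D[] layers;      // bottom layer first

    public void ApplyLayers()
    {
        for (int i = 0; i < layers.Length; i++)
        {
            // Only has an effect for properties the shader actually declares,
            // which is why every possible layer slot ends up hard-coded
            // in the shader itself.
            targetMaterial.SetTexture("_Layer" + i, layers[i]);
        }
    }
}
```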

My second approach: using a camera’s output as a render texture, taking a ‘snapshot’ of it and putting the snapshot on the object. The problem with this is that setting up the correct parameters for the camera’s view rect is pretty painful, because I have to keep the aspect ratio and size exactly the same as the texture I am taking a picture of, otherwise it won’t fit the unwrap of my model! Since I want to be able to do this with multiple different models and textures, I would like it to work dynamically, so I don’t think this approach is much good…
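The snapshot part itself is simple enough; it’s the camera setup around it that hurts. Simplified, the capture looks something like this (names and sizes are placeholders):

```csharp
using UnityEngine;

// Simplified sketch of the snapshot step: render the layer camera into a
// RenderTexture sized like the unwrap texture, then copy it into a Texture2D.
public class LayerSnapshot : MonoBehaviour
{
    public Camera layerCamera;   // camera looking at the stacked layer quads
    public int width = 1024;     // has to match the unwrap texture size
    public int height = 1024;

    public Texture2D TakeSnapshot()
    {
        var rt = new RenderTexture(width, height, 24);
        layerCamera.targetTexture = rt;
        layerCamera.Render();

        RenderTexture.active = rt;
        var snapshot = new Texture2D(width, height, TextureFormat.RGBA32, false);
        snapshot.ReadPixels(new Rect(0, 0, width, height), 0, 0);
        snapshot.Apply();

        // clean up so the camera renders to the screen again
        layerCamera.targetTexture = null;
        RenderTexture.active = null;
        rt.Release();

        return snapshot;
    }
}
```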

Has anyone tried something like this before? Any ideas on the best practice for achieving what I am trying to do?

Any help would be highly appreciated!

Thanks in Advance
Bob

I don’t understand that part; isn’t that what layers are supposed to do?

Well, I guess. But first of all, I don’t think shaders are capable of defining something like a list of textures, right? I’d have to hard-code every texture I want to be able to use in my shader. This could become a problem when users want to add and remove layers dynamically. And the second problem I think I have (although I’m not sure about this) is that it seems to be difficult to define layers that completely ‘cover’ the layer that lies underneath.

Why not do it manually using Texture2D? You don’t have to use a render texture.
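Something along these lines (untested sketch; it assumes all layers are readable, have the same size, and that the bottom layer is opaque):

```csharp
using UnityEngine;

// Untested sketch: flatten a stack of same-sized, readable Texture2Ds
// (bottom layer first) into a single texture with a simple alpha blend.
public static class TextureCompositor
{
    public static Texture2D Flatten(Texture2D[] layers)
    {
        int w = layers[0].width;
        int h = layers[0].height;
        Color[] result = layers[0].GetPixels();

        for (int i = 1; i < layers.Length; i++)
        {
            Color[] top = layers[i].GetPixels();
            for (int p = 0; p < result.Length; p++)
            {
                // Blend by the top layer's alpha instead of overwriting the
                // pixel, so transparent areas keep showing the layers below.
                Color blended = Color.Lerp(result[p], top[p], top[p].a);
                blended.a = 1f; // assumes the bottom layer is opaque
                result[p] = blended;
            }
        }

        var flattened = new Texture2D(w, h, TextureFormat.RGBA32, false);
        flattened.SetPixels(result);
        flattened.Apply();
        return flattened;
    }
}
```

Note that the textures need Read/Write enabled in their import settings, otherwise GetPixels will throw.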

Well, I DID try that actually! But it seems I have thrown the code away already.
The problem with that approach was that combining the Texture2Ds resulted in weird clipped images. Transparency seems to be a problem with this approach. Or is there a way to avoid this? What’s the best way to combine multiple textures?

By the way, I don’t necessarily NEED to use a RenderTexture; I just thought it would be a good solution from what I know, but that isn’t very much yet ;D

EDIT: The other thing I forgot to mention is that I want the user to be able to rotate and move the textures. I achieved moving textures by changing the offset of the texture, but I don’t know how to rotate and I can’t find anything about it. So I thought if I use a RenderTexture, I can just rotate the object and let the snapshot do the rest.
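The only non-RenderTexture idea I’ve had for rotation so far is resampling the pixels myself, something like this (untested sketch, and probably slow on mobile):

```csharp
using UnityEngine;

// Untested sketch: rotate a readable texture around its centre by resampling
// in UV space, since the material only exposes offset and tiling, not rotation.
public static class TextureRotator
{
    public static Texture2D Rotate(Texture2D source, float angleDegrees)
    {
        int w = source.width;
        int h = source.height;
        var rotated = new Texture2D(w, h, TextureFormat.RGBA32, false);

        float rad = angleDegrees * Mathf.Deg2Rad;
        float cos = Mathf.Cos(rad);
        float sin = Mathf.Sin(rad);

        for (int y = 0; y < h; y++)
        {
            for (int x = 0; x < w; x++)
            {
                // UV of the target pixel, relative to the centre
                float u = (x + 0.5f) / w - 0.5f;
                float v = (y + 0.5f) / h - 0.5f;

                // inverse rotation: where does this pixel come from in the source?
                float su = cos * u + sin * v + 0.5f;
                float sv = -sin * u + cos * v + 0.5f;

                Color c = (su < 0f || su > 1f || sv < 0f || sv > 1f)
                    ? Color.clear
                    : source.GetPixelBilinear(su, sv);
                rotated.SetPixel(x, y, c);
            }
        }

        rotated.Apply();
        return rotated;
    }
}
```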