I think it would be great if Muse could make multi-layered images that can be used with the Unity 2D Animation package.
What a wonderful idea! Please tell us more about how you think it should work.
Currently, all AI painting models generate a single image (Stable Diffusion, Muse Sprite, etc.). But to make a 2D skeleton animation like this asset I made, the body parts need to be separated into separate layers.
That said, I think this might be hard to achieve at the current level of AI technology (not sure, I don't know much about the technical side). So a simpler and more useful solution might be a single-image model or LoRA that generates stylized, consistent images for game character art (exact side view, no perspective, consistent body proportions, consistent art style, transparent background), like the images below, which an artist would then manually separate into layers, retouch, and rig for skeleton animation.
Currently, Muse Sprite doesn't have an AI model for human-type characters. It would be great to have a model tailored to generating stylized game characters.
P.S. I don't think any of the current AI painting tools are production-ready (Stable Diffusion, Muse Sprite, etc.); they can only be used for prototyping. To be usable at the production stage, their output has to be retouched by an artist. Non-artists might have a hard time spotting the weird parts of AI artwork, but any artist will notice that something is off in a raw AI painting.
So, at the current level of the technology, an AI image generator for game studios shouldn't aim to be a standalone generator that needs no artist. There should always be an artist at the final stage of the workflow, and the AI should focus on enhancing the artists' productivity. I think the best driver of an AI image tool is the artist; for people who can't retouch and fix AI artwork, the tool is just a cool toy.
I think the challenge here is interesting, but maybe not as hard as one might think if approached creatively. The key is being able to create multiple layers that are consistent in shape and style. What if it was approached the way Adobe Firefly generates text effects, filling a fixed silhouette with a generated style? There could be an editor that lets a user define the general shape of each body part's silhouette, and the AI more or less fills them in based on a prompt/style. It could then export a PSB file for further refinement. A rough sketch of that idea is below.
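To make the idea concrete, here is a minimal sketch of silhouette-driven generation using the open-source diffusers library. This is purely an assumption about how such a feature might be built, not how Muse Sprite works internally; the part names, mask files, prompts, and output paths are hypothetical. Each body-part silhouette is used as an inpainting mask, the model fills it in from a prompt, and each filled part is saved as its own transparent layer that an artist could stack into a PSB for rigging.

```python
# Hypothetical sketch: fill user-drawn body-part silhouettes with a diffusion
# inpainting model, producing one transparent layer per part. This is not how
# Muse Sprite works; it only illustrates the "silhouette + prompt -> layer" idea.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

# Blank 512x512 canvas; each mask is a white-on-black silhouette drawn by the user.
canvas = Image.new("RGB", (512, 512), "white")
parts = {
    "torso": "torso of a cartoon viking, exact side view, flat colors",
    "head":  "head of a cartoon viking with helmet, exact side view, flat colors",
    "arm":   "arm of a cartoon viking, exact side view, flat colors",
    "leg":   "leg of a cartoon viking, exact side view, flat colors",
}

for name, prompt in parts.items():
    mask = Image.open(f"masks/{name}.png").convert("RGB")  # white = area to fill
    # Same seed for every part to nudge the outputs toward a consistent look
    # (in practice, cross-part consistency is exactly the hard problem).
    generator = torch.Generator(device="cuda").manual_seed(42)
    layer = pipe(prompt=prompt, image=canvas, mask_image=mask,
                 generator=generator).images[0]
    # Keep only the pixels inside the silhouette so each layer is transparent
    # outside its part and can be stacked and rigged separately.
    alpha = Image.open(f"masks/{name}.png").convert("L")
    layer.putalpha(alpha)
    layer.save(f"layers/{name}.png")
# An artist could then assemble layers/*.png into a PSB for the 2D Animation package.
```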
I think it's OK to generate a single image with the body parts separated as a start. In the input image we could draw how it should be generated.
I tried this approach with the following description:
“sprite sheet for a viking with torso, head , arms , legs drawn seperately”
But all I get are images of a viking in a pose matching the one in the doodle (oh yeah, also one llama and an ox, go figure).
I tried playing around with the tightness setting for the doodle, but it still only draws a whole character.
I also tried telling it to just draw "the arm of a viking" or "the head of a viking", but the results were not usable. Getting this feature right would elevate the usefulness of Muse Sprite tremendously for me, because not being able to generate animation frames or body parts for animation rigging relegates the generated sprites to non-animated objects.