Sorry, I am NOT a graphics person or a prompt engineer … as will probably become very obvious the more I write.
I use these sprites in my game - they are over 10 years old - so maybe there is a newer way to do these color changes.
I have a player made up of three layered PNG images; each of the three can have its color changed to customize the player.
Image A is the face, skin and general outline of the player.
Image B is the uniform primary color (red) but can be any color the user wants.
Image C is the uniform secondary color (grey), which can also be any color the user wants.
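For context, the way I do those color changes today is the classic per-layer tint in Unity - images B and C are authored in white/grey and multiplied by the chosen color at runtime. A minimal sketch of that (the component and field names here are just mine for illustration):

```csharp
using UnityEngine;

// Attach to the player object; assign the three layered SpriteRenderers in the Inspector.
public class PlayerColorizer : MonoBehaviour
{
    [SerializeField] private SpriteRenderer skinLayer;      // Image A: face, skin, general outline
    [SerializeField] private SpriteRenderer primaryLayer;   // Image B: uniform primary, authored in white/grey
    [SerializeField] private SpriteRenderer secondaryLayer; // Image C: uniform secondary, authored in white/grey

    // SpriteRenderer.color multiplies the sprite's pixels, so white areas
    // become exactly the picked color and grey areas a darker shade of it.
    public void ApplyUniformColors(Color primary, Color secondary)
    {
        skinLayer.color = Color.white; // Image A stays as authored
        primaryLayer.color = primary;
        secondaryLayer.color = secondary;
    }
}
```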
When using Muse Sprite, I can highlight all three sprites and drop them into the Input Image, but only the first image is shown.
First question: will Muse Sprite be able to accept multiple sprites and return each of them as layers?
And/or, could you somehow prompt Muse Sprite to create three layered sprites as output?
Second, I was hoping to simply drag and drop my sprites and tell Muse to give me 4 variants of the Input Image. I have tweaked Style Strength and Tightness; what seems to work best is Style Strength = 0 and Tightness = 1. But the images returned are only vaguely similar to the original. What prompt/settings can I use to get my input image holding a bat, holding up both arms, or as a sprite sheet of arm motions?
Again, sorry if I’m out of bounds here - maybe I was hoping for a miracle
Thanks
I’m not a Unity rep or anything, but based on what you listed in your first question, this doesn’t sound like something neural networks can do at this point.
As far as I understand, generative image networks are primarily driven by your textual prompt; reference images are only used to roughly pull in information you didn’t specify in text and add it to the task conditions. So the result you described could be achieved by taking N generators and giving each one the task of producing a single layer, using the matching reference layer from your sources, with the text prompt for each edited to describe just that sub-task.

If you automated splitting your input prompt into those sub-tasks, assigned a generator and a reference layer to each, and then collected the per-layer results into a final layered file, you could get what you want - but as far as I know, nothing like that currently exists. Again, I’m just a student studying neural networks; I could be wrong, and it would be nice to get a comment from a recognized expert.
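To make that concrete, here is a purely hypothetical sketch of the “one generator per layer” pipeline. None of these types exist in Muse or Unity today; Generate() and everything around it is invented just to illustrate the structure:

```csharp
using System.Collections.Generic;

// Hypothetical stand-in for a single image generator; no such API exists today.
public interface ILayerGenerator
{
    byte[] Generate(string subPrompt, byte[] referenceLayerPng);
}

public static class LayeredSpritePipeline
{
    // Runs one generation task per layer - each with its own sub-prompt and
    // its own reference image - and returns the results as an ordered stack
    // that a caller could assemble into a layered file.
    public static List<byte[]> GenerateLayers(
        ILayerGenerator generator,
        IEnumerable<(string subPrompt, byte[] referencePng)> tasks)
    {
        var layers = new List<byte[]>();
        foreach (var (subPrompt, referencePng) in tasks)
        {
            layers.Add(generator.Generate(subPrompt, referencePng));
        }
        return layers;
    }
}
```

In your case, tasks would be something like ("face and skin only", imageA), ("uniform primary only", imageB), ("uniform secondary only", imageC).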
By the way, this is similar to assembly-line production, where each worker does their part to produce the final product. Accordingly, for each of your requests, such a system would have to develop something like a blueprint so that the parts can be assembled into a whole. How long did it take mankind to move from one-man production to manufactories?