I noticed that Muse Texture doesn’t like to draw triangles.
In this example I typed, “lots of red circles, blue squares, and green triangles”. There are some green squares and green circles, but no green triangles. There don’t seem to be any triangles at all.
Then I tried various other triangle tests and found that Muse just doesn’t seem to like triangles. Is there a certain syntax that should be used to group things together, or what method would I use for it to draw a green triangle?
I really like the textures it generates btw. It kind of puts a spin on what you type, but it generates very interesting repeating textures, and it exports materials with several different maps (normal, metallic, AO, etc.). It's just a little strange that it would leave out triangles completely across ten images.
@Alex-Reid can you help?
I was able to get some triangles, but I guess this might be easier to achieve with shapes?
Diffusion models are generally quite weak at high-consistency patterns that include specific repeating geometric shapes. If you supply a shape image, you can get much more accurate results for what you have in mind.
We will have a feature in the near future that lets you take any image as input; it will create a shape/pattern image from that input, which you can then use to guide your generations. It will let you draw some shapes in Paint/Photoshop/etc. and then prompt to fill them with a nice-looking texture.
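To make the "draw your shapes first" workflow concrete, here is a minimal, hypothetical sketch of producing such a guide image in code rather than in Paint: it rasterizes a tiling triangle pattern into a black-and-white mask and writes it out as a PGM file. The actual shape-image format Muse Texture expects isn't specified in this thread, so treat the file format and sizes here as assumptions.

```python
# Hypothetical sketch: rasterize a repeating triangle pattern into a
# black/white mask that could serve as a shape guide image. The exact
# input format Muse Texture expects is an assumption here.

def point_in_triangle(px, py, tri):
    """Barycentric point-in-triangle test."""
    (x1, y1), (x2, y2), (x3, y3) = tri
    d = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    a = ((y2 - y3) * (px - x3) + (x3 - x2) * (py - y3)) / d
    b = ((y3 - y1) * (px - x3) + (x1 - x3) * (py - y3)) / d
    c = 1 - a - b
    return 0 <= a <= 1 and 0 <= b <= 1 and 0 <= c <= 1

def triangle_mask(size=128, cell=32):
    """White triangles tiled on a black background."""
    rows = []
    for y in range(size):
        row = []
        for x in range(size):
            # Local coordinates inside the current tile.
            lx, ly = x % cell, y % cell
            tri = [(cell // 2, 2), (2, cell - 2), (cell - 2, cell - 2)]
            row.append(255 if point_in_triangle(lx, ly, tri) else 0)
        rows.append(row)
    return rows

def save_pgm(rows, path):
    """Write the mask as a plain-text PGM image (viewable in most editors)."""
    with open(path, "w") as f:
        f.write(f"P2\n{len(rows[0])} {len(rows)}\n255\n")
        for row in rows:
            f.write(" ".join(map(str, row)) + "\n")

mask = triangle_mask()
save_pgm(mask, "triangle_pattern.pgm")
```

Because the mask repeats per tile, the resulting guide image tiles seamlessly, which matters for a texture generator.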
Thank you for explaining about the diffusion model. I had no idea what type of AI model it was using.
Right after “lots of red circles, blue squares, and green triangles” I tried another triangle test with another ten textures. I thought this one was kind of interesting also.
I asked Muse Texture to create, “three large triangles” (image above)
Would it be possible to create another AI model to generate the shape/pattern input?
If there were a second model that was very strong at high-consistency patterns and repeating geometric shapes, you could have it generate the shapes and patterns from a user's text description and then feed them to the diffusion model.
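The front end of that pipeline doesn't even need to be a model; a deterministic parser could turn the shape phrases into drawable specs. Here is a small, hypothetical sketch (the vocabulary tables and the `parse_shapes` helper are made up for illustration) that maps a prompt like the one above to (quantity, color, shape) tuples a pattern generator could rasterize:

```python
# Hypothetical sketch: parse shape phrases out of a texture prompt into
# (quantity, color, shape) specs for a downstream pattern generator.
import re

SHAPES = ("circle", "square", "triangle")
COLORS = ("red", "green", "blue", "yellow", "black", "white")
QUANTITIES = {"lots": 12, "many": 12, "some": 6, "few": 3,
              "one": 1, "two": 2, "three": 3}

def parse_shapes(prompt):
    specs = []
    # Match an optional quantity word, a color, and a shape noun
    # (optionally plural), e.g. "lots of red circles".
    pattern = re.compile(
        r"(?:(\w+)\s+(?:of\s+)?)?({colors})\s+({shapes})s?".format(
            colors="|".join(COLORS), shapes="|".join(SHAPES)))
    for qty, color, shape in pattern.findall(prompt.lower()):
        # Unknown quantity words default to a single shape.
        specs.append((QUANTITIES.get(qty, 1), color, shape))
    return specs

print(parse_shapes("lots of red circles, blue squares, and green triangles"))
```

Unlike the diffusion model, this stage can't drop a requested shape: every recognized phrase deterministically becomes a spec, so the green triangles are guaranteed to appear in the guide pattern.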
Can't you already do this in the pattern tool, right under the prompt?
Interesting idea, although it may run into the same issue of getting consistent shapes that also follow the prompt. Will test a few ideas for this.
We have been exploring the idea of pre-generating a large number of textures and letting users search them, which would also show the inputs used to generate each one. It would help people learn how prompts/reference images change the results of our model, and you could also convert any of them into a pattern image to generate something new (which will be automated soon).