Hello,
this is my first time posting, so sorry for any mistakes.
I’m trying to create a VR application to showcase AI-generated artworks for my capstone. I want to send prompts from the application and then receive and display the generated images. I’m new to Unity and to AI-generated images, so I was wondering if anyone could give me a few guidelines on how to proceed, and whether what I’m thinking of is even possible.
Thanks in advance.
If you can generate an image and store it in Unity’s StreamingAssets folder, it can be accessed in Unity from there. However, you mention sending prompts from an AI app. You could monitor that folder, and when new content arrives, display it using a UI RawImage component or slot it into a material’s Albedo slot.
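A minimal sketch of that folder-watching idea (the "GeneratedImages" subfolder name and PNG format are just assumptions; also note that on Android/Quest builds StreamingAssets lives inside the APK and isn’t readable with System.IO, so there you’d point this at Application.persistentDataPath instead):

```csharp
using System.IO;
using UnityEngine;
using UnityEngine.UI;

// Polls a folder for new PNGs and shows the newest one on a RawImage.
public class FolderImageDisplay : MonoBehaviour
{
    public RawImage target;            // assign in the Inspector
    public float pollInterval = 2f;    // seconds between folder checks

    string watchedFolder;
    string lastShownFile;

    void Start()
    {
        // Assumed subfolder name; use whatever your generator writes to.
        watchedFolder = Path.Combine(Application.streamingAssetsPath, "GeneratedImages");
        InvokeRepeating(nameof(CheckFolder), 0f, pollInterval);
    }

    void CheckFolder()
    {
        if (!Directory.Exists(watchedFolder)) return;

        var files = Directory.GetFiles(watchedFolder, "*.png");
        if (files.Length == 0) return;

        // Pick the most recently written file.
        System.Array.Sort(files, (a, b) =>
            File.GetLastWriteTime(b).CompareTo(File.GetLastWriteTime(a)));
        string newest = files[0];
        if (newest == lastShownFile) return;
        lastShownFile = newest;

        // Load the bytes into a Texture2D and display it.
        var tex = new Texture2D(2, 2);
        tex.LoadImage(File.ReadAllBytes(newest)); // resizes the texture automatically
        target.texture = tex;
    }
}
```

To put it on a material instead, assign `tex` to `GetComponent<Renderer>().material.mainTexture` rather than to the RawImage.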
Find out if whoever is hosting the application has a public web-services based interface like REST or WSDL.
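If it does expose a REST endpoint, sending a prompt from Unity could look roughly like this. The URL, JSON body shape, and raw-PNG response are all assumptions; match them to whatever the actual service documents:

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

// Sketch: POST a prompt to a hypothetical REST endpoint and read back image bytes.
public class PromptClient : MonoBehaviour
{
    const string kEndpoint = "https://example.com/api/generate"; // placeholder URL

    public IEnumerator SendPrompt(string prompt, System.Action<Texture2D> onDone)
    {
        string json = JsonUtility.ToJson(new PromptBody { prompt = prompt });
        using (var req = new UnityWebRequest(kEndpoint, UnityWebRequest.kHttpVerbPOST))
        {
            req.uploadHandler = new UploadHandlerRaw(System.Text.Encoding.UTF8.GetBytes(json));
            req.downloadHandler = new DownloadHandlerBuffer();
            req.SetRequestHeader("Content-Type", "application/json");
            yield return req.SendWebRequest();

            if (req.result != UnityWebRequest.Result.Success)
            {
                Debug.LogError(req.error);
                yield break;
            }

            // Assumes the service answers with raw PNG bytes.
            var tex = new Texture2D(2, 2);
            tex.LoadImage(req.downloadHandler.data);
            onDone?.Invoke(tex);
        }
    }

    [System.Serializable]
    class PromptBody { public string prompt; }
}
```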
I would run this locally and install a local copy of the open-source Stable Diffusion. Then it’s just a matter of sending the prompt via the command line and monitoring a folder where the images are stored.
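On a desktop build you could kick off generation from Unity with System.Diagnostics.Process. The script path and flags below are only illustrative; they vary between Stable Diffusion distributions, so adjust them to the one you install:

```csharp
using System.Diagnostics;

// Sketch for Editor/standalone builds: launch a local Stable Diffusion CLI
// with the prompt and let it write into the folder your watcher monitors.
public static class LocalSdRunner
{
    public static void Generate(string prompt, string outputDir)
    {
        var psi = new ProcessStartInfo
        {
            // Assumed script name and flags; check your SD install's docs.
            FileName = "python",
            Arguments = $"scripts/txt2img.py --prompt \"{prompt}\" --outdir \"{outputDir}\"",
            UseShellExecute = false,
            CreateNoWindow = true,
        };
        Process.Start(psi); // fire and forget; the folder watcher picks up the result
    }
}
```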
Thank you everyone for the help. One question though: I want the app to run on the Oculus Quest 2. Is it possible to make the app monitor a Google Drive folder at runtime and download images from there as they are added? I want the app to continuously retrieve images from the folder while I generate them and add them to the Drive.
Your application boils down to displaying an image created in an external application.
So basically, you’d need to request it from somewhere (a REST API or something similar; see UnityWebRequest in the Unity Scripting API),
then store it somewhere (see Application.persistentDataPath and Application.temporaryCachePath), load it into a texture (ImageConversion.LoadImage), etc.
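Those three steps together could look something like this (the URL is a placeholder):

```csharp
using System.Collections;
using System.IO;
using UnityEngine;
using UnityEngine.Networking;

// Sketch: fetch image bytes over HTTP, cache them under persistentDataPath,
// and decode them into a Texture2D for display.
public class RemoteImageLoader : MonoBehaviour
{
    public IEnumerator Fetch(string url, string fileName, System.Action<Texture2D> onDone)
    {
        using (var req = UnityWebRequest.Get(url))
        {
            yield return req.SendWebRequest();
            if (req.result != UnityWebRequest.Result.Success)
            {
                Debug.LogError(req.error);
                yield break;
            }

            // 1. Store the raw bytes so the image survives app restarts.
            string path = Path.Combine(Application.persistentDataPath, fileName);
            File.WriteAllBytes(path, req.downloadHandler.data);

            // 2. Decode into a texture (ImageConversion.LoadImage).
            var tex = new Texture2D(2, 2);
            tex.LoadImage(req.downloadHandler.data);
            onDone?.Invoke(tex);
        }
    }
}
```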
Regarding Google Drive…
Is there an API? If there’s an API, you can use it. Google Drive does expose a REST API (v3), so this should be doable.
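A hedged sketch against the Drive v3 REST API: list the files in a shared folder, then download each one by id. FOLDER_ID and API_KEY are placeholders, and the API-key approach assumes the folder is shared as “anyone with the link”; otherwise you’d need OAuth instead:

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

// Sketch: poll a public Google Drive folder and download new images.
public class DrivePoller : MonoBehaviour
{
    const string kFolderId = "FOLDER_ID"; // placeholder
    const string kApiKey = "API_KEY";     // placeholder

    public IEnumerator ListFolder(System.Action<string> onJson)
    {
        string url = "https://www.googleapis.com/drive/v3/files"
                   + $"?q='{kFolderId}'+in+parents&key={kApiKey}";
        using (var req = UnityWebRequest.Get(url))
        {
            yield return req.SendWebRequest();
            if (req.result == UnityWebRequest.Result.Success)
                onJson?.Invoke(req.downloadHandler.text); // JSON list of file ids/names
            else
                Debug.LogError(req.error);
        }
    }

    public IEnumerator DownloadFile(string fileId, System.Action<Texture2D> onDone)
    {
        string url = $"https://www.googleapis.com/drive/v3/files/{fileId}?alt=media&key={kApiKey}";
        using (var req = UnityWebRequestTexture.GetTexture(url))
        {
            yield return req.SendWebRequest();
            if (req.result == UnityWebRequest.Result.Success)
                onDone?.Invoke(DownloadHandlerTexture.GetContent(req));
            else
                Debug.LogError(req.error);
        }
    }
}
```

For the “continuously retrieve while I generate” part, call ListFolder on a repeating schedule (InvokeRepeating or a coroutine loop), keep track of which file ids you’ve already downloaded, and only fetch the new ones.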