Just wanted to share that I’ve started to make MRTK3 work with PolySpatial on visionOS.
Have a look at the fork if you are interested and/or want to help.
Simple touch3d input is working (e.g. clicking buttons, playing the piano, or dragging objects around).
I have not tested hand tracking or anything else as I don’t have a Vision Pro yet.
I’ve partly recreated some of the shaders in shader graph as you can see in the screenshot.
I added all the original shader properties to the shader graphs, so that settings are preserved when swapping the shader on a material. However, most properties are not implemented in the new shaders, because my knowledge and time are limited.
Shader Graph doesn’t support uber shaders (shader variants based on feature keywords) AFAIK, so it would become quite computationally heavy to support everything at once.
If anyone wants to improve the canvas shaders (for example, to support lines, coloured edges, and corner radius), that would be great, as I don’t know how to implement that yet.
You can find the graphs here…
Thank you! Please set up a Patreon, Buy Me a Coffee, or Stripe (or another payment system), or keep your repo available, so that we can collaboratively help sponsor your Vision Pro and contributions.
I feel like everyone from the MRTK/HoloLens team is too jaded to do what’s necessary here, so it is up to you and the community.
Community is where it’s at! Keep going!
Thanks! The repo should be public! Link is in the original post.
Good idea with the coffee, because it does take some coffee to do this conversion, haha! I’ve set one up.
Great that you have been looking into it. We will also try to analyse the implementation effort for MRTK soon. We do have our Vision Pro, though, and from looking at the XRHands implementation it might just work. Once Unity’s “Play to Device” feature works again, I could do a test run and share my results.
@Jelmer123, great start, thanks for sharing!
Please report progress in this thread as you go; I will see if I can find some capacity to do testing/validation.
It would probably be really useful to comment about this effort on the MRTK GitHub, as you may find additional contributors/interested people there:
This project didn’t work for me, but I managed to make MRTK3 work in fully immersive mode. Here are the steps:
1- Download the sample in the Apple visionOS XR Plugin and understand the Main scene
2- Disable both hand interactors under the MRTK3 XR Rig
3- Use the gaze interactor from MRTK3 and configure it by adding the correct XRI actions, following the sample project
4- It should work well in Unity, in the simulator, and on the Vision Pro
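Steps 2 and 3 can also be done from a small startup script instead of toggling objects in the editor. Here's a rough sketch; the component name, field names, and object references are my own assumptions, not something from the sample project:

```csharp
using UnityEngine;
using UnityEngine.XR.Interaction.Toolkit;

// Hypothetical helper: on startup, disables both hand interactor objects
// under the MRTK3 XR Rig and keeps only the gaze interactor active, per
// steps 2-3 above. Wire the references up in the Inspector to match your
// own rig hierarchy.
public class VisionOSImmersiveSetup : MonoBehaviour
{
    [SerializeField] GameObject leftHandInteractors;   // left-hand interactor group (assumed name)
    [SerializeField] GameObject rightHandInteractors;  // right-hand interactor group (assumed name)
    [SerializeField] XRGazeInteractor gazeInteractor;  // from XRI; its input actions still
                                                       // need configuring per the sample

    void Awake()
    {
        // Step 2: disable both hand interactors under the MRTK3 XR Rig.
        if (leftHandInteractors != null) leftHandInteractors.SetActive(false);
        if (rightHandInteractors != null) rightHandInteractors.SetActive(false);

        // Step 3: ensure the gaze interactor itself is enabled; the XRI
        // actions it uses are assigned in the Inspector, not here.
        if (gazeInteractor != null) gazeInteractor.gameObject.SetActive(true);
    }
}
```

This only flips the active state at runtime; the XRI action bindings from the sample still have to be set up on the gaze interactor in the editor.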
I’m unable to attach more screenshots because of the Unity limit on media.
What didn’t work? Did you try the wip branch? The main branch is just MRTK as-is.