I’ll provide a deeper view into our work on AR remoting, and hopefully address your question in doing so. AR remoting is currently in development; the first version will include the ability to test baseline AR features (camera, input, raycast, planes) on handheld AR (ARCore, ARKit) devices without having to build the app. We will soon be conducting user studies to gather feedback on early versions of the feature. As it matures, we aim to make a pre-release version publicly available for wider feedback. In terms of testing in the editor without using a device at all, we don’t have concrete plans to share at this time. We’ll share here as soon as we have an update.
Our current Mono fork is circa late 2017/early 2018. We are working on updating it to a recent master. The work is underway, but it likely won’t be ready for the 2020 releases; at this point it’s most likely to land during the 2021 release cycle.
I heard on the forums that DOTS, ECS, Unity.Mathematics, and DOTS Physics currently support deterministic floating-point calculations on the same architecture.
Can you say a little more about determinism in Unity? What are the plans for checksumming of scenes? Checksumming would make it easier to detect out-of-sync errors in deterministic multiplayer networking, or in any other critical case that requires verification.
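For context, the kind of state checksumming described here can already be rolled by hand. Below is a minimal sketch, assuming you gather the simulation-relevant data (e.g. positions) into a NativeArray yourself; StateChecksum and everything in it are hypothetical names, not a built-in Unity API.

```csharp
using Unity.Collections;
using Unity.Mathematics;

// Minimal sketch: hash the simulation-relevant state each frame and compare the
// resulting checksum across peers to detect divergence. Hypothetical helper,
// not part of Unity.
public static class StateChecksum
{
    // FNV-1a over the raw bits of each float, so two peers only match if their
    // floating-point results are bit-identical (which is what same-architecture
    // determinism should give you).
    public static uint Hash(NativeArray<float3> positions)
    {
        uint hash = 2166136261u; // FNV offset basis
        for (int i = 0; i < positions.Length; i++)
        {
            hash = HashFloat(hash, positions[i].x);
            hash = HashFloat(hash, positions[i].y);
            hash = HashFloat(hash, positions[i].z);
        }
        return hash;
    }

    static uint HashFloat(uint hash, float value)
    {
        uint bits = math.asuint(value); // reinterpret the float's bit pattern
        for (int b = 0; b < 4; b++)
        {
            hash ^= (bits >> (b * 8)) & 0xFF;
            hash *= 16777619u; // FNV prime
        }
        return hash;
    }
}
```

Each peer would send its checksum for a given frame alongside its inputs; a mismatch pinpoints the frame where the simulations diverged.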
Answer:
Hello! Great presentation.
Just wondering, is there any ETA for Tiny + the UI Elements subset?
Thanks
Answer:
For the new environment system, have you guys thought of making a “TerrainGraph” akin to ShaderGraph or VFXGraph? I’ve done some experiments using the Job System to manipulate noise and terrain heights with a graph, and it runs really fast (a rough sketch of that kind of setup is below). I’m imagining it would be cool to embed a graph inside a terrain layer; then you could apply that terrain graph, or a stack of layers, to a terrain at runtime. You could also make it easy to apply a shader from ShaderGraph to a layer in the terrain, maybe with a special output node for that.
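For illustration, here is a rough sketch of the kind of experiment described above: a Burst-compiled job fills a height buffer from simplex noise, and the result is pushed into a Terrain. The class names and the Resolution/Frequency values are arbitrary placeholders, not part of any Unity feature.

```csharp
using Unity.Burst;
using Unity.Collections;
using Unity.Jobs;
using Unity.Mathematics;
using UnityEngine;

// Fills a flat height buffer from 2D simplex noise, in parallel.
[BurstCompile]
struct NoiseHeightJob : IJobParallelFor
{
    public int Resolution;
    public float Frequency;
    [WriteOnly] public NativeArray<float> Heights;

    public void Execute(int index)
    {
        int x = index % Resolution;
        int y = index / Resolution;
        float2 p = new float2(x, y) * Frequency;
        // snoise returns roughly [-1, 1]; remap to [0, 1] for terrain heights.
        Heights[index] = noise.snoise(p) * 0.5f + 0.5f;
    }
}

public class NoiseTerrainTest : MonoBehaviour
{
    public Terrain terrain;

    void Start()
    {
        int res = terrain.terrainData.heightmapResolution;
        var heights = new NativeArray<float>(res * res, Allocator.TempJob);

        var job = new NoiseHeightJob { Resolution = res, Frequency = 0.01f, Heights = heights };
        job.Schedule(heights.Length, 64).Complete();

        // Copy into the 2D array that SetHeights expects ([y, x], values 0..1).
        var result = new float[res, res];
        for (int y = 0; y < res; y++)
            for (int x = 0; x < res; x++)
                result[y, x] = heights[y * res + x];

        terrain.terrainData.SetHeights(0, 0, result);
        heights.Dispose();
    }
}
```

A TerrainGraph-style tool would presumably generate and chain jobs like this behind each node.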
Answer:
Hi MechaWolf99,
We are working on several APIs; we want the Scene Tools to be generic concepts that can be used as frameworks both internally and externally. We provide solutions out of the box, but let you extend them to your liking.
In this specific case we are talking about Prefab Handles, which let you create handles through GameObjects and provide a framework that works in any context. You basically create your own prefab that can be used to replace or create gizmos, handles, and manipulators.
Hope this helps!
It’s still planned - something we need to get on with - we know! Sorry for the delay on this front!
If by “multi-rendering path shaders” you mean something like LODs within your shaders, you can use keyword switches to build out different variants of your Shader Graphs (see documentation: https://docs.unity3d.com/Packages/com.unity.shadergraph@8.0/manual/Keywords.html).
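As a quick illustration of how those variants can be driven from script, here is a small sketch. It assumes a Boolean keyword was added to the Shader Graph blackboard with the reference name "_QUALITY_HIGH" (that name is made up for this example).

```csharp
using UnityEngine;

// Sketch only: toggles a (hypothetical) Shader Graph Boolean keyword per material.
public class ShaderLodSwitcher : MonoBehaviour
{
    public Renderer target;

    public void SetHighQuality(bool high)
    {
        // Enabling/disabling the keyword selects which compiled variant is used.
        if (high)
            target.material.EnableKeyword("_QUALITY_HIGH");
        else
            target.material.DisableKeyword("_QUALITY_HIGH");
    }
}
```

Note that if the keyword is declared as a Shader Feature rather than Multi Compile, variants not referenced at build time may be stripped, so runtime switching generally wants Multi Compile.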
If you’re looking for support for multiple render pipelines within your Shader Graph, we’re doing work there, and this will also make it easier to get more specific about stages (see a UX mock-up and more info: https://portal.productboard.com/8ufdwj59ehtmsvxenjumxo82/c/52-master-stack-and-stage-blocks).
Currently we are not working on surface shaders; the shader team’s focus is on building the best possible Shader Graph and artist tools. Our goal is for you to be able to do what you could do with surface shaders with Shader Graph (supporting both node and code paths). That being said, we are actively engaging with users who have expressed a strong desire for surface shaders on the forums here: https://discussions.unity.com/t/781021 We would value any contribution.
Hi chrisk,
The Kinematica preview package is ready and will be released by the end of April. I’m glad you’re looking forward to it; it’s going to be awesome!
Cheers!
Hi! First of all, I wanted to thank everyone involved in this roadmap session, the Q&A, and the recent “state of Unity” forum post. This kind of communication is really appreciated.
In the XR slide you mention “On-device remoting for handheld AR”, but I’m not sure what that means. Could you clarify a bit?
Answer:
Hi Jawsarn,
Thanks for the feedback. This is something we are aware of and want to improve however we can.
Not directly related, but we are working on improving the general visibility of what DOTS is doing in the editor (what was previously confined to the Entity Debugger). We have also added native debugging support for Burst in the 1.3 release.
I’m interested in RTXGI (NVIDIA RTX Global Illumination) from NVIDIA Developer.
Do you have any information about how / when Unity will implement this? Thanks.
Answer:
We want to enable this in our new non-destructive environment, and we’re discussing how this can be integrated. We will share more once the plans firm up.
We don’t have any plans for direct integration of MRTK, but it’s worth noting that Microsoft has released a new version of MRTK (2.3) that includes support for XR SDK, which is our plugin framework that AR Foundation is built on top of. So current guidance is to use Unity’s Windows XR Plugin and Microsoft’s MRTK 2.3.
Hello,
We are currently looking into how we can improve the user experience of working with DOTS in the editor in general. One area in particular is how to view what is in a scene more clearly.
There will be updates throughout the year on those efforts in the DOTS subforum.
I’m wondering about foliage and grass support for HDRP terrain in 19.3. Could you improve these tools?
Answer:
Thanks for the overview.
Quick question: why are you guys focusing on enabling tens of thousands of cameras by moving the system to DOTS? Is that really something you should be focusing on? Last time I checked, no game needed anywhere near that many cameras. Is that really a feature anyone asked for?
This is just one example of a roadmap feature that seems weird. The priorities mostly seem wrong.
Overall, almost the whole roadmap feels somewhat off. Some improvements, like the new importer, the profiler features, and the quality-of-life improvements in the editor, are really nice (they should have come years ago, but better late than never, and the improvements so far are really good; great job by those teams). But quite a lot of items seem unnecessary, especially when looking at the massive issues Unity has.
URP not having a deferred renderer after almost two years is pretty rough. HDRP seems to only work well for artists (there is no shader documentation, and the documentation as a whole seems geared towards artists instead of programmers; the fixed render queue with only limited injection points limits extensibility; there is no realtime GI solution throughout 2020; …). Why are there multiple physics solutions, but both in preview? Why isn’t there proper build support for more platforms when using cloud builds? Why is this the third networking solution in about five years? Why is there a focus on machine learning when there aren’t even proper AI tools in Unity (why isn’t there any form of behaviour tree)? Why are there sculpting tools in Unity? I cannot imagine a single person using them instead of ZBrush/Blender. And why does it take about another full year to add C# extensibility to the VFX Graph?
I know this has been brought up quite a lot on the forums, but you guys should really create a game using your own engine and tools, instead of just making demos for specific features (sometimes with really hacky code in them) and calling it a day. It has helped Unreal a lot; I’m sure it would help you guys as well.
This post got longer than originally planned, and I’m sorry for some of the passive-aggressive tone throughout it; I think I just had much higher expectations of this roadmap.
Answer:
Yes, our VFX Graph sample project is here, and here’s a blog post that goes into detail on them.
Hi TextusGames,
We are still experimenting with both options: the vertical flow is great for visibility when there is only one execution flow, but it is more difficult to handle with multiple flows (there is a lack of space for the nodes because we still write text horizontally). Also, in a context where the VS graph framework is used for other graphs as well (state machines, procedural graphs, animation graphs, etc.), we have to consider several other use cases. We hope to be able to offer both options, but we may have to limit ourselves to only one for the first release.
Proxy ports are going to be supported as they are a great tool in larger graphs.
You can’t directly call classes in DOTS VS, but you can certainly write your own nodes.
Thanks for the feedback and have a great day!
Thanks for the update, I can hardly wait for this round of developments!
Is there any news about Cinecast development?
Answer: