I would like to read some objective feedback from people actually making use of the RTX technologies that Unity provides.
I’m trying to decide whether switching engines for a specific RTX project is a viable option.
Currently, the Nvidia branch of UE4 is about as useful as shooting yourself in the foot before going for a run.
Performance is seriously crippled by the engine itself, making it essentially impossible to get anything running at 4K in real time.
CryEngine is OK at it, but it’s making me want to literally cry (aptly named?) over some of its core features and inner-workings…
Does Unity do this (RTX) any better?
Can someone (who has maybe published something already?) detail their experience using the engine and the features?
The objective target is to get 60 fps or better (ideally 144…) gameplay in a polygon-heavy, realistic-looking environment at 4K native on a 3090 / Intel 12th-gen i9 development machine.
The ability to scale down and render on lower-end 30-series cards is nice, but not really a requirement at this point (the project is of a scientific nature, so hardware is not currently a concern).
OK then, if hardware is not a cost factor, I recommend you try the HDRP template. Simply get Unity 2021.2, make a new project and choose the HDRP template. From there, enable ray tracing and test it with DLSS. It’s not viable without DLSS.
Depending on your hardware and what you are doing, you can reach 60 fps @ 4K, but I have no idea if you could reach 144.
This only takes a few clicks and will tell you all you need to know. After all, it’s impossible to predict your use case and which features you need.
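If it helps, here is a minimal smoke-test sketch for such a test scene. It assumes Unity 2021.2 / HDRP 12.x with the NVIDIA DLSS package installed, and that ray tracing and DLSS are already ticked in the HDRP asset’s quality settings; the script only checks hardware support and flips the per-camera DLSS toggle, so treat it as a rough outline rather than the official setup path.

```csharp
// Hypothetical smoke-test script for an HDRP test scene (Unity 2021.2 / HDRP 12.x).
// Assumes ray tracing and DLSS are already enabled in the HDRP asset's quality settings.
using UnityEngine;
using UnityEngine.Rendering.HighDefinition;

public class RtxSmokeTest : MonoBehaviour
{
    void Start()
    {
        // Confirms the GPU / driver / graphics API combination actually exposes DXR.
        Debug.Log($"Hardware ray tracing supported: {SystemInfo.supportsRayTracing}");

        // Request DLSS on the main camera; HDRP only honours this if the
        // NVIDIA DLSS package is installed and DLSS is enabled in the HDRP asset.
        var cam = Camera.main;
        if (cam != null && cam.TryGetComponent(out HDAdditionalCameraData camData))
        {
            camData.allowDeepLearningSuperSampling = true;
        }
    }
}
```

The actual DXR toggles live in the HDRP asset (or the HDRP Wizard), so this is just a quick way to confirm the camera and hardware are cooperating before you start profiling.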
If you get stuck, just post more, it’s quite an active forum.
Thanks for the quick reply.
4K at 144 fps would be the holy grail - even for Nvidia.
I’m not expecting such unrealistic results, though I’m sure we all wish for them considering the cost of a 3090 in the first place (why doesn’t it make me coffee too?).
If anyone has more specific feedback from their own personal experience using Unity and Ray Tracing I’d love to hear it.
A further update: I’d also like to hear about any shortcomings in the ray-traced rendering pipeline.
For instance, Unreal doesn’t do foliage well (or at all) unless you build from the Nvidia branch.
And things like hair grooms behave even more erratically than in point lights. Glass doesn’t ray trace properly either; you often have to push glass items into the deferred rendering pipeline via their materials.
CryEngine, on the other hand, renders quite well, and without needing an Nvidia card specifically to boot. I haven’t seen any issues with its rendering so far, but I also haven’t used its ray tracing as extensively as Unreal’s…
Wondering if there are any similar issues for Unity specifically that I should consider.
Epic Games is abandoning pure ray tracing in favor of a hybrid approach called Lumen, which has the potential to be much faster while maintaining most of the quality. If you need realistic lighting, then you need to try Unreal 5, not 4.
What do you mean by 4K native? Are you referring to 4K without the use of an upscaler? Because currently the only way to achieve good performance is to render internally at a lower resolution (UE5 defaults to 1080p internally) and upscale to 4K. I don’t see this changing for at least a couple of generations.
Also, don’t believe what you read. UE5’s Lumen is as awful as the rest, if not worse. It currently also comes with heaps of limitations - and I’m being kind, because it’s still in beta.
And you are correct. Epic seems to think that upscaling is an OK solution to the fact that their rendering pipeline has been crippled for nearly two years (since 4.26 was released).
Not only is that not going to fly for just about anyone trying to produce anything professional, IMHO, but it’s also costing them heaps of users…
Have you tried the preview or the early access? Because the early access release used software acceleration and had more limitations than the preview release, which has hardware acceleration.
I have tried it all; Epic is taking serious hits from everything (particularly CryEngine - it makes you cry, but the performance and look are far superior when you do things right).
But still, I’m more curious about any similar limitations or issues with Unity, since I haven’t had two years to mess around with it as I have with Unreal and CryEngine (yet).
I fear you will have to reduce your quality requirements a bit if you want to use a game engine. The thing is, no game engine will aim directly at an RTX 3090 setup and put maximum effort into that quality level, because it would be a waste when only 0.01% of all gamers would experience it at all…
So instead, techniques like DLSS and Lumen are being established because they scale more easily.
I cannot speak from personal experience with HDRP/DXR, but the thing is, in Unity’s case not only is the RTX implementation relatively new, the whole rendering pipeline is fairly new. On one hand it can be good to have something fresh without the burden of backwards compatibility; on the other, it is not battle-hardened over the years like what UE and Cry have.
Possibly, if some mod thinks it appropriate and moves the thread I wouldn’t mind at all.
Maybe true, maybe not.
There are a few games out there that manage very good performance while using ray tracing.
Now, obviously it’s usually a mix of smoke and mirrors to the point where you can barely tell - developers and tech artists have gotten really good at using tools to fake effects instead of truly rendering them the ray-traced way…
Still, I don’t think that necessarily means that because something is a “game” engine, it has to perform poorly when rendering 4K via ray tracing.
Yes - the issue is that between 1080p and 4K you have four times the number of rays (or more): 1920x1080 is roughly 2.1 million pixels versus roughly 8.3 million at 3840x2160, so somewhat lower performance is expected.
It was/is just the same for any rendering pipeline, due to the number of tris on screen, textures, etc.
I have yet to see a brilliantly good looking published Unity game to be honest - that said, many Unity titles are exceptional in other ways, which is why I’m starting to bark up (or try to) the Unity rendering tree…
Also, the fact that something is new and not battle-hardened doesn’t necessarily mean it’s bad - Epic is the exception to this rule
(But they have consistently done this on purpose, so they can’t even be considered an exception really…)
It shouldn’t matter, though. CryEngine’s benchmark demo from two years ago manages over 65 fps consistently at 4K.
I just have to get whatever I use to do the same and/or keep using cryengine…
You literally make 0 sense.
A benchmark is precisely what is used to determine performance across different machines/configurations.
It is PRECISELY a real-world application…
The gripes I have with CryEngine itself are that I need to go in and re-code some of it to get render targets and to write into textures from a probe, and that I’ll need to build my own world-partitioning/streaming system to get the world size this project needs to run with - all things that Unity seems to do already (see the sketch below).
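For reference, here is a minimal sketch of the stock Unity equivalents of those three items, using Camera.targetTexture, ReflectionProbe.RenderProbe and additive scene loading. The camera/probe references and the “Chunk_03” scene name are placeholders, and HDRP layers its own components on top of reflection probes, so treat it as a rough outline rather than a drop-in solution.

```csharp
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.SceneManagement;

public class BuiltInFeaturesSketch : MonoBehaviour
{
    public Camera sourceCamera;   // any camera whose output you want in a texture
    public ReflectionProbe probe; // a realtime reflection probe placed in the scene

    void Start()
    {
        // 1) Render target: point a camera at a RenderTexture - no engine changes needed.
        var rt = new RenderTexture(1024, 1024, 24);
        sourceCamera.targetTexture = rt;

        // 2) Trigger a probe capture from script; the result lands in
        //    probe.realtimeTexture once the (time-sliced) render finishes.
        probe.mode = ReflectionProbeMode.Realtime;
        probe.refreshMode = ReflectionProbeRefreshMode.ViaScripting;
        int renderId = probe.RenderProbe();
        Debug.Log($"Probe render queued, id {renderId}");

        // 3) Basic world streaming: load world chunks additively as needed
        //    ("Chunk_03" is a hypothetical scene name).
        SceneManager.LoadSceneAsync("Chunk_03", LoadSceneMode.Additive);
    }
}
```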
CryEngine’s Neon Noir is not using the same technology that you are asking about here. It’s a software-based solution very similar in concept to Lumen.
Neon Noir: Crytek’s Software Ray Tracing! How Does It Work? How Well Does It Run?
The last time I tried HDRP’s RTX ray-tracing features (on a 20xx card, not a 30xx), they were extremely slow and often relied on accumulation over multiple frames - so you have to use TAA to blur the mess, but I found TAA kind of soft… etc.
I hope true ray tracing becomes a thing in the future, but for now there are many shortcuts that have to be taken to achieve some level of ray tracing.
Yeah, Epic wishes. Lumen, comparatively, is trash - not just performance-wise, but code-wise too. Something they are apparently fully aware of, claiming it will be fixed at some point.
To date, Chaos hasn’t been fixed, so don’t go holding your breath…
As far as CryEngine goes, they added DX12 ray tracing support in 2020.
With an actual scene I’m getting far faster render times (to the point I thought I had it disabled!).
I guess what I need to do is recreate the exact same scene in all three engines so as to have some sort of comparison point.
Even for looks, since in my case at least that will drive what I use in the end more so than the engine features (as much as it’s painful to have to rewrite basic stuff)…
That makes sense though; the drivers for the 10xx and 20xx cards were retroactively patched.
My 1080 Ti does great at most things too, but by comparison the 3090 doesn’t even blink when rendering 4K-plus scenes.
Blender render times aren’t as good as you’d expect, though - but all timings are much faster in general.
Given that the number of rays being traced is insane, hoping to get decent performance on older cards doesn’t really seem feasible.
If you want to argue that, for the amount of money paid, a 3090 should render 16K ray tracing at 144 Hz, I’m with you on that.
The cards themselves are as overhyped as it gets - and definitely nowhere near what the hype makes you believe they should be.
At the same time, compared to a 3080 Ti, the 3090 allows me to keep four different editor “things” with stuff loaded in VRAM running concurrently - not that that’s a selling point for people…