Faster build times possible at some point?

As someone who’s recently gotten into WebXR and React, being able to get a workable, updated 3D result for testing online the moment I click the refresh button almost feels like a dream. That dream is also great for clients and management, because I now spend much less time waiting for builds and can iterate faster.

For one WebXR project using JS libraries, I could do with zero prior knowledge in hours what I would need days for in Unity even with years of experience, in part because of having to build, deploy, and test on the relevant devices. I know it is possible (and I even use it) to run Unity remotely against HoloLenses and similar devices, but when the client needs builds to test on their own devices, I have to comply.

Then there’s the Mono backend, whose builds were also on average faster for what we are doing (XR) than IL2CPP’s are now. I am aware that IL2CPP has been “experimental” for a while, but even now it feels too slow for its own good. I haven’t noticed much speedup from incremental builds either. Maybe it’s a second or two, but that’s not much when the build takes half a minute on average.

So the question is: will DOTS help speed up build times?

How would it?

How should I know? I just hope it has benefits beyond game performance. Compile times are not something I can explain to a client with confidence when they look at where the hours go…

Oh, it was not meant to be offensive, sorry if it sounded like that. I was curious whether you had something specific in mind. I know absolutely nothing about you and your team/company, so please don’t take this as explaining something you already know or already do. However, the first thing I teach everyone working with Unity in any non-hobbyist scenario is to move as much of that waiting as possible off your own time. None of this is new or particularly fancy; it’s the very basics you should follow.

  • If any developer on your team still waits for a Unity build or light baking to finish because their hardware is blocked: move building, packaging, and compiling to one or even multiple build servers. It is almost impossible for this investment not to pay for itself almost immediately (see the sketch after this list).
  • If you need to switch platforms on your machine, keep full-sized copies of your project per platform, so that you never let Unity perform an actual platform switch. Simply pull in code and asset changes via the VCS of your choice. SSDs are too cheap nowadays to let a developer sit through a platform switch on time you pay for and have to bill a client for.
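For illustration, here is a minimal sketch of the kind of entry point a build server can invoke in batch mode (the class name, scene path, and output path are made up; BuildPipeline and the command-line flags are Unity's real editor API):

```csharp
// Invoked on the build agent, e.g. via:
// Unity -batchmode -quit -projectPath <path> -buildTarget Android
//       -executeMethod BuildServer.BuildAndroid
using UnityEditor;
using UnityEditor.Build.Reporting;

public static class BuildServer
{
    public static void BuildAndroid()
    {
        var options = new BuildPlayerOptions
        {
            scenes = new[] { "Assets/Scenes/Main.unity" }, // assumed scene path
            locationPathName = "Builds/Android/game.apk",
            target = BuildTarget.Android,
        };

        BuildReport report = BuildPipeline.BuildPlayer(options);

        // Make the CI job fail visibly if the build did not succeed.
        if (report.summary.result != BuildResult.Succeeded)
            EditorApplication.Exit(1);
    }
}
```

With one such method per target, each agent can stay parked on its platform and never pay for a platform switch.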

Neither the C# compiler nor Burst is really the bottleneck here (and yes, dear JavaScript, Dart, … programmers, I know that compiling C# still feels like an eternity to you); it is the many steps needed to get to native code in the IL2CPP pipeline. For WebGL it is especially horrible: the code first runs through the incredibly slow IL2CPP pipeline to get to C++, because C++ is valid input for Emscripten, which is then used to get to WebAssembly. Having Emscripten available made it much easier for Unity to support that platform without reinventing the wheel, but at the cost of the full conversion pipeline and therefore time.

The solution in such scenarios is, most of the time, not to make the full build faster, but to support hot-reload and reload only what you need. As an example, Live++, a wonderful tool made by an equally wonderful person, supports this for C++ and is what Unreal has integrated to provide this feature. However, obviously you need both for this: the fast build and an incremental build, so that you do not have to hot-reload everything but only the part of the code that changed. I have not seen Unity moving in that direction any further than the experimental in-editor domain-reload-skipping settings (so more or less nothing is happening on this front, at least as far as I am aware).

So, in a way, DOTS can help you iterate faster by letting you make artistic changes in the editor and see them in a built player. To understand what I mean, see this part of the 2019 Copenhagen Keynote.

To help reduce shader variant build times, I think shader variant caching is supposed to work better somewhere in 2020? Sadly I have no link to back that up.

Edit: out of curiosity, and in case any Unity dev stumbles upon this: is Unity’s IL2CPP based on, or does it have anything to do with, this project: https://github.com/anydream/il2cpp? The GitHub project is from 2018, so clearly years after Unity started doing IL2CPP, but I have not followed it long enough to know whether there was any predecessor.

3 Likes

Is this really still an issue these days? The biggest advantage of Asset Database v2 is that it finally keeps a separate cache for each build target. Gone are the days of having to reimport everything when switching targets. And the Accelerator helps with completely fresh checkouts.

2 Likes

Honest answer: absolutely no idea :smile: I have not re-run the test benchmark with Asset Database v2, but thanks for pointing it out. I’ll definitely forward it and test that :slight_smile:

For the time being, ADB v2 feels like it works, but in many cases it doesn’t :slight_smile:
I almost never change target, but I really often change branch, and I get a new blocking import every time there are more or less big changes in the branch, reimporting all FBX files and textures every time :frowning:

Looking forward to the rest of Unity’s improvements in this area :slight_smile:

1 Like

As has been pointed out, DOTS itself can’t really improve build times.
Unity as a whole, however, is working on a lot of changes to improve build times.

  1. We are rewriting our compilation & build pipeline, IL2CPP / Mono / Burst, to be fully incremental. Internally we use a project called Bee for this. It is how we compile the Unity codebase: a C#, graph-based incremental build pipeline frontend using Tundra on the backend (GitHub - deplinenoise/tundra: Tundra is a code build system that tries to be accurate and fast for incremental builds). This gives us very fast detection of which DLLs need to be rebuilt, caches them, etc.

Some of these pieces are more complex than others. E.g. IL2CPP requires a lot of refactoring related to how generic sharing works in order to make builds fully incremental. The incremental IL2CPP work is not complete and still needs a lot of work.

  2. We are moving deployment to a fully incremental device deployment pipeline called PRAM. (Copying gigabytes of data to devices quickly becomes the bottleneck once everything else is incremental.)

  3. We added a low-level build API to Unity that separates the build manifest, code, and data, so each can be incrementally deployed on its own. This enables the incremental build pipeline. This functionality has landed in 2020.2.

  4. In DOTS we are using asset bundles for all graphics assets. The next DOTS release supports asset bundle deduplication between subscenes, and the resulting asset bundles are cached / incrementally built. Converted entity binary scene files are already cached / incrementally built by the asset pipeline.

In the coming months we will be able to ship a preview of the first phase of the incremental build pipeline as part of DOTS. It will give big wins compared to what we have now and will work well for mid-sized projects. It will be an opt-in checkbox in the build config assets. The first preview will be focused on DOTS, but we will bring this functionality to GameObject-based projects later on.

This is definitely a very active area of investment right now.

Beyond this, we have big plans for more optimised data formats for asset bundles and for ensuring we can truly support massive-scale projects with a fully incremental pipeline as well as a full-featured device live link.

All of this is part of the DOTS principles of scale & iteration speed, but we are building it in a way that lets existing MonoBehaviour-based projects get all the performance wins too.

18 Likes

Is there any new progress on incremental script (code) compilation too?

You mean in-editor incremental script compilation?

Unity 2020.2 ships with support for reference assemblies. This means that if you split your project into many asmdef files and change only method bodies, without changing public APIs, it will recompile only those assemblies and skip recompiling dependent DLLs (because the API they use has not changed). This is often a very big win for iteration speed.
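For anyone who hasn’t split their project yet, an asmdef is just a small JSON file. A minimal sketch (both assembly names here are made up):

```json
{
    "name": "MyGame.Gameplay",
    "references": [ "MyGame.Core" ]
}
```

With reference assemblies, editing a method body inside MyGame.Core no longer forces MyGame.Gameplay to recompile, as long as MyGame.Core’s public API is unchanged.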

Separate from that, and specific to DOTS: a large amount of the script compilation time in a DOTS project is due to all the IL post-processing we are doing, e.g. the Entities.ForEach / authoring component code-gen, etc. We are looking at using Roslyn source generators to make the generated code more transparent and, most importantly, to get significant gains in compilation time.
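For context, here is a minimal sketch of the kind of code this IL post-processing currently rewrites (the system itself is made up; Entities.ForEach and SystemBase are the real Entities-package API of this era):

```csharp
using Unity.Entities;
using Unity.Mathematics;
using Unity.Transforms;

public class MoveUpSystem : SystemBase
{
    protected override void OnUpdate()
    {
        float dt = Time.DeltaTime;

        // This lambda is rewritten into a Burst-compilable job by IL
        // post-processing (the step that source generators are meant to replace).
        Entities.ForEach((ref Translation translation) =>
        {
            translation.Value += new float3(0f, 1f, 0f) * dt;
        }).ScheduleParallel();
    }
}
```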

Skipping domain reload when entering play mode shipped in 2019.3, and DOTS fully supports it. Entering play mode on the DOTS Shooter sample takes < 1 second, including booting up server & client in the same process.

All our internal DOTS productions / samples have this enabled. For GameObject-based projects it often requires a lot of refactoring, because MonoBehaviour-based game code has a tendency to use static variables. Since those will not get reset on entering play mode, your game can end up not working correctly. It is simply a question of refactoring your own game code to account for that (see the sketch below).
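The usual pattern for that refactor (a sketch; the class and field are made up) is to reset static state explicitly so it no longer depends on a domain reload:

```csharp
using UnityEngine;

public static class GameState
{
    public static int Score; // static state that survives a skipped domain reload

    // Runs on every enter-play-mode, even with domain reload disabled,
    // restoring the statics to a known state.
    [RuntimeInitializeOnLoadMethod(RuntimeInitializeLoadType.SubsystemRegistration)]
    static void ResetStatics()
    {
        Score = 0;
    }
}
```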

Entities game code, in contrast, is generally written without static variables, mostly because systems are explicit, there is multi-world support, etc. So for DOTS projects it is usually very little work to support. If you start a new project, you should obviously enable it from the start.

6 Likes

Ya. I would like to change one line of code and have only the code I changed compiled, instead of reloading the whole domain. From the info I got last time, enabling this feature requires upgrading Mono to the latest version, which will not be completed within the 2020.x release cycle. Is there even more improvement to come besides reference assemblies?

1 Like

I’m very happy to hear that there’s work going on in this area. Iteration times can be unbearably slow: sometimes a tiny script change can cause 30 seconds to a minute of reloading in a tiny project! It feels like it’s been getting slower, with ADB v2 making things worse. Are you aware of this thread, where many people have documented the problem? The iteration speed significantly hampers my productivity these days, and it can be infuriating to use.

You’ve talked before about aiming for 500ms iteration speeds, but we’re a couple of orders of magnitude away from that at the moment. Is that still a viable target?

1 Like

Thank you. I am on a team with 3-4 Unity developers (more, depending on load) working primarily on B2B XR UWP projects for various purposes (mostly ARM).

Would a remote/cloud build also be a possibility, or just a build-to-platform function that works separately from the editor?

Sadly, I personally cannot recommend Cloud Build if your project is of any normal size. Just buying a cheap machine for the office is so much faster, as it can cache all the required data. Even a cheap machine in a developer’s home, if you do not have an office, is likely a better solution. (Dear security team, please do not hit me with that huge baseball bat!)

TeamCity is free and super easy to configure (which does not mean that one of the many alternatives isn’t; it is just the one I have the most experience with and can therefore recommend).

Our projects tend to get quite huge, so anything that offloads build times and lets us keep using Unity is a bonus.

Does anyone have a good source of information on how to build a Unity build farm of some sort? I am running TeamCity on AWS, with physical Dell machines at the office building console builds and an AWS machine building a Windows build. But I need more “power” before I go crazy watching slow-moving progress bars!
On my own machine, I tried boosting memory from 16 GB to 32 GB and saw little build-speed difference, so it seems that boosting memory is not the answer. I have the option of adding a graphics processor to the AWS machine, but there is a cost associated with it. I can of course increase CPU, but all that “experimentation” costs money on AWS, so I was wondering whether someone has already gone through it and found a nice compromise on how much power, and where exactly, to throw it at a system if you don’t have unlimited money.

1 Like

@fwalker you need to find out what the main source of your build slowness is:

  • If it is light baking, you want to switch to GPU lightbaking with RTX support or try alternatives (e.g. Bakery).
  • If it is any kind of translation (e.g. IL2CPP, WebGL), single-core speed is what matters; as far as I know, the translation tools do not really multi-thread most of the crucial work.
  • If it is just that you have to build for so many platforms, you need more build agents, one set up per platform, to build in parallel.
  • If it is switching platforms, you need to re-configure your agents to keep each platform checked out, e.g. in separate folders per platform, so that no agent ever switches platform (make sure to specify the platform when you launch the editor).
  • If it is assets in general, you want to move to Addressables to avoid having to build them when not necessary.
  • If it is script compilation time, you might want to experiment with asmdefs (to be honest, there are still quite some problems that can make your build times slower instead of faster, but at least in theory it can help, and sometimes it does).

Definitely read the logs, try any of the build report scripts and assets, and first understand which part of your builds takes the most time.
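If you’d rather not pull in an asset, a small editor script can already answer that question. A sketch (the class, scene path, and output path are made up; BuildReport is Unity’s real 2018+ API):

```csharp
using UnityEditor;
using UnityEditor.Build.Reporting;
using UnityEngine;

public static class BuildTimeBreakdown
{
    public static void BuildWithTimings()
    {
        BuildReport report = BuildPipeline.BuildPlayer(new BuildPlayerOptions
        {
            scenes = new[] { "Assets/Scenes/Main.unity" }, // assumed scene path
            locationPathName = "Builds/Win64/game.exe",
            target = BuildTarget.StandaloneWindows64,
        });

        // Log how long each build step took, so the slow phase is obvious.
        foreach (BuildStep step in report.steps)
            Debug.Log($"{step.name}: {step.duration.TotalSeconds:F1}s");
    }
}
```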

To be honest, I am not sure how to properly do the caching on AWS, so the answer was targeted more towards your physical machines.

1 Like