Assert.That After delay not possible?

Hey Community!

Our app has a lengthy App StartUp routine which initialises all core systems and fetches a bunch of data. I wanted to write a test with Unity's Test Framework v1.4.2 to check whether the StartUp finishes in less than 5 seconds without errors.

I was able to achieve this like so:

using System.Collections;
using NUnit.Framework;
using UnityEngine;
using UnityEngine.TestTools;

[UnityTest]
public IEnumerator StartUp()
{
    var startTime = Time.time;
    while (!App.Instance.StartUp.IsStarted)
    {
        // Fail once the 5-second budget is exceeded.
        if (Time.time - startTime > 5)
            Assert.Fail("StartUp didn't finish in under 5 seconds.");
        yield return null;
    }

    Assert.Pass($"StartupTimeWas: {Time.time - startTime}");
}

but I found a seemingly much better solution, which would read like this:

[Test]
public void StartUp() => Assert.That(App.Instance.StartUp.IsStarted, Is.True.After(5000), "StartUp didn't finish in under 5 seconds.");

However, this doesn't work, because the assertion blocks and Unity can't progress to the next frame. I tried multiple things:

  • Using an async Task signature

  • Using [UnityTest] and adding an arbitrary yield return null somewhere

  • Using Assert.That(() => IsStarted, Is.True.After(5000)), which does polling

but nothing seems to work. The worst part is: I'm 99% sure that I had something like that implemented a few hours ago and it worked like a charm, but I'm beginning to believe that I'm just getting delusional, since I've tried everything to make it work again, and it doesn't seem to be possible.

Appreciate any suggestions to clean up this coroutine mess! We are just getting started, and this delayed signature is exactly what we need to get clean and readable tests :wink:
Other approaches that might improve code & readability while achieving the same goal are welcome too!

Best,
Nils

This "After" is an NUnit-internal timer; it does not progress together with Unity's frame loop. Pretty certain you cannot use it, at least not this way, because Unity needs the yielding IEnumerator to progress to the next frame. Without that, nothing happens on Unity's end.

The test that worked … well, works. But it's a terrible test. It assumes that startup will finish within a fixed time, so it will fail anytime it runs on a machine that isn't as fast, or whenever the machine happens to be busy with other things (e.g. a virus scan or an update installation).

Instead, turn to the Performance Testing package and simply measure how long it takes for the app to start up. That is way more meaningful.
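Something along these lines, for example (a minimal sketch, assuming v2.x of the com.unity.test-framework.performance package; StartUp_MeasureDuration is just an illustrative name, and the App.Instance.StartUp.IsStarted check is taken from your snippet):

using System.Collections;
using Unity.PerformanceTesting;
using UnityEngine.TestTools;

public class StartUpPerformanceTests
{
    [UnityTest, Performance]
    public IEnumerator StartUp_MeasureDuration()
    {
        var stopwatch = System.Diagnostics.Stopwatch.StartNew();

        // Wait until the app reports its startup as finished.
        while (!App.Instance.StartUp.IsStarted)
            yield return null;

        stopwatch.Stop();

        // Record the duration as a custom sample; it shows up in the
        // performance test report instead of passing or failing the test.
        Measure.Custom(
            new SampleGroup("StartUpTime", SampleUnit.Millisecond),
            stopwatch.ElapsedMilliseconds);
    }
}

Instead of asserting against a hard-coded budget, you can then track the recorded samples over time.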


It's totally fine that it will fail after 5 seconds. We will pick minimum supported low-end devices for our tests and run all tests on these, creating our own minimum benchmark. Sure, it would be nice to test on a variety of devices, but let's be honest, probably no-one does that :wink:

For now, I'm trying to accept that the tests need to be written in this very primitive coroutine style. Maybe we can come up with some neat wrappers for common problems; it's a lot of experimentation for me right now.
Anyway, I wasn't aware of the Performance Testing extension, I will definitely take a look at that too, thanks!

Again, this test, if it fails, tells you nothing. Perhaps the machine was just tasked with pulling a repository at the same time. Test fails, someone looks into it, wasting time. Worse: the test gets re-run once, twice. All seems fine. Next day it's red again. Rinse, repeat. Not a good idea; trust me, you don't want devs hunting ghosts. :wink:

This isn't something you test and fail; it's something you measure, and provided that the measurement API has the necessary callbacks, you could flag it as a warning or post it on the wiki or something.

It’s also not really primitive code, it’s just not “functional” in style. You can easily wrap that in nicer methods if you absolutely insist on having it on a single line.
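For example, something like this (a sketch; TestUtils.WaitForOrFail is just an illustrative name, and the condition and timeout come from your original test):

using System;
using System.Collections;
using NUnit.Framework;
using UnityEngine;

public static class TestUtils
{
    // Polls a condition every frame and fails the test if it doesn't
    // become true within the given timeout.
    public static IEnumerator WaitForOrFail(Func<bool> condition, float timeoutSeconds, string message)
    {
        var startTime = Time.time;
        while (!condition())
        {
            if (Time.time - startTime > timeoutSeconds)
                Assert.Fail(message);
            yield return null;
        }
    }
}

With that, the original test becomes a one-liner again:

[UnityTest]
public IEnumerator StartUp() =>
    TestUtils.WaitForOrFail(
        () => App.Instance.StartUp.IsStarted,
        5,
        "StartUp didn't finish in under 5 seconds.");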

For readable and composable asynchronous tests and test utilities, you might want to take a look at Responsible.

Disclaimer: I'm the author. Also, Responsible was written before the test runner had async test support. I'm not sure I'd approach the issue the same way these days, but I also haven't taken the time to really think about what I might do differently now.


Well, if I define no timeout, the StartUp could never complete, and the test would then never succeed and never fail, being stuck in play mode forever. Tracking down these issues, especially for devs who didn't write the testing code, is much worse in my opinion.

The 5-second number is maybe questionable; I could set it to 30 seconds to exclude the indirect "performance test". But having the test fail if the StartUp is stuck for some reason is essential, and I'm not sure how you would approach that differently?
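To illustrate, something like this is what I have in mind (a sketch; StartUp_CompletesAtAll is just an illustrative name, and 30 seconds is deliberately generous so it only trips on a genuine hang, not a slow machine):

using System.Collections;
using NUnit.Framework;
using UnityEngine;
using UnityEngine.TestTools;

[UnityTest]
public IEnumerator StartUp_CompletesAtAll()
{
    var startTime = Time.time;
    while (!App.Instance.StartUp.IsStarted)
    {
        // Generous budget: acts purely as a hang guard, not a performance assertion.
        if (Time.time - startTime > 30)
            Assert.Fail("StartUp appears to be stuck (not finished after 30 seconds).");
        yield return null;
    }

    // Log the actual duration for information instead of asserting on it.
    Debug.Log($"StartUp took {Time.time - startTime:F2} seconds.");
}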