I have the following scenario that I use for creating randomized levels. Each overall level is made up of N sub levels. Each sub level can be generated on demand from a stored seed. Each sub level uses a different seed. The seed for the sub level is stored as an int on the object and is settable in the inspector (this is all in prototyping at the moment). The important thing here is I know the seed is not changing.
Now what I am seeing is this:
Generate sub level 1 from a known seed.
Regenerate sub level 1 from the same seed. Same result as expected. Regenerate over and over always same result. Good.
Generate sub level 2 from a different known seed.
Regenerate sub level 2 over and over and again all is working as expected.
Go back to sub level 1 again and regenerate it from its seed. Wait! It came out different this time! But regenerating it more will keep giving me this same (new) result.
Go back to sub level 2 and regenerate: again a new result, which is now consistent.
Basically the way it acts is as if the seed on a previous sub level had been altered to a new seed each time a new sub level is loaded. But they are completely separate data objects, separate game object controllers, and UnityEngine.Random is being initialized with the proper seed each time just before generation of the level. I’ve confirmed that the seed isn’t changing unexpectedly by stepping through the code in the debugger.
There is a lot of code behind this but the gist of it is, for any given sub level generation run:
Random.InitState(seed);
// Do randomy things including:
Random.value and Random.Range(0, blah);
The only thing I can think of here is that I am somehow doing something that is causing the results from UnityEngine.Random to be non-deterministic. Now I know that changing the seed essentially resets the random sequence back to 0, but that is what I expect to happen. However it almost seems as though setting it to a new seed and then setting it back to a previous seed causes the sequence to be different the second time through.
When I go back to a previous sublevel and regenerate it, the first “regeneration” will be different from what it was originally, but each regeneration after that will be consistent with this “new” version if that makes any sense. Not sure it does. Basically it will change when revisited, and then not change again until a different sublevel is visited and then you go back.
The more I mess with random numbers in controlled tests - and fail to reproduce my problem - the more I am forced to concede it may just be a bug somewhere in the complex chain of level generation / regeneration.
If you were able to repeatedly generate the same level 1 before switching to level 2 and back, that implies that your fixed-seed generation is basically working. You just have some input into the process that you somehow haven’t controlled for.
Some thoughts about what that might be:
Some other process running in parallel might also make use of UnityEngine.Random, which changes all the results you get afterward. (Does your generator use asynchronous functions, coroutines, etc.?)
Your level generator might depend in some way on player data, like changing the monsters depending on the player’s level, changing a door based on whether the player has a key, or changing the quests that spawn based on whether the player has done them before.
Your level generator might depend in some way on history of other levels. For instance if it uses the level number (and the level number turns out to depend on the number of levels played rather than which level you’re loading), or if it cares about which direction you’re “coming from” when you exited the previous level.
You could have some shared data structure that gets modified, maybe for something as simple as caching.
Basically, look at every variable that could potentially affect your level generation algorithm, and look at any other code that could use the random generator.
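To make the first point concrete, here's a minimal sketch of how an uncontrolled consumer perturbs the shared stream. SomeParticleEffect is a hypothetical stand-in for anything else (a coroutine, a VFX script, another system) that happens to draw from UnityEngine.Random mid-generation:

```csharp
// Any extra draw from the shared UnityEngine.Random stream shifts
// every draw that follows it in the same run.
Random.InitState(12345);
float a1 = Random.value;
float a2 = Random.value;   // second value in the seeded sequence

Random.InitState(12345);
float b1 = Random.value;
SomeParticleEffect();      // hypothetical: suppose this calls Random internally
float b2 = Random.value;   // now the THIRD value in the sequence, not the second

// a1 == b1, but a2 != b2 — the generator itself is deterministic;
// the sequence of consumers is not.
```

The generator only produces the same level when the exact same sequence of calls is made against it, in the exact same order.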
I think I might have an idea. Part of the generation code involves picking a random map cell from the entire map. This data element stores ALL cells in the map not just those for a given sub level. I think this code is falling apart, or has the potential to do so, when the overall map data structure changes:
public MapCell RandomCellInSector(int sector)
{
    IEnumerable<MapCell> expression = from cell in mapData.Values
                                      where cell.sector == sector && cell.cellUsed && cell.cellValid
                                      select cell;

    List<MapCell> filteredList = expression.ToList();
    return filteredList.ElementAt(Random.Range(0, filteredList.Count));
}
I think what is happening here is that once the Dictionary grows past some internal size it reallocates, and after that the order elements are enumerated in changes. This is probably a perfect example of why you should never rely on ordering in an unordered structure, even when the order appears stable.
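It doesn't even take a reallocation. A quick way to convince yourself (plain .NET, nothing Unity-specific): Dictionary<TKey,TValue> makes no ordering guarantee at all, and a removal followed by an insertion can visibly reorder enumeration even when the contents look "the same shape":

```csharp
using System;
using System.Collections.Generic;

var map = new Dictionary<int, string>
{
    [1] = "a",
    [2] = "b",
    [3] = "c"
};

map.Remove(2);
map[4] = "d";   // current .NET implementations reuse the freed internal slot

// Enumeration order is an implementation detail; on current .NET this
// typically yields keys 1, 4, 3 — the new entry shows up where the
// removed one used to be, not at the end.
foreach (var kv in map)
    Console.WriteLine(kv.Key);
```

So any mutation of mapData between generation runs can silently re-shuffle what `filteredList[i]` refers to, even for identical Random.Range results.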
Sounds like that could plausibly be your culprit. You could probably test it really quickly by adding an “orderby” clause to your linq query (just make sure you order by something that won’t result in ties).
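Concretely, the test could look like this. The orderby key needs to be unique and stable; `cell.id` here is a stand-in for whatever unique, stable field your cells actually have (a coordinate, an index, etc.):

```csharp
public MapCell RandomCellInSector(int sector)
{
    // orderby pins the query to a stable order, so the same seed indexes
    // the same cell no matter how the dictionary happens to enumerate.
    // "cell.id" is an assumed unique, stable identifier.
    List<MapCell> filteredList = (from cell in mapData.Values
                                  where cell.sector == sector && cell.cellUsed && cell.cellValid
                                  orderby cell.id
                                  select cell).ToList();

    return filteredList[Random.Range(0, filteredList.Count)];
}
```

If regeneration becomes stable across sub level switches with this change, the dictionary ordering was your culprit.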
Incidentally, if you are picking a lot of random cells and you execute that function every single time, that’s probably pretty inefficient. Filtering the list once and saving the result would probably be faster. (If you have to remove elements as they get “used up”, swap them to the end of the list before removing them, since removing from the end of a list is O(1) but removing from the middle is O(N).)
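The swap-before-remove trick looks like this (a generic sketch, not tied to your code):

```csharp
using System.Collections.Generic;

// Remove the element at `index` from `list` in O(1), at the cost of
// not preserving the list's order — which is fine when you're only
// ever picking elements at random anyway.
public static void SwapRemoveAt<T>(List<T> list, int index)
{
    int last = list.Count - 1;
    list[index] = list[last];   // overwrite the victim with the last element
    list.RemoveAt(last);        // removing from the end is O(1)
}
```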
If your data changes unpredictably such that you really do need to recalculate the filter every time, you should try to select directly from the IEnumerable instead of converting it to a List (since that will allocate a bunch of memory). Here’s an extension method I wrote to do that (it uses my own random interface as a wrapper for cross-compatibility, but it should be easy to modify it to use a RNG of your choice):
// Chooses count items at random from the enumeration and returns them in an array
// The order of selected items within the array is also random
// If the collection is smaller than count, the entire collection is returned (in random order)
// This only allocates memory based on the number of values selected,
// rather than the size of the entire collection, so it's more memory-efficient
// than converting the collection to a list if the collection is large
public static T[] SelectRandom<T>(this IEnumerable<T> collection, IRand rand, int count = 1)
{
    if (count <= 0) return new T[0]; // Optimization for trivial case

    T[] keep = new T[count];
    int found = 0;
    foreach (T item in collection)
    {
        if (found < count)
        {
            // Take the first #count items, in case that's all there are.
            // Move a random item of those found so far (including the new one)
            // to the end of the array, and insert the new one in its place.
            int r = rand.RandInt(found + 1);
            keep[found++] = keep[r];
            keep[r] = item;
        }
        else
        {
            // Random chance to replace one of our previously-selected elements
            ++found;
            if (rand.RandInt(found) < count) // probability desired/found
            {
                // Replace a random previously-selected element
                int r = rand.RandInt(count);
                keep[r] = item;
            }
        }
    }

    if (found < count)
    {
        // The collection was too small to get everything requested;
        // make a new, smaller array containing everything in the collection.
        T[] all = new T[found];
        Array.Copy(keep, all, found);
        return all;
    }
    return keep;
}
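Usage would look something like this (IRand and RandInt are from my own wrapper as noted above; `myRand` is whatever instance of it you're holding):

```csharp
// Pick 3 distinct cells at random from the filtered query,
// without materializing the whole filtered set as a List first.
IEnumerable<MapCell> candidates = from cell in mapData.Values
                                  where cell.sector == sector && cell.cellUsed && cell.cellValid
                                  select cell;

MapCell[] picks = candidates.SelectRandom(myRand, 3);
```

Note that the ordering caveat from earlier still applies: this selects from the enumeration in whatever order the dictionary yields it, so for seed-reproducible results you'd still want a stable orderby on the query.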
Yes this is very inefficient. It was written as part of a “game jam” style crunch session with the intention to purposefully NOT stop and make things better. I find I tend to get too bogged down in making things perfect so this was a bit of a trial.
I very much appreciate the code example. I was struggling with learning how to do complex LINQ lookups, and converting to a list first was the only way I could figure out how to make it work. I shall dissect what you provided!