It's not really a math problem, as you could just solve this using nested loops, like already mentioned in this thread. Nested loops would allow more control, better readability (IMO), and don't need any divisions.
Agreed, it may be a clever mathematical trick, but if a piece of code makes you go WTF when you read it, that's code I will change to make more readable.
haha yeah, but this actually doesn't make me go wtf, because it literally just divides the number and floors it to an int, so it always goes through X amount of times. fun fact, it's something i want to use for checking if A is within B; if it is, then generate a new position. and obviously i have to make some trick to make sure it won't happen often, otherwise it will be a lot of looping until it finds a possible spot
so a little update for people who want to know: i have solved it and got my script to work. it will now check each position that has been added to a list, and if position X is too close to another position, it will remove that position and add a new one. this happens as many times as there are positions in the list. aka if there are 16 possible positions, it will go through each position on each iteration, so 0-15, and it does that 15 times per iteration. however, this also means that it will possibly add and remove more than once, as it checks each one.
protected void CalculateRandomPosition()
{
    // min/max come from the bounds of the spawn collider box
    float minX = spawnPoint.bounds.min.x;
    float maxX = spawnPoint.bounds.max.x;
    float minY = spawnPoint.bounds.min.y;
    float maxY = spawnPoint.bounds.max.y;
    float incrementValue = fixedArray.Count;
    int index;
    int runThrough = 0;
    // incrementing by 1/count means i crosses each whole number after count
    // steps, so FloorToInt(i) stays on each index for count iterations
    for (float i = 0; i < incrementValue; i += (1 / incrementValue))
    {
        index = Mathf.FloorToInt(i);
        runThrough++;
        if (runThrough == fixedArray.Count)
        {
            runThrough = 0;
        }
        if (runThrough == index)
        {
            continue; // don't compare a position against itself
        }
        // Debug.Log(" Index " + index + " Distance " + Mathf.Abs(fixedArray[index].x - fixedArray[runThrough].x));
        if (Mathf.Abs(fixedArray[index].x - fixedArray[runThrough].x) <= 12f)
        {
            Debug.Log(" Removing Position " + fixedArray[runThrough]);
            fixedArray.RemoveAt(runThrough);
            spawnPoints = new Vector3(Random.Range(minX, maxX), Random.Range(minY, maxY));
            fixedArray.Add(spawnPoints);
            Debug.Log(" Adding Position " + spawnPoints);
            Debug.Log(" Index " + index + " CurrentRun " + runThrough);
        }
    }
}
this is my first time doing something like this. obviously i am looking to optimize it and make it better.
atm it seems 1000 possible spawn points is the max, at least for a 16x16 area. obviously i want to make it able to calculate 1000 possible spawn points and take into consideration that it can only add a position if that position is outside of the given x range.
however, as your area increases, so does the limit on possible spawn points. currently i think it's quadratic in resource requirements, as each position is checked against every position that exists in the list.
It's not the kind of "wtf - what happens there", but rather "wtf - why did you choose to do it that way". And it's not only that division, but also the implications that follow from choosing this way:
So if you look at your implementation now, it's "literally just a division"
that requires additional helper variables to take care of,
that is followed by a call to Mathf.FloorToInt,
that is followed by an if statement testing runThrough against fixedArray.Count,
=> if it evaluates to true, followed by a reset,
followed by an if statement testing runThrough against index,
=> followed by a continue statement if they are equal.
Nested loops would make this clearer, shorter, more maintainable.
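For illustration, a nested-loop version of that same check might look roughly like this. This is a sketch only, assuming the same `spawnPoint` collider (anything exposing `bounds`) and `fixedArray` list as the posted code; it mirrors the original's structure, including the fact that a replacement position is not itself re-validated, which is a separate problem discussed later in this thread:

```csharp
// Sketch: nested loops instead of the fractional-increment trick.
// Assumes the same fields as the posted code: spawnPoint (a collider
// with .bounds) and fixedArray (a List<Vector3>).
protected void CalculateRandomPositionNested()
{
    float minX = spawnPoint.bounds.min.x;
    float maxX = spawnPoint.bounds.max.x;
    float minY = spawnPoint.bounds.min.y;
    float maxY = spawnPoint.bounds.max.y;

    for (int index = 0; index < fixedArray.Count; index++)
    {
        for (int other = 0; other < fixedArray.Count; other++)
        {
            if (other == index)
                continue; // don't compare a position against itself

            if (Mathf.Abs(fixedArray[index].x - fixedArray[other].x) <= 12f)
            {
                // Overwrite the offending position with a new random one.
                fixedArray[other] = new Vector3(Random.Range(minX, maxX),
                                                Random.Range(minY, maxY));
            }
        }
    }
}
```

No helper variables, no division, no FloorToInt; the two loop counters say directly which pair of positions is being compared.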
Additionally your implementation is bugged.
Bug #1: You remove elements at index i when i != count - 1, which messes up the order of elements, allowing a following element to be skipped (as you do not take the removal into account for the runThrough variable).
Bug #2: Due to #1, you may now have positions in the array that aren't allowed by your requirements.
Bug #3: Even if the algorithm were implemented without #1 and #2, there's only a finite number of distributed spawn positions that you can find. You can determine the absolute maximum - which is not likely to be reached using randomized distribution of spawn points.
With an array that is too large (too many spawn positions to find) and/or a collider whose bounds are too small for that number of required positions, you can run into short, medium, or long blocking behaviour - or, worst case, end up looping infinitely.
You may now say that this last bug is something to ignore and just choose appropriate values, but you'd be better off preventing this from happening in the first place.
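One simple way to prevent that worst case is to cap the number of generation attempts and fail loudly instead of looping forever. The following is a sketch only, not from the thread; the function name, parameters, and structure are all made up for illustration (here restricted to the x axis, matching the 12f x-distance check in the posted code):

```csharp
// Sketch: rejection sampling with an attempt cap, so a too-small area
// (or too many required points) fails loudly instead of blocking forever.
// All names here are illustrative, not from the original post.
bool TryGenerateDistinctX(List<float> existing, float minX, float maxX,
                          float minDistance, int maxAttempts, out float result)
{
    for (int attempt = 0; attempt < maxAttempts; attempt++)
    {
        float candidate = Random.Range(minX, maxX);
        bool valid = true;
        foreach (float x in existing)
        {
            if (Mathf.Abs(candidate - x) <= minDistance)
            {
                valid = false;
                break;
            }
        }
        if (valid)
        {
            result = candidate;
            return true;
        }
    }
    result = 0f;
    return false; // caller can log a warning and stop instead of hanging
}
```

A `false` return tells the caller the area is too crowded for the requested count, which is exactly the situation that would otherwise loop forever.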
that's not a bug. it will remove the items within 12f of each other. but i am making it different.
but obviously this is not the finished version. as it is now, it does what i want it to do. i still plan to optimize it in some way, do error checking, check if it exceeds possible spawn areas, etc. i got quite a few plans to make it better
Say the list contains the values 1,2,3,4,5. You now iterate it, get to index i=2 for example, and decide to remove it.
RemoveAt(2) removes the value 3, as it's placed at index 2. No matter what you add afterwards (adding will put it at the end of the list), the remaining values are shifted and the list now looks like
1,2,4,5, plus whatever was added.
However, your index is still 2, but the current iteration is done (at least in your algorithm). You'll continue to index 3, which means that the new value at index 2 (which is the value 4 now) is skipped and not properly checked.
The other following values might still be checked against it due to the way you programmed the logic, but everything before it cannot be checked against it anymore, as they are already considered to be valid values.
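To make that index shift concrete, here is a minimal standalone example (plain C#, not from the thread) of what RemoveAt does to the positions of the remaining elements:

```csharp
using System;
using System.Collections.Generic;

class RemoveAtDemo
{
    static void Main()
    {
        var values = new List<int> { 1, 2, 3, 4, 5 };

        values.RemoveAt(2);  // removes the VALUE 3, which sits at INDEX 2
        values.Add(99);      // the replacement goes to the END of the list

        // Everything after the removed slot shifted left by one:
        Console.WriteLine(string.Join(",", values)); // prints 1,2,4,5,99
    }
}
```

The value 4 now occupies index 2, but a loop that has already "finished" index 2 moves on to index 3 and never checks it.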
that i am clear about. but i am going to refer to what i said before. atm it's far from being complete. i still plan to do a lot of things to really make it faster, more optimal, and easier to understand
i am thinking of doing something like this
and fyi, when i said i solved it, i only talked about the problem of going through a for loop X amount of times on the same position, with X being the amount of items inside the list. so again, it checks 0 X amount of times, then 1, and so on.
and i will use Insert instead, since it will allow me to put the replacement for the removed item at the correct position
First of all, I'm only trying to help you. I'm not trying to be rude or something.
For replacement, direct assignment will be much faster than RemoveAt + Insert, as both RemoveAt and Insert will copy the elements around.
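In other words, for a list the difference is roughly this (a sketch against the variables from the posted code):

```csharp
// RemoveAt + Insert each shift the trailing elements around,
// so replacing one slot this way costs two element copies:
//   fixedArray.RemoveAt(runThrough);
//   fixedArray.Insert(runThrough, newPosition);

// Direct assignment overwrites the slot in place, no shifting at all:
fixedArray[runThrough] = new Vector3(Random.Range(minX, maxX),
                                     Random.Range(minY, maxY));
```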
But be aware, here comes the next problem. Before you replace the invalid one with a newly generated, you have to run all previous checks against the newly generated position. If that one is invalid again, you have to generate another one, and this continues until you finally generate a valid one.
This issue previously did not occur, since the newly generated position was added at the end and was thus a candidate to be checked later; but as you have read, that implementation had its very own bugs.
If order does not matter, a more straightforward implementation would be to
1) either just generate them from scratch, so you check each newly generated position at the time it's generated against the existing ones (which are all valid if you follow this correctly),
2) or iterate the list once to save all valid positions to a second list (with the capacity of the original list), and fill it up afterwards by generating + checking new ones.
The 2) approach has the great benefit that you do not invalidate already existing positions that were originally valid.
Both of these would eliminate all the head-scratching problems and workarounds you'd need to add in order to make your algorithm run correctly (correctness of algorithms is an important property).
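The second approach might be sketched like this. This is illustrative only, not from the thread: `RebuildPositions`, `IsValidAgainst`, and the attempt cap are assumed names and structure, keeping the same x-distance rule as the posted code:

```csharp
// Sketch of the second approach: keep originally valid positions, then top
// the list back up with fresh ones that are checked the moment they are made.
// Names and structure are illustrative only.
List<Vector3> RebuildPositions(List<Vector3> original,
                               float minX, float maxX,
                               float minY, float maxY,
                               float minDistance, int maxAttempts)
{
    var result = new List<Vector3>(original.Count);

    // Pass 1: carry over every position that is valid against those kept so far.
    foreach (Vector3 pos in original)
    {
        if (IsValidAgainst(result, pos, minDistance))
            result.Add(pos);
    }

    // Pass 2: fill up with new positions, each checked as it is generated.
    int attempts = 0;
    while (result.Count < original.Count && attempts++ < maxAttempts)
    {
        var candidate = new Vector3(Random.Range(minX, maxX),
                                    Random.Range(minY, maxY));
        if (IsValidAgainst(result, candidate, minDistance))
            result.Add(candidate);
    }
    return result; // may be short if maxAttempts ran out (the "Bug #3" case)
}

bool IsValidAgainst(List<Vector3> accepted, Vector3 candidate, float minDistance)
{
    foreach (Vector3 p in accepted)
        if (Mathf.Abs(candidate.x - p.x) <= minDistance)
            return false;
    return true;
}
```

Every position in `result` has been checked against everything accepted before it, so no invalid pair can survive, and the attempt cap keeps the fill-up pass from blocking forever.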
Now there's only one problem left, which is the "Bug #3" mentioned in a previous post. This is a big thing on its own, as many factors can contribute to an efficient solution.