I find myself using this type of code a lot in my random functions when I want to weight against the extremes, and wanted to see if this was overprocessing. Is this the best way to get a float that can still reach the extremes, just not very often?

```
float random = Random.Range(-1f, 1f);
// Re-roll up to three more times if the value falls outside a progressively wider band
if (random > 0.2f || random < -0.2f)
    random = Random.Range(-1f, 1f);
if (random > 0.2f || random < -0.2f)
    random = Random.Range(-1f, 1f);
if (random > 0.3f || random < -0.3f)
    random = Random.Range(-1f, 1f);
if (random > 0.4f || random < -0.4f)
    random = Random.Range(-1f, 1f);
```

That does look a bit cheesy.

There are a bunch of different ways to generate bell-curve-ish distributions, depending on how fast you want them vs. how accurate vs. the specific shape you want. A quick search turned up the Box–Muller transform:

```
System.Random rand = new System.Random(); // reuse this if you are generating many;
                                          // qualified so it doesn't clash with UnityEngine.Random
double u1 = 1.0 - rand.NextDouble(); // uniform (0,1] random doubles
double u2 = 1.0 - rand.NextDouble();
double randStdNormal = Math.Sqrt(-2.0 * Math.Log(u1)) *
                       Math.Sin(2.0 * Math.PI * u2); // random normal(0,1)
double randNormal = mean + stdDev * randStdNormal;   // random normal(mean, stdDev^2)
```

Where mean is the center of your bell curve (your most likely value) and stdDev is your standard deviation. In Unity you may want to convert it to use Mathf and floats.
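For reference, here's a rough sketch of what that conversion could look like. The helper name `RandomCenteredFloat` is made up for illustration, and the clamping step is an addition of mine: a normal distribution is unbounded, so the rare samples that land outside [-1, 1] get pulled back into range.

```csharp
using UnityEngine;

public static class CenteredRandom
{
    // Box-Muller sketch using Unity's Mathf/float API.
    // mean is fixed at 0 here so results center on zero, matching the
    // original Range(-1f, 1f) use case; stdDev controls how rare the
    // extremes are (smaller = more tightly clustered around 0).
    public static float RandomCenteredFloat(float stdDev)
    {
        // Random.value is inclusive of 0 and 1; clamp away from 0 so Log is safe
        float u1 = Mathf.Max(1f - Random.value, 1e-7f);
        float u2 = 1f - Random.value;
        float stdNormal = Mathf.Sqrt(-2f * Mathf.Log(u1)) *
                          Mathf.Sin(2f * Mathf.PI * u2); // normal(0, 1)
        // Clamp because a normal deviate can exceed [-1, 1] on rare rolls
        return Mathf.Clamp(stdDev * stdNormal, -1f, 1f);
    }
}
```

Usage would be something like `float n = CenteredRandom.RandomCenteredFloat(0.35f);` — with a stdDev around 0.3–0.4, values near ±1 show up but only a few percent of the time.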

Very cool, the ones I originally found were way too complex. The standard dev is useful too, since I want to make extreme events more rare in some cases. Thanks!