Code Flattening

Does anyone know of a tool that can help me flatten out relatively complex geometric operations?
Yes, I know I can compile the code and inspect the JIT ASM, but that doesn’t help me much.

Here’s a concrete example (a and b are Vector2)

public int GetSide(Vector2 pt, bool thickZero = false, float tolerance = 1E-4f)
  => thickZero && Contains(pt, tolerance)? 0 : pt.SideTest(a, b);

// I want to lower this to improve the performance of the method above,
// preferably without any intermediate variables, but I'm aware that might not be possible
public bool Contains(Vector2 pt, float tolerance = 1E-4f)
  => (NearestPoint(pt) - pt).sqrMagnitude <= tolerance * tolerance;

// where
public Vector2 NearestPoint(Vector2 pt) => a + Projected(pt).Clamp(0f, _len) * Dir;
public float Projected(Vector2 vec) => Vector2.Dot(vec - a, Dir);
public Vector2 Dir => _invlen * (b - a);

Apart from seeing that the direction and some deltas can be cached in intermediate variables, I find it incredibly hard to untangle the rest, even though it looks simple. The code is fine the way it is; I just want to squeeze that little extra out of it by removing redundant computation. But doing this manually would take me days to get right because, as with anything manual, I’ll swap an X with a Y somewhere. It’s too stupid to fight with, and I’m on the verge of leaving it as is.

It’s not that I’m optimizing this blindly for the sake of it; this already performs very well. It’s just that I know this sits so low in my codebase that it’s fated to end up in numerous hot paths. Why not shave off the work that does nothing useful while I’m already at it? It just generates heat anyway.

Edit:
Oh, and another huge issue for me: if I run some CIL decompiler online (CIL is likely very similar to the above code, but for the sake of argument), I would have to migrate (or recreate) portions of the Unity API just so it can tell what the heck a Vector2 is. So the solution would have to be something I can bundle with Unity (or some already existing tool I’m not aware of).

Maybe use ref and in: in if it’s read-only, ref if it’s read-write.

@Trindenberg Thanks, I get what you mean, but I’m not too worried about stack copying here; it’s not like I’m working with matrices.

Well I managed to come this far, and it seems to be working ok

public bool Contains(Vector2 pt, float tolerance = 1E-4f) {
  var atp = pt - a;                              // pt relative to the segment start
  var dir = _invlen * (b - a);                   // normalized direction
  var dot = atp.x * dir.x + atp.y * dir.y;       // projection of atp onto dir
  //dot = (dot < _len)? (dot > 0f)? dot : 0f : _len; // clamp to segment (dropped, see below)
  var xy = dot * dir - atp;                      // nearest point minus pt
  return xy.x * xy.x + xy.y * xy.y <= tolerance * tolerance;
}

I’ve eradicated all named function calls (apart from the implicit vector multiplication and subtraction), and I’ve also discovered that I don’t actually need or want that clamp in the original test (without it, the test measures distance to the infinite line rather than to the segment, which is what I actually want here), and that’s a major buff to performance.

So maybe I should just call it a day and move on, idk.

Edit:
Now I’m annoyed that I can’t easily tell how much this actually helped. I’m supposed to spend a full hour of full concentration writing a whole benchmark suite, for a change that took me less than an hour with much less concentration, and an error introduced in the change would be obvious while a benchmark error would not. God, I hate software development in the 21st century. We all just pretend that things have improved since the ’80s.

What we all basically NEED is a performance meter that runs the function constantly in the background and, when you change the function, updates on its own. If a breakpoint can be made, this CAN be made as well, and it should be a staple part of any IDE debugging suite.

If I’m too stupid or ignorant to acknowledge some technology or debugging standard and there is something I should know about, please let me know.

All that’s happening here, though, is some inlined method calls, so I doubt there would be much of a performance increase really. If this code is really mission-critical, using Burst etc. is also a possibility.


@hippocoder nah, Burst would be overkill

If you haven’t already considered this, I’d take a serious look at the Unity.Mathematics package, especially because the Burst compiler understands it and can vectorise/SIMD where appropriate. Obviously this’ll only become a significant advantage if you can also change how you access the data.

https://www.youtube.com/watch?v=u9DzbBHNwtc

This is what the performance testing API is for; we use it internally too.

https://docs.unity3d.com/Packages/com.unity.test-framework.performance@2.4/manual/index.html

Here’s an example of a performance test:

[Test, Performance]
public void SyncColliderTransformChanges_NoRigidbody()
{
    using (var factory = new SyncTransformController())
    {
        for (var i = 0; i < k_ColliderCount; ++i)
        {
            var pos = Vector3.right * i * 2f;
            factory.CreateCollider<CircleCollider2D>(pos);
        }

        Measure.Method(() =>
        {
            Physics2D.SyncTransforms();
        })
        .WarmupCount(k_WarmupCount)
        .MeasurementCount(k_MeasurementCount)
        .IterationsPerMeasurement(k_IterationsPerMeasure)
        .SetUp(() =>
        {
            factory.DirtyAllTransforms();
        })
        .Run();
    }
}

For anyone interested, the real crux of the issue is that the heart of the GetSide method is very fast; it boils down to

=> Sign((b.y - a.y) * (c.x - a.x) - (b.x - a.x) * (c.y - a.y))

but, as with any floating-point determinant, it tends to be unstable around zero.
And so I’ve introduced this workaround to “bloat” the zero, which is a neat trick that avoids having to dig through Jon Shewchuk’s 100 pages of fast robust predicates:

public int GetSide(Vector2 pt, bool thickZero = false, float tolerance = 1E-4f)
  => thickZero && Contains(pt, tolerance)? 0 : pt.SideTest(a, b);
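
For reference, SideTest itself is just that determinant wrapped in a sign; a sketch of what the extension could look like, reconstructed from the formula above (the exact signature is my assumption from how it’s called):

// Hypothetical reconstruction of the SideTest extension used above,
// straight from the determinant formula: positive on one side of the
// line, negative on the other, zero (unstably) on the line itself.
// Uses System.Math.Sign, which returns an int.
public static int SideTest(this Vector2 c, Vector2 a, Vector2 b)
  => Math.Sign((b.y - a.y) * (c.x - a.x) - (b.x - a.x) * (c.y - a.y));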

However, although my primary issue with this is of a technical nature, there is an ethical and philosophical component to it, as a personal convention: a safeguard must not be ten (or more) times more cumbersome than the core evaluator, even if its usage is disclaimed as more expensive. That’s a quick way to hell in my opinion, because the flag is so enticing (why wouldn’t anyone want to keep it constantly on?), yet it completely destroys the performance benefit of the underlying test.

It’s technically just a badly-designed supplement standing in place of manually calling Contains, which is already advertised as more expensive.

And that’s what got me into this spiral of trying to make a leaner version of it; that’s like a get-out-of-jail-free card in this situation. But then, what does leaner even mean? Is it just inlining? Is it passing by ref? Or should you invent some radical shortcut, cache your values, and in the end reinvent fast robust predicates?

Grrr, as much as I love coding, I should go read Hegel or something.

I know about it, but haven’t actually considered it yet. Maybe I should.

I’ll check this out as well, thanks. I hoped there was something to help with spelunking down the rabbit holes. It’s much needed.

You could always mock it with your own Vector2 struct to accomplish such a thing… pretty sure the Vector2 class is even available in source form somewhere too.

As Hippo said you’re unlikely to squeeze much more out of this unless you can lift invariant computations out of an inner loop.
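
For example, if many points get tested against the same segment in a loop, the direction is invariant and can be computed once (illustrative only, reusing the fields from the original post; ‘points’ is a hypothetical collection):

// Hoist the per-segment invariants out of the per-point loop.
var dir = _invlen * (b - a);   // invariant for the whole loop
foreach (var pt in points) {
  var atp = pt - a;
  var dot = atp.x * dir.x + atp.y * dir.y;
  // ... rest of the per-point test
}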

But I still always prefer one computation or assignment per source code line, rather than clever CS101 types trying to pack everything into one line to show off their “clebberness.” They forget that sometimes I need to extend or even debug their “clebberness.”

Sure, but that already exceeds the effort that went into the method in the first place.

Rule #1 of low-level coding is to NEVER leave a necessary optimization (or an unhandled case) for later. I’m not after premature optimization here, but low-level misbehavior is the worst thing in existence. I mean, I wouldn’t be here typing all this if my keyboard driver glitched or slowed down whenever I hit a G.

But at the same time I have much better things to do, and I can’t be bothered boosting a service that is supposed to boost me instead. That’s why I didn’t do it, but it’s a sensible piece of advice, thanks regardless.

This is the lemonade I got (it’s quite beautifully shoelaced)

private bool lowered_contains(Vector2 pt, float tolerance = 1E-4f) {
  var atp = pt - a;            // pt relative to the segment start
  var dir = _invlen * (b - a); // normalized direction
  var xyz = new Vector3(dir.x * dir.x, dir.y * dir.y, dir.x * dir.y); // dir.x², dir.y², dir.x·dir.y

  // expanded form of (atp·dir)·dir - atp, i.e. nearest point minus pt
  return new Vector2(
    atp.y * xyz.z + atp.x * (xyz.x - 1f),
    atp.x * xyz.z + atp.y * (xyz.y - 1f)
  ).sqrMagnitude <= tolerance * tolerance;
}

As you can see, I’ve squeezed a lot out of it, and I’m ok with this, it’s just, well, the process itself is tedious and prone to catastrophic errors. I hoped I wouldn’t have to do it manually.

It doesn’t touch the heap, it has no function calls (minus the vector operators and that sqrMagnitude at the end, but I’m not after purity here), and I believe it can’t physically get any better because it’s 2D anyway; SIMD doesn’t help at all with such small, pointwise data.
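
To guard against exactly those swapped-X-with-Y catastrophes, one cheap option is to fuzz the lowered version against an unclamped reference (a sketch; ContainsUnclamped is a hypothetical helper, i.e. the original test built on NearestPoint without the clamp):

// Sketch: randomized equivalence check against an unclamped reference.
// ContainsUnclamped is hypothetical: the original test minus the clamp.
var rng = new System.Random(1234);
for (int i = 0; i < 100_000; i++) {
  var pt = new Vector2((float)rng.NextDouble() * 20f - 10f,
                       (float)rng.NextDouble() * 20f - 10f);
  if (lowered_contains(pt) != ContainsUnclamped(pt))
    throw new System.Exception($"mismatch at ({pt.x}, {pt.y})");
}

Points that land within a few ULPs of the tolerance boundary can legitimately disagree, since the expanded arithmetic rounds differently, so a rare mismatch there deserves a second look rather than an automatic fail.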

Now I have a tool like this on my TODO list. It’s just a symbolic parser, similar to an assembler, but one that can immediately bubble expressions up and spit out a flat solution, keeping single-use expressions inline and caching the multi-use values. I need that several times a year or so. It would be so nice to have such a tool online, and the best thing is that it could be largely language-agnostic; as long as it can grasp things such as vectors or float2/3/4, I don’t care. I made a BNF grammar parser ages ago; this might be a good use case for it.
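
For what it’s worth, the core of such a tool is surprisingly small: structural sharing over an expression tree, emitting a named temporary only for subexpressions that occur more than once. A toy sketch of the idea (all names made up, real operators and vector types omitted):

using System;
using System.Collections.Generic;

// Toy "flattener": records give structural equality, so identical
// subtrees hash to the same key and can be detected as multi-use.
abstract record Expr;
sealed record Var(string Name) : Expr;
sealed record Bin(string Op, Expr L, Expr R) : Expr;

static class Flattener {
  public static void Emit(Expr root) {
    // pass 1: count how often each unique subexpression occurs
    var uses = new Dictionary<Expr, int>();
    void Count(Expr e) {
      if (e is not Bin b) return;
      uses[e] = uses.TryGetValue(e, out var n) ? n + 1 : 1;
      if (uses[e] == 1) { Count(b.L); Count(b.R); } // recurse only on first sighting
    }
    Count(root);

    // pass 2: emit; cache multi-use values, keep single-use expressions inline
    var named = new Dictionary<Expr, string>();
    int next = 0;
    string Walk(Expr e) {
      if (e is Var v) return v.Name;
      if (named.TryGetValue(e, out var name)) return name;
      var b = (Bin)e;
      var text = $"{Walk(b.L)} {b.Op} {Walk(b.R)}";
      if (uses[e] > 1) {
        name = $"t{next++}";
        Console.WriteLine($"var {name} = {text};");
        named[e] = name;
        return name;
      }
      return $"({text})";
    }
    Console.WriteLine($"return {Walk(root)};");
  }
}

Feeding it (a + b) * (a + b) prints "var t0 = a + b;" followed by "return (t0 * t0);", which is exactly the keep-unique-expressions-inline, cache-the-multi-use-values behavior I mean.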

Btw, if anyone needs it, the method returns whether some point is contained by an arbitrary line in 2D space (edit: I’m doing this for line segments defined by a and b, but this particular method works with lines in general), and includes a tolerance to combat numeric imprecision. I’m probably the only guy (outside genuine computer science) who makes sure such things look and feel sturdy (aka suffers from a very special programming OCD, still unrecognized by the DSM), so it’s quite a gem (aka an anthropological curiosity).


I know what you mean, and I’m happy (and sad at the same time; it’s like the Greek theatre masks in superposition, so what you’re left with is just a flat line with fuzzy cheeks) that I don’t have to look at or work with anyone else’s code.

We had this conversation before, and if it’s any solace to you, my philosophy is that one-liners, and especially ternary expressions, MUST work, otherwise they completely defeat the point you’ve highlighted. If you’re supposed to pick and probe such code to debug it properly, it’s a majestic hassle to reformat everything, and I am very mindful of that. Clebberness indeed.

with a line such as

public Vector2 NearestPoint(Vector2 pt) => a + Projected(pt).Clamp(0f, _len) * Dir;

it is remarkably easy to pinpoint the potential points of failure and move on to the next thing.

In this specific scenario, I am building a low-level geometric struct (an immutable line Segment), where each feature is axiomatically correct and part of some analytical whole. Its API is supposed to be so low-level (low from the usual vantage point of C#; I don’t mean this from the hardware perspective) that I absolutely need to be able to stress-test the whole of it before I can use it in production code. It also depends on and facades the 2D line intersector I wrote the other day (if you saw it by chance). On top of that, I’ve made an Ellipse object as well (because that’s the combination I need at the moment), and the API I’m writing should feel shared and consolidated between the two, so there are very rigid standards and mathematical notions I’ve committed myself to.

Especially because I might release this into the public domain one day. I doubt there is a repository on the internet with better-defined and/or more readable code for these two geometric entities, and I’ve seen at least a dozen libraries. Those that are better defined tend to be incredibly dense and opaque, and vice versa.

tl;dr This is why I’m picky about my choices.
I’m also aware this might be one of those things succinctly explained by this XKCD comic.
But if that’s the case, at least I own the code that is bringing me pain. To hell with dependencies. In fact that logic is what made Unity in the first place. If only the founders were smart and built their game engine with Unreal … I know, I know, I’m too rhetorical today.

@orionsyndrome You obviously like a bit of math, me too, although I’m no whizz! I was thinking about what you were trying to achieve, and while I’m too lazy to do any programming today to check it, I’m wondering whether this algorithm would work, and where I’ve gone wrong. I just tried to figure it out in my head; what’s the worst that can happen! P.S. I know ‘mag’ here is not a real magnitude. Pseudo form:

pt    = point;
a     = line start;
b     = line end;

l_vec = b - a;                    // Line offset so it starts at (0,0)
p_vec = pt - a;                   // Point offset by the line start
l_mag = l_vec.x * l_vec.y;        // Line "magnitude" (not a real magnitude)
p_mag = p_vec.x * p_vec.y;        // Point "magnitude"

if (abs(p_mag) <= abs(l_mag))     // Within boundaries
{
   ratio_diff =
      l_mag * p_mag < 0 ?                          // One magnitude on a +- axis
         l_mag < 0 ?                               // Line magnitude on a +- axis
            l_vec.x / -l_vec.y - p_vec.x / p_vec.y :
            l_vec.x / l_vec.y - p_vec.x / -p_vec.y :
         l_vec.x / l_vec.y - p_vec.x / p_vec.y;    // Both on --/++ axes
   return abs(ratio_diff) <= tolerance;
}
return false;

With math-centered functions like this, make it a static method, use the Unity.Mathematics methods, and slam the [BurstCompile] attribute on it; this will give a speed boost you wouldn’t expect, potentially dozens of times faster.

On top of that, you’re not wasting time writing out vector math by hand just to gain minimal performance (maybe 20% if you’re lucky) at the cost of terrible readability and possibly hard-to-debug issues.
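
Roughly like this, as a sketch (assuming the Burst and Mathematics packages are installed; the static signature here is my own example, not your actual code):

using Unity.Burst;
using Unity.Mathematics;

[BurstCompile]
public static class SegmentOps {
  // Static, Burst-compilable variant of the unclamped Contains;
  // segment data (a, b, invLen) is passed in instead of read from fields.
  [BurstCompile]
  public static bool Contains(in float2 a, in float2 b, float invLen,
                              in float2 pt, float tolerance) {
    float2 atp = pt - a;
    float2 dir = invLen * (b - a);
    float2 delta = math.dot(atp, dir) * dir - atp; // nearest point minus pt
    return math.lengthsq(delta) <= tolerance * tolerance;
  }
}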


The core of this is of course static: the SideTest(Vector2, Vector2, Vector2) method shown in the original post.
However, some of the methods are instance-based, although I might indeed do what you’re suggesting and move the lowered variant into the static library, as that class is a very good candidate for Burst.

But that’s not what I had an issue with.

If you take a look at the original method, that one-liner is all there is to it, so it’s pretty much self-documenting. I don’t care about readability if it’s self-evident.

Here it is again.

public bool Contains(Vector2 pt, float tolerance = 1E-4f)
  => (NearestPoint(pt) - pt).sqrMagnitude <= tolerance * tolerance;

If anyone needs excessive documentation to be able to grasp what this does, that’s a huge problem for software development in general.

Look, I don’t know if I sound like I’m struggling or if I’m perceived as a nutcase by the community, but my problem isn’t that this is somehow complicated or that the compilation is too slow. There is no Burst in this world that will completely fix bad programming or, much worse than that, fix missing context and computational redundancy. These are all NP-complete or NP-hard problems (or an outright fantasy).

My thinking is that the programmer is, and should be, responsible for writing sensible code, which a) is correct, b) is testable, c) is either well-documented or self-evident, and d) runs as fast as possible, in that order. Unfortunately, to solve (d) I have to sacrifice a bit of (c), and I don’t mind that, because I already have a snippet that is self-evident and works as intended. What I do mind is that the industry does not encourage lowering. We’ve all become so dependent on someone else’s code that we’ve essentially turned into grumpy cable guys, constantly scolding others for violating some taboo.

In the meantime, everything is solved with a faster machine and a better compiler that will magically vectorize/parallelize and, most likely, turn things into mush. The actual programming, as in “entering a program into a computer”, has taken a back seat.

You can watch the video MelvMay linked above, where the presenter explains exactly what happens with matrix multiplication in IL2CPP due to missing contextualization, and brags about the interlink between the Mathematics package and Burst itself, which removes all kinds of friction (and nonsense, really) by producing a continuous mul process with a clear start and end (whoa, such magic) that ends up being less memory-intensive and SIMD-friendly. OK, very nice that we can do that in 2022, but what happens when something isn’t contextually meaningful to Burst and can’t be run through the paved boulevards that matrix multiplication surely must be at this point in time? I’m frankly astonished that matrix mul and inverse haven’t become part of the hardware instruction set by now. Or is something akin to that what Mathematics actually exploits? Tbh, I don’t know much about modern GPUs; mine is still three digits.

Anyway, the lowered method I produced above emits 20+ fewer ASM instructions (an improvement of ~30%; 62 vs 41), has no branches, and everything is done on the stack. In the end I did a benchmark; I don’t like staying in the dark about anything. The performance improvement is around 50%, which is huge, and this holds even when I remove the clamp from the original (that alone gets me only ~5 ms, around 4%, over 1M iterations with constantly shifting arguments).

I believe the speed gain was worth my time, but that’s just my opinion. Here’s the test, give it a try.
Lowering Benchmark

using System;
using System.Diagnostics;
using System.Runtime.CompilerServices;

public class C {
 
  static public void Main() {
    var sw = new Stopwatch();
  
    a = new Vector2(0f, 0f);
    b = new Vector2(5f, 5f);
    _len = MathF.Sqrt(b.SqrMag());
    _invlen = 1f / _len;

    Console.WriteLine(Lowered_Contains(new Vector2(2.1f, 2f), 1E-1f)? "yes" : "no");
    Console.WriteLine(Contains(new Vector2(2.1f, 2f), 1E-1f)? "yes" : "no");
  
    const int C = 1_000_000;
    sw.Start();
    for(int i = 0; i < C; i++) {
      Lowered_Contains(new Vector2(i / 100_000f - 5f, 1f), 1E-1f);
    }
    sw.Stop();
    Console.WriteLine($"lowered: {sw.ElapsedMilliseconds}");

    sw.Restart();
    for(int i = 0; i < C; i++) {
      Contains(new Vector2(i / 100_000f - 5f, 1f), 1E-1f);
    }
    sw.Stop();
    Console.WriteLine($"non-lowered: {sw.ElapsedMilliseconds}");
  }
 
  static Vector2 a, b;
  static float _len, _invlen;

  static public bool Contains(Vector2 pt, float tolerance = 1E-4f)
    => (NearestPoint(pt) - pt).SqrMag() <= tolerance * tolerance;
 
  static public bool Lowered_Contains(Vector2 pt, float tolerance = 1E-4f) {
    var atp = pt - a;
    var dir = _invlen * (b - a);
    var z = dir.x * dir.y;
    var xyz = new Vector2(dir.x * dir.x, dir.y * dir.y);
    var res = new Vector2(
      atp.y * z + atp.x * (xyz.x - 1f),
      atp.x * z + atp.y * (xyz.y - 1f)
    );
    return res.x * res.x + res.y * res.y <= tolerance * tolerance;
  }

  [MethodImpl(MethodImplOptions.AggressiveInlining)]
  static public Vector2 NearestPoint(Vector2 pt) => a + Clamp(Projected(pt), 0f, _len) * Dir;

  [MethodImpl(MethodImplOptions.AggressiveInlining)]
  static public Vector2 NearestPointUnclamped(Vector2 pt) => a + Projected(pt) * Dir;
 
  [MethodImpl(MethodImplOptions.AggressiveInlining)]
  static public float Projected(Vector2 vec) => (vec - a).Dot(Dir);
 
  static public Vector2 Dir => _invlen * (b - a);
 
  [MethodImpl(MethodImplOptions.AggressiveInlining)]
  static public float Clamp(float v, float min, float max)
    => (v < max)? (v > min)? v : min : max;
 
  public struct Vector2 {

    public float x;
    public float y;

    public Vector2(float x, float y)
      => (this.x, this.y) = (x,y);

    [MethodImpl(MethodImplOptions.AggressiveInlining)]
    public float SqrMag() => Dot(this);

    [MethodImpl(MethodImplOptions.AggressiveInlining)]
    public float Dot(Vector2 other)
      => x * other.x + y * other.y;

    [MethodImpl(MethodImplOptions.AggressiveInlining)]
    public Vector2 To(Vector2 other)
      => new Vector2(other.x - x, other.y - y);

    static public Vector2 operator -(Vector2 v)
      => new Vector2(-v.x, -v.y);

    static public Vector2 operator +(Vector2 l, Vector2 r)
      => new Vector2(l.x + r.x, l.y + r.y);

    static public Vector2 operator -(Vector2 l, Vector2 r)
      => new Vector2(l.x - r.x, l.y - r.y);

    static public Vector2 operator *(float l, Vector2 r)
      => new Vector2(l * r.x, l * r.y);

  }
 
}
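
One caveat I’m aware of with this kind of Stopwatch loop: since nothing consumes the results, a sufficiently clever JIT is allowed to elide some of the work. A cheap guard is to fold everything into a sink and print it, e.g.:

// Accumulate results so the JIT can't treat the calls as dead code.
bool sink = false;
for (int i = 0; i < C; i++)
  sink ^= Lowered_Contains(new Vector2(i / 100_000f - 5f, 1f), 1E-1f);
Console.WriteLine(sink); // consuming the sink keeps the loop honest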

Thank you so much for the advice, for any advice, but I really just wanted someone to recommend a tool for code lowering so that I don’t have to do it manually. You know, like how manually unrolling loops in shaders was a thing not so long ago; such tools are really useful in general programming, when you’re so low-level you want to keep your code as DRY and as primitive as possible, even at the expense of readability. Maybe people typically come to C# from the OOP plane of existence, but the line of thinking I’m after is more pure C than Java, and it really makes sense in some cases; it’s not something completely unheard of.

Just imagine its performance if I ALSO employed Burst.

Well, for starters, I don’t like dividing if I can help it, but it’s more of a legacy habit than a true plague to be avoided at all costs. When things are high-level enough I don’t care about it at all, everything’s fine, but if your code is supposed to be heavy-duty, and this kind of computation tends to be, it’s well known that floating-point division sits near the bottom of the instruction ranking list, performance-wise. It’s not end-of-the-world slow, but what would be the point of squeezing performance if you went against common sense?
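
Incidentally, this is exactly why the segment caches _invlen rather than dividing by _len in the hot path; in general form (the benchmark above does the same with a at the origin):

// One division at construction time...
_len = MathF.Sqrt((b - a).SqrMag());
_invlen = 1f / _len;

// ...and only multiplications afterwards, e.g. in Dir:
Vector2 dir = _invlen * (b - a);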

Here’s a test you can try on your own. I did this in dotnetfiddle.net, but you can rearrange it for Unity.
Division speed test

using System;
using System.Diagnostics;

public class Program {
  public static void Main() {
    const int C = 500_000_000;
    const int M = 17;
    var array = new float[M];

    var sw = new Stopwatch();
    sw.Start();
    for(int i = 1; i <= C; i++) {
      float x = i / 0.123f;
      float y = 1000.123f / i;
      array[i%M] = x / y;
    }
    sw.Stop();
    Console.WriteLine($"Time spent on division: {sw.ElapsedMilliseconds}");
    sw.Restart();
    for(int i = 1; i <= C; i++) {
      float x = i / 0.123f;
      float y = 1000.123f / i;
      array[i%M] = x * y;
    }
    sw.Stop();
    Console.WriteLine($"Time spent on multiplication: {sw.ElapsedMilliseconds}");
  }
}

The difference isn’t really that great (it used to be much worse), but it’s still there: around 7% slower, albeit over 500 million iterations. Soon enough this optimization trick, too, will turn meaningless, for sure.

Regarding whether the algorithm itself is good: I don’t know, it looks OK, but it’s hard to tell just by staring at it. I’m sure you can check it against some parameters.

My original method works by computing a 2D cross product (a determinant) to find whether some point is left or right of the line (that was the whole point of this exercise), but then I superimpose the nearest-point distance check with the ‘tolerance’ threshold, which lets me ignore the inevitable floating-point flickering in the marginal case and detect a point as being on the line, even though that’s mathematically incorrect, strictly speaking.

The nearest point itself is computed by projecting one vector onto another (via dot product), which produces a very stable result. Usually you don’t need or want this behavior, though (for example in triangulation), and that’s why it’s optional in the first place. However, when this is used for a GUI, that’s something else, and you want something that feels good (I personally hate janky, glitchy interfaces and similarly flimsy behavior), and I can see myself using this all over the place.
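
As a worked example with the numbers from my benchmark above (segment (0,0)-(5,5), pt = (2.1, 2), tolerance 0.1):

side = sign((5-0)*(2.1-0) - (5-0)*(2-0)) = sign(0.5) = +1   (one side of the line)
dist = |2.1 - 2| / sqrt(2) ≈ 0.0707 <= 0.1                  (within tolerance)

So the raw side test says +1, but since the point sits inside the tolerance band, Contains is true (hence the two “yes” lines the benchmark prints) and the thick-zero GetSide would report 0 instead.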

I’m no whizz either. Don’t worry. I used to suck at it. At least according to my schools.

personal rant

Where the professors would exploit my knowledge of computers in secrecy, with promises of high grades. I didn’t get any high grades in the end, I was actually intentionally delayed so that I’d have a hard time getting onto university. I have no university degree because of that, and also maybe because my country was senselessly bombed by NATO. You kind of stop caring about that kind of stuff when you’re literally gazing into Tomahawk missiles striking your city’s landmarks in a sunset, and then you don’t have electricity for a week.

And when I think about it maybe those schools deserved to be run by napalm or agent orange, but I am very much a decent person that definitely doesn’t hold any grudge against any institution of fallacy whatsoever. I just wish I was born someplace else, because my life is pretty much fucked up, and my body and soul belong somewhere else.

I do all this, we ruminate over some implementation details, yet I’ll live on the street in less than 10 days, because I have nowhere to live, I have no income to pay for my ++rent, and my city is being flooded with wealthy Russians and Ukrainians completely taking over. All of this here, this is just a little piece of heaven I set myself up so I can distract my thoughts while I work on whatever means to me. To keep my head high, to keep my self-esteem straight, to keep my heart from exploding.

I’m not a whizz my boy, I’m just doing this shit for far too long, and every single day has been a struggle just to survive, and to keep my mind open, and so I occupy myself with wizardry, and cats, because they fit thematically, and I fully intend to beat this game; this ain’t my language nor my final form. I’ve seen things you people wouldn’t believe. Attack ships on fire off the shoulder of Orion. I watched C-beams glitter in the dark near the Tannhäuser Gate. All those moments will be lost in time, like tears in rain. Hogwarts here I come.

MelvMay likes these rants of mine, because they’re quite emotional, and I’m pretty good at writing. People on the internet told me too many times they would buy my book if I ever wrote one (here’s one such crazy example, that post with 250 likes). You have no idea how far I am removed from your reality, but I respect your reality nevertheless.


I am not the smartest person in the world. Yet:

But bad programming is everywhere. For example, today I discovered that Blender has something called orphan vertices. What happens is that every model you import into the Blender scene adds its vertices to a list ready to be saved, but if you delete an object from that scene, the vertex data is not deleted with it. So if you designed a model and then imported it to scale another model, the vertex data stacks up and is not removed, even after said object is removed. It led me to the conclusion that Blender has to be the worst 3D modelling software going. Excuse my language, but I don’t really understand what those bastards are getting out of it.

Anyhow, I won’t pretend I know what you are talking about, but I’ve been following along the whole time and watching the vids in the links.

The truth is that the senseless programming of others is the biggest hurdle for game developers.

I am no story writer, and often by the time it comes to write a story for a game, i.e. by the time I reach the relevant point as the mathematician of the job, I am generally exhausted to the point of not caring. My new strategy is in fact to enable somebody with no programming experience yet, who is desperate to make a game and write and tell stories, to have a comfortable development environment. The work required to achieve this will sadly be by and large unappreciated. But that is the way the cookie crumbles these days.


There is nothing wrong with being human or being real, being honest with yourself. On one side I love working with music, because there is an emotional element to it. On the other side I love maths/computers/logical things, apart from the cold logic of it. Yes, there is emotion in a formulated result, but the actual creation is often cold, and people tend to say your way is wrong because they don’t understand your way of thinking. An artist will never understand another artist. In programming, some people seem to know everything and are quick to tell you your way is wrong. But making mistakes is also a way of learning, rather than not making a mistake you don’t know is a mistake. I totally get your point, though, about wanting to see what you are doing rather than relying on something whose makeup (checks, extras) you don’t know. And also, this is often faster, but makes less sense. Shaders certainly aren’t written the same way C# is written; now why is that? Different languages, different coding rules, but we don’t have to follow rules and that’s OK! :slight_smile:


As for my code, I should really give it a go. The focus was also the lowest op count, but I’m probably missing something with so few (24ish ops?). I remember coming up with an algorithm in my head for drawing uniform points on a circle with only one dimension. I love working on mad ideas, even if they end up wrong!
