IBM Z series… these appear to be mainframe CPUs. They are not really for general sale: a single machine with this CPU can easily cost between $800k and $33 million. http://www.tech-news.com/publib/pl2827.html
The cost of the Z14 mainframe (with Java support) is “contact us”. Meaning infinite dollars.
The idea is apparently similar to what Lisp machines once were.
All kinds of things can be done in hardware, but given that bringing a single chip design to production can easily cost over a billion dollars, expecting those kinds of features is … a really bad idea.
I’m surprised you wouldn’t have thought it possible. Modern processors are practically complete systems with their own processors, their own memory, their own peripherals, etc. Practically every processor from the past couple of decades has had some form of garbage collection, since they need to manage their own caches and mark what is and isn’t being used.
Another note: the GC used by Unity is one of the worst. If they wanted to optimize that, it would make more sense to simply upgrade to a more modern approach. Even the approaches to making a pauseless GC are well known. It’s a lot of work, but it’s a straightforward path.
Code is just an interface to virtually “rewire” hardware, so you can bake any code into hardware to begin with, and you can optimize that wiring too. Chip architectures are organized around convenient standards to abstract and hide the complexity. Next you will discover microcode and have your mind blown… i.e. you can actually code the instruction set of some chips. Then go have your mind blown by FPGAs… Chip design is a trade-off between flexibility and efficiency, and current chip design is a balance between the two!
That’s a pretty smart thing that people need without realising it; it can be used for OS upgrading, sandboxing, etc., and it’s been a baseline part of CPUs for many years. It’s not quite in the same league as doing a bit of garbage collection for a single language, and its cost has long since been absorbed.
No, GC is not a straightforward path. But Unity is working on improvements. You just said something like: “It’s a lot of work, but yeah, pretty straightforward, doing holiday trips to Mars.”
Or maybe: it’s pretty straightforward making a successful business. Everyone knows how it’s done. So why not? Because it’s not straightforward in anything but theory.
No amount of garbage collection improvements in C# will ever completely kill off the need for C and C++. I still use C (yes, in 2017!) for writing bare-metal code on tiny embedded microcontrollers. Some of these microcontrollers have as little as 2KB of RAM, so I need the tight code I can create using C. Similarly, I also need to be able to directly access hardware using pointers. For embedded coding, the only decent options are C, C++, or assembly. I love C# overall, but it is not the answer to everything.
By straightforward I meant there are several known, proven approaches to it. So it’s straightforward the way creating a new RDBMS is straightforward: almost all of the hard problems have been solved, but that doesn’t mean it’s not a lot of work.
Except there are not several known, proven approaches to Unity’s garbage problem, which is very different from, say, the garbage problem enterprise servers may face.
To get decent performance, Unity has to know a lot more about the problem. For example, is it Instantiate and Destroy calls? Scripts? General-purpose GC has a lot of known good solutions, but general-purpose usually doesn’t perform well enough for what we all want.
A shame really, because if it were so, this thread wouldn’t exist.
Yeah, IBM can build custom, ultra-high-performance CPUs for narrow, enterprise-only use cases. That is a completely different challenge from Unity needing to “improve garbage collection” across a very wide variety of platforms and use cases.
When you say enterprise servers, industry matters. The financial industry is actually where most of the low-latency GC research was done and where pauseless GCs were born. There are no new solutions to GC that haven’t been around for at least a decade or more, that I know of.
So whatever Unity does will not be anything new. It will just be a variation on something existing.
On the low end you have concurrent heap collectors. These work with the existing language VMs. The next generation was to move objects to a completely separate process. After that it was using tech like InfiniBand to allow objects to live in a distributed environment on a different machine. What the OP posted was just another optimization layer. A cool one, but using known approaches.
I don’t really see replacing the Mono GC as being all that important. Unity can’t leverage that on other platforms; they need to solve the problem at a higher level. Not that it would be a bad idea to replace Boehm with a good concurrent collector, but it only gets them so much.
Maybe Intel should think about doing it and actually add some value to their increasingly shit consumer CPU lineup, instead of gimping it while adding useless crap no one but big government-protected monopolies and think-tank groups asked for, like “backdoor security flaw” enhancements.
I still program with relays, pneumatic valves and mechanical links. You know, actual metal, not just close to the metal.
Fortunately it’s getting rarer. Most stuff these days is PLC-driven and then converted to pneumatic and mechanical signals. But for reliability, it’s hard to beat using a metal bar as an assignment operator.