Computing after Moore’s Law

Scientific American article Computing after Moore’s Law

Awesome

Ya I heard about this. That’s why I hate using the term “law” for anything. Even the “laws” of physics don’t always end up being the case in every situation as we discover new things – albeit they’re right like 99.99999% of the time though lol.

It will be really interesting to see how people innovate given tough obstacles and problems though.

Good read. :slight_smile:

@Velo222 it’s the same fault other laws have. They are laws for common-sense scenarios. The laws of physics and Moore’s law break down when you go to extreme scales. It is common sense that you have to stop eventually, but there are unconsidered things like quantum cloud computing and building processors at a scale not thought possible before.

1 Like

So I take it the 3 page thread you created last time wasn’t comprehensive enough on the subject?

3 Likes

Could possibly have been concerned about a warning for thread necromancy. The article was interesting at least. I like the part about cognitive computers, because we’re going to enter an amusing new era of machines that are going to be more organic than ever in how they operate :smile:

@Arowx likes starting new threads about new technologies that might never affect us, or might completely change the way we work. Nothing wrong with being a dreamer. Someone’s got to do the job.

Got the impression that people didn’t take my point of view seriously, so it’s nice to be vindicated by a scientific journal.

Why do you think I keep harping on about multi-threading the Unity API? Single-core CPUs are maxed out; the only way to get more done is to go parallel*. I’m really hoping that Unity can do this and take full advantage of DirectX 12 and Vulkan (or even work better with Metal).

And think about it: if Unity were fully parallel, it would be scalable. You could have Unity cloud servers run your game universe, which would be handy for making VR worlds.

*Mind you, there are High Bandwidth Memory and memristor/RRAM technologies, both of which could massively impact the speed and bandwidth of memory subsystems. So probably even more reasons to go parallel.
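
To be concrete about what I mean by “go parallel”, here’s a minimal fan-out/fan-in sketch in plain C++ (not Unity’s actual API, just an illustration of splitting one workload across every available core):

```cpp
// Minimal fan-out/fan-in: split N units of work across all hardware
// threads instead of running them on one core.
#include <cstdio>
#include <future>
#include <thread>
#include <vector>

double simulate_chunk(int begin, int end) {
    double acc = 0.0;
    for (int i = begin; i < end; ++i) acc += i * 0.001;  // stand-in for real work
    return acc;
}

int main() {
    const int N = 1000000;
    int cores = (int)std::thread::hardware_concurrency();
    if (cores == 0) cores = 4;  // fallback if the core count is unknown
    int chunk = N / cores;

    std::vector<std::future<double>> parts;
    for (int c = 0; c < cores; ++c) {
        int begin = c * chunk;
        int end = (c + 1 == cores) ? N : begin + chunk;  // last chunk takes the remainder
        parts.push_back(std::async(std::launch::async, simulate_chunk, begin, end));
    }

    double total = 0.0;
    for (auto& f : parts) total += f.get();  // fan-in: gather the results
    printf("%d cores, total = %f\n", cores, total);
}
```

The same pattern would apply to game systems: split the physics, AI, or pathfinding work into chunks, run a chunk per core, gather the results.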

Which is fine, but surely that would be clearer if it was a reply to the other thread?

I like these posts because they make you say “what if”, and yes, “let’s start working on making this a reality”.

I just want Unity to be the best game engine out there and I think Arowx does too. Because it’s my game engine of choice. But for the most part, it’s Unity’s engineers that are doing the leg-work of making “wouldn’t this be cool?” into “it’s finally here!”.

Once you start down the road of game development, anything that helps your game perform better, easier, or on more of an epic scale is really exciting :slight_smile:

People have been saying we were about to run out of ways to continue Moore’s Law for quite a while. I used to have a Pentium 200MHz CPU that scientists were pretty sure was the fastest CPU we might ever see, because we were running up against the end of Moore’s Law unless other scientists found ways around some serious technical hurdles. Scientists found a way around the technical hurdles, and the CPUs available today are vastly superior to the CPUs available back then. Scientists are saying similar things now, and I am pretty confident we will continue to find new solutions to extend Moore’s Law.

The solutions may or may not be silicon related, but users don’t care whether they are. If we find better ways to deliver computing power using graphene, carbon nanotubes, quantum computing, or even biological computers, then Moore’s Law can continue as long as there is a massive industry that can profit from it. Don’t confuse the end of silicon based Moore’s Law with the end of Moore’s Law in general. And remember, someday you will read an article about the upcoming end of Moore’s Law from your 100 core 50 GHz computer. The article will be as moot then as it is now.

2 Likes

Well that makes me wonder something, because I feel like we have hit somewhat of a brick wall in terms of pure “Hz” speed on processors. So, instead of increasing Hz, they’re simply adding more cores and figuring out clever architectures to make them work together better.

Do you think it’s fair to say we have kind of hit a wall in terms of pure MHz or GHz?

That being said, I’m pretty confident that whatever obstacles arise, someone will find a way to solve them.

You might want to check out this graph; it shows the trend in processing speed for CPUs!

Unless there’s a breakthrough somewhere, it looks like we maxed out around the 3 GHz mark around 2005.

Or you could go for liquid nitrogen cooling; the record is about 8 GHz.

But look at the number of cores we have now and the parallel power we have available on the average GPU.

I would not say we hit the clock speed wall. Scientists thought 200 MHz was the speed limit for the tech, and today CPUs running between 3 GHz and 4 GHz are quite common. If you overclock, you can easily play with speeds beyond 4 GHz.

I suspect Intel could easily release a 5GHz single core CPU if they wanted to, but it would deliver far less performance than a 3GHz quad core. So no, I don’t think we hit a clock speed wall. The issue is more about focusing on what will deliver the best performance at various price points and thermal limits.

The current focus on adding cores makes sense. Adding more cores (instead of just clock speed) has huge benefits in workstations and servers. If you have a server running hundreds of threads, a dozen cores will handily outperform a single core. Even in a typical desktop system, a quad core handily outperforms a single core.

Good thing then that Moore’s Law has absolutely nothing to do with clock speeds.

I think the next thing could be graphene transistors. Graphene could replace silicon, and since it doesn’t produce as much heat as silicon chips and can be fabricated at smaller scales, the clock race could pick up again, and it would be a core race too.

You are ignoring the fact that we switched from single to multiple cores during that same time. Processing speed is only one metric when determining performance.

Compare CPUs to car engines. Clock speed is like RPM. Core count is like displacement. By your logic, Honda Civics are better than Corvettes because the little engines rev higher. Yet the Corvette is clearly faster, because displacement matters a lot as well.

Compare two CPUs as an example. You can have an Intel Pentium 4 at 3 GHz, and I will take an Intel Xeon E5-2699 v3 at 2.3 GHz. The Xeon has 18 modern CPU cores, each running at 2.3 GHz; the Pentium 4 has one old CPU core running at 3 GHz. The Xeon outperforms the Pentium 4 by a massive margin.
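
Here is the back-of-the-envelope math behind that claim (the instructions-per-cycle figures are my own rough guesses, not measured numbers):

```cpp
// Back-of-the-envelope aggregate throughput: cores x clock x
// instructions-per-cycle. The IPC figures are rough guesses.
#include <cstdio>

int main() {
    double p4   = 1.0  * 3.0e9 * 1.0;  // Pentium 4: 1 core, 3.0 GHz, ~1 IPC
    double xeon = 18.0 * 2.3e9 * 4.0;  // Xeon E5-2699 v3: 18 cores, 2.3 GHz, ~4 IPC
    printf("Xeon / Pentium 4 throughput ratio: ~%.0fx\n", xeon / p4);  // ~55x
    return 0;
}
```

Even with generous rounding, the multi-core chip wins by well over an order of magnitude.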

Your graph ignores that obvious fact. If you graph total system performance instead of just clock speed, you will see that total system performance has gotten a lot faster in recent years thanks to additional cores.

@ShilohGames Yes, I am just pointing out that you probably won’t get that 50 GHz processor!

Although if you have 25 x 2 GHz cores on a chip, could you class that as a 50 GHz chip?

Yeah, I doubt we will see a 50 GHz silicon based CPU. But we could (and probably will) see something like that using other technology. I am pretty optimistic about graphene as a possible replacement for silicon. I realize there are some tech hurdles, but graphene is awesome stuff and well worth the R&D.

We could easily see a 25 core 2 GHz silicon based CPU in a couple years, since Intel already sells an 18 core 2.3 GHz CPU. But that was not what I was referring to in my comment regarding 50 GHz. With different tech, we will likely see a sudden jump in clock speed. If graphene does not deliver, then something else will.

But even if all efforts to increase clock speed fail, there will still be massive overall performance gains in computing through additional cores and eventually through quantum computing. It is very premature to assume Moore’s Law is coming to an end.

To take advantage of a 25 x 2 GHz chip, your program needs to be written to perform tasks in multiple threads. 25 cores would do awesome as a web server, for example; each request can be served by a different core. But they would not be much better than a single core for a spreadsheet or web browsing. For the average person’s laptop, the 50 GHz chip would work much better.
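
That is just Amdahl’s law. A quick sketch of the math (the parallel fractions below are made-up illustrations, not measurements):

```cpp
// Amdahl's law: overall speedup = 1 / ((1 - p) + p / n), where p is the
// fraction of the work that can run in parallel and n is the core count.
#include <cstdio>

double speedup(double p, int n) { return 1.0 / ((1.0 - p) + p / n); }

int main() {
    printf("web server  (p=0.99), 25 cores: %.1fx\n", speedup(0.99, 25));  // ~20.2x
    printf("spreadsheet (p=0.20), 25 cores: %.1fx\n", speedup(0.20, 25));  // ~1.2x
    return 0;
}
```

So the 25-core chip is roughly a 20x win for the embarrassingly parallel server, but barely a 1.2x win for the mostly serial desktop task.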

That said, the main bottleneck today is memory access. Chips spend most of their time stalled, waiting for something from memory. Neither approach will solve that.
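
You can see that memory wall on any machine with a toy benchmark like this (absolute timings will vary by hardware; the point is the gap between the two runs, since both do exactly the same number of loads):

```cpp
// Same number of loads, two access patterns: sequential (cache- and
// prefetcher-friendly) vs shuffled (mostly cache misses, memory-stall bound).
#include <algorithm>
#include <chrono>
#include <cstdio>
#include <numeric>
#include <random>
#include <vector>

int main() {
    const size_t N = 1 << 24;  // 16M ints, ~64 MB (bigger than any cache)
    std::vector<int> data(N, 1);
    std::vector<size_t> idx(N);
    std::iota(idx.begin(), idx.end(), (size_t)0);

    auto run = [&](const char* label) {
        auto t0 = std::chrono::steady_clock::now();
        long long sum = 0;
        for (size_t i : idx) sum += data[i];
        auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(
                      std::chrono::steady_clock::now() - t0).count();
        printf("%-10s sum=%lld, %lld ms\n", label, sum, (long long)ms);
    };

    run("sequential");
    std::shuffle(idx.begin(), idx.end(), std::mt19937{42});  // defeat the prefetcher
    run("shuffled");
    return 0;
}
```

On a typical desktop the shuffled run is several times slower, even though the CPU does no extra arithmetic.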

Besides what the article says, there are other possible sources of innovation. Off the top of my head:

  • 3D chips. Right now chips are mostly 2D, but more and more they are being built in multiple layers, so more transistors can be crammed into the same space.
  • Architecture changes. The dominant general-purpose processors today can be described as superscalar and out-of-order, and they have remained like this for many years. One of the most fascinating developments I have seen in a while is the Mill CPU. Whether it works remains to be seen, but it definitely shows that there is plenty of room for innovation in processor design.
  • Optical computing. Maybe a ways off in the future, but in theory we could use light instead of electricity to perform computation.
  • Quantum computing. The holy grail. This would completely blow away tasks that require large amounts of similar computations, like graphics, encryption and physics.
  • MRAM, FRAM, PRAM, others. These are potential candidates to replace both flash and DRAM, which would be a massive performance gain for any IO task.
  • Who knows what else.

I don’t think we need to worry about chips not getting faster any time soon.