With the way everything is progressing - both hardware and software - it won't be too much longer. A few of the tech reviewers I watch on YouTube have started seeing performance problems with 4C/4T CPUs in some games; the problems only disappear once they move to 4C/8T or 6C/6T.
Within the next year or two we may reach the point that you can’t play current games with 4C/4T.
Don't start throwing phrases like 'Hollywood-level raytracing' around like that, because you are way, way off when you do.
If you watch that entire Nvidia presentation, you will see there is a section that actually deals with the demands of Hollywood-type rendering. And it is miles away from realtime, and miles away from a single $10,000 Quadro card.
Rather, in the example he gave, they looked at using one rack of 4 RTX servers, each with 8 Quadro GPUs. The total cost of this rack is $500,000.
And with that setup, he talks about rendering 3 seconds of footage per hour.
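To put that in perspective, the arithmetic on those quoted figures looks roughly like this (just a back-of-the-envelope sketch using the numbers from the presentation):

```python
# Rough arithmetic on the figures quoted above (a sketch, nothing more).
rack_cost_usd = 500_000        # one rack: 4 RTX servers x 8 Quadro GPUs each
footage_seconds_per_hour = 3   # rendered output per wall-clock hour

wall_clock_seconds = 60 * 60
slowdown = wall_clock_seconds / footage_seconds_per_hour
print(f"~{slowdown:.0f}x slower than realtime")  # ~1200x
```

So even on a half-million-dollar rack, that workload is running around 1200x away from realtime.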
The realtime raytracing era is exciting, but it involves a hybrid approach and should not be confused with Hollywood-level applications whatsoever. That is not to say that people won't be able to use the affordable, realtime hybrid approach for certain cinematic purposes, but the scale of the numbers on display should make it clear that there are still two different worlds here, and blurring the lines between them can only get you so far.
3 seconds of footage in an hour on that budget is really impressive, but obviously it's not the dream of realtime gaming. I think that will come whenever it's all affordable, which is probably a few years off.
Or about 9-10 years away from running on a desktop GPU? Although ARM has been pushing raytracing demos, and there is the new RTX gaming GPU line from Nvidia due out this year.
Didn’t Intel work on a raytracing GPU not long ago?
Depends exactly what dream people have in mind. Much like the reaction to the initial RTX & DirectX Raytracing stuff at GDC, it's clear that people's imaginations have been captured by this stuff, but it's far from clear that people have limited their dreams to what the hybrid approach will actually offer with this looming generation of cards.
If we consider the upcoming RTX 2080, which may be announced in a matter of days, to be affordable, and people's dreams are safely within the boundaries of a hybrid approach, then we will probably start to get a much better sense of things as they pertain to game engines by the end of this year or the start of next. The hybrid approach means plenty of stuff still being rasterised, with raytracing, de-noising and low resolution combined with new anti-aliasing tricks, giving us nice area lights with soft shadows and nice reflections. It seems reasonable to expect usable implementations of those things for realtime in game engines with higher-end cards.

Whether we can also squeeze decent realtime ray-traced GI on top of that with these cards remains to be seen. Maybe we will be able to with certain kinds of games/scenes, but this is the sort of area where people's dreams and expectations might start to diverge from this new generation of cards; it's unclear to me at present. I'm certainly excited to even get one of these features at performant speeds in game engines I have access to, so I'm looking forward to the next 2 years a lot.
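For anyone wondering what that hybrid frame roughly looks like, here is a very loose sketch in Python. The function names, resolutions and buffer layouts are all made up for illustration; this is not any real engine's API, just the shape of the approach described above:

```python
# Very loose sketch of a hybrid frame; all names and buffers are hypothetical.
import numpy as np

W, H = 1920, 1080            # display resolution
RT_W, RT_H = W // 2, H // 2  # ray-traced passes often run at reduced resolution

def rasterize_gbuffer():
    # Rasterisation still does the bulk of the work: depth, normals,
    # albedo, motion vectors for the temporal passes.
    return {name: np.zeros((H, W, 3), dtype=np.float32)
            for name in ("depth", "normal", "albedo", "motion")}

def trace_rays(gbuffer, rays_per_pixel=1):
    # Low sample count, low resolution: a noisy estimate of shadows,
    # reflections, AO etc. (random noise stands in for the real thing here).
    samples = np.random.rand(rays_per_pixel, RT_H, RT_W, 3).astype(np.float32)
    return samples.mean(axis=0)

def denoise(noisy, gbuffer):
    # An AI/temporal denoiser turns the sparse samples into a clean signal.
    return noisy  # placeholder

def upscale_and_antialias(image):
    # Upscale back to display resolution; TAA-style tricks hide the seams.
    return np.kron(image, np.ones((2, 2, 1), dtype=np.float32))

def render_frame():
    g = rasterize_gbuffer()
    shadows     = denoise(trace_rays(g), g)  # soft shadows from area lights
    reflections = denoise(trace_rays(g), g)  # glossy reflections
    rt = [upscale_and_antialias(x) for x in (shadows, reflections)]
    # Composite the ray-traced layers over the rasterised shading.
    return g["albedo"] + sum(rt) / len(rt)

print(render_frame().shape)  # (1080, 1920, 3)
```

The point being that rasterisation still produces most of the frame, and the ray-traced passes are sparse, noisy and leaning heavily on the denoiser.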
I should probably know better than to engage with your constant failures to grasp the detail of technology, but in any case:
a) price/performance will not evolve in such a simplistic manner.
b) even if it did, that's still only enough to render 3 seconds of 'Hollywood' footage in 1 hour, so nothing like realtime.
You are barking up the wrong tree. Nothing that has been announced is going to magically cause the world of realtime graphics to completely leave behind the 'rasterise and fake various things' approach in the next 10 years. What is on offer, soon, is the ability to fake slightly fewer things, including a number of things that can be very pretty indeed, and that people have every right to get excited about. But it's way too easy to use this exciting starting point to leap off into utter nonsense.
It will also depend on what AMD get up to. For a lot of years AMD remained second fiddle, competing mainly on cost-effectiveness. After winning several consoles' worth of supply, it's evident they have enough room to compete hard and are happy to do so.
Even if we could predict the price changes in that class of hardware, it would still not give us a good picture of the future of the realtime side of things. There are a number of reasons for this, but since I have already said a lot, I will just pick one factor for now:
For the ‘Hollywood type rendering’ in the nvidia presentation, a big factor was how much framebuffer memory is available to render a complex scene. In this case he makes a big deal about the new bridge that enables two 48GB cards to share their memory, giving 96GB total.
I'm reasonably confident that when we get a range of consumer realtime ray-tracing games that really start to live up to people's dreams, enough 'faking it' will still be employed that we don't need 96GB of videocard memory to get results that make people happy. So that's one reason I'm not going to try to extrapolate an affordability timescale using a very expensive RTX render farm as the starting point!
If competition goes well on that front, I can see it affecting what percentage of PC gamers end up with a capable enough card to do some of this hybrid stuff at various stages. But where I think the action really is, as far as practical realtime techniques for the upcoming generation of cards goes, is what talented people on the graphics programming side of things are able to manage. Given that the techniques which will now be doable in realtime are mostly only becoming practical because of things like AI-based de-noising, it would probably be foolish of me to ignore the possibility that all manner of clever stuff may be done by developers in the next few years, delivering plenty of lovely results, maybe beyond the handful of possibilities that have been shown off so far.

Glossy presentations that like to throw out one-liners about the end of 'fake it' techniques are a bit misleading, because I think a lot of what is going to be on offer are new forms of faking it. There is nothing wrong with that; it will yield some great stuff, despite not really being the true, pure dream of realtime raytracing that people have hankered after for many decades already.
Technically speaking, nobody mines bitcoin with a GPU. Any direct bitcoin mining currently taking place is being done using ASICs instead of GPUs. There are lots of people mining altcoins using GPUs, though. Luckily for gamers, altcoin values have been relatively weak this summer, so the crypto-mining demand for GPUs has dropped off considerably.
This. Between the next generation of cards coming out now (it's all but confirmed at this point) and the mining situation you mentioned, manufacturers are finding themselves with a large number of cards they suddenly need to be rid of.
I happened to be checking a subreddit dedicated to hardware sales (/r/buildapcsales) and picked up a new 1080 directly from the manufacturer for hundreds of USD less than what it was going for months ago, and over a hundred below MSRP.
My 780 Ti got replaced by a GTX 980 because the Ti basically died. One of the 980's fans stopped spinning a few months ago, but it's a pretty cool-running card, so it doesn't need all that much cooling.
Ultimately I'm probably just going to leave it and buy a new GPU when I get a new rig. I tend to prefer to replace the whole shebang at once.
I checked my hardware survey page on Steamworks. Exactly 1 percent of my customers have done the survey, which is not a lot in a small market like VR.
Anyway, here it is
It doesn't say whether it's an i5 without HT or not. But I also got these statistics, though they're grouped into different CPU categories.
edit: haha, 50% of those customers have a VR headset, for a VR-only game. I think you should take Steam statistics lightly.
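Just to illustrate why a sample that small is shaky, here is a rough sketch of the sampling error you would expect on a stat like that. The respondent count is made up; plug in your own:

```python
# Rough binomial margin of error for a survey stat, with hypothetical numbers.
import math

respondents = 50        # made-up figure: 1% of a small VR player base
observed_share = 0.50   # e.g. "50% have a VR headset"

std_error = math.sqrt(observed_share * (1 - observed_share) / respondents)
margin_95 = 1.96 * std_error
print(f"50% +/- {margin_95 * 100:.0f} points at 95% confidence")  # about +/- 14 points
```

And that is before self-selection in who answers the survey, which is probably the bigger problem for a VR-only game.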
Honestly, I dunno why everyone wants realtime raytracing. It may sound like a dream, but as we approach that goal, the gains we've made should enable better algorithms that allow for better approximations… I don't know if we'll ever get there, but I can imagine we will get pretty close. If there's anything I've learned, it's that nobody can predict the future.