It seems like useful hardware, I mean wouldn’t it be a good idea to have a card that can be dedicated to physics, AI or other stuff? Possibilities are endless, right?
Don’t most people have too many cores that aren’t doing anything?
It’s been done (see PhysX); hardly anybody was interested.
–Eric
I thought they were too slow compared to dedicated hardware to be worth the effort? And then multi-core came along and, well… what RockoDyne said.
You can buy a PowerVR ray tracing board if you want to knock yourself out. There are plenty of these custom boards around; they’re just fairly pointless to support.
Wow, that sounds awesome, I’m checking it out.
FPGAs seem like a great prototyping tool on the way to an ASIC, but if you want to push the envelope with AI, physics, etc., modern CPUs/GPUs have plenty to offer.
I haven’t seen any recent studies, but you need some pretty intense logic to replicate a modern GPU/CPU design.
The biggest issue would be putting together high-performance FPGA logic that can beat a GPU floating-point unit, or even a CPU with AVX2. (FPGAs are slower than ASICs, so for plain vector math you might end up with a slower solution; see the sketch below.)
Then you have the issues of cache logic, branching, memory controllers, etc. For Intel, those took decades and billions in R&D to perfect.
Also, FPGAs are another beast to design for… don’t expect to write C++ with “Edit and Continue” in a Visual Studio 2013 debugger environment.
Read about the AVX-512 Foundation instructions to see what gem Intel will finally deliver for people in need of raw power.
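If it helps to picture that “plain vector math” baseline, here’s a minimal AVX2 sketch (my own toy example, array contents and names invented, compile with something like g++ -mavx2): one instruction adds eight packed floats at a time, which is roughly what any FPGA design would have to outrun.

```cpp
// Minimal sketch: adding two float arrays with AVX2 intrinsics,
// 8 single-precision floats per instruction. Purely illustrative values.
#include <immintrin.h>
#include <cstdio>

int main() {
    alignas(32) float a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    alignas(32) float b[8] = {10, 20, 30, 40, 50, 60, 70, 80};
    alignas(32) float c[8];

    __m256 va = _mm256_load_ps(a);      // load 8 packed floats (32-byte aligned)
    __m256 vb = _mm256_load_ps(b);
    __m256 vc = _mm256_add_ps(va, vb);  // one instruction adds all 8 lanes
    _mm256_store_ps(c, vc);

    for (int i = 0; i < 8; ++i) printf("%.0f ", c[i]);  // 11 22 33 ...
    printf("\n");
    return 0;
}
```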
Where does the FPGA plug into my iPhone? Won’t it affect battery life? (I was programming FPGAs and designing ASICs in VHDL 20 years ago; that’s what my PhD was all about.)
Wasn’t there a big wave of change looming on the CPU horizon, in the form of the Memristor?
NB: it’s the fourth fundamental circuit element, and it can remember its last state without power!
It was going to replace flash memory and allow for FPGA-style processing.
Why move the data from the RAM all the way across the motherboard to the CPU when you could reprogram the RAM to process the data?
So it could massively increase the size of a system’s memory (a memristor takes up less space), reduce power requirements, or boost battery life.
But don’t hold your breath; it’s still just over the horizon.
First Person Gaming Apparatus?
Forensic Palindrome Generating Algorithm?
Fomenting Pathetically Grandiloquent Arrogance?
Flight Path Grappling Anomalies?
Flippantly Processed Goonsquad Aggression?
Fascist Planning Group Agglomeration?
Fallacious Paradigmatically Generated Alphabetization?
Facetiously Presented General Acronyms?
Yuppers…them FPGA’s certainly have their place in the gaming hardware paradigm. WTFFIGGN is an FPGA?
Field-programmable gate array. Basically a customisable chip that you can configure dynamically to do whatever digital logic you want.
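If a rough analogy helps: an FPGA is mostly a huge grid of small look-up tables (LUTs) plus programmable routing, and “configuring” it means rewriting the bits inside those tables. Here’s a toy C++ sketch of a single 4-input LUT (my own illustration, not real FPGA tooling); the same element becomes an AND or an XOR just by changing its configuration bits.

```cpp
// Toy model of one FPGA logic element: a 4-input look-up table.
// The 16 configuration bits ARE the "program"; the inputs just index into them.
#include <bitset>
#include <iostream>

struct Lut4 {
    std::bitset<16> config;  // 16 configuration bits

    bool eval(bool a, bool b, bool c, bool d) const {
        unsigned idx = (a << 3) | (b << 2) | (c << 1) | d;  // inputs form an address
        return config[idx];  // output is whatever bit was programmed there
    }
};

int main() {
    Lut4 and4;                // configure as a 4-input AND:
    and4.config.set(0b1111);  // only the all-ones input pattern outputs 1

    Lut4 xor2;                // same hardware, different bits: XOR of inputs a and b
    for (unsigned i = 0; i < 16; ++i)
        xor2.config[i] = (((i >> 3) & 1) ^ ((i >> 2) & 1)) != 0;

    std::cout << and4.eval(1, 1, 1, 1) << " "   // prints 1
              << xor2.eval(1, 0, 0, 0) << "\n"; // prints 1
    return 0;
}
```

A real FPGA has a huge number of these plus flip-flops and routing, and the synthesis toolchain (not you) works out which bits to load.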
Don’t forget the biggest wow factor of the Memristor… instant boot times.
That’s nice, but did I mention that a Memristor is very similar to a neuron in your brain? Currently, if you want to simulate neurons in hardware, you need a whole lot of logic and memory circuits; with a memristor you almost have a neuron in a single circuit element (see the sketch below).
It could revolutionise artificial intelligence: think of a bank of self-configuring AI chips that can learn, neural networks on a chip that can be linked together into nodes.
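Just to make that “whole lot of logic and memory” point concrete, here’s a tiny conventional-software sketch of a single artificial neuron (entirely my own illustrative numbers and names): weights held in memory, one multiply-accumulate per input, then a nonlinearity. That’s the kind of memory-plus-arithmetic a memristor-based element would supposedly collapse into a single device.

```cpp
// A single artificial neuron simulated the conventional way:
// weights stored in memory, a multiply-accumulate per input, then a sigmoid.
#include <cmath>
#include <cstddef>
#include <cstdio>
#include <vector>

double neuron(const std::vector<double>& inputs,
              const std::vector<double>& weights, double bias) {
    double sum = bias;
    for (std::size_t i = 0; i < inputs.size(); ++i)
        sum += inputs[i] * weights[i];    // one multiply-accumulate per "synapse"
    return 1.0 / (1.0 + std::exp(-sum));  // nonlinearity (sigmoid)
}

int main() {
    std::vector<double> in = {0.5, 0.2, 0.9};   // made-up input activations
    std::vector<double> w  = {0.4, -0.6, 0.1};  // made-up synaptic weights
    std::printf("activation = %f\n", neuron(in, w, 0.05));
    return 0;
}
```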
Sounds interesting
CUDA physics on a GPU? Yes please.
This is basically what people are using compute cores for already.
The Commodore 64 has been recreated on an FPGA chip.
OK, I get the idea. You can restructure the chip into various logic gates on the fly. How would this be implemented in, for example, the Unity SDK? This seems to be near machine-level coding to my immediate sensibilities. Make a coding error and the chip would no longer work as advertised? I would like to understand the theory behind this.