Cool tech: write C# code that is cross-compiled to the GPU!
OK, for me at least, Unity would need to make it so you could just tag a region of code for parallelism. But imagine what people with real programming skills could do once they can tap the power of their GPUs.
That is, unless Microsoft is already developing a .NET-to-GPU technology?
What would you code in Unity if you could unleash the power of your GPU?
Good point, but a modern gaming device has both a CPU and a GPU, so why not take advantage of both?
A simple scenario: you give your CPU a lot of processing to do, and your GPU is left waiting on the CPU.
But if the task can be done faster in parallel on the GPU, your CPU could load the task onto the GPU while it works out what is needed for the next frame. When the GPU finishes, the CPU passes it the rendering task and picks up the results.
A bit simplistic, but isn’t this the direction the industry is going?
Don’t take my word for it; check the DICE Frostbite game engine industry lectures.
What could be better for game engines and game developers than having two processors: one great for serial tasks and small multi-threading jobs, the other great at massively parallel tasks? Or, on mobile or an APU, both in a single chip.
And being able to access both with a single language.
Expanding on this (talking mostly about PC): most gamers and/or developers max out GPU time before they max out CPU time. Where possible, visual settings are typically cranked up to the point where the system is only just managing an appropriate frame rate, and this usually puts more pressure on the GPU than the CPU. So, in the case of a game or highly visual application, moving more work to the GPU, which is already under high pressure, in order to reduce load on the CPU, which is usually under less pressure, doesn’t make sense.
The exceptions are workloads that run really well on a GPU but would bog down a CPU, or less visual apps where the GPU isn’t under particularly high load.
Because they’re tech demos. They’re meant to look fancy. Would they have got you this excited if they’d shown you a number-crunching benchmark that just printed a few lines of text on the screen? I suspect not.
How many people buy GPUs based on their ability to crunch data for reports or compress video quickly? Some, but not nearly as many as the gamers who buy them to push more pixels on bigger screens for newer games.
I could be wrong, but I think that’s more about bus bandwidth (“draw calls”) than computational speed. No one number tells the whole story of a system’s performance.
I don’t think so, as CUDAfy also needs the Visual Studio C++ compiler, since it converts your C# code to CUDA or OpenCL code. For Unity this would probably work more like how Unity builds for iOS, where Unity generates an iOS/Mac project that is then compiled for Mac or iOS.
There are also additional dependencies, e.g. the CUDA SDK, that Unity games would need to include.
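For anyone curious what that translation step looks like in practice, a CUDAfy kernel is ordinary C# tagged with a [Cudafy] attribute, which CudafyTranslator turns into CUDA/OpenCL source at runtime. This is a rough sketch from memory of the CUDAfy.NET samples, so treat the exact API shape as approximate:

```csharp
using Cudafy;
using Cudafy.Host;
using Cudafy.Translator;

public class CudafyExample
{
    [Cudafy] // this method gets translated to CUDA/OpenCL source
    public static void Double(GThread thread, int[] data)
    {
        int i = thread.blockIdx.x * thread.blockDim.x + thread.threadIdx.x;
        if (i < data.Length)
            data[i] *= 2;
    }

    public static void Run()
    {
        CudafyModule km = CudafyTranslator.Cudafy();   // translate and compile the tagged methods
        GPGPU gpu = CudafyHost.GetDevice(CudafyModes.Target);
        gpu.LoadModule(km);

        int[] data = new int[1024];
        int[] devData = gpu.Allocate<int>(data);       // device-side buffer
        gpu.CopyToDevice(data, devData);
        gpu.Launch(data.Length / 256, 256).Double(devData); // grid of 4 blocks, 256 threads each
        gpu.CopyFromDevice(devData, data);             // read results back
    }
}
```

The point being: the kernel stays C#, but the toolchain still depends on the vendor SDK underneath, which is exactly the packaging problem mentioned above.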
@Imbarns Nice, but what if Unity developed a CUDAfy-like technology where you could instead write something more like this?
using UnityEngine;
using System.Collections;
using UnityEngine.GPU; // hypothetical GPU enabler

public class BufferExample : MonoBehaviour
{
    public Material material;

    const int count = 350000; // number of vertices to generate
    const float size = 5.0f;

    Vert[] points;

    [GPU DATA]
    Vert[] gpuPoints;

    struct Vert
    {
        public Vector3 position;
        public Vector3 color;
    }

    void Start()
    {
        points = new Vert[count];
        Random.seed = 0;
        for (int i = 0; i < count; i++) // make 350,000 verts with random color and position
        {
            points[i] = new Vert();
            points[i].position = new Vector3();
            points[i].position.x = Random.Range(-size, size);
            points[i].position.y = Random.Range(-size, size);
            points[i].position.z = Random.Range(-size, size);
            points[i].color = new Vector3();
            points[i].color.x = Random.value > 0.5f ? 0.0f : 1.0f;
            points[i].color.y = Random.value > 0.5f ? 0.0f : 1.0f;
            points[i].color.z = Random.value > 0.5f ? 0.0f : 1.0f;
        }

        [GPU]
        gpuPoints = points; // triggers a load of the data onto the GPU
        [END GPU]
    }

    void FixedUpdate()
    {
        for (int i = 0; i < count; i++)
        {
            points[i].position.x = Random.Range(-size, size); // slow to do random in an update, just an example
            points[i].position.y = Random.Range(-size, size);
        }

        [GPU] // triggers the generation of GPU code that is run from FixedUpdate
        for (int i = 0; i < count; i++)
        {
            // component-wise scale (Vector3 has no * operator for two vectors)
            gpuPoints[i].position = Vector3.Scale(gpuPoints[i].position, points[i].position);
        }
        [END GPU]
    }

    void OnPostRender()
    {
        Graphics.DrawProcedural(MeshTopology.Points, count, 1);
    }
}
This is only pseudo-code to give you an idea of what could be developed by Unity.
Note: You would probably still need your shader to draw the data.
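Worth noting: Unity’s existing ComputeShader / ComputeBuffer API (DX11-class hardware, so not CUDA-only) already gets you part of the way there today; the catch is that the kernel lives in a .compute file written in HLSL rather than in C#. A rough sketch of the equivalent upload-and-process flow, where the kernel name "Scale" and the buffer layout are my own placeholders:

```csharp
using UnityEngine;

public class ComputeExample : MonoBehaviour
{
    public ComputeShader shader;      // assign a .compute asset in the Inspector

    const int count = 350000;
    Vector3[] points = new Vector3[count];
    ComputeBuffer buffer;

    void Start()
    {
        buffer = new ComputeBuffer(count, sizeof(float) * 3); // 3 floats per Vector3
        buffer.SetData(points);                               // upload CPU data to the GPU

        int kernel = shader.FindKernel("Scale");
        shader.SetBuffer(kernel, "points", buffer);
        shader.Dispatch(kernel, count / 64, 1, 1);            // returns immediately; the GPU works in parallel

        // ... the CPU is free to prepare the next frame here ...

        buffer.GetData(points);                               // blocks only if the GPU hasn't finished yet
    }

    void OnDestroy()
    {
        buffer.Release(); // GPU buffers are not garbage collected
    }
}
```

with the matching kernel in HLSL:

```hlsl
// Scale.compute
#pragma kernel Scale

RWStructuredBuffer<float3> points;

[numthreads(64, 1, 1)]
void Scale(uint3 id : SV_DispatchThreadID)
{
    points[id.x] *= 2.0; // toy example: scale every position in parallel
}
```

So the missing piece isn’t GPU access as such, it’s being able to write that kernel in C# the way the pseudo-code above imagines.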
The big problem I see is that it only targets CUDA-capable cards. If it were to happen, and I’m not sure it ever would, as the benefit to Unity likely wouldn’t outweigh the development cost, they’d need to build it as a vendor-agnostic API that supported CUDA / Mantle / Metal and still fell back to CPU-only execution if the capabilities didn’t exist on the target hardware.