How to initialize a mesh's graphics buffer without calling SetXXXData from the CPU?

I tried calling Mesh.SetIndexBufferParams to declare the length of a mesh's index buffer, but the manual says the index buffer contents will be uninitialized and that SetIndexBufferData should be used to fill them.
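For reference, the documented CPU-side pattern looks roughly like this (a minimal sketch; the triangle data is purely illustrative):

```csharp
using UnityEngine;
using UnityEngine.Rendering;

public class CpuIndexUpload : MonoBehaviour
{
    void Start()
    {
        var mesh = new Mesh();

        // Declare the index buffer layout: 3 indices, 32-bit format.
        // After this call the buffer exists on the GPU but is uninitialized.
        mesh.SetIndexBufferParams(3, IndexFormat.UInt32);

        // The step the question wants to skip: uploading data from the CPU.
        var indices = new uint[] { 0, 1, 2 };
        mesh.SetIndexBufferData(indices, 0, 0, indices.Length);
    }
}
```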

But I don't want that extra CPU-to-GPU transfer; I would rather send a much smaller job-config array to a compute shader and let it generate the mesh's graphics buffers on the GPU.

So I wonder: is there a CPU-side method that only allocates a piece of GPU memory for the mesh, or can I point the mesh's index buffer at another GraphicsBuffer that I created myself?

Solved. First, the buffer becomes accessible once the buffer params have been set via SetIndexBufferParams.
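A minimal sketch of that flow, assuming Unity 2021.2+ and a compute shader with a kernel named FillIndices writing to a buffer called _Indices (both names are hypothetical):

```csharp
using UnityEngine;
using UnityEngine.Rendering;

public class GpuMeshFill : MonoBehaviour
{
    public ComputeShader generator; // hypothetical shader asset

    void Start()
    {
        var mesh = new Mesh();

        // Allow raw (ByteAddressBuffer) access before allocating the buffer.
        mesh.indexBufferTarget |= GraphicsBuffer.Target.Raw;

        const int indexCount = 3 * 1024;
        // Allocates the GPU index buffer; its contents are uninitialized.
        mesh.SetIndexBufferParams(indexCount, IndexFormat.UInt32);

        // After SetIndexBufferParams, the buffer can be fetched and bound.
        GraphicsBuffer indexBuffer = mesh.GetIndexBuffer();
        int kernel = generator.FindKernel("FillIndices"); // hypothetical kernel
        generator.SetBuffer(kernel, "_Indices", indexBuffer);
        generator.SetInt("_IndexCount", indexCount);
        generator.Dispatch(kernel, indexCount / 64, 1, 1);

        // Release the handle as soon as we are done with it.
        indexBuffer.Dispose();

        // Describe the submesh so the mesh knows how many indices to draw.
        // (Vertex buffer setup via SetVertexBufferParams is omitted here.)
        mesh.subMeshCount = 1;
        mesh.SetSubMesh(0, new SubMeshDescriptor(0, indexCount),
                        MeshUpdateFlags.DontRecalculateBounds);
    }
}
```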

Then the key step: when you are finished using the GraphicsBuffer obtained from a mesh, release it right away; the next time you call GetIndexBuffer (or GetVertexBuffer), the mesh will return a fresh buffer handle.

Otherwise, the stale buffer with the old length, which should already have been released, is handed back to you, and that can cause index out-of-range exceptions.
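The release pattern described above could be sketched with a `using` statement, so Dispose runs even on an exception (the shader buffer names remain illustrative):

```csharp
using UnityEngine;
using UnityEngine.Rendering;

public static class MeshBufferUtil
{
    // Resize a mesh's index buffer and fill it on the GPU.
    // Fetch a fresh handle after every resize and dispose it when done;
    // holding on to an old handle can leave you writing through a buffer
    // of the old length.
    public static void Regenerate(Mesh mesh, ComputeShader shader,
                                  int kernel, int newIndexCount)
    {
        mesh.SetIndexBufferParams(newIndexCount, IndexFormat.UInt32);

        using (GraphicsBuffer indices = mesh.GetIndexBuffer())
        {
            shader.SetBuffer(kernel, "_Indices", indices); // hypothetical name
            shader.SetInt("_IndexCount", newIndexCount);
            shader.Dispatch(kernel,
                            Mathf.CeilToInt(newIndexCount / 64f), 1, 1);
        } // Disposed here: the next GetIndexBuffer call returns a new handle.
    }
}
```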