Modifying a texture in Native code

Hey!
I’m attempting to build a C++ DLL exposing a C-style function that can be called to fill a texture with the main monitor’s contents using GDI (yes, the program will run on Windows only).

The idea is to pass a pointer to a Unity Texture2D and, in native code, fill its underlying pixel memory with the “captured” main monitor view, so the Texture2D can then be shown on a TV inside the app, displaying my / the user’s main monitor.

Here’s the native code, compiled into a DLL and dragged inside the Assets folder for integration.

#include <Windows.h>
#include <cstdlib>  // malloc / free
#include <cstring>  // memcpy_s

// GDI returns 24-bit DIB data as one byte per channel, in BGR order.
struct PIXEL_DATA
{
    unsigned char b;
    unsigned char g;
    unsigned char r;
};


extern "C" __declspec(dllexport) int DesktopCaptureToTexture(void* textureMem, int width, int height)
{
    if (textureMem == NULL) return 1;

    HDC displayDeviceContext = GetDC(NULL);

    BITMAPINFO bitmapCreateInfo = {}; // zero-initializes every field
    bitmapCreateInfo.bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
    bitmapCreateInfo.bmiHeader.biWidth = width;
    bitmapCreateInfo.bmiHeader.biHeight = height; // bottom-up rows; use -height for top-down
    bitmapCreateInfo.bmiHeader.biPlanes = 1;
    bitmapCreateInfo.bmiHeader.biBitCount = 24;
    bitmapCreateInfo.bmiHeader.biCompression = BI_RGB;

    // Note: 24-bit DIB rows are padded to 4-byte boundaries, so this size
    // is only exact when width * 3 is already a multiple of 4.
    PIXEL_DATA* pixelBits = (PIXEL_DATA*)malloc(width * height * 3);
    if (pixelBits == NULL) return 2;

    HDC displayDeviceContextMemory = CreateCompatibleDC(displayDeviceContext);
    HBITMAP ddb = CreateCompatibleBitmap(displayDeviceContext, width, height);
    HGDIOBJ previousBitmap = SelectObject(displayDeviceContextMemory, ddb);

    BitBlt(displayDeviceContextMemory, 0, 0, width, height, displayDeviceContext, 0, 0, SRCCOPY | CAPTUREBLT);

    // Deselect the bitmap first (GetDIBits requires that the DDB is not
    // selected into a device context), and pass the buffer itself, not
    // the address of the pointer (&pixelBits would smash the stack).
    SelectObject(displayDeviceContextMemory, previousBitmap);
    GetDIBits(displayDeviceContext, ddb, 0, height, pixelBits, &bitmapCreateInfo, DIB_RGB_COLORS);

    // Now that the data is loaded, we can fill in the texture memory.

    memcpy_s(textureMem, width * height * 3, pixelBits, width * height * 3);

    DeleteObject(ddb);
    DeleteDC(displayDeviceContextMemory);  // memory DCs are deleted...
    ReleaseDC(NULL, displayDeviceContext); // ...while screen DCs are released

    free(pixelBits);
    return 0;
}

The managed code side of things isn’t worth posting here: it just determines the width and height, calls GetNativeTexturePtr() on the target Texture2D to get a pointer, and passes it all to the native function.
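
For reference, a minimal sketch of that managed side (the DLL name and the MonoBehaviour wiring are assumptions; only DesktopCaptureToTexture matches the export above):

using System;
using System.Runtime.InteropServices;
using UnityEngine;

public class DesktopCaptureBehaviour : MonoBehaviour
{
    // "DesktopCapture" is an assumed DLL name; the entry point matches the native export above.
    [DllImport("DesktopCapture")]
    private static extern int DesktopCaptureToTexture(IntPtr textureMem, int width, int height);

    public Texture2D target;

    void Update()
    {
        // GetNativeTexturePtr returns a graphics-API-specific handle,
        // NOT a pointer to CPU-writable pixel memory (hence the crash below).
        IntPtr ptr = target.GetNativeTexturePtr();
        DesktopCaptureToTexture(ptr, target.width, target.height);
    }
}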

Here’s the issue: I get a crash that I’m 99% sure happens on the memcpy_s line. My thinking is that I mistakenly assumed the texture pointer led straight to the pixel data (set to the 24-bit RGB format). However, I’ve come to realize that’s likely not the case, as the texture should be loaded in video memory.

My question, then, is: what does the pointer actually point to? An OpenGL texture handle? I can’t seem to figure it out, so any help would be appreciated!

Thanks in advance.

PS: I’m aware the native code’s design isn’t the best, and that I should probably keep the pixel buffer holding the newest “screenshot” of the main monitor alive and only free it when the DLL gets unloaded, among other things… Just trying to get this to work for now.

Also, I DID attempt to look at the example in the documentation here: http://39.104.106.62/Manual/NativePluginInterface.html but the link to the demo is broken.

This is totally doable, and I do it in my KurtMaster2D games, which use native C to fill an RGB32 block.

The key is that on the C# side you pin the texture memory down, hand it into the DLL so it can blast pixels all over it, then unpin it and marshal it back into the Texture2D.

In fact, I wrote about it once! With code!

Pinning a texture for manipulation via native code:
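
A minimal sketch of that pinning approach (DLL and export names here are assumptions; the native function would fill the pinned buffer with 32-bit RGBA pixels):

using System;
using System.Runtime.InteropServices;
using UnityEngine;

public class PinnedCapture : MonoBehaviour
{
    [DllImport("DesktopCapture")] // assumed DLL / export names
    private static extern int DesktopCaptureToTexture(IntPtr buffer, int width, int height);

    public Texture2D target;
    private Color32[] pixels;

    void Start()
    {
        pixels = new Color32[target.width * target.height];
    }

    void Update()
    {
        // Pin the managed array so the GC can't move it while native code writes into it.
        GCHandle handle = GCHandle.Alloc(pixels, GCHandleType.Pinned);
        try
        {
            DesktopCaptureToTexture(handle.AddrOfPinnedObject(), target.width, target.height);
        }
        finally
        {
            handle.Free(); // unpin
        }

        // Marshal the filled buffer back into the Texture2D.
        target.SetPixels32(pixels);
        target.Apply();
    }
}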

Pinning the pixels themselves instead of trying to pass the texture itself… can’t believe I didn’t think of that myself ! Thanks a lot !

But that’s not really a performant way of manipulating the image. Keep in mind that the texture itself is located in native memory; the managed wrapper just copies the image data from managed memory over to the native side when you call SetPixels.

The method GetNativeTexturePtr returns a graphics-API-specific pointer / handle to the native texture resource. As you can read in the documentation, it’s a pointer to an IDirect3DBaseTexture9 for DX9, a pointer to an ID3D11Resource for DX11, and a texture name for OpenGL (so just the int handle).

So depending on your platform / graphics API, you have to interpret the pointer accordingly and use the API-specific mechanism to access the texture data.
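
As a quick illustration, you can check on the managed side which API is active before deciding how the native code should treat the pointer (a sketch; the actual texture access would still happen in the native plugin):

using UnityEngine;
using UnityEngine.Rendering;

public static class NativeTexturePtrInfo
{
    public static void LogInterpretation(Texture2D tex)
    {
        System.IntPtr ptr = tex.GetNativeTexturePtr();

        switch (SystemInfo.graphicsDeviceType)
        {
            case GraphicsDeviceType.Direct3D11:
                Debug.Log(ptr + " should be treated as an ID3D11Resource*");
                break;
            case GraphicsDeviceType.OpenGLCore:
            case GraphicsDeviceType.OpenGLES3:
                Debug.Log(ptr + " is an OpenGL texture name (GLuint)");
                break;
            default:
                Debug.Log("Unhandled graphics API: " + SystemInfo.graphicsDeviceType);
                break;
        }
    }
}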

A possible alternative could be the relatively new GetRawTextureData method. It provides a direct view of the native texture data. Unfortunately the NativeArray struct does not give you direct access to the internal pointer; NativeArray is a wrapper that accesses the data through an indexer… However, it may be possible to get your hands on the actual pointer through reflection, though that would be quite hacky ^^. You also have to be careful when you handle this pointer: once you call Apply on the texture (which has to be done from the main thread), any changes to the native texture array will be applied and the pointer is no longer valid.
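
For example, a minimal sketch of that route through the indexer (assuming the target texture was created with TextureFormat.RGBA32, so Color32 matches the in-memory layout):

using Unity.Collections;
using UnityEngine;

public class RawTextureFill : MonoBehaviour
{
    public Texture2D target; // assumed to be TextureFormat.RGBA32

    void Update()
    {
        // GetRawTextureData<T> gives a NativeArray view over the texture's
        // own CPU-side buffer - no copy is made.
        NativeArray<Color32> data = target.GetRawTextureData<Color32>();

        for (int i = 0; i < data.Length; i++)
            data[i] = new Color32(255, 0, 0, 255); // fill red, as a placeholder

        // Upload the buffer to the GPU; don't touch the view after this.
        target.Apply();
    }
}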

Hey! I have indeed tried using GDI and filling in the pixels, and as you said it works, BUT the performance is atrocious.

I have spent hours today trying to wrap my head around Direct3D 11 (which Unity uses on Windows), and I did manage some primitive texture manipulation with the actual native texture pointer as you suggested, but then I found out screen capturing isn’t possible with Direct3D either (unless the screen is showing a fullscreen app at that moment).

So I’m quite lost as to what to do. Would you (or anyone) have any ideas? Point me in the right direction? What I’m looking for is anything that will capture my main screen’s current front buffer (back buffer, maybe?) and dump it into a Unity texture to embed within my app. It needs to reach 30 FPS at the very least, since I want to use it for my streaming interface.

Well, this part is no longer Unity-specific and depends heavily on the Windows version and DX version. If you can trust some of the statements in the answers and comments on this SO question, the GDI approach should be fast enough. However, you shouldn’t allocate a new pixel buffer, create a new device context and create a new bitmap for every frame; that is most likely what is killing your performance. Per frame, calling BitBlt and GetDIBits should be enough, and the rest should be initialization / teardown, so you would keep all your native objects and buffers alive between captures.
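
On the managed side, that split could look like this (CaptureInit / CaptureFrame / CaptureShutdown are hypothetical exports; the native DLL would create the DC, bitmap and pixel buffer once in CaptureInit and only call BitBlt / GetDIBits in CaptureFrame):

using System;
using System.Runtime.InteropServices;
using UnityEngine;

public class PersistentCapture : MonoBehaviour
{
    // Hypothetical native exports: one-time setup, per-frame blit, one-time teardown.
    [DllImport("DesktopCapture")] private static extern int CaptureInit(int width, int height);
    [DllImport("DesktopCapture")] private static extern int CaptureFrame(IntPtr buffer);
    [DllImport("DesktopCapture")] private static extern void CaptureShutdown();

    public Texture2D target;
    private Color32[] pixels;

    void Start()
    {
        pixels = new Color32[target.width * target.height];
        CaptureInit(target.width, target.height); // native side allocates its GDI objects once
    }

    void Update()
    {
        GCHandle handle = GCHandle.Alloc(pixels, GCHandleType.Pinned);
        CaptureFrame(handle.AddrOfPinnedObject()); // only BitBlt + GetDIBits per frame
        handle.Free();

        target.SetPixels32(pixels);
        target.Apply();
    }

    void OnDestroy()
    {
        CaptureShutdown(); // native side frees its GDI objects and buffer
    }
}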

The last time I played around with screen capturing was probably 17 years ago, in some Delphi program, so I haven’t really looked at the current state of the technology. However, if you want to develop Windows applications, the first thing to look at is MSDN and, more generally, any Microsoft-related pages like this blog for example.

Aaah, I remember those days. :) Nice to be in Unity.

Any reason you don’t just reach for something like OBS?

I’m guessing they’re trying to create something along the lines of CodeMiko?