I’m trying to render a camera at half resolution and then upsample the resulting render target to full resolution.
So if Screen.width = 640 and Screen.height = 480, the camera should render at 320x240 and then upsample the image to 640x480 using a custom shader (hq2x, xBR, etc).
I’ve tried different methods, but all of them feel wrong and very hacky. Basically, I always have to display the resulting, upscaled image through the GUI, which seems really inelegant; I’d rather use OnRenderImage / Graphics.Blit.
Also, none of the methods I tried work on the Ouya (and, if I remember correctly, on other Tegra/mobile devices).
Here’s one of my current solutions:
RenderTexture rtFull;   // full-size render target
Material mtlUpscale;    // upscaling material (hq2x)
int upsampleFactor = 2; // render at half res
Camera downSampleCamera;
Rect orgCameraRect;
void Start () {
    downSampleCamera = GetComponent<Camera>();
    orgCameraRect = downSampleCamera.rect; // Rect is a struct, so this is a copy
    rtFull = new RenderTexture(Screen.width, Screen.height, 0, RenderTextureFormat.Default, RenderTextureReadWrite.Linear);
    // Shrink the viewport so the camera only renders into a quarter of the full-size target
    downSampleCamera.rect = new Rect(orgCameraRect.x / upsampleFactor, orgCameraRect.y / upsampleFactor, orgCameraRect.width / upsampleFactor, orgCameraRect.height / upsampleFactor);
    downSampleCamera.targetTexture = rtFull;
    mtlUpscale = new Material(Shader.Find("Hidden/Upsampler/hq2x"));
}
void OnRenderImage(RenderTexture src, RenderTexture dest) {
    src.filterMode = FilterMode.Point; // set filtering of the source image to point for hq2x to work
    Graphics.Blit(src, dest, mtlUpscale); // upscale the image
}
void OnGUI()
{
    // Display the image on screen... this is an ugly solution...
    GUI.DrawTexture(new Rect(orgCameraRect.x * Screen.width,
                             orgCameraRect.y * Screen.height,
                             rtFull.width,
                             rtFull.height),
                    rtFull);
}
So what’s the correct way of doing this? The method above is ugly and, as stated, doesn’t work on the Ouya.
We really need this to increase the performance of our game In Between on mobile devices / the Ouya.
If I don’t use a separate render texture, the destination render target in OnRenderImage will only be half res (as the camera is half res, too). In order to upsample from half res to full res I need a full-res render target.
Without a separate render target, the final image stays at half res.
Another option is to write custom shaders that build the upscaling algorithm into the original rendering pass, straight to the backbuffer, removing the need for the render texture altogether.
Hmm… one would think setting Camera.rect wouldn’t shrink the render target to the rect’s size. Have you tried overriding Camera.rect in OnPreRender and setting it back in OnPostRender? That might trick it into using the full target size.
If that doesn’t work, you could try fiddling with the camera’s projection matrix manually…
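Something along these lines is what I mean, as a minimal untested sketch (field names are placeholders; whether it actually fools the image-effect pipeline is exactly the question): leave Camera.rect at full size everywhere except during the actual render.

Camera cam;
Rect fullRect;
int upsampleFactor = 2;

void Start() {
    cam = GetComponent<Camera>();
    fullRect = cam.rect; // the camera keeps its full-size rect outside of rendering
}

void OnPreRender() {
    // Shrink the viewport only for the actual render
    cam.rect = new Rect(fullRect.x, fullRect.y,
                        fullRect.width / upsampleFactor,
                        fullRect.height / upsampleFactor);
}

void OnPostRender() {
    // Restore the full rect so everything downstream sees a full-size camera
    cam.rect = fullRect;
}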
How would I do that? Replacing (or adding to) the actual model shaders with an upscaling filter won’t help me. I need to apply the upscaling in screen space, not per model. The idea behind all this is to reduce the overall pixel count the camera has to render, so rendering at full screen with an upscale filter on each model wouldn’t help me here, except to kill the FPS even more.
Or did I get you wrong and you mean something completely different?
Yes, I already tried setting Camera.rect in OnPreRender and OnPostRender. It doesn’t work (and it makes sense that it doesn’t, but once you’re desperate you try lots of things that shouldn’t work).
Changing the camera matrix might work. I’m not sure if it will lead to a performance increase, but I’ll definitely give it a try. Thanks for the idea!
I want to upscale the image with an upscale shader like hq2x, xBR, SaI, etc. Using Screen.SetResolution just scales everything down, and the resulting image is then scaled back to full size on the monitor, resulting in a very blurred image. You can try that for yourself: play a game and set its resolution to the lowest one available. The resulting image on your screen will be very blurry. Using an upscale shader gets rid of most of the blurriness. Also, Screen.SetResolution scales down the GUI too, resulting in a blurred GUI. Using an upscale filter, the GUI is rendered at full size and only the camera view needs to be upscaled. This way you get a crisp GUI and an acceptable game view.
Why is your original render target full resolution? Isn’t that one supposed to be half resolution?
I’d say these are the steps:
Render the whole scene at half resolution. This could be achieved by simply adding a camera that’s a copy of the main camera and setting a half-resolution RenderTexture as its target.
The main camera now only needs to upscale the half-size texture; it doesn’t need to render anything else. I think you can set it to render an unused layer, so it skips rendering everything. Then you can do a little hack in a post-processing script: instead of reading the given source, you read your half-size texture. (This does break the intention of the post-process pipeline a bit, but it keeps things fairly simple.)
public RenderTexture half;  // the half-resolution texture the other camera renders into
public Material mtlUpscale; // hq2x upscale material

void OnRenderImage(RenderTexture src, RenderTexture dest) {
    // We are completely ignoring src
    half.filterMode = FilterMode.Point; // set filtering of the source image to point for hq2x to work
    Graphics.Blit(half, dest, mtlUpscale); // upscale the image
}
Of course you must make sure your “half” camera renders before the main camera, and any additional post-process effects need to be applied after the upscale pass.
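For step 1, the duplicate camera could be set up roughly like this (untested sketch, names are placeholders; the created texture is what goes into the “half” field of the upscale script above):

public Camera mainCamera; // the camera that does the upscale in OnRenderImage
RenderTexture half;       // hand this to the upscale script's "half" field
Camera halfCamera;

void Start() {
    half = new RenderTexture(Screen.width / 2, Screen.height / 2, 24);
    half.filterMode = FilterMode.Point;

    GameObject copy = Instantiate(mainCamera.gameObject) as GameObject;
    halfCamera = copy.GetComponent<Camera>();
    halfCamera.targetTexture = half;
    halfCamera.depth = mainCamera.depth - 1; // lower depth = renders first
    // Strip the upscale / post-process scripts from the copy here,
    // otherwise they run on the half-res camera as well.
}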
A better way would of course be not to use a second camera. In that case you have to crop the rendering into a quarter of the backbuffer and then post-process that quarter back to full screen. That probably requires some adjustments to any off-the-shelf hq2x shader (the input texture coordinates will need some adjustment). I guess that’s also pretty much what Dolkar already said.
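In code I’d picture that single-camera variant roughly like this, assuming the rect trick actually gives you a full-size source with the rendering confined to the lower-left quarter, and assuming the upscale shader applies the _MainTex scale/offset (both of which are open questions here):

void OnRenderImage(RenderTexture src, RenderTexture dest) {
    src.filterMode = FilterMode.Point;
    // Sample only the lower-left quarter of the source and stretch it to full screen.
    // Only works if the shader applies _MainTex_ST to its texture coordinates.
    mtlUpscale.SetTextureScale("_MainTex", new Vector2(0.5f, 0.5f));
    mtlUpscale.SetTextureOffset("_MainTex", Vector2.zero);
    Graphics.Blit(src, dest, mtlUpscale);
}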
Isn’t it hardware linear upsampling that happens automatically when using a non-native screen resolution and hence works only in full screen and only on hardware that supports that resolution?
It’s honestly not required for mobile hardware, including mini consoles like the Ouya. You make your game as you normally do, with resolution independence, then just change the res in Start and Unity handles it all internally for you, for a free speed bump.
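In code that’s just a one-liner in Start, something like:

void Start() {
    // Halve the backbuffer resolution; Unity scales the result up to the display.
    Screen.SetResolution(Screen.width / 2, Screen.height / 2, true);
}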
Why would you want to supply the shader for it? I thought your problem here was one of speed.
I don’t foresee any point in using a (non-bilinear) scaling algorithm; it’ll be slower than rendering at native res. It would be useful for pixel art, but if you don’t want pixel art, then don’t make pixel art. You aren’t making pixel art.
That said, I would like to know how we should handle this kind of thing. In an older thread on the same topic, Eric advised me to put the render texture into a GUI Texture, which works, but I don’t know if that’s the cleanest or fastest solution. Here’s my current process:
Set low-res point filtered texture as render target for cameras.
Blit those into larger, bilinearly-filtered textures, of an integral multiple size.
Assign those textures to the GUI Textures’ texture properties.
When do I do steps 2 and 3? OnGUI is all I can get to work reliably, but I can’t believe that’s what I should be using.
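For reference, a rough sketch of what I’m doing now (field names are placeholders), driven from OnGUI because that’s the only place it works reliably for me:

public RenderTexture lowRes;   // low-res, point-filtered camera target (step 1)
public RenderTexture upscaled; // integral multiple of lowRes size, bilinear filtering
public GUITexture display;     // legacy GUITexture that shows the final image

void OnGUI() {
    if (Event.current.type == EventType.Repaint) {
        Graphics.Blit(lowRes, upscaled); // step 2: integer-factor upscale
        display.texture = upscaled;      // step 3: hand the result to the GUITexture
    }
}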
The cost of upscaling is constant with screen size, whereas rendering to the original target has a variable cost based on what and how much you’re rendering. That said, you’re probably right in this case, given the screenshots and the resolutions in question.
I agree, it’s ridiculous. However, I haven’t found a better way of doing it. There are a lot of inconsistencies and brokennesses in Unity that prevent various other methods from working reliably on all platforms and in the editor.
If I look at the screenshots, it does resemble pixel art a bit, so I assume that’s the logic behind the hq2x upscale. In terms of performance it might not be the most logical thing to do, but it can still perform better than the full-resolution render, provided the fill rate is a major bottleneck and the overdraw is high enough to justify another full-screen upscale pass.
In general, a bilinear upscale looks like smudge, while a hq2x upscale looks like sweet retro times, so that choice I can understand. Microsoft actually made the best-looking retro upscale algorithm I’ve seen yet, which I find surprising, because the retro feeling is a major selling point of a large competitor in the game market. The Microsoft paper is located here, by the way. I don’t think you’ll be able to get it running on a mobile device, but it looks so nice it makes you want to start rendering everything at thumbnail size.
That’s a very nice-looking vectorization algorithm, but seeing as the average computation time for their tiny examples was 790ms on a 2.4GHz CPU, it’s not likely that anyone has it running in real time on anything.
Pixel art scaling algorithms for real time are all raster-based.
Well, you’re probably right. With DirectX 10+ compute shaders and random writes it might be possible to get it running, though. It’s not fully raster-based like most of these algorithms, but the first steps still are, and with DirectX 10+ you can probably even run the spline optimization on the GPU. Granted, it wouldn’t be the easiest task. 790 ms on a CPU could still be real-time on a GPU, though.
I don’t think any of those vector versions look like what the pixel artists intended. The graphics go from looking like competent people made them, when technology was limited, to looking like they were made in Illustrator, by somebody who had never used it before, but had to make a deadline in an hour.
I think pixel art upscaling algorithms are curiosities that don’t actually solve problems. Pixel art rotation, however, I think is a worthy endeavor.