Do you use a render texture for your pixel art?

So, presumably, by using a render texture at the same pixel density as your pixel art you would get a pixel-perfect game as far as graphics go. This would mean that characters can only move pixel by pixel and all animations would look like sprite movement even if they aren’t… Is that how it works? I don’t have Unity Pro so I can’t check, just wondering…

Damn, wrong section… Please move to gossip or something :slight_smile: Thank you.

You don’t need render textures at all. That doesn’t solve a single thing and has nothing to do with it. If you want pixel perfect movement, it has to be done in the shader, as this will reliably align to int pixel positions. Assuming pixel perfect is even desired :slight_smile:
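For illustration, here’s a rough script-side version of that snapping idea (not the shader approach described above; pixelsPerUnit is just a placeholder for whatever density your art actually uses):

```csharp
using UnityEngine;

// Rounds a sprite's position to the pixel grid every frame so it never sits
// "between" pixels. Note this overwrites the true position; in practice you
// might snap a child "visual" transform and keep movement logic on the parent.
public class PixelSnap : MonoBehaviour
{
    public float pixelsPerUnit = 16f;   // placeholder: match your art's import settings

    void LateUpdate()
    {
        Vector3 p = transform.position;
        p.x = Mathf.Round(p.x * pixelsPerUnit) / pixelsPerUnit;
        p.y = Mathf.Round(p.y * pixelsPerUnit) / pixelsPerUnit;
        transform.position = p;
    }
}
```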

A render texture is required for a lot of things, though, such as rotating sprites and planes while maintaining pixel-perfect behaviour.

Are most modern games that have a retro look pixel perfect? I can’t seem to recall any, so I guess it’s not significant. Every way I have tried it ends up choppy and not ideal (moving in increments of a whole pixel).

This is part of what I was thinking: wouldn’t a render texture save a large number of assets by just rotating a low-resolution asset? Then again, maybe it won’t matter, and it would sacrifice the possibility of extremely fluid animations…

I used a modified version of SimplePixelizer for this when I gave the pro version of Unity a go at home. All I had to do was add some code to where I was grabbing the texture. Thread here.

How did it turn out MarigoldFleur? Were the results good?

If you want your game to look exactly like Mode 7 stuff on the SNES? It’s actually perfect! Though you need to make extra sure that you don’t have any non-integer coordinates, otherwise you can get some really awkward scaling issues.

What do you mean by non-integer coordinates? Like position of transforms or something?

x: 12, y: 45, z: 0 = GOOD
x: 11.853, y: 45.441, z: 0.11 = BAD

But for what? As in the objects in the scene? If so, is that only because the colour is taken from the position, which would leave some strange-looking elements, or is there another reason? I would have thought that whatever was the strongest colour within that pixel would be taken as the colour?

It’ll do the latter in the original code, but this leads to a lot of colour blending and the destruction of sharp colour edges, which looks really gross in practice. I’m at home so this is just a photoshop mockup, but here’s some dork from Final Fantasy XII: Revenant Wings:


[Image: 4x scale, integer values]

[Image: 4x scale, float values]

This is exaggerated slightly for effect, but in the second image you can see how it’s averaging the pixel colour, which means the pixel scale is still 4x, but everything still manages to look blurry. By using point sampling when you capture the render texture, it looks a lot more like the first one than the second.
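For anyone who wants to try it, here’s roughly what that capture looks like as a Pro-only image effect — just a sketch, with a made-up virtual resolution; the important part is the point filtering on the upscale:

```csharp
using UnityEngine;

// Attach to the main camera (Pro only). Shrinks the camera's output to a low
// virtual resolution, then stretches it back to screen size with point
// filtering, so the upscale keeps hard pixel edges instead of averaging colours.
[RequireComponent(typeof(Camera))]
public class PointSampledCapture : MonoBehaviour
{
    public int virtualWidth = 240;    // placeholder virtual resolution
    public int virtualHeight = 160;

    void OnRenderImage(RenderTexture src, RenderTexture dest)
    {
        RenderTexture lowRes = RenderTexture.GetTemporary(virtualWidth, virtualHeight);
        lowRes.filterMode = FilterMode.Point;   // no bilinear blending on the upscale

        Graphics.Blit(src, lowRes);   // downsample to the virtual resolution
        Graphics.Blit(lowRes, dest);  // stretch back up with point sampling

        RenderTexture.ReleaseTemporary(lowRes);
    }
}
```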

What does it look like when it’s not “exaggerated slightly for effect”? I’d have thought that with point filtering you’d effectively be rounding to ints as far as sampling is concerned (because sampling pixel 42.111 would get the same value as sampling pixel 42.0) and thus there wouldn’t be any difference?

Being a Unity Indie user, I don’t have access to Render Textures. What I have is a Sprite Animation script that knows how to get the frame I want to present and replaces the sprite object’s material’s texture with that frame. I usually use an Unlit/Transparent shader, but I’m experimenting with some others at the suggestion of other people!

Certainly a render texture would be significantly more efficient than this approach. I’m working up to it…

I’m not sure why you’d say that, or why this thread has even been brought up; RenderTextures are notoriously bad for performance.

I’m also not sure why you’d need a Render Texture for pixel-art. Though, I LOVED Marigold’s art depictions of non-integer positioning! Kudos!

Gigi

@Gigiwoo and Meltdown: The difference would primarily be convenience. As it stands, I wind up storing individual frames. When it’s a frame’s turn to be shown, I pretty much perform SetPixels on a new texture until that new texture is filled. Then I assign that texture to the sprite object’s material’s texture property.

Again, I don’t own the Pro edition, so I’m not fully sure that this would necessarily be the case, but instead of having all that code, I would ideally just be able to assign the frame to the sprite object’s material’s texture property and leave it at that… right?
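Something along these lines is what I’m imagining, assuming each frame already exists as its own Texture2D (the array and frame rate are made up for the sketch):

```csharp
using UnityEngine;

// Swaps pre-made frame textures onto the quad's material instead of rebuilding
// a texture with SetPixels every frame.
public class SimpleFrameSwapper : MonoBehaviour
{
    public Texture2D[] frames;          // one Texture2D per animation frame
    public float framesPerSecond = 8f;

    Material mat;

    void Start()
    {
        mat = GetComponent<Renderer>().material;
    }

    void Update()
    {
        if (frames == null || frames.Length == 0)
            return;

        int index = (int)(Time.time * framesPerSecond) % frames.Length;
        mat.mainTexture = frames[index];
    }
}
```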

Also, what is this about the performance being bad on rendertextures?

I haven’t noticed them being inherently slower than what I’d expect from doing several render passes.

As for needing an extra texture, does rendering with a quad not cut it? It seems that setting the texture coordinates appropriately and rendering with point filtering (i.e. no sample blending) would do the trick without copying into another texture or needing a render texture. As I don’t use pixel art, I must be missing something fundamental.
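Roughly what I have in mind, assuming the frames sit in a simple grid atlas (the grid size and frame rate are invented for the example):

```csharp
using UnityEngine;

// Picks an animation frame by shifting the quad's UVs around a grid atlas,
// with point filtering so the pixels aren't smoothed. No extra texture copies
// and no render texture involved.
public class AtlasUVFrames : MonoBehaviour
{
    public int columns = 4;             // frames per row in the atlas
    public int rows = 4;
    public float framesPerSecond = 8f;

    Material mat;

    void Start()
    {
        mat = GetComponent<Renderer>().material;
        mat.mainTexture.filterMode = FilterMode.Point;  // assumes a texture is already assigned
        mat.mainTextureScale = new Vector2(1f / columns, 1f / rows);
    }

    void Update()
    {
        int frame = (int)(Time.time * framesPerSecond) % (columns * rows);
        int x = frame % columns;
        int y = frame / columns;
        // UV origin is bottom-left, so flip the row index.
        mat.mainTextureOffset = new Vector2((float)x / columns, 1f - (float)(y + 1) / rows);
    }
}
```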

Gigi

I am genuinely curious what on earth render textures have to do with pixel art, whether they be rotated or scaled or integer or floating point coordinates.
Can’t you just use your pixel art as a texture on a quad, with texture filtering turned off so it doesn’t get smoothed out?

My reasoning for using a render texture is that when my pixel art is on screen I have a virtual resolution and a physical resolution, and my virtual resolution is usually much lower than the physical one. For example, with some 16x16 sprites the virtual resolution might be 240x160, where my monitor is 1920x1080. The issue arises when I just scale the art up: while it is sometimes correct, most of the time the screen pixels can fall out of sync with the art’s pixels, which makes it look inauthentic.
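For concreteness, the kind of setup I’m after looks roughly like this — a sketch only, Pro required; 240x160 is just my example, and the whole-number scaling is my assumption about how to keep art pixels in sync with screen pixels:

```csharp
using UnityEngine;

// Renders the game at a fixed virtual resolution, then draws that image to the
// screen scaled by a whole number, so every art pixel covers an exact block of
// physical pixels and never drifts out of sync.
[RequireComponent(typeof(Camera))]
public class VirtualResolution : MonoBehaviour
{
    public int virtualWidth = 240;
    public int virtualHeight = 160;

    RenderTexture target;

    void Start()
    {
        target = new RenderTexture(virtualWidth, virtualHeight, 16);
        target.filterMode = FilterMode.Point;           // keep pixels sharp on the upscale
        GetComponent<Camera>().targetTexture = target;  // camera now draws into the texture
    }

    void OnGUI()
    {
        // Largest whole-number scale that still fits the physical screen.
        int scale = Mathf.Max(1, Mathf.Min(Screen.width / virtualWidth,
                                           Screen.height / virtualHeight));
        int w = virtualWidth * scale;
        int h = virtualHeight * scale;
        Rect dest = new Rect((Screen.width - w) / 2, (Screen.height - h) / 2, w, h);
        GUI.DrawTexture(dest, target, ScaleMode.StretchToFill);
    }
}
```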