Can indies use this AI texture upscaling to boost their game content?

Upscaled 4K version of Metroid Prime, from the original GameCube release.

The source article also mentions that this technology has been used in DOOM.

So, are any small-team indie developers thinking about, or actually using, AI texture upscaling in their development pipeline?

If a game has strong art, it has strong art. Upres the textures, sure, why not?

If it has weak art and you upres the textures, it will multiply the weakness of the art.

I’m going to say that soon an AI will be smart enough to be trained on good artists, and it won’t just upres, but also fix your art based on what it was trained on. A primitive version of that is style transfer. Then there is also semantic inpainting, which takes just a colored silhouette indicating objects and fills in the rest.

AI will be the next programmer art, but good enough this time.

Right now the main benefit of AI for art that I’ve seen is rapid, high-quantity iteration. So it boosts design. Rather than five concept artists making 50 designs, you can spit out 500 designs in less time and without paying so many people. You just need one person with a designer’s eye to sift through the computer’s results and then take it from there.

So AI cleaning up and sharpening textures, blending colors to a more pleasing degree, or whatever, might be a small boost, like a nice post-process effect, but it cannot save weak art or be something to rely upon.

Yeah, but I mean full-blown inpainting, really. You can literally paint a stick figure now and get out a realistic character (a bit fuzzy right now), and you can probably chain multiple specialized (existing) AIs to compose a whole scene with great art. The fact that AI can manipulate visual semantics is the big game changer!

The recipe (a rough code sketch follows the list):

  • Draw stuff; pose, location, action, and semantics get parsed by the AI (see the AI that guesses drawings).
  • The second AI is a composer AI: it corrects location and pose based on composition and outputs a semantic heatmap.
  • The third AI is an inpainting AI, which converts the heatmap into a full picture.
  • The fourth AI is style transfer, which renders the picture in a given style.
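
A minimal sketch of how that chain could be wired together, assuming each stage already exists as a callable model. Note that `parse_sketch`, `compose_scene`, `inpaint`, and `transfer_style` are invented placeholder names, each standing in for a separately trained network, not any real library:

```python
# Hypothetical sketch-to-styled-picture pipeline matching the recipe above.
# Every function below is a placeholder for a separately trained network;
# none of these names come from a real library.

from dataclasses import dataclass

@dataclass
class SceneObject:
    label: str        # e.g. "dragon"
    pose: str         # e.g. "flying"
    position: tuple   # normalized (x, y) in the frame

def parse_sketch(sketch_image) -> list[SceneObject]:
    """Stage 1: guess objects, poses, and locations from a rough drawing."""
    raise NotImplementedError

def compose_scene(objects: list[SceneObject]):
    """Stage 2: fix locations/poses for composition; emit a semantic heatmap."""
    raise NotImplementedError

def inpaint(semantic_map):
    """Stage 3: fill the semantic heatmap in as a full picture."""
    raise NotImplementedError

def transfer_style(picture, style_reference):
    """Stage 4: re-render the picture in the style of the reference image."""
    raise NotImplementedError

def sketch_to_art(sketch_image, style_reference):
    objects = parse_sketch(sketch_image)
    semantic_map = compose_scene(objects)
    picture = inpaint(semantic_map)
    return transfer_style(picture, style_reference)
```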

I think it can be done right now as a proof of concept, though not at a good enough level.

In fact, you can probably just input text and get a composition back.

Yeah, but that’s totally different from the topic of this thread, isn’t it? Or is this the evolution it’s working towards?

I think they kinda go hand in hand. Upscalers are actually inpainters, in that they guess details and make decisions to try to stay consistent. I simply expanded on the addition you made (on the benefit).

So this kind of technology will reduce labor and multiply design iterations, which will put grunts out of work, but the higher level of design still has to be there.

So vote for UBI, or be the boss. Or learn to live in the woods. :slight_smile:

Well, if pixel density = hours worked, then reducing the number of pixels input relative to the number output is a massive potential gain in the art pipeline.

Now, if we had smart subdivision of model mesh geometry as well as texture resolution scaling, then even basic low-poly game developers could in theory produce AAA 4K+ games.

No, that doesn’t make sense at all. Pixel density does not equal hours worked. You cannot break down a complex job like this the same way you write game logic; it’s vastly more complex. An AAA character is not just a low-poly character subdivided.

Arowx, please talk to some game artists. I’m begging you.

Hell, please start talking to people who have practical experience in the game production pipeline instead of just looking at the latest tech and scrambling to your keyboard to make a thread about it.

I would imagine that an artist capable of having made the textures for Metroid Prime or Doom, could have just as easily made them hi-res to begin with, were it not for the technical limitations at the time.

@Murgilod Well, BIGTIMEMASTER is an artist, so that’s done lol :smile:

We have a concrete use case in the form of fan remakes: for example, the FF7 fan remake had trouble finishing the 500+ screens that needed uprezzing due to the amount of work, but they used the upscaler to actually do it. It’s a proof of concept in a similar production setup.

Also, it makes sense for 3D rendering: start with a low-rez render, then NN-uprez it to get faster results?

The issue is that you have to have existing base artwork to extrapolate from. The technology takes that base artwork, and guesses at the details it needs to add in to make a higher-resolution, up-scaled texture. The problem starts to arise in that it can only guess, and at some points it will guess wrong. As long as the percentage of wrong guesses is low enough, it is still viable, but it can never really be perfect.
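
For concreteness, that guessing machinery is a neural network trained on pairs of low-res and high-res images. Below is a toy, untrained sketch of a standard sub-pixel upscaler architecture in PyTorch; real texture upscalers such as ESRGAN are far deeper, and only behave sensibly after training:

```python
# Toy learned-upscaler sketch (untrained). Real texture upscalers like
# ESRGAN are much deeper and are trained on low-res/high-res pairs so
# their "guesses" at missing detail are usually plausible.

import torch
import torch.nn as nn

class ToyUpscaler(nn.Module):
    def __init__(self, scale: int = 4):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            # Predict scale*scale sub-pixel values per RGB channel...
            nn.Conv2d(64, 3 * scale * scale, kernel_size=3, padding=1),
            # ...and rearrange them into a scale-times-larger image.
            nn.PixelShuffle(scale),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.body(x)

model = ToyUpscaler(scale=4)
low_res = torch.rand(1, 3, 64, 64)   # a 64x64 RGB texture
high_res = model(low_res)            # shape: (1, 3, 256, 256)
print(high_res.shape)
```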

But at the end of the day, none of this is original, nor can it really be re-purposed to create original work. So you still have to have the original work, and that original work has to actually be good/decent art or the resulting higher-res version will simply be a higher-res version of bad art. So you still have to have an artist, and they still have to be good at their job. Also, most modern artists are already working at higher resolutions, thanks to the much more scalable nature of modern tech. So the need for such technology in modern games is actually quite low. It won’t magically make your low-res art good, just high-res. Unless you are very specifically focused on making low-res pixel-style art, and kind of want to try a high-res version, this doesn’t provide that much utility for modern game developers.

Obviously, its real advantage is in game preservation, and in upscaling older titles to run on modern systems. For a lot of games that came out from the late ’90s to the mid ’00s, this kind of tech would be fantastic for automatically updating them.

Kind of like Nvidia’s DLSS?

You can bet that if it’s useful, the AAAs will probably be using it first. They are the ones with the time and money to dedicate to new experimental tech.

When you have one artist, saving half an hour of their time results in a minuscule difference to the bottom line. When you employ a hundred artists, saving half an hour for each results in significant cost improvements.

With Doom and Quake fans not too far behind them. :stuck_out_tongue:

https://www.pcgamer.com/this-doom-mod-uses-neural-network-image-upscaling-to-improve-on-a-classic/

It would really depend. I think it benefits the extremes, not the middle. I’m technically poor; although I have some art education, I can easily see where I can benefit from this, because I literally can’t pay an artist at all. It won’t save me just half an hour.

I don’t agree with that. Experiments in tech tend to come from lone wolves doing stuff and then trying to drum up awareness. AAA is all about efficient workflows; they tend to co-opt tech once it’s good enough and a lone wolf has made a proof of concept that works within a pipeline. And the lone wolves are often indies.

Yeah, but not necessarily in real time, if you have a fixed or offline render. That’s the low-hanging fruit.

The main advantage of that tech is that it works through examples: as long as you can provide examples, you can synthesize from them. And that’s kinda how you work with an artist already; they do stuff and you provide details so they can do a better job.

Well, if we keep it to upscalers, that’s kinda true; however, the same underlying tech can do much more. And most works aren’t original either. You can do so many variations of dragons in so many different styles, and they are still dragons.

/devil’s advocate

I have some assets that would benefit from higher resolution, I wonder how good it would look.

Ah, right. Someone has probably done that already / is dabbling with it, but the more popular approach seems to be reducing raytracing quality and then using AI denoising. Blender, for example, has an AI denoiser that AFAIK also utilizes screen depth and normal data, which seems to work quite well.
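
If anyone wants to try it, turning that on from Blender’s Python API is just a few scene properties. A minimal sketch; the property names below match recent Blender/Cycles releases and may differ in older versions:

```python
# Enable Cycles' AI denoiser so a low-sample (noisy but fast) render
# gets cleaned up afterwards. Property names match recent Blender
# releases (3.x); older versions exposed denoising differently.

import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'

# Render with few samples, then let the denoiser remove the noise.
scene.cycles.samples = 64
scene.cycles.use_denoising = True

# 'OPENIMAGEDENOISE' runs on CPU; 'OPTIX' needs an NVIDIA GPU. Feeding
# albedo and normal passes helps the denoiser preserve edges and detail.
scene.cycles.denoiser = 'OPENIMAGEDENOISE'
scene.cycles.denoising_input_passes = 'RGB_ALBEDO_NORMAL'
```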