I am having to relearn everything from my programming days 25 years ago. I am building a 2D top-down strategy game with a huge map and many different terrain types and resources.
So I started looking for information on how to take a large bitmap of different terrains and pull a part of it to put on a quad to represent a hex. The terrain bitmap is 12x30 tiles, with each tile 90x76 pixels. I can't seem to find any information on which classes read a large texture, pull a piece of that texture, and place it into a material to be applied to a quad.
Is there a class that reads a texture at an offset relative to its size, i.e. pulls from an (X, Y) location for a given width and height?
In .NET C# I'd load the large bitmap into memory, pull the section of the bitmap I want, and draw it to the screen.
I don't think that's what I need? UV gives me a Vector2 from 0 to 1.
Look at the attached. It's a tile map including all the characters and terrain. I have a quad per map coordinate. That whole image is my atlas for game pieces and terrain. I want to pull ONLY the terrain I outlined in RED from the texture within the material, to be placed on the mesh.
There aren't many resources on how to build a top-down strategy game in simple 2D. I can't find the class that says, HEY mesh, display this material but start at X,Y for a given width and height on this bitmap.
This map has 10x8 tiles. Let's say each tile is 10 pixels across by 10 pixels down. In the RED outline example, I would want to pull the area whose upper-left corner is at (30, 40), grab a 10x10 bitmap area, and apply it to the quad as its texture, covering 100% of the quad.
What class does that? There has to be one? In .NET I can simply refer to the memory location of the bitmap to do it.
Or is mesh.uv a conversion from an X,Y to a basic percent of the image itself? So I will have to manually convert the dimensions to percent?
mesh.uv just seems to distort my image and not do anything else. It just doesn't seem like the right thing to use.
This is the simplest thing to do with memory bitmap copying; it seems overly complex in Unity. I'm sorry, I just don't get it. I am experimenting with mesh.uv on a simple quad, with no results even remotely close to what I want. The best I could do is shrink the image down to 1/4 its size in a corner.
This is 2D top-down, like an old 1980s strategy game. I don't need all this fancy 3D stuff.
Then you aren't using it correctly. Look up some tutorials on UV mapping, because it is EXACTLY what you need. Each vertex on the quad will need its UV coordinate changed to map to the correct sub-image.
In your example (assuming an image* that is 100px wide by 70px tall, with coordinates as [x, y]): the UVs on the quad would need to be multiplied by [1/10, 1/7] and offset by [30/100, 20/70] ≈ [0.3, 0.29]. (The y offset is 20 rather than 40 because UV space starts at the bottom-left of the texture, while your pixel coordinates start at the top-left.)
Graphics nowadays (generally speaking) map sprites to quads and then render them with the GPU. You can blame whatever tech advancements you like, but the GPU deals with vertices and texture coordinates now. The days of CPU-bound rendering are (quite thankfully) mostly behind us.
*assumption part II: the image is 10 tiles wide by 7 tiles tall, rough estimate.
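To put numbers on that, here's a minimal sketch (using the assumed 100x70 image; the variable names are just illustrative):
// Pixel rect of the wanted tile, measured from the image's top-left corner
int tileX = 30, tileY = 40, tileW = 10, tileH = 10;
int imageW = 100, imageH = 70;
// Normalized UV scale: tile size as a fraction of the whole image
Vector2 uvScale = new Vector2((float)tileW / imageW, (float)tileH / imageH); // (0.1, ~0.143)
// Normalized UV offset: y is flipped because UV (0,0) is the texture's bottom-left
Vector2 uvOffset = new Vector2((float)tileX / imageW, (float)(imageH - tileY - tileH) / imageH); // (0.3, ~0.286)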
OK, that helps, guys. Thanks, both of you. I programmed 25 years ago. I backdoored into a company writing a new A.I. procedure for a mod. It became so successful I made two expansions for the main engine. Now they want to do a mobile implementation of a new game. So OOP, Unity, and all the new technology is new to me. I had to relearn everything.
This really does help. Unity is made for 3D, 2.5D, and 2D, and most of the tutorials focus on the 3D side. I had to piece together some 10 videos on how to set up pixel-perfect resolution to simulate 2D for a top-down game.
Saw quill's series. I thought he had a part 2 that showed a better way than SetPixel. It also doesn't do alpha, and my map is hexes.
It's one quad holding one 90x76-pixel image on a pixel-perfect screen. The main image consists of 15x30 tiles, each 90x76 pixels. The whole material is 1350x2432 pixels.
Guess I'm old school; I like the manual control of writing to memory directly.
You could also use XNA/MonoGame (MonoGame being an actively developed port of XNA to Mono); the 2D stuff will feel a lot more familiar that way. Basically it handles the whole quad thing under the hood, and the source rectangles don't use normalized coordinates, as in the sketch below.
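For comparison, a rough sketch of what that looks like in XNA/MonoGame (assuming atlas is a Texture2D and spriteBatch was loaded/created elsewhere); the source rectangle is plain pixels, no UVs to manage:
// Draw the 10x10 tile whose top-left corner is at (30, 40) in the atlas,
// at screen position (100, 100).
spriteBatch.Begin();
spriteBatch.Draw(atlas, new Vector2(100, 100), new Rectangle(30, 40, 10, 10), Color.White);
spriteBatch.End();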
Well, the game being developed is supposed to be cross-platform. The company is developing it for PC; my job is to take the .NET DLLs for the engine and create a GUI for mobile platforms (tablets). I could use Xamarin, but it would cost me $4,000 just to get all the software I need. And I know from my research that games don't pay much on mobile unless you hit a jackpot game like Angry Birds; it was quite depressing to see the research. While the company has a niche, my investment in Xamarin isn't worth the commissions I'm getting paid for the project. With Unity I can do it with a free piece of software, and once I get used to it I can do anything.
If this were XNA, I would probably have been well on my way. But I have no 3D experience, 25-year-old programming skills, and I'm learning a system not designed for what I want to do, for a game specifically designed in a non-standard format for Unity.
I think mesh.uv is what I am looking for but I am not skilled enough to use it.
Just a note: setting unique material properties results in new materials. Different materials break batching, so most of the performance benefit of using a single texture is lost. Using one material and editing mesh UVs is more optimized, and not much more complex than setting offset and scale. Using these values you can easily set the 4 UV corners of a quad, as in the sketch below.
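For example, a minimal sketch (offsetX/offsetY and scaleX/scaleY are the normalized offset and scale values just described; the corner order assumes the quad's UVs were laid out bottom-left, bottom-right, top-left, top-right; a custom or built-in quad may order them differently, so check your mesh):
Vector2[] uvs = new Vector2[4];
uvs[0] = new Vector2(offsetX, offsetY); // bottom-left
uvs[1] = new Vector2(offsetX + scaleX, offsetY); // bottom-right
uvs[2] = new Vector2(offsetX, offsetY + scaleY); // top-left
uvs[3] = new Vector2(offsetX + scaleX, offsetY + scaleY); // top-right
mesh.uv = uvs;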
Something that may help you out a bit: Third Helix's post on making 2D games in Unity.
I just gave it a quick glance, and I believe Unity has a built-in Quad primitive now, so you don't need to make your own, but the section about UVs looks like it could help clear things up for you.
A couple of tips regarding texture offsets, atlases, and 2D. Sorry if these are super obvious (or crappy), but perhaps they might help anybody new to Unity, at least until 4.3 is released.
Modifying texture offsets at runtime increases your draw calls because it creates new instances of materials. It might not matter at all, but it's good to keep in mind if you run into performance problems. Edit: Well, ThermalFusion beat me to it.
Anyhow, you can give negative texture scale/offset values to mirror textures; perhaps randomize it for greater visual variety (see the sketch below)! Values like 0.66666 sometimes give undesired results (a fraction of an adjacent texture tile becomes visible); values like 0.8, 0.5, 0.25, 0.125 and so on are safer, so lay out your atlases accordingly. Also turn off antialiasing to avoid similar distortion. And lastly, if it isn't too memory-intensive, using the truecolor format for textures can make a world of difference.
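For instance, a rough sketch of randomized mirroring (note this edits renderer.material, which instantiates a per-renderer material copy, with exactly the draw-call cost mentioned above):
// Mirror this tile horizontally half the time: uv' = uv * -1 + 1 = 1 - uv,
// so the sampled range stays inside [0,1] but runs right-to-left.
if (Random.value < 0.5f) {
    renderer.material.mainTextureScale = new Vector2(-1f, 1f);
    renderer.material.mainTextureOffset = new Vector2(1f, 0f);
}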
So you're telling me that if I create a scrolling map of quads that's 20x20, use the same material for all the quads, and set a scale and offset for each object, it takes my huge 1000x1000-pixel image (for example's sake) and makes 400 copies of it in memory?
That's nuts. I think critically about how things should work, and I can't imagine there isn't a setting NOT to make these copies. Traditional 2D games copy one image into memory and use placeholders for loading the tiles.
renderer.sharedMaterial wouldn't work, I assume, since the docs say it modifies everything for every object using it.
The base image doesn't allow me to do that, and I'm trying not to modify them because there are several large tile maps. I tested it this morning and it works without flaws. I might have been lucky.
No, it makes sense. You’re modifying the properties of the Material, which means that the GPU is going to have to perform different calculations on it.
Here's a snippet that ought to work for setting the UVs on a quad to a given sub-image:
// Given Mesh mesh and Rect sourceRect with normalized coordinates.
// Note mesh.uv returns a copy of the UV array, and Vector2 is a struct,
// so modify a local copy and assign it back.
Vector2[] uvs = mesh.uv;
for (int i = 0; i < uvs.Length; i++) {
    uvs[i].x = uvs[i].x * sourceRect.width + sourceRect.x;
    uvs[i].y = uvs[i].y * sourceRect.height + sourceRect.y;
}
mesh.uv = uvs;
That's what you want to do. It'll use only one material, and your texture UVs will be offset and scaled to the sub-image provided in sourceRect.
To get the normalized sourceRect:
// Given a pixel-space Rect subImage (measured from the texture's bottom-left,
// to match UV space) and Rect sourceImage holding the source image's size:
Rect sourceRect = new Rect(subImage.x / sourceImage.width, subImage.y / sourceImage.height,
                           subImage.width / sourceImage.width, subImage.height / sourceImage.height);
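Putting the two together, here's a rough end-to-end sketch (the class name, the 100x80 atlas size from the 10x8-tile example, and the pixel rect are all just illustrative):
using UnityEngine;

// Attach to a quad: remaps its UVs to show one tile out of an atlas.
public class TileUV : MonoBehaviour {
    // Pixel rect of the tile inside the atlas, measured from the top-left.
    public Rect tilePixels = new Rect(30, 40, 10, 10);
    public int atlasWidth = 100;
    public int atlasHeight = 80;

    void Start() {
        // Convert the pixel rect to a normalized rect, flipping y because
        // UV (0,0) is the bottom-left of the texture.
        Rect sourceRect = new Rect(
            tilePixels.x / atlasWidth,
            (atlasHeight - tilePixels.y - tilePixels.height) / atlasHeight,
            tilePixels.width / atlasWidth,
            tilePixels.height / atlasHeight);

        // .mesh (not .sharedMesh) gives this object its own mesh instance,
        // so each quad can show a different tile while sharing one material.
        Mesh mesh = GetComponent<MeshFilter>().mesh;
        Vector2[] uvs = mesh.uv; // mesh.uv returns a copy
        for (int i = 0; i < uvs.Length; i++) {
            uvs[i].x = uvs[i].x * sourceRect.width + sourceRect.x;
            uvs[i].y = uvs[i].y * sourceRect.height + sourceRect.y;
        }
        mesh.uv = uvs; // assign the modified copy back
    }
}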
OK, I will give that a try; I still don't understand it from my experiments, though. So UV is the same as shifting the offset in the material in the renderer itself, and functions the same way from 0 to 1. I notice your UV has an x and y component; I never knew it had that. When I read the Unity reference it didn't mention it. Mind you, I am new to OOP; I used to be a procedural programmer. I'm spoiled by Microsoft's detailed reference guides.
So are you saying set the scale in the material and use UV in place of renderer.material.mainTextureOffset? Because the texture offset is what makes multiple copies, but the coordinate system for UV is stored as just 2 floats for the object and its renderer.
Nope. Don’t set anything in the material. Nada, zip, zero.
UVs are normalized coordinates that represent where on the texture each vertex is supposed to take its image data from. The GPU uses this data to determine which portion of the image to render for a given face, based on the UV coordinates of each vertex that makes up the face.
In Unity, UVs are stored as Vector2 structs in an array in mesh.uv. Hence each UV has an x and y component (because it represents a point on the texture).
You want to modify the UV coordinates of the mesh so they no longer encompass the whole texture, but rather a portion of it. So you want to scale down and offset the area the UV coordinates reference, and you do that by scaling down and offsetting the UV coordinates themselves (transitive property and all that). The worked numbers below show what that does to the quad's corners.
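Concretely, for the (30, 40) example on the assumed 100x70 atlas (scale [0.1, 1/7], offset [0.3, 20/70]), the quad's corner UVs transform like this:
(0, 0) -> (0.3, 0.286) // bottom-left of the tile
(1, 0) -> (0.4, 0.286) // bottom-right
(0, 1) -> (0.3, 0.429) // top-left
(1, 1) -> (0.4, 0.429) // top-right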
Look up some 3D modeling tutorials on UV mapping/unwrapping; they should help you get your head around the concept of texture coordinates.