Understanding Tilemaps, the Grid and Coordinate Systems

I’m new to Unity (but not game programming) and I’m working on a 2d top-down tile based game.

I’ve found what I think is some benefit to having the center of each of my tiles land on integer world-space coordinates, i.e. the center of cell (1,1) is (1.0, 1.0, 0) in world coords.

To accomplish this I changed the transform of the tilemap in my hierarchy to (-.5,-.5, 0). My tilemap is the child of a Grid object with transform (0,0,0).

This worked great until I needed to capture mouse input so the player could click on tiles.

if (Input.GetMouseButtonDown(0))
{
    Vector3 mouseWorldPos = Camera.main.ScreenToWorldPoint(Input.mousePosition);

    //Vector3Int coordinate = _grid.WorldToCell(mouseWorldPos);

    // Convert to the grid's local space, undo the -0.5 offset by hand,
    // then convert to cell coordinates.
    Vector3 c = _grid.WorldToLocal(mouseWorldPos);
    c += new Vector3(.5f, .5f, 0);
    Vector3Int cc = _grid.LocalToCell(c);

    Debug.Log("Cursor pos: " + cc.ToString());

    App.world.baseMap.SetTile(cc.x, cc.y, TileDict.Wall);
}

I was a little surprised that _grid.WorldToCell() didn’t apply the transform of the tilemap and that I needed to account for this myself in code. This made me think that I didn’t understand how/if the Grid is altering the coordinate system of my tilemap. I tried making the transform of the Grid match that of the tilemap, trying to make the need to do this manual adjustment go away, but none of the combinations I tried helped.

The code I have works, but I’d like to not have to remember to do this extra translation step every time I capture mouse input. What is the cleanest solution here?

The docs for Grid don’t make it clear whether it is a scene node (chaining transforms with child objects) or just some sort of container class. Since it has its own transform, I assumed it was the former. Again, new to Unity here.

Thanks for any insight!

I’m not 100% clear on the case you’re going for with the 0.5 offset, but let me just clarify some things that may help your understanding.

WorldToCell converts a world position into grid coordinates (a Vector3Int), which is like a 2D index into the grid array. It’s not related to the cell’s world position at all; it’s more like a local position, but expressed in units of the cell’s dimensions.

So you could take a position, pass it through WorldToCell, then use GetCellCenterWorld with the cell coordinates as input, or use CellToWorld to get the cell’s origin.
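For example, a minimal sketch of that round trip (assuming a `_grid` reference and `Camera.main`):

```csharp
// Convert a mouse click to integer cell coordinates, then back to
// world space two different ways.
Vector3 mouseWorld = Camera.main.ScreenToWorldPoint(Input.mousePosition);
Vector3Int cell = _grid.WorldToCell(mouseWorld);   // integer grid index
Vector3 center = _grid.GetCellCenterWorld(cell);   // cell's center in world space
Vector3 origin = _grid.CellToWorld(cell);          // cell's origin (lower-left corner)
```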

Take a look at the documentation for Grid and the functions it provides: Unity - Scripting API: Grid

The Grid is a component on a GameObject, and all GameObjects have a Transform component. The Grid is in local space, so the origin (0,0) coordinate is always at the Transform position.

Cells are not real objects and do not have Transforms; they are purely a logical representation of the integer coordinate system. The Grid is a tool to define the dimensions of the cells and provides functions to convert between coordinate systems (world, local, grid).

So perhaps you don’t need to do your 0.5 offset at all, but could instead use grid coordinates to work with integers?

Hope that helps.

That’s why you have one utility function/class that handles this translation so you needn’t worry about it. :wink:

Thanks for the background. I’m not 100% sure the offset will end up being net-helpful, but if you are drawing Sprites on top of cells of a tilemap, it is kind of nice when the Sprite that represents a tilemap actor at cell (5,5) has world coordinates (5.0, 5.0, 0).

Ask me if it was a good idea in 6 months lol.

Create a WorldToCell extension method to translate from world space (a Transform position) to integer-rounded cell space. That way you can conveniently get rounded coordinates without needing a weird grid:

Vector3Int cell = actor.transform.position.WorldToCell();
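A minimal sketch of such an extension method, assuming 1×1 cells with cell centers on integer world coordinates (so plain rounding is the whole conversion):

```csharp
public static class WorldToCellExtensions
{
    // Rounds a world position to the nearest integer cell.
    // Assumes a 1x1 cell size with cell centers on integer coordinates;
    // adjust the rounding if your grid is laid out differently.
    public static Vector3Int WorldToCell(this Vector3 worldPos)
    {
        return new Vector3Int(
            Mathf.RoundToInt(worldPos.x),
            Mathf.RoundToInt(worldPos.y),
            0);
    }
}
```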

Using _grid.WorldToCell will make use of the Transform of the _grid GameObject to do the conversion, as you are calling this from the _grid. Instead, you could use [child Tilemap].WorldToCell (assuming App.world.baseMap is your child Tilemap). This would use the Transform of the [child Tilemap] to do the conversion, which would account for the difference in position between _grid and [child Tilemap].

Also, you can change [child Tilemap].tileAnchor if you would like to keep your coordinate system. You will likely need to change the pivots of your Sprites to account for this as well.
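As a sketch of that alternative (the `tilemap` reference is assumed): keeping the Grid at the origin and moving the anchor instead means a tile’s sprite pivot lands on the cell corner, so a tile at cell (5,5) draws at world (5,5) without any Grid offset:

```csharp
// Keep the Grid at (0,0,0) and anchor tiles at the cell corner instead.
// Sprite pivots may need adjusting to match.
tilemap.tileAnchor = new Vector3(0f, 0f, 0f);
```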


To anybody coming back to this later, because I just dealt with this madness and the 52343 different answers on various sites. Just change the x/y position of your grid to -0.5, -0.5 and call it a day. This does…

  1. Aligns your objects that are in integer x/y positions perfectly centered on all tilemaps (assuming you didn’t adjust them or the sprites from default)
  2. Visually fixes the grid at design-time in the Scene window to also match
  3. Automagically aligns the 0,0 cell and the rest of the grid perfectly with the mouse->window conversions, so you don’t have to do any additional translations to get the correct tile at a mouse position, for example.

EDIT: Jeesh, for what should be the default behavior this is a pain. OK, so the above DOES align the grid correctly, but the tiles are still drawn 0.5 off… which makes no sense, since the Tilemap is a CHILD of the Grid; adjusting the Grid position should also move the tiles the same amount, but it doesn’t. Ugh… So, the solution that now actually works is to offset the grid as above by -0.5, -0.5, but then also set the Tilemap’s Tile Anchor to 0.5, 0.5 to offset the change in the grid (which, again, is BS, because the tiles are children of the grid and their 0,0,0 should be relative to the grid).
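The recipe above, done from code rather than the Inspector (the `grid` and `tilemap` references are assumed):

```csharp
// Shift the whole grid so cell centers land on integer world coordinates...
grid.transform.position = new Vector3(-0.5f, -0.5f, 0f);
// ...and compensate for the tile drawing offset with the anchor.
tilemap.tileAnchor = new Vector3(0.5f, 0.5f, 0f);
```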

If there’s some reason why the above ISN’T a good idea, I’d love to hear any potential drawbacks…I’m new to Unity still…but for me this solves all the issues I was having.


I wonder if maybe the 2D Pixel Perfect Camera package helps or is actually needed for this?