Well, first of all, are you really experiencing a crash or just a hang?
There’s a lot of confusing information in your question. First of all, if the object has a material and you are using a texture on the triangles, what exact role does the vertex color play in your case? If you use a shader that samples a texture, the vertex color is usually ignored / not even present. Maybe you wanted to get the color from the texture? In that case the color array is completely useless.
Though, apart from those uncertainties, your code just makes no sense ^^. The values you look up through “triangleIndex” are indices, not 3d coordinates. They are the indices into the mesh’s vertex array that make up the 3 corners of the triangle the ray has hit.
The reason why you most likely get a hang is that you have many nested for loops which probably take ages to complete. Furthermore: never ever use mesh.vertices or mesh.triangles inside a loop. Those properties create a new managed array each time you access them. If you want to work with the vertices of a mesh, you have to store that array in a local variable and use that variable instead. Otherwise you will create a huge number of arrays. Specifically, your innermost loop iterates through all vertices in the mesh, and on each iteration you recreate the vertices array twice. Since that loop runs for every one of your grid cells, you can multiply that number by the columns and rows of your grid. You are most likely allocating gigabytes of memory with that loop ^^.
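To make the allocation issue concrete, here’s a minimal sketch (the “DoSomething” method is just a placeholder for whatever you do with each vertex):

```csharp
// Bad: every access to mesh.vertices allocates a brand new Vector3[] copy.
for (int i = 0; i < mesh.vertices.Length; i++)   // one allocation per loop check
    DoSomething(mesh.vertices[i]);               // and another one per iteration

// Good: copy the array once and reuse the local reference.
Vector3[] verts = mesh.vertices;                 // single allocation
for (int i = 0; i < verts.Length; i++)
    DoSomething(verts[i]);
```

The second version touches the exact same data but allocates only once instead of twice per vertex.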
So, if the point of that script was to read / extract the color on the surface of the mesh at each grid position and the triangles in the mesh are actually textured, there are a few requirements for this to work. First of all, the used texture needs to be readable. For this you have to enable Read/Write in the texture importer. Otherwise you cannot access any of the color information of the texture. Second, in order to read the right color from the texture you have to use
hit.textureCoord. With that texture coordinate you can now use GetPixelBilinear on the used texture. Note that if the mesh uses more than one material (so if it has submeshes), it gets more complicated since you have to find the right submesh in order to know which material is the right one. Though if the mesh only has a single submesh and a single material, it’s kinda trivial.
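In case you do have submeshes, mapping hit.triangleIndex back to a submesh could look roughly like this (a sketch; it assumes the collider mesh has the same triangle order as the render mesh):

```csharp
int GetSubMeshIndex(Mesh mesh, int triangleIndex)
{
    // The triangles of all submeshes are laid out one after another,
    // so we find the submesh whose triangle range contains the index.
    int triangleCount = 0;
    for (int i = 0; i < mesh.subMeshCount; i++)
    {
        triangleCount += mesh.GetTriangles(i).Length / 3;
        if (triangleIndex < triangleCount)
            return i;
    }
    return -1; // index out of range
}
```

The returned submesh index also matches the index into the renderer’s sharedMaterials array, which tells you which material (and texture) to sample.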
Keep in mind that hit.triangleIndex as well as hit.textureCoord only work when hitting a MeshCollider. Primitive colliders do not provide any mesh related information. Also, the MeshCollider you hit has to belong to the mesh you’re looking for. So it’s generally better to get the mesh from the MeshCollider you have hit.
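Getting the mesh straight from the collider could look like this (a sketch, assuming “hit” is the RaycastHit from your raycast):

```csharp
var meshCollider = hit.collider as MeshCollider;
if (meshCollider != null && meshCollider.sharedMesh != null)
{
    Mesh mesh = meshCollider.sharedMesh;
    // hit.triangleIndex is valid for the triangles of exactly this mesh.
}
```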
RaycastHit hit;
Material mat = null;
Texture2D tex = null;
for (int i = 0; i < grid.GetLength(0); i++)
{
    for (int o = 0; o < grid.GetLength(1); o++)
    {
        if (Physics.Raycast(grid[i, o].transform.position, -Vector3.up, out hit))
        {
            if (mat == null)
            {
                var rend = hit.collider.GetComponent<MeshRenderer>();
                if (rend != null)
                {
                    mat = rend.sharedMaterial;
                    tex = mat.mainTexture as Texture2D;
                }
            }
            if (tex != null)
            {
                Color col = tex.GetPixelBilinear(hit.textureCoord.x, hit.textureCoord.y);
                // here you have the color of the point on the surface of the mesh.
            }
        }
    }
}
Note that this only queries the used texture at the uv coordinate of the mesh. If you use some fancy shader that dynamically assigns a different color / texture depending on the height, you cannot read that out from the static geometry. You would either have to implement the same logic as the shader here, or actually render the top view into a RenderTexture and extract the color from there. Neither of those approaches is that pretty.
It would be much easier to help if we knew more about what this is actually about. What do you do with the color at each of those positions? Why and for what do you need them?