Hello, when I export my 3D models from Blender to Unity, the polycount increases. This is due to the GPU creating extra vertices in order to handle hard edges. My question is: how can I import hard-edged (flat-shaded) models and keep them flat without the polycount increasing?

First of all, the polycount is not affected by hard edges, only the vertex count is. The polycount might increase because Blender may count a quad or a higher-order polygon as one poly, while Unity works with triangle meshes, where each triangle is one poly. This confusion can be eliminated by triangulating your mesh before you export it (which also avoids unwanted topology).
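This bookkeeping difference can be sketched in a few lines of plain Python (no Blender API involved): each quad becomes two triangles on export, so the "poly" count doubles even though nothing about the mesh changed.

```python
# Sketch: fan-triangulation of polygons given as tuples of vertex indices.
def triangulate(face):
    """Split one polygon into triangles sharing its first vertex."""
    return [(face[0], face[i], face[i + 1]) for i in range(1, len(face) - 1)]

quads = [(0, 1, 2, 3), (4, 5, 6, 7)]        # 2 quads, as Blender counts them
tris = [t for q in quads for t in triangulate(q)]
print(len(quads), "quads ->", len(tris), "triangles")   # 2 quads -> 4 triangles
```

The vertex indices here are arbitrary; the point is only that the triangle count is what Unity reports as the polycount.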

The duplication of the vertices can't be prevented. Each vertex has vertex attributes: besides the position, this includes its normal and UV coordinates. If you want a flat-shaded surface, all vertices of that surface have to have the same normal.

The prime example is a cube mesh. In theory a cube has 8 vertices and 6 quad (12 triangle) polys. However, each face requires that the normal vectors of its vertices all point in the face's direction in order to get a flat-shaded surface. At each corner of the cube, 3 different faces meet at the same point, yet each of those 3 faces requires a different normal vector. So the vertex has to be split into 3, one for each face. That's why a cube mesh has 24 vertices and not 8.
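The 8-vs-24 count can be verified with a small sketch that builds a flat-shaded unit cube the way an exporter would, emitting one (position, normal) pair per face corner:

```python
# Sketch: count the vertices a flat-shaded cube actually needs.
from itertools import product

corners = list(product((-1, 1), repeat=3))      # the 8 corner positions

# one (axis, sign) pair per face, e.g. (0, 1) is the +X face
faces = [(axis, sign) for axis in range(3) for sign in (-1, 1)]

vertices = []                                   # what ends up in the GPU buffer
for axis, sign in faces:
    normal = tuple(sign if i == axis else 0 for i in range(3))
    for c in corners:
        if c[axis] == sign:                     # the 4 corners on this face
            vertices.append((c, normal))

print(len(vertices))                            # 24 vertices in the buffer
print(len({pos for pos, _ in vertices}))        # but only 8 unique positions
```

Each corner position appears three times, once with each of the three face normals that meet there.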

Another thing is UV coordinates. One vertex has one UV coordinate per texture channel, so you can not assign different UV coordinates to the same vertex. The prime example here is usually a sphere mesh. A sphere could otherwise have fully shared vertices, since the normal vectors all point radially outward from the center, and two adjacent triangles actually have the same normal vector at their shared vertices. When you map a texture onto the mesh you usually want a continuous mapping, and the UV coordinates are continuous as well. However, once you've gone around the sphere you have to duplicate the last column of vertices, since at the "UV seam" the left and the right side of the texture meet: one vertex maps to the right side of the texture while its duplicate maps to the left side.
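As a rough sketch (the exact grid layout varies between mesh generators), here is what that duplicated seam column costs on a typical UV-sphere grid of `segments` columns and `rings` rows:

```python
# Sketch: vertex count of a UV-sphere grid, with and without the seam column.
# Without a texture seam the last column could reuse the first column's
# vertices; with a seam, one extra column is needed.
def uv_sphere_vertex_count(segments, rings, with_seam=True):
    columns = segments + 1 if with_seam else segments
    return columns * (rings + 1)

print(uv_sphere_vertex_count(32, 16, with_seam=False))  # 32 * 17 = 544
print(uv_sphere_vertex_count(32, 16, with_seam=True))   # 33 * 17 = 561
```

So the seam adds one column of vertices, a small price compared to fully splitting every face as flat shading does.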

In the past, the built-in render pipelines of GPUs had direct support for flat-shaded models. There was a "strange" convention that the GPU simply used the normal of the first vertex of a triangle for the whole triangle. This, however, required you to be really careful about how you set up the mesh, and the built-in flat shading did not allow mixing in other shading models. Since we have shaders, this is a thing of the past.

There are ways to achieve flat shading using only "shader magic", but it can be quite expensive or require hardware that supports geometry shaders or the partial-derivative functions ddx / ddy. Note that ddx / ddy may not produce a perfectly flat look (the documentation calls it "wobbly"), since they recalculate the surface tangents in screen space on the fly for each 2x2 fragment block, and those tangents are then used to calculate a normal for the surface.
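The underlying math of the ddx / ddy trick is just a cross product of two vectors lying in the triangle's plane. As an illustration only (done here on the CPU in Python, with two triangle edges standing in for the screen-space derivatives of the interpolated world position):

```python
# Illustrative sketch of the flat-normal math behind the ddx/ddy technique.
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def flat_normal(p0, p1, p2):
    e1 = tuple(b - a for a, b in zip(p0, p1))   # ~ ddx(worldPos)
    e2 = tuple(b - a for a, b in zip(p0, p2))   # ~ ddy(worldPos)
    n = cross(e1, e2)                           # perpendicular to the face
    length = sum(c * c for c in n) ** 0.5
    return tuple(c / length for c in n)

# a triangle lying in the XY plane -> normal points along +Z
print(flat_normal((0, 0, 0), (1, 0, 0), (0, 1, 0)))   # (0.0, 0.0, 1.0)
```

In an actual fragment shader the two edge vectors come from ddx / ddy of the world position, which is why the result can look slightly wobbly: they are finite differences over a 2x2 pixel block, not exact triangle edges.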

If you can afford using a geometry shader, you could write a flat-shaded shader that way. However, simply having duplicated vertices is much simpler, and better both for performance and for hardware compatibility of your product.