I would like this to turn into some AAA WIP thread, instead of AAA blah blah only.
Well, I can go through my processes for making an “AA” game and show the workflow / methodology. Two conditions though: a lot of it is going to be from UE, but that doesn’t really matter; if you have ShaderForge and some bolt-ons from the Asset Store, there’s near enough no reason you can’t follow along.
Secondly, a lot of it will be “warts and all”. If anyone who doesn’t understand what “prototype” means complains that something looks “crap”, even though it needs another 20+ hours or several weeks of work, I’ll delete the posts and not bother…!
If we’re cool with that, sure I’m up for it.
Sorry about that, I didn’t really have time to digest it. That first post was a bit hard to decipher.
Well, I currently have “functional” obstacle avoidance, though I guess I’ve never tried testing it out on AI ships chasing a player through an asteroid field. Maybe what I have would actually handle that.
For a first pass at obstacle avoidance, I shoot out a raycast with a max length scaled to the speed of the AI ship. Fast ships need to start turning sooner to avoid stuff. All I do is go to a “pull up” state until I’m not getting a hit any more. It works surprisingly well, but I doubt it could handle anything too fancy.
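For what it’s worth, the core of that fits in a few lines. Here’s an engine-agnostic Python sketch: `raycast_hit` is a stand-in for the engine’s raycast (`Physics.Raycast` in Unity, a trace in UE), obstacles are just spheres, and the two-second lookahead is an illustrative number, not a tuned value.

```python
import math

def raycast_hit(origin, direction, max_dist, obstacles):
    """Stand-in raycast: True if the ray hits any sphere (center, radius) within max_dist."""
    ox, oy, oz = origin
    dx, dy, dz = direction  # assumed normalized
    for (cx, cy, cz), r in obstacles:
        mx, my, mz = cx - ox, cy - oy, cz - oz
        t = mx * dx + my * dy + mz * dz          # projection of center onto the ray
        if t < 0 or t > max_dist:
            continue
        px, py, pz = ox + dx * t, oy + dy * t, oz + dz * t
        if math.dist((px, py, pz), (cx, cy, cz)) <= r:
            return True
    return False

def update_ai_state(position, forward, speed, obstacles, seconds_of_lookahead=2.0):
    """Scale the ray length with speed: fast ships must react sooner."""
    ray_length = speed * seconds_of_lookahead
    if raycast_hit(position, forward, ray_length, obstacles):
        return "PULL_UP"   # keep climbing until the ray stops hitting
    return "CRUISE"
```

The state machine around it is then just “stay in PULL_UP until the ray comes back clean”.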
The reason I find the topic interesting is some of the stuff @Billy4184 brought up in the thread on his space sim AI:
Stuff like this sounds especially interesting:
“creating such things as evasive maneuvres generating on-the-spot spline paths for the ship to follow (inspired by Kythera’s work on Star Citizen)”
Basically, I have functional obstacle avoidance and space combat AI now. But it’s a bit dull. Often it turns into boring turning battles and the AI tends to clump together. So anything on that topic gets my attention.
Some of it is just a matter of having more distinct behaviors between enemy types I think, while some of it is about more interesting motions.
Your comment on navmeshes confused me though. Navmeshes are usable in a space game?
Not my experience, but OK.
Moving spaceships around in an engaging way could potentially involve stuff like:
- PID controllers for movement and rotation using physics
- Interesting collision reactions
- Creating those on-the-spot spline paths I mentioned
- More complex decision making, possibly involving behavior trees and/or Utility AI (https://www.assetstore.unity3d.com/en/#!/content/56306)
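On the PID point, the idea sketched in Python (engine-agnostic; the gains are illustrative, not tuned values): the controller turns position error into a thrust force each physics tick.

```python
class PID:
    """Classic PID: output = kp*error + ki*integral(error) + kd*d(error)/dt."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt):
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def simulate(steps=2000, dt=0.01, target=10.0):
    """Toy 1D test: a 1 kg body pushed toward position `target` by PID thrust."""
    pid = PID(kp=8.0, ki=0.5, kd=4.0)
    pos, vel = 0.0, 0.0
    for _ in range(steps):
        force = pid.update(target - pos, dt)
        vel += force * dt    # a = F / m with m = 1
        pos += vel * dt
    return pos
```

In practice you’d run one of these per axis (or per rotational axis, feeding torque instead of force) inside the physics update.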
I’d like more prototypes, to see what’s behind the scenes, instead of some polished level demonstration.
Anyway, I would also like @Billy4184’s WIP notes on his progress, as we’d learn more that relates to the Unity engine specifically.
If it works, that’s all that really matters. I might have a lot of non-static actors (sorry, UE term there: GameObjects) in a scene, so I tend to avoid raycasting. I do use raycasting (or tracing) for things like a weather system, where lightning hits the ground and sets off a particle effect once the trace completes. But for player / AI / object avoidance I tend not to bother, unless there’s a sparse amount of non-static actors…
Again, what I tend to do is make everything distance-based, on either Player / Object or AI / Object for example. From that one core element, not only is it quick (a small amount of math and an array pool), it’s also ridiculously expandable, for such duties as collision avoidance (maybe?), or a tag-defined pool of interactive objects where the AI can perform certain actions depending on location. It works for enemies, NPC interactions, pretty much anything really.
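A minimal sketch of that distance-pool idea in Python (the names are mine, not from either engine): squared distances avoid a sqrt per entry, and a tag filter gives you the “tag-defined pool” behaviour.

```python
def nearby(origin, pool, radius, tag=None):
    """Return pool entries within `radius` of origin, optionally matching a tag.

    pool is a list of ((x, y, z), tag) pairs, refreshed however often you like.
    """
    r2 = radius * radius
    ox, oy, oz = origin
    out = []
    for (x, y, z), entry_tag in pool:
        if tag is not None and entry_tag != tag:
            continue
        dx, dy, dz = x - ox, y - oy, z - oz
        if dx * dx + dy * dy + dz * dz <= r2:   # compare squared distances, no sqrt
            out.append(((x, y, z), entry_tag))
    return out
```

The AI then only ever reasons about the short filtered list, which is what makes it cheap to expand.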
Yes, sorry, I should have been clearer and not just shouted out acronyms expecting everyone to know them. That BT/FST I was talking about is a behaviour tree / finite state tree: https://docs.unrealengine.com/latest/INT/Engine/AI/BehaviorTrees/QuickStart/index.html
I’d feed the array pool into the BT, then do flow instructions from there. Hmm, maybe when I describe it, this sounds more complex than when I actually do it :)…
A navmesh doesn’t have to be just a polygonal “recast” setup mapped to a floor; there are many ways to approach it depending on your game. For level-based games with characters it does just fine, but for other types of games you could consider things like avoidance bounding volumes (for objects / AI), known as blocking volumes in UE, and navmesh bounds (“you may only fly around this segment via waypoints”), then make a spline / random point / waypoint system that taps into it…
After that, from what I was saying earlier about using vectors to calculate distance, in pseudocode: if the player is < 1000 meters away, go kick his ass, using whatever follow system you’d prefer…
You could even make a 3D pathfinding solution if you have way too much time on your hands.
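If you do have way too much time on your hands, 3D pathfinding can be as simple as A* over a voxel grid. A purely illustrative Python sketch, not tied to either engine (`blocked` is a set of occupied cells, moves are the 6 axis neighbours):

```python
import heapq

def astar_3d(start, goal, blocked, bounds):
    """A* on an axis-aligned voxel grid. Returns a list of cells, or None if no path."""
    def h(a, b):  # Manhattan distance heuristic
        return abs(a[0] - b[0]) + abs(a[1] - b[1]) + abs(a[2] - b[2])

    open_heap = [(h(start, goal), 0, start, None)]  # (f, g, cell, parent)
    came_from = {}
    cost = {start: 0}
    while open_heap:
        _, g, cur, parent = heapq.heappop(open_heap)
        if cur in came_from:          # already expanded with a better cost
            continue
        came_from[cur] = parent
        if cur == goal:               # walk parents back to reconstruct the path
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        x, y, z = cur
        for nxt in ((x+1, y, z), (x-1, y, z), (x, y+1, z),
                    (x, y-1, z), (x, y, z+1), (x, y, z-1)):
            if nxt in blocked or nxt in came_from:
                continue
            if not all(0 <= c < b for c, b in zip(nxt, bounds)):
                continue
            ng = g + 1
            if ng < cost.get(nxt, float("inf")):
                cost[nxt] = ng
                heapq.heappush(open_heap, (ng + h(nxt, goal), ng, nxt, cur))
    return None
```

For a space game you’d make the cells coarse and smooth the resulting path afterwards, but the principle is the same.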
I’ve not a clue what Unity can actually do (I’ve only ever made RPGs and top-downs in it), though it’s not the first time I’ve done a spaceship thingy…
How do I make my game look ****?
Well, I want to work backwards. We talk a lot about how to make a game look good, but never much about why it looks good and what errors people run into. How much impact does an engine have on the look / feel of a game? Some are very split on that subject (I’m not), but let’s have a look into it, shall we?
Ok, here is the Infiltrator demo. Pretty, ain’t it?
Here, I’ve switched off the post processing…
Avert your eyes, ewww! Here’s a picture with all the IES profiles / lighting / shadow information / particles (VFX) switched off. This is literally just the 3D art.
Yeah, doesn’t look so good, does it? One of THE biggest factors is calibrating your lighting / materials / post correctly. If you don’t do that you’ve no chance; no matter how good your artwork is, it’ll suck…
Luckily, a lot of 3D art packages now include DX / GL based post-processing, so at least you’re not going in completely blind. So, the question: how much impact does an engine have on the look of your game? A lot…
Let’s just do one more example, where I’ve put everything back the way it should be, then stuck in an IBL (image-based lighting) component and not configured it correctly.
Yeah, I’ve tried to make it as apparent as possible… But this is the effect of only one misconfigured light…
Your artwork can be pretty simple and still look better than most “indie” games out there, if you understand the basic principles of how engine graphics technology works.
@Steve-Tack yeah I don’t really have much to say on the topic that’s relevant to this thread, but I’ve spent quite some time on spaceship AI/obstacle avoidance and my point of view is simply this: you can get a passable AI and obstacle avoidance using raycasts and simple state machines, but to get from passable to good (or very good) you need to go a lot deeper. I had a lot of problems making combat interesting and avoiding ‘clumping’.
For obstacle avoidance, if your AI is using raycasts and navigating in the vicinity of any concave obstacle profiles you’ll have problems with ships getting trapped. You can simply pre-build paths around known obstacles, but there are some problems:
- You might have dynamic large-scale structures, such as capital ships, that can combine with other obstacles to create difficult obstacle profiles that weren’t there at the start.
- When you just use raycasts, the AI tends to go around like a cockroach, zigzagging its way around things, because it always needs to keep track of where the obstacle is as it goes around it. It has no memory.
- It’s difficult to create varied, complex and smooth manoeuvres during combat using only angles and forces; it becomes a state-machine porridge.
So anyway I spent a lot of time looking on the net and I came across a vid where the guy from AIGameDev interviews one of the people making AI for Star Citizen. And basically their approach is to dynamically generate spline segments in-game that the AI either constructs itself or creates out of pre-made splines. This gets around the problem of ‘obstacle memory’ over frames since it solves the avoidance problem once for the period of time that it is following that spline. And it makes the AI movement look very neat and smooth.
So I decided to implement that, using a force-based PID controller to follow the spline, and it seemed to work fine for a few test cases. And then I decided to do other stuff…
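For the record, here’s my reading of the “generate a spline on the spot” trick in Python. This is a guess at the general idea, not Kythera’s actual implementation: if the straight segment to the target clips a (spherical) obstacle, push a via point out past the obstacle’s radius and sample a quadratic Bezier through it. The 1.5x margin is an arbitrary illustrative value.

```python
import math

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def add(a, b): return tuple(x + y for x, y in zip(a, b))
def scale(a, s): return tuple(x * s for x in a)
def norm(a):
    l = math.sqrt(sum(x * x for x in a))
    return tuple(x / l for x in a)

def detour_points(ship, target, center, radius, margin=1.5):
    """Return [ship, via, target] if the segment clips the sphere, else [ship, target]."""
    d = norm(sub(target, ship))
    t = sum(a * b for a, b in zip(sub(center, ship), d))  # project center onto segment
    closest = add(ship, scale(d, t))
    away = sub(closest, center)
    dist = math.sqrt(sum(x * x for x in away))
    if t < 0 or dist > radius:
        return [ship, target]          # clear line, no detour needed
    via = add(center, scale(norm(away), radius * margin))  # pushed out past the surface
    return [ship, via, target]

def bezier(points, u):
    """Quadratic Bezier through [p0, via, p2]; the via point acts as the control point."""
    p0, p1, p2 = points
    a = scale(p0, (1 - u) ** 2)
    b = scale(p1, 2 * (1 - u) * u)
    c = scale(p2, u ** 2)
    return tuple(x + y + z for x, y, z in zip(a, b, c))
```

Once built, the curve is fixed for as long as the ship follows it, which is exactly the “solve avoidance once, no per-frame obstacle memory” property mentioned above.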
Great stuff, thanks for sharing!
Blah blah blah, I’m still waiting to see the rough-edges work in progress!
(although the examples are nice and all)
Well, you wanted rough…! There’s a metric boatload of problems with this one: the corridor’s too clean, the colour is off on the door panels, and I messed up the material I just slapped on the pipes… They need re-doing, as normal maps can only get you so far… Although it’s not really the point at the moment; all I really need to do is get everything built, and then I can obsess over textures and materials until I’m blue in the face.
This is what I spend way too much time on. So I had a concept… which I haven’t followed. I used some pre-made materials to “test the waters”; it looked messed up, of course, because they weren’t made for these meshes. I was supposed to use the “AAA” materials as a reference, and that’s a good thing to do, but me being me… I did whatever anyway…!
Although, there are tons of meshes / decals to add later, so I’ll do a sweep and probably re-texture and / or sort out the issues. Edit: I’m sure there was a point to this. I’ve probably got another 300 meshes to make; it’s fine doing pre-viz, in fact I encourage it, but there’s no point trying to finish this off this early, as it’ll look completely different when all the rest of the clutter is in…
Do the blockout / simple texture / colour analysis and move straight on. Don’t let the fact it looks crap bother you too much.
Onto making the city! RARR!

I agree.
Any chance you have the link to that Star Citizen interview?
I find that interesting; a couple of months ago I did exactly the same thing. (Including the last part, getting it mostly working then moving on!) It was a bit difficult to get the prefab manoeuvres aligned and scaled correctly, but for a time I even had tests where two ships were using real dogfighting manoeuvres (granted, they’re aircraft manoeuvres, but my tests were all about good looks). Pretty easy too: just set a series of position/orientation waypoints as a prefab, Catmull-Rom them, set a “close enough” factor for each waypoint, and turn the PID loose. It did take a whole bunch of fiddling to combine “fly towards” orientation with an attempt to match waypoint orientation, and in all honesty I lost interest in the exercise before I really nailed it. (I was doing all of that with the intent of learning other parts of Unity; the manoeuvring was kind of a sideshow.)
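The Catmull-Rom step itself is simple enough to sketch. A minimal Python evaluator (endpoint handling here just duplicates the first and last waypoints, which is one common choice, not necessarily what anyone in this thread did):

```python
def catmull_rom(p0, p1, p2, p3, t):
    """Evaluate one Catmull-Rom segment between p1 and p2 at t in [0, 1]."""
    t2, t3 = t * t, t * t * t
    return tuple(
        0.5 * (2 * b + (-a + c) * t
               + (2 * a - 5 * b + 4 * c - d) * t2
               + (-a + 3 * b - 3 * c + d) * t3)
        for a, b, c, d in zip(p0, p1, p2, p3)
    )

def sample_path(waypoints, samples_per_segment=8):
    """Sample a smooth curve that passes through every waypoint."""
    pts = [waypoints[0]] + list(waypoints) + [waypoints[-1]]  # pad the ends
    out = []
    for i in range(1, len(pts) - 2):
        for s in range(samples_per_segment):
            out.append(catmull_rom(pts[i-1], pts[i], pts[i+1], pts[i+2],
                                   s / samples_per_segment))
    out.append(waypoints[-1])
    return out
```

The PID then chases the sampled points in order, advancing once it’s within the “close enough” radius.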
Unfortunately, the link I accessed it with seems to have been a Livestream video that was available only for a certain time. The interview now seems to be only available by subscription on the AIGameDev website.
It’s a very interesting talk, but it won’t go into the low-level details of implementation. If I remember correctly, they mentioned that the main difficulty was transitioning from one spline to another, and getting back on a spline after the AI had been removed from it for one reason or another, so it kind of sounds like you had the same sort of issues. I only ever got to the point of making it follow a single spline in perfect test conditions.
Wow, this thread went quiet since Billy and I started doing stuff.
Ok, next bit… So I procedurally placed a road layout. There’s a point where trying to do overhangs / arches etc. becomes kind of silly, and it’d take more time to code up a solution than it would to just model the damn thing.
So, I need this for one purpose and one purpose only… You guessed it: BLOCKOUT! Start with everything, then let it grow… I’ll test the navmesh, create a surrounding terrain and work on a water shader!
I’m liking the darker sort of more batman(ish) thing going on, so I shall remain on that path.
+1
Are these roads modeled by hand in Maya? Generated in Maya with some scripting? Or imported into Maya from an outside tool?
Generated in Maya. I bought SceneCity for Blender ages ago, which was great in principle, but UE had a fit every time I tried to import the city to do basic testing. So I gave up on that, although it was interesting to see how they did it…
Speaking of city generation, here are some current techniques, and a highlight from an indie dev:
The most famous and versatile layout technique out there (with an in-browser demo):
http://www.tmwhere.com/city_generation.html
It’s based on an academic paper that studies the formation of cities.
A set of techniques to transform and manipulate the layout into meshes (with source code; see the last link):
http://martindevans.me/game-development/2016/03/30/Procedural-Generation-For-Dummies-Half-Edge-Geometry/
http://martindevans.me/game-development/2015/12/27/Procedural-Generation-For-Dummies-Lots/
http://martindevans.me/game-development/2016/05/07/Procedural-Generation-For-Dummies-Footprints/
http://martindevans.me/game-development/2015/12/11/Procedural-Generation-For-Dummies-Roads/
https://bitbucket.org/martindevans/base-citygeneration/src/a65800862b60?at=default
An actual indie using it (delacian), and not the only one:








He has implemented it in Unity and Unreal too.
It’s the same tech Introversion used too.
This could become like terrain editors in the near future, who knows?
You’d paint some color on the terrain to say where the highest buildings go, or the lowest ones; another color for green areas or areas with small simple houses, just like you paint terrain with tools that auto-generate trees and vegetation.
Take the Unity Gaia plugin and modify it to paint and auto-generate buildings, houses, roads and bridges, depending on the stamps you use.
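As a toy illustration of that paint-to-generate idea (entirely hypothetical; this is not Gaia’s actual API, just the core mapping): a painted grayscale mask is quantized into building heights per cell, with 0 meaning “no building”.

```python
def mask_to_buildings(mask, max_floors=40):
    """mask: 2D list of 0..255 paint values; returns floors per cell (0 = empty).

    A real tool would use separate channels for parks, house districts etc.,
    but the principle is the same: painted intensity drives the generator.
    """
    buildings = []
    for row in mask:
        buildings.append([round(v / 255 * max_floors) for v in row])
    return buildings
```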
I’m surprised it hasn’t been done yet, because this has surely been out in the wild for some time; the workflow is known, but everybody does a custom implementation of the same things over and over again. Same for building generation.
Maybe there is an asset store idea somewhere.




