Yes, but its immediate benefit is limited. You have to learn to write good prompts, which includes learning when and how small changes in wording can greatly improve the quality of the result, and how to identify hallucinations.
Additionally, the free tiers have very limited usefulness. In most cases, if you don’t want to pay, or you’re in a country where you can’t access the paid AIs, you shouldn’t even bother trying to use it.
Short answer is no, but it’s fun to try and once in a blue moon it’ll actually be useful.
I’ve given it a bunch of tries: UIToolkit, general math, rotations, rendering TMP manually, mostly when I was stuck and desperate enough. Unity- and math-related questions are terrible; it hallucinates too much. Total trash, 100%.
BUT, one time I asked ChatGPT a very specific SIMD question and it gave me perfect code.
The prompt was:
“can you give me a simd version of packing a 128 length byte array into a 128 length bit array?”
“i need to set the resulting bit to 1 only when the byte is 255”
Not even that great of a prompt, but it gave perfect SIMD code. That saved me quite some time. No idea what happened there; maybe someone asked the same thing on SO.
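For reference, a minimal sketch of what an answer to that prompt might look like in modern C# with System.Runtime.Intrinsics (assuming SSE2 support; the code ChatGPT actually produced may have used different intrinsics):

```csharp
using System;
using System.Runtime.InteropServices;
using System.Runtime.Intrinsics;
using System.Runtime.Intrinsics.X86;

static class BytePacker
{
    // Packs 128 bytes into a 128-bit mask (8 ushorts): bit i is set
    // only when bytes[i] == 255.
    public static void Pack(ReadOnlySpan<byte> bytes, Span<ushort> bits)
    {
        if (bytes.Length != 128 || bits.Length != 8)
            throw new ArgumentException("Expected 128 input bytes and 8 output ushorts.");

        Vector128<byte> all255 = Vector128.Create((byte)255);
        for (int i = 0; i < 8; i++)
        {
            // Load 16 bytes, turn each lane into 0xFF where it equals 255,
            // then collapse the lane sign bits into a 16-bit mask.
            Vector128<byte> chunk = MemoryMarshal.Read<Vector128<byte>>(bytes.Slice(i * 16, 16));
            Vector128<byte> eq = Sse2.CompareEqual(chunk, all255);
            bits[i] = (ushort)Sse2.MoveMask(eq);
        }
    }
}
```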
I’ve only tried Google’s Gemini. It seems to me that experienced programmers don’t need AI and inexperienced programmers can’t really trust it.
Personally, I like very compact code, which the AI doesn’t seem to be very good at writing. I get the impression that they take publicly available code and then modify it. I’ve seen some code from Gemini that looks very familiar, but with the variable and class names changed. Or sometimes the code is just plain wonky.
I recommend inexperienced developers stick to learning from the code that’s on GitHub.
No, and a lot of that is because it’s been getting progressively worse. Unity’s offerings are laughable already, but a lot of existing options have been outputting worse and worse code, the kind where I’d have to spend more time either doing loads of prompt massaging or just fixing it myself for it to be at all useful. It’s still only truly good for boilerplate, but the thing about boilerplate is that once I write it once, I just save it and reuse it forever. I don’t need AI to do that for me.
Until recently, I had no issues with generative tools for code.
For example, I use ChatGPT only sometimes, rather rarely, for short snippets: syntax, use cases, or method names I forgot. For me it’s faster than using a search engine and sifting through forums or Stack Overflow.
Typically 1-15 lines max. Beyond that it becomes a debugging story, and it’s easier to write my own code.
For example, I found solutions to a specific Wwise implementation that hadn’t been clearly documented. I still don’t know where the training information was taken from, but the solution worked. There was a bit of hallucination, but it gave good pointers.
In my opinion, you should not use AI chatbots to do your work. That is a borrowed skill, and while a chatbot is doing the coding, you do not grow and do not improve. Also, if the service ever shuts down, you’ll be royally screwed.
However, LLMs work fairly well as teachers, talking encyclopedias, or temporary discussion partners, and for explaining things. That is a valid use case.
I’m using ChatGPT regularly to code smaller segments I’m too lazy for, or for “out of the book” algorithms.
I’d likely not be successful with this approach if I didn’t have the skill to debug and understand the code however.
Occasionally it also helps me find specific algorithms (and their names) after explaining a problem.
It is not a perfect solution, as others have mentioned, but 70% AI plus 30% brain is a powerful combo.
Tip: When it is stuck on editing a faulty piece of code without apparent progress, tell it: “Let’s start fresh with a new approach please”.
That’s quite successful sometimes.
GitHub Copilot is sometimes nice as a fancier auto-complete, but other times it gives complete nonsense. It’s hard to evaluate whether it’s a net positive, since it adds to the mental load of having to read and verify the snippets it provides.
Paid ChatGPT is great for “I have this piece of code doing a thing, but I also need it to do this other thing or two” type of stuff. It’s also decent at refactoring very messy code if you ask it to retain the exact same functionality.
Just for fun I tried Bing chat about a year ago (I’d heard that ChatGPT was the best, but the ChatGPT sign-up website didn’t accept my email at the time). Bing gave me some code that superficially resembled what the code I asked for might reasonably look like. The code it gave me didn’t actually do anything, and a lot of the details made no sense on closer inspection. After that I kind of lost interest in the whole thing.
I’ve found the Unity code results mixed, and typically not worth the time to debug (using Bing and ChatGPT). What it has really been good for is very specific questions; for example, it’s spot-on with regex requests.
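As an illustration (a made-up example, not one of the actual requests), this is the kind of narrow regex question that tends to come back spot-on, e.g. “give me a regex that pulls version numbers out of a string”:

```csharp
using System;
using System.Text.RegularExpressions;

// Hypothetical example of a narrow regex request: extract dotted
// version numbers like 2022.3.15 from a string.
var version = new Regex(@"\b\d+\.\d+\.\d+\b");
foreach (Match m in version.Matches("Upgraded from Unity 2022.3.15 to 2023.1.0"))
    Console.WriteLine(m.Value); // prints 2022.3.15, then 2023.1.0
```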
I also found it good at generating Gatsby GraphiQL code, which is very niche, but it seems to just work. Probably because it has fewer examples to sift through and fewer versions, compared to Unity.
I stopped using ChatGPT ages ago. BingGPT, which also has a precise mode with Copilot built in (also built into Win11 now), works very well. It’s silly if you want it to think mathematically; it’s not a mathematician, but it excels at logic and structure. Optimizing is probably not its best point either, but it can point you in the right direction, or show approaches/functions you can use.
I’ve found that Poe’s Claude-instant is better at Unity; the popular LLMs are way too chatty. AIs provide that missing doc page or missing API function, so we don’t have to spend a day tinkering with small code and can instead focus on the zoomed-out development of the project.
I would split the use into two categories: A) chatbot, B) code completion. The first I do from time to time, usually for simple “throw-away” scripts (e.g., go through all prefabs in a selected directory and remove a script I forgot to remove, or add a triangulate modifier to all selected objects in Blender); a sketch of that prefab clean-up follows below. Good LLMs (like GPT-4) are really good at such stuff. I can only guess that’s because there is a lot of existing code that does the thing I want, or something very similar.
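To make the prefab example concrete, here’s a minimal sketch of what such a throw-away editor script might look like. Everything here is illustrative: LeftoverDebugScript stands in for whichever component you want to strip, and Assets/Prefabs for your folder.

```csharp
using UnityEditor;
using UnityEngine;

// Hypothetical stand-in for the component to strip; in a real project
// this would be your own existing MonoBehaviour.
public class LeftoverDebugScript : MonoBehaviour { }

public static class RemoveLeftoverScript
{
    [MenuItem("Tools/Remove Leftover Script From Prefabs")]
    static void Run()
    {
        // Find every prefab under the (assumed) Assets/Prefabs folder.
        string[] guids = AssetDatabase.FindAssets("t:Prefab", new[] { "Assets/Prefabs" });
        foreach (string guid in guids)
        {
            string path = AssetDatabase.GUIDToAssetPath(guid);
            GameObject root = PrefabUtility.LoadPrefabContents(path);

            // Remove every instance of the unwanted component, then save.
            foreach (var c in root.GetComponentsInChildren<LeftoverDebugScript>(true))
                Object.DestroyImmediate(c);

            PrefabUtility.SaveAsPrefabAsset(root, path);
            PrefabUtility.UnloadPrefabContents(root);
        }
        AssetDatabase.SaveAssets();
    }
}
```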
For B) I use Codeium. It is a really great help in speeding up development. Surprisingly, in my experience it’s even more helpful with C++ (for Unreal) and PyQt code. I think the more boilerplate is needed, the more useful such context-aware code completion is.
One big disclaimer though: you have to know what you are doing in both cases.
At the moment I use GitHub Copilot. It’s much better than chat-only AIs because it has the context of your project and your code. It can auto-complete code directly while you type, or even suggest blocks of code to finish your current programming step. It can also implement simple methods on its own when given an appropriate method signature and comments. It excels at refactoring, because it knows the former code and makes very good suggestions when you rewrite it. Repetitive code and boilerplate generation are also quite good. But the more complex the programming task, the more failures. Still, sometimes you get an idea even when the AI provides broken code, and you can correct/complete it yourself.
GitHub Copilot also has a chat where you can ask questions. In my opinion it is better for general programming questions than generalized ChatGPT/Bing Copilot, because it is specialized in coding, but it’s still fully able to hallucinate. What I like most about the chat is asking questions about your project, like “review class X please”, “can method Y be performance-optimized”, or asking about an error in a method and possibilities to fix it. These interactions can be quite awkward with chat-only AIs, because of the limitations on pasting code and character counts.
All in all, I find GitHub Copilot a useful tool with its pros and cons. It makes a developer’s life more comfortable and easier, but you still have to do the main work.
I find GitHub Copilot to be beneficial overall - enough to pay the small fee to use it - but hit-and-miss in specific instances.
Occasionally its suggestions are spot on, and save me time. Sometimes they are close, and I use them and modify them. Usually I ignore them completely.
I have found prompt-generated code to be, on the whole, pretty poor quality: typically lacking any edge-case handling, and often containing small errors that are difficult to spot. But again, sometimes I am surprised.
They can’t trust it, and they’ll slow their own progression by using it. If they’re skipping practice by getting a machine to provide answers for them, then they’re not developing the skills to come up with the answers for themselves.
Would we?
If my understanding is correct, the way that these LLM-based systems work means that they’re going to give modified versions of existing, published solutions. That’s useful in a lot of contexts, but means that they’re not going to be solving new problems for us. If an AI saves someone time elsewhere so they can put it into better problem solving, awesome. But I can’t see these systems putting good programmers out of a job, because a good programmer can solve new problems for themselves.
It’s entirely possible that AI could reach the point where it does solve new problems, but my understanding is that this would require development in a different area to what we’ve seen in the last couple of years.