I have found that bundling your own code into DLLs only serves to slow down your workflow. I base this on several years of experience dealing with other teams' well-intentioned packaging mishaps. With games, it is almost always useful to reach into library code, make a temporary tweak to isolate a problem, and then revert it. With DLLs this is impossible, which makes the workflow harder and more brittle.
I prefer to share source code at the C# level. That way I can reach into libraries trivially and inject whatever instrumentation I need in order to track down the issue at hand. You already know this will be necessary, so denying it just sets your team up for unnecessary extra work when tracking down bugs.
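For example, when a purchase flow misbehaves, I can drop a temporary trace straight into the library source. This is a minimal sketch only; PurchaseManager, BeginPurchase, and the LIBRARY_DEBUG_TRACE define are made-up names, not any real package's API:

```csharp
using UnityEngine;

// Hypothetical shared-library class, purely illustrative.
public class PurchaseManager
{
    public void BeginPurchase(string productId)
    {
#if LIBRARY_DEBUG_TRACE // temporary scripting define, deleted once the bug is found
        Debug.Log($"BeginPurchase: productId={productId} frame={Time.frameCount}");
#endif
        // ... original library logic continues unchanged ...
    }
}
```

Because the library is plain C# source sitting in the project, that edit takes seconds, and reverting it is a single source-control checkout. With a DLL you would be rebuilding and re-importing the library just to ask one question.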
Here are some observations I've made about source code-sharing methods in Unity3D. “Library” refers to the shared code; “client” refers to the game client using that code.
METHOD 1: Clone the code (from library repo into client code repo)
PROS:
- all changes to the shared code are recorded clearly in the client's source control DAG, allowing easy forensics with little confusion and no other repositories to study
- the client can fork the code if it needs to (i.e., make local changes to library code, perhaps to “upflow” them to the library later if warranted)
- good version control discipline helps preserve functionality when you do drop newer versions of the library code into the client
CONS:
- duplication of code
- it is easy to lose customizations that you actually wanted to “upflow” from the client code to the library; avoiding this takes extra tracking and discipline
METHOD 2: Use source control submodules
PROS:
- supported and enforced by version control
- even works for Editor/ and Resources/ and other “special” subfolders in Unity3D
- gets even better if you use good C# namespacing (see the sketch after this list)
CONS:
- depending on your branch strategy (git flow, etc.), you have to keep both the primary repo and all submodules “merged in sync” with that strategy. This is VERY tricky to visualize and get right, despite appearing simple on the surface. With git-flow, a simple merge from release → develop actually becomes multiple separate merges (one per repository), and the chances of error skyrocket.
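To make the namespacing point from the PROS list concrete, here is roughly what it looks like with the submodule checked out under the client's Assets/ folder. The folder path, the MyStudio.SharedLib namespace, and the ScoreKeeper class are all invented for this sketch:

```csharp
// Inside the submodule, e.g. Assets/SharedLib/Runtime/ScoreKeeper.cs (illustrative path):
namespace MyStudio.SharedLib
{
    public class ScoreKeeper
    {
        public int Score { get; private set; }

        public void Add(int points)
        {
            Score += points;
        }
    }
}
```

And on the client side, outside the submodule:

```csharp
using MyStudio.SharedLib;
using UnityEngine;

public class GameHud : MonoBehaviour
{
    ScoreKeeper keeper = new ScoreKeeper();

    void Start()
    {
        keeper.Add(100);
        Debug.Log($"Score: {keeper.Score}");
    }
}
```

The explicit namespace makes it obvious at a glance which types came from the shared code, and the only coupling in the client is a using line.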
METHOD 3: Use symbolic links from the client to the shared library code
PROS:
- extremely slick and fast iteration for any given team member
CONS:
- scales poorly to many team members
- it can be mysterious why something stopped working when a library file changed but the client repository doesn't reflect that change
- every developer installation requires the symlinks to be set up (on either Windows or macOS), making it impossible to “clone the repo and go”
Personally I favor method #1, “Clone the code,” along with strategic upflow and downflow of shared code. I generally keep all the library code in a single folder inside a reference “library test project,” and then copy just that directory down into client projects. This is how I manage my datasacks repo, which I share across a lot of my games.
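If you want to automate that copy step, a small editor script inside the client project can do it. This is only a hypothetical sketch of the idea, not a tool shipped with datasacks; both paths below are assumptions you would point at your own folders:

```csharp
// Editor/SyncSharedLibrary.cs -- must live under an Editor/ folder in the client project.
using System.IO;
using UnityEditor;
using UnityEngine;

public static class SyncSharedLibrary
{
    // Hypothetical paths: adjust to wherever your reference "library test project" lives.
    const string SourceDir = "../LibraryTestProject/Assets/SharedLib";
    const string DestDir = "Assets/SharedLib";

    [MenuItem("Tools/Sync Shared Library From Reference Project")]
    public static void Sync()
    {
        if (!Directory.Exists(SourceDir))
        {
            Debug.LogError($"Shared library source not found: {SourceDir}");
            return;
        }

        // Deletes DestDir and replaces it with a fresh copy of SourceDir.
        FileUtil.ReplaceDirectory(SourceDir, DestDir);
        AssetDatabase.Refresh();
        Debug.Log($"Copied {SourceDir} -> {DestDir}");
    }
}
```

Note that the copy is one-directional (“downflow”): any local edits in the client's copy are discarded, so changes you want to “upflow” still have to be carried back to the reference project deliberately.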
EDIT from June 2021: I favor #1 only up to a certain project scale. For most commercial projects, approach #2 (properly configured submodules) is a clear winner, and it scales very nicely to CI/CD, to builds, and even to inter-team library sharing.
Again though, I would NEVER reach for DLLs. It has been a complete disaster every time I've seen it tried. It gives you zero benefit and nothing but headaches for the engineers. And if you think “Oh, my IAP library is final,” you are most likely incorrect. It will have bugs, and its requirements will change. No software is final. Software is soft. And you already know that Apple and Google WILL change their IAP requirements.
And as for dependencies, if library X needs library Y, either put them both in your project, or make one a sub-library of the other.
I also would NOT reach for NuGet. I have wasted far more time tracking down weird dependency problems than I have ever saved through NuGet's automatic dependency handling.