Your techniques to share code between projects?

How do you distribute code between projects? We have multiple projects with multiple programmers. These are some of the problems we’ve been having.

  • How to ensure bug fixes to a package in one project make it to a second project?
  • How to distribute internal packages?
  • How to handle dependencies? E.g., our IAP package should require our Analytics package to be installed.

We are talking about setting up an internal NuGet server to distribute DLLs via NuGet for Unity. It seems the Package Manager will be the way to do it in the future, but it does not seem ready for custom packages yet. We attempted to use git submodules, but having loose .cs files caused the code to diverge pretty quickly.

Or is there some other solution other people are using?

1 Like

I have found that bundling your own code into DLLs only serves to slow down your workflow. I base this on several years of experience dealing with other teams’ well-intentioned packaging mishaps. With games, it is almost always useful to reach into library code, make a temporary tweak to isolate a problem, and then revert it. With DLLs this is impossible, making the workflow harder and more brittle.

I prefer to share source code at the C# level. That way I can reach into libraries trivially and inject whatever instrumentation I need in order to track down the issues I am having. You know this will be necessary, so denying it just sets your team up for extra work when tracking down bugs.

Here are some observations I’ve made about source-code-sharing methods in Unity3D. “Library” refers to the shared code; “client” refers to the game client using the code.

METHOD 1: Clone the code (from library repo into client code repo)

PROS:

  • all changes to your code are recorded clearly in the client source control DAG, allowing easy forensics with little confusion and no other repositories to study
  • the client can fork the code if it needs to (i.e., make local changes to library code, perhaps to later “upflow” to the library if warranted)
  • good version control discipline will help preserve functionality when you do drop newer versions of the library code into the client code.

CONS:

  • duplication of code
  • easy to lose customizations that you actually wanted to “upflow” from client code to the library. This just requires tracking and extra discipline.

METHOD 2: Using source control submodules:

PROS:

  • supported and enforced by version control
  • even works for Editor/ and Resources/ and other “special” subfolders in Unity3D
  • gets even better if you use good C# namespacing

CONS:

  • depending on your branching strategy (git flow, etc.) you have to keep the primary repo and all subrepos “merged in sync” across branches. This is VERY tricky to visualize and get right, despite appearing simple on the surface. With git-flow, a simple merge from rel → develop is actually multiple separate merges, and the chances of error skyrocket.
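
For reference, a minimal method-2 setup might look like this (repo URLs and paths here are hypothetical):

# add the shared library as a submodule inside Assets/
git submodule add https://example.com/team/shared-lib.git Assets/SharedLib
git commit -m "Add SharedLib submodule"

# fresh clones need the submodule contents too
git clone --recurse-submodules https://example.com/team/game-client.git

# later, move the client to a newer library commit
cd Assets/SharedLib && git pull origin main && cd ../..
git add Assets/SharedLib && git commit -m "Bump SharedLib"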

METHOD 3: using symbolic links from client to shared code library

PROS:

  • extremely slick and fast iteration for any given team member

CONS:

  • scales poorly to many team members
  • it can be mysterious why something stopped working when a library file changed and the client repository doesn’t reflect it
  • every developer installation requires symlinks to be set up (on either Windows or macOS), making it impossible to “clone the repo and go” (see the sketch below)
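
If you do go with method #3, the per-machine setup is typically one command per platform; paths here are hypothetical:

# macOS / Linux
ln -s ../../SharedLib/Assets/SharedLib Assets/SharedLib

# Windows (cmd, run with admin rights or developer mode enabled)
mklink /D Assets\SharedLib ..\..\SharedLib\Assets\SharedLib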

Personally I favor method #1 above (“Clone the code”), along with strategic upflow and downflow of shared code. I generally try to keep all the library code in a single folder in a reference “library test project,” and then copy just that directory down to client projects. This is how I manage my datasacks repo, which I share between a lot of my games.
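
That copy-down step can be scripted; a sketch for macOS/Linux (folder names hypothetical, mirroring is one way to make deletions propagate too):

# mirror the library folder from the reference project into this client
rsync -av --delete ../LibraryTestProject/Assets/Datasacks/ Assets/Datasacks/
git add Assets/Datasacks
git commit -m "Update Datasacks from library test project"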

EDIT from June 2021: I favor #1 only up until a certain scale of project size. For most commercial projects, approach #2 of properly configured submodules is a clear winner, and scales very nicely to CI/CD and builds and even for inter-team library shares.

Again though, I would NEVER reach for DLLs. It’s been a complete disaster every time I’ve seen it tried. It gives you zero benefit and nothing but headaches for the engineers. And if you think “Oh my IAP library is final,” you are most likely incorrect. It will have bugs, there will be changes to its requirements. No software is final. Software is soft. And you already know that Apple and Google WILL change their IAP.

And as for dependencies, if library X needs library Y, either put them both in your project, or make one a sub-library of the other.
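
If you are on the submodule route (method #2), “make one a sub-library of the other” maps directly onto nested submodules; URL and path here are hypothetical:

# inside library X's repository
git submodule add https://example.com/team/library-y.git LibraryY
git commit -m "Carry LibraryY as a sub-library of LibraryX"

# clients that pull X recursively then get Y for free
git submodule update --init --recursive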

I also would NOT reach for NuGet. I have wasted far more time tracking down weird dependency problems than I’ve ever saved through NuGet’s automatic dependency handling.

22 Likes

In short: We use dlls in the Unity projects, but each dev has access to the (source controlled) projects of the dll and can change/build the dll code when needed.

We use separate (Visual Studio / C#) projects for “common code”: for example, let’s say an “Example” project. These are source controlled and independent of any Unity project. If a Unity (game) project wants to use the “Example” code, we just grab the newest Example.dll and copy it into the Unity project.

If you want to debug code in there, you can still step “into” the dll code as long as you put the needed debug symbol files into the Unity project (the same folder where you put the .dll itself).

Now for the workflow when you actually want to change the code (maybe you found a bug, or you want to add a new feature): in that case we get the actual (latest) source code for the “Example” (Visual Studio) project and change the (debug) build output path to the current Unity project folder. (This part is optional, but otherwise you would have to copy/paste the newest .dll from the build folder to your Unity project folder after each change/build.) Now when we change the code and build the project, it overwrites the Example.dll inside the Unity project where we want to test the changes.
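
If you’d rather not click through Visual Studio’s build settings each time, the same output redirect can be scripted with the dotnet CLI, assuming an SDK-style project; project and game paths here are hypothetical:

# build the shared library straight into the Unity project under test
dotnet build Example.csproj -c Debug -o "../MyGame/Assets/Plugins/Example"

# the Debug build also emits the symbol files needed for stepping into the dll
# (rebuild with -c Release before shipping, as noted below)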

Then you can either check in the changes to source control or revert them if you don’t need them. (If you fixed a bug and want the fix in your other projects as well, those projects would have to replace their current Example.dll with the newer one containing the fix.)
→ This depends on your company size: if you have many teams working on different projects at the same time, and they all modify the code of the “Example” project, then you might need different branches etc., but that’s a different topic, and in that case your company would most likely already have a strategy for sharing code.

The downside of this method is that you have to rebuild the Example.dll after you change the code and overwrite the “old” dll in your Unity project (see comment above), but in my opinion it’s still better than duplicating the whole code into each Unity project.

There is one important part about this: While working on a new Unity project we build the dlls in debug mode, so don’t forget to rebuild the dlls in release mode before shipping the game or running performance tests :wink:

In addition to the 3 methods from @Kurt-Dekker , we utilize two different methods:

METHOD 4: UPM git packages/custom UPM server (this was not possible in 2018)
Good for modular, project-independent code (e.g. a custom httpclient wrapper for a custom backend)

PROS:

  • all the advantages of UPM (easy distribution, versioning, dependencies)
  • code in separate folder (Packages)

CONS:

  • not mutable; every change must be made in the original repo
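
Concretely, a git package is just an entry in the client’s Packages/manifest.json; the package name, URL, and tag below are hypothetical:

{
  "dependencies": {
    "com.mycompany.httpwrapper": "https://github.com/mycompany/httpwrapper.git#1.2.0"
  }
}

The #1.2.0 suffix pins a tag, which is where the versioning comes from; dependencies on other packages go in the package’s own package.json.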

METHOD 5: Branching + git worktrees
Good for projects with a common codebase and only a few variations (e.g. game reskins, A/B tests, etc.). This combo is better than forking: less space on disk, faster manipulation, faster setup. The common codebase is the master branch, variations get branches of their own, and with worktrees you have all the advantages of separate repos.

PROS:

  • all in one repo
  • fast sharing and distribution of code (git merge strategy is great)
  • quick setup (one repo)
  • simple serialized data, and assets other than code (graphics, audio…), may vary greatly between branches
  • once the differences are too big, you can fork the branch off to a separate repo

CONS:

  • really only useful when the common codebase has only a few variations/additions, otherwise it can get complicated. The same is true for complicated serialized objects like scenes and prefabs: merging can become overwhelming even with git. You can mitigate this by keeping less shared content in the master branch, but then you have more work implementing changes on each separate branch, which goes against this workflow.
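
The worktree setup for a variant might look like this (branch and folder names hypothetical):

# one repo, one checkout per variant - no second clone needed
git worktree add ../MyGame-ReskinA reskin-a
git worktree add ../MyGame-ABTest1 ab-test-1

# pull a fix from the common codebase into a variant
cd ../MyGame-ReskinA
git merge master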

There is another way with git, similar to @Kurt-Dekker’s METHOD 2:

METHOD 6: git subtrees

I have used submodules but never subtrees, so if anybody has experience I would like to read about it.
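
From the git docs, the basic shape would be something like this (prefix and URL hypothetical); I can’t vouch for how it behaves in practice:

# vendor the library into a subfolder, squashing its history
git subtree add --prefix=Assets/SharedLib https://example.com/team/shared-lib.git main --squash

# pull upstream changes later, or push local fixes back
git subtree pull --prefix=Assets/SharedLib https://example.com/team/shared-lib.git main --squash
git subtree push --prefix=Assets/SharedLib https://example.com/team/shared-lib.git fix-branch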

3 Likes

Sharing some successes and failures with sharing files between projects on one machine.

My problem: I am trying to do animation (a hobby project) with Unity using Cinemachine, Timeline, Sequences, etc. I also purchase assets from the asset store for different locations. I then have a set of characters (created in VRoid Studio) and props (bags, hats, etc) that I want to use in different locations (characters and props are not limited to a single location). Unity staff recommended I start a new project per location to avoid the project getting too big (it got crazy slow after a while). That means I need a “common” project and 10 to 20 “location projects”.

I have:

  • Versions of Unity packages (like Cinemachine, Timeline, and Sequences) that must be kept in sync across projects
  • Assets from the asset store that must be kept in sync across projects
  • Models and scripts I create that must be kept in sync across projects.

In my Common project I created a custom package under Packages/MyCommonStuff, with a package.json file that lists the Unity modules I depend on and their version numbers. In the other projects I used “Add Package From Disk” to point at that package.json file directly in the “common” project. That works: changes are reflected automatically across all projects. (It can be hard to work out the package names to use in package.json, as the docs only list the human-friendly names.)
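
For anyone replicating this, the package.json is short; the version numbers below are illustrative only (pull the real module identifiers from an existing Packages/manifest.json):

{
  "name": "com.mystuff.common",
  "version": "1.0.0",
  "displayName": "My Common Stuff",
  "dependencies": {
    "com.unity.cinemachine": "2.8.9",
    "com.unity.timeline": "1.6.4",
    "com.unity.sequences": "1.1.0"
  }
}

“Add Package From Disk” then records a file: reference to that folder in each location project’s own manifest.json.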

I tried moving assets downloaded from the asset store out of the Assets/ directory and into the Packages/MyCommonStuff directory as well. There were problems getting things to compile, with the Editor vs Runtime vs Test separation that custom packages want, etc. This was a pain, as asset store purchases were not designed for that three-way split into different directories. Updates from the asset store were also a pain, since new files did not go into the package directory, so I had to keep rearranging files. So I gave up on this and wrote a script to create symlinks under Assets/ for the roughly 20 asset directories that are common to all projects.
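
The script itself is nothing fancy; a sketch of the macOS version (folder names hypothetical):

#!/bin/bash
# link the shared asset-store folders from the Common project into this one
COMMON="$HOME/Projects/Common/Assets"
for dir in "Asset Pack A" "Asset Pack B" "Asset Pack C"; do
  ln -s "$COMMON/$dir" "Assets/$dir"
done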

I also created an Assets/_COMMON directory for my scripts and models to be shared across projects (the characters and their props). Again I used a symlink, as putting this in the package directory with dependencies on the asset directories was not going well (and is logically weird too).

Using symlinks is not very elegant, but it is working so far, guaranteeing all projects are in sync without having to convert asset store assets into the Unity custom-package format (and redoing that work on every asset store update). I would love to hear a better solution, but asset store purchases that are not packages, and keeping them updated, throw a bit of a wrench in the mix, especially as my own assets (the characters) depend on those non-package asset store purchases.

2 Likes

I’m 2 years too late, but wanted to say that I second this :arrow_up::arrow_up: especially method #2. I’ve been using it with Fork, and it’s as easy as double-clicking the submodule icon to open the submodule repo panel; it’s just like a normal repo, you can do all the branching etc. right there, and ctrl+shift+enter → push to cloud!

edit: users who don’t use Fork can also very easily use submodules in the terminal by cd-ing into the submodule directory and using normal git commands there.
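
The terminal round trip looks something like this (submodule path hypothetical):

cd Assets/MyLibrary               # enter the submodule checkout
git checkout -b fix/edge-case     # branch, commit, push like any repo
git commit -am "Fix edge case"
git push -u origin fix/edge-case
cd ../..                          # back in the parent repo:
git add Assets/MyLibrary          # stage the updated submodule pointer
git commit -m "Point MyLibrary at the fix"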

3 Likes

As for method #2, which is what I wanted to go for… how do you deal with Unity’s .meta files?

In my specific use case I’ve written an editor extension and now want to add a bitmap tileset loader and a custom level map importer/exporter. Both are standalone projects with no connection to Unity at all.

To my understanding, Unity would clog all the submodule folders with its .meta files, giving me a permanent delta against the original master branches unless I push to the standalone projects. Branching/forking should solve the permanent-delta issue, but I’d still have Unity-specific modifications in the external repos, which I ideally do not want.

Going the DLL way seems appropriate for this specific use case at first glance; however, I dislike it for the reasons stated above, and because the projects were built with different Visual Studio and C# versions. Dropping such DLLs into Unity without rebuilding for the target version just feels like a horrible approach.

Any experience in how to tackle this?

Thanks for your insights.

My preference is git packages; then you have a record of which version it was, etc.

1 Like

Since my extensions don’t use any Unity specifics, I have decided to simply gitignore .meta files in the submodule as a first step.
Unity will happily regenerate them whenever needed.
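
Concretely (submodule path hypothetical):

cd Assets/MyEditorExtension    # the plain-C# submodule
echo "*.meta" >> .gitignore    # keep Unity's generated files out of this repo only
git add .gitignore && git commit -m "Ignore Unity .meta files"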

For git packages it feels a tad bit early. None of these projects have left the “exploration chaos” phase yet :slight_smile:

Edit: rephrased post to make clear ignoring of .meta is done in submodule only.

Ok, but ignoring all meta files will lead to issues. If you go to a new machine, or just clone the repo to a new folder, all the reference links will be broken.

Why? Unity will regenerate the meta files and git won’t care. I have no Unity-specific customization (the kind that would land in .meta) for these subprojects.

The references in .gitmodules are to GitHub, not local.

edit: Ah, you mean I have to update the submodules manually after cloning. Ok, valid point. Not a breaking one for now, though.

Yeah, I don’t understand why this even remotely matters, but if it works for you, that’s great. I had thought they were necessary, but as far as I’m able to determine they don’t have an impact on version control, so I never bothered to look further. Granted, I’m on a more powerful computer than most people, so they could have an impact that I’ve just not noticed.

1 Like

I’d say this in regards to the .meta files… if you need to also share assets between projects you’re going to want to keep them.

For example here’s one of my packages:

This package is for input (written before the new input system from Unity, and was my way of dealing with the jankiness of the legacy input system).

In it I have ScriptableObjects with which I can define an input layout and apply it to the global input system. I also have a build pipeline that lets me define an input layout per build configuration:


note the ‘Input Settings’ towards the bottom.

Here’s the thing: as you can see in the package at the top of this post, I have a bunch of predefined input layouts, all labeled “InputSettings_…”. Since these are assets that are instances of my “InputSettings” SO… well, they’re associated via guid. And that guid is in the meta file.

If I were to drag this package into a new project without the meta files and let Unity auto-regen them all, my InputSettings_… assets would all break, because Unity wouldn’t know which script is associated with each asset. The meta file is how that is determined.
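
To make the linkage concrete: the serialized asset points at its script only by guid. An abridged sketch of what one of those .asset files looks like on disk (guid matching the meta example below; asset name hypothetical):

%YAML 1.1
%TAG !u! tag:unity3d.com,2011:
--- !u!114 &11400000
MonoBehaviour:
  m_Script: {fileID: 11500000, guid: 8a589d1624987684f9c15a618e5fd5ab, type: 3}
  m_Name: InputSettings_Example

If the guid on the script’s meta file changes, that m_Script reference dangles and the asset shows up as “Missing (Mono Script)”.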

So point is, you want the meta files if you want to distribute assets and not just scripts.

Thing is, from a git perspective… it’s not costly. The meta files don’t change a lot. This is basically all they are:

fileFormatVersion: 2
guid: 8a589d1624987684f9c15a618e5fd5ab
MonoImporter:
  externalObjects: {}
  serializedVersion: 2
  defaultReferences: []
  executionOrder: 0
  icon: {instanceID: 0}
  userData: 
  assetBundleName: 
  assetBundleVariant:

The guid should never change, and those other properties only change if, well… you change that about them (which isn’t common). So it’s a one-time commit of all of 250-some-odd bytes? If this is clogging up git in some manner… y’all must be using git in way different ways than I have been for years now.

3 Likes

Why not add “*.meta” to the .gitignore of the submodule? Seems like the perfect solution here.

1 Like

That is exactly what I did - guess I failed to explain that :smile:


For Unity-related code, of course .meta does matter. Just think of script execution order… but that was not the use case I described above. So in short: don’t blindly ignore .meta unless you know what you’re doing.

That submodule I was referring to is plain C# which I happen to use in Unity, and therefore I don’t want Unity-specific files like .meta in it. It won’t have any serialized data, no execution order and the like; the .meta files will be “empty” for this submodule and can therefore be regenerated by Unity on the fly without loss.


After some time working with this setup, I can say I’m very happy with it. Since the submodules change less frequently than the main project with its Unity files, the extra effort of updating the submodule references comes at only a small cost.

Not to negate your meta file statement at all. Totally agree with your “don’t blindly ignore meta unless you know what you’re doing” sentiment.

With that said, I much prefer using the ‘DefaultExecutionOrder’ attribute over defining the execution order through the inspector (which is what gets put in the meta file). Especially when dealing with my packages.

It’s technically undocumented… or only lightly documented. I don’t know… it’s one of those weird random things the Unity API docs don’t have a lot of info on. But it works, and I love it.

Here’s a thread discussing it from a few years back.
https://discussions.unity.com/t/701051
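
Usage is just an attribute on the class; the class name and order value below are hypothetical (negative values run before the default order of 0):

using UnityEngine;

// runs before all default-order scripts, no inspector/meta-file entry needed
[DefaultExecutionOrder(-100)]
public class InputHub : MonoBehaviour
{
    void Awake()
    {
        // running earlier means other scripts can safely use this in their own Awake/Start
    }
}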

Side note: I highly dislike managing script execution order via the inspector, since it makes debugging weird issues a pain - it just served as an example.