Asset Serialization: Mixed vs Force Text

I had originally posted my question here but worry that it was immediately swallowed up by all the other questions asked in that forum.

If our team is currently waist-deep in our project, would there be any benefit to converting the entire project from Mixed asset serialization mode to Force Text in Unity Pro?

Text files would seem to be CONSIDERABLY smaller to transmit and store on remote repositories than their binary counterparts, but the time Unity took to convert and re-import all of the files and assets on my local machine alone seemed very prohibitive.

Text files would also benefit those of us who like to see what was modified in a scene/prefab/animation by our less-experienced Unity developers. There appears to be surprisingly LITTLE information available on the unity3d.com site regarding this topic, though, so I'd like to ask for help, feedback, tutorials, FAQs… Even the available tags don't seem to have any reference to Asset Serialization, other than “project settings”. :)

I guess I owe a follow-up to my original question, which I posted over a year ago. I prefer a kind of postmortem style (good and bad points).

The Good:

  • Converting to Force Text was definitely worth it. The time it saved us in the long run paid off a hundredfold, even though it took 7 hours per developer to reimport all the assets on their machines after the conversion.
  • Programmers and artists would occasionally move files around in Finder or SVN without their .meta file counterparts. The moved files would get new .meta files generated for them (with different GUIDs), which meant that every prefab, scene, and animation that relied on them would break. Since the conversion, we can inspect a broken file, find the GUID it used to reference, and locate that GUID in an earlier revision of the corresponding .meta file. Restoring the original GUID immediately fixes all the broken references.
  • Interns and new developers would scroll around a scene without actually modifying anything in it. That simple act would mark the scene as modified. When they committed the scene to SVN, it used to mean pushing the entire binary. Now the commits are in the low bytes or kilobytes for text differences, not megabytes.
  • Multiple developers and programmers could work on the same scene or prefab, and when an SVN conflict occurred, we could see who made the most significant changes. That let us know whether it was easier for someone who changed a few buttons around a screen to redo his/her work, or for someone who added significant portions to redo theirs. And in several cases, the files merged without any conflicts at all, when two people changed positions and scales of various items in different portions of those files.
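The GUID-repair trick described above can be sketched as a quick grep over the project's .meta files. (The GUID, folder, and file names here are made up for illustration; in practice the GUID comes from the broken reference inside the scene/prefab text, and you would search an Assets/ folder restored from an earlier SVN revision.)

```shell
# Stand-in for an Assets/ folder restored from an earlier revision.
mkdir -p DemoAssets
printf 'fileFormatVersion: 2\nguid: abc123def456\n' > DemoAssets/hero.png.meta

# GUID copied out of the broken prefab/scene text:
BROKEN_GUID="abc123def456"

# Find the .meta file that still carries that GUID; restoring that .meta
# (or copying its guid line into the newly generated one) fixes every
# reference to the asset at once.
grep -rl "guid: $BROKEN_GUID" --include='*.meta' DemoAssets/
```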

The Bad:

  • Unity would run out of memory and crash multiple times during the conversion/reimport process.
  • Prefabs would get “unhooked” from their components, animations, and links, which required us to reimport individual prefabs a second time.
  • 3rd-party tools like Toolkit2D, SmoothMoves, and RageTools seemed to have more problems on reimport than Unity-native assets (they got “unhooked” much more frequently).

What we have learned to do for all our future projects going forward:

  • Create a template project. It should have Force Text enabled, your company’s name, default sound settings, common Tags and Layers, cameras, 3rd party tools that you use across projects, etc. Once created, save this entire template into SVN somewhere for future use.
  • After the project has been converted from binary to text, and everyone else pulls it down, make sure each developer deletes their Library/ and Temp/ folders. This significantly speeds up the reimport: the entire project needs to be reimported anyway, and Unity doesn't have to try to reconcile the new files against the old cache.
  • Reimport files S-L-O-W-L-Y, a few dozen at a time, from SVN. Restoring too many files at once makes Unity run out of memory and crash.
  • Restore assets before restoring the 3rd-party tools that create/manage them. Toolkit2D would reimport images as they were brought back in, while Unity was simultaneously trying to reimport the same images for its own internal cache, which increased reimport time significantly (4x and higher). When we deleted all the 3rd-party tools, reimported just the assets, and then imported the 3rd-party tools last, the entire project reimport took much less time.
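The Library/ and Temp/ cleanup step can be sketched like this. (Shown against a throwaway DemoProject folder so it's safe to run anywhere; in a real project you would run the rm from the project root, with the Unity editor closed.)

```shell
# Throwaway stand-in for a freshly pulled, just-converted project.
mkdir -p DemoProject/Library DemoProject/Temp DemoProject/Assets

# Wipe Unity's caches so the editor rebuilds them from scratch instead
# of trying to reconcile old binary-era entries with the new text assets.
rm -rf DemoProject/Library DemoProject/Temp

# Only the versioned content (Assets/, settings) remains.
ls DemoProject
```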

Having been using “force text” since it came out in our big project here, I can safely say it isn't much better than having binary files.

The files do get bigger in the working directory, obviously, but the Git repository gets smaller and faster. Since it's dealing with text, commits store true diffs, and text compresses much better.
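For Git specifically, it can also help to tell Git explicitly which serialized files are text, so diffing and delta compression behave as described. A minimal sketch of a `.gitattributes`; the exact extension list is an assumption and depends on your project:

```
# Unity force-text assets: diff and store as text
*.unity  text
*.prefab text
*.asset  text
*.meta   text

# Genuinely binary assets
*.png binary
*.fbx binary
```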

Reimporting / library rebuilding isn't really an issue with “force text” specifically; it's an issue with Unity and how it works in general. I get library reimports all the time when dealing with checkouts. In cases such as yours, it's just a migration issue, and a one-time one at that. Hardly a reason not to do it. You could even let it run overnight and it would cost you no working time at all.

Contrary to what many imagine, merging isn't a benefit. At least not currently. You still can't merge scenes. Not even by hand, unless you know how all the indexing works inside the crazy YAML format it generates and want to go nuts on it. I never did. Even then, plugins such as UniMerge might still be needed to make true merging work.

Sometimes you can see what was modified in the history, if the change is small enough. This is a plus. In theory, a merge may even work sometimes, when the changes are small enough on both ends. You can't rely on it, though, and I don't recall it ever actually working out.

I believe the biggest advantage is being able to at least have something in the diff. When the change is small enough, it's very clear what has been done. Otherwise, at least you know a lot has been done there.
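As an illustration of what such a diff looks like (simplified and with made-up object IDs and values, not a dump from a real scene), a small tweak to an object's position in a force-text scene shows up as roughly:

```
 --- !u!4 &400002
 Transform:
-  m_LocalPosition: {x: 0, y: 1, z: 0}
+  m_LocalPosition: {x: 0, y: 1.5, z: 0}
```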

I just found an annoying bug with “force text”, but it's a minor one and should be fixed soon: Unity fails to convert a large binary file to text, so the file stays binary and we get error messages about it in the console.

The big point here is: versioning binary files provides not a single reasonable advantage.

I would think that most of the benefits come from the transparency of changes made to assets, plus being able to merge your changes with those made by others while working on the same objects simultaneously. I'd love to use the text format with a few of the other guys I'm working with right now, but we're just starting out and aren't shelling out for Pro licenses yet, as it seems you need a Pro license to use the text-mode serialization feature.

Not having experience with the text-mode conversion, I would say that you and anyone else just now beginning to utilize it are going to be the pioneers in best practices. It really depends on how long “long” is: an hour or two might suck, but if it makes your life easier from that point on, then I'd say it's worth it.