New Unity WebGL Embedding API White Paper

We are currently working on a new embedding scheme for WebGL. It should make embedding more convenient, namespace all the WebGL code to avoid interference with the rest of the page, and allow multiple pieces of WebGL content to be embedded simultaneously on the same page without using iframes.

Instantiation

The WebGL content instantiation function accepts either a DOM element or a DOM element id. The DOM element is a container for the WebGL content; it can be an ordinary div, may serve as a placeholder prior to instantiation, and can also be created dynamically. Passing an element id should work even if the element with that id has not been parsed yet, i.e. when instantiation is performed in the html header.

After you include UnityLoader.js in your html:

<script src="UnityLoaderFolder/UnityLoader.js"></script>

you can use the UnityLoader object to instantiate your game, for example:

var myGame = UnityLoader.instantiate(gameContainer, "http://mydomain/myfolder/Build/mygame.json");

or

var myGame = UnityLoader.instantiate(gameContainer, "http://mydomain/myfolder/Build/mygame.json", {width: 800, height: 600});

or

var myGame = UnityLoader.instantiate("gameContainerId", "http://mydomain/myfolder/Build/mygame.json", {onProgress: myProgress, Module: {TOTAL_MEMORY: 0x20000000}});

UnityLoader.js will not depend on a specific build and will generally be the same for any WebGL build (meaning it can be shared on a public domain). UnityLoader already contains the default Unity logo and a progress bar, though these can be overridden via additional instantiation parameters. Additional parameters can also be used to override most of the module variables and handlers.

mygame.json will contain all the necessary information to instantiate the build (including parameters and links that were previously specified in the html). The default module parameters can be overridden from the .json, and the .json parameters can in turn be overridden from the embedding html.
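For illustration, a sketch of what such a .json might contain (the field names follow the hash-based example later in this thread; the filenames here are hypothetical):

```json
{
  "TOTAL_MEMORY": 268435456,
  "dataUrl": "mygame.data.unityweb",
  "codeUrl": "mygame.code.unityweb",
  "asmUrl": "mygame.asm.unityweb"
}
```

A Module parameter passed to UnityLoader.instantiate() would then take precedence over the corresponding .json value.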

The minimal setup should look something like this:

<html>
  <head>
    <script src="UnityLoader.js"></script>
    <script>
      var myGame = UnityLoader.instantiate("gameContainerId", "http://mydomain/myfolder/Build/mygame.json");
    </script>
  </head>
  <body>
    <div id="gameContainerId" style="width: 960px; height: 600px; margin: auto"></div>
  </body>
</html>

All the internal game variables will be wrapped inside the loader functions and therefore will not interfere with the rest of the page or other embedded modules.

The loader will provide the overall build download progress value to the onProgress callback (whereas the current loader only monitors the download of the .data file). The server should provide Content-Length headers for the loading progress to be calculated precisely; otherwise the loading progress will be approximated.
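As a sketch of what an onProgress handler might look like (the (game, progress) callback signature and the element id are assumptions, not a documented API):

```javascript
// Converts a 0..1 progress value into a CSS width string, clamping
// out-of-range values (progress may be approximated without Content-Length).
function progressToWidth(progress) {
  var clamped = Math.min(1, Math.max(0, progress));
  return Math.round(clamped * 100) + "%";
}

// Hypothetical onProgress handler wired to a custom progress bar element.
function myProgress(game, progress) {
  var bar = document.getElementById("myProgressBar");
  if (bar) bar.style.width = progressToWidth(progress);
}
```

This is the callback that would be passed as {onProgress: myProgress} in the instantiation examples above.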

Interaction

Interaction with the instantiated game module is performed through the UnityLoader API, for example:

myGame.SetFullscreen(1);

or

myGame.SendMessage("myObject", "myFunction", "foobar");

Distribution

All the WebGL build files (except UnityLoader.js and the .json) will have the same file extension, which should simplify server setup (i.e. when specifying MIME types for IIS). Compressed and uncompressed content will have the same file extension (no more suffixes required) and will be distributed in the same way; the loader will autodetect the compression method used and automatically perform decompression in JavaScript when necessary. The developer will still be able to provide the appropriate Content-Encoding headers in order to speed up decompression of the content. File links will be relative to the .json file location, which should simplify embedding WebGL content from another document location or even from another domain (given that CORS headers are provided).

Hash based distribution (optional)

We are also considering a distribution scheme where file names are generated from a hash of the file content.

Consider the following example of the Build folder on the server:

/Build/2e06b6c4670ab40d15e1dcb62436c9b2.json
/Build/c31a82de01db52817dd87d4f1858ee67.json
/Build/57b5a31d2566cf227c47819eb3e5acfa.unityweb
/Build/88a7f58a1038e96586d84b27c3354d5c.unityweb
/Build/8c9889fd3f9272b942d4868a9c1b094c.unityweb
/Build/accdec8ac01fc3e501da10f0fd8cfaec.unityweb
/Build/c05b5450212180ed3d9960c1a050f87d.unityweb

where /Build/2e06b6c4670ab40d15e1dcb62436c9b2.json is:

{
  "TOTAL_MEMORY": 268435456,
  "dataUrl": "88a7f58a1038e96586d84b27c3354d5c.unityweb",
  "codeUrl": "57b5a31d2566cf227c47819eb3e5acfa.unityweb",
  "asmUrl": "8c9889fd3f9272b942d4868a9c1b094c.unityweb"
}

In this case, after building a new version of the game, the developer can simply copy the new build into the same folder on the server, skipping files whose names are already present. If some files have not changed between builds (for example, the updated build has some assets changed but the code remains the same), then all those unchanged build files remain unchanged on the server and keep the same url, which means they can be served directly from the browser cache, given that the user has already played the previous version of the game. All the previous builds can be simultaneously available from the same server folder, while you only have to provide the appropriate .json link.

Another reason to use hash-based filenames is that you can guarantee (up to the hash collision probability) that urls will not be reused for different content. This is especially important for CDNs and other caching environments, as it guarantees that there will be no version mismatch between the different files of a build (which could previously happen if content from different reused urls got different lifetimes in intermediate caches). It also means that you can set an appropriate Cache-Control header for this content to explicitly prevent its revalidation, which should speed up loading of cached content.

In addition, we are currently working on a new caching mechanism for Unity WebGL, which will cache the downloaded content in the indexedDB along with the server response headers. This caching mechanism will be able to emulate the browser cache functionality without its limitations on cached content lifetime and size (the Data caching build option will then become deprecated, as you will be able to cache all the build files and not just the data, while versioning will be based on the Last-Modified and/or ETag headers provided by the server). Hashed filenames will allow developers to avoid content revalidation (particularly important when using many small asset bundles) and cache the content more reliably, even without any access to the server response headers (i.e. when using public drives and storage).

Feel free to provide any feedback and share additional ideas regarding the future WebGL embedding API.


Looking forward to having Microphone support in WebGL. In the meantime I’ll need to create an HTML5 microphone and interact with it using SendMessage. I need access to the wave data, so I suspect I’ll be passing a lot of data across the data bridge. Is there a way to have a shared array or something? Using SendMessage will be slow when passing data at a 44100 Hz sample rate.

Hello theylovegames.

Please note that this thread is specific to the WebGL embedding API, which is mostly related to the build distribution, downloading, caching and instantiation. So you might want to create a separate thread specific to the microphone support. Answering your question, yes, you may try the following solution which should let you share a typed array between JavaScript and managed code: http://forum.unity3d.com/threads/trying-to-share-float-array-between-c-and-webgl-browser-javascript.394861/#post-2576106

Sounds like awesome changes! Especially interested in the hash based distribution and new caching mechanism. Do you have a timeline when those would be available?

Hello JyriKilpelainen.

There are no specific release dates at the moment; however, most of this functionality has already been implemented. There are still some open questions which might affect the future design, for example the question of IndexedDB cache cleanup. At some point you will have to clean up the cached data in order not to run out of storage space. Currently we are considering two main approaches:

a) Application can explicitly set the maximum indexedDB cache size, in which case the least recently used files will be discarded from the indexedDB cache in case of overflow.

b) At launch time the application can provide a list of all the urls it might potentially use (including asset bundles, streaming assets etc.), in which case all the urls outside of this list can be removed from the indexedDB cache. This approach is more sophisticated, as there might be multiple WebGL applications hosted on the same domain, which would then require separate caches so that they do not clean up each other's data. Alternatively, the url whitelist could be treated as folder specific (and would cause cache cleanup only for urls under a specific subfolder).
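To make approach (a) concrete, the eviction decision could be sketched as a pure function (the entry structure and names are my assumptions, not the actual loader implementation):

```javascript
// Given cached entries [{url, size, lastUsed}] and a byte budget, return
// the urls to evict: keep the most recently used files that fit within
// the budget, discard the least recently used ones.
function selectEvictions(entries, maxBytes) {
  const byRecency = entries.slice().sort((a, b) => b.lastUsed - a.lastUsed);
  const keep = [];
  let total = 0;
  for (const e of byRecency) {
    total += e.size;
    if (total <= maxBytes) keep.push(e.url);
  }
  return entries.filter(e => !keep.includes(e.url)).map(e => e.url);
}
```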

There might also be some other approaches to this which should be considered.

Hi Alex, will this be included in 5.5f1? I made a custom WordPress plugin and, if possible, I would love to know more.

Hi,

This is very good news. Quite excited to get my hands on the new API :)

On that note, is there more recent documentation / other information available or is this all for now? I know this was mentioned at the Unite in LA so I was just wondering if there was further information somewhere.

Thanks, and looking forward to testing this.

This will not be in 5.5, sorry. We are aiming for 5.6 with this which will enter beta in a few weeks.

Hello Foriero.

The new API is targeting Unity 5.6.
Currently the recommended way to embed WebGL content on WordPress and similar websites is to use an iframe. This way the JavaScript environments are naturally isolated, and build content paths are relative to the embedded index.html file. Moreover, the iframe solution should also work fine with future builds using the new API.

Contrary to popular belief, there is nothing wrong with embedding your game in an iframe (as has already been mentioned in other threads). However, in some cases you might want tighter integration with the embedding page. For example:

  • The embedded WebGL content is hosted on another domain.
    When embedding cross-origin content in an iframe, you have to use postMessage for communication, which brings some complications (i.e. the call is asynchronous, you cannot handle exceptions etc.), while embedding without iframes lets you call the parent window's functions from the game directly (and vice versa).

  • You have multiple WebGL games which can be run from the same embedding page on user click.
    In this scenario you would normally create a new iframe each time the user selects a new game, and destroy it afterwards. This reallocates the heap each time the user selects a new game, so the browser memory becomes fragmented, which will eventually cause an out-of-memory error after a few iterations in a single-process 32-bit browser. When embedding content without iframes, you will be able to allocate the heap just once and then reuse it for each selected game without reallocation (provided that this shared heap is large enough).

  • You have multiple WebGL games on the same page, which use different assets, but share the same code (for example, user-created games based on the same build, model viewers etc.).
    When using iframes, you should be able to load the next selected WebGL content in the same iframe without relaunching and recompiling the build, simply by reloading the build data. However, you will not be able to move this iframe within the DOM tree without reloading it (consider the case when the user clicks on a thumbnail in a list for a 3d preview). When embedding without iframes, you can freely move the embedded node within the DOM tree without reloading it.
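The heap-reuse scenario in the second bullet could look roughly like this (Module.buffer is an Emscripten convention; treating it as a supported instantiation parameter here is an assumption):

```javascript
// Allocate one large heap up front and hand the same buffer to every
// selected game, instead of reallocating (and fragmenting) per game.
var TOTAL_MEMORY = 0x20000000; // 512 MB, as in the instantiation examples
var sharedHeap = new ArrayBuffer(TOTAL_MEMORY);

function launchGame(containerId, jsonUrl) {
  return UnityLoader.instantiate(containerId, jsonUrl, {
    Module: { TOTAL_MEMORY: TOTAL_MEMORY, buffer: sharedHeap }
  });
}
```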

Hi Alex,

Yes, I have a solution NOT using an iframe. It works great and is so-called “responsive”, so no matter the device, it always fits nicely into the WordPress presentation. The only thing missing is the possibility to have more WebGL apps on one page, which as I understand will be possible with your new export and a minor change in our WordPress plugin. Once I have it I will push it to the Asset Store so that other developers can use it as well.

Thank you very much and look forward to 5.6

Have a great day, Marek.

@alexsuvorov this is a very interesting approach and I think there are many improvements over the current embedding mechanism.

One thing which bothers me is the *.unityweb file extension. Will this be some proprietary file format? Why not stick to regular JavaScript?

Hello tschera.

Quick answer:
In case of uncompressed build, it will be a “regular JavaScript” file, just with a different file extension.

Detailed answer about the .unityweb extension in general:

  1. In the current version of Unity we use at least 4 different extensions: .js, .data, .mem, .unity3d. And if you use build compression, you get 4 more: .jsgz, .datagz, .memgz, .unity3dgz. This complicates IIS server setup, as by default IIS does not serve files with undefined MIME types. The developer therefore has to add all these new extensions (except .js) to the server configuration, which is a bit inconvenient and might also conflict with existing configuration (especially for the .jsgz extension).

It would therefore make more sense to have only one extension for at least the .data, .mem, .unity3d, .jsgz, .datagz, .memgz and .unity3dgz files, so that the developer can set up a MIME type for this single extension just once, globally for the whole server (it is very unlikely that the .unityweb extension will conflict with any other application running on the server).

  2. Normally, you would generate a compressed JavaScript file rather than regular JavaScript, as generating an uncompressed build only makes sense if your server supports static compression with the desired compression method. And even in that case there are some drawbacks:
  • Intermediate proxy or anti-virus software might strip out the initial Accept-Encoding header, in which case a fully uncompressed build will be served to the user (however, this situation is quite rare).
  • Files statically compressed with brotli can only be served over https, while a compressed build gives you the option of serving it over http as well and decompressing it in JavaScript.

In the case of an uncompressed build, the .unityweb extension on the JavaScript files might bring some inconvenience when opening the file in an editor (no syntax highlighting, no file association). However, using the .js extension for those files might cause issues when downloading the file from the server (if the server decides to use chunked encoding, no Content-Length header is provided, so the download progress will be displayed incorrectly).

  3. Why use the same extension for compressed and uncompressed files?
    The currently used setup gives you the ability to shift between the different approaches using just the server configuration (if you have sufficient access rights to the server configuration, you may optionally add a Content-Encoding: gzip header to the served content and rewrite the request url, in which case it will be decompressed by the browser). Although this approach is universal, it has some disadvantages:
  • It brings significant complications for developers not familiar with server setup when their server configuration is not default (affects both Apache and IIS).
  • In cases where the Content-Encoding header is not set, the loader wastes some time performing an initial request to a non-existing file before it switches to the decompression fallback.

It would make more sense to perform only one request, to a file that necessarily exists, without any redirection or header manipulation on the server side. The loader will automatically detect the compression method of the downloaded file and decompress it if necessary. In practice this means the following: your JavaScript file can be uncompressed, compressed with gzip, or compressed with brotli, depending on the compression method selected at build time. In all those cases the file will have exactly the same .unityweb extension, and the loader will take care of the rest.

Note that you can still speed up decompression on the client side by appending the appropriate Content-Encoding header, but you no longer have to rely on mod_rewrite or URL Rewrite functionality, which significantly simplifies server setup. If you append the appropriate Content-Encoding header, the loader will receive a file already decompressed by the browser. If you do not, the loader will receive a compressed file and will automatically decompress it. In other words, whether the file is received compressed or uncompressed is determined by the server configuration and is not known at build time, which is why a universal extension is used.
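The autodetection could conceptually work like this sketch (not the actual loader code; gzip is identifiable by its 0x1f 0x8b magic bytes, while a raw brotli stream has no magic number, so the real loader would have to rely on some other marker):

```javascript
// Inspect the first bytes of a downloaded .unityweb file. If the server
// sent a Content-Encoding header, the browser has already decompressed
// the data, no magic bytes match, and the file is used as-is.
function detectCompression(bytes) {
  if (bytes.length >= 2 && bytes[0] === 0x1f && bytes[1] === 0x8b) {
    return "gzip"; // decompress in JavaScript before use
  }
  return "none"; // uncompressed, or compression flagged some other way
}
```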

@alexsuvorov thanks for the detailed explanation. So the main reason is easier server setup. But then adding the js build with a script tag won’t be possible anymore. This means we are “forced” to parse and interpret the .unityweb file on the client in JavaScript. I was playing around with adding the JS file via a script tag after I read the following thread, and we also saw a performance gain (especially in Chrome). But if I understand it right, we could just rename the .unityweb file to .js and still use it as a regular js file.
This would give us back the freedom of handing the build as a regular js file to the browser, so the browser could parse and interpret it with its engine (and use all its internal tricks for compiling, parsing, interpreting etc.).

Yes, if you are creating an uncompressed build. And by the way, note that a remote file does not need to have a .js extension in order to be loaded via a script tag, i.e. the following code should work:

<script src="anyfile.anyextension"></script>
<script src="myscript.php?version=1"></script>

I am just not completely sure whether a proper Content-Type response header is required for streamed parsing in any browser; you can test it if you wish.

The script tag is not used by default for the following reasons:

  • You will not be able to monitor the code download progress, which is especially important for builds with large code and small data.
  • It makes namespacing more complicated. You would either have to store meta information about the code externally, or the code would have to send information about itself on load (neither solution is very reliable).
  • You only gain about 1 second in Chrome, and only if your data file downloads faster than the code.
  • With the new setup it will be much easier for the developer to customize the loader for their specific needs, as the loader will be independent of the build (you can adjust it just once and it will work for all your subsequent game updates compiled with the same version of Unity).

Cool, I’ll try that. Setting up the server to send the right Content-Type is not a problem for us.

I mostly agree with your points. But I don’t totally agree with you about the script embedding. I think 1 second is a huge performance gain; it’s a 10% improvement if startup time is about 10 seconds (like ours is). The thing with the data file is different in my opinion, because one can work around that issue by lazily fetching data and resources, but parsing time cannot be improved by us developers. Another example of how “much” 1 second is: the violations shown by Chrome Canary, which flags a violation if a procedure in a requestAnimationFrame takes more than 60ms. So I think 1000ms and a 10% performance gain is significant. And I think many small improvements also add up to great performance gains. But maybe our use case is different, because for us startup time is very important.

Hello tschera.

This is a reasonable concern. I believe we can provide an “undocumented” way to achieve what you want with minimal effort. All you would have to do is add a small prefix to your uncompressed JavaScript file on the server (something like UnityLoader["myUniqueId"]=…), and add an additional parameter to the UnityLoader.instantiate() function on the embedding page (something like {asmId: "myUniqueId"}). Then the loader could “load and parse” the asm.js module via a dynamically created script tag.

In this case you should take care that the file id is unique (this id is required for code namespacing and would otherwise be generated automatically). Also, the loading progress of the asm.js file cannot be determined in this case. This means that if the data loads faster than the code, you will see a 100% complete bar for a while, until the code is fully loaded. And if the data loads slower than the code (by at least a second), then using a script tag won’t bring you any advantage.
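The prefix idea could be sketched as follows (the identifiers are hypothetical, in line with the “undocumented” caveat above):

```javascript
// The served asm.js file would be prefixed so that, when executed via a
// script tag, it registers its module under a unique id, e.g.:
//
//   UnityLoader["myUniqueId"] = function Module() { /* asm.js code */ };
//
// keeping the global namespace clean apart from the UnityLoader object.
var UnityLoader = UnityLoader || {};

function loadAsmViaScriptTag(url, asmId, onReady) {
  var script = document.createElement("script");
  script.src = url;
  script.onload = function () { onReady(UnityLoader[asmId]); };
  document.head.appendChild(script);
}
```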

@alexsuvorov thanks for the explanation. It sounds reasonable :)

“Using minimal effort” is not the most important thing for us. For us it is important that there is an “official” solution which does not break on every update (and which can be automated and integrated into our build pipeline).

I don’t think that 1 second is nothing and negligible. Especially on the web, users are used to fast websites; otherwise they just abandon the page. In 2015 Google presented a statistic showing that 1 second of delay led to 11% fewer page views and a 16% decrease in customer satisfaction. For more details see from minute 4:
https://www.youtube.com/watch?v=jCKZDTtUA2A

I think Unity still does not load (and start) fast on the web platform, so we should squeeze out every bit of performance possible. In my view, wasting 1 second just for convenience is not the best way to go.


Hello stephanwinterberger.

As mentioned above, it is not possible to win this second without any drawbacks. Moreover, you will still be able to load the asm.js module via a script tag if you think that in your specific situation the advantages outweigh the drawbacks. Even assuming that you have extensive experience in server setup and your server does not have any limitations, you will still face the following problems:

  • It is not possible to display the loading progress of a script loaded via a tag.
    First, you can only get an advantage from using a script tag if your compressed data size is smaller than your compressed code size. So let’s assume your data size is half your code size. In the default setup the user will see a loading bar filling evenly for 10 seconds and then stuck at 100% for 1 second. When using a script tag, the user will see a loading bar filling evenly for 5 seconds and then stuck at 100% for 5 seconds. It would be interesting to see updated statistics on users abandoning the page considering this aspect. (Note: the numbers have been chosen approximately, just to demonstrate the idea; i.e. it is not taken into account that the code download speed might increase after the data download has finished, etc.)

  • Unlike the default setup, code loaded via a script tag cannot be reliably cached in the indexedDB.
    This means that on subsequent launches there is always some risk of the script being removed from the browser cache, as well as the risk of it not being cached at all due to size limitations (this mostly applies to Safari). In the default setup, on the other hand, the script will be cached in the indexedDB, which is much more reliable than the browser cache. (Note: technically, with some overhead, you can get the script body and put it into the database even if it was loaded via a script tag, but what you cannot do in this case is revalidate the cached script, i.e. you will not be able to reliably track changes to the script file on the server side.)

You are right, there are situations when a developer can improve loading time by modifying the default setup. The real question is whether that specific modification should become the default. For example, @tschera suggested making this modification an officially supported build option, which is quite reasonable. In other words, this should rather qualify as a “feature” that might get official support after its efficiency is proven on real examples with the updated loader setup (the time comparison mentioned above is based on versions 5.3 and 5.4).

That’s true, the size is a problem, and not only for caching. But that’s another story ;)

This sounds quite good. We don’t have problems with the constraints of the script-tag variant so for us this would help.

But with my previous comment, I wanted to stress that even one second matters a lot, especially on the web. I think it’s dangerous to say “it’s just a second”. If you say this 10 times, then it’s 10 seconds ;) Seconds add up, and every additional second will drive away more users.

I know you cannot compare Unity with an e-commerce website, but users are used to fast websites, so we should try to provide great perceived performance. But if we spend 10 seconds parsing some JavaScript, it is almost impossible to improve perceived performance. Hopefully there are improvements coming! Looking forward to it ;)