As AI technology continues to play a significant role in our society, we understand the importance of maintaining transparency, fairness, and accountability in its development and deployment. We are dedicated to adhering to government regulations worldwide and ensuring that our AI solutions are aligned with ethical standards.
In our pursuit of responsible AI usage, we have recently revised our guiding principles for ethical AI. We extend an invitation to our community of creators to join us in promoting the responsible and safe utilization of AI within the Unity community. Together, we can foster an environment where AI is used productively while upholding fairness and integrity.
You can learn more in our blog post. BUT we want to hear from you! Please share your thoughts, concerns, suggestions, and more with us below!
You’re focusing on something irrelevant. They said “unauthorized disclosure”, so there’s no point in opposing that: the opposite would mean no data given to Unity to build the tools that would eventually trickle down to us. But I get that hiding the sources could let them use whatever sources they want. So what I think Unity should do is disclose the category of the sources rather than each source individually. That would protect the sources while assuring developers that the data is not taken from, say, a black market, although that’s highly unlikely, as the data is probably from games and videos…
Am I? They are painting disclosure as a bad thing, therefore when they don’t disclose anything it’s somehow a good thing.
If they had gone the extra mile of making sure they only use data with no ethical issues and no potential for litigation against them, they would have been tooting their own horn nonstop.
No. What we’re saying there is: we will be treating all the data we use for training AI - which may include, for example, data that you have chosen to share with us - with appropriate care. We will keep it secure, we will not use it for things you didn’t agree to when you shared it with us, etc.
I agree that we could be more explicit about our approach to sources more generally. I discussed it with Legal and here is the position we plan to incorporate more clearly into the next iteration of the AI Guiding Principles:
We will uphold the rights of creators in any datasets that we construct ourselves, and will continually seek to work with partners who also share our belief in the importance of respecting the rights of creators.
That blog post is probably the most corporate nothing statement I have ever read in my life. I get it, you have lawyers to answer to, so you can’t actually say anything that’s binding in any way, but holy hell, then maybe don’t say anything at all?
There’s a lot of ethical concerns about AI, that blog post mentions none of them directly. There’s a lot of concrete things you could do, that blog post mentions none of them directly. So it spends a couple of hundred words saying nothing. When BP writes a post like that, I assume that there’s just been an oil leak somewhere, so this mostly makes me wonder what bad thing you have done, rather than it instilling any kind of confidence.
If you have any courage left as a company, and are allowed to speak without double-checking with legal every third word, questions you could answer are:
What’s your stance on using data sets where artists might not have opted in to the use of their art?
(The blog post says “appropriate data sets” and “open and healthy expression”, but that doesn’t mean anything, since “appropriate” is completely arbitrary.)
Will you let your users know if you start using (or are using) AI services to replace actual staff? Things like bug classification and customer service or whatnot.
The asset store will probably (if it hasn’t already) be drowned in AI-generated texture packs etc. What’s your stance on that? Do those assets have to be marked as AI-generated?
If you sell us any AI services, do you make any kinds of guarantees about the copyright status of those services? As in, if say Unity creates some AI asset production tools and sells those, will you guarantee that the training set isn’t possibly tainted in a way that a future court case might decide that all assets made by those tools are copyrighted by a third party?
As creators, it’s our collective responsibility to demand a clear answer from Unity about the source of the datasets before even considering using any of these AI tools.
For example, Adobe has been using user-uploaded content from Adobe Stock to train Adobe Firefly without explicit creator consent, with no ability to opt out, and with no royalties paid to the creators whose images were used for training. Rightfully, this led to massive controversy. So what in the FAQ right now guarantees that Unity Muse isn’t intending to do the same?
And to be clear, this is not only about protecting the content made by Unity creators: existing popular datasets that were scraped from the internet at large without explicit consent or usage rights are profoundly unethical, if not potentially downright illegal (laws haven’t yet adapted to AI, but regulations are evolving quickly). Basic solidarity with artists from all fields should be a given for digital creators like gamedevs.
Not to mention both gamers and critics are already frowning heavily (and rightfully) upon games that are using unethical AI datasets — remember High on Life? Myst?
Unity has a responsibility to clearly define whatever “appropriate datasets” is supposed to mean.
“Appropriate” is meaningless legal/comms mumbo-jumbo. It does not mean ethical.
If Unity reps are ready to commit to it meaning “legally obtained from compensated creators who gave explicit consent”, then by all means, Muse and Sentis are tremendously exciting news.
But if it means “technically legal so far, at least none of the other big corps stealing content got sued yet”… then these tools are profoundly disrespectful of creators of all disciplines, and run an incredibly high risk for their users (us!) of copyright infringement pollution in our games as the law adapts over the next few years.
The asset store is the only real candidate I can think of for where they could get that training data. I can imagine there’s something from a longstanding Weta R&D project or the like, but the asset store seems far more likely when they say things like “data that you have chosen to share with us”.
Edit: Looking through the asset store provider agreement, I can’t see anything that would constitute permission for this use, so I guess not?
The words used to describe the AI Muse feature are sufficiently nebulous and seemingly deliberately vague that I’d guess they’re looking at user code and project structures.
In this “correction” of Unity’s stated values, there’s a nasty implication: that the AI solutions you’ve already implemented in Unity do not adhere to these stated values. This post was made two weeks ago, before the announcement of Sentis and Muse. The updated AI Guiding Principles were conveniently NOT updated before this announcement.
“Will continually seek to work with” implies that you do not require lawful or ethical datasets. You’ll just give it the good ol’ college try.
The fact that you’re checking with lawyers to give an answer on this… is telling.
Something can be legal and also completely unethical.
But I have my sincere doubts on the legality of any of this. Navigating Copyright laws across the world is tedious in the best of times, and you (Unity) just decided to jump headfirst into the dubious realm of content generation trained on assets you literally have no license to use. Fair Use MIGHT apply in the US, but it literally doesn’t exist in Europe.
If you are serious about ethics and accountability, then transparency must follow as well, or this blog post is just PR posturing.
You mention respecting creators’ rights; prove it, and disclose your models and training datasets.
No current genAI is ethical at all, and without transparency from Unity we can only assume that yours is not ethical either.
Instead of vague corporate words, tell us the actual actions you’re taking to ensure that no copyrighted data was included in the datasets you’ve used (which I doubt, because some of the output looks awfully similar to Stable Diffusion, whose training set contains hundreds of millions of copyrighted images as well as CSAM), and explain to us exactly how you’re going to protect your users’ data from being used in the future.
Also, what is the point of having genAI “tools” in a video game production pipeline when you cannot protect ML output and most likely infringe others’ rights? What studio will build an IP on assets that are worthless and fall into the public domain the moment they are generated? You will give a lot of studio legal departments a huge headache, as they’ll have to make sure their employees and contractors do NOT use these tools, so as not to create trouble for the brand and its people.
Lastly, using accessibility as a PR move is a really cheap shot. The whole blog post basically uses a lot of words to say nothing. Peak corporate.
I’m not pleased with Unity’s decision to jump on the latest unethical (and likely illegal) tech fad. You’re not making your product better; you’re hurting creators.
This Atlas “3D Asset Creator” is not actually creating 3D assets but downloading them, and yet it is being promoted with the taglines:
“Build confidently with Verified Solutions. Find professional solutions that have undergone enhanced vetting from creators who are committed to providing high-quality solutions, service, and long term support.”
At best, this asset is “just” misleading customers.
This shouldn’t pass the sniff test, especially during a high-profile marketing push featuring only 10 assets.
Imagine finding out post launch that your “AI” generated main character is actually from some asset store and it turns out the uploader actually did not create the asset themselves but stole it from somebody else.
I’m all for a well done AI toolchain, but this is a problem.