Join us for our 2020.1 beta overview webinar on April 20th at 9:00 am PST (6:00 pm CET)
Our Games Evangelist Ashley Alicea (@ashley_unity) will walk you through the highlights of what’s coming in 2020.1 with a hands-on presentation based on the latest beta. Spaces are limited, so make sure you register soon and add a reminder to your calendar. If you’re interested in what the 2020.1 beta has to offer, you can also check out this blog post.
Features from the following areas will be discussed:
Profiling
Scripting
Artist tools
Graphics tools
Lighting
Editor tooling
Platforms
As with our 2020 roadmap Q&A, we will host a Q&A session here in this thread following the webinar to answer as many of your questions about 2020.1 as possible. The thread will open for questions once the webinar ends and will remain open for at least three days.
Some basic rules for the Q&A:
Don’t bundle multiple unrelated questions into one reply. One question/topic per reply.
Only questions related to the topics of the roadmap session are permitted.
Hi, what are the updates and news concerning Machine Learning / Deep Reinforcement Learning (formerly Barracuda)?
Will more common model architectures be supported, e.g. from TensorFlow or PyTorch? The conversion process for model architectures has been a pain …
Will there be a session in the near future presenting related news?
The short answer is yes. We do not have a firm estimate on a particular release for this, but are working diligently towards adding support for the highest tier of mobile devices.
Our team is constantly working on Barracuda, and we ship new updates almost biweekly. The latest version is 0.6.3, and we are working hard to roll out the 0.7.0 update soon!
Barracuda has supported the .onnx file format since version 0.3.x, and it is the preferred way to import your model into Unity. Import functionality is constantly being improved, but we are still a long way from supporting any arbitrary model. There is no easy answer to this question.
A short summary of the architectures we prioritise when developing Barracuda:
Reinforcement Learning models that are necessary for ML-Agents SDK
GAN/autoencoder architectures for image-to-image translation such as U-Net, SPADE and other style-transfer models
Image classification architectures based on VGG, ResNet, SqueezeNet, MobileNet and YOLO
Deep fully-connected and convolutional architectures that can generate animations and audio.
We are going to work on supporting object-detection architectures such as YOLOv3 soon, but that is pretty challenging.
Which model architecture are you mostly interested in?
Will we one day be able to know what assets from the store are being used in our projects, as is the case for Unity packages? Or should we stick to pen and paper?