The team has implemented a set of improvements to Live Capture’s integration with Timeline and Sequences. The changes are not yet released but we want to give you a sneak peek of what’s coming.
We’ve prepared the following videos for your review and would appreciate your comments to ensure we’ve correctly acted on your input.
Changes to Take Recorder
New binding system: the binding is more flexible and you no longer need to connect to an actor to review takes.
New feature to lock the take context: you can lock the take in both the Timeline and the Inspector.
A quick button to create a new take track in Timeline: instead of five clicks, a single button adds a take track to the Timeline.
Updates to Timeline
Directly drag & drop take assets from the Project View to Timeline
New clip-in feature: as with animation clips, you can set a clip-in offset on your takes in the Timeline.
Keyframe reduction: much smaller file size but same smooth animation
Live properties auto-restoration: disconnecting will no longer dirty the scene data.
I have not used Unity’s live recording much. I normally use Animation tracks where I have a set of animation clips I drop in, or use Animation tracks to animate properties of objects. E.g., I find an iPhone too shaky for camera movements, so I set up multiple virtual cameras (grabbing the phone’s current position and aligning a virtual camera to it is useful, I think), then I drag the different virtual cameras into a track and blend between them for smooth transitions. To animate characters, I use a combination of “walk” animation clips, hand gesture clips (via override tracks and avatar masks), face expression clips, and live recordings (using an older package I had to create an animation clip). I combine them in animation tracks with overrides, avatar masks, and so on.
If I understand the 3rd video, you can have “shots” which keep track of a number of “takes” you can flip between. Nice! That “ownership” model makes complete sense (takes are scoped by the shot they belong to). Can I create animation tracks in the same way as a take? So I can have a shot own a take or a shot own an animation track?
My understanding at the moment is that, using a Sequence Asset for a Character, if I create a Timeline for doing animation tracks it is attached to (scoped by) the Character. I don’t want that because I end up with way too many things attached to the character. I want the animation track (in a timeline) scoped by the shot (like takes seem to be).
Hence my question is can I create Animation Tracks in the place of takes so I can construct a “take” either using a live recording, or by using animation clips combined with avatar masks etc?
Thanks for the info Akent99!
To answer your questions: yes, you can create animation tracks in the same way as a take under any shot. How to structure the sequence and the shot is up to you.
For your second question, are you asking whether you can switch animation clips under a shot the same way as take tracks? If so, the short answer is yes; I have made a quick demo using the Sequences package.
I couldn’t record audio, so let me know if the video is not clear. The example shows me creating variants of the chicken character with different animations and switching between variants freely according to the shot I am in (or directly in the parent timeline). Besides animation tracks, each variant can also have different avatar masks and sub-models.
Thank you for the video, and sorry I was not clear. I meant without associating the Timeline holding the animation track with each character. I am doing a series and can anticipate having, say, 1,000 animation tracks by the end of it for a single character. I don’t want 1,000 variants of a character in one list. What appealed to me about Takes for a Shot was that (I assume) you only see the takes for that specific shot. Much more manageable! I don’t want to see takes for all the other shots. I like the hierarchy of sequences and timelines - it helps me break things down into manageable groups. But as soon as you have to create variants per shot attached to a character, I think it will eventually become unusable. That is why I was curious to see if I can have animation tracks grouped per shot (not per character). Per shot means I can use the 3 levels of sequence nesting to group things appropriately and keep the numbers under control. (I just had not looked closely at Takes before with live recordings. I was curious to see if I had overlooked something there I could use.)
Oops, apologies for misunderstanding your question. I think the confusion was caused by Sequences and Live Capture using the same terminology. “What appealed to me about the Takes for a Shot was I assume you only see the takes for that specific shot.”
Just to be sure: you are referring to the list of takes shown in the take library in the Inspector panel, which allows you to switch takes quickly according to the shot?
Yes. Having a list of takes only relevant to that shot (if that is what it is showing) is what appeals to me the most. Better control over the list of things that can be seen.
I don’t need nested timelines per character to put animation tracks in (it is slightly more cumbersome, actually), but I definitely don’t want a list of all animation tracks under a single character. (I have got my own setup going fine at the moment, but it feels a pity that I had to reimplement my own crude version of scene assembly to get there. But it works!)
Unfortunately, with the current version of the Take Recorder, takes are not grouped or displayed by shot. No matter which shot you are in, the list displays all takes recorded, not to mention animation clips. They are, however, grouped by “directory”, i.e. the folder on your local drive. I am sorry that we don’t have this feature implemented, but I hear you and will add it to the request list. The current version is still very basic, with lots of improvements pending.
Ah, oh well! Going back to the original questions then from this thread. Sharing feedback in case helpful:
Do these updates help with a problem you have? If so, what problem(s) are being addressed?
Do any of the changes impact your workflow? In what ways?
What other changes/problems do you still have?
Any other feature requests? Feel free to share them with us on our Product Roadmap.
My main concern is about scaling a project. Because I want to create a series, I am going to be reusing the same locations a lot, and the same characters a lot. So over time, I expect to have many (hundreds to thousands of) animation tracks for the main characters. I want to mix live recordings and prerecorded animation clips. But that is because I don’t have a fancy recording studio with camera stabilizers, mocap suits, etc. I use animation clips because they save me time. I use live recordings to add a bit of personality and depth of feeling to key shots. I use virtual cameras to do nice camera pans (not live recordings). I use the targeting features and scripts to augment clips (e.g. “turn head to look at an object and track it”). I frequently live record the top half of a body with an animation clip doing the bottom - I combine them in the same animation track. Then I add Animation Rigging IK overrides so the character can pick something up reliably. I do this by putting lots of things into the 3rd-level sequence (I call it a “Shot” sequence) timeline. It groups things quite nicely - it just means I cannot use Sequence Assets and Sequence Assembly. This is not actually a big problem for me; it just felt “unfortunate”.
Having a flat list of all takes I ever capture is not that useful for me. It would be fine when learning, but I do lots of short video clips. Organization of the project and reducing tedious manual effort is the main benefit of Sequences for me. Having live takes unrelated to animation clips is not that useful to me - I want to combine them easily. It’s too hard to line the hand of a character up with a door knob using a live recording - so I combine different assets on a timeline…
I have shared much of this before by the way, so sorry if this is just a repeat. But here is a screenshot of a sample Unity scene from Episode 1. You can see the range of shot numbers on the left - this was just for one location. I have a separate Unity project per location for performance reasons. You might see at the bottom the “Pointing action” override track - it’s a live recording of poking a finger forward, combined with presets for expressions, standing poses, hands, mouth movement controls, camera zoom, etc.
Here is the final result for that few seconds of scene. Episode 1: Outsider. Extra Ordinary (the motion comic). This degree of complexity is common. Some are double the height where I have to align the animation sequences for different characters across tracks. For example, the above shot only had one character!
A different question. Using the Live Capture app on the iPhone, is it possible to use the iPhone as a 3D mouse? E.g., I would select an object in a scene using my real mouse, then use my iPhone to move (including rotate) the object by holding down a button on the iPhone, with different sensitivity settings. So it would apply movements relative to where the phone was when I touched the button in the app.
I could then create a Cinemachine virtual camera in a scene and position/move it using my phone as a 3D mouse. Or move the characters around. It can be quite painful at times using a mouse on a 2D screen to do fine control positioning. This would of course be useful to anyone, not just live capture usage.
Hmmm. I wonder if VR controllers could do the same thing. I don’t want a headset on, just a 3D mouse.
Apologies for the late reply; I was occupied the last two days. By the way, thanks for providing more details and your real-life problem! In fact, I think what you asked about - grouping animation tracks the same way as the Take Recorder does - is similar to features we are working on. The only addition needed is to group/categorize tracks by sequence structure (shots). I will check with the team to make sure that what we are talking about is in alignment.
On another note, regarding:
“use my iphone to move (including rotate) the object by holding down a button on the iphone, with different sensitivity settings.”
This feature is in our backlog, but we haven’t started development yet. The reason is that we didn’t know whether it was needed (instead, we will start the anchoring project first, which parents the virtual camera to a moving object). Now you have given us a reason to look into it. I don’t want to bombard you with links, but if you ever want to request anything, the productboard is an important resource for us when planning package development. Submitting tickets helps us prioritize features/fixes according to your needs.
To be clear, I don’t know how useful it would be in practice. I was curious to see if I could use the live camera support to experiment and find out. I just know that at times, to get good camera composition, very precise movements are useful. I tend to use the w/s/d/x etc. keys and the mouse to position the editor camera a lot, then use the GameObject menu to align the Cinemachine virtual camera to the current view. But it can be finicky. Frequently I hunt for a good shot.
E.g., in cartoons they talk about how silhouettes are important - you want the focal characters framed well, without confusing backgrounds. Since I tend to use existing assets from the Asset Store (I don’t have the skill to build complete sets!), I have to find good angles in the sets I have, rather than design sets around the shots I want. (Note: I am not very good at it yet!) E.g., I did not get https://episodes.extra-ordinary.tv/episodes/ep1/ep1-20-330 quite right - to make the character stand out better, I should have had more dark background all around the head. So I have to get the character position correct, and the camera position correct, and it’s often “centimeter” precision that is required. Reviewing it, I would like to get the camera down a touch lower and the camera angle a touch higher, so the head would be silhouetted against the dark trees above the distant wall.
An example shot I like is https://episodes.extra-ordinary.tv/episodes/ep1/ep1-20-230 - the focus pans from one character to the other. The lady is short, but I wanted her to look dominant, with the final camera angle putting her closer to the camera, etc. I don’t want to record a camera held in my hand along the arc - I want to set keyframes (the start and end) and let Unity do the smooth transition.
I also tried to use my Quest 2 VR hand controllers, but they seem to shut off as soon as I take the headset off. I don’t want to wear a VR headset - I just want a 3D “mouse” while looking at my monitor. Maybe I need a mannequin to put the headset on…
I appreciate the updates and the optimization to keyframing for reduced file size. These workflow enhancements really do add up.
It was all great!
I have only encountered one possible request: the ability to integrate scripting behavior within Timeline or Sequences. For example, if I desired to trigger an object instance or activate an API for manipulating a mesh - would this be possible within the scope of Timeline and Sequences?
Of course, scripting behaviors may occur during runtime, but what if the timeline could somehow reverse the action of a script, like an undo, when seeking through the timeline? This would help drive indirect and reusable animations without keyframes and still have the “lookdev” benefit for scripts. I’ve tried to get scripts running in Edit mode, but it gets quite messy when doing takes. I’m not certain how valuable or beneficial this would be across the entire platform and for large teams. It could introduce a programming/scripting role for animations.
I just watched the two episodes; you are right that the camera movement for the animations needs to be very subtle. The virtual camera itself might not always be suitable for your use, but your suggestion on the take structure per shot is what we are also interested in hearing about. The problem you mentioned with the sequence variant is a limitation we have. We can discuss further just what you “need” so we can better assess potential changes.
Thanks for the feedback! Could you provide a more specific example of how you would add a script within a timeline/sequence? Do you mean to have the timeline or sequence trigger instances (e.g., at frame x, manipulate mesh x)?
And do you need it for both runtime and Edit mode?
In case it’s useful, one example of something like this: I wanted to type in text and get mouth shapes out of it, so the character looks like they are talking. I wanted to scrub forward and back through the timeline and see the exact mouth shapes, to get all the timing right. My approach was to create a custom Timeline track. I added a component to my characters to set the mouth positions (I simplified and used vowel sounds only - no consonants like “m” or “p” - because my characters already had those blend shapes). I type in text and it does a REALLY crude mapping to visemes (it simply drops all the consonants!). In the custom track’s ProcessFrame() function, it works out the mouth position to use each frame and merges that into the blend shapes for the rest of the body.
The code is at https://gist.github.com/alankent/b9e7af795f7f3a614de46da3576e6f58 - it’s not pretty, but it does the job. You can skip over the first two classes - the Editor/ and RunTime/ classes are the ones where I create the custom track. There are lots of blogs and videos on how to create custom tracks.
This is not useful for physics-driven events (shooting a ball that is meant to bounce off objects, etc.), but if you can compute the value per frame, a custom track might help.
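To make the “compute the value per frame” idea concrete, here is a tiny, purely illustrative sketch (in Python for brevity - the real version linked above lives in a Unity custom track’s ProcessFrame() callback). The vowel-to-viseme table and the function names are invented for this example; they are not from the actual gist:

```python
# Crude text-to-viseme mapping: consonants are simply dropped,
# as described above -- only vowel blend shapes are assumed to exist.
VOWEL_TO_VISEME = {"a": "AA", "e": "EH", "i": "IY", "o": "OW", "u": "UW"}

def text_to_visemes(text):
    """Map typed dialogue to a sequence of viseme names (vowels only)."""
    return [VOWEL_TO_VISEME[c] for c in text.lower() if c in VOWEL_TO_VISEME]

def viseme_at(visemes, clip_time, clip_duration):
    """Pick the viseme for the current frame time.

    This is the per-frame computation a custom track would perform:
    given the clip-local time, return the mouth shape to blend in.
    """
    if not visemes:
        return None
    i = min(int(clip_time / clip_duration * len(visemes)), len(visemes) - 1)
    return visemes[i]
```

A custom track behaviour would call something like viseme_at() each frame with the clip-local time and feed the result into the character’s blend shape weights, which is why scrubbing the timeline shows the correct mouth shape at every frame.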
@akent99’s suggestion for custom tracks is probably the best option for some situations I have in mind.
Hypothetical Example 1:
To provide a specific example: let’s say I have an underwater scene, or a forest where animals roam. It would be far more efficient for me to have the animals driven by simulated behavior via a herding runtime script. However, it would be impossible for me to have a timeline-animated character walk or swim around the animals without knowing how to predict their path. Perhaps a given “seed” value could be tied to the timeline track, in order to keep a predictable record of varying outcomes. Of course, a workaround could be designing object/character avoidance, but that is just one example.
The changed values that the script applies to the herd of animals would somehow be buffered/cached, allowing timeline scrubbing back and forth for so-called “real time + timeline” scripting within Edit mode.
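The “seed” idea in this example can be sketched independently of Unity. Here is a minimal, hypothetical illustration (in Python; all names invented): if the simulation is re-stepped from frame 0 with the same seed on every evaluation, scrubbing to any frame is deterministic even without caching - caching frame states would then just be a speed optimization on top:

```python
import random

def herd_positions_at(seed, frame, dt=1.0 / 30):
    """Deterministically re-run a toy 1-D 'herd' simulation up to `frame`.

    Seeding the RNG identically on every call means scrubbing the
    timeline to any frame always reproduces the same positions, at the
    cost of re-stepping from frame 0 each time.
    """
    rng = random.Random(seed)
    positions = [float(i) for i in range(5)]  # five toy "animals"
    for _ in range(frame):
        # each animal takes a small random step every frame
        positions = [p + (rng.random() - 0.5) * dt for p in positions]
    return positions

# Scrubbing back and forth reproduces identical results:
assert herd_positions_at(42, 100) == herd_positions_at(42, 100)
```

Changing the seed gives a different (but equally repeatable) take, which matches the “predictable record of varying outcomes” idea above.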
Hypothetical Example 2:
Another could be a space battle between ships. Would I want to animate every ship on a Bézier path, or could I take advantage of an AI script? Perhaps the timeline could cache the entire first playthrough in runtime and store the runtime “replays” as a sequenced (keyframed) result, without the need to rerun the script, while still being able to scrub the resulting animation.
But alas, perhaps this is the role of custom Timeline tracks. I may have to reassess custom tracks as a viable option for such behavior.
Hi Starpaq2, thanks for the example! The request is much clearer now. I have added it to our product board, but it also depends on the Timeline team and their planning.