It seems that rendered entities don’t get written into the motion vector map, which means they are not affected by per-object motion blur. I tested with both the built-in and HDRP pipelines (URP does not support per-object motion blur).
Here’s a direct test in HDRP between moving an Entity via its Translation component (red, top) and a GameObject via its Transform (blue, bottom):

GameObject code
using UnityEngine;

// Oscillates the GameObject along the X axis with a sine wave.
public class MoveSine : MonoBehaviour
{
    public float gain = 10;
    public float frequency = 10;

    void Update()
    {
        transform.position = new Vector3(
            Mathf.Sin(Time.time * frequency) * gain, 0, 0);
    }
}
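For reference, the blur on the GameObject depends on its renderer writing motion vectors. A quick sanity check, using the standard UnityEngine.Renderer property, might look like this (Object generation is already the default for a MeshRenderer):

// Make sure the renderer emits per-object motion vectors.
var meshRenderer = GetComponent<MeshRenderer>();
meshRenderer.motionVectorGenerationMode = MotionVectorGenerationMode.Object;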
ECS code
using Unity.Burst;
using Unity.Entities;
using Unity.Jobs;
using Unity.Mathematics;
using Unity.Transforms;
using UnityEngine;
using static Unity.Mathematics.math;

// Oscillates every entity with a Translation component along the X axis,
// mirroring what MoveSine does for the GameObject.
public class MoveSineSystem : JobComponentSystem
{
    [BurstCompile]
    struct MoveSineSystemJob : IJobForEach<Translation>
    {
        public float time;
        public float gain;
        public float frequency;

        public void Execute(ref Translation translation)
        {
            float x = sin(time * frequency) * gain;
            translation.Value = float3(x, translation.Value.y, 0);
        }
    }

    protected override JobHandle OnUpdate(JobHandle inputDeps)
    {
        return new MoveSineSystemJob
        {
            time = Time.time,
            gain = 10,
            frequency = 10
        }.Schedule(this, inputDeps);
    }
}
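One workaround I can think of, while entities don’t write motion vectors themselves, is to render through a companion GameObject whose MeshRenderer does. This is only a sketch, assuming the hybrid CopyTransformToGameObject component from Unity.Transforms (which syncs an entity’s transform onto an attached Transform every frame); MotionVectorProxy and its Attach method are hypothetical names:

using Unity.Entities;
using Unity.Transforms;
using UnityEngine;

// Sketch: drive a proxy GameObject from the entity so the regular
// renderer path produces motion vectors for it.
public static class MotionVectorProxy
{
    public static void Attach(EntityManager manager, Entity entity, GameObject proxy)
    {
        // CopyTransformToGameObject makes the transform-sync system copy the
        // entity's transform onto the GameObject's Transform each frame.
        manager.AddComponent(entity, typeof(CopyTransformToGameObject));
        manager.AddComponentObject(entity, proxy.transform);
    }
}

The obvious downside is that this doubles up on renderers and defeats much of the point of rendering entities directly, so it is only a stopgap.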
Here is another example I posted on the physics discussion thread: a comparison, running with the built-in renderer and the motion vectors debug view, between GameObject-based physics (PhysX) and DOTS physics (Havok):
Am I missing something, or is this simply not possible yet?
