Hand detection + Kinect Azure + Unity

I have a project I'm working on where I use an Azure Kinect and Unity.
I assumed there would be no problem tracking just the hands, for example the wrist joint position.
But it turns out it can only get the wrist joint position if the full body is detected.
Is this true, or am I just not well informed on how to use the SDK + Azure Kinect + Unity?
My Azure Kinect is mounted 2.6 m (8.5 ft) above the ground, with the sensors pointing down at a table (D 2 m x W 1.5 m x H 0.6 m / 6.7 ft x 5 ft x 2 ft).
The idea is to detect hands on that table to make it interactable. But from this angle and with this table, body detection is too
jittery and imprecise to use for hand/wrist position detection.
Is there any way I can use the Kinect data to detect multiple hands without needing full-body detection?

I have these assets to use:
Azure Kinect for Unity3D (Camera API + Body Tracking API): LIGHTBUZZ
Azure Kinect Examples for Unity: RF Solutions
OpenCV for Unity: Enox Software

setup picture:

I will be grateful for any help or advice!

I doubt hand tracking from the top is implemented in any camera SDK because of how specific that is, but I see three alternatives. You could use OpenCV for the hand tracking; you can find some documentation here

Or, since the table is flat, it should be possible to detect hand movement using the depth sensor alone. There are also other alternatives on the market that look more suitable for that specific task,
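The depth-only idea can be sketched without OpenCV at all: subtract the live depth frame from an empty-table reference and group the raised pixels into blobs. A minimal NumPy version, assuming millimetre uint16 depth frames (the function name and thresholds are hypothetical):

```python
import numpy as np

def detect_hands(depth, background, min_mm=15, max_mm=250, min_pixels=200):
    """Find hand-sized blobs that rise above an empty-table depth frame.

    The camera looks straight down, so objects on the table have *smaller*
    depth values than the table surface. Returns a list of (row, col)
    centroids, one per blob with at least min_pixels pixels.
    """
    diff = background.astype(np.int32) - depth.astype(np.int32)
    mask = (diff > min_mm) & (diff < max_mm)  # reject noise and tall objects

    # Simple 4-connected component labelling via flood fill.
    labels = np.zeros(mask.shape, dtype=np.int32)
    centroids = []
    next_label = 0
    rows, cols = mask.shape
    for r in range(rows):
        for c in range(cols):
            if mask[r, c] and labels[r, c] == 0:
                next_label += 1
                labels[r, c] = next_label
                stack = [(r, c)]
                pixels = []
                while stack:
                    y, x = stack.pop()
                    pixels.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and mask[ny, nx] and labels[ny, nx] == 0:
                            labels[ny, nx] = next_label
                            stack.append((ny, nx))
                if len(pixels) >= min_pixels:  # discard speckle noise
                    ys, xs = zip(*pixels)
                    centroids.append((sum(ys) / len(ys), sum(xs) / len(xs)))
    return centroids
```

Because the camera and table are both fixed, you only need to capture the background frame once at startup, and the per-frame work is cheap enough to run every frame inside Unity.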

like Leap Motion: Ultraleap Plugin for Unity — Ultraleap for Developers