Combine Two or More Image Targets Simultaneously to Display a New 3D Object, Not Individually (Unity + Vuforia)

Hi all Unity, Vuforia, VR, and AR experts,

I have asked the same question on the Vuforia developer forum and would love some advice: https://developer.vuforia.com/forum/object-recognition/combine-two-or-more-image-targets-create-third-3d-object

Currently with Vuforia you can easily detect multiple image targets simultaneously and display objects, but the association is one-to-one: one image target displays one object upon detection. The system can track multiple image targets at once, say 2 images, and display 2 separate objects.

What I would like to accomplish is this: when 2 images are displayed at the same time, show a different object than Object 1 (from Image 1) or Object 2 (from Image 2), like so:

if Image 1 is detected, then show 3D Object 1

if Image 2 is detected, then show Object 2

if Image 1 + 2 are both detected (at the same time), then show Object 3
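In other words, the goal is a three-way check rather than two independent one-to-one mappings. A minimal sketch of that logic (hypothetical: the `image1Tracked`/`image2Tracked` flags and the three object references would have to be wired up to Vuforia's tracking callbacks; none of these names come from the Vuforia API):

```csharp
// Hypothetical sketch: image1Tracked / image2Tracked would be set from
// Vuforia's tracking-found / tracking-lost callbacks for each target.
if (image1Tracked && image2Tracked)
{
    object3.SetActive(true);          // combined case: show Object 3
    object1.SetActive(false);         // hide the individual objects
    object2.SetActive(false);
}
else
{
    object3.SetActive(false);
    object1.SetActive(image1Tracked); // show Object 1 only while Image 1 is tracked
    object2.SetActive(image2Tracked); // show Object 2 only while Image 2 is tracked
}
```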

It seems the way it currently works is: when a target is found, display its child, which is the actual object. I thought I could build a list that captures which targets have been found, then check whether all the required image target names exist in that list and, if so, display the combined model (Object 3 in this case); however, my attempt below doesn't quite work.

Does anyone know of a more elegant solution, or have any suggestions? Is this even the right idea? I am not much of a developer, so any help is much appreciated!

private void OnTrackingFound()
{
    Renderer[] rendererComponents = GetComponentsInChildren<Renderer>(true);
    Collider[] colliderComponents = GetComponentsInChildren<Collider>(true);

    // My addition: a list of the image target names that have been found.
    // (Note: this list is local, so it is rebuilt on every call and only
    // ever contains this handler's own target name.)
    List<string> foundComponents = new List<string>();
    foundComponents.Add(mTrackableBehaviour.TrackableName);

    // Enable rendering:
    foreach (Renderer component in rendererComponents)
    {
        component.enabled = true;
    }

    // Enable colliders:
    foreach (Collider component in colliderComponents)
    {
        component.enabled = true;
    }

    Debug.Log("Trackable " + mTrackableBehaviour.TrackableName + " found");

    foreach (string component in foundComponents)
    {
        Debug.Log(component);
    }
}
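For what it's worth, the reason the local list above can't work is that each image target gets its own handler instance, so the list only ever contains that one target's name and is rebuilt on every call. A shared (static) set would let all handlers see each other. This is just a sketch, and the class and method names (`FoundTargetRegistry`, `MarkFound`, and so on) are mine, not Vuforia's:

```csharp
using System.Collections.Generic;

// Hypothetical helper: a set of currently tracked target names,
// shared across all trackable event handler instances.
public static class FoundTargetRegistry
{
    private static readonly HashSet<string> s_foundTargets = new HashSet<string>();

    public static void MarkFound(string targetName) { s_foundTargets.Add(targetName); }
    public static void MarkLost(string targetName)  { s_foundTargets.Remove(targetName); }

    // True when every name in 'required' is currently tracked.
    public static bool AllFound(IEnumerable<string> required)
    {
        foreach (string name in required)
            if (!s_foundTargets.Contains(name))
                return false;
        return true;
    }
}
```

Each handler would call `FoundTargetRegistry.MarkFound(mTrackableBehaviour.TrackableName)` in OnTrackingFound and `MarkLost(...)` in OnTrackingLost, and a controller script could show Object 3 whenever `AllFound(new[] { "Image1", "Image2" })` returns true (the target names here are placeholders for whatever your database uses).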

First of all, that DefaultTrackableEventHandler (the component attached to each image target you create, which contains both the tracking-found and tracking-lost functions) is just a default script provided by Vuforia.

  1. Remove it.
  2. Create your own new script and name it something that suggests it is a tracking behaviour.
  3. In that script, add the Vuforia namespace: “using Vuforia;”
  4. Now implement Vuforia's tracker interface: “ITrackableEventHandler”

Here, we now have our own tracker behaviour script. Think of it as giving you two events: image tracked and image lost. (Let's not use extended tracking for now.) You can do whatever you want in those two events. You can treat the image target object as a dummy object that handles the tracking mechanism for that specific image; there is no need to keep the augmented object as a child of your image target. You can use the image target's tracking results from your script however you need. Here is a sample of how you can do what you asked for.

using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using Vuforia;

public class MTrackableBehaviour : MonoBehaviour, ITrackableEventHandler
{

    protected TrackableBehaviour mTrackableBehaviour;
    public bool isTracked=false;
    public GameObject aug_Model, model_3;


    protected virtual void Awake()
    {
        mTrackableBehaviour = GetComponent<TrackableBehaviour>();
        if (mTrackableBehaviour)
            mTrackableBehaviour.RegisterTrackableEventHandler(this);

        aug_Model.SetActive(false);
    }



    public void OnTrackableStateChanged(
       TrackableBehaviour.Status previousStatus,
       TrackableBehaviour.Status newStatus)
    {

        if (newStatus == TrackableBehaviour.Status.DETECTED ||
            newStatus == TrackableBehaviour.Status.TRACKED ||
            newStatus == TrackableBehaviour.Status.EXTENDED_TRACKED)
        {
            trackedBehaviour();
        }
        else
        {
            // Any non-tracked status (including NOT_FOUND) counts as lost.
            trackLostBehaviour();
        }
    }


    private void trackedBehaviour()
    {
        isTracked = true;

        if (!isBothTargetsTracked())
            aug_Model.SetActive(true);
        else
            model_3.SetActive(true);
    }

    private void trackLostBehaviour()
    {
        isTracked = false;
        aug_Model.SetActive(false);
        model_3.SetActive(false);
    }

    private bool isBothTargetsTracked()
    {
        bool value = true;

        foreach (MTrackableBehaviour m in FindObjectsOfType<MTrackableBehaviour>())
        {
            if (!m.isTracked)
                value = false;
        }
        return value;
    }

}

Add this script to each ImageTarget object. The aug_Model and model_3 references can point anywhere in the scene hierarchy.

ENJOY…!
@weikian ,@iamklarey ,@BioFan ,@w8w

no answers to this?

I’m also looking for the answer.

I am seeking the solution for the same case as well; looking for the answer too.

@ashfaqueck
I have gone through many of the answers and yours looks like the best one. But I have run into some new issues; would you mind helping me figure them out? Thanks in advance!


What I expect:

  1. Put target image A on camera for detection, show 3D object A
  2. Remove target image A from camera and put target image B on camera for detection, show 3D object B
  3. Keep target image B on camera and put back target image A, show 3D object C

But what I have seen on screen:


  1. Put target image A on camera for detection, show 3D object A [OKAY]
  2. Remove target image A from camera and put target image B on camera for detection, show 3D object B [shows 3D objects A and C simultaneously; 3D object B never shows]
  3. Keep target image B on camera and put back target image A, show 3D object C [same as step 2]
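Not the original author, but the symptom looks consistent with two things in the sample script: the EXTENDED_TRACKED case keeps image A's isTracked flag true after A leaves the camera (so when B appears, isBothTargetsTracked() returns true and model_3 is shown while aug_Model for B never is), and nothing ever turns a single-target model off once the combined model appears. A possible fix, as an untested sketch against the script above (RefreshModels is my name, not part of the original): disable extended tracking on the targets and re-evaluate every handler's models on each state change:

```csharp
private void trackedBehaviour()
{
    isTracked = true;
    RefreshModels();
}

private void trackLostBehaviour()
{
    isTracked = false;
    RefreshModels();
}

// Re-evaluate all handlers so the single-target models are hidden the
// moment the combined model becomes visible, and restored when it isn't.
private void RefreshModels()
{
    bool both = isBothTargetsTracked();
    foreach (MTrackableBehaviour m in FindObjectsOfType<MTrackableBehaviour>())
    {
        m.aug_Model.SetActive(!both && m.isTracked);
        m.model_3.SetActive(both);
    }
}
```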