Arrays and Classes in JavaScript (UnityScript)

Here is a class Card and a 2d array aGrid.

The 2D array aGrid is set with aGrid = new Card[4,4]; it looks like it’s being made into an instance of the Card class and an array at the same time. How does this work?

It looks like new Card is making aGrid into an instance of the Card class. Is it not? If not, what is it doing? Along with that, I know [4,4] is setting how much the array can hold.

If new Card is making it into an instance of the card class, how can aGrid be an instance of the Card class and an array at the same time?

Here is the code:

class Card extends System.Object {
    var isFace:boolean = false;
    var isMatched:boolean = false;
}
 
var aGrid:Card[,];//2d array to keep track of the shuffled, dealt cards
 
var aGrid = new Card[4,4];

The “var aGrid : Card[,]” line declares a variable that can reference a 2-dimensional array typed to hold Card elements, but it starts out null (no array has been created yet).

At this point you can think of the compiler/runtime as storing this (in ASCII art, no specific syntax):

aGrid (of type System.Array, 2D, for Cards) -> null

The new Card[4, 4] expression creates a new array with 16 (4×4) “slots” typed to hold Card references, but each slot is initially null. The statement “var aGrid = new Card[4,4];” (which has a potentially confusing extra var and should just be “aGrid = new Card[4,4];”) creates the array and stores it in your aGrid variable.

Now you have this:

aGrid (of type System.Array, 2D, for Cards) -> memory location 0xf8000001
memory location 0xf8000001 -> System.Array for Cards = {
  [0,0]: null, [0,1]: null, [0,2]: null, [0,3]: null,
  [1,0]: null, [1,1]: null, [1,2]: null, [1,3]: null,
  [2,0]: null, [2,1]: null, [2,2]: null, [2,3]: null,
  [3,0]: null, [3,1]: null, [3,2]: null, [3,3]: null
}
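The same reference-array behavior shows up in other .NET-like languages, so here’s a minimal Java sketch of the idea (Java only has jagged arrays, so new Card[4][4] stands in for Card[4,4], and the GridDemo class name is just illustrative): allocating the array creates the slots, but every slot holds null until a Card is created separately.

```java
// Allocating an array of object references creates the "slots"
// but no Card instances -- every slot starts out null.
class Card {
    boolean isFace = false;
    boolean isMatched = false;
}

public class GridDemo {
    public static void main(String[] args) {
        Card[][] aGrid = new Card[4][4]; // 16 slots, all null

        System.out.println(aGrid[0][0] == null);            // true: no Card exists yet
        System.out.println(aGrid.length * aGrid[0].length); // 16 slots total
    }
}
```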

The code “new Card()” (without square brackets) doesn’t appear anywhere in your source, so no Cards have actually been created yet. If you were to add this:

for (var rowI : int = 0; rowI < 4; ++rowI) {
    for (var colI : int = 0; colI < 4; ++colI) {
        aGrid[rowI, colI] = new Card();
    }
}

Then your memory would look something like this:

aGrid (of type System.Array, 2D, for Cards) -> memory location 0xf8000001
memory location 0xf8000001 -> System.Array for Cards = {
  [0,0]: 0xf8000101, [0,1]: 0xf8000102, [0,2]: 0xf8000103, [0,3]: 0xf8000104,
  [1,0]: 0xf8000105, [1,1]: 0xf8000106, [1,2]: 0xf8000107, [1,3]: 0xf8000108,
  [2,0]: 0xf8000109, [2,1]: 0xf8000110, [2,2]: 0xf8000111, [2,3]: 0xf8000112,
  [3,0]: 0xf8000113, [3,1]: 0xf8000114, [3,2]: 0xf8000115, [3,3]: 0xf8000116
}
memory location 0xf8000101 -> Card { isFace: false, isMatched: false }
memory location 0xf8000102 -> Card { isFace: false, isMatched: false }
memory location 0xf8000103 -> Card { isFace: false, isMatched: false }
...
memory location 0xf8000116 -> Card { isFace: false, isMatched: false }

And you’d have a Card instance in each “slot”.

The fact that you have to create the Card instances independently of the array that holds them may be confusing, because the square-bracket syntax added to the type name (Card[,]) creates built-in .NET arrays (you can learn more about them here: msdn.microsoft.com/en-us/library/9b9dty7d.aspx). Coming from UnityScript (or, if you want to buy into UT’s inaccurate advertising, “JavaScript”), you’re probably more familiar with the simplified UnityScript Array class, covered here: docs.unity3d.com/Documentation/ScriptReference/Array.html. The notable differences between the two are that native arrays can’t change their dimensions (the number of “slots”) once created, and UnityScript Arrays can’t really be made 2-dimensional in a way that works with Unity’s serializer.
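That fixed-size-versus-resizable distinction can be sketched in Java, where the same split exists between native arrays and list collections (the String element type here is just for illustration):

```java
import java.util.ArrayList;
import java.util.List;

public class ResizeDemo {
    public static void main(String[] args) {
        // A native array has a fixed length from the moment it's created...
        String[] fixed = new String[4];
        System.out.println(fixed.length); // 4, and it can never change

        // ...while a list-style collection can grow and shrink at runtime.
        List<String> flexible = new ArrayList<>();
        flexible.add("card");
        flexible.add("card");
        System.out.println(flexible.size()); // 2 after two adds
        flexible.remove(0);
        System.out.println(flexible.size()); // 1 after a removal
    }
}
```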

This is all made more confusing by the fact that the Unity Editor will recognize certain serializable public fields on MonoBehaviours and initialize all the elements for you if they’re currently null. That initialized data is then saved into the scene or prefab for use when the game runs. So Unity performs the new steps for you, but only for serialized fields (those that show up in the Inspector) and only for a limited set of types.

Since Unity has quite a few issues serializing multi-dimensional arrays, the above will only work if you don’t need the data to stick around. If you need those Cards to persist beyond the lifespan of the script instance (instances die and are re-created each time you hit Play in the editor, occasionally when things change in the editor, and again when running the game independently), you can’t use true multi-dimensional arrays. To solve the issue you seem to be having, try something like this:

var aGrid : CardRow[] = new CardRow[4];

class CardRow extends System.Object {
    var cards : Card[] = new Card[4];
}

class Card extends System.Object {
    var isFace : boolean = false;
    var isMatched : boolean = false;
}

Unity should be able to initialize all the data itself, since it knows how to create and serialize an array of 4 CardRows, arrays of 4 Cards each, and booleans. There are other solutions to Unity’s multi-dimensional-array serialization issues here on Answers if this one’s a bit too clumsy for you.
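For what it’s worth, the row-wrapper idea above translates to other languages too. Here’s a hypothetical Java sketch (class names RowGridDemo/CardRow are illustrative): a 1-D array of row objects, each holding its own 1-D array, stands in for a true 2-D array, and indexing goes row-then-column.

```java
// Row-wrapper workaround: a 1-D array of row objects replaces a 2-D array.
class Card {
    boolean isFace = false;
    boolean isMatched = false;
}

class CardRow {
    Card[] cards = new Card[4];

    CardRow() {
        // Fill the row's slots with actual Card instances.
        for (int i = 0; i < cards.length; i++) cards[i] = new Card();
    }
}

public class RowGridDemo {
    public static void main(String[] args) {
        CardRow[] aGrid = new CardRow[4];
        for (int r = 0; r < aGrid.length; r++) aGrid[r] = new CardRow();

        // aGrid[r].cards[c] plays the role of aGrid[r, c] in the original.
        System.out.println(aGrid[2].cards[3].isMatched);        // false: fresh Card
        System.out.println(aGrid.length * aGrid[0].cards.length); // 16 cards total
    }
}
```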