# Strange String behavior explanation wanted

So I was just stuck getting numerous “Array index out of bounds” errors, and the error was rather mysterious to me…

Here is what I was doing (simplified example):

```
var value = "0";
var image = Digits[ parseInt( value[0] ) ];
```

Doing a Debug.Log on value[0] gave me a result of “0”… as you would expect…
But parseInt( value[0] ) returned a result of 48.

I am going to assume here that 48 is the ASCII value of the 0 digit, so I understand where the 48 comes from. What I do not understand is why parseInt would return that value… Same with int.Parse().
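(For readers following along: a minimal Java sketch of the same char-to-int widening — Java is used here only as an analogue for the C#/.NET behavior underneath UnityScript, not as the original code.)

```java
public class CharWidening {
    public static void main(String[] args) {
        String value = "0";
        char c = value.charAt(0);       // the character '0'
        int widened = c;                // implicit char -> int widening: 48
        int parsed = Integer.parseInt(String.valueOf(c)); // parsing the text: 0
        System.out.println(widened + " vs " + parsed);    // prints "48 vs 0"
    }
}
```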

By doing this:

```
var value = "0";
var digit : String = value[0];
Debug.Log( digit + " becomes " + int.Parse(digit) );
```

…resulted in “0 becomes 48”

Whereas this:

```
var value = "0";
var digit : char = value[0];
Debug.Log( "Result: " + int.Parse(digit) );
```

…resulted in an error message saying that no version of int.Parse accepts a ‘char’ data type. I found this particularly interesting because it means the int parsing functions take ONLY String data types, yet when I pass them a string with a value of “0” they return an int with the value of 48.

I finally managed to get the correct value returned by doing this:

```
var value = "0";
var digit : String = value[0] + "";
Debug.Log( "Result: " + int.Parse(digit) );
```

So this is the odd behavior I was asking to be explained…
Why does:
`parseInt("0")` give a value of 48 when
`parseInt("0" + "")` gives a value of 0 ???

Your question is wrong. It’s not:

Why does:
`parseInt("0")` give a value of 48 when
`parseInt("0" + "")` gives a value of 0 ???

It’s:

Why does:
`parseInt('0')` give a value of 48 when
`parseInt("0" + "")` gives a value of 0 ???

The answer to this is really simple - ‘0’ is a char, not a string. Why does this matter? Well, because Int32.Parse() only accepts a string as input - so the compiler throws an error and you fix this silly mistake. Or not, as you’re running US (UnityScript) [probably not in strict mode]… so the compiler guesses at what you wanted to do.

So now you say that the logical thing to do would be to cast the char to a string, right?

Sadly no… that throws an error as that operator has not been implemented.

Okay you say, lets throw an error then, this is a waste of time!

No, says the compiler! I CAN convert the char to its integer representation: (int)‘0’ = 48!

How does that help?

I can then convert that int to a string!

How?

NFI - it’s not legit as far as C# is concerned; I’m assuming some internal US rule says ints can be magically converted to strings.

WTF?

Exactly: FUBAR.

And that, ladies and gentlemen is why we play with strongly typed languages.

Edit: Okay fine, you want a solution now I suppose?

Try:

```
parseInt( value.Substring(0,1) );
//or
parseInt( value[0].ToString() );
parseInt( value[0] + "" );
//and for laughs
value[0] - '0'; //Or for US IIRC: value[0] - "0"[0];
```
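The same fixes translate directly to other .NET-style languages. Here is a Java analogue sketch (Java only for illustration — `substring` and `charAt` are Java’s names, not UnityScript’s):

```java
public class DigitExtraction {
    public static void main(String[] args) {
        String value = "0";
        int a = Integer.parseInt(value.substring(0, 1));               // take a 1-char String
        int b = Integer.parseInt(Character.toString(value.charAt(0))); // char -> String explicitly
        int c = Integer.parseInt(value.charAt(0) + "");                // concatenation forces a String
        int d = value.charAt(0) - '0';                                 // arithmetic on char codes
        System.out.println(a + " " + b + " " + c + " " + d);           // prints "0 0 0 0"
    }
}
```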

You are way too clever for me… LOL… I got lost somewhere down the middle…

You are saying I am using parseInt(‘0’), but to that I ask: how does a defined type (not an inferred type, a defined type!) automatically change types of its own accord:

```
var digit : String = value[0];
```

You are saying that this code changes digit’s type from String to char??? I’m having a hard time believing that…
(Just for interest sake, I do have the downcast, strict and implicit pragmas on. In case that might make a difference)

Oh, rereading your post, let me just highlight again that I do have pragma strict on, yes. This is why I said that when I tried to pass a char to int.Parse it gave me a compiler error and I HAD to give it a String before I could run the code.

Basically, a summary of what you said is this:

• I passed a char
• Because char is not supported, instead of throwing an error, it handles it by returning the char’s ASCII value

All of that I can understand and I would have been very happy with that answer if not for two inconsistencies:

• First, I typecast the variable to String manually and did not rely on automatic type casting (pragma strict as standard with all my code, btw)
• When I tried to pass it a char it threw a compiler error

So my question is twofold:
How did a String become a char?
How was I able to pass it a char, within a String type, to bypass the compile time error and ultimately end up passing it a char value?

That I think is the main question… How did:

```
var digit : String = value[0];
```

…become a char data type???

I think your answer would be 100% relevant once this mystery can be explained.
Again, I can’t run the game when I try to pass it a char, yet when I pass it a String it becomes a char at runtime…? Possible? 0.o

The only thing I can think of is that a normal string ends with \0 while fetching a char from a string does not return this terminator, somehow leaving the string open-ended… The lack of this \0 confuses parseInt into thinking the single-character String is actually a char data type… It’s the only explanation that makes sense to me right now…

[edit:] or vice versa. The extracted char adds a \0 which normal Strings do not have. I forget which is which. I remember in my old Pascal days I never bothered with \0 and was first introduced to it when learning C. I know there are two different kinds of strings, where one ends with the null character and the other does not.

Read something along those lines when looking at Obj-C. They called it “the old, C-style, null-terminated strings”. As opposed to what, I don’t remember… It’s been such a long time since I’ve done such low-level string manipulation.

Anyways… I am wondering if this might be the reason for my strange experience… possibly?

The code you showed was:

`Digits[ parseInt( value[0] ) ];`

All a string is is an array of characters [and a ton of goodies on top] - so, as you would expect from an array, it returns a character when you access it like an array. In short:

```
var s1 = "1";
var s2 = new char[] {'1'};

s1[0] == '1';  //true
s2[0] == '1';  //true
s1[0] == s2[0];  //true
```

If you haven’t guessed already, single quotes are the syntax for a char in C#.
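The same indexing behavior can be reproduced in Java, where `String.charAt` likewise returns a char (a Java sketch for illustration, not the original C#):

```java
public class CharIndexing {
    public static void main(String[] args) {
        String s1 = "1";
        char[] s2 = { '1' };
        System.out.println(s1.charAt(0) == '1');   // true: indexing a String yields a char
        System.out.println(s2[0] == '1');          // true
        System.out.println(s1.charAt(0) == s2[0]); // true: same char value either way
    }
}
```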

My only guess is that because you never explicitly said that s[0] was a char, and because the function could not take a char, the US compiler thought it was allowed to try and make it work.

I started mucking around with US - I discovered that:

```
int.Parse(1);
```

Failed to work [yay] but that:

```
parseInt(1);
```

Did work [boo!].

I’m looking for parseInt documentation.

Edit: Can’t see anything saying this is expected or good behavior… I’d put it down to [unintentionally] using undocumented US functionality.

I’d be interested in seeing that parseInt doc when you get it, please

Just a little note here, you do realize that I am working in JavaScript, not c#, right?
As I do code in both JS and C#, I am aware of the ’ ’ and " " difference, yes… but I just thought I’d highlight that little fact, just in case.

Again, I get where you are coming from… You quoted the first example I gave, the one where I specifically mentioned it was an example, and from that example, yes I have no foot to stand on and your answer is 100% correct and I am a total, incontestable n00b… but the examples after that…

At first I did that and I got the error, so I thought as you did, “maybe it becomes a char”, so I then tried other methods of doing this, and it was THEN that I started casting the var as a String and got the same results… That was why I made this post. If it was simply a case of me making such a silly mistake, heck, that is like forgetting to put a ; at the end of a command… It’s commonplace… you fix it, you move on…

…but when I declared the variable as a String… then the results were no longer as I expected…

I am not disputing your statement that I passed it an int and that that is the issue. I am saying to you this:
This is my function that works:

```
Value = new SRSizeableDigit[ cur_value.length ];
for (var i = Value.length - 1; i >= 0; i--) {
    Value[i] = new SRSizeableDigit();
    Value[i].Pos.x = area.x + area.width - 30 - offset;
    Value[i].Pos.y = area.y + 5;
    var digit  : String = cur_value[ i ] + "";
    var digiti : int = int.Parse(digit);
    Value[i].digit = Digits[ digiti ];

    offset += 64;
}
```

…and by removing the + "" it does not…
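A minimal Java sketch of why the + "" matters in a loop like that (a Java analogue — the int array here is a simplified stand-in for the SRSizeableDigit structures above):

```java
import java.util.Arrays;

public class DigitLoop {
    public static void main(String[] args) {
        String curValue = "1234";
        int[] digits = new int[curValue.length()];
        for (int i = curValue.length() - 1; i >= 0; i--) {
            String digit = curValue.charAt(i) + ""; // + "" forces char -> String
            digits[i] = Integer.parseInt(digit);    // parses the text, not the char code
        }
        System.out.println(Arrays.toString(digits)); // prints "[1, 2, 3, 4]"
    }
}
```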

Fine? You really want me to show off my epeen? Hmm? You ready for it because here it comes:

```
namespace UnityScript.Lang

static class UnityBuiltins:

    def eval(code as string) as object:
        raise System.NotImplementedException()

    def parseInt (value as System.String) as int:
        return int.Parse(value)

    def parseInt (value as single) as int:
        return value

    def parseInt (value as double) as int:
        return value

    def parseInt (value as int) as int:
        return value

    def parseFloat (value as System.String) as single:
        return single.Parse(value)

    def parseFloat (value as single) as single:
        return value

    def parseFloat (value as double) as single:
        return value

    def parseFloat (value as int) as single:
        return value
```

BOOYEAH!

https://github.com/bamboo/unityscript/blob/master/src/UnityScript.Lang/UnityBuiltins.boo
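To see why that `parseInt (value as int)` overload is the culprit, here is a Java sketch that mirrors the overload set above (an assumed analogue, not the real UnityScript API — Java resolves a char argument via widening in the same way):

```java
// Hypothetical overloads mirroring UnityScript's parseInt built-in.
public class UnityBuiltinsSketch {
    static int parseInt(String value) { return Integer.parseInt(value); }
    static int parseInt(int value)    { return value; } // the trap: char widens to int

    public static void main(String[] args) {
        String value = "0";
        System.out.println(parseInt(value));                // String overload parses text: 0
        System.out.println(parseInt(value.charAt(0)));      // char widens to int: 48
        System.out.println(parseInt(value.charAt(0) + "")); // + "" picks the String overload: 0
    }
}
```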

I’ll note three things:

1. Boo is really clean.
2. US source code isn’t as hard to find as I would have thought.
3. What F*\$^#% decided to allow parseInt to take in int WITHOUT DOCUMENTING IT!?!

Edit: So yeah, I think that covers it - hopefully you now know why that error occurred. It’s down to a poorly documented built-in function - not an error on the part of US or the developer. If you have any other questions, ask away.

Edit 2: Regarding your last little problem - without the + "" you are assigning a char to digit, which is impossible, so you should get an error - I did.

Reviewing this thread… I think I overlooked the significance of this. I note that trying to recreate the error in a more modern version of Unity shows it now prevents the assignment of a char to a String, avoiding this problem.