Searching a list using a character @_@ ? Use Case?

My question is simple.

Below, health is a string, and health[0] or health[1] is a character index into it.


But something I discovered while making this code is that...


the health string can be indexed with a character: health['character']



What I'm asking is: health[index] makes sense as an index into the health string, but if I ask for health[character], what exactly will it return? For example, what if the string had multiple characters, like "44"?



A "char" is a primitive type like byte or int. It's actually a 16 bit number. C# allows an implicit conversion between char and int. This is valid:

int myInt = 'V';

myInt will contain the value 86, which is the ASCII (and Unicode) code for the letter "V".

So writing health['V'] is equivalent to writing health[86].
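To see it in action, here's a minimal sketch (the health string here is just a made-up example; any string behaves the same):

```csharp
using System;

class CharIndexDemo
{
    static void Main()
    {
        // A hypothetical "health" string, long enough to have an index 86.
        string health = new string('x', 100);

        // 'V' is implicitly converted to the int 86, so both lines
        // read the same element of the string:
        char a = health['V'];
        char b = health[86];
        Console.WriteLine(a == b); // True

        // On a short string, the same access is out of range:
        string hp = "44";
        // char c = hp['V']; // would throw IndexOutOfRangeException
    }
}
```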

that's all.

Oh, so it is asking for the string's character at index 86; that's why it fails here. But if it were asked of a string at least 87 characters long, it would return a valid character.

It seems odd that only numerals aren't used. But I guess for easier encryption and encoding, every letter can be discussed as an equivalent number.

But it occurs to me: if I want 86 and I don't want to use two digits (why? who knows!) but want to write it with just one character, I could use 'V'.

So if I had a private scripted value of 86, could I send that over the network as a 'V'?

I can't make sense of this paragraph, sorry ^^. Maybe you struggle to understand what different datatypes are and how they are actually represented in binary? In the end EVERY bit of data in a computer is stored in binary.

This also doesn't really make much sense. A char is just a datatype with 16 bits, so it can represent 65536 different values. A short is also a 16-bit datatype and can likewise store 65536 different values. However, a char is used to represent characters based on the Unicode table, while a short is just a number without any other association. A short value can be converted into a char if needed. An int, on the other hand, is a 32-bit value and can store over 4 billion different values.

Writing 'V' when you actually need / want the value 86 makes absolutely no sense. First of all, it's actually longer to write the char literal, since you need the two quotes as well: you write 3 characters in your source code compared to 2 when writing 86. And under the hood, in memory, both require 2 bytes, so you don't save anything space-wise. It would just be more cryptic. A single byte value (8 bits) can store values in the range 0 - 255, and it always requires the same amount of space.
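The size claims above can be checked directly; a small sketch:

```csharp
using System;

class SizeDemo
{
    static void Main()
    {
        // sizeof on the built-in primitive types is allowed in safe code:
        Console.WriteLine(sizeof(char));  // 2 bytes (16 bits)
        Console.WriteLine(sizeof(short)); // 2 bytes (16 bits)
        Console.WriteLine(sizeof(int));   // 4 bytes (32 bits)

        // A short can be converted to a char with an explicit cast:
        short code = 86;
        char letter = (char)code;
        Console.WriteLine(letter); // V

        // 'V' and 86 are the same 16-bit value; the char literal is
        // merely longer to type in source code.
        Console.WriteLine('V' == 86); // True
    }
}
```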

A char can represent any character from the BMP (Basic Multilingual Plane).

If you are using an English forum you should learn to speak English.

I couldn't understand any of your reply. In the first one I was able to piece together some consistent logic.

were you high?

Way to go, insulting people who have been constructively contributing to this forum for way longer than yourself.

Bunny83's responses make much more sense to me than your questions. Maybe try re-reading them carefully, do your research and then ask specific questions for the parts you still don't understand?


@LethalGenes Hey buddy you can't just storm in with that kind of self-entitlement. You're the one doing something way above your head, then when you ask for help, you find it normal to insult people who try to explain it to you, like it's them who's weird.

Listen, computers don't know what letters are. Everything in memory is some binary data; everything is a sequence of 0s and 1s, so-called bits. Text, music, video, code, whatever. If you split these binary sequences into 8-digit pieces, you get bytes. There is an ancient physical and technical reason why bytes have exactly eight bits, but I won't go there.

If you "glue" two bytes together, that's called a 'word'. A word has 16 bits. That's just a term, it has nothing to do with linguistic words.

C# standardizes strings as arrays of a 16-bit type called char (short for character). Arrays are contiguous random-access sequences allocated in memory, and in this case you can address any 16-bit element inside this typically much longer sequence of bits. Strings are basically word arrays, and there is a lot more to be said about how exactly this works, but I don't want to be accused of smoking.

C# designers made a decision to adopt the UTF-16 character encoding (16-bit Unicode Transformation Format), so that each word sequence represents a known data element across multiple machines. UTF-16 is just a conventional standard. So when you access the third element of the string "treehouse" (index 2, since indexing is zero-based), you get the character 'e', which is represented by the sequence 0000 0000 0110 0101, or just 0065 in hexadecimal notation (0110 = 6; 0101 = 5). In human-readable notation, the Unicode standard annotates the lowercase Latin letter 'e' as U+0065.
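A quick sketch of that lookup, printing the code point both ways:

```csharp
using System;

class UnicodeDemo
{
    static void Main()
    {
        string s = "treehouse";
        char third = s[2];        // zero-based, so this is the third element
        Console.WriteLine(third); // e

        // The same 16 bits viewed as a number:
        Console.WriteLine((int)third);                  // 101 (decimal)
        Console.WriteLine(((int)third).ToString("X4")); // 0065 (hex, as in U+0065)
    }
}
```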

This is what the char type does. It stores 16 bits, so it's just a single word, but also a 16-bit value, because every such sequence on a computer corresponds directly to a binary number. For convenience, however, the C# compiler allows you to write the character (intended for the char type) as single-quoted text, so you can type 'e' and it will do the translation for you. This type is still internally treated as an integer, and that's why you can add two chars together.

This is a bit advanced for a beginner, but incredibly useful, because these Unicode standards are made to be compatible with how programmers are used to treating text. You can basically use them to determine the relative positioning between the letters in the alphabet, which is useful when sorting and comparing text, or when converting between lowercase and uppercase, and so on. There is some degree of order and rules followed by these standards, so that working with machine text is more consistent and versatile.
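As a small illustration of that "relative positioning" idea (a sketch that only works for the basic Latin letters A-Z/a-z; real code should use char.ToUpper / char.ToLower):

```csharp
using System;

class CaseDemo
{
    static void Main()
    {
        // In Unicode (as in ASCII) 'a'..'z' and 'A'..'Z' are contiguous
        // runs exactly 32 code points apart, so basic Latin case
        // conversion is simple arithmetic:
        char lower = 'v';
        char upper = (char)(lower - 32);
        Console.WriteLine(upper); // V

        // Relative position in the alphabet (zero-based):
        int position = 'v' - 'a';
        Console.WriteLine(position); // 21
    }
}
```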

Finally, make sure you understand that, fundamentally, there are no types of data other than 0s and 1s on a digital computer. Everything else is just an ongoing hallucination made by clever conversions of 0s and 1s to another signal. This is typically done by hardware, not software. However, software, at the level of a modern programming language, likes to pretend that there are various data types, such as text, decimal values, colors, or vectors. This is mostly because it makes complex programs much easier to write and maintain, and you are less likely to mix and match wrong types of data (this is what a type is in C#). In reality, in memory and in the CPU, it's all just a soup of discrete noise maintained by slightly different amounts of voltage.


Let me finish by explaining your last post
'a' is U+0061 and 'c' is U+0063
These (hexadecimal) values are equivalent to the decimal values 97 and 99, and if you add them together and print the result, you get 97+99, or 196. This isn't a useful value at all, but there you go; nobody stops you from making whatever you want, including nonsense. That's what programming is all about.
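That addition, written out as a sketch:

```csharp
using System;

class AddChars
{
    static void Main()
    {
        // char + char promotes both operands to int, so the result is int:
        int sum = 'a' + 'c';
        Console.WriteLine(sum); // 196 (97 + 99)

        // Casting the sum back to char gives whatever character sits at
        // code point 196, which is rarely what you want.
        Console.WriteLine((char)sum); // Ä (U+00C4)
    }
}
```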

If you still can't understand a word of what I'm saying, you shouldn't really try to make programs yet, because you don't understand what computers are. There are some pretty important gaps in your knowledge, and you should work on those instead of trying to make games, which are supposed to be the pinnacle of computer science but are perceived differently.


Having seen a few of these threads, this guy reminds me of someone we used to see around here.


I also have some AnimalMan flashbacks. That's why I'm out :)


Closed. Because... well, you know.