@LethalGenes Hey buddy, you can’t just storm in with that kind of self-entitlement. You’re the one doing something way over your head, and then, when you ask for help, you think it’s normal to insult the people who try to explain it to you, as if they’re the weird ones.
Listen, computers don’t know what letters are. Everything in memory is binary data: a sequence of 0s and 1s, so-called bits. Text, music, video, code, whatever. If you split those binary sequences into 8-bit pieces, you get bytes. There is an old physical and technical reason why bytes have exactly eight bits, but I won’t go there.
If you “glue” two bytes together, you get a 16-bit unit that is traditionally called a ‘word’ (the exact size of a “word” depends on the machine, but 16 bits is the classic meaning I’m using here). That’s just jargon; it has nothing to do with linguistic words.
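Just to make that concrete, here’s a tiny sketch (plain console code, names made up by me) of gluing two bytes into one 16-bit word with a bit shift:

```csharp
using System;

class Program
{
    static void Main()
    {
        // Two separate bytes (8 bits each).
        byte high = 0x00;   // 0000 0000
        byte low  = 0x65;   // 0110 0101

        // "Glue" them together into one 16-bit word.
        ushort word = (ushort)((high << 8) | low);

        Console.WriteLine(word);            // 101
        Console.WriteLine($"0x{word:X4}");  // 0x0065
    }
}
```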
C# represents strings as sequences of a 16-bit type called char (short for character). Arrays are contiguous, random-access sequences allocated in memory, and in this case you can address any 16-bit element inside this typically much longer sequence of bits. Strings are basically arrays of these 16-bit words, and there is a lot more to be said about how exactly this works, but I don’t want to be accused of smoking.
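Here’s a minimal sketch of what that buys you in practice, assuming a simple console program; the string and variable names are just illustration:

```csharp
using System;

class Program
{
    static void Main()
    {
        string s = "treehouse";

        // Random access: grab any 16-bit element by its (0-based) position.
        char first = s[0];   // 't'
        char third = s[2];   // 'e'

        // You can also copy the string out into a plain char array.
        char[] chars = s.ToCharArray();

        Console.WriteLine(first);              // t
        Console.WriteLine(third);              // e
        Console.WriteLine(chars.Length);       // 9
        Console.WriteLine(sizeof(char) * 8);   // 16 (bits per element)
    }
}
```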
The C# designers decided to adopt the UTF-16 character encoding (16-bit Unicode Transformation Format), so that the same sequence of 16-bit values means the same text on every machine. UTF-16 is just an agreed-upon standard. So when you access the third element of the string “treehouse” you get the character ‘e’, which is represented by the bit sequence 0000 0000 0110 0101, or 0x0065 in hexadecimal notation (0110 = 6, 0101 = 5). In human-readable notation, the Unicode standard writes the lowercase Latin letter ‘e’ as U+0065.
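If you want to see this on your own machine, here’s a rough console sketch (my own variable names) that pulls out that third element and prints it in decimal, hex, and binary:

```csharp
using System;

class Program
{
    static void Main()
    {
        string s = "treehouse";
        char c = s[2];   // the third element (indexing starts at 0)

        Console.WriteLine(c);                  // e
        Console.WriteLine((int)c);             // 101
        Console.WriteLine($"U+{(int)c:X4}");   // U+0065, the Unicode notation

        // The raw 16 bits, padded to full width.
        Console.WriteLine(Convert.ToString((int)c, 2).PadLeft(16, '0'));
        // 0000000001100101
    }
}
```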
This is what the char type does. It stores those 16 bits, so it’s a single word, but it’s also a 16-bit value, because every such bit sequence corresponds directly to a binary number. For convenience, however, the C# compiler lets you write a char as a single character in single quotes, so you can type ‘e’ and it will do the translation to 0x0065 for you. Internally, though, the type is still treated as an integer, and that’s why you can add two chars together.
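A quick sketch of that, again just an illustrative console snippet:

```csharp
using System;

class Program
{
    static void Main()
    {
        char e = 'e';      // the compiler turns this literal into 0x0065 for you

        int asNumber = e;  // implicit conversion: a char is already a number
        Console.WriteLine(asNumber);   // 101

        // char + char (or char + int) is evaluated as integer arithmetic,
        // and the result is an int.
        int sum = 'a' + 'b';
        Console.WriteLine(sum);        // 97 + 98 = 195

        // Cast back to char when you want a character again.
        char next = (char)('a' + 1);
        Console.WriteLine(next);       // b
    }
}
```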
This is a bit advanced for a beginner, but incredibly useful. These Unicode code points were laid out to be compatible with how programmers are used to treating text, so you can use them to work out the relative position of letters in the alphabet. That matters when sorting and comparing text, or when converting between lowercase and uppercase, and so on: the standard follows enough order and rules that working with machine text stays consistent and versatile.
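Here’s a small sketch of what I mean. Note that the arithmetic trick in the middle only works for the plain ASCII letters A-Z / a-z; char.ToUpper is the general built-in way:

```csharp
using System;

class Program
{
    static void Main()
    {
        // Letters compare by their code point, so ordering follows the alphabet
        // (for the basic Latin letters, at least).
        Console.WriteLine('a' < 'b');   // True
        Console.WriteLine('d' - 'a');   // 3, i.e. 'd' is three letters after 'a'

        // Lowercase and uppercase ASCII letters sit a fixed distance apart,
        // so simple arithmetic converts between them. ASCII only!
        char upper = (char)('e' - ('a' - 'A'));
        Console.WriteLine(upper);       // E

        // The built-in way, which also handles letters beyond ASCII:
        Console.WriteLine(char.ToUpper('e'));   // E
    }
}
```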
Finally, make sure you understand that, fundamentally, there are no kinds of data other than 0s and 1s on a digital computer. Everything else is an ongoing hallucination produced by clever conversions of 0s and 1s into other signals, typically done by hardware, not software. Software, at the level of a modern programming language, likes to pretend that there are various data types, such as text, decimal values, colors, or vectors, mostly because this makes complex programs much easier to write and maintain, and because it makes you less likely to mix and match the wrong kinds of data (that’s what a type is in C#). In reality, in memory and in the CPU, it’s all just a soup of discrete noise maintained by slightly different amounts of voltage.
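To drive the point home, here’s one last sketch (assuming BitConverter and a little-endian machine, which is what you almost certainly have) showing the same two bytes read as two different types:

```csharp
using System;

class Program
{
    static void Main()
    {
        // The same two bytes, read two different ways
        // (byte order assumes a little-endian machine).
        byte[] bits = { 0x65, 0x00 };   // the 16-bit pattern 0x0065

        char asChar = BitConverter.ToChar(bits, 0);
        ushort asNumber = BitConverter.ToUInt16(bits, 0);

        Console.WriteLine(asChar);     // e
        Console.WriteLine(asNumber);   // 101

        // Nothing in memory says which interpretation is "right";
        // the type is just how the program chooses to read the bits.
    }
}
```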