That’s kind of a complicated question.
Just having a huge amount of data tends to make things slower. To some extent, this is true regardless of what kind of data structure you use or how cleverly you use it, but there are certain things you can do better or worse.
There are several different problems you can run into with dictionaries, and whether you actually will run into them sometimes depends on the distribution of your data and on implementation details that you would usually ignore.
One problem you were very likely running into here is that dynamically-sized data structures need to guess how much memory to allocate, and if they guess too low they need to “reallocate”: create a bigger data structure and copy all of their existing data into it. For instance, if a List has enough space to store 128 items and you try to insert a 129th, it needs to allocate a bigger chunk of memory and then copy all 128 existing items into it in order to keep everything “lined up” (otherwise you’d forfeit most of the efficiency advantages of a list). So inserting that 129th item (in this hypothetical example) is way more expensive than the previous inserts, because it implicitly requires the List to copy the other 128.
Dictionaries are more complicated, but they do something similar. If you start with a small dictionary and then just add things one at a time, it will need to periodically copy everything into a bigger space to make room. The bigger the dictionary gets, the more expensive this copying gets. (Data structures like this usually grow exponentially, so that the frequency of reallocations gets smaller as they get more expensive.)
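If you want to actually watch that growth happening, here’s a quick sketch using a plain .NET List&lt;int&gt; (the starting capacity and growth factor are runtime implementation details, so the exact numbers you see may differ):

```csharp
using System;
using System.Collections.Generic;

class GrowthDemo
{
    static void Main()
    {
        var list = new List<int>();
        int lastCapacity = list.Capacity;

        for (int i = 0; i < 1_000_000; i++)
        {
            list.Add(i);
            if (list.Capacity != lastCapacity)
            {
                // Every capacity change means a new backing array was allocated
                // and all the existing items were copied into it.
                Console.WriteLine($"Grew from {lastCapacity} to {list.Capacity} at item {i + 1}");
                lastCapacity = list.Capacity;
            }
        }
    }
}
```

Notice that the reallocations get rarer as the list gets bigger, which is the exponential-growth strategy mentioned above.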
But if you know you’re going to need a really big dictionary, you can avoid all this copying by saying so in advance. There’s a constructor where you specify what “capacity” you want the dictionary to have. If you know you’re going to add a million items, you can ask for a capacity of a million up-front and it will start out big instead of needing to periodically grow.
This also applies to Lists, by the way! If you know you’re going to stuff a million entries into your List, make sure to construct it with a suitably large capacity.
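Something like this (the collection types and capacity constructors here are standard .NET; the one-million figure is just for illustration):

```csharp
using System.Collections.Generic;

class CapacityDemo
{
    static void Main()
    {
        // Asking for the capacity up front means the dictionary starts out
        // big enough and never needs to grow-and-copy while we fill it.
        var bigDictionary = new Dictionary<string, int>(capacity: 1_000_000);

        // The same idea works for List<T>.
        var bigList = new List<int>(capacity: 1_000_000);

        for (int i = 0; i < 1_000_000; i++)
        {
            bigDictionary[i.ToString()] = i;
            bigList.Add(i);
        }
    }
}
```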
Another problem that’s unique to dictionaries is hash collisions. Under the covers, C# Dictionaries use hash tables to store data. Briefly, that means each key you insert is “hashed” to produce a number that’s used to quickly find the right “approximate location” in the dictionary; think of it like turning to the correct page. Dictionaries work most efficiently when all of your keys have different hashes. When there’s only one entry on a given “page”, you can easily see the answer as soon as you go there. When multiple keys have the same hash, that’s called a “collision”, and it means the “page” gets more crowded, so the dictionary needs to spend more effort actually reading the “page” once it gets there to find the right individual entry.
So the question is: how different are your hashes? That can be a hard question to answer, since it depends both on the hash algorithm being used and on how your data is distributed. For example, imagine that you write a hash function for Vector2Int that simply returns x ^ y. That means (among other things) that the vector (x, y) will always have the same hash as the vector (y, x). If you happen to be using a lot of mirror-image vectors as keys, that could be a problem! But if your vectors are distributed randomly, it might not matter.
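Here’s a toy version of that scenario, using a made-up GridPos struct with a deliberately naive x ^ y hash (Unity’s real Vector2Int may hash differently; this is just to show the mirror-image collision):

```csharp
using System;

// A hypothetical grid-coordinate type with a deliberately naive hash.
readonly struct GridPos : IEquatable<GridPos>
{
    public readonly int X, Y;
    public GridPos(int x, int y) { X = x; Y = y; }

    public bool Equals(GridPos other) => X == other.X && Y == other.Y;
    public override bool Equals(object obj) => obj is GridPos p && Equals(p);

    // Naive hash: x ^ y is symmetric, so (a, b) and (b, a) always collide.
    public override int GetHashCode() => X ^ Y;
}

class CollisionDemo
{
    static void Main()
    {
        var a = new GridPos(3, 7);
        var b = new GridPos(7, 3);

        // Different keys, identical hashes -> guaranteed collision.
        Console.WriteLine(a.GetHashCode() == b.GetHashCode()); // True
        Console.WriteLine(a.Equals(b));                        // False
    }
}
```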
Since you are converting 2 ints (x and y coordinates) into a single int, you are always going to get some vectors that end up with the same hash. Theoretically, for any hash algorithm you use, there’s always some data set that will perform really badly with that hash algorithm. The goal is to try to pick one that performs well for “likely” data sets (while also being really fast to calculate).
If you use some obscure data type whose author didn’t really expect it to be used as a dictionary key, they might not have put very much effort into its hash function…
If your keys are really big, then there’s this annoying issue that writing a hash algorithm that takes into account every detail of the really big thing will take longer to compute, which slows things down. But if you don’t take every detail into account, then sometimes the details you ignore will be the important ones, and then you get a LOT of collisions. I heard one horror story about some company that decided to hash strings by looking at only the first 5 and last 5 characters, so that if you used a million-character-long string they wouldn’t have to hash the whole thing one character at a time. Then someone made a hash table with URLs as keys, so the first 5 characters were mostly “http:” and the last 5 were mostly “.html”, so they all hashed to the same value and the Dictionary tried to put everything on the same “page”…
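Just to illustrate how that goes wrong, here’s a toy re-creation of that kind of “first few plus last few characters” hash (not the actual algorithm from that story, or from any real library):

```csharp
using System;

class PrefixSuffixHashDemo
{
    // A toy hash that only looks at the first 5 and last 5 characters,
    // in the spirit of the horror story above.
    static int LazyHash(string s)
    {
        int hash = 17;
        for (int i = 0; i < Math.Min(5, s.Length); i++)
            hash = hash * 31 + s[i];
        for (int i = Math.Max(0, s.Length - 5); i < s.Length; i++)
            hash = hash * 31 + s[i];
        return hash;
    }

    static void Main()
    {
        // Every URL starts with "http:" and ends with ".html",
        // so they all produce exactly the same lazy hash.
        Console.WriteLine(LazyHash("http://example.com/a.html"));
        Console.WriteLine(LazyHash("http://example.com/some/other/page.html"));
        Console.WriteLine(LazyHash("http://example.com/completely/different/path.html"));
    }
}
```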
So nowadays, if you use strings as keys, the hash function is probably looking at the whole string. This is good for safety, but means that using very long strings as keys can cause its own problems…
Collisions get even more complicated because dictionaries don’t actually store every unique hash separately; they sort the hashes into a limited number of bins (or “buckets”) based on how big the dictionary is. (You wouldn’t really want every dictionary to allocate int.MaxValue “pages” just in case, would you?) So sometimes two different hashes still end up on the same “page”.
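In simplified form (the real Dictionary internals are more elaborate than a bare modulo, but this is the core idea):

```csharp
using System;

class BucketDemo
{
    static void Main()
    {
        int bucketCount = 16; // imagine a small dictionary with 16 "pages"

        int hashA = 37;  // two hashes that are clearly different...
        int hashB = 53;

        int bucketA = hashA % bucketCount; // 37 % 16 = 5
        int bucketB = hashB % bucketCount; // 53 % 16 = 5

        // ...yet they land on the same "page" anyway.
        Console.WriteLine($"hash {hashA} -> bucket {bucketA}");
        Console.WriteLine($"hash {hashB} -> bucket {bucketB}");
    }
}
```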