I think OP is being funny, but in case they’re not:
For the purposes of this comment, let’s say data is memory you can read and write, but not execute, and code is memory you can read and execute, but not write.
The CPU does not run faster when the executable code is located in memory near the data it operates on. (There are minimal caveats, like when code is generated as data, as with some JIT compilers, but we can safely ignore those around here.)
In fact, on modern CPUs, appreciable effort has gone into making sure code and data don’t occur in the same memory regions, for reasons of security and stability (this is the idea behind W^X / DEP: a page can be writable or executable, but not both). Any unit of memory that matters for performance (page, cache line, …) is generally not allowed to contain both data and executable code.
The strategies for optimizing the memory layout of data differ greatly from the comparable strategies for code. With data, you generally want things to align neatly with page boundaries, cache lines, and register sizes. The specific reasons differ at each level, but they generally come down to letting the CPU efficiently load things into the kinds of memory where they are faster to access or operate on (caches, registers, …). The layout of data is pretty easy to optimize well, which is what allows ECS to make intelligent decisions about where to put stuff in memory.
(I’d go as far as saying ECS is mostly a fancy malloc, but I might get yelled at for saying that.)
Memory that contains code to be executed gets fetched into different hardware structures: the instruction cache rather than the data cache, plus front-end machinery like the decode queue and branch-predictor state. These have different properties, which are a lot harder to intuit. (Mostly they have to do with enabling branch prediction and speculative execution, which boils down to the CPU never running out of stuff to do.)