Most modern operating systems use virtual memory capabilities (supported by hardware features) to memory-map the executable file into memory, which suggests there will be little to no effect from the sheer size of the executable if its contents are otherwise largely unused/unreferenced.
Virtual memory combined with copy-on-write also handles read/write data unique to each instantiated process: copy-on-write detects when initially file-backed pages are modified, and under memory pressure the virtual memory system pushes dirty (modified) pages out to the paging file as needed.
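To see the memory-mapping side of this concretely, here is a minimal Python sketch using the `mmap` module. It assumes 4 KiB pages for illustration; the point is that mapping a file reserves address space without reading the file's contents, and pages are only faulted in when actually touched (demand paging):

```python
import mmap
import os
import tempfile

# Create a file spanning several 4 KiB pages (4096 is an assumed
# page size for illustration; the real value comes from os.sysconf).
fd, path = tempfile.mkstemp()
os.write(fd, b"\x00" * (4096 * 4))
os.close(fd)

with open(path, "rb") as f:
    # Map the whole file read-only. The OS does not read the data yet;
    # each page is loaded on first access (demand paging).
    m = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    first_byte = m[0]  # touching this byte faults in only its page
    m.close()

os.remove(path)
```

An executable loaded the same way behaves analogously: pages of code or data that are never referenced are simply never read in.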
OK, factoring out load time, let's assume for starters that the extra, unused file content you're asking about occurs after all the code that actually does get used, which is grouped together at the beginning. There should be virtually no effect on runtime performance in that case. Very generally speaking, except for the CPU cache, the same instruction sequence executed will take the same time.
There are cache specific behaviors that might give you a hiccup or even some pathological behaviors, but I don't really see any big problems associated with a big contiguous chunk of either code or data that is not referenced in any way.
To be more clear, hardware caches have a notion of associativity; on modern processors that is usually 8-way or more (embedded parts may vary). An N-way associative cache allows up to N addresses that hash to the same value (a CPU-internal hash of the address) to be cached at the same time. When you try to cache an (N+1)'th value at the same hash, one of the other elements gets evicted, even if there are still kilobytes left unused in the cache. Hardware caches are designed to work really well with contiguous memory.
So, if you insert unused memory in between memory that is used, you could create a pathological case where you are not using the cache effectively due to the associativity limit. You would have to try pretty hard: by putting all your actually accessed code (and data) on the same cache-set hash, you could exhaust the associativity. Mind that some of this (running out of associativity) happens even under normal circumstances, so barring a deliberately constructed worst case, this should not be a factor even for one or more lumps of code or data that are loaded but not otherwise used.
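The "same hash" here is, on typical set-associative caches, just the set index derived from the address. A small sketch of the arithmetic (the cache geometry below — 64-byte lines, 64 sets, 8 ways — is an assumed, typical L1 configuration, not any specific CPU):

```python
LINE_SIZE = 64   # assumed bytes per cache line
NUM_SETS = 64    # assumed: 32 KiB cache / (64-byte lines * 8 ways)
WAYS = 8         # assumed associativity

def cache_set(addr):
    # Typical set-index function: line number modulo number of sets.
    return (addr // LINE_SIZE) % NUM_SETS

# Addresses spaced exactly NUM_SETS * LINE_SIZE apart all map to
# the same set, so WAYS + 1 of them cannot all be cached at once.
stride = NUM_SETS * LINE_SIZE   # 4096 bytes with these parameters
addrs = [0x10000 + i * stride for i in range(WAYS + 1)]
sets = {cache_set(a) for a in addrs}
# len(sets) == 1: nine hot addresses, one set, only eight ways --
# accessing all of them in turn keeps evicting one of the others.
```

This is why the pathological case requires such regular spacing: unused filler only hurts if it forces all your hot addresses onto the same small number of sets.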
Now, there's still another effect, which is that caches are segmented into lines: chunks of, say, 64 bytes on most modern CPUs (32 or 128 bytes on some others). In another pathological case, you can use up the cache in a different way. Since the cache loads a full line even when only a single byte of it is needed, you could create a scenario where the cache is exhausted more quickly than you'd expect by using only a very small amount of actual memory per cache line. As with the other case, you would have to work hard to construct it, but the idea is to intersperse the code (or data) that is used with code or data that isn't used, at certain very regular intervals.
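The cost of that interspersing is easy to quantify: count how many distinct lines a given access pattern pulls in. A quick sketch, again assuming 64-byte lines:

```python
LINE_SIZE = 64  # assumed bytes per cache line

def lines_touched(n_accesses, stride):
    # Number of distinct cache lines loaded when touching one byte
    # every `stride` bytes, n_accesses times.
    return len({(i * stride) // LINE_SIZE for i in range(n_accesses)})

dense = lines_touched(1024, 1)          # 1024 contiguous bytes
sparse = lines_touched(1024, LINE_SIZE) # one byte per line

# dense:  1024 bytes fit in 16 lines (1 KiB of cache footprint)
# sparse: the same 1024 useful bytes occupy 1024 lines (64 KiB),
#         enough to blow through a typical 32 KiB L1 cache
```

The same 1 KiB of useful data costs 64 times the cache footprint when it is spread one byte per line, which is exactly the pathological interleaving described above.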
(There is a similar effect on virtual memory paging, that you could use up real memory faster than you'd like by using only a small number of bytes per page.)
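The paging version of that arithmetic, assuming 4 KiB pages:

```python
PAGE_SIZE = 4096   # assumed 4 KiB pages

touched_pages = 1024                      # one byte used in each page
useful_bytes = touched_pages * 1          # 1 KiB of data you care about
resident_bytes = touched_pages * PAGE_SIZE
# 1 KiB of useful data pins 4 MiB of physical memory,
# a 4096x overhead from the page-granularity of residency.
```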
Barring some pathological construction designed to hurt the cache, extra unused code or data should not affect the runtime performance.