There are a number of factors here, but we can definitely lay out some principles around these kinds of situations. Let's start with the basic framework. Consider the following visualization:
time it takes to load    |----------|
time it takes to process |----------|
The length of the line represents time. The units involved matter in practice but not at the conceptual level.
Now here's what it looks like when you load the data and then process it:
loading |----------|
process             |----------|
We can simply add the time it takes to load to the time it takes to process. Now consider what happens if we don't wait for loading to finish before we start processing. It might look something like this:
loading |----------|
process   |----------|
Now I've made an assumption here that the loading process can happen in parallel with processing. While this isn't guaranteed, it's absolutely doable with non-blocking IO. Even with regular IO, this is often still roughly how things happen.
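To make the contrast concrete, here's a minimal Python sketch of the two approaches. The `process` step is a hypothetical stand-in for whatever per-line work you actually do:

```python
import io

def process(line):
    # hypothetical stand-in for real per-line work
    return line.strip().upper()

def preload_then_process(f):
    # wait for the whole input to load, then process it
    lines = f.readlines()
    return [process(line) for line in lines]

def stream_process(f):
    # process each line as soon as it is available
    return [process(line) for line in f]

data = io.StringIO("alpha\nbeta\ngamma\n")
print(stream_process(data))  # ['ALPHA', 'BETA', 'GAMMA']
```

Both functions produce the same result; the difference is that the streaming version never has to hold the full input in memory, and with non-blocking IO the per-line work can overlap with the loading itself.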
Now if either the loading or the processing is insignificant, this won't have a major impact either way. But when both take long enough to matter, stream processing can make a serious dent in the total time. Another case where this can make a big difference is when you chain processing steps, such as in a 'pipes and filters' design. e.g. you could have this:
|----------|
            |----------|
                        |----------|
                                    |----------|
                                                |----------|
Or this:
|----------|
 |----------|
  |----------|
   |----------|
    |----------|
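In Python, this kind of overlapped pipeline falls out naturally from generators: each filter pulls one item at a time from the stage before it, so all the stages run interleaved rather than each one waiting for the previous stage to finish. A minimal sketch, with illustrative stage names:

```python
def read_items(items):
    # source stage: yields items one at a time
    for item in items:
        yield item

def drop_blank(lines):
    # filter stage: skip empty lines
    for line in lines:
        if line.strip():
            yield line

def upper(lines):
    # transform stage: uppercase each line
    for line in lines:
        yield line.upper()

source = ["one\n", "\n", "two\n"]
pipeline = upper(drop_blank(read_items(source)))
print(list(pipeline))  # ['ONE\n', 'TWO\n']
```

No stage materializes the whole dataset; each item flows through every filter before the next item is read, which is exactly the second diagram above.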
This simplifies some things, of course, but at a high level it holds. So with regard to your situation, the most costly step is likely the download of the file. You don't seem to be considering that, but if you wanted to stream, it would really be against the data as you pull it down. If your processing is relatively quick, though, there's not much advantage, and streaming could introduce some complexity.
Another factor to consider if you really want to eke out every last drop of performance: it takes time to allocate memory. Let's say you need to allocate 1 KiB of memory per line and there are 1024 lines. That's 1 MiB of memory if you pre-load, versus roughly 1 KiB at a time if you process at the line level. It takes a lot longer to allocate a megabyte of memory than a kilobyte, and that memory also has to be reclaimed, which takes time too.
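The arithmetic from that example, spelled out:

```python
# Numbers from the example: 1 KiB per line, 1024 lines
lines = 1024
bytes_per_line = 1024

preload_bytes = lines * bytes_per_line  # whole file resident at once
stream_bytes = bytes_per_line           # roughly one line at a time

print(preload_bytes)  # 1048576 bytes = 1 MiB
print(stream_bytes)   # 1024 bytes = 1 KiB
```

A factor of 1024 in peak allocation, before you even count the cost of reclaiming it.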
Ultimately, at a high level, if you are processing data sequentially, pre-loading it takes more time and resources. When you are loading small files from disk or SSD, it won't matter, and you might even get a small speed boost from pre-loading because of how your hardware manages IO. But for any significant amount of data, pre-loading is less efficient.
It's important to note that there are other considerations, such as error handling, which can be more complex in a streaming solution. And if you need all the data at once for a calculation, or need to access the same values repeatedly, streaming can become impractical or impossible.