0

The original question was deemed to lack focus. This post is specifically about CPU pipeline stalls.

How does synchronous microarchitecture implement pipeline stall when a cache miss occurs during instruction fetch? Specifically,

  1. Does the cache controller have to determine a hit or a miss within a single CPU cycle, drive a hit/miss signal to the CPU control unit, and does CPU control then have to assert all the necessary stall signals to the datapath pipeline within that same cycle? During the next cycle, does the CPU disable the PC register from updating so that the in-flight fetch address stays stable until the miss is serviced, while sending a "bubble" down the pipeline? How exactly could a "bubble" / NOP be implemented in a pipeline? (A rough sketch of the mechanism I have in mind follows these two questions.)

  2. Does the cache controller inform the CPU that a read has finished via some strobe signal to CPU control, since a read miss could take an arbitrarily long time?
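For concreteness, here is roughly the mechanism I am imagining, written as a toy cycle-by-cycle model in Python. The `icache.lookup` / `icache.start_refill` interface and the NOP encoding are invented purely for illustration; this is not taken from any real design.

```python
# Toy model of the stall mechanism I have in mind (illustrative only).

NOP = 0x00000013   # RISC-V "addi x0, x0, 0", used here as the bubble encoding

def fetch_cycle(pc, icache, if_id):
    """One clock edge of the fetch stage."""
    hit, instr = icache.lookup(pc)       # hit/miss decided combinationally this cycle
    if hit:
        if_id['instr'] = instr           # latch the fetched instruction into IF/ID
        if_id['pc'] = pc
        return pc + 4                    # PC write-enable asserted: PC advances
    else:
        if_id['instr'] = NOP             # a bubble goes down the pipeline instead
        if_id['pc'] = pc
        icache.start_refill(pc)          # refill proceeds in the background
                                         # (assumed idempotent while a miss is pending)
        return pc                        # PC write-enable deasserted: address held stable
```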

Oliver Young
  • uff, could you maybe restrict yourself to one question? With one question mark? – Marcus Müller Sep 29 '21 at 21:38
  • I am trying to understand things and to form a systematic view of a certain subarea. My questions are closely related to a single area, and someone who really understands that area could come up with a description of how things work that addresses my confusion all at once. [Edited by a moderator.] – Oliver Young Sep 29 '21 at 21:43
  • It's just that addressing all these questions is not possible in a terse manner, and you'd get better answers (or any at all) when you focus on a single question. Adding 4 more questions to the same question doesn't "enhance" the first question. [Edited by a moderator.] – Marcus Müller Sep 29 '21 at 21:58
  • Okay, as you mention, my first question warrants writing an entire chapter. Could you refer me to a book chapter that describes how a pipeline stall could be implemented in a concrete way, instead of just saying "insert a bubble into the pipeline"? At least I couldn't find any book that does it. Part of being a newbie to an area is not being able to ask educated/intelligent questions in that area. [Edited by a moderator.] – Oliver Young Sep 29 '21 at 22:05
  • And the reason I added a list of follow-up questions that detail my first one is that I want to avoid someone coming along and saying "you just insert a bubble into the pipeline" (which seems to be what every computer architecture book does). I'd like to base the discussion / questions on a detailed foundation, thus adding a few leading questions, and hopefully someone who really knows the subject can tell where exactly I am wrong in my thinking. – Oliver Young Sep 29 '21 at 22:08
  • All - To minimise further mod actions, *please* make sure to comply with the site's [Code of Conduct](/help/conduct) in your comments. Even if you disagree with a comment or are frustrated by it, please don't be unkind / abusive in your reply. If your comment makes readers think that you're "raising your voice", or if your comment includes the word "you", that's probably a sign that you should check your reply still follows the Code of Conduct. Several comments have had to be deleted and edited, to remove parts which either broke the Code, or which referred to now-deleted comments. Thanks. – SamGibson Sep 29 '21 at 22:56

1 Answer

2

Oliver, you seem to be conflating a number of fundamental concepts. This is why you are getting pushback: your question lacks focus, and giving a meaningful answer would be difficult.

First up, 'synchronous microarchitecture' is a fluff phrase. So let's concentrate on cache misses and pipeline stalls.

A cache miss won't necessarily cause a pipeline stall - these are two separate mechanisms.

For the pipeline, think of a car production line. For whatever reason, incoming parts (instructions) are not available. What happens to the production line? It stalls. What does this entail? Nothing moves on the production line. Time is wasted. Obviously this is something we want to avoid due to inefficiency.

The idea of the cache is to anticipate what memory is going to be accessed and keep a fast, local copy. For our car production line example, the cache is the local store of parts (instructions). When that local store is depleted, the next request is going to be a miss. Simply put, we have to wait until we get another shipment from main memory. This means our production line stalls.
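To make the "fast local copy" and the hit/miss decision concrete, here is a minimal sketch of a direct-mapped lookup in Python. The line size and the number of sets are arbitrary illustrative choices, not taken from any particular cache.

```python
# Minimal direct-mapped cache lookup (illustrative sizes only).
LINE_BYTES = 16              # bytes per cache line -> 4-bit offset field
NUM_SETS   = 256             # number of lines      -> 8-bit index field

tags  = [None] * NUM_SETS    # tag stored alongside each line
lines = [None] * NUM_SETS    # cached data for each line

def lookup(addr):
    """Split the address into offset/index/tag and compare the stored tag."""
    index = (addr // LINE_BYTES) % NUM_SETS
    tag   = addr // (LINE_BYTES * NUM_SETS)
    if tags[index] == tag:
        return True, lines[index]    # hit: the data is available right away
    return False, None               # miss: the line must be fetched from main memory
```

In hardware the tag compare is a single wide comparator, which is why a hit/miss decision within one cycle is plausible for a small cache.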

Note that a pipelined processor doesn't necessarily involve a cache and a cache doesn't necessarily require a pipelined processor. This is why you want to simplify your scope.

In summary:

  1. The cache controller can take as long as it wants. In reality, we use a cache for a performance increase, so we want it to be as fast as possible. One clock? Maybe, but that's implementation dependent. It should be faster than a main memory access, otherwise there would be no use for the cache.

What does the CPU do when it stalls? Send a 'bubble' down the pipeline? Again, that's implementation dependent. The simple answer is 'the CPU does no useful work'.

As for the PC (program counter), that doesn't necessarily update every clock. Once again, implementation dependent.

  2. Does the cache controller inform the CPU when the memory request is satisfied? Yes. There clearly needs to be some mechanism for this to occur; how else would the CPU know to continue? If the main memory cycle time were fixed, we could perhaps assume X clock cycles and not need a handshake. Implementation dependent. One possible arrangement is sketched below.
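One way to picture both points at once is a simple request/strobe handshake: the fetch stage starts a request on a miss, the cache controller strobes a "done" signal whenever the refill completes (however many cycles that takes), and until then the CPU holds the PC and pushes NOP bubbles down the pipeline. The Python model below is only an illustrative sketch of that idea, not how any particular CPU does it; the miss pattern and latencies are made up.

```python
# Hypothetical cycle-level model of a fetch stall governed by a handshake.
# The cache controller strobes 'done' for one cycle when the refill completes;
# until then the CPU holds the PC and feeds NOP bubbles down the pipeline.

import random

NOP = 0x00000013                          # bubble encoding (RISC-V addi x0, x0, 0)

class CacheController:
    def __init__(self):
        self.busy_cycles = 0

    def request(self, addr):
        # Miss latency is not known in advance; model it as a random wait.
        self.busy_cycles = random.randint(2, 10)

    def tick(self):
        """Advance one clock; returns True (the 'done' strobe) for exactly one cycle."""
        if self.busy_cycles > 0:
            self.busy_cycles -= 1
            return self.busy_cycles == 0
        return False

def run(cycles=25):
    pc, stalled = 0, False
    ctrl = CacheController()
    for cycle in range(cycles):
        if not stalled:
            if pc % 16 == 12:             # pretend every 4th fetch misses
                ctrl.request(pc)          # start the refill
                stalled = True
                issued = NOP              # first bubble enters the pipeline
            else:
                issued = pc               # stand-in for the fetched instruction
                pc += 4                   # PC write-enable asserted
        else:
            if ctrl.tick():               # 'done' strobe from the cache controller
                stalled = False
                issued = pc               # the held address is finally fetched
                pc += 4
            else:
                issued = NOP              # keep issuing bubbles while waiting
        label = 'NOP' if issued == NOP else hex(issued)
        print(f"cycle {cycle:2}: pc={pc:#06x} issue={label}")

run()
```

In textbook five-stage pipelines the bubble is often realized by zeroing the control-signal fields of the downstream pipeline register rather than injecting a literal NOP encoding; either way the net effect is a cycle in which nothing architecturally visible is written.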
Kartman
  • Thank you for the answer. I apologize for not using the most accurate jargon. But I am afraid that's exactly the kind of answer that I was trying to avoid in the first place by adding a series of probing questions, which some think of as a lack of focus. Obviously there's some mechanism (implementation dependent) behind everything; in the end, our computers work, don't they? I tried to probe for "A" concrete mechanism of implementing a pipeline stall. And I know a cache miss doesn't necessarily need to stall the pipeline. I was just looking for "A" mechanism that could stall the pipeline if needed. – Oliver Young Sep 29 '21 at 23:57
  • I think my issue is that I became too used to great professors being able to explain complex systems/concepts from first principles with enough concrete detail in a very systematic way. Obviously, it's not fair to expect everyone to be at that level, and this platform is not the right place for that kind of discussion. – Oliver Young Sep 29 '21 at 23:57
  • Again, now I realize this is not the right place to ask. Your answer is fair (even though not what I was looking for). [Edited by a moderator.] – Oliver Young Sep 30 '21 at 00:00
  • Oliver, stop for a moment and consider the bigger picture. The purpose of this site is to encourage precise questions that can elicit precise answers. This is not only for the person asking the questions, but for others as reference in the future. – Kartman Sep 30 '21 at 00:17
  • Hi Kartman, that's all fair, and I realize that my question is not the right question for this site. I don't think the policy of allowing other users to close questions shortly after they're raised, without even giving them an opportunity to be answered, is a good policy for a welcoming community. – Oliver Young Sep 30 '21 at 00:23