I'm going to preface this with the fact that most of what I'm finding comes from the 1970s and early 1980s. During this time, sequential process models were far more common than iterative and/or incremental approaches (the Spiral model or the agile methods), and much of this work is built on those sequential models. I don't think that invalidates the relationship, but one of the benefits of iterative/incremental approaches is to release features (an entire vertical slice of an application) quickly and correct problems in them before dependencies accumulate and the complexity of each phase becomes high.
I just pulled out my copy of Software Engineering Economics and found a reference to the data behind this chart in Chapter 4. Boehm cites "Design and Code Inspections to Reduce Errors in Program Development" by M.E. Fagan (IEEE, PDF from UMD), E.B. Daly's "Management of Software Engineering", W.E. Stephenson's "An Analysis of the Resources Used in Safeguard System Software Development" (ACM), and "several TRW projects".
...the relative cost of correcting software errors (or making other software changes) as a function of the phase in which the corrections or changes are made. If a software requirements error is detected and corrected during the plans and requirements phase, its correction is a relatively simple matter of updating the requirements specification. If the same error is not corrected until the maintenance phase, the correction involves a much larger inventory of specifications, code, user and maintenance manuals, and training material.
Further, late corrections involve a much more formal change approval and control process, and a much more extensive activity to revalidate the correction. These factors combine to make the error typically 100 times more expensive to correct in the maintenance phase on large projects than in the requirements phase.
Boehm also looked at two smaller, less formal projects and found an increase in cost, but one far less significant than the 100 times identified for the larger projects. Given the chart, the difference appears to be about 4 times the cost to fix a requirements defect after the system is operational versus in the requirements phase. He attributed this to the smaller inventory of items that comprise the project and to the reduced formality, which made it possible to implement simpler fixes faster.
Based on Boehm in Software Engineering Economics, the table in Code Complete is rather bloated (the low end of the ranges is often too high). The cost to make any change within the same phase is indeed 1. Extrapolating from Figure 4-2 in Software Engineering Economics, a requirements change should cost 1.5-2.5 times as much to make in architecture, 2.5-10 times in coding, 4-20 times in testing, and 4-100 times in maintenance. The multiplier depends on the size and complexity of the project as well as the formality of the process used.
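To make those multipliers a bit more concrete, here is a rough sketch of my own (not something from Boehm; the two-hour baseline is purely hypothetical) that applies the extrapolated ranges to a single requirements defect:

```python
# Illustrative sketch only: apply the multiplier ranges extrapolated above
# (from Figure 4-2 of Software Engineering Economics) to a hypothetical
# baseline cost for fixing a requirements defect during requirements.

BASELINE_HOURS = 2.0  # hypothetical cost of the fix in the requirements phase

# (low, high) cost multipliers relative to fixing in the requirements phase
MULTIPLIERS = {
    "requirements": (1.0, 1.0),
    "architecture": (1.5, 2.5),
    "coding":       (2.5, 10.0),
    "testing":      (4.0, 20.0),
    "maintenance":  (4.0, 100.0),
}

for phase, (low, high) in MULTIPLIERS.items():
    print(f"{phase:>12}: {BASELINE_HOURS * low:5.1f} to {BASELINE_HOURS * high:6.1f} hours")
```

The only point is that the uncertainty band widens the longer the defect survives; the actual multiplier for any given project depends on its size, complexity, and process formality, as noted above.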
Appendix E of Barry Boehm and Richard Turner's Balancing Agility and Discipline contains a small section on the empirical findings regarding the cost of change.
The opening paragraphs cite and quote Kent Beck's Extreme Programming Explained. Beck argues that if the cost of change rose slowly over time, decisions could be made as late as possible and only what was needed would be implemented. This is known as the "flat curve", and it is what drives Extreme Programming. However, what the earlier literature found was a "steep curve", with small systems (<5 KSLOC) showing a change-cost ratio of 5:1 and large systems a ratio of 100:1.
The section cites the University of Maryland's Center for Empirically Based Software Engineering (sponsored by the National Science Foundation). It performed a search of the available literature and found that the results tended to confirm a 100:1 ratio, with some results indicating a range of 70:1 to 125:1. Unfortunately, these were typically "big design up front" projects managed in a sequential manner.
The section also presents samples from "small commercial Java projects" run using Extreme Programming. For each user story, the effort spent on error fixing, new design, and refactoring was tracked. The data shows that as the system is developed (as more user stories are implemented), the average effort increases at a non-trivial rate: effort spent on refactoring grows by about 5% and effort spent on error fixing grows by about 4%.
What I'm learning is that system complexity plays a large role in the amount of effort needed. By building vertical slices through the system, you slow the growth of the curve by adding complexity gradually instead of in large piles. Rather than dealing with the full complexity of the requirements, followed by an extremely complex architecture, followed by an extremely complex implementation, and so on, you start very simply and add on.
What impact does this have on the cost to fix? In the end, perhaps not much. However, it does have the advantage of allowing more control over complexity (through the management of technical debt). In addition, the frequent deliverables often associated with agile methods mean that the project might end sooner - rather than delivering "the system", pieces are delivered until the business needs are satisfied or have changed so drastically that a new system (and therefore a new project) is needed.
Stephen Kan's Metrics and Models in Software Quality Engineering has a section in Chapter 6 about the cost effectiveness of phase defect removal.
He starts off by citing Fagan's 1976 paper (also cited in Software Engineering Economics) to state that rework done in high-level design (system architecture), low-level design (detailed design), and implementation can be between 10 and 100 times less expensive than work done during component- and system-level testing.
He also cites two publications by Freedman and Weinberg, from 1982 and 1984, that discuss large systems: "Handbook of Walkthroughs, Inspections, and Technical Reviews" and "Reviews, Walkthroughs, and Inspections". They find that applying reviews early in the development cycle can reduce the number of errors that reach the testing phases by a factor of 10, and that this reduction in defects lowers testing costs by 50% to 80%. I would have to read the studies in more detail, but it appears that the cost also includes finding and fixing the defects.
A 1983 study by Remus, "Integrated Software Validation in the View of Inspections/Review", examined the cost of removing defects in different phases, specifically design/code inspections, testing, and maintenance, using data from IBM's Santa Teresa Laboratory in California. The cited results indicate a cost ratio of 1:20:82. That is, a defect found in design or code inspections has a cost-to-change of 1. If the same defect escapes into testing, it will cost 20 times more, and if it escapes all the way to a user, it will multiply the cost-to-correct by up to 82. Kan, using sample data from IBM's Rochester, Minnesota facility, found the defect removal cost for the AS/400 project to be similar, at 1:13:92. However, he points out that the increase in cost might be due to the increased difficulty of finding a defect.
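To see why those ratios matter in aggregate, here is a small illustrative calculation of my own (the defect counts and detection profiles are entirely made up; only the 1:20:82 ratio comes from the Remus study). It compares the total rework cost for the same 100 defects under an inspection-heavy process and a test-heavy one:

```python
# Illustrative only: total rework cost for 100 hypothetical defects under the
# 1:20:82 cost ratio reported by Remus, for two made-up detection profiles.

COST = {"inspection": 1, "testing": 20, "field": 82}  # relative cost per defect

# Hypothetical counts of defects removed in each phase (100 defects total).
PROFILES = {
    "inspection-heavy": {"inspection": 70, "testing": 25, "field": 5},
    "test-heavy":       {"inspection": 20, "testing": 70, "field": 10},
}

for name, removed in PROFILES.items():
    total = sum(COST[phase] * count for phase, count in removed.items())
    print(f"{name}: {total} cost units")

# inspection-heavy: 70*1 + 25*20 + 5*82  =  980 cost units
# test-heavy:       20*1 + 70*20 + 10*82 = 2240 cost units
```

The absolute numbers mean nothing; the point is that shifting defect removal earlier drives the total down, which is exactly the cost-effectiveness argument Kan is making.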
Gilb's 1993 ("Software Inspection") and 1999 ("Optimizing Software Engineering Specification and Quality Control Processes") publications on software inspection are mentioned to corroborate the other studies.
Additional information might be found in Construx's page on Defect Cost Increase, which provides a number of references on the increase in defect-repair cost. It should be noted that Steve McConnell, author of Code Complete, founded and works for Construx.
I recently listened to a talk, Real Software Engineering, given by Glenn Vanderburg at Lone Star Ruby Conference in 2010. He has given the same talk at Scottish Ruby Conference and Erubycon in 2011, QCon San Francisco in 2012, and O'Reilly Software Architecture Conference in 2015. I've only listened to the Lone Star Ruby Conference version, but the talk has evolved over time as his ideas have been refined.
Vanderburg suggests that all of this historical data is actually showing the cost to fix defects as time progresses, not necessarily as a project moves through phases. Many of the projects examined in the previously mentioned papers and books were sequential "waterfall" projects, where phase and time moved together. However, a similar pattern would emerge in iterative and incremental projects - if a defect is injected in one iteration, it is relatively inexpensive to fix in that iteration. As the iterations progress, though, lots of things happen: the software becomes more complex, people forget some of the minor details about working in particular modules or parts of the code, and requirements change. All of these increase the cost of fixing the defect.
I think this is probably closer to reality. In a waterfall project, the cost increases because of the number of artifacts that need to be corrected due to an upstream problem. In iterative and incremental projects, the cost increases because the software itself has grown more complex.