78

When Murray Gell-Mann was asked how Richard Feynman managed to solve so many hard problems, Gell-Mann responded that Feynman had an algorithm:

  1. Write down the problem.
  2. Think real hard.
  3. Write down the solution.

Gell-Mann was trying to explain that Feynman was a different kind of problem solver and there were no insights to be gained from studying his methods. I kinda feel the same way about managing complexity in medium/large software projects. The people that are good are just inherently good at it and somehow manage to layer and stack various abstractions to make the whole thing manageable without introducing any extraneous cruft.

So is the Feynman algorithm the only way to manage accidental complexity or are there actual methods that software engineers can consistently apply to tame accidental complexity?

  • 34
    I wouldn't be surprised if the act of writing down the problem (and fleshing it out so you could adequately explain it to someone else) helped you identify a solution. – Rory Hunter Feb 17 '14 at 12:02
  • @RoryHunter - Agreed. And part of writing down the problem and sharing with someone indicates you admit you don't have a solution yet. – JeffO Feb 17 '14 at 13:28
  • 39
    @RoryHunter: This. Almost weekly, I come across a problem that I can't solve, write an email to someone to explain it. Then realise what it is that I'm not considering, solve the problem and delete the email. I've also written about a dozen questions on this site that never got sent. – pdr Feb 17 '14 at 13:43
  • The most fundamental thing I have learned is do not ask a developer to handle a problem that is just within their reach. While those problems are exciting, they also involve the developer stretching to unfamiliar places, leading to much "on the fly" growth. Some tools, such as spiral development, are good for grounding development teams in a small tractable problem before growing it into a final solution – Cort Ammon Feb 18 '14 at 01:31
  • 2
    @CortAmmon Not to be mean but that sounds like a pretty dumb insight. 99% of what developers know was learned at some point through your so-called 'on-the-fly growth'. It takes a good problem solver to make a good programmer. Solving problems is something we're inherently drawn to. If your developers aren't growing they're probably doing a lot of boring repetitive work. The type of work that will make any reasonably talented developer unhappy and depressed. And... 'Spiral Development' is nothing but a re-hashing of the basic concept of iterative development with waterfall milestones. – Evan Plaice Feb 18 '14 at 19:20
  • (cont) BTW, don't take the 'dumb' remark as an ad-hominem attack. By 'dumb' I mean your approach is naive. Jumping onto the latest methodology like 'Spiral Development' is equivalent to skipping steps 1 & 2. The spinning 3D animations of spirals are good if you're looking to give the 'business types' a 'warm and fuzzy' and a general idea of how to set up a development cycle. If you want to see some truly inspiring examples, just take a look at how Open Source projects manage release cycles. – Evan Plaice Feb 18 '14 at 19:48
  • JeffO pdr that's exactly the point of [rubber duck debugging](http://en.wikipedia.org/wiki/Rubber_duck_debugging) – Carlos Campderrós Feb 19 '14 at 08:32
  • @pdr: As a dev lead, I am considering a requirement for interns and rookies that peer assistance requests are put in writing for precisely that reason. (It sounds a bit draconian, so I probably won't enforce it, but the lesson is a powerful one.) – kmote Feb 21 '14 at 17:17
  • @Evan Plaice: The question was on how to manage accidental complexity, not how to develop good programmers. Yes, giving developers challenging tasks is essential to maintaining a healthy work environment. However, too much challenge can be exceedingly dangerous on the accidental-complexity front. In my experience, if I reach for a task that is at 50% of my capacity, I get bored quickly. If I reach for something that is at 90% of my capacity, I stay interested. However, if I approach a problem at 99% of my capacity, that's when accidental complexity creeps in. – Cort Ammon Feb 27 '14 at 02:12
  • At 99% of my capacity, that's where I begin to struggle with determining what parts of my work should be refactored for readability, because I can barely identify what parts are essential. I suppose a workflow where your developers spend a fraction of their time straining their limits, and the rest of the time recovering and refactoring, that could work. I see it a lot like professional sports. If a football player played at game speed during practice, they wouldn't last a month before their body gave out. If you want to manage injuries, you make sure they don't spend much time at max. – Cort Ammon Feb 27 '14 at 02:15
  • As a concrete example from my life: I have developed two pieces of code which were extreme stretches for me technically. I was tremendously interested in them, and I learned quite a bit from each one. However, the resulting code has far too much accidental complexity. As a result, I am the only developer capable of maintaining them -- if the original author can barely track the code, how should a new developer learn it? I consider the two projects a failure due to accidental complexity... but they're also my two favorite failures ever. – Cort Ammon Feb 27 '14 at 02:17
  • @CortAmmon Bad analogy. Kinesthetic memory responds in the way it is trained. Practicing at game speed is exactly what you should do, just in shorter intervals. Anyway, this isn't a fitness site. The point is, accidental complexity is the result of the inability to identify/manage complexity. When faced with new problems and a limited understanding of the possible solutions a developer will try to 'code around' the problem. It takes the experience of facing difficult problems to be able to recognize and implement new approaches. – Evan Plaice Feb 27 '14 at 02:59
  • I hate to sound cliche but what I'm saying is embodied by the quote, "10 years of experience is not the same as 1 year of experience 10 times." How much time allowance programmers should be given to seek out new challenges depends on the schedule and organization. Google seems to think 20% is a good investment. Not only has 20% time led to new business opportunities, but it has contributed to the long-term growth and happiness of their developers' talent. – Evan Plaice Feb 27 '14 at 03:05
  • @pdr Please do post your questions on StackExchange if they're good questions, even if you already solved it (you can even answer your own question). You never know how extremely valuable that small piece of information could be to somebody else :) – Chris Cirefice Aug 18 '16 at 17:34
  • @pdr What you describe is rubberducking - https://en.wikipedia.org/wiki/Rubber_duck_debugging. – user625488 Feb 02 '21 at 14:45

7 Answers

106

When you see a good move, look for a better one.
—Emanuel Lasker, 27-year world chess champion

In my experience, the biggest driver of accidental complexity is programmers sticking with the first draft, just because it happens to work. This is something we can learn from our English composition classes. They build in time to go through several drafts in their assignments, incorporating teacher feedback. Programming classes, for some reason, don't.

There are books full of concrete and objective ways to recognize, articulate, and fix suboptimal code: Clean Code, Working Effectively with Legacy Code, and many others. Many programmers are familiar with these techniques, but don't always take the time to apply them. They are perfectly capable of reducing accidental complexity, they just haven't made it a habit to try.

Part of the problem is we don't often see the intermediate complexity of other people's code, unless it has gone through peer review at an early stage. Clean code looks like it was easy to write, when in fact it usually involves several drafts. You write the best way that comes into your head at first, notice unnecessary complexities it introduces, then "look for a better move" and refactor to remove those complexities. Then you keep on "looking for a better move" until you are unable to find one.

However, you don't put the code out for review until after all that churn, so externally it looks like it may as well have been a Feynman-like process. You tend to think you can't do it all in one chunk like that, so you don't bother trying, but the truth is the author of that beautifully simple code you just read usually can't write it all in one chunk either. Or if they can, it's only because they have written similar code many times before, and can now see the pattern without the intermediate stages. Either way, you can't avoid the drafts.
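To make the draft process concrete, here is a toy sketch (the word-counting problem and all names are invented for illustration, not taken from any particular codebase): a first draft that works, followed by the redraft after "looking for a better move":

```python
from collections import Counter

# First draft: it works, but it carries accidental complexity,
# manual bookkeeping that the problem itself never asked for.
def top_words_draft(text, n):
    counts = {}
    for word in text.lower().split():
        if word in counts:
            counts[word] = counts[word] + 1
        else:
            counts[word] = 1
    pairs = list(counts.items())
    pairs.sort(key=lambda p: p[1], reverse=True)
    return pairs[:n]

# After a couple of redrafts: the same behaviour, with the
# incidental machinery folded into the standard library.
def top_words(text, n):
    return Counter(text.lower().split()).most_common(n)
```

Both versions return the same answer; the second simply has fewer places for bugs to hide, which is the whole point of the extra drafts.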

Karl Bielefeldt
  • 146,727
  • 38
  • 279
  • 479
  • 1
    Ahh, but you seem to have been able to write a clean answer to this question on your first try. (And a very cogent one, at that.) Maybe you're just Feynman in disguise. – kmote Feb 21 '14 at 17:22
  • 2
    tl;dr; refactor, be unafraid. – ocodo Oct 26 '14 at 13:37
  • 1
    +1 for embracing imperfection. Man, that's something that everyone *talks about*, but few people do. I try to rewire my brain to think of myself like a machine learning algorithm, where errors are actually **good** and teach about how to improve. Lovely way of phrasing it with your "drafts" metaphor. – Juan Carlos Coto Mar 06 '15 at 21:44
47

"Software architecture skill cannot be taught" is a widespread fallacy.

It is easy to understand why many people believe it (those who are good at it want to believe they're mystically special, and those who aren't want to believe that it's not their fault that they aren't). It is nevertheless wrong; the skill is just somewhat more practice-intensive than other software skills (e.g. understanding loops, dealing with pointers, etc.)

I firmly believe that constructing large systems is susceptible to repeated practice and learning from experience in the same way that becoming a great musician or public speaker is: a minimum amount of talent is a precondition, but it's not a depressingly huge minimum that is out of reach of most practitioners.

Dealing with complexity is a skill you acquire largely by trying and failing a few times. It's just that the many general guidelines that the community has discovered for programming in the large (use layers, fight duplication wherever it rears its head, adhere religiously to 0/1/infinity...) are not as obviously correct and necessary to a beginner until they actually do program something that is large. Until you have actually been bitten by duplication that caused problems only months later, you simply cannot 'get' the importance of such principles.
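As a toy sketch of the duplication guideline mentioned above (the discount scenario and all names here are invented for illustration): the same business rule encoded twice is exactly the kind of thing that bites months later, when one copy is updated and the other is not.

```python
# Duplicated rule: the 10% member discount is encoded twice.
# When the rate changes, it is easy to update one copy and
# miss the other.
def invoice_total_dup(price, is_member):
    return price * 0.9 if is_member else price

def receipt_line_dup(price, is_member):
    rate = 0.9 if is_member else 1.0   # second copy of the same rule
    return f"TOTAL: {price * rate:.2f}"

# One source of truth: both call sites now share the rule.
def member_price(price, is_member):
    return price * 0.9 if is_member else price

def receipt_line(price, is_member):
    return f"TOTAL: {member_price(price, is_member):.2f}"
```

The fix looks trivially obvious here; in a large codebase the two copies live in different modules, written months apart, and only experience makes you go looking for them.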

Kilian Foth
  • 107,706
  • 45
  • 295
  • 310
  • 11
    I really like your last sentence. I'm positive I learned so much at my first job because I was there long enough for my mistakes to catch up with me. It's a valuable experience. – MetaFight Feb 17 '14 at 12:17
  • 26
    "Good judgment comes from experience. Experience comes from bad judgment." --Mulla Nasrudin – Jonas Kölker Feb 17 '14 at 12:35
  • 9
    I don't understand how you can assert that the inability to teach software architecture is a fallacy, but go on to say that repeated practice, learning from experience, and making mistakes (coupled with some innate talent) is the only way to learn it. My definition of something that can be taught is something that you don't need to practice intensively to get, but that you can learn by watching, listening, and reading. I agree with everything in this answer, except the first line, since it is apparently contradicting the rest. – Thomas Owens Feb 17 '14 at 13:46
  • 4
    The point is that a lot of people claim that many people *absolutely, under any circumstances* cannot learn to be good architects (whether in a lecture room or in the industry), because they "haven't got what it takes". That's what I consider common but false. – Kilian Foth Feb 17 '14 at 13:48
  • 1
    I spent the whole of last week trying to teach some pretty bog standard design pattern and SoC stuff to an engineer with more years than myself and was constantly surprised at how difficult it was to hammer in the reasons _why_ and the focus on simple abstraction rather than smart monoliths. I concluded that I was a bad tutor. – Gusdor Feb 17 '14 at 15:24
  • 5
    @Thomas:Getting back to the public speaking analogy. No matter how many books you read, how many hours you spend studying the topic or how many teachers try to teach you to be good at public speaking, you can't get good at it unless you do it through repeated practice, learning from experience and making mistakes. You'll never convince me that someone can learn the skill simply by watching, listening and reading. Same goes for most any skill, including software architecture. I actually can't think of any skill of any substance that you can learn simply by watching, listening and reading. – Dunk Feb 17 '14 at 16:00
  • 2
    Although, I disagree that software architecture "in the large" can be taught to most sw developers. As most developers can't hold the big picture in their head, and even if they can, they can't use that picture to make the necessary connections between modules to comprehend the impacts of decisions. While Software Architecture "in the large" certainly isn't an elite skill, it also certainly isn't something that most developers will ever get good at. There is an inborn talent that some people have and others just don't have whatever it is that makes one good at it. – Dunk Feb 17 '14 at 16:09
  • 3
    "Cannot be taught" and "cannot be learned" are very different things, hence the objections... Apparently the point is that it can't really be taught, but can most certainly be learned? – alexis Feb 17 '14 at 20:35
22

Pragmatic Thinking and Learning by Andy Hunt addresses this issue. It refers to the Dreyfus model, according to which there are five stages of proficiency in various skills. Novices (stage 1) need precise instructions to be able to do something correctly. Experts (stage 5), on the contrary, can apply general patterns to a given problem. Citing the book,

It’s often difficult for experts to explain their actions to a fine level of detail; many of their responses are so well practiced that they become preconscious actions. Their vast experience is mined by nonverbal, preconscious areas of the brain, which makes it hard for us to observe and hard for them to articulate.

When experts do their thing, it appears almost magical to the rest of us—strange incantations, insight that seems to appear out of nowhere, and a seemingly uncanny ability to know the right answer when the rest of us aren’t even all that sure about the question. It’s not magic, of course, but the way that experts perceive the world, how they problem solve, the mental models they use, and so on, are all markedly different from nonexperts.

This general rule of seeing (and as a result avoiding) different issues can be applied specifically to the issue of accidental complexity. Having a given set of rules isn't enough to avoid this problem. There will always be a situation which isn't covered by those rules. We need to gain experience to be able to foresee problems or identify solutions. Experience is something that cannot be taught; it can only be gained by constantly trying, failing or succeeding, and learning from mistakes.

This question from Workplace is relevant and IMHO would be interesting to read in this context.

superM
  • 7,363
  • 4
  • 29
  • 38
  • 4
    What a wonderful description of how experts think. It really isn't magic, it's just hard to articulate all the discrete and logical steps. – MetaFight Feb 17 '14 at 12:14
  • +1 Very much like the [Four Stages Of Competence model](http://en.wikipedia.org/wiki/Four_stages_of_competence) – Robbie Dee Feb 17 '14 at 13:19
  • That is like saying that it isn't magic how the "elite" athletes are able to do what they do, it is just a matter of them being able to naturally perform the discrete and logical steps necessary to perform at the highest levels. So if only those athletes could articulate those discrete and logical steps to us, then we could all be elite athletes. The concept that we could all be elite athletes, no matter what level of knowledge we obtain, is just as ridiculous as we could all be experts at whatever skill we are trying to learn, regardless of aptitude in that skill area. – Dunk Feb 17 '14 at 16:14
  • 1
    @Dunk, Magic would be when those "elite" athletes could perform the same without any trainings at all. The main idea is that there is no silver bullet. No matter how talented one is, experience cannot be gained just by studying some "discrete and logical steps". By the way, according to the same book, only 1-5% of people are experts. – superM Feb 17 '14 at 17:01
  • @Super:I would question any book/study that made such a ridiculous claim as only 1-5% of people are experts. Talk about pulling a number out of their #&$#. Experts at what? I bet there is a much higher percentage of people that are experts at breathing, walking, eating. Who decides what is expert level? A claim like the 1-5% discredits any further claims and analysis by such authors. – Dunk Feb 19 '14 at 15:10
  • @Super:Regarding the "elite"/magic issue. I reread the post and might have read more into it than it says because I am tainted by similar threads in the past which claim that becoming an expert is merely a matter of experience (ie. the 10,000 hour rule). I was simply claiming that no matter how much training, coaching and practicing a person does, if they don't have the inborn ability they could never become an "elite" athlete. Even with 100,000 hours of experience, let alone 10,000 hours. Same goes for any other difficult skills like those requiring high levels of brain functioning abilities. – Dunk Feb 19 '14 at 15:21
4

You don’t spell it out, but “accidental complexity” is defined as complexity that is not inherent to the problem, as opposed to “essential” complexity. The techniques required for taming it will depend on where you start from. The following refers mostly to systems that have already acquired unnecessary complexity.

I have experience in a number of large multi-year projects where the “accidental” component significantly outweighed the “essential” aspect, and also in some where it did not.

Actually, the Feynman algorithm applies to some extent, but that does not mean “think real hard” is pure magic that cannot be codified.

I find there are two approaches that need to be taken. Take them both – they are not alternatives. One is to address it piecemeal and the other is to do a major rework. So certainly, “write down the problem”. This might take the form of an audit of the system – the code modules, their state (smell, level of automated testing, how many staff claim to understand it), the overall architecture (there is one, even if it “has issues”), state of requirements, etc. etc.

It’s the nature of “accidental” complexity that there is no single problem that just needs to be addressed. So you need to triage. Where does it hurt – in terms of the ability to maintain the system and progress its development? Maybe some code is really smelly, but it is not top priority and the fix can wait. On the other hand, there may be some code that will rapidly repay time spent refactoring.

Define a plan for what a better architecture will be and try to make sure new work conforms to that plan – this is the incremental approach.

Also, articulate the cost of the problems and use that to build a business case to justify a refactor. The key thing here is that a well architected system may be much more robust and testable resulting in a much shorter time (cost and schedule) to implement change – this has real value.

A major rework does come in the “think real hard” category – you need to get it right. This is where having a "Feynman" (well, a small fraction of one would be fine) does pay off hugely. A major rework that does not result in a better architecture can be a disaster. Full system rewrites are notorious for this.

Implicit in any approach is knowing how to distinguish “accidental” from “essential” – which is to say you need to have a great architect (or team of architects) who really understands the system and its purpose.

Having said all that, the key thing for me is automated testing. If you have enough of it, your system is under control. If you don't . . .
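To make the "automated testing as control" point concrete, one common move when tackling an accidentally complex system is a characterization test: pin down what the tangled code does today, so each refactoring step can be checked against it. The routine and the values below are entirely hypothetical; the shape of the test is what matters.

```python
# Imagine this is the tangled routine under rework.
def legacy_shipping_cost(weight_kg, express):
    cost = 5.0 + weight_kg * 1.2
    if express:
        cost = cost * 2 + 3
    return round(cost, 2)

def test_characterize_shipping():
    # Outputs captured from the current implementation, warts and
    # all. They define "no behaviour change" during the refactor.
    assert legacy_shipping_cost(1.0, False) == 6.2
    assert legacy_shipping_cost(1.0, True) == 15.4
    assert legacy_shipping_cost(10.0, False) == 17.0

test_characterize_shipping()
```

With a net like this in place, the major rework stops being a leap of faith: any step that changes observable behaviour fails immediately.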

Keith
  • 473
  • 2
  • 5
  • Could you explain how automated testing serves to differentiate accidental and essential complexity? – rpggio Feb 19 '14 at 01:55
  • 1
    @RyanSmith. In short, No. In fact, I don't think there is any particular way (other than "think hard") to *distinguish* these. But the question is about "managing" it. If you view requirements, design, test cases as part of the system architecture, then lack of automated tests is in itself accidental complexity, so adding automated testing where it is lacking does help address it and make what there is more *manageable*. But most definitely it does not resolve all of it. – Keith Feb 19 '14 at 02:03
3

"Everything should be made as simple as possible, but no simpler."
— attributed to Albert Einstein

Let me sketch my personal algorithm for dealing with accidental complexity.

  1. Write a user story or use case. Review with the product owner.
  2. Write an integration test that fails because the feature is not there. Review with QA, or the lead engineer, if there is such a thing in your team.
  3. Write unit tests for some classes that could pass the integration test.
  4. Write the minimal implementation for those classes that passes the unit tests.
  5. Review unit tests and implementation with a fellow developer. Go to Step 3.

The whole design magic is in Step 3: how do you set up those classes? This turns out to be the same question as: how do you imagine that you have a solution to your problem before you actually have one?

Remarkably, just imagining you have the solution seems to be one of the main recommendations of people who write on problem solving (it is called "wishful thinking" by Abelson and Sussman in Structure and Interpretation of Computer Programs, and "working backward" in Polya's How to Solve It).

On the other hand, not everyone has the same "taste for imagined solutions": there are solutions that only you find elegant, and there are others more understandable by a wider audience. That is why you need to peer-review your code with fellow developers: not so much to tune performance, but to agree on understood solutions. Usually this leads to a re-design and, after some iterations, to a much better code.

If you stick with writing minimal implementations to pass your tests, and write tests that are understood by many people, you should end with a code base where only irreducible complexity remains.
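A minimal sketch of steps 2 to 4 with a toy feature (the cart and its method names are invented here, purely to show the shape of the loop):

```python
# Step 3, "wishful thinking": the test below is written as if a
# Cart class already existed with the interface we wish we had.
class Cart:
    def __init__(self):
        self._items = []

    def add(self, name, price):
        self._items.append((name, price))

    def total(self):
        # Step 4: the minimal implementation that makes the test pass.
        return sum(price for _, price in self._items)

def test_cart_totals_items():
    # Step 2: this test fails (with a NameError) until Cart exists.
    cart = Cart()
    cart.add("book", 12.50)
    cart.add("pen", 1.50)
    assert cart.total() == 14.0

test_cart_totals_items()
```

The design decision smuggled in at Step 3 is the interface the test imagines; that is exactly the part worth reviewing with a fellow developer in Step 5.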

logc
  • 2,190
  • 15
  • 19
2

Accidental Complexity

The original question (paraphrased) was:

How do architects manage accidental complexity in software projects?

Accidental complexity arises when those with direction over a project choose to bolt on one-off technologies that the project's original architects never intended to bring in. For this reason it is important to record the reasoning behind the choice of strategy.

Accidental complexity can be staved off by leadership that sticks to its original strategy until a deliberate departure from that strategy becomes clearly necessary.

Avoiding Unnecessary Complexity

Based on the body of the question, I would rephrase it like this:

How do architects manage complexity in software projects?

This rephrasing is more apropos to the body of the question, which brought in the Feynman algorithm and thereby proposed that the best architects, when faced with a problem, have a gestalt from which they skilfully construct a solution, and that the rest of us cannot hope to learn this. Having such a gestalt of understanding depends on the intelligence of the subject, and on their willingness to learn the features of the architectural options that could be within their scope.

The process of planning for the project would use the learning of the organization to make a list of the requirements of the project, and then attempt to construct a list of all possible options, and then reconcile the options with the requirements. The expert's gestalt allows him to do this quickly, and perhaps with little evident work, making it appear to come easily to him.

I submit to you that it comes to him because of his preparation. To have the expert's gestalt requires familiarity with all of your options, and the foresight to provide a straightforward solution that allows for the foreseen future needs that it is determined the project should provide for, as well as the flexibility to adapt to the changing needs of the project. Feynman's preparation was that he had a deep understanding of various approaches in both theoretical and applied mathematics and physics. He was innately curious, and bright enough to make sense of the things he discovered about the natural world around him.

The expert technology architect will have a similar curiosity, drawing on a deep understanding of fundamentals as well as a broad exposure to a great diversity of technologies. He (or she) will have the wisdom to draw upon the strategies that have been successful across domains (such as Principles of Unix Programming) and those that apply to specific domains (such as design patterns and style guides). He may not be intimately knowledgeable of every resource, but he will know where to find the resource.

Building the Solution

This level of knowledge, understanding, and wisdom can be drawn from experience and education, but it requires intelligence and mental activity to put together a gestalt strategic solution that avoids accidental and unnecessary complexity. It requires the expert to put these fundamentals together; these are the knowledge workers that Drucker foresaw when he first coined the term.

Back to the specific final questions:

Specific methods to tame accidental complexity can be found in the following sorts of sources.

Following the Principles of Unix Programming will have you creating simple modular programs that work well and are robust with common interfaces. Following Design Patterns will help you construct complex algorithms that are no more complex than necessary. Following Style Guides will ensure your code is readable, maintainable, and optimal for the language in which your code is written. Experts will have internalized many of the principles found in these resources, and will be able to put them together in a cohesive seamless fashion.
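As one small illustration of a pattern keeping an algorithm no more complex than necessary, here is a Strategy-style lookup replacing an ever-growing if/elif chain. The rendering domain and all names are invented for this sketch:

```python
import json

# Each output format is a small interchangeable strategy behind a
# common interface, rather than one more branch in a conditional.
def render_json(record):
    return json.dumps(record, sort_keys=True)

def render_csv(record):
    return ",".join(str(record[key]) for key in sorted(record))

RENDERERS = {
    "json": render_json,
    "csv": render_csv,
}

def render(record, fmt):
    # Supporting a new format means adding one entry to the table,
    # not editing (and re-testing) a central conditional.
    return RENDERERS[fmt](record)
```

The complexity does not vanish, but it is now localized: each strategy can be read, tested, and replaced on its own.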

Aaron Hall
  • 5,895
  • 4
  • 25
  • 47
  • What do you mean by "gestalt"? I've found that it's much like "paradigm" - commonly misused, or used to give something an air of academia. –  Feb 18 '14 at 21:43
  • @JonofAllTrades From Wikipedia: [*Die Gestalt* is a German word for form or shape. It is used in English to refer to a concept of 'wholeness'.](http://en.wikipedia.org/wiki/Gestalt) I use it here to refer to the expert's understanding of the whole picture, how [the human eye sees objects in their entirety](http://en.wikipedia.org/wiki/Gestalt_psychology) – Aaron Hall Feb 18 '14 at 22:16
0

This may have been a difficult question some years ago, but IMO it is no longer difficult to eliminate accidental complexity.

As Kent Beck said about himself, at some point: "I'm not a great programmer; I'm just a good programmer with great habits."

Two things are worth highlighting, IMO: he considers himself a programmer, not an architect, and his focus is on habits, not knowledge.

Feynman's way of solving hard problems is the only way to do it. The description isn't necessarily very easy to understand, so I'll dissect it. Feynman's head was not just full of knowledge, it was also full of the skill to apply that knowledge. When you have both the knowledge and the skills to use it, solving a hard problem is neither hard nor easy. It's the only possible outcome.

There's a completely non-magical way of writing clean code, that does not contain accidental complexity, and it's mostly similar to what Feynman did: acquire all required knowledge, train to get used to putting it to work, rather than just having it stashed away in some corner of your brain, then write clean code.

Now, many programmers aren't even aware of all the knowledge required to write clean code. Younger programmers tend to discard knowledge about algorithms and data structures, and most older programmers tend to forget it. Or big O notation and complexity analysis. Older programmers tend to dismiss patterns or code smells, or don't even know that they exist. Most programmers of any generation, even if they know about patterns, never remember the exact when-to-use and drivers parts.

Few programmers of any generation constantly assess their code against the SOLID principles. Many programmers mix all possible levels of abstraction all over the place. I'm not aware of one fellow programmer, for the time being, who constantly assesses his code against the smells described by Fowler in his refactoring book. Although some projects use a metrics tool, the most used metric is complexity, of one sort or another, while two other metrics – coupling and cohesion – are to a large extent ignored, even though they are very important for clean code. Another aspect almost everybody ignores is cognitive load.

Few programmers treat unit tests as documentation, and even fewer are aware that unit tests that are difficult to write or to name are yet another code smell, one that usually indicates bad factoring. A tiny minority is aware of domain-driven design's mantra to keep the code model and the business domain model as close to one another as possible, since discrepancies are bound to create problems down the road. All of these need to be considered, all the time, if you want your code clean. And many more that I can't remember right now.
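One tiny illustration of the mixed-abstraction-levels problem mentioned above, with an invented reporting example (the functions and fields are hypothetical, chosen only to show the before/after shape):

```python
# Mixed abstraction levels: one function doing business arithmetic
# and string plumbing at the same time.
def report_mixed(orders):
    lines = []
    for o in orders:
        lines.append(o["id"] + ": " + ("%.2f" % (o["qty"] * o["unit_price"])))
    return "\n".join(lines)

# One level per function: each piece can now be named, tested,
# and read on its own.
def order_total(order):
    return order["qty"] * order["unit_price"]

def format_line(order):
    return f'{order["id"]}: {order_total(order):.2f}'

def report(orders):
    return "\n".join(format_line(o) for o in orders)
```

The behaviour is identical; what changed is that each function now speaks at a single level, which is the kind of cleanliness check the paragraph above argues must become a constant habit.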

You want to write clean code? There's no magic required. Just go learn all that's required, then use it to assess your code's cleanliness, and refactor until you're happy. And keep learning - software is still a young field, and new insights and knowledge are acquired at a fast pace.

user625488
  • 151
  • 3