42

I'm a recovering perfectionist.

According to my colleagues, I am also a good software engineer, but one piece of feedback I have often received is that I tend to dive too deep too soon.

Suppose I start working on a new feature that requires going into another team's code base - I often end up trying to understand in great detail how everything works together, down to the flow of data and the architecture of the system, sometimes even doing a tutorial on the language their repository is written in.

Perhaps a driver of this behaviour is a fear of coming across as "stupid/unprepared" (perfectionism).

Perhaps some of those times I would have benefited more if I had just reached out to an expert and received a high-level summary and waited to go into details at a later point during the implementation.

But at other times I noticed that by going deep, I uncovered risks that we had not considered before.

How do you decide how much depth of knowledge is enough?

larrydalmeida
  • "How do you decide how much depth of knowledge is enough?" Doing as little work as is necessary to complete my job. https://wiki.up4distribution.ch/en/wiki/Economic_Minimum_Principle#The_economic_minimum_principle – Thomas Junk Nov 26 '21 at 15:27
  • You mention perfectionism but a lot of what you've described could also be called procrastination. – Robbie Dee Nov 26 '21 at 19:23
  • @RobbieDee: There's not a whole lot of difference between the two. – Robert Harvey Nov 26 '21 at 19:32
  • @RobertHarvey On the face of it, perhaps, but the desired outcomes are actually quite different. – Robbie Dee Nov 26 '21 at 19:41
  • Do you work with a member of the other team to determine how the feature will be implemented and what the risks are? – DaveG Nov 26 '21 at 20:02
  • There's a difference between (1) going deep in order to try and understand a real-world problem domain, check assumptions and uncover unforeseen risks, (2) analysis paralysis, and (3) overengineering (planning too far ahead before you know enough). (1) is not a problem unless it leads to (2), and (3) is basically planning without doing (1). I can't tell from your question, but consider that it could be that you're experiencing a bit of (2), but it could also be that your teammates are not doing (1) at all. Or maybe it's a bit of both. – Filip Milovanović Nov 27 '21 at 08:07
  • A lot of this probably comes with experience and reflection. If you notice that you've learnt a lot of things you don't need (or if preparation just took particularly long), then scale (that part of) the preparation down next time. If you notice that you're hopelessly underprepared, then scale it up next time. Time is money, so the question is not "how much detail should I go into when preparing", it's "what's the *least* amount of detail I need to go into to (a) efficiently get this done and (b) do it sufficiently well". – NotThatGuy Nov 27 '21 at 09:28
  • Why isn't this on *[The Workplace](https://workplace.stackexchange.com/tour)*? E.g., [tag *software development*](https://workplace.stackexchange.com/questions/tagged/software-development): *"Questions regarding the workplace interactions and considerations that are involved in the process of designing, implementing, testing, deploying, etc., a software product."* – Peter Mortensen Nov 27 '21 at 15:47
  • @PeterMortensen There are overlaps between various parts of the StackExchange network. This is a good thing rather than a bad thing. A given question might be on-topic at multiple SE sites. This question is very much on-topic here. – David Hammen Nov 28 '21 at 09:42
  • If you're supposed to do work on a new feature in an existing codebase you've never worked on before, and don't even know the language it's written in (else why are you doing tutorials?), then *anybody* (including you) thinking that you'll produce results quickly is being a bit optimistic. Even making any estimate at all for how long the task will take is impossible, under those circumstances. You usually **do** need to know how the codebase generally "works" to do productive work in it, but if an expert is available you should definitely ask before reverse engineering it from scratch. – Ben Nov 28 '21 at 10:14
  • How are you at actually getting stuff done? If your efficiency is similar to that of your colleagues, then you're probably not wasting time in unnecessary research. – Matt Timmermans Nov 29 '21 at 03:12
  • _Perhaps some of those times I would have benefited more if I had just reached out to an expert_ => Does this mean that you deep dive _without_ talking to some knowledgeable about the codebase first? – Matthieu M. Nov 29 '21 at 13:04
  • @MatthieuM. Sometimes yes, because alignment with that expert may take a while, especially if they are in another team or hard to get a hold of. In that case I dive in a bit so I can maximise the use of the time slot I get with them and not waste time getting answers to 'basic' questions. If they are within my team then I reach out on chat or wait until we have our usual check-in. I'm sure there can be improvements made in this too, though. – larrydalmeida Nov 29 '21 at 13:58
  • One does not develop a feature in isolation. A feature is always an integration of multiple parts/components in the system. The components encapsulate the volatilities of the system, so the planning should be about changes to the components that you already have. That is, if you have any sort of thought-through architecture. And if you are discovering that you need new components now, when you already have an architecture, then the initial architecture is a failure and you will be building an ugly workaround to satisfy your feature. – tridy Dec 16 '21 at 11:05

10 Answers

28

There's no one-size-fits-all answer to this. It's highly context sensitive.

One of the biggest factors is risk. You want to do just enough up-front design and planning to bring the risk to a tolerable level. However, what amount of risk is tolerable depends a lot on the stakeholders - the customers, the end users, and the development organization. The amount of acceptable risk for an internal R&D effort is different than the amount of acceptable risk before you announce new functionality as under development to the world. Acceptable risk for a supporting tool is different than acceptable risk for a device where failure can lead to injury or death. Consider risk to the business, to customers, and to users.

When it comes to figuring out risk, though, the unknown unknowns are the trickiest to deal with. You probably know what you know, and you probably have a list of things that you think you need to know but don't currently know - so the known knowns and the known unknowns. However, there's also a class of things that you don't know that you need to know - the unknown unknowns. Since you don't know them, you don't know how big this space is until you start doing the work.

The way I approach this is that there is always going to be some risk. Reduce the known unknowns to a point where the stakeholders are comfortable. From there, breaking the work down into small pieces, making the progress highly visible, and getting feedback as frequently as possible can turn the unknown unknowns into known unknowns and then you can plan the steps you need to take to turn them into knowns.

Thomas Owens
  • I would only add that too much planning (i.e. Big Design Up Front) carries its own risks. – Robert Harvey Nov 26 '21 at 16:39
  • @RobertHarvey Totally. I just added a little emphasis to the second sentence in the second paragraph to make that stand out. The "just enough" is key, since spending too much time and effort in planning can lead to issues. – Thomas Owens Nov 26 '21 at 18:17
  • I agree that it's all about risk. That being said, I feel this answer is incomplete without at least a mention of the [cone of uncertainty](https://en.wikipedia.org/wiki/Cone_of_Uncertainty). – John Wu Nov 28 '21 at 03:26
13

This boils down to time management and a measured approach to risk. First, consider where your stakeholders stand: while not universally true, stakeholders typically follow the Pareto principle, looking for the 20% of effort that delivers 80% of the desired outcomes.

This crops up often in complex software - many activities suffer from a sort of contradiction whereby they are impossible to clearly define or estimate with any confidence, yet also suffer from the law of diminishing returns with regard to developer productivity and the overall benefit of the time spent.

This is a problem because all activities need a priority and a decision about whether they are worth starting at all, as well as whether other activities or goals might need to be pushed out of the way.

Activities which snowball out of control can easily lead to a lot of poorly spent time, wrecking the team's ability to meet its overall delivery commitments and leaving the team drowning under the weight of expectations for undeliverable promises, especially if a developer or team regularly takes on a lot of those unbounded activities.

A common approach to time management for such seemingly unbounded tasks is timeboxing some investigation work whose only goal is to learn enough for a more confident range of estimates - at least a "worst case" vs. a "best case".

Before starting the investigation, plan out how to make the best use of the timebox - remember that the goals are discovery of unknowns and estimation, so your concerns are at a higher level rather than deep-dive detail. Consider some of the following tasks:

  • If the features you're touching are unfamiliar, turn to the product owner, BA and QA testers to learn about those features and the users' data.
  • Seek architectural information/docs around that area of the code from other developers who have worked on the system before.
  • Identify the most important architectural boundaries, structures and interfaces which surround those features.
  • Attach a debugger, enable full logging, then try hitting the relevant parts of the code from a user perspective.
  • Check the existing tests for that code.
  • Introduce some new tests to try to check any of your most important assumptions (see the sketch after this list).
  • Look at the output of static analysis tools to get an approximate idea of the code quality.
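
As a rough illustration of the logging and assumption-checking items above, here is a minimal Python sketch. `bulk_discount` is a hypothetical stand-in for whatever code is actually under investigation; in practice you would import it from the other team's codebase rather than define it inline:

```python
import logging

# Turn on full logging before exercising the code under investigation,
# so every path touched during the timebox leaves a trace.
logging.basicConfig(level=logging.DEBUG)

# Hypothetical stand-in for the real code under investigation; in
# practice, import it from the other team's codebase instead.
def bulk_discount(quantity: int) -> float:
    logging.debug("bulk_discount called with quantity=%d", quantity)
    return 0.1 if quantity >= 10 else 0.0

def test_bulk_discount_assumption():
    # Pins down one assumption formed while reading the code:
    # orders under 10 units get no bulk discount.
    assert bulk_discount(9) == 0.0
    assert bulk_discount(10) == 0.1
```

Run it with pytest: if the assumption is wrong, the failing test tells you so during the timebox, before any estimate is given rather than after.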

Once the timebox ends, work stops and the developer reports back to the team (including the Product Owner) on how far they were able to get, and discusses the worst/best-case estimates. The outcome might just be that the problem is still nowhere near something that anyone can define or estimate. Then it is the role of the Product Owner and/or technical & project leadership to use the findings to decide whether/how to progress, whether to do more investigation, or even to put the work on hold.

The amount of time to timebox is somewhat risk-dependent; it should be enough time to make a realistic attempt at discovering the unknowns, but it is ultimately nothing more than a discovery task. Risk is a context-dependent subject, but there are some common factors to think about:

  • Importance of the task to the team delivering on its commitments (e.g. how heavily dependent other work might be).
  • Implications for security/privacy, health/safety, legal/regulatory compliance and contractual arrangements.
  • Quality and complexity of the code.
  • Automated testing and tooling that surrounds the code.
  • Extent of the code and features under investigation (if it's a very large portion of the code then that'll naturally require a longer timebox).
  • The organisation's collective understanding of the software (poorly understood code which nobody has touched in years generally needs some investment to unearth its secrets).
  • Availability of others who could share knowledge and support/assist in the investigation.

Ideally the organisation and stakeholders should be actively managing risk - while they may not understand anything of the technical aspect, it's important for the team to communicate upwards about why some tasks are shrouded in unknowns because it feeds into their expectations and perception of the team, as well as business planning, resourcing, budgets, commitments to customers, etc.

Ultimately this should be something to approach as a whole team; while you may personally find it hard to manage time and risk, teams exist to work towards collective goals and support each other, so it may be that your team as a whole could benefit by changing how risky/unknown tasks are planned and elaborated.

Ben Cottrell
9

There are projects for which that's a totally valid approach. Safety-critical ones, for example. You've already spotted that the deep dive process lets you identify risks earlier.

The challenge is to get a good match of the speed/risk tradeoff comfort level between yourself, your team, and your organization. To increase the speed, you have to be able to say "we accept that there may be increased errors, but either the cost of those errors is low or we have processes in place to catch and mitigate them early".

Some organizations put huge amounts of effort into those error-mitigation processes, such as fancy incremental deployment, in order to allow themselves the increased development speed.

The ultimate example is of course the SpaceX project, which accepted that the effort of getting hoverslam landings right entirely in theory and simulation was simply too large to be achievable, and that it was cheaper to just crash a number of rockets in order to discover the "unknown unknowns" of that control system.

pjc50
  • "Match the organization" is key. The leadership of any organization has not only a level of risk, but a *style* of risk that they know how to handle in the business levels of the company. – Cort Ammon Nov 28 '21 at 17:25
6

How do you decide how much depth of knowledge is enough?

As the title of the old song goes: There are more questions than answers.

Simply put: you need enough information to be able to make an informed decision without going off down too many blind alleys. Some things you will probably want to consider (N.B. not an exhaustive list):

  • Do you have enough knowledge to do what you need to do? If not, iterate until you do.
  • What are the timescales for the work? Are they aggressive or do they allow for some self-learning?
  • If I take time to learn about this body of code, what other projects are there like this that I could transfer my knowledge to?
  • If time allows, and the project uses a technology you have wanted to learn, you may want to spend some time honing your skills with a real-world project.
  • If you can lay hands on the information - how long did the last change of this type for this project take?
  • Is the project still being developed (albeit possibly legacy) or is it nearing end of life?
Robbie Dee
6

You should research a new feature to the level of detail you need to begin working on it and to estimate, with a reasonable degree of certainty, how long implementing it will take.

You don't need to know everything up front but you need to be reasonably certain there are no show-stopping problems with your design. You should have a good idea what the hard parts are going to be and plan on working on those first.

Beyond the above there isn't much more anyone can say. How good your intuition is about this kind of thing is what separates beginning programmers from pros and competent programmers from the greats.

jwezorek
6

This is where a diverse team is helpful. It's not a bad thing to methodically gather context on a problem, but you need someone to help you stop at a certain point and actually implement something.

There is probably someone on your team who you think of as a "leap before looking" kind of problem solver. Being inclined to action isn't a bad thing either, but they need someone to help them slow down a bit and not miss something important.

You probably annoy each other somewhat, but your skills are complementary, and you can make great partners if you both recognize that.

When working alone, I like this animation of the A* algorithm as an illustration of how I gather context when solving a programming problem. When there's an obstacle, I gather a lot of information that I don't know yet if it will be useful or not. Once I've cleared the obstacle, I go fairly directly to the goal, just gathering a little context to either side to make sure I'm taking the optimal path. Ultimately, there's a lot of unexplored territory, but it's not important to my goal.
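
For readers who haven't seen the algorithm itself, here is a minimal grid-based A* sketch in Python (an illustrative addition under my own assumptions, not code from the answer). It shows the behaviour the analogy leans on: the search fans out only where obstacles force it to, then heads almost straight for the goal across open ground:

```python
import heapq

def a_star(start, goal, walls, width, height):
    """Minimal 4-connected grid A*; returns the path cost, or None."""
    def h(p):
        # Manhattan-distance heuristic: admissible on a 4-connected grid.
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    frontier = [(h(start), 0, start)]  # (estimated total, cost so far, cell)
    best_cost = {start: 0}
    while frontier:
        _, cost, (x, y) = heapq.heappop(frontier)
        if (x, y) == goal:
            return cost
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nxt[0] < width and 0 <= nxt[1] < height and nxt not in walls:
                if cost + 1 < best_cost.get(nxt, float("inf")):
                    best_cost[nxt] = cost + 1
                    heapq.heappush(frontier, (cost + 1 + h(nxt), cost + 1, nxt))
    return None  # no path exists

# A vertical wall forces the frontier to fan out and find the gap at the top.
walls = {(2, 0), (2, 1), (2, 2)}
print(a_star(start=(0, 1), goal=(4, 1), walls=walls, width=5, height=4))  # -> 8
```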

Karl Bielefeldt
4

You talk about "planning a new feature" but I'm not clear whether you are separating specification of the feature from implementation of the feature. It's a good idea to keep these as separate as possible. In an ideal world, specifying the feature shouldn't require any knowledge of the code base at all, it should all be done in terms of the impact on the software's externally observable behaviour.

Once you're implementing the feature, I think a thorough understanding of the part of the system that you're changing is essential. It's also useful to have an understanding of the parts you aren't changing, because without it you might miss opportunities to introduce new abstractions and code reuse.

It's also useful to research the level of test coverage of the relevant area. If the test coverage is good, then you can afford to be a bit more experimental in making code changes -- there's a good chance that any bugs you introduce will be caught by the existing tests. If the test coverage is poor, then it might be best to start by writing more tests for the existing code: debugging them when they fail will teach you a lot about how the existing code works (or doesn't).

Michael Kay
1

I find the other answers very informative, and I would just like to offer a slightly different perspective. It is rather intuitive, but it might help you.

From my experience, if I start coding too early, I inevitably end up solving design problems while writing code. This leads to blurred and ugly naming and coding.

So I know in a practical sense why I am planning ahead, i.e. I know what I want to avoid. I feel that, if you will, I must not start programming until I know precisely enough what I want to do.

So I start designing, planning, examining cases and scenarios, organizing calls and classes, etc., and suddenly I "know what I want to do". It just feels like "being well informed", "having had enough". I check by answering questions about how things fit together to realize scenarios.

This has become an inner guide to me: Do I know what I want to do? Or is there still some "we will see that later"? If there is a "we will see that later", and I haven't solved an equivalent problem before, I must continue planning, because it is dangerous.

How I get to this point differs in every situation. If I want to deliver a feature fast in a sprint, I first try to be as clear as possible in my mind, and code later in one go. If I want to find out what will happen and I have no time or money pressure, I allow myself to meander while coding, knowing that the result will be more or less rough.

In closing: this good feeling of "I know what I want to do" might be a counterweight to your urge to comb through more and more details.

peter_the_oak
1

A lot of good answers already. I'll give a tactic that might be useful.

The 1/6th Rule: Spend 1 unit of time on design and 5 units of time on implementation (e.g., for a six-week project, one week of design and five weeks of implementation).

This method is particularly useful when the deadline for a project is known: you can then scope what you can do in that timeframe.

Note: Some projects, such as high-security, hardware, or government-related ones, might call for a different ratio. This rule is for web/mobile development.

Thellimist
1

First, I would ask myself: what is the expected time to work on the feature?

The more time is expected, the more uncertain the planning.

If the expected time is less than a day of work, I would not consider diving deeper. If there is a problem with the feature, it will reveal itself during implementation. In any case, not much work will have been lost.

If it is more than a day of work, there should be a detailed plan for the feature. For planning, it is not necessary that a single person has every piece of knowledge. Identify the knowledge needed, do some quick research, or ask experts about the risks. At the end, you should be able to write down a list of tasks needed to implement the feature.

If your answer is more than a week of work, do not agree to implement it. The feature should be split into smaller ones. As a rule, a long expected time is a sign of a deep lack of understanding of the feature. Going deep is necessary in this case, because there are certainly a lot of uncovered risks.

Trendfischer