5

When writing an algorithm for a small project or problem, is it better to start with pseudocode that solves the problem inefficiently, and then use that inefficient version to inspire a better solution? The solution only has to be acceptable, not necessarily "super-amazing-out-of-this-world".

The answer is subjective, but I am looking for a general opinion or consensus from experienced programmers in order to develop a better algorithm-generation/problem-solving process.

The reason I ask is that I often start composing the most intuitive algorithm, but then get derailed worrying about how inefficient it is and whether I should be using better data structures, etc.

rrazd
    How often do you come up with an efficient/optimum solution on the first try? I know my answer would be very rarely. – JB King Jul 15 '11 at 15:42
  • Strongly depends on how much time you have... –  Jul 15 '11 at 16:15
    One of my earliest mentors said to me once, in an almost offhand kind of way, that "You do not so much write good software, you grow it". That remark has stayed with me over the years and its truth has shown itself to me over and over again. – leed25d Jul 15 '11 at 20:23

7 Answers

5

Yes... sometimes you need time to do it "wrong" (or close to it) first in order to finally get it right. Get to a working solution on the first pass, THEN go for optimization and design improvements. Often you will understand the problem better after that first pass.

Catchops
3

In general, "efficiency" and "performance" need to be measured. However, if you are designing an algorithm, there are techniques to determine (or at least get a good idea) whether it will be O(1), O(log n), O(n), O(n²), etc.

Sometimes, though, creating a working piece of code helps break any "block" you may be experiencing, and it may indeed lead to other, hopefully better, solutions.
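For example (an illustrative sketch of my own, not from the answer): the same duplicate-check task can be O(n²) or O(n) depending on the data structure you reach for, and the rough version is often the one you write first:

```python
def has_duplicate_quadratic(items):
    # O(n^2): the intuitive first draft, comparing every pair
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def has_duplicate_linear(items):
    # O(n): remember what we have already seen in a hash set
    seen = set()
    for item in items:
        if item in seen:
            return True
        seen.add(item)
    return False

print(has_duplicate_quadratic([3, 1, 4, 1]))  # True
print(has_duplicate_linear([3, 1, 4]))        # False
```

Writing the quadratic version first is often what reveals that a set is the right structure.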

Otávio Décio
0

Definitely.

You need to draw out exactly what you want in order to figure out what you need to build, and what you want is not always efficient.

The opposite is far worse: starting with efficient methods and trying to figure out how to bend them to do what you need. You end up with something that may be fast but doesn't do exactly what you want. Or you slap on some band-aids to make it do what you want, and the code turns into a mess.

I'm constantly building something so it works, refactoring to make it more programmer-friendly (no copy/paste code, comments, splitting up methods, etc.), and then checking performance to see whether the code needs further refinement.

Rachel
0

Absolutely and in two ways:

The first is that by applying a rough, non-ideal solution, you start to see exactly where and why it fails, which allows you to target your effort at real problems rather than problems you think might occur.

The second is that sometimes you'll see these rough, non-ideal solutions implemented and working fine, and that's a valuable lesson. Not every solution needs to be perfectly engineered; sometimes rough and ready does the job, and lets you spend the effort you would have wasted on needless polishing on something that makes a genuine difference.

Personally, I've suffered at least as much dealing with over-engineered code that aimed to solve problems or handle situations that never arose as I have dealing with quick-and-dirty solutions that weren't flexible or elegant.

Jon Hopkins
0

TL;DR: Does it run, and run well, and whose standards are you meeting?

You have to ask whether your code is inefficient to the point of impracticality. Are you nesting for loops and switch statements five deep, or is it something as simple as "whoops, used an array instead of a hash table"?
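The array-versus-hash-table case is cheap to check empirically. A hedged sketch (the sizes and timings here are illustrative, not from the answer):

```python
import timeit

# Membership test: O(n) scan on a list vs O(1) average on a set.
data_list = list(range(50_000))
data_set = set(data_list)

# Worst case for the list: the target is at the very end.
t_list = timeit.timeit(lambda: 49_999 in data_list, number=200)
t_set = timeit.timeit(lambda: 49_999 in data_set, number=200)

print(f"list lookup: {t_list:.4f}s, set lookup: {t_set:.4f}s")
```

A gap of several orders of magnitude on a one-line change is the kind of fix worth making; the five-deep nesting is a redesign.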

Also, and I know I do this myself, ask whether it's your own standards you're holding your code to, or a published standard or company standard you're trying to meet. Yes, it's cool if your code runs in O(n log n) time, but does it take up more lines than your company wants, or is it totally unmaintainable despite running in that time? There are a large number of "standards" that can be met besides efficiency, and those need to be taken into account too.

Jeff Langemeier
0

I always determine the theoretical best asymptotic complexity before I write any code. Then I make a conscious decision about whether I want to aim for it or not. If your data is finite (for example, a list of countries), you can use an O(n) or even O(n²) algorithm instead of an O(1) one and never notice. In that case, my primary consideration can be maintainability instead of efficiency.

I almost never rewrite for efficiency unless a previous developer made a noticeably bad choice. I rewrite my first draft for maintainability all the time.

Karl Bielefeldt
0

Code that works now is far better than unreleased code that may work faster in the future. Of course good programmers will plan ahead and avoid nasty O(N^2) situations, but here's my main point: Optimizing Before Measuring Is Bad.

Nothing inspires writing code in a "better" way than having running code to measure, and in the meantime you have running code.
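To make "measure first" concrete (a hedged sketch of my own; the functions are invented for illustration): with running code in hand, a quick timing comparison tells you whether an optimization is worth keeping.

```python
import timeit

def sum_squares_loop(n):
    # First draft: straightforward O(n) loop over 0..n-1
    total = 0
    for i in range(n):
        total += i * i
    return total

def sum_squares_formula(n):
    # Candidate optimization: O(1) closed form for the same sum
    return (n - 1) * n * (2 * n - 1) // 6

# Check the rewrite against the working first draft...
assert sum_squares_loop(1_000) == sum_squares_formula(1_000)

# ...then let measurement, not intuition, justify it.
print(timeit.timeit(lambda: sum_squares_loop(1_000), number=1_000))
print(timeit.timeit(lambda: sum_squares_formula(1_000), number=1_000))
```

The working loop is both the correctness oracle and the baseline to measure against; without it, you'd be optimizing blind.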

Patrick Hughes