
I think most of us strive to make our code run smoothly, read logically, and altogether function well. But where in the programming world do the people who really enjoy taking the time to optimize already-good code (whether their own or someone else's) best find themselves?

That optimization might mean improving on the algorithm currently implemented in the project, taking the time to chase minor performance gains, studying the run time of loops and conditionals in a specific language (for example, a chain of three or fewer if-else statements is said to be much quicker than the equivalent switch statement in some languages), or cleaning up code (removing useless variables, better modularizing existing functions, etc.).
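For what it's worth, a claim like the if/switch one is easy to measure rather than take on faith. Below is a minimal C sketch of such a micro-benchmark; the dispatch functions are invented for illustration, and at higher optimization levels a compiler may well emit identical code for both forms:

    #include <stdio.h>
    #include <time.h>

    /* Three-way dispatch written two ways; invented for illustration. */
    static int dispatch_if(int x) {
        if (x == 0) return 10;
        else if (x == 1) return 20;
        else if (x == 2) return 30;
        return 0;
    }

    static int dispatch_switch(int x) {
        switch (x) {
            case 0: return 10;
            case 1: return 20;
            case 2: return 30;
            default: return 0;
        }
    }

    int main(void) {
        const long N = 100000000L;   /* iterations per variant */
        volatile long sink = 0;      /* defeats dead-code elimination */

        clock_t t0 = clock();
        for (long i = 0; i < N; i++) sink += dispatch_if((int)(i & 3));
        clock_t t1 = clock();
        for (long i = 0; i < N; i++) sink += dispatch_switch((int)(i & 3));
        clock_t t2 = clock();

        printf("if-else: %.3f s   switch: %.3f s\n",
               (double)(t1 - t0) / CLOCKS_PER_SEC,
               (double)(t2 - t1) / CLOCKS_PER_SEC);
        return 0;
    }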

I imagine that a lot of these folks find themselves at home on projects where hardware performance is critical, such as fields related to microarchitecture. Anything else?

8protons
  • You might like writing a static code analyser to automate all this. – MetaFight Jun 10 '16 at 19:25
  • @MetaFight Is this what you're referring to? https://en.wikipedia.org/wiki/Static_program_analysis – 8protons Jun 10 '16 at 19:28
  • Indeed it is. I brought it up because you mentioned the relative performance of switch statements vs if statements in certain conditions. That's the kind of thing that is better off done by a static analysis tool rather than a paid human. – MetaFight Jun 10 '16 at 19:29
  • @MetaFight I was completely unaware of that topic/field/procedure. It sounds highly interesting. That wiki mentions that it's common practice in hardware testing of medical devices, as well as in the aviation and nuclear industry. Are there companies that work on crafting these kinds of analyzers or is it done in house? Or are they simply open source? – 8protons Jun 10 '16 at 19:32
  • I'm afraid it's not my area of expertise, so I can't help you there. – MetaFight Jun 10 '16 at 19:35
  • @MetaFight: Insights like that seem more discoverable with a performance profiler rather than a static code analysis tool. – Robert Harvey Jun 10 '16 at 19:41
  • I guess it depends on the language though, doesn't it? I agree that a profiler would be a better tool here for a jitted language implementation, but wouldn't static analysis be just as good for something like plain old C? – MetaFight Jun 10 '16 at 19:46
  • @MetaFight: Well, that's a good question. While it's possible to attempt to predict a compiler's output, many compilers employ optimization techniques that would be difficult for a static analysis tool to predict without intimate knowledge of the compiler innards. Even simple inlining of a function would confound a static analysis tool, because all it sees is the source code. Managed languages have a better time at static analysis because the IL contains metadata, but you still can't predict what the JIT will do. – Robert Harvey Jun 10 '16 at 19:52
  • Interesting. Though, according to that Wikipedia article linked earlier, some static analysis tools actually run on obj files instead of the uncompiled source itself. I get your point though. – MetaFight Jun 10 '16 at 20:02
  • Devs who do optimization just for the sake of optimization are on the edge of finding themselves nowhere in the programming world, because if their boss notices that, they will probably get fired. Honestly, as a professional dev who writes programs not just for scientific research but for users, one should optimize because some code at stake runs too slow, not because optimization is so much fun (I mean, you can enjoy it, but don't make it the root cause of your actions). – Doc Brown Jun 10 '16 at 23:37

3 Answers


If the project has competitors, then performance is a major criterion by which it is judged.

By the way, if you think performance is a matter of using switch vs. if, you're missing the point by an enormous margin.

You don't know what to fix in the code until you find out what takes time. That can seldom (i.e. never) be done by eyeballing the code. What I do is run it under a debugger and manually pause it at random to see what it's doing.
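For example, with a native binary on Linux this can be as simple as attaching gdb and interrupting at random moments (the process ID below is made up):

    $ gdb -p 12345        # attach to the running process
    (gdb) continue        # let it run
    ^C                    # interrupt it at a random moment
    (gdb) bt              # look at the call stack: what is it doing right now?
    (gdb) continue        # resume and repeat; a real hot spot shows up in most samples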

Here are some examples:

  • Spending more than 50% of application startup time reading DLLs in order to extract strings so they could be internationalized, when they were mostly strings the user never sees.

  • Spending a large fraction of time doing new and free, when previously allocated memory blocks could simply be re-used.

  • Spending a large fraction of time calling library routines like sin, exp, or log with the same arguments as last time. The prior results could just be remembered.
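That last case is plain memoization of the previous call. Here is a minimal C sketch, assuming the function is pure and the caller is single-threaded (the wrapper name is mine):

    #include <math.h>

    /* Cache the most recent argument/result pair for sin().
       Only pays off when consecutive calls often repeat an argument,
       as in the situation described above. Not thread-safe. */
    static double sin_cached(double x) {
        static double last_x = 0.0;
        static double last_result = 0.0;   /* consistent: sin(0.0) == 0.0 */
        if (x != last_x) {
            last_x = x;
            last_result = sin(x);
        }
        return last_result;
    }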

Mike Dunlavey

Projects that have performance requirements.

It really doesn't matter what the hardware is. I can drive anything to 100% utilization, just as I can fill any hard drive. The question is: can I ignore performance and still hit my performance goals? If yes, I am justified in ignoring performance.

Some projects find they have performance problems. If they wrote easy-to-read code while ignoring performance, fixing the problem usually isn't hard.

If it is hard, it might be research-level hard. If you have the chops for that, I'll keep you in mind in case we ever run into it. Otherwise, I don't optimize for the fun of it. I'd rather it take 5% longer if that means the code is readable.

If you want reliable employment doing this, seek out jobs that have pushing the hardware to its limits as a goal. This can happen with microarchitecture, supercomputers, smartphones, data centers, calculators, the cloud, toasters, and even the humble PC.

candied_orange

Embedded development.

When you're working with a tiny memory footprint and real-time requirements, you have to constantly be thinking about every aspect of performance, both speed and memory.

Real-time embedded devices often require response times on the order of milliseconds, in my experience. If a routine is off running for 10 seconds somewhere, you've just missed input from a button on your device. If you enjoy making things fast, real time can be an incredibly fun challenge.
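To make the constraint concrete, here is a bare-metal C sketch under invented assumptions (the register addresses are made up): the main loop has to come back around to poll its inputs well inside the deadline, so nothing in it may block.

    #include <stdint.h>

    /* Hypothetical memory-mapped I/O registers; the addresses are invented. */
    #define BUTTON_REG (*(volatile uint32_t *)0x40020000u)
    #define LED_REG    (*(volatile uint32_t *)0x40020004u)

    int main(void) {
        uint32_t prev = 0;
        /* Bare-metal "superloop": every pass must finish well inside the
           response deadline, so long work is split into small, resumable
           steps rather than blocking calls. */
        for (;;) {
            uint32_t now = BUTTON_REG & 1u;   /* poll the button each pass */
            if (now && !prev)                 /* rising edge = button press */
                LED_REG ^= 1u;                /* respond within milliseconds */
            prev = now;
            /* ...one small slice of background work here, never a 10 s blob... */
        }
    }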

RubberDuck
  • I know programs have gotten larger as hardware has gotten more powerful per physical size (though the two aren't necessarily increasing at a correlated rate). But has better hardware made these tasks that you mention easier? In other words, did the average 30-year-old C hardware programmer in 1989 have to be much more knowledgeable and optimization-minded than the average 30-year-old C hardware programmer in 2016? Is hardware programming today much more forgiving? – 8protons Jun 11 '16 at 21:36
  • That depends @8protons. I've heard numerous embedded devs refer to Arduino as ".Net for embedded", meaning you've got this cushy framework under you. However, if you're writing a system sans OS on a custom board, it's just as unforgiving as ever. – RubberDuck Jun 11 '16 at 21:39