It will be context-dependent.
"Fixing a quadratic runtime performance bug" is typically what I see. However, whether that deserves fixing (a code change) is context-dependent.
Keep in mind that databases provide lots of tools to improve time complexity. For example, to get the top N results from a database, ask the database for the top N (e.g. `ORDER BY ... LIMIT N`) instead of fetching everything and sorting it yourself. When a change converts an inefficient kludge into a built-in, optimized call, little explanation is needed in review.
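A minimal sketch of that conversion, using an in-memory SQLite table invented for illustration:

```python
import sqlite3

# Hypothetical table, used only for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE scores (player TEXT, points INTEGER)")
conn.executemany("INSERT INTO scores VALUES (?, ?)",
                 [("a", 10), ("b", 30), ("c", 20), ("d", 40)])

# Kludge: pull every row into the application, then sort and slice.
rows = conn.execute("SELECT player, points FROM scores").fetchall()
top2_kludge = sorted(rows, key=lambda r: r[1], reverse=True)[:2]

# Built-in: let the database return only the top N.
top2_builtin = conn.execute(
    "SELECT player, points FROM scores ORDER BY points DESC LIMIT 2"
).fetchall()

assert top2_kludge == top2_builtin  # both: [('d', 40), ('b', 30)]
```

The second query lets the database plan the work (possibly via an index), rather than moving the whole table across the wire just to discard most of it.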
The reason I consider an algorithm with quadratic runtime to deserve a code review (discussion) is not so much that it is slow (slow is relative; quadratic is fast compared to exponential), but that human intuition (your customers', or your fellow programmers') is innately uncomfortable with software whose runtime deviates too far from linear, because expectations from everyday life carry over.
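A common shape of such a quadratic bug, sketched with hypothetical data: a nested-loop duplicate check compares every pair of items, while a set-based check does a single linear pass.

```python
def has_duplicates_quadratic(items):
    # Nested loops: up to N*(N-1)/2 comparisons, i.e. O(N^2).
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def has_duplicates_linear(items):
    # Set membership is O(1) on average, so this pass is O(N).
    seen = set()
    for item in items:
        if item in seen:
            return True
        seen.add(item)
    return False

data = list(range(10_000)) + [0]  # one duplicate at the very end
assert has_duplicates_quadratic(data) == has_duplicates_linear(data) == True
```

Both versions are correct, which is exactly why the quadratic one survives review when nobody asks about input sizes.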
A lot of customer complaints about software performance fall into two categories:
The whole system (software and hardware) was specified based on estimated usage. Last week everything ran fine: a certain operation took less than 5 seconds. This week, after installing an update, the same operation takes more than a minute.
- This is a comparison with a previously benchmarked performance. The customer measures future performance against an absolute yardstick on a human time scale (seconds to minutes).
I submitted 100 jobs to the system. Why is it taking 400x the time to process, compared to the time it takes for a single job?
- The customer expects processing time to be linear. In fact, the customer cannot understand or accept that there exist tasks that scale worse than linearly.
For this reason, a customer will consider the execution time a bug when both of the following hold:
- Slower than linear
- Noticeable, i.e. for typical task sizes the execution time reaches a human time scale (seconds or minutes)
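Where that "noticeable" threshold sits can be estimated with back-of-envelope arithmetic. Assuming a hypothetical 100 ns per inner-loop operation of a quadratic algorithm:

```python
COST_PER_OP = 100e-9  # hypothetical: 100 ns per pairwise operation

for n in (1_000, 10_000, 100_000):
    seconds = COST_PER_OP * n * n
    print(f"N = {n:>7,}: ~{seconds:.1f} s")
# N =   1,000: ~0.1 s     (imperceptible)
# N =  10,000: ~10.0 s    (annoying)
# N = 100,000: ~1000.0 s  (perceived as broken)
```

The per-operation cost is invented, but the shape of the conclusion is robust: a 10x larger task takes 100x longer, so quadratic code crosses from invisible to intolerable within one or two orders of magnitude of growth.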
Legitimate arguments that a quadratic-runtime algorithm does not pose a problem (i.e. does not warrant a code change):
- The size of the tasks typically handled by this quadratic-runtime function is bounded in practice
- Given the typical size range, the actual (absolute) execution time is still small enough to be dismissed
- If a user actually tries to submit a task that is large enough to be noticeable, the user will receive a message warning about the long running time
- The users of the system are all experts, therefore they know what they are doing. For example, users of an API should have read the fine print on the API documentation.
A lot of algorithms useful for typical application development are in fact slower than linear (mostly O(N log N), as in sorting). Large-scale software therefore tries to work around this, by sorting only the relevant part of the data, or by using histogram (statistical) filtering techniques that achieve a similar effect.
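One standard form of "sorting only the relevant part", sketched with Python's standard library: `heapq.nsmallest` keeps a bounded heap of the k items that matter, running in roughly O(N log k) instead of sorting everything in O(N log N).

```python
import heapq
import random

random.seed(42)
data = [random.random() for _ in range(100_000)]

# Full sort: O(N log N), orders the entire dataset.
smallest_by_sort = sorted(data)[:10]

# Partial selection: O(N log k) with a heap bounded at k = 10 items.
smallest_by_heap = heapq.nsmallest(10, data)

assert smallest_by_sort == smallest_by_heap
```

For small fixed k the difference in constants and memory traffic is often more important than the asymptotic gap: the heap variant never materializes a sorted copy of the whole dataset.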
This applies to software customers, but if you consider the users of a software library or API function to be "customers" as well, then this answer still applies.