38

I am a big fan of writing assert checks in C++ code as a way to catch cases during development that cannot possibly happen but do happen because of logic bugs in my program. This is a good practice in general.

However, I've noticed that some functions I write (which are part of a complex class) have 5+ asserts, which feels like it could potentially be a bad programming practice in terms of readability and maintainability. I still think it's great, as each one requires me to think about the pre- and post-conditions of my functions, and they really do help catch bugs. However, I just wanted to put this out there and ask whether there are better paradigms for catching logic errors in cases where a large number of checks is necessary.
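For context, here is a simplified, hypothetical example of the kind of function I mean (the names and conditions are made up for illustration):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical example: a function whose pre- and post-conditions
// are documented with several asserts.
double normalized_at(const std::vector<double>& weights, std::size_t i)
{
    assert(!weights.empty());    // precondition: non-empty
    assert(i < weights.size());  // precondition: index in range
    double sum = 0.0;
    for (double w : weights) {
        assert(w >= 0.0);        // precondition: weights non-negative
        sum += w;
    }
    assert(sum > 0.0);           // at least one weight must be positive
    const double result = weights[i] / sum;
    assert(result >= 0.0 && result <= 1.0);  // postcondition
    return result;
}
```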

Emacs comment: Since Emacs is my IDE of choice, I have it slightly gray out the assert statements, which helps reduce the clutter they can create. Here's what I add to my .emacs file:

; gray out the whole "assert(...);" statement
(add-hook 'c-mode-common-hook
  (lambda () (font-lock-add-keywords nil
    '(("\\<\\(assert(.*);\\)" 1 '(:foreground "#444444") t)))))

; use a slightly lighter gray for the "(...)" part
(add-hook 'c-mode-common-hook
  (lambda () (font-lock-add-keywords nil
    '(("\\<assert\\((.*);\\)" 1 '(:foreground "#666666") t)))))
Alan Turing
  • 1,533
  • 1
  • 16
  • 20

9 Answers

47

I've seen hundreds of bugs that would have been solved faster if someone had written more asserts, and not a single one that would have been solved quicker by writing fewer.

[C]ould [too many asserts] potentially be a bad programming practice, in terms of readability and maintainability[?]

Readability could be a problem, perhaps - although it's been my experience that people who write good asserts also write readable code. And it never bothers me to see a function start with a block of asserts verifying that the arguments aren't garbage - just put a blank line after it.

Also in my experience, maintainability is always improved by asserts, just as it is by unit tests. Asserts provide a sanity check that code is being used the way it was intended to be used.
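As a sketch of that "block of asserts, then a blank line" shape (a hypothetical example, not from the original post):

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical sketch: argument-checking asserts grouped at the top,
// separated from the function body by a blank line.
void copy_samples(const float* src, float* dst, std::size_t count)
{
    assert(src != nullptr);
    assert(dst != nullptr);
    assert(src != dst);   // this routine does not expect aliasing
    assert(count > 0);

    for (std::size_t i = 0; i < count; ++i)
        dst[i] = src[i];
}
```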

Bob Murphy
  • 16,028
  • 3
  • 51
  • 77
  • 1
    Good answer. I also added a description to the question of how I improve readability with Emacs. – Alan Turing Jun 27 '11 at 12:25
  • 2
    "it's been my experience that people who write good asserts also write readable code" << excellent point. Making code readable is as much up to the individual programmer as it is to the techniques he or she is and isn't allowed to use. I've seen good techniques become unreadable in the wrong hands, and even what most would consider bad techniques become perfectly clear, even elegant, through the proper use of abstraction and commenting. – Greg Jackson Jun 27 '11 at 19:24
  • I've had a few application crashes that were caused by erroneous assertions. So I've seen bugs that wouldn't have *existed* if someone (myself) had written fewer asserts. – CodesInChaos Jan 06 '16 at 22:38
  • @CodesInChaos Arguably, typos aside, this points to an error in the *formulation* of the problem - that is, the bug was in the design, hence the mismatch between assertions and (other) code. – Lawrence Jan 10 '16 at 13:16
14

Is it possible to write too many asserts?

Well, of course it is. [Imagine obnoxious example here.] However, if you apply the guidelines detailed in the following, you shouldn't run into that limit in practice. I'm a big fan of assertions, too, and I use them according to these principles. Much of this advice is not specific to assertions; it is general good engineering practice applied to them.

Keep the run-time and binary footprint overhead in mind

Assertions are great, but if they make your program unacceptably slow, they will either be very annoying or you will turn them off sooner or later.

I like to gauge the cost of an assertion relative to the cost of the function it is contained in. Consider the following two examples.

// Precondition:  queue is not empty
// Invariant:     queue is sorted
template <typename T>
const T&
sorted_queue<T>::max() const noexcept
{
  assert(!this->data_.empty());
  assert(std::is_sorted(std::cbegin(this->data_), std::cend(this->data_)));
  return this->data_.back();
}

The function itself is an O(1) operation but the assertions account for O(n) overhead. I don't think you would want such checks to be active except in very special circumstances.

Here is another function with similar assertions.

// Requirement:   op : T -> T is monotonic [ie x <= y implies op(x) <= op(y)]
// Invariant:     queue is sorted
// Postcondition: each item x in the queue is replaced by op(x)
template <typename T>
template <typename FuncT>
void
sorted_queue<T>::apply_monotonic_function(FuncT&& op)
{
  assert(std::is_sorted(std::cbegin(this->data_), std::cend(this->data_)));
  std::transform(std::cbegin(this->data_), std::cend(this->data_),
                 std::begin(this->data_), std::forward<FuncT>(op));
  assert(std::is_sorted(std::cbegin(this->data_), std::cend(this->data_)));
}

The function itself is an O(n) operation so it hurts much less to add an additional O(n) overhead for the assertion. Slowing down a function by a small (in this case, probably less than 3) constant factor is something we can usually afford in a debug build but maybe not in a release build.

Now consider this example.

// Precondition:  queue is not empty
// Invariant:     queue is sorted
// Postcondition: last element is removed from queue
template <typename T>
void
sorted_queue<T>::pop_back() noexcept
{
  assert(!this->data_.empty());
  this->data_.pop_back();
}

While many people will probably be much more comfortable with this O(1) assertion than with the two O(n) assertions in the previous example, they are morally equivalent in my view. Each adds overhead on the order of the complexity of the function itself.

Finally, there are the “really cheap” assertions that are dominated by the complexity of the function they're contained in.

// Requirement:   cmp : T x T -> bool is a strict weak ordering
// Precondition:  queue is not empty
// Postcondition: if x is returned, then there is no y in the queue
//                such that cmp(x, y)
template <typename T>
template <typename CmpT>
const T&
sorted_queue<T>::max(CmpT&& cmp) const
{
  assert(!this->data_.empty());
  const auto pos = std::max_element(std::cbegin(this->data_),
                                    std::cend(this->data_),
                                    std::forward<CmpT>(cmp));
  assert(pos != std::cend(this->data_));
  return *pos;
}

Here, we have two O(1) assertions in an O(n) function. It probably won't be a problem to keep this overhead even in release builds.

Do keep in mind, however, that asymptotic complexities do not always give an adequate estimate because, in practice, we always deal with input sizes bounded by some finite constant, and the constant factors hidden by “Big-O” may very well not be negligible.

Now that we have identified the different scenarios, what can we do about them? A (probably too) simple approach would be to follow a rule such as “Don't use assertions that dominate the function they are contained in.” While that might work for some projects, others might need a more differentiated approach, such as using different assertion macros for the different cases.

#define MY_ASSERT_IMPL(COST, CONDITION)                                       \
  (                                                                           \
    ( ((COST) <= (MY_ASSERT_COST_LIMIT)) && !(CONDITION) )                    \
      ? ::my::assertion_failed(__FILE__, __LINE__, __FUNCTION__, # CONDITION) \
      : (void) 0                                                              \
  )

#define MY_ASSERT_LOW(CONDITION)                                              \
  MY_ASSERT_IMPL(MY_ASSERT_COST_LOW, CONDITION)

#define MY_ASSERT_MEDIUM(CONDITION)                                           \
  MY_ASSERT_IMPL(MY_ASSERT_COST_MEDIUM, CONDITION)

#define MY_ASSERT_HIGH(CONDITION)                                             \
  MY_ASSERT_IMPL(MY_ASSERT_COST_HIGH, CONDITION)

#define MY_ASSERT_COST_NONE    0
#define MY_ASSERT_COST_LOW     1
#define MY_ASSERT_COST_MEDIUM  2
#define MY_ASSERT_COST_HIGH    3
#define MY_ASSERT_COST_ALL    10

#ifndef MY_ASSERT_COST_LIMIT
#  define MY_ASSERT_COST_LIMIT MY_ASSERT_COST_MEDIUM
#endif

namespace my
{

  [[noreturn]] extern void
  assertion_failed(const char * filename, int line, const char * function,
                   const char * message) noexcept;

}

You can now use the three macros MY_ASSERT_LOW, MY_ASSERT_MEDIUM and MY_ASSERT_HIGH instead of the standard library's “one size fits all” assert macro: MY_ASSERT_LOW for assertions dominated by the complexity of their containing function, MY_ASSERT_MEDIUM for those of about the same order, and MY_ASSERT_HIGH for those that dominate it. When you build the software, you can pre-define the pre-processor symbol MY_ASSERT_COST_LIMIT to select which kinds of assertions make it into the executable. The constants MY_ASSERT_COST_NONE and MY_ASSERT_COST_ALL don't correspond to any assert macros; they are meant to be used as values for MY_ASSERT_COST_LIMIT in order to turn all assertions off or on, respectively.
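As a usage sketch, consider a pop_back on a plain std::vector (with minimal stand-in macros, assumed here only so the snippet compiles on its own; in real code they would come from the assertion header shown above):

```cpp
#include <algorithm>
#include <cassert>
#include <iterator>
#include <vector>

// Minimal stand-ins for the cost-tiered assertion macros.  A check is
// active when its cost does not exceed MY_ASSERT_COST_LIMIT.
#define MY_ASSERT_COST_MEDIUM 2
#define MY_ASSERT_COST_HIGH   3
#ifndef MY_ASSERT_COST_LIMIT
#  define MY_ASSERT_COST_LIMIT MY_ASSERT_COST_MEDIUM
#endif
#define MY_ASSERT_MEDIUM(C) \
  assert((MY_ASSERT_COST_MEDIUM > MY_ASSERT_COST_LIMIT) || (C))
#define MY_ASSERT_HIGH(C) \
  assert((MY_ASSERT_COST_HIGH > MY_ASSERT_COST_LIMIT) || (C))

// With the default limit at MEDIUM, the O(1) emptiness check is active
// while the O(n) sortedness check short-circuits to a no-op.
void checked_pop_back(std::vector<int>& data)
{
  MY_ASSERT_MEDIUM(!data.empty());
  MY_ASSERT_HIGH(std::is_sorted(std::cbegin(data), std::cend(data)));
  data.pop_back();
}
```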

We're relying on the assumption here that a good compiler will not generate any code for

if (false_constant_expression && run_time_expression) { /* ... */ }

and transform

if (true_constant_expression && run_time_expression) { /* ... */ }

into

if (run_time_expression) { /* ... */ }

which I believe is a safe assumption nowadays.

If you're about to tweak the above code, consider compiler-specific annotations like __attribute__ ((cold)) on my::assertion_failed or __builtin_expect(…, false) on !(CONDITION) to reduce the overhead of assertions that pass. In release builds, you may also consider replacing the function call to my::assertion_failed with something like __builtin_trap to reduce the footprint, at the inconvenience of losing the diagnostic message.

These kinds of optimizations are really only relevant for extremely cheap assertions (like comparing two integers that are already given as arguments) in functions that are themselves very compact, quite apart from the additional binary size accumulated by incorporating all the message strings.

Compare how this code

int
positive_difference_1st(const int a, const int b) noexcept
{
  if (!(a > b))
    my::assertion_failed(__FILE__, __LINE__, __FUNCTION__, "!(a > b)");
  return a - b;
}

is compiled into the following assembly

_ZN4test23positive_difference_1stEii:
.LFB0:
        .cfi_startproc
        cmpl    %esi, %edi
        jle     .L5
        movl    %edi, %eax
        subl    %esi, %eax
        ret
.L5:
        subq    $8, %rsp
        .cfi_def_cfa_offset 16
        movl    $.LC0, %ecx
        movl    $_ZZN4test23positive_difference_1stEiiE12__FUNCTION__, %edx
        movl    $50, %esi
        movl    $.LC1, %edi
        call    _ZN2my16assertion_failedEPKciS1_S1_
        .cfi_endproc
.LFE0:

while the following code

int
positive_difference_2nd(const int a, const int b) noexcept
{
  if (__builtin_expect(!(a > b), false))
    __builtin_trap();
  return a - b;
}

gives this assembly

_ZN4test23positive_difference_2ndEii:
.LFB1:
        .cfi_startproc
        cmpl    %esi, %edi
        jle     .L8
        movl    %edi, %eax
        subl    %esi, %eax
        ret
        .p2align 4,,7
        .p2align 3
.L8:
        ud2
        .cfi_endproc
.LFE1:

which I feel much more comfortable with. (Examples were tested with GCC 5.3.0 using the -std=c++14, -O3 and -march=native flags on 4.3.3-2-ARCH x86_64 GNU/Linux. Not shown in the above snippets are the declarations of test::positive_difference_1st and test::positive_difference_2nd which I added the __attribute__ ((hot)) to. my::assertion_failed was declared with __attribute__ ((cold)).)

Assert preconditions in the function that depends on them

Suppose you have the following function with the specified contract.

/**
 * @brief
 *         Counts the frequency of a letter in a string.
 *
 * The frequency count is case-insensitive.
 *
 * If `text` does not point to a NUL terminated character array or `letter`
 * is not in the character range `[A-Za-z]`, the behavior is undefined.
 *
 * @param text
 *         text to count the letters in
 *
 * @param letter
 *         letter to count
 *
 * @returns
 *         occurrences of `letter` in `text`
 *
 */
std::size_t
count_letters(const char * text, int letter) noexcept;

Instead of writing

assert(text != nullptr);
assert((letter >= 'A' && letter <= 'Z') || (letter >= 'a' && letter <= 'z'));
const auto frequency = count_letters(text, letter);

at each call-site, put that logic once into the definition of count_letters

std::size_t
count_letters(const char *const text, const int letter) noexcept
{
  assert(text != nullptr);
  assert((letter >= 'A' && letter <= 'Z') || (letter >= 'a' && letter <= 'z'));
  auto frequency = std::size_t {};
  // TODO: Figure this out...
  return frequency;
}

and call it without further ado.

const auto frequency = count_letters(text, letter);

This has the following advantages.

  • You only need to write the assertion code once. Since the very purpose of functions is that they be called – often more than once – this should reduce the overall number of assert statements in your code.
  • It keeps the logic that checks the preconditions close to the logic that depends on them. I think this is the most important aspect. If your clients misuse your interface, they cannot be assumed to apply the assertions correctly either, so it is better if the function checks them itself.

The obvious disadvantage is that you won't get the source location of the call-site into the diagnostic message. I believe that this is a minor issue. A good debugger should be able to let you trace back the origin of the contract violation conveniently.

The same thinking applies to “special” functions like overloaded operators. When I'm writing iterators, I usually – if the nature of the iterator allows it – give them a member function

bool
good() const noexcept;

that allows callers to ask whether it is safe to dereference the iterator. (Of course, in practice, such a function can usually only detect some states in which dereferencing is definitely not safe; it cannot prove that dereferencing is safe. But I believe you can still catch a lot of bugs with it.) Instead of littering all my code that uses the iterator with assert(iter.good()) statements, I'd rather put a single assert(this->good()) as the first line of operator* in the iterator's implementation.
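A minimal sketch of this pattern (a hypothetical iterator over a vector, invented for illustration):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical sketch of the good()-then-assert pattern.  good() can
// detect some unsafe states (past-the-end, no container), though in
// general it cannot prove that dereferencing is safe.
class int_range_iterator
{
public:
  int_range_iterator(const std::vector<int>* v, std::size_t pos) noexcept
    : v_{v}, pos_{pos} {}

  bool good() const noexcept
  {
    return v_ != nullptr && pos_ < v_->size();
  }

  const int& operator*() const noexcept
  {
    assert(this->good());  // one check here instead of at every use site
    return (*v_)[pos_];
  }

  int_range_iterator& operator++() noexcept { ++pos_; return *this; }

private:
  const std::vector<int>* v_;
  std::size_t pos_;
};
```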

If you're using the standard library, instead of asserting manually on its preconditions in your source code, turn on their checks in debug builds. They can do even more sophisticated checks like testing whether the container an iterator refers to still exists. (See the documentation for libstdc++ and libc++ (work in progress) for more information.)

Factor common conditions out

Suppose you're writing a linear algebra package. Many functions will have complicated preconditions and violating them will often cause wrong results that are not immediately recognizable as such. It would be very good if these functions asserted their preconditions. If you define a bunch of predicates that tell you certain properties about a structure, those assertions become much more readable.

template <typename MatrixT>
auto
cholesky_decompose(MatrixT&& m)
{
  assert(is_square(m) && is_symmetric(m));
  // TODO: Somehow decompose that thing...
}

It will also give more useful error messages.

cholesky.hxx:357: cholesky_decompose: assertion failed: is_symmetric(m)

helps a lot more than, say

detail/basic_ops.hxx:1289: fast_compare: assertion failed: m(i, j) == m(j, i)

where you'd first have to go look at the source code in the context to figure out what was actually tested.
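The predicates themselves can stay simple. A hedged sketch, using a hypothetical matrix representation (nested vectors rather than the answer's generic MatrixT):

```cpp
#include <cstddef>
#include <vector>

// Hypothetical row-major matrix representation, for illustration only.
using matrix = std::vector<std::vector<double>>;

// Named predicates keep assertion text like `assert(is_symmetric(m))`
// meaningful in diagnostic messages.
bool is_square(const matrix& m) noexcept
{
  for (const auto& row : m)
    if (row.size() != m.size())
      return false;
  return true;
}

bool is_symmetric(const matrix& m) noexcept
{
  if (!is_square(m))
    return false;
  for (std::size_t i = 0; i < m.size(); ++i)
    for (std::size_t j = 0; j < i; ++j)
      if (m[i][j] != m[j][i])
        return false;
  return true;
}
```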

If you have a class with non-trivial invariants, it is probably a good idea to assert on them from time to time when you have messed with the internal state and want to ensure that you're leaving the object in a valid state on return.

For this purpose, I found it useful to define a private member function that I conventionally call class_invariants_hold_. Suppose you were re-implementing std::vector (Because we all know it ain't good enough.), it might have a function like this.

template <typename T>
bool
vector<T>::class_invariants_hold_() const noexcept
{
  if (this->size_ > this->capacity_)
    return false;
  if ((this->size_ > 0) && (this->data_ == nullptr))
    return false;
  if ((this->capacity_ == 0) != (this->data_ == nullptr))
    return false;
  return true;
}

Notice a few things about this.

  • The predicate function itself is const and noexcept, in accordance with the guideline that assertions shall not have side effects. If it makes sense, also declare it constexpr.
  • The predicate doesn't assert anything itself. It is meant to be called inside assertions, such as assert(this->class_invariants_hold_()). This way, if assertions are compiled-out, we can be sure that no run-time overhead is incurred.
  • The control flow inside the function is broken apart into multiple if statements with early returns rather than a large expression. This makes it easy to step through the function in a debugger and find out what part of the invariant was broken if the assertion fires.

Don't assert on silly things

Some things just don't make sense to assert on.

auto numbers = std::vector<int> {};
numbers.push_back(14);
numbers.push_back(92);
assert(numbers.size() == 2);  // silly
assert(!numbers.empty());     // silly and redundant

These assertions don't make the code even a tiny bit more readable or easier to reason about. Every C++ programmer should know well enough how std::vector works to be certain that the above code is correct simply by looking at it. I'm not saying you should never assert on a container's size. If you have added or removed elements using some non-trivial control flow, such an assertion can be useful. But if it merely repeats what was written in the non-assertion code just above, there is no value gained.

Also don't assert that library functions work correctly.

auto w = widget {};
w.enable_quantum_mode();
assert(w.quantum_mode_enabled());  // probably silly

If you trust the library that little, you had better consider using another library instead.

On the other hand, if the documentation of the library is not 100 % clear and you gain confidence about its contracts by reading the source code, it makes a lot of sense to assert on that “inferred contract”. If it is broken in a future version of the library, you'll notice quickly.

auto w = widget {};
// After reading the source code, I have concluded that quantum mode is
// always off by default but this isn't documented anywhere.
assert(!w.quantum_mode_enabled());

This is better than the following solution which will not tell you whether your assumptions were correct.

auto w = widget {};
if (w.quantum_mode_enabled())
  {
    // I don't think that quantum mode is ever enabled by default but
    // I'm not sure.
    w.disable_quantum_mode();
  }

Don't abuse assertions to implement program logic

Assertions should only ever be used to uncover bugs that are worthy of immediately killing your application. They should not be used to verify any other condition even if the appropriate reaction to that condition would also be to quit immediately.

Therefore, write this…

if (!server_reachable())
  {
    log_message("server not reachable");
    shutdown();
  }

…instead of that.

assert(server_reachable());

Also never use assertions to validate untrusted input or to check that std::malloc did not return a null pointer. Even if you know you'll never turn assertions off, even in release builds, an assertion communicates to the reader that it checks something that is always true provided the program is bug-free, and that it has no visible side effects. If this is not the message you want to communicate, use an alternative error handling mechanism such as throwing an exception. If you find it convenient to have a macro wrapper for your non-assertion checks, go ahead and write one. Just don't call it “assert”, “assume”, “require”, “ensure” or something like that. Its internal logic could be the same as for assert, except that it is never compiled out, of course.
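Such a never-compiled-out wrapper could be sketched like this (the name ABORT_IF_NOT is hypothetical, chosen deliberately so it doesn't sound like an assertion):

```cpp
#include <cstdio>
#include <cstdlib>

// Hypothetical wrapper for fatal run-time checks.  Unlike assert, it is
// never compiled out, and it is deliberately not named like an assertion.
#define ABORT_IF_NOT(CONDITION, MESSAGE)                        \
  do {                                                          \
    if (!(CONDITION)) {                                         \
      std::fprintf(stderr, "%s:%d: fatal: %s\n",                \
                   __FILE__, __LINE__, (MESSAGE));              \
      std::abort();                                             \
    }                                                           \
  } while (false)
```

Checking untrusted conditions then reads as `ABORT_IF_NOT(server_reachable(), "server not reachable");`, and it stays active in every build.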

More information

I found John Lakos' talk Defensive Programming Done Right, given at CppCon'14 (1st part, 2nd part), very enlightening. He takes the idea of customizing which assertions are enabled and how to react to failed assertions even further than I did in this answer.

5gon12eder
  • 6,956
  • 2
  • 23
  • 29
  • 4
    `Assertions are great, but ... you will turn them off sooner or later.` - Hopefully sooner, like before the code ships. Things that need to make the program die in production should be part of the "real" code, not in assertions. – Blrfl Jan 10 '16 at 04:27
4

I find that over time I write fewer asserts because many of them amount to "is the compiler working" and "is the library working". Once you start thinking about what exactly you're testing I suspect you'll write fewer asserts.

For example, a method that (say) adds something to a collection shouldn't need to assert that the collection exists - that's generally either a precondition of the class that owns the method or it's a fatal error that should make it back to the user. So check it once, very early on, then assume it.

Assertions to me are a debugging tool, and I'll generally use them in two ways: finding a bug at my desk (these don't get checked in - well, perhaps the one key one might), and finding a bug on the customer's desk (these do get checked in). Both times I'm using assertions mostly to generate a stack trace after forcing a failure as early as possible. Be aware that assertions used this way can easily lead to heisenbugs - the bug may well never occur in the debug build that has assertions enabled.

  • 4
    I don't get your point when you say *“that's generally either a precondition of the class that owns the message or it's a fatal error that should make it back to the user. So check it once, very early on, then assume it.”* What are you using assertions for if not for verifying your assumptions? – 5gon12eder Jan 06 '16 at 22:43
4

Too few assertions: good luck changing that code riddled with hidden assumptions.

Too many assertions: can lead to readability problems and potentially code smell - is the class, function, API designed right when it has so many assumptions placed in assert statements?

There can also be assertions that do not really check anything, or that check things like compiler settings in every function :/

Aim for the sweet spot, and if you must err, err on the side of more (as someone else already said, "more" assertions are less harmful than too few or, god help us, none).

MaR
  • 702
  • 5
  • 8
3

It would be awesome if you could write an assert facility that accepted only a reference to a boolean const method; that way you could be certain your asserts have no side effects, since only a const method is used to evaluate them.

It would cost a bit of readability, especially since I don't think you can annotate a lambda (in C++0x) as const with respect to some class, which means you can't use lambdas for that.

Overkill if you ask me, but if I started seeing a certain level of pollution due to asserts, I would be wary of two things:

  • making sure no side effects happen inside the asserts (provided by a construct as explained above)
  • performance during development testing; this can be addressed by adding levels (as with logging) to the assert facility, so you can disable some asserts in a development build to improve performance
lurscher
  • 341
  • 3
  • 13
2

I want to work with you! Someone who writes a lot of asserts is fantastic. I don't know if there's such a thing as "too many". Far more common to me are people who write too few and ultimately end up running into the occasional deadly UB issue which only shows up on a full moon which could have been easily reproduced repeatedly with a simple assert.

Fail Message

The one thing I can think of is to embed failure information into the assert if you aren't doing it already, like so:

assert(n >= 0 && n < num && "Index is out of bounds.");

If you weren't already doing this, you might no longer feel like you have too many, as your asserts now play a stronger role in documenting assumptions and preconditions.

Side Effects

Of course assert can actually be misused and introduce errors, like so:

assert(foo() && "Call to foo failed!");

... if foo() triggers side effects, so you should be very careful about that, but I'm sure you are already as one who asserts very liberally (an "experienced asserter"). Hopefully your testing procedure is also as good as your careful attention to assert assumptions.

Debugging Speed

While the speed of debugging should generally be at the bottom of our priority list, one time I ended up asserting so much in a codebase that running the debug build through the debugger was over 100 times slower than the release build.

It was primarily because I had functions like this:

vec3f cross_product(const vec3f& lhs, const vec3f& rhs)
{
    return vec3f
    (
        lhs[1] * rhs[2] - lhs[2] * rhs[1],
        lhs[2] * rhs[0] - lhs[0] * rhs[2],
        lhs[0] * rhs[1] - lhs[1] * rhs[0]
    );
}

... where every single call to operator[] would do a bounds-checking assertion. I ended up replacing some of the performance-critical ones with unsafe equivalents that don't assert, just to speed up the debug build drastically, at a minor cost to implementation-detail-level safety. I only did this because the speed hit was starting to very noticeably degrade productivity, making the benefit of faster debugging outweigh the cost of losing a few asserts, and only for functions like this cross product, which was being used in the most critical, measured paths, not for operator[] in general.
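One way to sketch that trade-off (hypothetical names; the original vec3f is not shown in full) is to keep the checked operator[] for general use and add an explicitly named unchecked accessor for the measured hot paths:

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical sketch: checked operator[] for general use, plus an
// explicitly named unchecked accessor for measured hot paths.
class vec3f
{
public:
  vec3f(float x, float y, float z) noexcept : e_{x, y, z} {}

  float operator[](std::size_t i) const noexcept
  {
    assert(i < 3);    // bounds check, active in debug builds
    return e_[i];
  }

  float unchecked(std::size_t i) const noexcept
  {
    return e_[i];     // caller guarantees i < 3
  }

private:
  float e_[3];
};

// Hot-path cross product written against the unchecked accessor.
vec3f cross_product(const vec3f& a, const vec3f& b) noexcept
{
  return vec3f(a.unchecked(1) * b.unchecked(2) - a.unchecked(2) * b.unchecked(1),
               a.unchecked(2) * b.unchecked(0) - a.unchecked(0) * b.unchecked(2),
               a.unchecked(0) * b.unchecked(1) - a.unchecked(1) * b.unchecked(0));
}
```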

Single Responsibility Principle

While I don't think you can really go wrong with more asserts (at least it's far, far better to err on the side of too many than too few), the asserts themselves may not be a problem but may be indicating one.

If you have, say, 5 assertions for a single function call, that function might be doing too much, or its interface might have too many preconditions and input parameters. I consider that unrelated to the topic of what constitutes a healthy number of assertions (for which I'd generally respond, "the more the merrier!"), but it might be a possible red flag (or very possibly not).

  • 1
    Well, there can be "too many" asserts in theory, though that problem gets obvious really fast: If the assert takes considerably longer than the meat of the function. Admittedly, I can't remember having found that in the wild yet, the opposite problem is prevalent though. – Deduplicator Jan 06 '16 at 23:46
  • @Deduplicator Ah yeah, I encountered that case in those critical vector math routines. Though it definitely seems a lot better to err on the side of too many than too few! –  Jan 06 '16 at 23:47
2

I've written in C# much more than I did in C++, but the two languages are not terribly far apart. In .Net I do use Asserts for conditions that should not happen, but I also often throw exceptions when there is no way to continue. The VS2010 debugger shows me plenty of good info on an exception, no matter how optimized the Release build is. It is also a good idea to add unit tests if you can. Sometimes logging is also a good thing to have as a debugging aid.

So, can there be too many asserts? Yes. Choosing between Abort/Ignore/Continue 15 times in one minute gets annoying. An exception gets thrown only once. It is hard to quantify the point at which there are too many asserts, but if your assertions fulfill the role of assertions, exceptions, unit tests and logging, then something is wrong.

I would reserve assertions for the scenarios that should not happen. You may over-assert initially, because assertions are faster to write, but re-factor the code later - turn some of them into exceptions, some into tests, etc. If you have enough discipline to clean up every TODO comment, then leave a comment next to each one that you plan to rework, and DO NOT FORGET to address the TODO later.

Job
  • 6,459
  • 3
  • 32
  • 54
  • If your code fails 15 assertions per minute, I think there is a bigger problem involved. Assertions should never fire in bug-free code, and if they do, they should kill the application to prevent further damage or drop you into a debugger to see what is going on. – 5gon12eder Jan 06 '16 at 22:46
-1

It's very reasonable to add checks to your code. For plain assert (the one built into C and C++) my usage pattern is that a failed assert means there is a bug in the code which needs to be fixed. I interpret this a bit generously; if I expect a web request to return a status 200 and assert for it without handling other cases, then a failed assertion does indeed show a bug in my code, so the assert is justified.

So when people say an assert that only checks what the code does is superfluous, that's not quite right. The assert checks what they think the code does, and the whole point of the assert is to verify that the assumption "there is no bug in this code" is right. The assert can serve as documentation as well. If I assume that after executing a loop i == n, and it isn't 100% obvious from the code, then assert(i == n) will be helpful.

It's better to have more than just "assert" in your repertoire to handle different situations. For example, there is the situation where I check for something that shouldn't happen and would indicate a bug, but where I can still continue by working around the condition. (For example, if I use some cache, I might check for errors, and if an error happens unexpectedly, it may be safe to recover by throwing the cache away. I want something that is almost an assert: it tells me during development, but still lets me continue.)

Another example is the situation where I don't expect something to happen and have a generic workaround, but if it does happen, I want to know about it and examine it. Again, something almost like an assert that should tell me during development, but not quite an assert.
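Such an "almost assert" could be sketched like this (hypothetical EXPECT macro: it reports the violated expectation in debug builds, but execution continues along the fallback path):

```cpp
#include <cstdio>

// Hypothetical "soft assert": reports a violated expectation when NDEBUG
// is not defined, but lets the program continue with its workaround.
#ifndef NDEBUG
#  define EXPECT(CONDITION)                                        \
     do {                                                          \
       if (!(CONDITION))                                           \
         std::fprintf(stderr, "%s:%d: expectation failed: %s\n",   \
                      __FILE__, __LINE__, #CONDITION);             \
     } while (false)
#else
#  define EXPECT(CONDITION) ((void) 0)
#endif

// Example: an unexpected cache failure is reported, then worked around
// by falling back to the recomputed value.
int lookup_with_cache_fallback(bool cache_ok, int cached, int recomputed)
{
  EXPECT(cache_ok);  // should not happen, but is survivable
  return cache_ok ? cached : recomputed;
}
```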

Too many asserts: if a failed assert crashes your program when it is in the hands of the user, then you must not have any asserts that fire spuriously.

gnasher729
  • 42,090
  • 4
  • 59
  • 119
-3

It depends. If the code requirements are clearly documented, then the assertions should always match the requirements, in which case they are a good thing. However, if there are no requirements, or the requirements are badly written, it will be hard for new programmers to edit the code without referring to the unit tests each time to figure out what the requirements are.

  • 3
    this doesn't seem to offer anything substantial over points made and explained in prior 8 answers – gnat Jan 10 '16 at 15:22