62

So I am working on a software design using C for a certain processor. The tool-kit includes the ability to compile C as well as C++. For what I am doing, there is no dynamic memory allocation available in this environment and the program is overall fairly simple. Not to mention that the device has almost no processor power or resources. There's really no strong need to use any C++ whatsoever.

That being said, there are a few places where I do function overloading (a feature of C++). I need to send a few different types of data and don't feel like using printf style formatting with some kind of %s (or whatever) argument. I've seen some people that didn't have access to a C++ compiler doing the printf thing, but in my case C++ support is available.

Now I'm sure I might get the question of why I need to overload a function to begin with. So I'll try to answer that right now. I need to transmit different types of data out a serial port so I have a few overloads that transmit the following data types:

unsigned char*
const char*
unsigned char
const char

I'd just prefer not to have one method that handles all of these things. When I call the function I just want it to transmit out the serial port; I don't have a lot of resources, so I want to do barely anything beyond the transmission itself.

Someone else saw my program and asked me, "why are you using CPP files?" So, that's my only reason. Is that bad practice?

Update

I'd like to address some questions asked:

An objective answer to your dilemma will depend on:

  1. Whether the size of the executable grows significantly if you use C++.

As of right now the size of the executable consumes 4.0% of program memory (out of 5248 bytes) and 8.3% of data memory (out of 342 bytes). That is, compiling for C++... I don't know what it would look like for C because I haven't been using the C compiler. I do know that this program will not grow any more, so for how limited the resources are I'd say I'm okay there...

  2. Whether there is any noticeable negative impact on performance if you use C++.

Well if there is, I haven't noticed anything... but then again that could be why I'm asking this question since I don't fully understand.

  3. Whether the code might be reused on a different platform where only a C compiler is available.

I know that the answer to this is definitely no. We are actually considering moving to a different processor, but only more powerful ARM-based processors (all of which I know for a fact have C++ compiler tool-chains).

Snoop
  • 2,718
  • 5
  • 24
  • 52
  • 60
    I've been known to use C++ for a project only using C features just so that I can have `//` comments. If it works, why not? – Jules Apr 20 '17 at 19:43
  • 77
    The bad practice would be restricting yourself to C when you have a good use for features it doesn't provide. – Jerry Coffin Apr 20 '17 at 20:00
  • 2
    If you decide to stick with C (which IMHO is the right thing to do) there is [generic selection](http://en.cppreference.com/w/c/language/generic) which may help you (directly look at the examples). – FISOCPP Apr 20 '17 at 23:03
  • 35
    When you say "use a C++ compiler" you mean "use C++". Just say it. You can't compile C with a C++ compiler, but you *can* switch from C to C++ easily, which is what you'd actually be doing. – user253751 Apr 20 '17 at 23:04
  • 4
    "For what I am doing, there is no dynamic memory allocation on the processor and the program is overall fairly simple. Not to mention that the device has almost no processor power or resources. There's really no strong need to use any C++ whatsoever." I hope the first two of those sentences are supposed to be reasons not to use C++, because they're pretty bad ones if they are. C++ is perfectly fine to use with embedded systems. – Pharap Apr 21 '17 at 01:06
  • 24
    @Jules I’m sure you know this and were thinking back a while, but in case somebody reading this doesn’t: `//` comments have been in the C standard since C99. – Davislor Apr 21 '17 at 01:54
  • @Davislor - actually, I haven't worked on a plain C project since 1997... – Jules Apr 21 '17 at 04:21
  • @Jules I worked on a plain C project (for my job), then one day I decided I wanted to use `std::string`, so I made it a mixed C/C++ project. It was very painless - I just changed my file extension to cpp and added the appropriate compilation rule. – user253751 Apr 21 '17 at 08:28
  • @DocBrown Why didn't you just edit it? You're correct, I see what you're saying so... sure I can edit it. – Snoop Apr 21 '17 at 12:00
  • @DocBrown Yeah, but I read your response and agree with your statements. You make the edit, I insist. – Snoop Apr 21 '17 at 12:20
  • @Jules There is/was a C compiler that doesn't allow you to have `//` comments!? – Snoop Apr 21 '17 at 12:41
  • 2
    [C already supports overloading](http://stackoverflow.com/q/479207/1366431). – Alex Celeste Apr 21 '17 at 14:01
  • 1
    @Snoopy You didn't tag this with embedded-systems. Does code actually differ based on type (other than value vs pointer)? Typically, I always used (void *) when blasting things to memory mapped device/port. – mpdonadio Apr 21 '17 at 14:56
  • @mpdonadio I thought about that, but if I wanted to ask about it in the context of a specific embedded device I'd probably just go to the vendor forum or website and ask... So I decided against it. What do you think? And, I don't really know how to answer your question of whether it differs based on type. – Snoop Apr 21 '17 at 15:04
  • 1
    We also use C++ for our embedded projects on (sometimes) very small controllers (<1k RAM), and features like overloading, static_assert, namespaces, stricter typing to avoid bugs, even templates (e.g. for typesafe memcpy), const correctness, etc. improved our code. Good C++ is what fits your application, cherrypicking features is fine if you know why you do it (see e.g. the JSF++ coding standard by Bjarne). At least parts of the C++ community think so as well, e.g. see https://youtu.be/D7Sd8A6_fYU, in your case just don't forget to switch off RTTI and exceptions... – Andreas Wallner Apr 23 '17 at 09:50
  • 'Bad practice' compared to what? The question is meaningless without a referent. There is no such thing as 'good practice' and 'bad practice'. The choice you have to make is between this and some unstated alternative, and the choice should be made primarily on cost grounds and secondarily on 'technical debt' grounds, i.e. maintainability. – user207421 Apr 23 '17 at 11:43
  • @mpdonadio Looking back at this... Do you have any reference as to where you learned about *used (void *) when blasting things to memory mapped device/port* I'd like to know more. – Snoop Jun 12 '17 at 09:58

10 Answers

79

I wouldn't go so far as to call it "bad practice" per se, but neither am I convinced it's really the right solution to your problem. If all you want is four separate functions for your four data types, why not do what C programmers have done since time immemorial:

void transmit_uchar_buffer(unsigned char *buffer);
void transmit_char_buffer(char *buffer);
void transmit_uchar(unsigned char c);
void transmit_char(char c);

That's effectively what the C++ compiler is doing behind the scenes anyway and it's not that big of an overhead for the programmer. Avoids all the problems of "why are you writing not-quite-C with a C++ compiler", and means nobody else on your project is going to be confused by which bits of C++ are "allowed" and which bits aren't.

Philip Kendall
  • 22,899
  • 9
  • 58
  • 61
  • 8
    I'd say why not is because the type being transmitted is (probably) an implementation detail, so hiding it and letting the compiler deal with selecting the implementation may well result in more readable code. And if using a C++ feature improves readability, then why not do that? – Jules Apr 20 '17 at 19:47
  • 30
    One can even `#define` transmit() using C11 `_Generic` – Deduplicator Apr 20 '17 at 19:48
  • 16
    @Jules Because it's then very confusing as to what C++ features are allowed to be used in the project. Would a potential change containing objects be rejected? What about templates? C++ style comments? Sure, you can work around that with a coding style document, but if all you're doing is a simple case of function overloading, then just write C instead. – Philip Kendall Apr 20 '17 at 20:57
  • 25
    @Phillip "C++ style comments" have been valid C for well over a decade. – David Conrad Apr 20 '17 at 22:05
  • 12
    @Jules: while the type being transmitted is probably an implementation detail for software that makes insurance quotes, the OPs application sounds to be an embedded system doing serial communication, where the type and datasize is of significant importance. – whatsisname Apr 21 '17 at 02:52
  • @Jules when the type being transmitted is in the function signature it's not being hidden anyway. – Ergwun Apr 21 '17 at 05:08
  • 1
    @Ergwun - OK, "hidden" is perhaps an overstatement of what I meant, which is really that the extra redundancy of repeating the type every time the value is used isn't useful when you can easily find the type from the declaration (or via IDE tools, etc). – Jules Apr 21 '17 at 05:23
  • 2
    @DavidConrad Fair point well made. Shame I can't edit my comment now. – Philip Kendall Apr 21 '17 at 10:40
  • 3
    @DavidConrad actually that's homing in on *two* decades... Sorry ;) – Quentin Apr 21 '17 at 11:41
  • 1
    @DavidConrad ""C++ style comments" have been valid C for well over a decade" - in the standards perhaps, but he doesn't say that his compiler isn't C89 or even older (although perhaps there being a C++ version might mean it's not that old). – Neil Apr 21 '17 at 12:11
  • @Neil That's a fair point, although a lot of C89 compilers allowed them as an extension, which is partly why the committee standardized them in C99. – David Conrad Apr 21 '17 at 18:34
  • 1
    quibbling about the example of comment syntax is silly -- that's not the point -- the point is that use of a C++ compiler risks allowing any and/or all C++ features to creep in without any possibility of a warning. One should either accept to use _all_ of C++, or none of it. – Greg A. Woods Apr 22 '17 at 20:30
  • @GregA.Woods that seems a bit harsh, I do not see a reason why anyone would be better off by a mandate to use all C++ features. We also use C++ for embedded development (w/o dynamic memory allocs, etc.). Yes, you might have to think about the impact of some stuff more deeply, but our code is much more maintainable because of the improved type system, static asserts, namespaces, overloading, stricter guarantees allowing for better compiler optimizations, etc. Close to no project will use all C++ features and demanding so seems weird. Good C++ IMO is what fits your target application. – Andreas Wallner Apr 23 '17 at 09:40
  • Unless one has good independent tooling for detecting language violations, or you're dealing with a trivial-sized code base and you have superbly skilled programmers, one cannot possibly make any claims about whether or not the result honours the commitment to use only a limited subset of a different language. If you can't restrict yourself to C, then use C++ and admit that you're using C++ and be prepared for any and all C++ features to enter your code base (or write your own compiler for your own non-standard language and make sure only it can be used!). – Greg A. Woods Apr 23 '17 at 17:28
56

Using only some features of C++ while otherwise treating it as C is not exactly common, but also not exactly unheard of either. In fact, some people even use no features at all of C++, except the stricter and more powerful type checking. They simply write C (taking care to write only in the common intersection of C++ and C), then compile with a C++ compiler for type checking, and a C compiler for code generation (or just stick with a C++ compiler all the way).

Linux is an example of a codebase where people regularly ask that identifiers like class get renamed to klass or kclass, so that they can compile Linux with a C++ compiler. Obviously, given Linus's opinion of C++, they always get shot down :-D GCC is an example of a codebase that was first transformed to "C++-clean C", and then gradually refactored to use more C++ features.

There is nothing wrong with what you are doing. If you are really paranoid about the code generation quality of your C++ compiler, you can use a compiler like Comeau C++, which compiles to C as its target language, and then use the C compiler. That way, you can also do spot inspections of the code to see if using C++ injects any unforeseen performance-sensitive code. That shouldn't be the case for just overloading, though, which is literally just automatically generating differently named functions – IOW, exactly what you would be doing in C anyway.

Jörg W Mittag
  • 101,921
  • 24
  • 218
  • 318
15

An objective answer to your dilemma will depend on:

  1. Whether the size of the executable grows significantly if you use C++.
  2. Whether there is any noticeable negative impact on performance if you use C++.
  3. Whether the code might be reused on a different platform where only a C compiler is available.

If the answer to any of these questions is "yes", you might be better off creating differently named functions for different data types and sticking with C.

If the answer to all of them is "no", I see no reason why you should not use C++.

R Sahu
  • 1,966
  • 10
  • 15
  • 9
    I must say I've never encountered a C++ compiler that generates significantly worse code than a C compiler for code written in the shared subset of the two languages. C++ compilers get a bad rep for both size and performance, but my experience is it's always inappropriate use of C++ features that caused the issue ... Particularly, if you're concerned about size, don't use iostreams and don't use templates, but otherwise you should be fine. – Jules Apr 20 '17 at 19:55
  • @Jules: Just for what it's worth (not much, IMO) I have seen what was sold as a single compiler for C and C++ (Turbo C++ 1.0, if memory serves) produce significantly different results for identical input. As I understand it, however, this was before they'd finished their own C++ compiler, so even though it looked like one compiler on the outside, it really had two entirely separate compilers--one for C, the other for C++. – Jerry Coffin Apr 20 '17 at 21:12
  • 1
    @JerryCoffin If memory serves, the Turbo products have never had a great reputation. And if it was a 1.0 version, you could be excused for not being highly refined. So it's probably not very representative. – Barmar Apr 20 '17 at 21:49
  • 10
    @Barmar: Actually, they did have a pretty decent reputation for quite a while. Their poor reputation now is due primarily to their sheer age. They're competitive with other compilers of the same vintage--but nobody posts questions about how to do things with gcc 1.4 (or whatever). But you're right--it's not very representative. – Jerry Coffin Apr 20 '17 at 21:53
  • @JerryCoffin Yeah, I guess you're right. It's like saying that the Model T was a pretty primitive automobile. At the time, it was probably the most high tech device its owners owned. – Barmar Apr 20 '17 at 22:11
  • 1
    @Jules I'd argue that even templates are fine. Despite popular theory, instantiating a template does not implicitly increase code size, the code size generally won't increase until a function from a template is used (in which case the size increase will be proportional to the function size) or unless the template is declaring static variables. The rules of inlining still apply, so it's possible to write templates where all functions are inlined by the compiler. – Pharap Apr 21 '17 at 01:17
  • @DocBrown, excellent point. Updated the answer. – R Sahu Apr 21 '17 at 03:49
  • @Jules On the performance issue: I did in fact encounter a major performance problem with `new` compared to `malloc()` once. Compiler was some `g++` version. Effectively, I could save about 100 CPU cycles per call by simply implementing `operator new()` as a call-through to `malloc()`... The standard C++ library is *vast*, and complicated due to all the involved templates. As such, you must expect it to be slightly less optimized than the rather small and simple C standard library. (Small and simple only by comparison, of course!) – cmaster - reinstate monica Apr 21 '17 at 08:51
  • @cmaster yes the supplied `operator new` was probably a more complex wrapper that called `malloc`, and checked for null and called the new-handler function. As for the *vast* library, I was very surprised to see complex-looking string streaming calls generate inline code that was *simple* and faster than `itoa` etc. The templates generate better code than the run-time handling of related types and similar cases in a single function. – JDługosz Apr 22 '17 at 19:14
  • @JDługosz The `NULL` check cannot explain the 100 CPU cycles. The call to the new-handler function and the `throw` are never executed anyway, so it's only the `NULL` check itself that's actually on the execution path, and that does exactly two things: 1. Compare a value in a register to zero, 2. take a conditional branch to skip over the new-handler call and the `throw` statement. The conditional branch is always taken if you are on linux as I am, so that's definitely less than ten cycles on average... – cmaster - reinstate monica Apr 23 '17 at 05:27
  • @cmaster I do recall that function picking up more and more cruft over the years. It predates exceptions, and then it had switchable behavior that checked a global variable to decide whether to throw. – JDługosz Apr 23 '17 at 06:01
  • For `printf`, I'd expect the C++ code to be smaller, because it doesn't need all the handlers for all escape sequences, just the ones you actually use. – Simon Richter Oct 14 '17 at 19:04
13

You pose the question as if compiling with C++ simply gave you access to more features. This is not the case. Compiling with a C++ compiler means that your source code is interpreted as C++ source, and C++ is a different language from C. The two have a common subset large enough to usefully program in, but each has features that the other lacks, and it is possible to write code that is acceptable in both languages but interpreted differently.

If truly all you want is function overloading, then I really don't see the point of confusing the issue by bringing C++ into it. Instead of different functions with the same name, distinguished by their parameter lists, just write functions with different names.

As for your objective criteria,

  1. Whether the size of the executable grows significantly if you use C++.

The executable may be slightly larger when compiled as C++, relative to similar C code built as such. At minimum, the C++ executable will have the dubious benefit of C++ name-mangling for all function names, to the extent that any symbols are retained in the executable. (And that's in fact what provides for your overloads in the first place.) Whether the difference is large enough to be important to you is something you would have to determine by experiment.

  2. Whether there is any noticeable negative impact on performance if you use C++.

I doubt you would see a noticeable performance difference for the code you describe vs. a hypothetical pure-C analog.

  3. Whether the code might be reused on a different platform where only a C compiler is available.

I'd throw a slightly different light on this: if you want to link the code you are building with C++ to other code, then either that other code will also need to be written in C++ and built with C++, or you'll need to make special provision in your own code (declaring "C" linkage), which, additionally, you cannot do at all for your overloaded functions. On the other hand, if your code is written in C and compiled as C, then others can link it with both C and C++ modules. This sort of problem can usually be overcome by turning the tables, but since you really don't seem to need C++ then why accept such an issue in the first place?

John Bollinger
  • 941
  • 5
  • 11
  • 4
    *"The executable may be slightly larger when compiled as C++...to the extent that any symbols are retained in the executable."* To be fair, though, any decent optimizing toolchain should have a linker option to strip these symbols in "release" builds, which means they'd only bloat your debugging builds. I don't think that's a major loss. In fact, it is more often a benefit. – Cody Gray - on strike Apr 22 '17 at 08:07
5

Back in the day, using a C++ compiler as “a better C” was promoted as a use case. In fact, early C++ was exactly that. The underlying design principle was that you could use only the features you wanted and would not incur the cost of features you were not using. So, you could overload functions (by declaring the intent via the overload keyword!) and the rest of your project not only would compile just fine but would generate code no worse than the C compiler would produce.

Since then, the languages have diverged somewhat.

Every malloc in your C code will be a type mismatch error (not a problem in your case, with no dynamic memory!). Likewise, all your void pointers in C will trip you up, as you will have to add explicit casts. And once you start asking "why am I doing it that way?", you will be led down the path of using C++ features more and more.

So, it might be possible with some extra work. But it will serve as a gateway to larger scale C++ adoption. In a few years people will complain about your legacy code that looks like it was written in 1989, as they replace your malloc calls with new, rip entire loop bodies out in favor of a call to a standard algorithm, and berate the unsafe fake polymorphism that would have been trivial had the compiler been allowed to do it.

On the other hand, you know that it would be all the same had you written it in C, so is it ever wrong to write in C instead of C++ given its existence? If the answer is “no”, then using cherry-picked features from C++ can’t be wrong either.

JDługosz
  • 568
  • 2
  • 9
2

Many programming languages have one or more cultures that develop around them, and have particular ideas about what a language is supposed to "be". While it should be possible, practical, and useful to have a low-level language suitable for systems programming which augments a dialect of C suitable for such purpose with some features from C++ which would likewise be suitable, the cultures surrounding the two languages don't particularly favor such a merger.

I've read of an embedded C compiler that incorporated some features from C++, including the ability to have

foo.someFunction(a,b,c);

interpreted as, essentially

typeOfFoo_someFunction(foo, a, b, c);

as well as the ability to overload static functions (name mangling issues only arise when exported functions are overloaded). There's no reason C compilers shouldn't be able to support such functionality without imposing any additional requirements on the linking- and run-time environment, but many C compilers' reflexive rejection of almost everything from C++, in the interest of avoiding environmental burdens, causes them to reject even features that would impose no such cost. Meanwhile, the culture of C++ has little interest in any environment that can't support all the features of that language. Unless or until the culture of C changes to accept such features, or the culture of C++ changes to encourage implementations of lightweight subsets, "hybrid" code is apt to suffer from many of the limitations of both languages while being limited in its ability to reap the benefits of operating in a combined superset.

If a platform supports both C and C++, the extra library code required to support C++ will likely make the executable somewhat larger, though the effect will not be particularly great. Execution of "C features" within C++ will likely not be particularly affected. A bigger issue may be how C and C++ optimizers handle various corner cases. The behavior of data types in C is mostly defined by the contents of the bits stored therein, and C++ has a recognized category of structures (PODs: Plain Old Data Structures) whose semantics are likewise mostly defined by the stored bits. Both the C and C++ standards, however, sometimes allow behavior contrary to that implied by the stored bits, and the exceptions to "stored-bits" behavior differ between the two languages. Systems programming often requires behavioral guarantees beyond those mandated by either language's specification, but because of the difference between the cultures of C and C++, implementations for the two languages may offer different guarantees.

supercat
  • 8,335
  • 22
  • 28
  • 1
    Interesting opinion but doesn't address the OP's question. – R Sahu Apr 21 '17 at 17:26
  • @RSahu: Code which fits the "culture" surrounding a language is apt to be better supported and more easily adaptable to a variety of implementations than code which doesn't. The cultures of C and C++ are both averse to the kind of usage the OP suggests, I'll add some more specific points with regard to the specific questions. – supercat Apr 21 '17 at 18:22
  • 1
    Note that there is an embedded/financial/gaming group in the C++ standard committee that is aimed at solving exactly the kind of "little interest" situations you claim are discarded. – Yakk Apr 22 '17 at 17:13
  • @Yakk: I'll confess I've not been following the C++ world terribly closely, since my interests lie mainly with C; I know there have been efforts toward more embedded dialects, but I hadn't thought they'd really gotten anywhere. – supercat Apr 23 '17 at 17:30
2

Another important factor to consider: whoever will inherit your code.

Is that person always going to be a C programmer with a C compiler? I suppose so.

Will that person also be a C++ programmer with a C++ compiler? I would want to be reasonably sure about this before making my code depend on C++ specific things.

reinierpost
  • 587
  • 6
  • 8
2

Polymorphism is a really good feature that C++ provides for free. However, there are other aspects that you may need to consider when choosing a compiler. There is an alternative for polymorphism in C, but its use can cause some raised eyebrows: the variadic function; please refer to a Variadic function tutorial.

In your case it would be something like

#include <stdarg.h>

typedef enum serialDataTypes
{
    INT   = 0,
    INT_P = 1,
    ....
} Arg_type_t;
....
....
void transmit(Arg_type_t serial_data_type, ...)
{
    va_list args;
    va_start(args, serial_data_type);

    switch (serial_data_type)
    {
    case INT:
    {
        int value = va_arg(args, int);
        // Send the integer out the serial port
        break;
    }
    case INT_P:
    {
        int *p = va_arg(args, int *);
        // Send the data at the integer pointer
        break;
    }
    ...
    }
    va_end(args);
}

I like Philip's approach, but it litters your library with a lot of calls. With the above, the interface is clean. It does have its drawbacks, and ultimately it's a matter of choice.

  • Variadic functions primarily serve the use case where both the number of arguments *and* their types are variable. Otherwise, you don't need it, and in particular, it is overkill for your particular example. It would be simpler to use an ordinary function that accepts an `Arg_type_t` and a `void *` pointing to the data. With that said, yes, giving the (single) function an argument that indicates the data type is indeed a viable alternative. – John Bollinger Apr 27 '17 at 13:55
1

Is it bad practice to use a C++ compiler just for function overloading?

In my opinion, yes. I'll need to be of two minds to answer this one, since I love both languages, but it has nothing to do with efficiency and everything to do with safety and idiomatic use of the languages.

C Side

From a C standpoint, I find it so wasteful to make your code require C++ just to use function overloading. Unless you are utilizing it for static polymorphism with C++ templates, it's such trivial syntactical sugar gained in exchange for switching to an entirely different language. Further if you ever want to export your functions to a dylib (may or may not be a practical concern), you can no longer do so very practically for widespread consumption with all the name-mangled symbols.

C++ Side

From a C++ standpoint, you shouldn't be using C++ like C with function overloading. This is not stylistic dogmatism but one related to practical use of everyday C++.

Your normal kind of C code is only reasonably sane and "safe" to write if you're working against the C type system which forbids things like copy ctors in structs. Once you're working in C++'s much richer type system, daily functions which are of enormous value like memset and memcpy don't become functions you should lean on all the time. Instead, they're functions you generally want to avoid like the plague, since with C++ types, you shouldn't be treating them like raw bits and bytes to be copied and shuffled around and freed. Even if your code only uses things like memset on primitives and POD UDTs at the moment, the moment anyone adds a ctor to any UDT you use (including just adding a member which requires one, like std::unique_ptr member) against such functions or a virtual function or anything of that sort, it renders all of your normal C-style coding susceptible to undefined behavior. Take it from Herb Sutter himself:

memcpy and memcmp violate the type system. Using memcpy to copy objects is like making money using a photocopier. Using memcmp to compare objects is like comparing leopards by counting their spots. The tools and methods might appear to do the job, but they are too coarse to do it acceptably. C++ objects are all about information hiding (arguably the most profitable principle in software engineering; see Item 11): Objects hide data (see Item 41) and devise precise abstractions for copying that data through constructors and assignment operators (see Items 52 through 55). Bulldozing over all that with memcpy is a serious violation of information hiding, and often leads to memory and resource leaks (at best), crashes (worse), or undefined behavior (worst) -- C++ Coding Standards.

So many C developers would disagree with this and rightly so, since the philosophy only applies if you are writing code in C++. You most likely are writing very problematic code if you use functions like memcpy all the time in code that builds as C++, but it's perfectly fine if you do it in C. The two languages are very different in this regard because of the differences in the type system. It's very tempting to look at the subset of features these two have in common and believe one can be used like the other, especially on the C++ side, but C+ code (or C-- code) is generally far more problematic than both C and C++ code.

Likewise you shouldn't be using, say, malloc in a C-style context (which implies no EH) if it can call any C++ functions directly which can throw, since then you have an implicit exit point in your function as a result of the exception which you can't effectively catch writing C-style code, prior to being able to free that memory. So whenever you have a file that builds as C++ with a .cpp extension or whatever and it does all these types of things like malloc, memcpy, memset, qsort, etc, then it is asking for problems further down the line if not already unless it is the implementation detail of a class that only works with primitive types, at which point it still needs to do exception-handling to be exception-safe. If you're writing C++ code you instead want to generally rely on RAII and use things like vector, unique_ptr, shared_ptr, etc, and avoid all normal C-style coding when possible.

The reason you can play with razor blades in C and x-ray data types and play with their bits and bytes without being prone to causing collateral damage in a team (though you can still really hurt yourself either way) is not because of what C types can do, but because of what they'll never be able to do. The moment you extend C's type system to include C++ features like ctors, dtors, and vtables, along with exception-handling, all idiomatic C code would be rendered far, far more dangerous than it currently is, and you will see a new kind of philosophy and mindset evolving which will encourage a completely different style of coding, as you see in C++, which now considers even using a raw pointer malpractice for a class that manages memory as opposed to, say, a RAII-conforming resource like unique_ptr. That mindset didn't evolve out of an absolute sense of safety. It evolved out of what C++ specifically needs to be safe against features like exception-handling given what it merely allows through its type system.

Exception-Safety

Again, the moment you are in C++ land, people are going to expect your code to be exception-safe. People might maintain your code in the future, given that it's already written and compiles in C++, and simply use std::vector, dynamic_cast, unique_ptr, shared_ptr, etc. in code called either directly or indirectly by your code, believing it to be innocuous since your code is already "supposedly" C++ code. At that point we have to face the chance that things will throw, and then when you take perfectly fine and lovely C code like this:

int some_func(int n, ...)
{
    int* x = calloc(n, sizeof(int));
    if (x)
    {
        f(n, x); // some function which, now being a C++ function, may 
                 // throw today or in the future.
        ...
        free(x);
        return success;
    }
    return fail;
}

... it's now broken. It needs to be rewritten to be exception-safe:

int some_func(int n, ...)
{
    int* x = calloc(n, sizeof(int));
    if (x)
    {
        try
        {
            f(n, x); // some function which, now being a C++ function, may 
                     // throw today or in the future (maybe someone used
                     // std::vector inside of it).
        }
        catch (...)
        {
            free(x);
            throw;
        }
        ...
        free(x);
        return success;
    }
    return fail;
}

Gross! Which is why most C++ developers would demand this instead:

void some_func(int n, ...)
{
    std::vector<int> x(n);
    f(x); // some function which, now being a C++ function, may throw today
          // or in the future.
}

The above is RAII-conforming, exception-safe code of the kind C++ developers would generally approve, since the function won't leak no matter which line of code triggers an implicit exit as a result of a throw.
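If the buffer genuinely has to remain a raw C allocation (say it is later handed to a C API that will call free on it), there is a middle-ground sketch worth knowing: a unique_ptr with a custom deleter gives the same exception-safety as the try/catch version without the try/catch. This is not from the answer above; the function name is hypothetical:

```cpp
#include <cstdlib>
#include <memory>

// Deleter that releases memory obtained from calloc/malloc.
struct FreeDeleter {
    void operator()(void* p) const { std::free(p); }
};

// Hypothetical rewrite of some_func: the calloc'd buffer is owned by a
// unique_ptr, so any throw after the allocation still frees it during
// stack unwinding -- no explicit catch needed.
int some_func_raii(int n)
{
    std::unique_ptr<int[], FreeDeleter> x(
        static_cast<int*>(std::calloc(static_cast<std::size_t>(n), sizeof(int))));
    if (!x)
        return 1;  // fail
    // ... work with x.get() as the raw buffer; pass it to C APIs that
    // merely borrow it. If anything here throws, the deleter runs.
    x[0] = 42;
    return 0;      // success
}
```

The design point is the same one the answer makes: ownership lives in a type, not in manually paired free calls.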

Choose a Language

You should either embrace C++'s type system and philosophy with RAII, exception-safety, templates, OOP, etc. or embrace C which largely revolves around raw bits and bytes. You shouldn't form an unholy marriage between these two languages, and instead separate them into distinct languages to be treated very differently instead of blurring them together.

These languages want to marry you. You generally gotta pick one instead of dating and fooling around with both. Or you can be a polygamist like me and marry both, but you have to completely switch your thinking when spending time with one over the other and keep them well-separated from each other so that they don't fight each other.

Binary Size

Just out of curiosity, I tried porting my free list implementation and its benchmark to C++ just now, since I got really curious about this:

[...] don't know what it would look like for C because I haven't been using the C compiler.

... and wanted to know if the binary size would inflate at all just building as C++. It required me to sprinkle explicit casts all over the place which was fugly (one reason I like actually writing low-level things like allocators and data structures in C better) but only took a minute.

This was just comparing an MSVC 64-bit release build for a simple console app, with code that didn't use any C++ features, not even operator overloading -- just the difference between building it as C versus using, say, <cstdlib> instead of <stdlib.h> and things like that, but I was surprised to find it made zero difference to the binary size!

The binary was 9,728 bytes when it was built as C, and likewise 9,728 bytes when compiled as C++ code. I actually didn't expect that. I thought things like EH would add at least a little bit there (at least a hundred bytes, I figured), though the compiler was probably able to figure out that there was no need to add EH-related instructions since I'm just using the C standard library and nothing throws. I also thought something like RTTI would add a little to the binary size either way. Anyway, it was kinda cool to see that. Of course you shouldn't generalize from this one result, but it at least impressed me a bit. It also made no impact on the benchmarks, and naturally so, since I imagine the identical binary size also meant identical machine instructions.

That said, who cares about binary size with the safety and engineering issues mentioned above? So again, pick a language and embrace its philosophy instead of trying to bastardize it; that's what I recommend.


For your particular case of statically overloading a "function" for a few different types, you might consider instead just using C11 with its _Generic macro machinery. I feel it might be enough for your limited needs.

Using Philip Kendall's answer you could define:

#define transmit(X) _Generic((X),          \
    unsigned char*: transmit_uchar_buffer, \
    char*: transmit_char_buffer,           \
    unsigned char: transmit_uchar,         \
    char: transmit_char) ((X))

and write transmit(foo) whatever the type of foo is (among the four types listed above).

If you care only about GCC (and compatible compilers, e.g. Clang), you could consider its __builtin_types_compatible_p along with its typeof extension.

Basile Starynkevitch