C++ architectural preferences
2026-03-02 09:30 by Ian
I've done firmware architecture for at least four companies (depending on how you count it). This is a running log of my observations and experiences building embedded code bases to solve common business problems in an engineering org.
It becomes progressively more expensive to find bugs as more of the software lifecycle is traversed before finding them.
That is, it is always cheaper to find a bug at build time than during unit testing.
The most expensive way to find a bug is to be informed of it by a customer.
Therefore, any bug that can be systematically prevented by failing a build is ipso facto preferable to a bug that passes the build. This will be a theme in feature selection.
Features I encourage
Strict pointer type checking
We will get strict pointer type enforcement by default, just by naming the file .cpp.

uint8* buf = malloc(42);

...suddenly becomes a build-breaking error, and will need to be changed to cast the type explicitly:

uint8* buf = (uint8*) malloc(42);

PRO: Type-punning is never permitted. Immediate reliability benefits.
CON: For already-existing codebases, someone needs to go clean up all the sloppy allocations. It usually takes only a few man-hours to do this, unless bugs are discovered in the process.
There are usually bugs discovered in the process.
enum class
PRO: Forces code to either stay within a static namespace, or cast explicitly. Big buff to reliability. Finds hidden bugs. No runtime cost.
CON: Makes enums slightly more verbose. MY_ENUM_KEY becomes EnumName::MY_ENUM_KEY, for instance.
CON: Will break builds until all uses of the enum are disciplined. See "Pros" above.
Function overloading
PRO: Legibility++; Brevity++;
PRO: Allows the deprecation of macros like min() and max().
CON: Overloaded names get mangled, which complicates finding symbols in map files and requires extern "C" wrappers for anything that must keep C linkage.
Templates
PRO: Implements logic without regard for types. Don't repeat yourself.
Code is replicated by the compiler for each type, thus simplifying logic that would otherwise need to account for differences in size or algebra between two given types (e.g., vector cross products are not commutative, and float multiplication is not associative). Allows such issues to be handled by mages practiced at those crafts, and isolates everyone else from needing to care.
PRO: Reduces complexity of the binary, vis-a-vis writing the same logic twice or type-punning.
Side note about type-punning: If type-punning of pointers is in use anywhere in the codebase, the upside to templates dwarfs the worst of their possible downsides. Type-punning is a leading cause of RedBull overdose.
CON: Teams can get carried away (or write sloppy templates), and triple the sizes of their builds in an afternoon.
CON: Careless or redundant typing can increase build size for no beneficial decrease in complexity.
Lambdas
PRO: Easy to read and possibly the tightest means of scoping code available in a computer language.
CON: If you think finding name-mangled functions in the linker's output is burdensome, lambdas will drive you bonkers.
CON: Tends to subvert object-oriented patterns unless C++17 or higher is supported by the compiler.
Some people prefer to write functionally, but even those that don't have used something analogous. Lambdas give all the important properties of a real function, without the namespace burden and clutter that would come with something flavored like
typedef void (*FxnPointer)();

It may be a bad idea in cases where you want precise linktime control over how and where a function is placed in the final binary. A lambda is just a function like any other, and can be relocated as you would otherwise expect. But by their nature, such functions are nameless. Without even a name to mangle, the compiler autogenerates something obtuse.
OO
- PRO: Elegant use of OO conceals complexity. See "CON", below.
- CON: Elegant use of OO conceals complexity. See "PRO", above.
Anything written below can be construed as a pro or a con, depending on your values. The choice to use OO (or not) has wide-ranging consequences (both "good" and "bad") for product reliability, engineer spin-up time/depth, company profitability, and even the kinds of engineers that choose to work on the software team. If we chose, we could use C++ without any OO whatsoever (what you may have heard described as "C-flavored C++", or simply "C+"). Some of the fastest and tightest software in the world is written this way, and it scales down to a coin cell fairly well, if done correctly.
Basically everyone who has written a non-trivial program in the past 20 years understands OO, even if they don't know much C++. The design practice leverages brain architecture that already works reliably in everyone, and it is difficult to overstate the value of this. It is the difference between a new engineer making his own RoI in four weeks versus six months (or a year).
This makes it easier to learn and reason about, but also easier to mis-apply or take for granted.
As a finished piece of clockwork, OO will introduce runtime overhead in three ways:
- Build size, since toolchain-provided features pull in supporting code for things such as allocators and any data types you lean on. It is the same bloat you'd see from using a 3rd-party library.
- Build size from vtables. Although small, they represent a per-call data access to memory that was probably linked as "read-only", and therefore might be stored in serial ROM.
- Execution time due to the point above. This is at the root of why many embedded engineers have a sour taste for C++ performance generally on embedded (especially ROMless parts). But a C program of comparable size could easily suffer the same drag from veneer functions. In either case, mitigating it demands level-30 magecraft be exercised in the linker scripts. ESP-IDF handles this automatically, IIRC. Vtable data should always be placed in IRAM, and usually isn't but a few KB.
Isolation of concerns makes it much easier to think about contracts, and that matters most for stateful programs (such as hardware drivers). Once a C program gets complicated enough, most programmers write their own OO to manage state rather than use the OO provided by C++. IE, they keep a struct of state and pass a pointer to it as the explicit first argument of every function that touches it. Unless we are striving to provide a pure-C API (as is X11's case), we might increase reliability/reasonability by renaming the file to .cpp, and adding private and protected designators. That benefit scales with the number of structs we define that have state-tracking members.
Problems such as concurrency, structure-packing, and memory management are suddenly better-bounded with language-enforced OO. If you have an enforceable contract, you can write tests. And if you can have tests, you can have automated enforcement (the CI pipeline).
And at the end of it all, you can truly say: "I don't make the same mistakes twice."
Features I discourage, but accept with good reasons
Standard library
The standard C++ library can easily overtake your program in terms of build complexity and runtime. Certain pieces of the C++ stdlib are a given (operator new/delete and the allocators that go with them, init routines, and a handful of small classes and data structures), but even light use of the heavier pieces (containers, iostreams) can come to dominate a small firmware image.
Exceptions
With a modest amount of platform effort, the exceptions feature in C++ can be tied to hardware exception handling, thus allowing try, catch, and throw to do the expected things for common hardware-supported exceptions with trivial risk and overhead. Division by zero is a common case. There isn't much downside to enabling that tie-in, other than to possibly encourage a pattern that is of questionable value and sparse use to begin with.
Sometimes, you have a library that needs it. [shrug]
Reflection
If I see GCC invoked with "-fno-rtti", I immediately think it is embedded C++. RunTime Type Information is required for all use of native C++ reflection, as far as I am aware. Its primary runtime cost (and why I discourage it for embedded) is binary size. Sometimes several hundred kilobytes, depending on the program.
If you are using C++ for its rich type expression and control, you likely have many types that will contribute to the resting flash load of RTTI, and are probably only present to support a design choice that (like exceptions) is questionable in an embedded context. However, I have seen reports that C++26 adds reflection, including enum reflection with zero runtime overhead. I have not verified this, but if true, it should be usable without RTTI. Any use of reflection that does not depend on RTTI is probably fine.
Multiple inheritance
Like reflection, multiple inheritance is one of those questions that draws a clear line between "can you" versus "should you". I've seen many cases of MI, done carefully, used in a manner that saves time and effort and is actually worth the complexity. But every metaphor breaks down eventually. And the same benefits of neural re-use that OO grants also expose OO designs to the same kinds of mistakes and sloppy logic that we normally exhibit for sets and categories: over-generalizing some into all or none, "single-implies != double-implies", etc...
Fortunately, out-of-control MI is also fairly easy to refactor unless you've allowed it to evolve unaddressed for a long time. So there usually isn't any harm in doing it for leaf-classes to try out an idea.
Features I blacklist
Type auto
Use of type auto hurts understanding of a program so badly that I blacklist it from all of my code bases, despite its negligible-to-nothing runtime costs. It invites bugs, laziness, and conceals the ontology of the very data that the program is meant to handle.