
As a side-effect of an embedded project I've been working on, I've developed a small operating system for an ARM processor. While the OS and my user code are in separate directories and have clean boundaries between them, they're built into a single image using a single Makefile.

I'd like to release the OS as open source. It consists of an exokernel and userland driver libraries.

What I'm trying to decide is how a user of this OS should combine it with their own code at build time so they have a single image to flash. I can think of three possibilities:

1) Have the user include the OS source files as part of their project. They'd be recreating the setup I currently have, and it's how FreeRTOS, for example, appears to do it (although it only has three files).

2) Have the OS build itself into a .a file that the user code can link with. That would allow the user full control over memory placement using their own linker script (and allow the linker to tell them when a combined text, BSS or data section exceeds its space).
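To be concrete about what I mean by the user's own linker script: something like this GNU ld fragment, where the region names, origins and sizes are invented for illustration, and where ld itself reports an error such as "region `flash' overflowed by N bytes" if the combined OS and user sections don't fit:

```
MEMORY
{
  flash (rx)  : ORIGIN = 0x08000000, LENGTH = 256K
  ram   (rwx) : ORIGIN = 0x20000000, LENGTH = 64K
}

SECTIONS
{
  /* OS code from the .a and user code end up here together */
  .text : { *(.text*) } > flash
  .data : { *(.data*) } > ram AT > flash
  .bss  : { *(.bss*)  } > ram
}
```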

3) Have the OS build itself into a raw binary that expects to have the user code concatenated onto the end of it. The user code would begin with a small header (placed by the linker script) that had the address of the main function. The user code would also link with a .a file of just the userland drivers, not the kernel.
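The header I have in mind for (3) would be something like the following, where the struct layout, section name and magic value are purely illustrative, not a settled format. The user's linker script would place the `.user_header` section at the very start of their binary so the kernel can find it right after its own image:

```c
#include <stdint.h>

/* Illustrative only: field names, section name and magic value
   are made up, not a real ABI. */
#define USER_IMAGE_MAGIC 0x55534552u  /* "USER" */

struct user_image_header {
    uint32_t magic;           /* lets the kernel sanity-check the image */
    uint32_t header_version;  /* room for the format to evolve */
    void   (*entry)(void);    /* address of the user's main function */
};

void user_main(void)
{
    /* user application entry point */
}

/* Placed by the user's linker script at the start of the user binary. */
const struct user_image_header user_header
    __attribute__((section(".user_header"))) = {
    .magic          = USER_IMAGE_MAGIC,
    .header_version = 1,
    .entry          = user_main,
};
```

The kernel would then validate the magic value and jump through `entry`, which is why the user code would need to be relocatable (or linked at a known offset).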

(1) would make it easiest for me to develop the OS and my user code in parallel with git subtree. (2) seems to have the most useful link process. (3) gives the biggest separation.

If you were using it, which would you prefer and why?

Isvara
  • (3) essentially requires you to create a module loader mechanism. Binaries don't work by concatenation; they work by having addresses, offsets and jump tables that point to the right places. – rwong Aug 26 '15 at 11:20
  • "Binaries don't work by concatenation" doesn't even make sense as a statement. Anyway, I already stated: "The user code would begin with a small header (placed by the linker script) that had the address of the main function." The user code would need to be relocatable, of course. – Isvara Aug 26 '15 at 16:38

2 Answers


For a small, hardcore embedded OS, the first option is the only really viable one.

The problem is that for nearly every chipset you would have to make small adjustments to the hardware interfacing, which means there is a high risk that the OS needs to be recompiled for each new chipset.

So the first option is not only the easiest for you, but also the easiest for getting the OS incorporated into the next project.

Bart van Ingen Schenau

Let me first say that I agree with Bart's answer, but I want to argue for option (2+).

Assume that your OSS project is a success, that you have some dozens (or hundreds) of developers using your OS, and that you are busy working on V2, which has some breaking changes relative to V1. A few users have weird problems you can't reproduce, which you suspect are related to their build environments. Suddenly you find your clean boundaries are not so clean.

Given a software component and an application that uses it, it is highly desirable that the point at which the two get combined be as late in the build process as possible. Separate compilation is better than include files, linkable libraries are better than objects, dynamic libraries are better than static, and runtime is better than compile time. [Obviously not all of these choices are available with all technologies.]

It's your call based on the technologies you have at your disposal, but erring on the side of separation should deliver benefits over time. The bigger the separation the better.

david.pfx