
Over the course of some months I've created a little framework for game development that I currently include in all of my projects.

The framework depends on SFML, LUA, JSONcpp, and other libraries. It deals with audio, graphics, networking, and threading; it has some useful file system utilities and LUA wrapping capabilities. It also has many general-purpose utility methods, such as string-parsing helpers and math utils.

Most of my projects use all of these features, but not every project does:

  • I have an automatic updater that only makes use of the file system and networking features
  • I have a game with no networking capabilities
  • I have a project that doesn't need JSONcpp
  • I have a project that only needs those string/math utils

This means that I have to include the SFML/LUA/JSON shared libraries in every project, even if they aren't used. Uncompressed, every project weighs in at a minimum of 10 MB this way, most of which is unused.

The alternative would be splitting the framework into many smaller libraries, which I think would be much more effective and elegant, but would come at the cost of maintaining more DLL files and projects.

I would have to split my framework into quite a few smaller libraries:

  • Graphics
  • Threading
  • Networking
  • File system
  • Smaller utils
  • JSONcpp utils
  • LUA utils

Is this the best solution?

Vittorio Romeo
    Keep in mind that you should be able to build a directed graph of your dependencies. If some of your modules depend on modules which depend on them, you're just asking for trouble. If you can't structure this such that the dependencies aren't circular, you shouldn't mess with it. – Bobson Mar 18 '13 at 16:42
  • It also depends on how your programming language and environment manage libraries and related concepts such as packages, namespaces, and the like... – umlcat Oct 27 '14 at 17:19
  • To me, a library is just a *shipping container* - It is the size and interdependencies (and outside dependencies) of the crates that are inside that matters. As long as you design the single crates (object files, for example) as stand-alone functioning parts without unnecessary dependencies, both ways are fine. – tofro Dec 18 '17 at 11:24
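Bobson's point above — that the modules should form a directed graph with no cycles — can be checked mechanically. Here is a minimal sketch in Python (the module names are made up for illustration, not taken from the framework), using Kahn's topological sort to either produce a valid build order or report a cycle:

```python
from collections import deque

def build_order(deps):
    """deps: module -> set of modules it depends on.
    Returns modules ordered so dependencies come first,
    or raises ValueError if the graph has a cycle."""
    indegree = {m: len(ds) for m, ds in deps.items()}
    dependents = {m: [] for m in deps}
    for m, ds in deps.items():
        for d in ds:
            dependents[d].append(m)
    ready = deque(m for m, n in indegree.items() if n == 0)
    order = []
    while ready:
        m = ready.popleft()
        order.append(m)
        for dep in dependents[m]:
            indegree[dep] -= 1
            if indegree[dep] == 0:
                ready.append(dep)
    if len(order) != len(deps):
        raise ValueError("circular dependency detected")
    return order

# Hypothetical module graph mirroring the kind of split proposed above.
deps = {
    "utils": set(),
    "filesystem": {"utils"},
    "networking": {"utils"},
    "graphics": {"utils", "filesystem"},
    "lua": {"utils"},
}
print(build_order(deps))
```

If `build_order` raises, two of the proposed libraries depend on each other, and (per the comment) that pair should either be merged or restructured before splitting.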

4 Answers


I'd personally go for many small libraries.

  • Discourages developers from creating dependencies between otherwise unrelated packages.
  • Smaller, more manageable libraries that are much more focused.
  • Easier to break up and have separate teams manage each library.
  • Once you have a new requirement that's sufficiently complex, it's better to add a new module than to shove the new code into an existing library. Small libraries encourage this pattern.
p.s.w.g
  • Overall I agree although I have seen cases where the small library approach wasn't well managed and got out of hand. On one project each library had semi-duplicated data access layer code. On another, there were too many dependent relationships between libraries. – jfrankcarr Mar 18 '13 at 18:04
  • @jfrankcarr True, poor code management can affect any project. My sense is that projects with monolithic libraries are more susceptible to this than projects with small 'micro-libraries'. – p.s.w.g Mar 18 '13 at 18:43
  • Just a side note - SFML itself is split into multiple modules, so you don't have to link to e.g. Networking module if your game is single-player only. – sjaustirni Mar 17 '18 at 08:35

You've given one side of the trade-off, but not the other. Without a fair and balanced view of the pressures you're operating under, we can't possibly tell you which choice is right.

You say that splitting the libraries will make all your projects smaller. That's a clear plus. I can imagine various minuses:

  • splitting the libraries is in itself an effort, even if it has to be done only once.
  • maintaining versions of many libraries consistently is a small but persistent additional effort.
  • it is not as easy to be sure that every project really bundles everything it needs.
  • splitting may not be possible as cleanly as you believe at the moment; it may introduce extra work, and maybe even threaten the conceptual integrity of some modules.

Depending on how probable/important such counterarguments are for you, splitting may or may not be the right choice for you. (Note that the dichotomy between "splitters" and "lumpers" is considered by many to be a fundamental personality trait that is not susceptible to logic in the first place!)

That said, the different tasks you say your modules are doing are so far removed from each other that I'd consider at least some splitting probably called for.

Kilian Foth

There isn't a clear-cut answer. The best driving factor I can think of is how interrelated the libraries are now, and whether you expect them to become more related later. If you have a complex web of dependencies, then one big library will probably be easier; if you have minimal relationships, then you can split them up cleanly.

Sign

This might be very subjective and depends on your psychology and sensibilities, but my longest-lasting libraries that I've been using for my personal projects and didn't start hating over the years were always my smallest, most isolated (no external dependencies to other libs).

It's because it only takes one dumb or archaic idea to mess with my whole perception of a library. I might have perfectly reasonable C code to rasterize shapes in a drawing library, except it depends on an image and math library which I wrote in the 90s against 16-bit images and which, in retrospect, is now totally shite. I might also have a C++ parsing library with some decent parsing and AST code in there, except I coupled it to a monolithic parsing stream which, in retrospect, was a really dumb and impractical design. So now the whole thing feels like shite.

Most of my C++ code from the 90s is total crap to me now, since I didn't really know how to design effectively in C++ back then and did dumb things like using inheritance to "extend" and provide superset functionality, forming classes with 100+ members and goofy abstractions instead of modeling proper subtypes with minimalist interfaces. More of my C code has survived my shite filter, though only a fraction. Mostly I came up with a mountain of shite. The little gold nuggets I was able to pick out were always the most decoupled, minimalist code with the greatest singularity of purpose, often depending on little more than primitive data types.

Then I don't even wanna bother with these libs anymore, except maybe to port the code to a new lib that drops those dependencies and just works against raw 32-bit and 128-bit pixels, inlining the vector math instead of depending on some external math lib for, say, rasterization. Then the code lasts a lot longer and keeps me happy. I'm kind of cynical in my views of libraries. I tend to judge them by their weakest links instead of their strongest links. I can't overlook the bad in favor of the good until the bad is completely eliminated from that library.

So I vote for the smaller, more independent libraries, since they have a lower probability, at least for me, of feeling like shite later on. If you're working in a team I'd vote for that even more, with stronger standards to keep the libraries decoupled from each other, since they can get messy really fast with many hands on them unless they have a very singular purpose and an aim towards minimalism (looking for reasons not to add more instead of always finding reasons to add more -- you can't hate what you don't add)... though it sounded from the question like this was more for personal projects, where maybe psychology factors in more.

Further, I'd vote to split off very decoupled pieces of functionality. You don't necessarily have to split your framework into all the pieces you want right away. I'd just seek to start building up stable, well-tested libraries of code that are minimal and decoupled in nature -- things you feel good about lasting a good while.