4

Which underlying system parameters have most influence on how fast a typical Java project (say dozens of classes and dependencies) builds?

There is a lot of information on JIT (bytecode-to-CPU-instructions) compiler optimization, but what if you need a large project to compile (very) fast? Say, 10 times faster? 100 times faster?

J. Doe
  • 175
  • 7
  • 1
    Recommended reading: **[Open letter to students with homework problems](https://softwareengineering.meta.stackexchange.com/q/6166/31260)**. "If your question... is just a copy paste of homework problem, expect it to be downvoted, closed, and deleted - potentially in quite short order." – gnat Apr 04 '19 at 13:57
  • happy to hear my question is worth giving it as homework :-) @gnat – J. Doe Apr 04 '19 at 13:59
  • So the question is addressed to make the compilation more performant. Right? – Laiv Apr 04 '19 at 14:02
  • yes, I'll add some details. – J. Doe Apr 04 '19 at 14:03
  • Is this a "how to choose hardware for build server(s)" question? – Caleth Apr 04 '19 at 14:12
  • @Caleth no because this would not help to understand the bottlenecks. – J. Doe Apr 04 '19 at 14:13
  • What build ecosystem do you use? Something like Maven or Gradle where you have dependencies on artifacts hosted somewhere on the internet? Or a completely local build? – Ralf Kleberhoff Apr 05 '19 at 15:21

2 Answers

3

As in all "how do I make X faster" questions, your first step should be to profile the process. IME the biggest bottleneck is usually disk speed. With SSDs becoming ubiquitous, this may shift around a bit (network speed can be an issue if you're reading/writing to a remote machine, memory pressure can limit the amount of parallelism you can use, etc.). If nothing else, as a first approximation I'd launch some kind of system monitor that displays CPU, disk, network and memory usage, then kick off a build. That should give you an idea of where the bottleneck is.
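A minimal sketch of that first approximation, assuming a Linux-like machine with `vmstat` available; `BUILD_CMD` is a placeholder you would replace with your real build command (e.g. `./gradlew build` or `mvn package` — here it defaults to a `sleep` just so the sketch is self-contained):

```shell
#!/usr/bin/env bash
# Sample system resources once per second while a build runs.
# BUILD_CMD is a placeholder -- substitute your actual build.
BUILD_CMD="${BUILD_CMD:-sleep 2}"

vmstat 1 > vmstat.log 2>/dev/null &   # CPU, memory, swap and disk I/O stats
MON=$!

# Time the build; bash's `time` keyword reports to stderr.
{ time bash -c "$BUILD_CMD" ; } 2> build-time.log

kill "$MON" 2>/dev/null
wait "$MON" 2>/dev/null || true

cat build-time.log
```

Reading `vmstat.log`: a high `wa` column means the build is waiting on disk, `us`+`sy` near 100 with little idle means it is CPU-bound, and nonzero `si`/`so` means swapping, i.e. memory pressure.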

TMN
  • 11,313
  • 1
  • 21
  • 31
2

Generally, such speed-ups can only be achieved by splitting the project into modules that are then compiled separately.

But doing that in a way that doesn't reduce maintainability and development speed often requires re-structuring and re-architecting, because the common "Core", "Model", "UI", etc. modules are not suited for this kind of compilation.

So instead of "horizontal slices" you need "vertical slices": modules that contain all the logic for a specific feature and can be compiled separately from the others. Or maybe some kind of plugin system, where plugins can be compiled separately. But that requires creating a stable and solid API for those plugins to use.

If you don't want to go as far as creating separate modules, then re-architecting to increase the opportunity for parallel compilation is the best option. Most projects I have seen had really "tall" dependency trees of modules. Instead, you should strive for a "shallow" and "wide" dependency tree, which maximizes the number of modules that can be compiled in parallel.
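As a concrete illustration, assuming a Maven or Gradle multi-module build (the flags and properties below are standard in both tools), this is how you let the build tool exploit a wide dependency tree:

```shell
# Maven: spawn one build thread per CPU core; modules whose
# dependencies are already built compile concurrently.
mvn -T 1C package

# Gradle: build decoupled subprojects in parallel. Usually enabled
# permanently in gradle.properties instead of on the command line:
#   org.gradle.parallel=true
#   org.gradle.caching=true
./gradlew build --parallel
```

With a "tall" tree (A depends on B depends on C depends on D) nothing overlaps no matter how many threads you give the tool; with a "shallow and wide" tree the speed-up can approach the core count.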

Euphoric
  • 36,735
  • 6
  • 78
  • 110
  • Worth noting that if we compile in dedicated environments, the more dedicated they are, the faster the compilation. If compilation competes against other processes, it's likely we have to give the compilation process a higher priority to win that competition for resources (mainly CPU). – Laiv Apr 04 '19 at 14:38
  • @Laiv True. But from what I've seen, most companies already have dedicated build environments. And if build times are really a concern, buying a better build server is often the most obvious first step. – Euphoric Apr 04 '19 at 14:41