7

I am building a small application that supports a research project. My goal is to make the code painlessly executable and readable on as many operating systems, and for as long, as possible.

My reasoning is that 2-3-5-10 years down the line I will be working on a completely different research project, but my software may have to run again, and be modified again. The person who manages that code at that time should be able to do so as painlessly as possible. Ideally, this means avoiding having to install an entire OS from the present, outdated by that future time, together with legacy libraries, just to make the software work and make changes to it. Note that I'm not looking for absolute solutions, as everything in computer science always changes, but for something that is good enough! I.e. I would like to know which runtime is still usable today yet changes sufficiently slowly, and base my approach on that, so that my software's half-life is as long as possible.

Non-solutions and their reasons (but correct me if I'm wrong):

  • Docker. While Docker makes code more easily reproducible in present runtimes, by making the steps to get it running more uniform, sharing my code in a Docker container exposes it to any change in the software that makes up the Docker ecosystem itself - so if Docker changes significantly in 10 years, there is no guarantee of backward compatibility. Thus, there's no guarantee of the future-proof reproducibility I'm aiming for.

  • Assembly: I could of course code the whole thing in assembly (or compile it down to assembly if I code in a higher-level language, as I would), using a set of CPU instructions that is as restricted as possible and has a high likelihood of staying fixed over the time span (e.g. a carefully chosen subset of x86). But that would introduce a number of problems:

    -- Things might break in such a process (already converting Python GUI applications to C can introduce weird bugs), making it harder to code the application.

    -- An even bigger issue: I could not read or make changes to my application. For that I would have to supply the (high-level) source code, but then there's again the issue that a future runtime will not support that source code.

Question 1: What would a good trade-off be in terms of future-proof reproducibility?

Question 2: I guess one good approach is to think 2-3-5-10 years backward. What existed back then that I could still painlessly execute but also read today? (Perhaps coding in C, as it is well supported and its standard barely seems to change with time?)

Question 3: How should I deal with 3rd party libraries? Assume C would be the best answer to the previous questions. To make everything work, I would need all the 3rd party libraries I use to also be in C and also ship them together with my application. Is there a better solution for that?

user7088941
  • If you are really willing to, coding an application with all libraries statically linked, so that the only dependency outside the binary is the Linux kernel syscall interface, should be pretty stable - assuming you don't have any dependencies on bugs. This does limit you to behaviour that does not depend on non-kernel shared resources (no GUI, no sound, no kernel modules exposing extended APIs through /dev, etc.); a minimal sketch of this approach follows after these comments. – user1937198 Aug 07 '22 at 15:15
  • If you need GUIs there is not going to be a stable way, as the underlying pipelines are not very stable. Your best bet right now would probably be to link against a rendering toolkit that renders to OpenGL and then dynamically link to an OpenGL library on your system. – user1937198 Aug 07 '22 at 15:26
  • Even writing it in x86 assembly would not be as future-proof as you might think. It would make it hard(er) to run your software on Apple Silicon (which is based on ARM), for example. – Jesper Aug 08 '22 at 07:42
  • @user1937198 Hm... interesting point regarding the GUI (which I might need to have). Is there a reason for the mentioned instability? I guess you propose OpenGL because it has been around a sufficiently long time? ... – user7088941 Aug 08 '22 at 15:01
  • @Jesper Yes, Philipp Kendall also pointed that out. It seems in a way interesting that the "middle" is the most stable and the lowest level (assembler) and highest level (newest, feature rich languages) change the most ... – user7088941 Aug 08 '22 at 15:33
  • @user1937198 If you elaborate on your comments in an answer, I could also upvote that. This would complement existing, high-level answers nicely by outlining a concrete guideline to achieve a high degree of future-proofness. – user7088941 Aug 08 '22 at 15:35
  • Just today I found my company uses a framework that doesn’t support cross compiling. One problem is I can’t build an Arm Mac version on an x86 Mac and vice versa. I can get around that (but it’s a pain). I can _not_ build an iOS version. Turns out there is a project on GitHub that is supposed to fix the problem. Last checkin October 2018. Not holding my breath. – gnasher729 Aug 08 '22 at 18:52
  • Document it in absolute detail, along with pseudo code. Then, write a "reference" implementation of your "Concept". People like to "port" and "emulate" ancient things (retro games, for example). – S.D. Aug 09 '22 at 12:19
  • "painlessly executable and readable on as many operating systems as long as possible." - your goal is already unachievable. You might as well work on making a perpetual motion machine. – whatsisname Aug 09 '22 at 18:55
  • @whatsisname there is a subtle difference: a perpetual motion machine violates thermodynamic principle. My software goal does not ;) – user7088941 Aug 12 '22 at 00:55
  • The Docker concern only applies to builds. Running the code will be extremely stable. The only concerns that I would have are distribution (download location) and CPU architecture changes. Even so, when 64-bit became the dominant architecture, 32-bit compatibility remained a core feature in processors, and still is today. – tuskiomi Aug 17 '22 at 19:02
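As a minimal sketch of the statically linked approach user1937198 describes above (the program, file name, message and build line are illustrative assumptions, not from the thread): a trivial C program can be built so that its only run-time dependency is the kernel's syscall interface. The build line assumes GCC on Linux with static libc archives installed.

```c
/* hello.c - hypothetical example of a fully statically linked binary.
 *
 * Build (assuming GCC on Linux with static libc archives available):
 *     gcc -static -O2 -o hello hello.c
 *
 * Afterwards, `ldd ./hello` should report "not a dynamic executable",
 * i.e. nothing but the kernel's syscall interface is needed at run time.
 */
#include <stdio.h>

int main(void) {
    puts("Still runs, years later.");
    return 0;
}
```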

7 Answers

9

Disclaimer: I am a guy who leads a team which maintains > 100 programs, most of them with a long history, several of them much longer than just 10 years. From that experience, I can tell you that your best bet to achieve "future-proofness" is not to make wild guesses about what the future might look like and try to design something like future-proofness into the software. Making predictions is hard - especially when they concern the future.

Instead, if you want programs to live as long as possible, you need active maintainers - people who keep the software alive and, if necessary, port older code to newer technologies. If your program is of any interest to someone else in 10 years, chances are not too bad that there will be someone to fill that role.

Don't get me wrong, this does not mean you cannot take any precautions. Even with active maintainers, you probably want to keep the maintenance effort low. @candied_orange's answer gives some good advice here (note that all the recommendations there implicitly assume maintainers are in place). Let me add that making conservative technology choices is usually a good idea. You can google, for example, the top 20 programming languages / ecosystems of 2012, compare them to the ones popular today, and see which are still alive and actively supported by their vendors. Most of the language environments which are already older than 10 years have a high chance of surviving the next 10 years as well (that's analogous to the Doomsday argument, which is surely debatable, but probably better than nothing).

Of course, you should avoid technologies which have already been abandoned by their vendors because they have a clear successor, like VB6, Python 2, Perl 5 or Objective-C, as well as technologies which are completely outdated, like Cobol or Fortran. I would also recommend avoiding "the latest and greatest", like Rust or Kotlin, and avoiding "cutting-edge" features in programming languages like C++, since there is too much movement there.

The same holds for 3rd-party libs: if you rely on libraries which have been mostly stable for the last 10 years, chances are high they will still be there 10 years from now. I would, however, recommend validating who provides the libs: especially if those are single persons or only very small organizations, you should make a risk assessment of what will happen in case they abandon development. Is the lib open source, and would you be able to maintain it yourself in case the authors cease development? Is it so popular that someone else will probably maintain it? Or, when it is closed source: is it provided by a bigger vendor who is likely to maintain it 10 years from now and who values backwards compatibility? If you are unsure, it is probably best to look for a different lib.

Concerning "readability" (you mean of the media where you store your source code, I guess)? That's mostly the same - you need to have an active maintainer who makes regular backups, copies the data to a new storage media from time to time, and keeps the archives alive.

For example, I have here a C++ program I wrote for my diploma thesis almost 28 years ago. It was once stored on a floppy disk; in between I used Zip drives, CDs and DVD-ROMs. Today I keep my archives on external hard disks which are regularly renewed. But the whole source code is still readable, and it compiles with only minor modifications under GCC as well as under MSVC. However, that program makes almost no use of anything but the C standard libs, and it avoided C++ features like templates, which were "cutting-edge" at the time I wrote it.

Doc Brown
  • Actually a bit of a depressing answer, that there is no 100% certain way of achieving future-proofness. But I do appreciate your insight very much! Thank you! Do you have any suggestion on how to best assess the stability of a given library? Perhaps checking out (if it is on GitHub) how many GitHub commits there are? But what if it is not on GitHub? – user7088941 Aug 08 '22 at 15:19
  • Regarding readability: Sorry, I think I confused you. Rather, I meant the code being readable in the sense of understandable. The idea I had in mind was that if I code super defensively, as Philip Kendall suggested, the code is not just harder to write but also to read. So the higher the likelihood the code will run in the future, the harder the code will be to read. The question is what strikes a good balance. – user7088941 Aug 08 '22 at 15:26
  • 2
    @user7088941: *"So the higher the likelihood the code will run in the future, the harder the code will be to read."* - sorry, but that is plain wrong. Human readability and "executability of code in 10 years" are orthogonal things. But when you want to be able to keep code actively alive, to fix problems you don't know today, *then* readability shows its value - so for this, the opposite is true- the more readable the code, the more likely it will be someone can fix any upcoming issues. – Doc Brown Aug 08 '22 at 15:43
  • @user7088941: and libs - I added some words to the paragraph above. "Github commits" is definitely not a good measure for assessment of a lib, but popularity can say a lot - a lib which is used by several thousand projects is likely to find a new maintainer in case the author finds new priorities in life. – Doc Brown Aug 08 '22 at 15:52
  • According to perl.org the latest stable Perl version is 5.36.0, so I wouldn't say Perl 5 already has a clear successor, if you are thinking about Perl 6 it is now being rebranded as Raku and the same considerations as for Rust or Kotlin apply. – Helena Aug 12 '22 at 17:52
7

Choose a popular programming language

The more people that know the language, the better the odds that someone who wishes to replicate your work will know it.

By popular I mean new graduates in your recruiting pool are likely to understand your code base without needing to be sent off to a training class on the language.

While not technically part of the language, some frameworks can cause this same problem.

Separate the core of your project from infrastructure details

Unlike the choice of language, most infrastructure choices don't have to be spread throughout the code base. By keeping them out of your core, they are easier to change if needed. Beware of frameworks. Prefer libraries.

For example, if I can look at your code and tell that you're using the Foo framework then that code has no more "half-life" than the Foo framework has. If you're not willing to do that to your core code then push Foo out into isolated code that lets your core use Foo like a library through a minimal interface. Then if Foo crashes and burns you don't have to re-implement the whole thing. Just what you need.

Make your dependencies easy to substitute

Talk to your dependencies through simple abstractions that make your needs clear and keep your core from knowing what it's talking to. Then future people can drop something new behind that abstraction without your core knowing or caring.
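As a minimal sketch of such an abstraction (the `image_sink` interface, the names and the PGM output are illustrative assumptions, not taken from this answer): the core algorithm below only knows about a tiny pixel-writing interface, and the single adapter function is the only place that knows the output format, so a future maintainer can swap it for whatever library exists then without touching the core.

```c
#include <stdio.h>

/* The abstraction the core depends on: "something that accepts pixels". */
typedef struct {
    void (*write_pixel)(void *ctx, int x, int y, unsigned char gray);
    void *ctx;
} image_sink;

/* Core research code: knows nothing about the concrete output library. */
static void render_result(const image_sink *sink, int width, int height) {
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
            sink->write_pixel(sink->ctx, x, y, (unsigned char)((x ^ y) & 0xFF));
}

/* One adapter. If the output library or format changes in 10 years,
 * only this part needs rewriting - the core stays untouched. */
static void pgm_write_pixel(void *ctx, int x, int y, unsigned char gray) {
    (void)x; (void)y;               /* pixels arrive in row order */
    fputc(gray, (FILE *)ctx);
}

int main(void) {
    const int w = 64, h = 64;
    FILE *f = fopen("result.pgm", "wb");
    if (!f) return 1;
    fprintf(f, "P5\n%d %d\n255\n", w, h);   /* binary PGM header */
    image_sink sink = { pgm_write_pixel, f };
    render_result(&sink, w, h);
    fclose(f);
    return 0;
}
```

If the picture later needs to be produced with some then-current graphics library, only the adapter and `main` change; `render_result` stays as it is.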

Write readable code

None of that works if people can't follow what's happening. Give things good names. Have people read your code today. See how much they understand without you explaining it. Be willing to rewrite, restructure, and rename to make your code understandable. Do this soon after you write even a tiny bit of code, or you're going to forget the code and become unwilling to make significant changes.


Question 1: What would a good trade-off be in terms of future-proof reproducibility?

The pattern to follow here is very simple: make it work, make it good, repeat. The faster you make it work the more time you can spend making it easy to reproduce. Decompose and do this on the smallest useful bits you can. You'll get better as you go.

Question 2: I guess one good approach is to think 2-3-5-10 years backward. What existed back then that I could still painlessly execute but also read today? (Perhaps coding in C, as it is well supported and its standard barely seems to change with time?)

A popular language is a good choice. Also, avoid "fads". The harder someone is selling it the less likely it's good enough to stand the test of time.

Question 3: How should I deal with 3rd party libraries? Assume C would be the best answer to the previous questions. To make everything work, I would need all the 3rd party libraries I use to also be in C and also ship them together with my application. Is there a better solution for that?

Boil it down to your requirements. Your core shouldn't care whether its needs are met by a 3rd-party library or by one of your replicators. Just make clear what the core's needs are.

candied_orange
  • This is a good answer, but could you illustrate your point with some examples? It is a bit too abstract at times for me. For example, you say "most infrastructure choices don't have to be spread throughout the code base"; what would be an example of an infrastructure choice? Also, if I code a long, complicated algorithm that is related to my research and the thing outputs a picture at the end, what would be the abstractions and decompositions that I could use? I guess there is not much that I could do here ... – user7088941 Aug 08 '22 at 15:29
  • @user7088941 better? – candied_orange Aug 08 '22 at 15:43
  • I think your first advice could be improved: popularity is good, but popularity plus a certain degree of maturity is better. Often, some new things earn a short-term popularity because they are hyped, but often those things vanish as quick as they have appeared. – Doc Brown Aug 09 '22 at 20:09
  • @DocBrown better? – candied_orange Aug 09 '22 at 20:20
  • @candied_orange: not really what I meant, but alas, I think it is good enough as it is. – Doc Brown Aug 09 '22 at 20:27
  • @DocBrown My reasoning is if it's taught in your local CS program it's likely to be both mature and have staying power. I've rarely seen the hype make it into the core curriculum. Still not what you had in mind? – candied_orange Aug 09 '22 at 20:34
  • @candied_orange: in my experience, CS programs can be sometimes edgy. My first university programming course, for example, was based on SICP with Scheme (>30 years ago), Not what I would really recommend to the OP today, though I guess chances are not too bad my old programs would still run today, if I had kept them. – Doc Brown Aug 09 '22 at 21:00
  • @DocBrown yes and MIT wanted everyone to know LISP. Remember, the requirement is that grads can read your code base without needing another class. They may know more than that. But if LISP really is all they know then examine how much you hate LISP. Either that or start taking the dean out to dinner. – candied_orange Aug 09 '22 at 21:34
3

I think you're right to look back (say) 10 years into the past. The languages which were around then and are still mostly unchanged today are the "boring" ones: C, C++, Java.

If you took a program in one of those languages from 2012 which was designed to be future-proof, I'm pretty sure you could get it to compile and run today. The problem you have is that "designed to be future-proof" is going to significantly slow down development, possibly by an order of magnitude. You will want to:

  • Use as few third-party libraries as possible, preferably none. In this very specific case, you do want to reinvent the wheel so you know exactly what your dependencies are.
  • Code defensively for future architecture changes. For something like C, that means making zero assumptions about the platform on which you are running - e.g. an int is not necessarily 32 bits; Java is perhaps slightly more constrained in its specification, so maybe easier to code to. (A small sketch of this follows below.)
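A small sketch of what "zero assumptions" can look like in C (plain C99, nothing specific to any project, and only one of many such precautions): fixed-width types from `<stdint.h>` and the matching format macros from `<inttypes.h>` keep the code correct regardless of how wide a future platform makes `int` or `long`.

```c
#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

int main(void) {
    uint32_t counter;                /* exactly 32 bits on every platform  */
    int64_t  total = INT64_C(0);     /* exactly 64 bits, portable literal  */

    for (counter = 0; counter < 10; ++counter)
        total += counter;

    /* PRIu32 / PRId64 expand to the right format specifiers regardless of
       how the platform defines int, long or long long. */
    printf("counter = %" PRIu32 ", total = %" PRId64 "\n", counter, total);
    return 0;
}
```

The same mindset applies to byte order, struct layout and file formats: spell them out explicitly rather than relying on whatever the current platform happens to do.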

You will need to decide whether the additional development time (for both present you and future you) is worth it.

(As an aside, x86 assembly is, I suspect, a terrible choice for future compatibility; the world is moving to Arm, and fast.)

Philip Kendall
  • 2
    That's a pretty incomplete subset of the languages which were available and popular >10 years ago. Add at least Perl, Python, C#, Cobol, Fortran, PHP, Javascript, Pascal, Ada to the list. And those (as well as the ones you mentioned) have heavily changed, but most of them in a mostly backwards-compatible fashion. – Doc Brown Aug 07 '22 at 15:31
  • For the interpreted languages, there's an additional risk of having to get the interpreter up and running; the JVM to me is to some extent a special case, as it has been shepherded for (extreme) backwards compatibility over the past 20 years. I _might_ be prepared to accept the .NET CLR into that group these days now it's no longer Windows-only. Point accepted on the other compiled languages, but I see no real value in writing (say) Fortran over C. – Philip Kendall Aug 07 '22 at 15:38
  • 1
    Perl 5 is probably sufficiently stable with ~25 years of interpreter development similar to the JVM, but its also a horrible language for new development. – user1937198 Aug 07 '22 at 15:47
  • 1
    You don't have to reinvent the wheel - you can simply copy and paste the relevant parts of the source code from a library into your own source directory, with any necessary attribution and copyright licenses. Then it doesn't matter if that library stops being available. – bdsl Aug 07 '22 at 20:25
  • @bdsl The risk there is that it doesn't compile (correctly) in the future because it was doing "something funky", including but not limited to making assumptions about the platform on which its running, relying on compiler specific behaviour, relying on experimental language features which get removed, JNI / pInvoke etc. – Philip Kendall Aug 07 '22 at 20:30
  • @DocBrown I was also starting to think about Perl. Or even Lisp. There seem to be these languages that never die as they have a hard-core following of (mostly academic?) enthusiasts that keep them alive for decades. But I will have to look at how stable the libraries for these languages are, as it is my impression that the languages are mostly used in a "pure" way and the ecosystem around them isn't terribly big. – user7088941 Aug 08 '22 at 15:14
  • @bdsl @ PhilipKendall Spot on with the idea of coding defensively. :) But could you please elaborate a bit on why copying the relevant library might not be a good idea? I guess I would need to go over that code as well and refactor it so that it is "defensive". Also, what is the JNI / pInvoke debacle about? I googled it a bit but couldn't really figure out what the issue was there. – user7088941 Aug 08 '22 at 15:14
  • @user7088941: a problem with Lisp is that it will probably make it harder to find active maintainers. Perl - is definitely an option, but as readability will help people to maintain a program over years, I would probably prefer one of the languages mentioned by Philip, or Python, or C#. But YMMV. – Doc Brown Aug 08 '22 at 16:02
  • @user7088941 I never called them a "debacle"; JNI / pInvoke just mean you're touching native code, which in turn means an additional layer of complexity you have to manage as you have to get that native code running on your new platform. – Philip Kendall Aug 09 '22 at 10:01
1

I think you are approaching this wrong.

10 years isn't a long time for Microsoft, IBM or Oracle. Stuff you wrote in VB6 or Java 10 years ago will still run on the latest version of Windows today. This is because some big bank or government somewhere is still running Windows 3.1 whilst also spending millions and demanding compatibility.

If you want your software to still run after 10 years on Windows and Linux, then write it in Java or .NET Core.

Sure the official support period for a specific version might be shorter than that, but the weight of these slow moving big spenders is behind you.

Ewan
  • 2
    *"Stuff you wrote in VB6 or Java 10 years ago will still run on the latest version of windows today."* - Indeed, chances are not bad. At least such programs can be made running with reasonable effort, not necessarily 100% out-of-the-box. For example, in VB6, it is not unlikely that some COM components have been classified as "unsafe" today, and Windows blocks them (which can often be circumvented by patching the registry, or switching to a newer version). But that's what I am saying: software needs a maintainer if they should be kept alive over years. – Doc Brown Aug 08 '22 at 08:50
  • Maybe for software which makes money, but I think here we are talking about software you write once and it's finished. – Ewan Aug 08 '22 at 08:57
  • That's exactly my point: when you want to operate software over years, the mindset that it is finished isn't helpful. – Doc Brown Aug 08 '22 at 10:14
  • @DocBrown @ Ewan Yes, it's that kind of software that I will not return to. The reason is that academia works very differently from industry, i.e. people maintain stuff in their free time - and I'm 100% sure that at some point I will not want to maintain the software anymore. So even if that mindset is not helpful, it's the unfortunate reality. But to mitigate that, at least I want to give my software *the best possible chance to live as long as possible*. – user7088941 Aug 08 '22 at 15:06
  • @Ewan I'm still not sure what you view "wrong" in my approach? I think after 10 years the research itself will be superseded by better methods, a new paradigm etc. Though I do think your idea of basically piggy-backing the software stack a big organization uses is a very good idea. ;) – user7088941 Aug 08 '22 at 15:17
  • It seems like you are looking for the most basic universal programming language or OS to write your program in. Whereas I'm saying you need to look for the systems with the most... I'm not sure how to put it. Inertia? Money? Regardless of the complexity. – Ewan Aug 08 '22 at 19:47
  • Also for code to be usable by academics in 10 years time, consider languages which are consistently popular among academics; the real barrier likely isn't going to be technical, but the skillset of whoever picks up your work. If your code doesn't work but is written in a 'de-facto' language of that particular field (Maybe R? MATLAB? Python?) chances are the people who are picking up that code will probably be used to "archaeology" in that language and have no real problem figuring out how to make it work. – Ben Cottrell Aug 09 '22 at 17:41
0

At one point I was working in a small company which can best be described as making a living doing Agile COBOL development. A customer-specific request for an alteration of their standard product could be in production in two weeks. Their product was written in OPM COBOL running on an AS/400, which is very different from customer operating systems like Windows and Linux, and which was conceived in the late '80s. Before the internet! So it was my job to write Java code (running on the AS/400, which it was very good at) to talk to other systems.

The question was: how should this non-COBOL code be written so it could be maintained for at least a decade? The choice of Java was already given, but which technologies and libraries to use? (This was before the rise of Maven; when that came, the job became a lot simpler because things stay on Maven Central basically forever.) At that point there were a lot of projects gathering under the Apache umbrella due to the success of the Apache HTTP server.

I found out the following:

  • The Java projects under Apache could not be relied on to live as long as I wanted them to. Several were severely neglected and others downright abandoned.
  • The specifications from Sun about how to implement servlets and Java Enterprise turned out to be very sturdy and implemented by many vendors giving choices. Coding to the reference implementation (Glassfish with JavaServer Faces) was perhaps the best long-term decision made.
  • Java is a very good choice for long term code. Sun had it in their DNA to be as backwards compatible as possible, and Oracle has honored that after their purchase. There are still Java 1.0 jars out there that run unmodified with the latest version of Java.

Additionally:

  • You need a local copy of everything you use in your build. Network resources come and go (just ask the Wayback Machine). You must be able to build even if your internet connection is down. Use internal proxies for artifacts so they have copies that go in your backup system.
  • Automate your builds. If your code can build and run from a blank template, you don't rely on individual machines. Docker helps too.

That said, this mindset is worth a lot to the right employer, because code lives a lot longer than most people think.

-2

If it's not too complex: for sure, JavaScript and HTML (and preferably within one file).

Browsers need to have A LOT of backwards compatibility.

I'm pretty sure that if you keep to normal logic, all your forms and JavaScript will just work fine in 10 years on all browsers and all OSes, and can be run with one click.

Update: I think people in the comments are confusing the HTML/JavaScript ecosystem with JavaScript the language. To be clear: don't use any preprocessors (like Less or Sass) or package managers like npm. They break A LOT. If you use any libraries, try to keep it simple and of course include the code within your project. Don't refer to external files.

I'm very confident that even sites from 1995 will still run today. Although deprecated, even frameset still works.

Dirk Boer
  • 6
    if I had to name one tech that breaks every year.... – Ewan Aug 07 '22 at 20:40
  • What are you talking about? Plain HTML/JavaScript (without any libraries) never breaks. Almost everything is backwards compatible. Even marquee is still working. I think you are mixing it up with JavaScript libraries. – Dirk Boer Aug 07 '22 at 23:33
  • 1
    If only JS would work fine and in all browsers the day the programmer wrote it... – nvoigt Aug 08 '22 at 06:17
  • I think you are mistaken with the IE6 times. And even then if you don’t do complicated interface logic but only have a few simple forms it still works in IE6 - a browser from the year 2001. – Dirk Boer Aug 08 '22 at 11:54
  • Are there perhaps any papers out there with studies that indicate how quickly JavaScript becomes backwards incompatible? At first I thought "this is a really clever idea", but then these other comments piled in, so I'm not sure anymore. – user7088941 Aug 08 '22 at 15:36
  • @nvoigt I have never coded in JavaScript, but is it really that bad? Isn't there something like a small core of functions in JavaScript that browsers always interpret in the same way and on which I could therefore rely? – user7088941 Aug 08 '22 at 15:37
  • @user7088941 it's not. It is the "hate" hype. Apart from very specific security-related issues, everything is backwards compatible for browsers. Interfaces and alignment were a bit of a problem in the past between browsers, but if it's mainly a functional tool, that is not a problem anyway. Normal coding logic works perfectly between browsers. Nowadays it's really rare to write browser-specific code unless you do cutting-edge things. If they can name anything substantial besides these (hate-train) one-liners I am happy to change my opinion, but I'm pretty sure they have not. – Dirk Boer Aug 08 '22 at 16:18
  • 1
    Browsers do indeed have decades-long backwards compatibility, but only if someone writes pure standards-compliant HTML, CSS and JS, so no server-side code, without depending on any build tools (so no npm, webpack, jsx), and hosting all their dependencies from the same static http server (e.g. downloading them from unpkg.com). I think for having something remain functional the longest this is the way to go, although the instructions for running the (unchanged) code will change over time as static http servers come and go. – Joeri Sebrechts Aug 09 '22 at 11:44
  • @DirkBoer I ended up upvoting your answer :) Could you though please outline any other issues that Joeri Sebrechts hasn't yet mentioned that need to be taken care of when using JavaScript? It seems that the reason people were not happy with your answer was that you didn't mention the various constraints pointed out by Sebrechts that I would need to take into consideration when going down this route. – user7088941 Aug 12 '22 at 01:06
  • Hi @user7088941, I fully agree with @JoeriSebrechts. You should not use any preprocessors or package managers or anything. These break A LOT. Also don't use any "cutting edge" things like "page transitions". Just simple JavaScript and HTML - sites that have been made with that in 1995 will still work today (including ``) – Dirk Boer Aug 12 '22 at 10:19
  • 1
    I think this answer is just trading one platform (a specific browser family) for another (a specific operating system family). There is no reason for me to believe browser platforms provide more backwards compatibility than native operating systems. – Doc Brown Aug 12 '22 at 16:20
  • @DocBrown it is not "one browser family" - as long as you keep it to simple JavaScript and HTML, it is all browser families on all operating system families. And the execution engine / "compiler" is naturally included. Name me one other language I can edit and execute on both Windows and Mac without installing extra tools? – Dirk Boer Aug 12 '22 at 21:38
-3

I think your premise is wrong. In ten years no one will be interested in running your code. More likely someone will want to redo it on a different platform, implementing logic that matters then.

If you want to make people's lives easier, you would do better looking at what code exists in the organization you work for now and sticking with that. If they use Java, so do you. If they make a web page for everything, so do you. If at any time things need to be changed in the future, your code will just be one of many similar parts that need to be converted, rather than the oddball of someone who insisted on doing things differently.

Martin Maat
  • 7
    The [replication crisis](https://en.wikipedia.org/wiki/Replication_crisis) is a real problem in scientific research, particularly in rapidly evolving fields like machine learning. One of the prerequisites to being able to reproduce the research is the ability to run the code used for it, so at least in some fields, this _is_ important (and the OP's question is specifically about a research project). – Philip Kendall Aug 07 '22 at 17:08