160

I'm currently developing a web application for government land planning. The application runs mostly in the browser, using ajax to load and save data.

I will do the initial development, and then graduate (it's a student job). After this, the rest of the team will add the occasional feature as needed. They know how to code, but they're mostly land-planning experts.

Considering the pace at which Javascript technologies change, how can I write code that will still work 20 years from now? Specifically, which libraries, technologies, and design ideas should I use (or avoid) to future-proof my code?

Dan
  • 1,508
  • 2
  • 10
  • 12
  • 96
    I started programming in Fortran in late 1966, so I've had plenty of time to think about exactly that kind of issue. If you ever come across an even-50%-reliable answer, please let me know. Meanwhile, just think of the almost-certain inevitable obsolescence as "job security" :) – John Forkosh Oct 13 '16 at 08:19
  • 11
    Nothing lasts forever in software engineering. Only HOST systems at banks, and only because nobody dares to update such critical systems. Well, I guess the program running in the Voyager also counts. – Laiv Oct 13 '16 at 08:22
  • 9
    @Laiv Some time back, I worked on money transfer applications for Bankers Trust using Swift messaging running on Vax/VMS. A few years later, Swift eol'ed (end-of-life'ed) all VMS support. Boy, did that cause some problems ... and provided me with yet another contract at BTCo. Like I said above, "job security":). Anyway, my point is that even critical financial market applications aren't immune to obsolescence. – John Forkosh Oct 13 '16 at 08:36
  • 1
    "planned obsolescence" for the good of all of us. – Laiv Oct 13 '16 at 09:04
  • 102
    How about "Write code that the next developer can understand"? If and when the code becomes obsolete to the point that they will need to find a programmer to update it, the best scenario is that they will understand what your code is doing (and maybe why certain decisions were made). – David Starkey Oct 13 '16 at 13:30
  • 4
    I doubt the front-end can go unchanged for 20 years unless it runs on a mainframe using COBOL. I would say: develop the backend with services, have all the business logic there, and allow the front-end to consume those services. At least that way the front end can change as needed, as long as you build the backend strong enough. Check out MSP, a mortgage app; it's so old and stable. I doubt you could get that kind of durability out of a frontend application. In 20 years it would look very ugly and dated, just like MSP. – Tony Oct 13 '16 at 17:50
  • 4
    FWIW, I recently found out that some telecom billing software I wrote back in the mid-90s is still in use. I guess that hack I put in for Y2K compliance worked. – TMN Oct 13 '16 at 19:30
  • 38
    Just use plain old HTML, no JS, no plugins, nothing fancy. If it works in Lynx, it's good for all time. – Gaius Oct 13 '16 at 20:17
  • Not really an answer but just an idea: if you can set up an automatic build system that generates a working VM and a Docker container with the software installed in them, they might be a very good "backup" solution in case the software needs to be run in the future. – Andrea Lazzarotto Oct 14 '16 at 11:41
  • 2
    Make sure you include everything that is needed to rebuild and deploy the app within your source code. Nothing that will require someone to download and install at a later date, because it probably won't be available 5 years from now. Tend towards open source, because it'll be simpler to recreate a dev environment in the future. Avoid technologies tied to a specific vendor (COM+, OracleDb, etc). – Andrew Lewis Oct 14 '16 at 14:56
  • 1
    I don't believe that the internet as we know it will last that long. HTML should have been replaced or altered a decade ago; JavaScript will be replaced by TypeScript or the like. The day the big players feel it's time to move to a well-designed language based on all the experience of the past 20 years can't be far off. Browsers will run two engines, one for the old sites and one for the new sites, and after a couple of years they will kill their support for the old sites. If HTML still rules the internet in 2040 it will be very sad. – Asaf Oct 14 '16 at 21:08
  • 1
    Realistically, I think you are looking at this the wrong way. Web tech and non-critical government systems come and go. There is a strong case for doing this as directly as possible and assuming it will need attention or get replaced as requirements and technologies change. The only part I would worry about future-proofing is the data: choose something relational and proven over 10+ years, and make sure you can export it to something that makes sense to other actors in the domain if need be. – Bill Oct 14 '16 at 21:55
  • 8
    "Data matures like wine, applications mature like fish." – Captain Hypertext Oct 15 '16 at 19:24
  • Keep things as simple as you can, using few technologies/libs/etc. Use progressive enhancement to enable graceful degradation. – Tanath Oct 16 '16 at 03:24
  • 3
    One tip: don't have errors in your HTML. If there's one thing that browsers have been treating differently over the years, it's errors. – Mr Lister Oct 16 '16 at 18:27
  • Use HTML, CSS and vanilla JS. – m4n0 Oct 17 '16 at 17:56
  • 1
    `(it's a student job)` Seriously: it's very noble that you think this much about it. But let's recap: they hired a student to plan an (apparently) business-critical application which should last for the next 20 years? If they immediately offer you a permanent position where you can start after you graduate, you could invest more time into this. Otherwise it's a nice task, but don't overthink it. – Noir Oct 17 '16 at 19:28

8 Answers

182

What is even more important than your code surviving for 20 years is that your data survives for 20 years. Chances are, that's the thing worth preserving. If your data is easy to work with, building an alternate system on top of it with newer technology will be easy.

  • So start with a clear and well-documented data model (a minimal sketch follows this list).
  • Use an established, well supported database system, such as Oracle[1] or SQL Server.
  • Use basic features, don't try to squeeze in flashy new ones.
  • Prefer simple over clever.
  • Accept that future maintainability can come at the expense of aspects like performance. For instance, you might be tempted to use stored procedures, but these might limit future maintainability if they prevent someone from migrating the system to a simpler storage solution.
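
As a hedged illustration of the first and third points, here is what a minimal, documented table definition might look like. The table and column names are hypothetical, and only basic, standard SQL is used, so the statement should load into Oracle, SQL Server, or most other engines. It is written as a JavaScript template literal, as it might appear in a Node.js setup script:

```javascript
// Hypothetical table for a land-planning app: plain, standard SQL only,
// no vendor-specific types or features.
const createParcelTable = `
  CREATE TABLE parcel (
    parcel_id     INTEGER       NOT NULL PRIMARY KEY,
    zone_code     VARCHAR(10)   NOT NULL,  -- land-use zone, e.g. 'R1'
    area_sq_m     NUMERIC(12,2) NOT NULL,  -- surveyed area in square metres
    registered_on DATE          NOT NULL   -- date the parcel entered the registry
  )
`;
```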

Once you have that, future-proofing the app itself is simpler, because it's a wrapper around the data model and can be replaced if, for instance, in 10 years no one uses JavaScript anymore and you need to migrate the app to WASM or something. Keeping things modular and less interdependent allows for easier future maintenance.


[1] Most comments to this answer take a strong stance against using Oracle for a DB, citing a lot of perfectly legitimate reasons why Oracle is a pain to work with and has a steep learning curve and installation overhead. These are entirely valid concerns when choosing Oracle as a DB, but in our case we're not looking for a general purpose DB, but one where the primary concern is maintainability. Oracle has been around since the late 70's and will probably be supported for many years to come, and there's a huge ecosystem of consultants and support options that can help you keep it running. Is this an overpriced mess for many companies? Sure. But will it keep your database running for 20 years? Quite likely.

Avner Shahar-Kashtan
  • 9,166
  • 3
  • 29
  • 37
  • 142
    I'm sorry, but I have to say this. If you use Oracle, you're shooting everyone in the foot with regards to "easy to work with." Oracle is *not* easy to work with in the slightest. A great deal of functionality that SQL Server, PostgreSQL, and probably even MySQL make simple, Oracle either flat out doesn't have or makes overly difficult. I never have as many stupid problems with other DBs as I have with Oracle; even just setting up the client is a huge pain in the butt. Even Googling things is hard. If you want "easy to work with," stay away from Oracle. – jpmc26 Oct 13 '16 at 22:17
  • 2
    Migration across different SQL vendors, alongside backup and restore procedure, may also need to be planned up-front. In a very contrived and unfortunate scenario, data may be backed-up when the system was running with one SQL vendor, and then decades later it may have to be restored (or somehow recovered) into a different system running a different SQL vendor. This may even call for an export into a vendor-neutral data format as part of backup procedure in order to prevent this scenario. That said I'm not a pragmatic person - I worry too much. – rwong Oct 13 '16 at 22:53
  • 4
    +1 for keeping the data as simple as possible. Use standard SQL for this, e.g. use _OUTER JOIN_ instead of the Oracle-specific _+_ operator. Use simple table layouts. Don't normalize your tables to the absolute maximum level. Decide if some tables can have redundant data or if you really must create a new table so that every value exists only once. Are stored procedures _vendor specific_? If yes, then don't use them. Don't use the hottest feature of your current language of choice: I've seen more COBOL programs _without OOP features_ than with them. And that's totally OK. – some_coder Oct 14 '16 at 06:46
  • 3
    @jpmc26 I agree with your sentiments about Oracle, but as I said, "easy to work with" isn't necessarily the main requirement here. I prefer a solidly supported platform here, even if it's a pain to work with. Because when amortized over 20 years, it's not too bad. – Avner Shahar-Kashtan Oct 14 '16 at 07:15
  • 2
    @rwong I agree. That's why I want the data schema simple, so it can be backed up, migrated, exported as CSV, converted to JSON and reimported into SQL. If the data schema is portable, it's maintainable. – Avner Shahar-Kashtan Oct 14 '16 at 07:16
  • @some_coder As a developer that loves to play with the newest tools, it goes against the grain, but when you want to plan for the long-term, you need to use *fewer toys*, and the simplest features. – Avner Shahar-Kashtan Oct 14 '16 at 07:17
  • @some_coder I totally agree, especially about Oracle's keenness to lock people in. One example: their Flashback technology is hard to replace without locking into a different tech. Any ideas? :) – Rob Grant Oct 14 '16 at 12:01
  • 1
    +1 Great post, most who have worked with it agree that oracle db sux, though. – Mark Rogers Oct 14 '16 at 14:43
  • @jpmc26 - The reason you don't like Oracle is because it gives you enough rope to hang yourself and many inexperienced developers do just that. The databases you mention as being easier to work with lack the robustness of Oracle and in many cases do not implement the ANSI SQL standard as well. There is certainly a learning curve that comes with that additional functionality but claiming that it's a *poorer* database because of the complexity is like saying that C++ is a poorer language than Java because it doesn't have a Garbage Collector. –  Oct 14 '16 at 15:32
  • 1
    @DanK Untrue. It doesn't provide good collection functionality. Its query planner isn't very smart; just in the past 2 days it was choosing to do a full join between two tables instead of *filtering* on other tables first. (I had to use `/*+ ORDERED */`. Updating stats did not help.) It has stupid bugs like not being able to parse the WKTs it generates, or causing errors by reordering queries in such a way that invalid arguments got passed into a function. The .NET managed client doesn't work with Oracle 11 under FIPS mode because the *server* is missing the encryption algorithms. – jpmc26 Oct 14 '16 at 15:43
  • 2
    @DanK And a lot of the info about problems is hidden behind a very expensive pay wall. All of these are problems that have wasted my time in the last 2 or 3 years, and none of them are "having enough rope to hang yourself." – jpmc26 Oct 14 '16 at 15:43
  • 1
    Guys, you're missing the point here. It's not discussing the relative merits of Oracle as a general purpose RDBMS. We're talking about a scenario where long-term support and predictability are key, and Oracle *does* provide that. Bad query planners or incomplete ANSI SQL implementations aren't the issue here. – Avner Shahar-Kashtan Oct 14 '16 at 15:46
  • @AvnerShahar-Kashtan No, I understand what you mean. I was debunking DanK's comment, mostly. I understand that Oracle provides *very* expensive "support," but my point is that ultimately, the poor functionality will make it *so much* harder to maintain the application in the future, that it isn't a good choice for an application that needs to last this long. Especially when there is a viable alternative from an equally strong company. – jpmc26 Oct 14 '16 at 16:07
  • 8
    Indeed avoid Oracle. The only DB in existence today that is likely to not look like a bad choice in 20 years is Postgresql. – Joshua Oct 14 '16 at 18:55
  • 3
    I'd like to add that great open source DBMSs are preferable because there is a good chance they won't die. If Oracle stops making money in 10 years, then in 11 it will be gone. PostgreSQL seems like the best horse to bet on. – Shautieh Oct 15 '16 at 06:32
  • Or, if it will be fast enough for you, just store the data in JSON files! That will not work for a transaction-type system, but will work for lots of websites where the data is not often changed. – Ian Oct 18 '16 at 11:53
136

Planning software for such a lifespan is difficult, because we don't know what the future holds. A bit of context: Java was released in 1995, 21 years ago. XmlHttpRequest first became available as a proprietary extension in Internet Explorer 5, released in 1999, 17 years ago. It took about 5 years until it became available across all major browsers. The 20 years you are trying to look ahead are just about the time rich web applications have even existed.

Some things have certainly stayed the same since then. There has been a strong standardization effort, and most browsers conform well to the various standards involved. A web site that worked across browsers 15 years ago will still work the same, provided that it worked because it targeted the common subset of all browsers, not because it used workarounds for each browser.

Other things came and went – most prominently Flash. Flash had a variety of problems that led to its demise. Most importantly, it was controlled by a single company. Instead of competition inside the Flash platform, there was competition between Flash and HTML5 – and HTML5 won.

From this history, we can gather a couple of clues:

  • Keep it simple: Do what works right now, without having to use any workarounds (a sketch follows this list). This behaviour will likely stay available long into the future for backwards-compatibility reasons.

  • Avoid reliance on proprietary technologies, and prefer open standards.
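
To make the first clue concrete, here is a minimal sketch of the kind of AJAX call the question describes, using only the plain XMLHttpRequest API that has been standard across browsers for many years; no framework, no build step. The endpoint and callbacks are hypothetical:

```javascript
// Plain-JS AJAX: no library, no build step, only long-standardized APIs.
function loadParcels(onSuccess, onError) {
  var xhr = new XMLHttpRequest();
  xhr.open("GET", "/api/parcels", true); // hypothetical endpoint
  xhr.onreadystatechange = function () {
    if (xhr.readyState !== 4) return;    // wait until the response is complete
    if (xhr.status === 200) {
      onSuccess(JSON.parse(xhr.responseText));
    } else {
      onError(xhr.status);               // HTTP error, or 0 on network failure
    }
  };
  xhr.send();
}
```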

The JavaScript world today is relatively volatile, with a high flux of libraries and frameworks. However, nearly none of them will matter in 20 years – the only “framework” I'm certain will still be in use by then is Vanilla JS.

If you want to use a library or tool because it really makes development a lot easier, first make sure that it's built on today's well-supported standards. You must then download the library or tool and include it with your source code. Your code repository should include everything needed to get the system runnable. Anything external is a dependency that could break in the future. An interesting way to test this is to copy your code to a thumb drive, go to a new computer with a different operating system, disconnect it from the internet, and see whether you can get your frontend to work. As long as your project consists of plain HTML+CSS+JavaScript plus perhaps some libraries, you're likely going to pass.

amon
  • 132,749
  • 27
  • 279
  • 375
  • 4
    Large scale applications are unmaintainable in vanilla JS, as of now. ES6 already somewhat fixes the issue, but there is a reason why Flow or TypeScript are gaining popularity. – Andy Oct 13 '16 at 12:00
  • 34
    @DavidPacker Absolutely, TypeScript etc. are great and make development easier. But as soon as I introduce a build process, all the tools required for the build process become dependencies: NodeJS, Gulp, NPM – who says NPM will still be online in 20 years? I'll have to run my own registry to be certain. This is not impossible. But at some point, it's better to let go of things that make development easier only immediately, but not in the long run. – amon Oct 13 '16 at 12:29
  • I am not a JS developer; if you are, how are you maintaining huge codebases written in JS, and how do you introduce new developers to them when JS has no types? – Andy Oct 13 '16 at 12:49
  • 31
    @DavidPacker There are many dynamic languages, and surprisingly, many successful systems have been built with Smalltalk, Ruby, Perl, Python, even with PHP and JS. While statically typed languages tend to be more maintainable whereas dynamic languages tend to be better for rapid prototyping, it's not impossible to write maintainable JS. In the absence of a compiler, high median skill in the team, craftsmanship, and extra emphasis on *clear code organization* becomes even more crucial. I personally think types make everything easier, but they're no silver bullet. – amon Oct 13 '16 at 13:09
  • 4
    Did I just read "take usb and test on different machine"? Why not just spin up virtualbox or just use incognito mode (with ethX disabled). – Kyslik Oct 13 '16 at 15:58
  • 1
    @Kyslik Yes, you did :) But you're right, a VM is absolutely sufficient. The point is – test your build & release process in a completely new environment to be sure that you didn't miss any dependency. Moving from Linux to Windows or vice versa is bound to uncover all kinds of wrong assumptions. It has happened too often that I installed some extra library or tool and then forgot about it – that mustn't happen if the software is supposed to last a decade or two. – amon Oct 13 '16 at 16:50
  • 5
    I’m not certain vanilla JS *will* be a sure thing 20 years from now. Its history was rocky and experimental, and it’s picked up a fair amount of cruft along the way, even as it has emerged as a delightful and effective language (I personally prefer JavaScript or TypeScript myself). It’s not hard to imagine that vendors may well want to ditch some or all of that cruft, whether it means starting to offer a new alternative language—as Google seemed to be proposing with Dart, however much that doesn’t seem to have gone anywhere—or by deprecating and then eliminating portions of JS. – KRyan Oct 13 '16 at 17:19
  • 1
    jQuery will obviously outlive vanilla JS :-) – Alexander Derck Oct 13 '16 at 18:03
  • 2
    @KRyan WebAssembly. In 5 years or so I think it's quite possible you'll be able to run pretty much any language you want on the front-end. I don't think that'll mean JS is going away though, not for a long time. At the very least I think you'll still be able to compile JS to WebAssembly 20 years from now and run it that way. – Ajedi32 Oct 13 '16 at 18:23
  • @amon Did you mean to say JavaScript was released in 1995? Java and JavaScript were both released in that year, but JavaScript seems more relevant to the rest of your answer. – thelem Oct 13 '16 at 19:57
  • 1
    @DavidPacker, maybe you should try something before you say things about it. My current project, a web application, uses JavaScript for the front and backend and has ~300k lines of code (it's 107MB in git), of which I've personally written 30k on both sides of the stack. True, some of that is Node, NPM and Grunt, but overall it's not terribly hard to maintain. File structure has the largest impact, and dependency injection pretty much takes care of the rest. – Ryan Oct 13 '16 at 20:37
  • 2
    @deadMG Vanilla JS is not nearly as bad as you are suggesting. You can definitely make maintainable and easy to read Javascript. You also just provided an anecdote, so that's not helpful. ES6 is only a superset of the current version of JS, so it's not like ES6 is some magical new language. It has some cool new features, but you don't need them. That said, I think it might be safe to transpile from a newer ecmascript, ES6 for example, in the beginning, as the idea is ES6 will inevitably be natively supported. – Ben Oct 14 '16 at 02:13
  • 2
    @KRyan No browser vendor wants to drop vanilla JS, because it will break the web (for users of that browser) and they'll have zero users within a week. – user253751 Oct 15 '16 at 11:07
  • 1
    VanillaJS is not only as bad as I suggested, it's much worse. ES6 doesn't really help matters all that much. If you want code that would actually be better maintained than replaced, you need at least Typescript. – DeadMG Oct 16 '16 at 11:53
  • @Ryan I am not a JS developer, but that does not mean I haven't written JS. I don't like vanilla JS because it's not self-documenting, a feature I expect from a well-established language. Sure, you may make even large-scale applications written in vanilla JS readable, that is, if you provide documentation for the code. But who guarantees the documentation will be up to date? Nobody. If the code documented itself (parameter and return types, methods) there would be no need for additional documentation, as the code itself is one. – Andy Oct 16 '16 at 12:19
  • @DavidPacker In which way can explicit parameter and return types help maintenance? Do you think that genericity makes programs unreadable? Vanilla JS is perfectly readable; it's more the interactions with the DOM which make things complicated to follow. – Shautieh Oct 17 '16 at 08:16
  • @DavidPacker "self documenting code", you can have a self documenting function, class at best, but if your class is part of a design pattern, say decorator for instance like Java's stream/reader/writer, it can still be really hard to know how to use that class in the big picture that the application is. Even if i prefer static typing, when your variable is called "name", "nbLine", no need of static typing to know what it is. And if you can't really name something properly that would let know the proper type, maybe use a little bit of Hungarian notation. – Walfrat Oct 17 '16 at 11:32
  • Another thing: if your application generates files that must be readable in 20 years, prefer plain text/CSV/XML that only contains the data, and add a little application to properly show it to the others. Maybe we won't write .docx/.odt files in 20 years. – Walfrat Oct 17 '16 at 11:36
  • 1
    @Walfrat With static typing and modern IDEs you can at least inspect the methods and attributes of said class. One cannot say the same about JavaScript; for example, callbacks are a huge unknown, and you cannot let a user of an API know in the code what parameter (or even multiple parameters) will be passed to a callback they define, or what attributes those parameters have. You need documentation for that. Another document. The more things you need to describe a code base, the more likely you're going to make a mistake somewhere. – Andy Oct 17 '16 at 11:46
  • @DavidPacker yes of course, i just wanted to precise than "self documenting code" is in fact quite narrow as you can't really go beyond class level. – Walfrat Oct 17 '16 at 11:49
  • @DavidPacker, you are under the false assumption that it's a language's job to be self-documenting and that the only way to be self-documenting is to have types. ***It is a programmer's job to write self-documenting code.*** Having types in C++ doesn't stop me from doing something stupid like declaring `int a = 500;`. What's `a` used for? It's an `int`, but what does it represent? How is that better or more self-documenting than `var taxIncrementAmount = 500;`? In either case, to have truly self-documenting code the ***programmer*** is responsible, not the language. – Ryan Oct 17 '16 at 17:19
  • 1
    @Ryan Having a function like [this](http://pastebin.com/m3mH3NBb) how do you know beforehand what the parameters the `onFinishedLoadingCallback` is going to take without looking into the body of the `loadDataFromApi` function and without looking at a documentation? You don't. Compare that to a [C# version](http://pastebin.com/CLvMQpRG), as a developer looking at the interface of the method I immediately see what I can expect to recieve in the callback without having to see the body at all. I find that much more convenient. – Andy Oct 17 '16 at 17:39
  • I would hardly call Flash dead. Heck, just this morning I watched the new episode of *The Flash* on the official, Flash Player-based, website. – Mason Wheeler Oct 19 '16 at 14:13
38

The previous answer by amon is great, but there are two additional points which weren't mentioned:

  • It's not just about browsers; devices matter too.

    amon mentions the fact that a “web site that worked across browsers 15 years ago will still work the same”, which is true. However, look at the websites created not fifteen but ten years ago which, when created, worked in most browsers for most users. Today, a large part of users won't be able to use those websites at all, not because browsers changed, but because devices did. Those websites look terrible on the small screens of mobile devices, and may not work at all if the developers decided to rely on the JavaScript click event without knowing that the tap event is also important (a short sketch at the end of this answer illustrates this point).

  • You're focusing on the wrong subject.

    Technology changes are one thing, but a more important one is the changes of requirements. The product may need to be scaled, or may need to have additional features, or may need its current features to be changed.

    It doesn't matter what will happen to browsers, or devices, or W3C, or... whatever.

    If you write your code in a way it can be refactored, the product will evolve with technology.

    If you write your code in a way nobody can understand and maintain it, technology doesn't matter: any environmental change will bring your application down anyway, such as a migration to a different operating system, or even a simple thing as natural data growth.

    As an example: I have worked in software development for ten years. Among the dozens and dozens of projects, there were only two I decided to change because of technology, more precisely because PHP evolved a lot over the last ten years. It wasn't even the customer's decision: he couldn't care less whether the site uses PHP's namespaces or closures. Changes related to new requirements and scalability, however? There were plenty!
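
As a hedged sketch of the first point above: the handler below reacts to the device-neutral click event, which mouse, touch, and keyboard users all trigger, rather than to mouse-only events such as mouseover, which fail silently on touch screens. The element id and save routine are hypothetical:

```javascript
// Device-neutral input handling: "click" fires for mouse clicks,
// taps, and keyboard activation alike.
var saveButton = document.getElementById("save-plan"); // hypothetical id
saveButton.addEventListener("click", function () {
  savePlan(); // hypothetical save routine
});
// By contrast, a menu that opens only on "mouseover" is unusable on
// touch devices, where there is no hover state at all.
```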

Arseni Mourzenko
  • 134,780
  • 31
  • 343
  • 513
  • 4
    Adapting to different screen sizes is a general problem. Mobile is the hyped thing at the moment, but if you are looking at this website in a full-screen browser window on a screen with enough resolution, there's a lot of empty (wasted) space. Changing layouts and how information is presented to best use the available pixels never really happened in a smart way. Mobile made this obvious. But thinking in the other direction might be more important for the question at hand. – null Oct 13 '16 at 12:46
  • 9
    @null: while I agree with your comment, StackExchange websites may not be the best illustration of your point. Given the data to display, I believe StackExchange designers/developers did a great job of displaying it as it needs to be displayed, including on large monitors. You can't make the main column wider, because text would become much more difficult to read, and you can't use multiple columns because it won't look nice for short questions and answers. – Arseni Mourzenko Oct 13 '16 at 12:56
  • Another good example is the 'hover' event that was often used in menu systems. Many of those menus fail miserably with touch devices. – Justas Oct 13 '16 at 15:59
  • You're 110% (or more) right about devices, and I can provide you with decades-older examples. Back in the late 1980's I worked on CICS applications running on IBM mainframes and synchronous 3270 terminals. The CICS region is kind of analogous to server-side apps, sending screen-fulls of data at a time to the synchronous terminals, which are thus analogous to dedicated-device-browsers. And CICS programming was maybe 80% Cobol, 20% PL/1. Both those languages are mostly obsolete nowadays, and the appearance of Unix workstations (Sun and Apollo) in the early 1990's pretty much killed CICS entirely – John Forkosh Oct 17 '16 at 11:06
32

You do not plan to last 20 years. Plain and simple. Instead you shift your goals to compartmentalization.

Is your app database-agnostic? If you had to switch databases right now, could you? Is your logic language-agnostic? If you had to rewrite the app in a totally new language right now, could you? Are you following good design guidelines like SRP and DRY?

I have had projects live for longer than 20 years, and I can tell you that things change. Like pop-ups: 20 years ago you could rely on a pop-up; today you cannot. XSS wasn't a thing 20 years ago; now you have to account for CORS.

So what you do is make sure your logic is nicely separated, and that you avoid using ANY technology that locks you in to a specific vendor.

This can be very tricky at times. .NET, for example, is great at exposing logic and methods for its MSSQL database adapter that don't have equivalents in other adapters. MSSQL might seem like a good plan today, but will it remain so for 20 years? Who knows. An example of how to get around this is to have a data layer totally separate from the other parts of the application. Then, in the worst case, you only have to rewrite the entire data layer; the rest of your application stays unaffected.
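
A minimal sketch of such a separated data layer, assuming Node.js (all names are hypothetical, and "./vendor/db-driver" stands in for whichever client library is actually used): the rest of the application calls these functions and never sees SQL or the vendor, so swapping databases later means rewriting only this one module.

```javascript
// dataLayer.js - the only module that knows which database is behind the app.
const db = require("./vendor/db-driver"); // placeholder for the real client

// The rest of the application depends on these functions, not on the vendor.
async function getParcelById(parcelId) {
  const rows = await db.query(
    "SELECT parcel_id, zone_code, area_sq_m FROM parcel WHERE parcel_id = ?",
    [parcelId]
  );
  return rows[0] || null;
}

async function saveParcel(parcel) {
  await db.query(
    "UPDATE parcel SET zone_code = ?, area_sq_m = ? WHERE parcel_id = ?",
    [parcel.zoneCode, parcel.areaSqM, parcel.parcelId]
  );
}

module.exports = { getParcelById, saveParcel };
```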

In other words, think of it like a car. Your car is not going to make it 20 years. But with new tires, a new engine, a new transmission, new windows, new electronics, etc., that same car can be on the road for a very long time.

coteyr
  • 2,420
  • 1
  • 12
  • 14
  • 2
    "If you had to switch data-bases right now, could you" This is nigh impossible to accomplish if you do anything more than CRUD on one row at a time. – jpmc26 Oct 13 '16 at 22:26
  • 1
    Plenty of ORMs are database-agnostic. For any one of the projects I am working on, I could guarantee that I could switch from SQLite to MySQL or Postgres with no effort. – coteyr Oct 13 '16 at 23:21
  • 5
    And ORMs cease to be very good tools for the job when you do more than simple CRUD on a single record at a time. That's why I qualified it. I've tried. As query complexity grows, even the best ORMs become more trouble than just writing the query, and even if you force your query into them, you pretty quickly find yourself using database specific features or optimizations. – jpmc26 Oct 13 '16 at 23:22
  • Preference I guess. I have yet to have a problem with complex tasks and ORMs – coteyr Oct 13 '16 at 23:25
  • 1
    Define "complex". Was this a bulk operation? Did it include window queries? Subqueries? CTEs? Unions? Complex grouping conditions? Complex math on each row and the aggregates? How many joins in a single query? What kinds of joins? How many rows were processed at once? Admittedly, saying *anything* over single row CRUD (Mind you, this means one row per query, not per web request or whatever.) is a bit of hyperbole, but the road to when the ORM becomes more trouble than it's worth is much shorter than you think. And the steps to making a query perform well are very frequently database specific. – jpmc26 Oct 13 '16 at 23:34
  • @jpmc26 - other than CTEs, you can do all of those things in most modern ORMs. And most ORMs have an escape-to-SQL operation, and CTEs are standard SQL, so you should be able to integrate the two and make a working query that is portable between most modern database systems. – Periata Breatta Oct 14 '16 at 13:21
  • 4
    "Is your app database agnostic? If you had to switch data-bases right now, could you?. Is your logic language agnostic. If you had to rewrite the app in a totally new language right now, could you?" - This is ABSOLUTELY TERRIBLE advice! Don't constraint yourself artificially to whatever you think the largest common denominator of programming languages or databases is - this will force you to reinvent the wheel constantly. Instead, try to find the NATURAL way to express the desired behaviour in your programming language and database of choice. – fgp Oct 14 '16 at 13:38
  • This answer seems to be in the realm of academia / R&D, not typical business. The industry doesn't care one bit about how future-proof code is. The only thing that industry cares about is instant financial gratification. Usually the quickest route to such gratification is picking a technology and going with it. – Luke A. Leber Oct 16 '16 at 22:42
12

The answers by @amon and some others are great, but I wanted to suggest you look at this from another perspective.

I've worked with Large Manufacturers and Government Agencies who were relying on programs or code-bases that had been used for well over 20 years, and they all had one thing in common -- the company controlled the hardware. Having something running and extensible for 20+ years isn't difficult when you control what it runs on. The employees at these groups developed code on modern machines that were hundreds of times faster than the deployment machines... but the deployment machines were frozen in time.

Your situation is complicated, because a website means you need to plan for two environments -- the server and the browser.

When it comes to the server, you have two general choices:

  • Rely on the operating system for various support functions which may be much faster, but means the OS may need to be "frozen in time". If that's the case, you'll want to prepare some backups of the OS installation for the server. If something crashes in 10 years, you don't want to make someone go crazy trying to reinstall the OS or rewrite the code to work in a different environment.

  • Use versioned libraries within a given language/framework, which are slower, but can be packaged in a virtual environment and will likely run on different operating systems or architectures (see the sketch after this list).
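
As a hedged sketch of the second option, assuming a Node.js server: commit exact copies of the libraries into a vendor directory in the repository and load them from there, so nothing depends on a package registry still being online decades from now.

```javascript
// Load dependencies from a "vendor" directory committed to the repository,
// instead of resolving them through a registry at install time.
const express = require("./vendor/express"); // exact vendored copy, version known
// rather than:
// const express = require("express"); // fetched from npm, which may not
//                                     // exist when the app is next rebuilt
```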

When it comes to the browser, you'll need to host everything on the server (i.e. you can't use a global CDN to host files). We can assume that future browsers will still run HTML and Javascript (at least for compatibility), but that's really a guess/assumption and you can't control that.

  • 11
    You have to consider security too. A 20-year old unsupported OS will probably be full of security holes. I worked for a company and inherited this problem. Government agency, ancient OSs (all long virtualised, fortunately), but this was a huge problem, and upgrading was nigh impossible due to having to completely rewrite the software (hundreds of individual spaghetti-code PHP scripts, each of which had the database calls hardcoded, using deprecated functions that the new driver didn't support /shudder). –  Oct 13 '16 at 18:37
  • If you go the OS route, at best you can hope that security patches were applied, and that future maintainers will be able to shield stuff at the networking layer. In order to plan for stuff to work like this in the long term (esp in the absence of a large budget, as the OP is a student) you basically need to accept that your application and server will eventually become insecure. For example, in 20 years there will eventually exist known exploits for the SSL version on the server... but that OS may not be compatible with openssl versions in10 years. This is all about minimizing tradeoffs. – Jonathan Vanasco Oct 13 '16 at 20:50
  • @FighterJet, you can always run a firewall on a supported OS; then you have few risks apart from SQL injection etc., which you should have coded for anyway. – Ian Oct 18 '16 at 11:56
  • @Ian: I wish. There was a firewall. But I didn't write the code, I inherited it. And yes, there were thousands of SQL vulnerabilities that I wish I could have fixed, but the real problem was that the code depended on a particular version of PHP4 (which has been deprecated for forever and is chock-full of security holes) and a particular version of the database driver (which didn't work on newer OSs), which prevented us upgrading to a newer version of the database... the point is, relying on something staying the same doesn't always work. Let's just say I'm glad I don't work there anymore. –  Oct 18 '16 at 16:43
  • 1
    @FighterJet That's actually a really good example of what I had meant to talk about. You ended up inheriting code that only works on a particular version of PHP4 and a driver that only runs on a particular OS... so you can't upgrade the server. I wouldn't advocate anyone doing that, but it happens. -- a lot. FWIW, I do agree with you but I wanted my answer to foster thinking around those types of scenarios, not make a recommendation. – Jonathan Vanasco Oct 18 '16 at 17:23
  • @FighterJet, you can add a firewall without changing any of the code: set up a VLAN on the network to stop any packets that do not come via the firewall. – Ian Oct 18 '16 at 17:30
  • @Ian, we did have a firewall, like I said. Our network administrator actually did a great job with that. He even went so far as to virtualise all of the old servers, which meant that we didn't have to deal with replacing the hardware and transferring the old OS to the new server. Virtualisation makes it super easy to do full daily backups and to spin up a clone if something goes wrong. Which is another good point. You need to have a good data back-up strategy. –  Oct 18 '16 at 19:31
6

The core of most applications is the data. Data is forever. Code is more expendable, changeable, malleable. The data must be preserved, though. So focus on creating a really solid data model. Keep the schema and the data clean. Anticipate that a fresh application might be built on top of the same database.

Pick a database that is capable of enforcing integrity constraints. Unenforced constraints tend to be violated as time passes. Nobody notices. Make maximum use of facilities such as foreign keys, unique constraints, check constraints and possibly triggers for validation. There are some tricks to abuse indexed views to enforce cross-table uniqueness constraints.
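
As a hedged sketch of letting the database enforce integrity (hypothetical names, standard SQL only, shown as a JavaScript template literal as in a Node.js setup script): a foreign key ties every decision to an existing parcel, a CHECK constraint limits status to known values, and a UNIQUE constraint allows at most one decision per parcel.

```javascript
// Integrity rules live in the database, not in application code that
// may be rewritten several times over the data's lifetime.
const createZoningDecisionTable = `
  CREATE TABLE zoning_decision (
    decision_id INTEGER     NOT NULL PRIMARY KEY,
    parcel_id   INTEGER     NOT NULL REFERENCES parcel (parcel_id),
    status      VARCHAR(10) NOT NULL
                CHECK (status IN ('DRAFT', 'APPROVED', 'REJECTED')),
    decided_on  DATE,
    CONSTRAINT one_decision_per_parcel UNIQUE (parcel_id)
  )
`;
```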

So maybe you need to accept that the application will be rewritten at some time. If the database is clean there will be little migration work. Migrations are extremely expensive in terms of labor and defects caused.

From a technology perspective it might be a good idea to put most of the application on the server and not in a JavaScript form on the client. You'll probably be able to run the same application in the same OS instance for an extremely long time thanks to virtualization. That's not really nice but it's a guarantee the app will work 20 years from now without any expensive maintenance and hardware costs. Doing this you at least have the safe and cheap fallback of continuing to run old, working code.

Also, I find that some technology stacks are more stable than others. I'd say that .NET has the best possible backwards compatibility story currently. Microsoft is dead serious about it. Java and C/C++ are really stable as well. Python has proven that it is very unstable with the Python 3 breaking changes. JavaScript actually seems quite stable to me because breaking the web is not an option for any browser vendor. You probably should not rely on anything experimental or funky, though. ("Funky" being defined as "I know it when I see it").

usr
  • 2,734
  • 18
  • 15
  • [about .net backwards compatibility story](http://programmers.stackexchange.com/questions/149139/what-is-net-framework-backward-compatibility) - in contrast, I don't think I've seen a Java app that would ask for an older version of Java. That might change with Java 9 or beyond, but I haven't seen it happen yet. – eis Oct 17 '16 at 11:29
  • It is amazingly compatible in practice, and installing an older version side by side is not an issue. Also note that the .NET BCL is, in my estimate, 10-100x larger than the Java built-in classes. – usr Oct 17 '16 at 12:11
  • backwards compatibility means that there should be no need to install also an older version. But we digress from the original question, this is not really relevant to OP. – eis Oct 17 '16 at 12:23
0

The other answers do make sense. However, I feel the comments on the client technology are overcomplicating things. I've been working as a developer for the past 16 years. In my experience, as long as you keep your client code intuitive, you should be fine. So no "hacks" with frames / iframes, etc. Only use well-defined functions in the browsers.

You can always use compatibility modes in browsers to keep them working.

To prove my point: only a few months ago I fixed a millennium bug in the JavaScript code for a customer who has been running their web app for 17 years. It still works on recent machines, a recent database, and a recent operating system.

Conclusion: keep it simple and clean and you should be fine.

  • 1
    Frames and iframes are very well defined in the HTML spec. What makes them unsuitable? – curiousdannii Oct 14 '16 at 11:55
  • 3
    @curiousdannii: It is not so much the use of iframes (frames are no longer supported in HTML5), as the use of frames and iframes to load content asynchronously through scripting, etc.. It can work great right now, but it will always be subject to security changes. – Jonathan van de Veen Oct 14 '16 at 12:30
-2

A few axioms:

  • Truth survives. In this context, that means algorithms and data models: that which truthfully represents the "what" and the "how" of your problem space. Although there is always the potential for refinement and improvement, or an evolution of the problem itself.
  • Languages evolve. This is as true for computer languages as it is for natural languages.
  • All technology is vulnerable to obsolescence. It just may take longer for some technologies than others.

The most stable technologies and standards (those least vulnerable to obsolescence) tend to be those which are non-proprietary and have been most widely adopted. The wider the adoption, the greater the inertia against almost any form of change. Proprietary "standards" are always vulnerable to the fortunes and whims of their owner and competitive forces.

Twenty years is a very long time in the computer industry. Five years is a more realistic target. In five years' time, the whole problem your application is meant to solve could be completely redefined.

A few examples to illustrate:

C and C++ have been around for a long time. They have implementations on just about every platform. C++ continues to evolve, but "universal" features (those available on all platforms) are pretty much guaranteed to never be deprecated.

Flash almost became a universal standard, but it is proprietary. Corporate decisions to not support it on popular mobile platforms have basically doomed it everywhere - if you're authoring for the web, you want your content available on all platforms; you don't want to miss the major market mobile has become.

WinTel (Windows/x86), despite being proprietary to Microsoft and Intel, having started out on a less-than-optimal platform (the 8088: 16-bit internal, 8-bit external, versus the contemporaneous Apple Macintosh's 68000: 32-bit internal, 16-bit external), and despite erosion to Apple in the consumer market, remains a de facto choice for business platforms. In all that time (25 years), a commitment to backward compatibility has both hobbled future development and inspired considerable confidence that what worked on the old box will still work on the new one.

Final thoughts

JavaScript might not be the best choice for implementing business logic. For reasons of data integrity and security, business logic should be performed on the server, so client-side JavaScript should be limited to UI behavior. Even on the server, JavaScript might not be the best choice. Although easier to work with than other stacks (Java or C#) for small projects, it lacks the formality which can help you write better, more organized solutions when things get more complex.
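
As a hedged sketch of that split, assuming a Node.js/Express-style server (the route, fields, and data-layer call are all hypothetical): the server re-validates every rule even if the browser UI already checked it, because client-side JavaScript can always be bypassed.

```javascript
const express = require("express"); // assumed server framework
const app = express();
app.use(express.json()); // parse JSON request bodies

// Business rules are enforced here, where users cannot tamper with them;
// any checks in the browser UI are only a convenience.
app.post("/api/parcels", async (req, res) => {
  const { zoneCode, areaSqM } = req.body;
  if (!/^[A-Z]\d$/.test(zoneCode) || !(areaSqM > 0)) {
    return res.status(400).json({ error: "invalid parcel data" });
  }
  await saveParcel(req.body); // hypothetical data-layer function
  res.status(201).end();
});
```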

Zenilogix
  • 309
  • 1
  • 3