54

The team I'm on creates components that the company's partners can use to integrate with our platform.

As such, I agree that we should take extreme care when introducing (third-party) dependencies. Currently, we have no third-party dependencies at all, and we have to stay on the lowest API level of the framework.

Some examples:

  • We are forced to stay on the lowest API level of the framework (.NET Standard). The reasoning behind this is that a new platform could one day arrive that only supports that very low API level.
  • We have implemented our own components for (de)serializing JSON and are in the process of doing the same for JWT, even though both are available at a higher API level of the framework.
  • We have implemented a wrapper around the HTTP framework of the standard library, because we don't want to take a dependency on the HTTP implementation of the standard library.
  • All of the code for mapping to/from XML is written "by hand", again for the same reason.

I feel we are taking it too far. I'm wondering how to deal with this, since I think it greatly impacts our velocity.

robinwit
  • Is there a justification for this (e.g., external requirement) or is it being done out of ignorance? – Blrfl Apr 08 '19 at 11:22
  • No real justification except what I mentioned in the first bullet point and the general "dependencies are bad", which is obviously true (to a certain degree). – robinwit Apr 08 '19 at 11:29
  • Do an experiment with some small part of the codebase: create an isolation layer that doesn't try to be a generic library, but defines an abstract interface that models your needs; then put both your own implementation and a 3rd party dependency behind it, and compare how the two versions work/perform. Weigh up the pros and cons, assess how easy (or how hard) it would be to swap implementations, then make a decision. In short, test things out in a relatively low-risk way, see what happens, then decide. (A rough sketch of this idea follows the comments below.) – Filip Milovanović Apr 08 '19 at 11:57
  • Related to the old API levels: https://softwareengineering.stackexchange.com/q/364259/20756 – Blrfl Apr 08 '19 at 12:27
  • "Currently we have no third-party dependencies" – this always makes me laugh when people claim it. Of course you do. You've not written your own compiler, IDE, or implementation of any standard libraries. You've not written any of the shared object libs that you use indirectly (or directly). When you realise how much 3rd party software and how many libraries you depend on, you can drop the "dependencies are bad" idea and just enjoy not re-inventing the wheel. I would just flag the dependencies that you have, and then ask why they're acceptable but JSON parsing isn't. – UKMonkey Apr 08 '19 at 13:00
  • @UKMonkey: allow me to rephrase: we don't link with any third party libraries. :-p – robinwit Apr 08 '19 at 16:11
  • @Bertus I thought you tagged this with .net? Unless you're working for Microsoft, that would make it 3rd party. It's just the logical conclusion of "we don't want to use 3rd party stuff"... and I would happily point this out to someone to highlight how daft the "don't use 3rd party libs" rule is. – UKMonkey Apr 08 '19 at 16:13
  • I think it's great to make your own APIs whenever possible. Today we've really gotten lazy, with a mindset of cobbling and hacking things together to make stuff work, like running Linux and JavaScript on ARM to get a robotic something or other working. I'm not saying it's wrong; in time-constrained and other areas that makes sense. But it's nice to do stuff yourself, because you can make it the way you want it. – marshal craft Apr 08 '19 at 16:31
  • That said, there are the alternative drawbacks, like never finishing projects. But it does promote software jobs and employment :) – marshal craft Apr 08 '19 at 17:13
  • I read the title and thought this was [Parenting](https://parenting.stackexchange.com/). – user1717828 Apr 09 '19 at 16:32
  • You are right that you're wasting effort by re-inventing commodity software. You are wrong in that this is nowhere even close to "avoiding all dependencies". The Excel team at Microsoft once wrote their own C compiler to avoid taking a dependency on the C team *at Microsoft*. You are taking enormous dependencies on operating systems, high-level frameworks, and so on. – Eric Lippert Apr 09 '19 at 16:49
  • I'm afraid I don't see the rationale behind this. .NET, in all versions, has very good support for resolving "DLL hell". So why not bundle your dependencies with your library? Even if they are 3rd party, there aren't going to be any conflicts with whatever your clients use. – Vilx- Apr 09 '19 at 20:59
  • Alternatively, if they are open-source (and most things are), you can simply take that source code and include it in your own library. Not even a separate DLL then. And it saves a lot of effort. – Vilx- Apr 09 '19 at 21:00
  • Hmmm. Let's check the official documentation for Microsoft's existing [JavaScriptSerializer](https://docs.microsoft.com/en-us/dotnet/api/system.web.script.serialization.javascriptserializer): "[Json.NET](https://www.newtonsoft.com/json) should be used for serialization and deserialization." – Brian Apr 09 '19 at 21:23
  • YAGNI applies equally to independence from third party libraries. The degree to which "dependencies are bad" is infinitesimal. – StackOverthrow Apr 09 '19 at 22:06
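
A minimal C# sketch of the isolation-layer experiment described in the comment by Filip Milovanović above. The interface and type names are invented for illustration, and the third-party implementation assumes the Json.NET package is referenced; consumers depend only on the interface, so the in-house and third-party implementations can be swapped and compared.

```csharp
using System;
using Newtonsoft.Json; // third-party candidate (Json.NET)

// Models only what this codebase needs; deliberately not a general-purpose JSON API.
public interface IJsonSerializer
{
    string Serialize<T>(T value);
    T Deserialize<T>(string json);
}

// The existing in-house component, hidden behind the same interface.
public sealed class HandRolledJsonSerializer : IJsonSerializer
{
    public string Serialize<T>(T value) =>
        throw new NotImplementedException("Wire up the existing in-house serializer here.");

    public T Deserialize<T>(string json) =>
        throw new NotImplementedException("Wire up the existing in-house serializer here.");
}

// The third-party candidate behind the same interface.
public sealed class JsonNetSerializer : IJsonSerializer
{
    public string Serialize<T>(T value) => JsonConvert.SerializeObject(value);
    public T Deserialize<T>(string json) => JsonConvert.DeserializeObject<T>(json);
}
```

Swapping implementations then becomes a one-line change wherever the serializer is constructed, which keeps the comparison between the two approaches cheap and low-risk.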

5 Answers

94

... We are forced to stay on the lowest API level of the framework (.NET Standard) …

This, to me, highlights that not only are you potentially restricting yourselves too much, but you may also be heading for a nasty fall with your approach.

.NET Standard is not, and never will be, "the lowest API level of the framework". The most restrictive set of APIs for .NET is achieved by creating a portable class library that targets Windows Phone and Silverlight.

Depending on which version of .NET Standard you are targeting, you can end up with a very rich set of APIs that are compatible with .NET Framework, .NET Core, Mono, and Xamarin. And there are many third-party libraries that are .NET Standard compatible that will therefore work on all these platforms.

Then there is .NET Standard 2.1, likely to be released in the Autumn of 2019. It will be supported by .NET Core, Mono and Xamarin. It will not be supported by any version of the .NET Framework, at least for the foreseeable future, and quite possibly ever. So in the near future, far from being "the lowest API level of the framework", .NET Standard will supersede the framework and have APIs that aren't supported by the latter.

So be very careful with "The reasoning behind this is that a new platform could one day arrive that only supports that very low API level" as it's quite likely that new platforms will in fact support a higher level API than the old framework does.

Then there's the issue of third-party libraries. JSON.NET, for example, is compatible with .NET Standard. Any library compatible with .NET Standard is guaranteed - API-wise - to work with all .NET implementations that are compatible with that version of .NET Standard. So you achieve no additional compatibility by not using it and creating your own JSON library. You simply create more work for yourselves and incur unnecessary costs for your company.
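
As a small, hedged illustration (the type and values here are invented; it assumes a project targeting .NET Standard with a reference to the Newtonsoft.Json package), the library already handles the round-trip that would otherwise be written by hand:

```csharp
using System;
using Newtonsoft.Json;

public class PartnerMessage
{
    public string PartnerId { get; set; }
    public DateTime CreatedUtc { get; set; }
}

public static class JsonRoundTripDemo
{
    public static void Main()
    {
        var message = new PartnerMessage { PartnerId = "partner-42", CreatedUtc = DateTime.UtcNow };

        // Serialize and deserialize with Json.NET instead of a hand-rolled component.
        string json = JsonConvert.SerializeObject(message);
        PartnerMessage roundTripped = JsonConvert.DeserializeObject<PartnerMessage>(json);

        Console.WriteLine(json);
        Console.WriteLine(roundTripped.PartnerId);
    }
}
```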

So yes, you definitely are taking this too far in my view.

Peter Mortensen
David Arno
  • "You simply create more work for yourselves and incur unnecessary costs for your company." – and security liabilities. Does your JSON encoder crash with a stack overflow if you give it a recursive object? Does your parser handle escaped characters correctly? Does it reject unescaped characters that it should? How about unpaired surrogate characters? Does it overflow when the JSON encodes a number larger than 2^64? Or is it just a tiny `eval` wrapper with some sanity checks that are easily bypassed? – John Dvorak Apr 09 '19 at 15:19
  • "The most restrictive set of APIs for .NET is achieved by creating a portable class library that targets Windows Phone and Silverlight." I'll go out on a limb and claim that there are at least some APIs in that subset that are not supported by all possible implementations that ever existed (and nobody cares about Windows Phone or Silverlight any more, not even Microsoft). Using .NET Standard 2.0 as a target for a modern framework seems very prudent and not particularly limiting. Updating to 2.1 is a different story, but there's no indication that they'd do so. – Voo Apr 09 '19 at 15:43
  • Aside from future platforms probably supporting more rather than less, developing for all the things that *might* happen is incredibly expensive (and you're likely to miss something anyway). Instead, developing without reinventing the wheel **will** save more time than adapting to some new situation, when that is actually needed, will cost. – Jasper Apr 15 '19 at 00:10
51

We are forced to stay on the lowest API level of the framework (.NET Standard). The reasoning behind this is that a new platform could one day arrive that only supports that very low API level.

The reasoning here is rather backwards. Older, lower API levels are more likely to become obsolete and unsupported than newer ones. While I agree that staying a comfortable way behind the "cutting edge" is sensible to ensure a reasonable level of compatibility in the scenario you mention, never moving forward is beyond extreme.

We have implemented our own components for (de)serializing JSON, and are in the process of doing the same for JWT. Both are available at a higher API level of the framework. We have implemented a wrapper around the HTTP framework of the standard library because we don't want to take a dependency on the HTTP implementation of the standard library. All of the code for mapping to/from XML is written "by hand", again for the same reason.

This is madness. Even if you don't want to use standard library functions for whatever reason, open source libraries exist with commercially compatible licenses that do all of the above. They've already been written, extensively tested from a functionality, security and API design point of view, and used extensively in many other projects.
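
For example, rather than writing a JWT component from scratch, a widely used open source package such as System.IdentityModel.Tokens.Jwt can issue a signed token in a few lines. This is only a hedged sketch: the key, issuer, audience, and claim values are made up for illustration, and a real key must be generated and stored securely.

```csharp
using System;
using System.IdentityModel.Tokens.Jwt;   // NuGet package: System.IdentityModel.Tokens.Jwt
using System.Security.Claims;
using System.Text;
using Microsoft.IdentityModel.Tokens;

public static class JwtDemo
{
    public static void Main()
    {
        // Illustrative key only; HMAC-SHA256 needs at least a 256-bit (32-byte) key.
        var key = new SymmetricSecurityKey(Encoding.UTF8.GetBytes("this-is-a-demo-key-of-at-least-32-bytes!"));
        var credentials = new SigningCredentials(key, SecurityAlgorithms.HmacSha256);

        var token = new JwtSecurityToken(
            issuer: "our-platform",
            audience: "partner-42",
            claims: new[] { new Claim("sub", "partner-42") },
            expires: DateTime.UtcNow.AddHours(1),
            signingCredentials: credentials);

        // Well-tested encoding and signing instead of a hand-rolled JWT implementation.
        string encoded = new JwtSecurityTokenHandler().WriteToken(token);
        Console.WriteLine(encoded);
    }
}
```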

If the worst happens and that project goes away or stops being maintained, then you've got the code to build the library anyway, and you can assign someone to maintain it. And you're still likely in a much better position than if you'd rolled your own, since in reality you'll have more thoroughly tested, cleaner, more maintainable code to look after.

In the much more likely scenario that the project is maintained, and bugs or exploits are found in those libraries, you'll know about them so you can do something about it - such as upgrading to a newer version free of charge, or patching your version with the fix if you've taken a copy.

berry120
  • And even if you can't, switching to another library is still easier and better than rolling your own. – Lightness Races in Orbit Apr 09 '19 at 23:45
  • Excellent point that lower level stuff dies faster. That's the whole point of establishing abstractions. – Lightness Races in Orbit Apr 09 '19 at 23:45
  • "Older, lower API levels are more likely to become obsolete and unsupported than newer ones". Huh? The NetSTandards are built on top of each other as far as I know (meaning 2.0 is 1.3 + X). Also the Standards are simply that.. standards, not implementations. It makes no sense to talk about a standard becoming unsupported, at most specific implementations of that standard might be in the future (but see the earlier point why that's also not a concern). If your library doesn't need anything outside of NetStandard 1.3 there's absolutely no reason to change it to 2.0 – Voo Apr 10 '19 at 10:11
11

On the whole these things are good for your customers. Even a popular open source library might be impossible for them to use for some reason.

For example, they may have signed a contract with their customers promising not to use open source products.

However, as you point out, these features are not without cost.

  • Time to market
  • Size of package
  • Performance

I would raise these downsides and talk with customers to find out if they really need the uber levels of compatibility you are offering.

If, for example, all the customers already use Json.NET, then using it in your product rather than your own deserialisation code reduces the product's size and improves it.

If you introduce a second version of your product, one which uses third-party libraries, alongside the 'compatible' one, you could judge the uptake of both. Will customers use the third-party version to get the latest features a bit earlier, or stick with the 'compatible' version?

Peter Mortensen
Ewan
  • Yes, I obviously agree, and I would add "security" to your list. There's some potential that you might introduce a vulnerability in your code, especially with things like JSON/JWT, compared to well-tested frameworks and definitely the standard library. – robinwit Apr 08 '19 at 11:28
  • Yes, it's hard to make the list because obviously things like security and performance could go both ways. But there is an obvious conflict of interest between finishing features and ensuring internal components are fully featured/understood. – Ewan Apr 08 '19 at 11:38
  • "They may have signed a contract with their customers promising not to use open source products" – they're using .NET Standard, which is open source. It's a bad idea to sign that contract when you're basing your entire product on an open source framework. – Stephen Apr 09 '19 at 01:02
  • And still people do it – Ewan Apr 09 '19 at 01:02
7

The short answer is that you should start introducing third-party dependencies. During your next stand-up meeting, tell everyone that the next week at work will be the most fun they have had in years -- they'll replace the JSON and XML components with open source or standard library solutions. Tell everyone that they have three days to replace the JSON component. Celebrate after it's done. Have a party. This is worth celebrating.
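
As one concrete illustration (the type and property names here are invented), the standard library's XmlSerializer already does the kind of to/from XML mapping that is currently written by hand:

```csharp
using System;
using System.IO;
using System.Xml.Serialization;

public class PartnerOrder
{
    public string OrderId { get; set; }
    public decimal Amount { get; set; }
}

public static class XmlDemo
{
    public static void Main()
    {
        var serializer = new XmlSerializer(typeof(PartnerOrder));
        var order = new PartnerOrder { OrderId = "o-123", Amount = 99.95m };

        // Map to XML and back using the standard library instead of hand-written mapping code.
        using (var writer = new StringWriter())
        {
            serializer.Serialize(writer, order);
            string xml = writer.ToString();

            using (var reader = new StringReader(xml))
            {
                var roundTripped = (PartnerOrder)serializer.Deserialize(reader);
                Console.WriteLine(roundTripped.OrderId);
            }
        }
    }
}
```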

  • This may be tongue in cheek, but it's not unrealistic. I joined a company where a "senior" dev (senior by education only) had tasked a junior dev with writing a state machine library. It had five developer-months in it and was still buggy, so I ripped it out and replaced it with a turnkey solution in a matter of a couple of days. – StackOverthrow Apr 09 '19 at 22:11
0

Basically it all comes down to effort vs. risk.

By adding a dependency, updating your framework, or using a higher-level API, you lower your effort but take on risk. So I would suggest doing a SWOT analysis.

  • Strengths: Less effort, because you don't have to code it yourself.
  • Weaknesses: It's not as custom designed for your special needs as a handcrafted solution.
  • Opportunities: Time to market is shorter. You might profit from external developments.
  • Threats: You might upset customers with additional dependencies.

As you can see, the additional effort to develop a handcrafted solution is an investment in lowering your threats. Now you can make a strategic decision.