
After 50 years of software engineering, why are computer systems still insecure? I don't get it.

Two questions: (i) What's so hard about just denying or restricting networked access to bad actors who lack passwords? It's not like these bad actors arrive with crowbars and dynamite; they only have bits and bytes, right? (ii) Once a bad actor has achieved networked access, why haven't operating-system kernels been re-engineered to make privilege escalation unfeasible?

I am not looking for a book-length answer but merely for a missing concept. If you can see the flaw in my thinking and can shed a little light on the flaw, that will be answer enough.

Is there some specific reason top scholars have not yet been able to solve the problem? Is there a sound reason we still have, say, bootstrapped compilers and unauditable microprocessor designs, despite the long-known security risks?

Is there some central observation, answerable at StackExchange length, that ties all this together? Why are computer systems still insecure?

Update: Commenters have added some interesting links, especially "Is Ken Thompson's compiler hack still a threat?"

thb
  • Okay, I've got a closing-vote for *too broad.* I saw that coming, so could you help me to narrow the question? This question has been bothering me for a long time, I think it's important, and no concise answer seems to be available on the Internet; so kindly help me to fix the question so that the question can be answered. Thanks. – thb Jan 07 '17 at 14:25
  • This reads more like a long rant than a focused question. Without knowing your intent, it's hard to edit the question into shape for you. I'm also not sure whether this site is the ideal venue for security-related questions. One concept you are missing is the fundamental security asymmetry: an attacker has to get lucky only once, while a defender has to succeed every single time. It's not like the defenders know all the bugs in their systems and decide to ignore them. Complete prevention of all bugs is impossible. – amon Jan 07 '17 at 14:35
  • @amon: Okay. Let me work on the question. "Rant" implies length, so maybe I can delete most of the question. Whether that will fix it, I don't know, but I'll try. – thb Jan 07 '17 at 14:38
  • @amon: Regarding your comment, that was precisely the point I was getting at. Having written and read lots of code, I am well aware that complete prevention of all bugs is impossible. What I don't understand is why the system cannot be engineered (i) to fundamentally isolate such bugs from the network interface and (ii) to render privilege escalation impracticable. Consider: recent programming languages like Rust make it impossible to inadvertently commit certain memory errors (see the sketch after these comments); why can't recent computer engineering do the same for certain *security* errors? – thb Jan 07 '17 at 14:49
  • @thb Physical isolation ("air-gap") is a legitimate security strategy, but there are many situations where it cannot be achieved. On the other hand, information security exists on a continuum that reaches as far as human intelligence ("HUMINT") in the context of intelligence gathering (spying and reconnaissance): there is no leak-proof way of preventing information leaks as long as human actors are involved. Even "for your eyes only" systems can leak information. – rwong Jan 07 '17 at 15:12
  • But questions about bug-proofing (or software defect reduction) have been asked and answered before on this site; if you have a focused question on software defect prevention that hasn't been asked/answered before, perhaps you can open a new question. – rwong Jan 07 '17 at 15:13
  • @rwong: Good comments. No, my question was not about bug-proofing. At least, I don't *think* that that's what it was about. For years, I have had this nagging feeling that there is something fundamentally wrong with kernel and/or computer architecture. Underneath, my question is not about preventing the bugs, but about why bugs that are not prevented cannot be security-isolated *by system design* without an air gap. But my underneath-question is too big a question for StackExchange, isn't it? – thb Jan 07 '17 at 15:28
  • And also the interchangeability between brute force and a wrench: https://xkcd.com/538/ – rwong Jan 07 '17 at 15:36
  • See http://softwareengineering.stackexchange.com/q/339652/21172 (the question immediately after yours) for a hint. – kdgregory Jan 07 '17 at 16:16
  • And the interchangeability between (brute-force, wrench) can be found in many different places. Instead of seeking privilege escalation at the OS level (which could have been fixed with formally-proven secure OS development), people just seek easier, softer targets, such as hacking the visibility of people's online profiles on various social networks. After you secure the more "securable" areas, people just switch targets. – rwong Jan 07 '17 at 17:26
  • related: [Is Ken Thompson's compiler hack still a threat?](http://softwareengineering.stackexchange.com/questions/184874/is-ken-thompsons-compiler-hack-still-a-threat) – gnat Jan 07 '17 at 18:14
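
For concreteness, here is a minimal Rust sketch of the kind of guarantee that comment alludes to (the `untrusted_index` value merely stands in for attacker-controlled input): bounds checking turns a bad index into a contained error instead of a silent read of adjacent memory.

```rust
fn main() {
    let buffer = vec![10u8, 20, 30, 40];

    // Pretend this index arrived over the network from an attacker.
    let untrusted_index: usize = 7;

    // `get` is bounds-checked: it returns None instead of reading past the
    // end of the allocation, so the bug cannot leak neighbouring memory the
    // way an unchecked read in C could.
    match buffer.get(untrusted_index) {
        Some(byte) => println!("byte at {untrusted_index}: {byte}"),
        None => println!("index {untrusted_index} rejected: out of bounds"),
    }

    // Direct indexing (`buffer[untrusted_index]`) would panic here, aborting
    // the operation rather than returning whatever sits after the buffer.
}
```

The open question in the comments is why comparably mechanical, checkable guarantees are so much harder to state for "security" as a whole; the answers below give some of the reasons.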

4 Answers


At its core, the problem is that software is complex. For any site, you have all of the JavaScript to make the site run. You have the server to handle requests. You have the cache to handle in-flight data. You have the CDN to store all of the content. You have some database to store all of the data. You have backup servers where the data goes. You have logging servers where info can end up. You have all of the libraries written by others, but used by all of these parts. You have the web servers, written by others. You have the operating system, and all of the things installed there.

All an attacker needs to do is find one mistake in any of this code, and the jig is up. Programmers are human, so invariably, given a million opportunities to fuck up, we will.
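
To see how little it takes, here is a minimal Rust sketch of one such mistake (the `build_query_*` helpers and the table name are hypothetical, and no real database is involved): a single string-concatenation slip turns user input into part of the query itself.

```rust
// One careless line is enough. The naive helper splices untrusted input
// straight into the query text; the parameterized version keeps the data
// outside the statement, so it can never change the query's structure.

fn build_query_naive(username: &str) -> String {
    // The mistake: user input concatenated into SQL.
    format!("SELECT * FROM users WHERE name = '{}'", username)
}

fn build_query_parameterized(username: &str) -> (&'static str, Vec<String>) {
    // The query text is fixed; the input travels separately as a bound value.
    ("SELECT * FROM users WHERE name = ?1", vec![username.to_string()])
}

fn main() {
    let attacker_input = "x' OR '1'='1";

    // The attacker's string rewrites the query so it matches every row.
    println!("{}", build_query_naive(attacker_input));
    // -> SELECT * FROM users WHERE name = 'x' OR '1'='1'

    let (sql, params) = build_query_parameterized(attacker_input);
    println!("{sql} with params {params:?}");
}
```

Multiply that one line by the cache, the CDN, the logging pipeline, and every third-party library in the stack, and the odds favor the attacker.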

But that is just the technical side. Even if all of the code is secure, users still have passwords, and they're usually bad. There's still the ability to call up tech support and ask for "your" password reset, gaining control of an account that isn't yours. You can still bribe someone who does have access (as of 2000, 80% of intrusions were made by people on the inside - vengeful programmers, bored secretaries, greedy salesguys). There's still social engineering people into believing that a fake email is a legitimate request for a password reset.

The problem isn't solved because there isn't a single problem: the problems aren't all technical, and most of them are damned hard in the general case.

Telastyn
  • +1. Useful answer. Appreciated. I had in mind something analogous to memory safety (as in Rust), which allows automated tools to quickly audit millions of lines of code. That's an *architectural* solution to the memory problem. Setting aside the human factor (tech support, password reset), I gather that we have no good architectural solution to the security problem. This intrigues me. I have never read why such an architectural solution, a solution which corrals your "million opportunities," should not exist. That's too big a question for SE, but underneath, it's what I had in mind. Thanks. – thb Jan 07 '17 at 15:39
  • I should have said, "I *had* never read why...." Your answer does, of course, give an interesting, concise explanation. – thb Jan 07 '17 at 15:46
  • @thb: maybe you simply haven't read the right articles? The buzzword you might be looking for is ["security by design"](https://en.wikipedia.org/wiki/Secure_by_design); I am sure that if you google it, you will find some resources. – Doc Brown Jan 07 '17 at 16:42
  • @DocBrown: yes, thanks. In my late 40s, I begin to find that buzzwords increasingly don't reach me. I had kept searching for "inherently secure kernel" and words like that. *Security by design.* That's probably the word I had wanted but hadn't known. – thb Jan 07 '17 at 16:57
  • @thb - you seem to have a number of assumptions that are coloring your view of this. Most security problems aren't _exploits_. They aren't bugs. They're code doing exactly what the code is supposed to do, but the data just happens to be sensitive, or the person viewing it just happens to be not who the code trusts it to be. – Telastyn Jan 07 '17 at 17:10
  • I have configured an obscure Internet-facing virtual server in a data center. The server runs Debian and presents SMTP, POP3, HTTP and SSH. The server is much too obscure to merit human-level hacking; but I assume that the U.S., Russia, China and Israel each have placed agents as Debian Developers. These agents, I assume, can simultaneously compromise all Debian servers, including mine. Debian cannot audit all its code, so it seems to me that a fundamental design problem subsists. Couldn't the system be redesigned to render an audit of most Debian code unnecessary? This is what I wonder. – thb Jan 07 '17 at 18:35
  • @thb - whaaat? I mean, most (all?) of the Debian distro is open source. It is trivial for anyone to be "agents" there. But since the code is open, it's also trivial for anyone in the world to investigate the code for vulnerabilities. Human-level hacking isn't necessary, since automated attacks will happily root your public machine. Stop wondering, do some research. – Telastyn Jan 07 '17 at 18:45
  • I think that you and I may have a misunderstanding. As it happens, I have been a Debian Developer since 2005, though I have not personally been involved with Debian's security team, which has done an immense quantity of work (tens of thousands of hours) chasing down some of the very kinds of bugs you cite. At any rate, research was done and your fine answer is valued. If there was a misunderstanding, it's probably because I posed the question suboptimally, so I appreciate your wrestling with it. Thanks. – thb Jan 07 '17 at 19:41

First, people and companies buy insecure computers and then try to manage the problem. The computers still sell, regardless of the lack of security.

Second, the people who understand the vulnerabilities are bound by confidentiality contracts that make it difficult to share their ideas with hardware and OS developers. OS and hardware developers, in turn, do not have a good understanding of the business needs.

I, for one, am a big fish, but I can't use my company email, so in some communities people do not realize that I have experience with big systems and serious problems that merit attention, and they think I am nuts. There are many professionals like me.

Third, patents and copyrights keep good ideas from being shared. See how fast China is developing and how far behind the US is falling. The patent system was touted as a means to protect the small inventor, but in the hands of big companies it has become a way to force their products forward by filing bogus claims against their competitors.

Fourth, many people who discover system failures make a living from them and will not reveal the secrets unless paid; only in the last decade or so has Pwn2Own started paying prizes.

Fifth, incremental security is profitable.

Sixth, this is a complex problem that would require a huge effort, and most of the problems were discovered only after each generation of computers and software was put in place.

Lucas

The reason is that it's hard to make computers both useful and secure. You can make them completely secure by disconnecting them from the internet and taking other measures, but then they aren't so useful. So we start adding features and capabilities. We connect our PCs to the internet. The WWW is fun but static, so we create JavaScript and allow people to do things like banking via a web browser. It's a pain to keep entering a password, so we create session IDs and store them on the client in cookies or hidden form fields. Then we let users simultaneously open multiple web sites. Uh-oh! Better make sure that other web sites can't read that session ID... and so it goes.
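
As a rough illustration of where that arms race leads (the header below is a hand-rolled sketch; real frameworks assemble it for you), the session cookie ends up carrying attributes whose whole purpose is to keep other sites and scripts away from it:

```rust
// Sketch of the Set-Cookie header a server might send after login.
// Each attribute plugs one of the holes described above:
//   HttpOnly        - page scripts (including injected ones) cannot read it
//   Secure          - it is only ever sent over HTTPS
//   SameSite=Strict - other sites cannot ride along on the user's session
fn session_cookie(session_id: &str) -> String {
    format!(
        "session={}; HttpOnly; Secure; SameSite=Strict; Path=/; Max-Age=3600",
        session_id
    )
}

fn main() {
    // In a real server the ID would be long, random, and unguessable.
    println!("Set-Cookie: {}", session_cookie("3f9a1c7d42b8"));
}
```

Every one of those attributes exists because an earlier, more convenient design turned out to be exploitable.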

kevin cline

There's another simple reason: people are lazy and not aware. "It will not hit me." That's the same attitude you take when walking on the street: it's always the others who suffer. Take email, for example. Point-to-point encryption has been available for decades, but nobody really uses it. Not too long ago I signed my mails and got them back from Mickeysoft users claiming that their Outlook refused to show them because it could not decode the signature. So I turned that off again. Meanwhile I don't seem to see such complaints any more, but sending encrypted mail between trusted parties? No way.

Another observation: security is not absolute. It's always relative. And people (as I said, they are sort of lazy) tend to take the least arduous way, which in turn means less security.