
A friend of mine without programming knowledge asked me this question and I found it interesting.

I think it is not possible because it would require a really advanced artificial intelligence capable of analyzing the text of a problem, thinking about a solution, and programming it. Just thinking about a machine being able to program a simple calculator seems pretty advanced to me.

But maybe I'm wrong, and I would like to know what you think about it, and whether you are aware of any articles or research on the subject. Does this already exist, or is it possible to select a specification and get the machine to program itself to that "spec"?

2023 EDIT: GPT-4 ;-)

grandouassou
  • Define programming. I could build a program that would make other programs. But would it really learn? – Pieter B Nov 09 '14 at 23:23
  • Yes the question is not about code generation but about real programming as we do as developers. – grandouassou Nov 09 '14 at 23:31
  • It depends on what the program is. A program with a lot of procedural business logic would be much more difficult (and much less feasible) than something functional and purely math-based. (That's an intuition, anyway, but I don't have any way to back that up, unfortunately.) –  Nov 10 '14 at 05:14
  • @florian: We ourselves are machines who have learnt how to do programming (assuming aliens/god created us :-) ). Of course we haven't yet acquired the ability to program DNA sequences etc, so if you create a machine that has to learn how to program eventually, you have to program it to "evolve" and eventually learn how it itself was programmed. – Nav Nov 10 '14 at 12:28
  • @maple_shaft: I made the question more objective by bringing it in line with the existing answers, and was wondering if it can be reopened in its current form. – Tom Au Nov 10 '14 at 16:42

5 Answers


Joel actually answered this one a few years back. The actual meaning of "teach a machine how to program by itself" is "teach a machine how to take a spec and create a program that corresponds to that spec." And with that in mind:

The problem, here, is very fundamental. In order to mechanically prove that a program corresponds to some spec, the spec itself needs to be extremely detailed. In fact the spec has to define everything about the program, otherwise, nothing can be proven automatically and mechanically. Now, if the spec does define everything about how the program is going to behave, then, lo and behold, it contains all the information necessary to generate the program! And now certain geeks go off to a very dark place where they start thinking about automatically compiling specs into programs, and they start to think that they’ve just invented a way to program computers without programming.

Now, this is the software engineering equivalent of a perpetual motion machine. It’s one of those things that crackpots keep trying to do, no matter how much you tell them it could never work. If the spec defines precisely what a program will do, with enough detail that it can be used to generate the program itself, this just begs the question: how do you write the spec? Such a complete spec is just as hard to write as the underlying computer program, because just as many details have to be answered by spec writer as the programmer. To use terminology from information theory: the spec needs just as many bits of Shannon entropy as the computer program itself would have. Each bit of entropy is a decision taken by the spec-writer or the programmer.

So, the bottom line is that if there really were a mechanical way to prove things about the correctness of a program, all you’d be able to prove is whether that program is identical to some other program that must contain the same amount of entropy as the first program, otherwise some of the behaviors are going to be undefined, and thus unproven. So now the spec writing is just as hard as writing a program, and all you’ve done is moved one problem from over here to over there, and accomplished nothing whatsoever.

The only way to get around this would be to produce an actual sapient computer with enough intuition to do all the filling-in-of-blanks that you and I do automatically, all the time, when producing software... in which case you'd end up with a computer that programs itself about as well as a human developer. ;)
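(To make the entropy argument concrete, here is a minimal sketch of my own in Python, not taken from Joel's article: even a "complete" spec for sorting, precise enough to be checked mechanically, is itself executable code, with every detail spelled out.)

```python
from collections import Counter

def meets_sorting_spec(inp, out):
    """A 'complete' sorting spec written as an executable check:
    the output must be ordered and a permutation of the input.
    Every detail (ordering, handling of duplicates) had to be
    decided here, exactly as it would be in an implementation."""
    ordered = all(out[i] <= out[i + 1] for i in range(len(out) - 1))
    permutation = Counter(inp) == Counter(out)
    return ordered and permutation

print(meets_sorting_spec([3, 1, 2], [1, 2, 3]))  # True
print(meets_sorting_spec([3, 1, 2], [1, 2, 2]))  # False
```

Leave either property out and some behavior becomes undefined; spell both out and the spec writer has made the same decisions a programmer would have.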

Mason Wheeler
  • I can write a complete spec for a sorting algorithm much more easily than I can come up with insertion sort, quick sort, or bucket sort. Yet you claim it's easy to transform the first into the second. – raptortech97 Nov 09 '14 at 23:39
  • Joel... Enough said! ;-) I understand that in order to do that, we would need a spec in a given language. But I am not totally convinced about the argument that the spec should be very detailed. We, as developers, are capable of developing a program without having a very detailed spec. Can't we develop an artificial intelligence that is able to take some "random" decision about the design of a program? – grandouassou Nov 09 '14 at 23:44
  • @florian So you want the program to do the spec-interpreting task human programmers perform? Then it becomes the age-old problem of "strong AI", which numerous intelligent people have researched for decades without any progress to show for it. There's a heated philosophical debate whether AI is even metaphysically possible, much less practically possible in the far future, and in my experience nobody but snakeoil salesmen predicts strong AI in the near future. –  Nov 09 '14 at 23:47
  • I don't really want it; I was asking myself (as my friend asked me) if it was possible. I perfectly understand that AI does not really exist and that it just comes down to some "random" programmed choices. – grandouassou Nov 09 '14 at 23:52
  • @florian Don't get hung up on "so you want to". If your question is whether a program can take a vague natural-language spec, make sense of it, ask for clarification and then create a program, then "that'd be strong AI, and as you know strong AI isn't a thing". If your question is different, please clarify it. –  Nov 10 '14 at 00:03
  • No it is ok you perfectly summed up what my question was about. – grandouassou Nov 10 '14 at 00:09
  • Great answer. Remember that the program must also include "non-functional" requirements: technical requirements that don't come from the "business function" of the program -- e.g. logging. That's good news and bad news -- it means that the entropy of the program will always be larger than the entropy of the spec. That sounds bad, but the good news on the flip side is that the spec language can be simpler than the programming language. We usually start evaluating effort based on both functional and non-functional requirements, but a "complete" functional spec effort can be smaller than that. – sea-rob Nov 10 '14 at 00:37
  • @RobY: Why isn't logging a business function? Knowing what your program's been doing is an important part of the business, one way or another, in a lot of problem domains. At my last job we had a lot of important functionality dedicated to producing a reliable *audit trail* of all transactions, and that's basically the same thing as logging. – Mason Wheeler Nov 10 '14 at 00:40
  • wikipedia talks about non-functional requirements here: http://en.wikipedia.org/wiki/Non-functional_requirement Essentially, they are aspects of the architecture that support the "business" function, but aren't directly a part of the specification. The wikipedia page has a long list of "non-functional" requirements, which includes auditing. That's an old idea, though -- that term has been around forever. – sea-rob Nov 10 '14 at 01:01
  • @raptortech97: No, you cannot. Not 'complete' in the meaning of this question. In order for your spec to be so complete as to be capable of being mechanically transformed into an executable program, it essentially has to be written in a programming language. Otherwise, your spec will have undefined behavior, or you are just writing code in an MSWord document. – whatsisname Nov 10 '14 at 05:35
  • @florian: it's not just "random" design choices. The answer to your question basically boils down to: "is it hypothetically possible to make a machine with a "mind". – whatsisname Nov 10 '14 at 05:37

Sure, we do this all the time (for extremely limited subsets of problems). It is fairly trivial to imagine taking another step or two and tying something like Siri into the input of these code generators (or something like Wolfram Alpha) which in turn writes code and solves your problem. I would expect that something already exists somewhere to do the most basic of things.
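A toy illustration of that kind of extremely limited code generation (a hypothetical sketch; the spec format here is invented for the example):

```python
def generate_function(spec):
    """Emit Python source from a tiny declarative spec and load it.
    The spec format (name, args, expr) is made up for this sketch."""
    src = "def {name}({args}):\n    return {expr}\n".format(
        name=spec["name"], args=", ".join(spec["args"]), expr=spec["expr"])
    namespace = {}
    exec(src, namespace)  # "compile" the spec into a live function
    return src, namespace[spec["name"]]

source, add = generate_function({"name": "add", "args": ["a", "b"],
                                 "expr": "a + b"})
print(add(2, 3))  # 5
```

The hard part was never this step: all the entropy is in deciding what `expr` should say.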

The problem with writing complex software for business isn't making a program to write the code - it's writing a program to get the requirements.

Telastyn
  • Thanks for the links. Even though Yacc & Xamarin are purely deterministic code generators. They don't create stuff from scratch. – grandouassou Nov 09 '14 at 23:47
  • @florian - nothing creates stuff from scratch. There's always some input, they're just more picky than most. – Telastyn Nov 09 '14 at 23:53
  • @Telastyn: comparing the input/output for a parser generator to the input/output for a human mind as being "more picky" is disingenuous at best. – whatsisname Nov 10 '14 at 05:38

I think @Mason Wheeler's answer holds the key idea. It goes like this:

The Shannon entropy of Tic-tac-toe is really small, so we call tic-tac-toe a "solved" or "deterministic" game. It's not really interesting once you get past grade school. Checkers has a higher entropy, if you consider the entropy of all the possible games you can play, but checkers, too, is a "solved" game: with perfect play by both sides it ends in a draw. Chess has a much, much higher entropy, but no human has beaten the best computer players since 2006. So, in a way, computers have mastered chess in a way that humans cannot. IBM's Watson analyzed Wikipedia, and then played Jeopardy against human players, and beat them soundly.
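(To put a number on how small tic-tac-toe is: a brute-force sketch in Python, my own illustration, can enumerate every possible game almost instantly.)

```python
WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def count_games(board=None, player="X"):
    """Count every distinct tic-tac-toe game (play stops at a win)."""
    if board is None:
        board = [None] * 9
    if winner(board) is not None or all(cell is not None for cell in board):
        return 1
    total = 0
    for i in range(9):
        if board[i] is None:
            board[i] = player
            total += count_games(board, "O" if player == "X" else "X")
            board[i] = None
    return total

print(count_games())  # 255168 complete games: tiny by game-tree standards
```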

What's next? What's the entropy of a novel, or Shakespeare's sonnets?

Similarly, in the programming space, what will probably emerge is an increasing set of competencies. Prolog addressed a set of computer problems where you set up a problem and the computer resolved the answer. Someone will probably find classes of simple programming problems that a computer will be able to satisfy, and so on. Then someone will build on that to produce "on-demand customization" within some problem space. And so it goes.

I think the question turns into, how long does it take an AI to master a given amount of entropy... and how many computing resources are required? I think it's unimaginable that a computer could not master the entropy mastered by the best human brain -- there's nothing magic about brains -- but the question is, how many cores do you need, and how many centuries will it take to get there?

But... will a computer ever be able to do my job? Inconceivable!

sea-rob
  • I think it's fair to say though that Jeopardy is a bit of a special case: it boils down to recalling and associating facts. Take even the best specialists, and there will be nuggets of even public knowledge that they are unaware of, or unable to recall under pressure. On the other hand, to a computer which is capable of analyzing an encyclopedia (such as Wikipedia, Encyclopedia Britannica, or any other), no fact stated in that encyclopedia is more (or less) exotic than any other. – user Nov 10 '14 at 10:17

This is hard to answer because, just like with artificial intelligence, once we have accomplished this it will be because we will have written a program that does it. And critics will say, "well, this machine isn't really programming itself! it just follows exactly the program you gave it!"

Well yes. Whatever we will ever accomplish with computers, we will do by giving it some program and it will execute it. If that's an argument against, then we can't accomplish anything. And yet, at some point, people thought a chess playing computer would be obviously intelligent. Now they can, and we know exactly how, and we don't think that's intelligent. Submarines still can't swim.

So -- consider a few examples.

For decades we have had parser generators. You give them a description of a language, it is processed, and the result is code for a parser for that language. We know exactly how it's done, but isn't that a computer programming itself?
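The idea can be sketched in miniature (a hypothetical toy, nowhere near a real parser generator like Yacc): given a grammar as plain data, produce a recognizer function for that language.

```python
def make_recognizer(grammar, start):
    """Build a recursive-descent recognizer from a grammar description.
    grammar maps a rule name to a list of alternatives; each alternative
    is a sequence of rule names or literal terminals. (Toy limitations:
    no left recursion, alternatives tried greedily in order.)"""
    def match(rule, text, pos):
        if rule not in grammar:  # terminal: match the literal text
            return pos + len(rule) if text.startswith(rule, pos) else None
        for alternative in grammar[rule]:
            p = pos
            for symbol in alternative:
                p = match(symbol, text, p)
                if p is None:
                    break
            else:
                return p
        return None
    return lambda text: match(start, text, 0) == len(text)

# Balanced parentheses: S -> "(" S ")" S | empty
balanced = make_recognizer({"S": [["(", "S", ")", "S"], []]}, "S")
print(balanced("(()())"))  # True
print(balanced("(()"))     # False
```

The grammar is input data, not code, yet the result behaves like a program nobody wrote by hand.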

Second -- editors that tell you you've made a mistake (syntax error, non existing variable, etc). It doesn't program anything itself, but it can tell you that you did something wrong. It's very much on the surface only.

Languages in which you can just click and drag UI controls, and the code that will actually make them work is generated automatically.

JIT compilers. Software that can recognize hotspots in the currently running software and replace some of that by highly optimized compiled code, effectively optimizing itself while it runs. I think this is an example of what can seem to be a machine programming itself, until you know exactly how it happens, and then it turns out to be just doing what the programmer told it to, as always.
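A deliberately crude sketch of that hotspot idea (my own illustration, not how a real JIT works internally): interpret an expression string on every call, and once it becomes "hot", compile it once and swap in the fast path.

```python
class HotspotExpr:
    """Evaluate an arithmetic expression given as a string; once the
    call count crosses a threshold, compile it and reuse the bytecode."""
    THRESHOLD = 100  # arbitrary cutoff for this sketch

    def __init__(self, expr):
        self.expr = expr
        self.calls = 0
        self.compiled = None

    def __call__(self, **env):
        self.calls += 1
        if self.compiled is None and self.calls >= self.THRESHOLD:
            self.compiled = compile(self.expr, "<hot>", "eval")
        if self.compiled is not None:
            return eval(self.compiled, {}, env)  # fast path
        return eval(self.expr, {}, env)  # slow path: re-parses every call

f = HotspotExpr("a * b + 1")
print(f(a=2, b=3))  # 7, via the slow path
```

The observable behavior never changes; only the machinery underneath does, which is exactly why it looks like self-modification until you see how it happens.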

General game playing. This is an interesting field of research, in which researchers write programs that can read descriptions of the rules of games, that the programs then play against each other. So instead of a tic-tac-toe program or a chess program, these are programs that read the rules of tic-tac-toe or chess or some new game made up on the spot, and can then play them. The program isn't programming itself, but it does play chess without the rules of chess having been hardcoded. There was a time when this would clearly be considered the computer teaching itself something.
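In miniature, that separation looks like this (a toy sketch, nothing like a real general game playing system with its formal rule language): the player knows only a generic interface, and the rules of the particular game arrive as data.

```python
import random

def play(game, state):
    """A generic player: it knows nothing about any particular game,
    only the interface that every rules description provides."""
    while not game["is_over"](state):
        state = game["apply"](state, random.choice(game["moves"](state)))
    return state

# Rules of a one-pile Nim variant, supplied as data rather than hardcoded
nim = {
    "is_over": lambda s: s == 0,
    "moves": lambda s: list(range(1, min(3, s) + 1)),  # take 1-3 stones
    "apply": lambda s, move: s - move,
}
print(play(nim, 10))  # always reaches 0, whichever moves are chosen
```

Swap in a different rules dictionary and the same player plays a different game, with nothing about that game hardcoded.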

We've taken lots of small steps in the general direction.

But I can't think of any programs that rewrite themselves based on the results of earlier runs, or that can recognize obsolete or inefficient routines in their own code. I think that one day we will have that, and we'll consider it nothing special at all, as it'll be just some feature of the latest compiler...

RemcoGerlich

Not currently, and not in the foreseeable future, because you need all the awesome complexity of the human brain to create a program. And even then those brains need to be highly trained to do it properly, and even then not all of them are capable of the task, though one could argue that with sufficient time you could train anybody to program.

I took it from the way you phrased your question that you are not talking about simple emulated tasks.

You asked for articles; this Science Blogs article on Developing Intelligence addresses that question in many ways.