I found quite an extensive reading list on all coding-related machine learning topics.
As you can see, people have been trying to apply machine learning to coding, but always in very narrow fields, never a single machine that can handle all manner of coding or debugging.
The rest of this answer focuses on your relatively broadly scoped "debugging" machine and why such a thing has not really been attempted yet (as far as my research on the topic shows).
I redacted a lengthy part of the answer. To summarize (it's important for the next part): going by current machine learning methodology, anything a human can learn, a machine can learn as well. We are only limited by the physical realm (CPU speed, size of a machine, ...), not by any supposed limit on the applicability of the learning algorithm itself.
To your question: "What research has been done so far in applying machine learning to code development? How about debugging?"
The issue here isn't that it's impossible, but rather that it's an incredibly complex topic.
Humans have not even come close to defining a universal coding standard that everyone agrees with. Even the most widely agreed-upon principles, like SOLID, are still a source of discussion as to how strictly they should be applied. For all practical purposes, it's impossible to adhere to SOLID perfectly unless you have no financial (or time) constraints whatsoever, which simply isn't the case in the private sector, where most development occurs. SOLID is a guideline, not a hard limit.
In the absence of an objective measure of right and wrong, how are we going to be able to give a machine positive/negative feedback to make it learn?
At best, we can have many people give their own opinion to the machine ("this is good/bad code"), and the machine's result will then be an "average opinion". But that's not necessarily the same as a correct solution. It can be, but it's not guaranteed to be.
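To make that concrete, here is a minimal, purely illustrative sketch (the snippets, reviewers, and votes are all hypothetical): subjective "good/bad" votes get collapsed into a single training label per snippet, and whatever the model later learns to reproduce is exactly that averaged opinion, not some ground truth. An even split simply collapses to one side, and the disagreement is lost.

```python
# A minimal sketch, not a real system: turning subjective reviewer votes
# into training labels for a "code quality" model. All data is invented.
from collections import Counter

# Hypothetical snippets, each rated "good"/"bad" by several human reviewers.
reviews = {
    "snippet_a": ["good", "good", "bad"],
    "snippet_b": ["bad", "bad", "bad"],
    "snippet_c": ["good", "bad", "bad", "good"],  # reviewers disagree evenly
}

def majority_label(votes):
    """Collapse subjective votes into one training label by majority vote."""
    label, _count = Counter(votes).most_common(1)[0]
    return label

training_labels = {snippet: majority_label(votes) for snippet, votes in reviews.items()}
print(training_labels)
# snippet_c's 2-2 split still collapses to a single label; the disagreement
# (and any notion of who is actually "right") is lost before training begins.
```

Anything trained on labels produced this way can at best echo the consensus of its labelers; it has no independent notion of correctness to fall back on.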
Secondly, for debugging in particular, it's important to acknowledge that specific developers are prone to introducing specific types of bugs. The nature of a mistake is often influenced by the developer who introduced it.
For example, as I am often involved in bugfixing others' code at work, I have a sort of expectation of what kind of mistake each developer is prone to make. Given a certain problem, I know that dev A is likely to forget to update the config file, whereas dev B often writes bad LINQ queries. Based on the developer, I may look at the config file or the LINQ queries first.
Similarly, I've worked at several companies as a consultant now, and I can clearly see that types of bugs can be biased towards certain types of companies. It's not a hard and fast rule that I can conclusively point out, but there is a definite trend.
Can a machine learn this? Can it realize that dev A is more likely to mess up the config and dev B is more likely to mess up a LINQ query? Of course it can. Like I said before, anything a human can learn, a machine can as well.
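As a purely illustrative sketch (the developers, bug categories, and counts below are invented), even simple frequency counts over past fixes would let a machine rank where to look first for a given developer:

```python
# A minimal sketch, assuming a mined history of (developer, root cause) pairs
# from past bug fixes. All names and categories here are hypothetical.
from collections import Counter, defaultdict

history = [
    ("dev_a", "config"), ("dev_a", "config"), ("dev_a", "linq"),
    ("dev_b", "linq"),   ("dev_b", "linq"),   ("dev_b", "config"),
]

# Count how often each developer introduced each category of bug.
per_dev = defaultdict(Counter)
for dev, cause in history:
    per_dev[dev][cause] += 1

def likely_causes(dev):
    """Rank root-cause categories by how often this developer introduced them."""
    counts = per_dev[dev]
    total = sum(counts.values())
    return [(cause, count / total) for cause, count in counts.most_common()]

print(likely_causes("dev_a"))  # [('config', 0.67), ('linq', 0.33)] -> check config first
print(likely_causes("dev_b"))  # [('linq', 0.67), ('config', 0.33)] -> check LINQ first
```

Learning the pattern is the easy part; the hard part is everything that follows.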
However, how do you know that you've taught the machine the full range of possibilities? How can you ever provide it with a small (i.e. not global) dataset and know for a fact that it represents the full spectrum of bugs? Or, would you instead create specific debuggers to help specific developers/companies, rather than create a debugger that is universally usable?
Asking for a machine-learned debugger is like asking for a machine-learned Sherlock Holmes. It's not provably impossible to create one, but the core reasoning of a debugger/Sherlock often hinges on subjective assessments that vary from subject to subject and touch on an incredibly wide variety of knowledge and possible flaws.
The lack of quickly provable correct/incorrect outcomes makes it hard to teach a machine and to verify that it's making good progress.