Warning: big post, some opinions, vague 'do what works best for you' conclusion
Generally, this is done as a means of implementing 'hexagonal architecture' around your database. You can have web applications, mobile applications, desktop applications, bulk importers, and background processing all consume your database in a uniform way. Certainly you could accomplish the same thing to some extent by writing a rich library for accessing your database and having all of your processes use that library. And indeed, if you're in a small shop with a very simple system, that's probably the better route to go; it's a simpler approach, and if you don't need the advanced capabilities of a more complicated system, why pay for the complexity? However, if you're working with a large, sophisticated set of systems that all need to interact with your database at scale, there are a lot of benefits to putting a web service between your applications and your data:
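To make the 'hexagonal' idea concrete, here's a minimal sketch (in Python, with hypothetical names) of the port-and-adapters shape: the application codes against an interface, and whether the data comes straight from the database or through a REST API is an adapter detail the caller never sees.

```python
from abc import ABC, abstractmethod


# Port: the application-facing interface. Application code depends on this,
# never on how the data is actually fetched.
class UserRepository(ABC):
    @abstractmethod
    def get_user(self, user_id: int) -> dict: ...


# Adapter 1: talks to the database directly via a DB-API connection.
class SqlUserRepository(UserRepository):
    def __init__(self, connection):
        self._conn = connection

    def get_user(self, user_id: int) -> dict:
        row = self._conn.execute(
            "SELECT id, name FROM users WHERE id = ?", (user_id,)
        ).fetchone()
        return {"id": row[0], "name": row[1]}


# Adapter 2: talks to a REST API instead -- callers can't tell the difference.
class RestUserRepository(UserRepository):
    def __init__(self, fetch):
        self._fetch = fetch  # e.g. a function wrapping an HTTP client

    def get_user(self, user_id: int) -> dict:
        return self._fetch(f"/users/{user_id}")
```

The point is that swapping direct database access for a web service is an adapter swap, not a rewrite of every consumer.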
Platform independence & maintenance
If you have a database, and you write a Python library to interact with that database, and everybody pulls in that library to interact with the database, that's great. But let's say suddenly you need to write a mobile app, and that mobile app now needs to talk to the database as well. And your iOS engineers don't use Python, and your Android engineers don't use Python. Maybe the iOS engineers want to use Apple's languages and the Android engineers want to use Java. Then you'd be stuck writing and maintaining your data access library in 3 different languages. Maybe the iOS and Android devs decide to use something like Xamarin to maximize the code they can share. Perfect, except you're probably still going to have to port your data access library to .NET. And then your company purchases another company whose web application is a disparate but related product, and the business wants to integrate some of the data from your company's platform into the newly acquired subsidiary's platform. Only there's one problem: the subsidiary was a start-up and decided to write the bulk of their application in Dart. Plus, for whatever reasons (probably beyond your control), the mobile team that was piloting Xamarin decided it's not for them, and that they'd rather use the tools and languages specific to the mobile devices they'll be developing for. But while you were in that phase, your team had already delivered a large portion of your data access library in .NET, and another team in the company was writing some crazy Salesforce integration stuff and decided to do all of that in .NET, since there was already a data access library for .NET and it seemed like a good idea because mobile was initially planning to use .NET as well.
So now, because of a very realistic turn of events, you have your data access library written in Python, .NET, Swift, Java, and Dart. They're not as nice as you'd like them to be, either. You couldn't use an ORM as effectively as you'd like to, because each language has different ORM tools, so you've had to write more code than you would have liked. And you haven't been able to devote as much time to each incarnation as you would have wanted, because there are 5 of them. And the Dart version of the library is especially hairy, because you had to roll your own transactional stuff for some of it; the libraries and support just weren't really there. You tried to make the case that because of this, the Dart application should have only had read-only access to your database, but the business had already made up their minds that whatever features they were planning were worth the extra effort. And it turns out there's a bug in some of the validation logic that exists in all of these incarnations of your data access library. Now you have to write tests and code to fix this bug in all of these libraries, get code reviews for your changes to all of these libraries, get QA on all of these libraries, and release your changes to all of the systems using all of these libraries. Meanwhile, your customers are displeased and have taken to Twitter, stringing together combinations of vulgarities you never would have imagined could be conceived, let alone targeted at your company's flagship product. And the product owner decides to be not very understanding about the situation at all.
Please understand that in some environments, the above example is anything but contrived. Also take into consideration that this sequence of events may unfold over the course of a few years. Generally, when you get to the point where architects and business people start talking about hooking up other systems to your database, that's when you're going to want to get 'putting a REST API in front of the database' onto your roadmap. Consider if, early on, when it was clear that this database was going to be shared by a few systems, a web service/REST API had been put in front of it. Fixing your validation bug would be a lot quicker and easier, because you're doing it once instead of 5 times. And releasing the fix would be a lot easier to coordinate, because you're not dependent on several other systems releasing in order to get your change out there.
TLDR; It's easier to centralize the data access logic and maintain very thin HTTP clients than it is to distribute the data access logic to each application that needs to access the data. In fact, your HTTP client may even be generated from metadata. In large systems, the REST API lets you maintain less code.
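Here's what "very thin" means in practice: a client that only knows how to reach endpoints and parse responses, with zero business or validation logic of its own. This is a sketch with hypothetical endpoint names; a real client might instead be generated from an OpenAPI/Swagger spec (the "generated from metadata" case).

```python
import json
import urllib.request


class ThinDataClient:
    """A deliberately thin HTTP client: no business rules, no validation.
    All of that lives server-side, behind the REST API, so a bug fix
    happens once instead of once per language."""

    def __init__(self, base_url: str, opener=None):
        self.base_url = base_url.rstrip("/")
        # The opener is injectable so the client is trivial to stub in tests.
        self._open = opener or urllib.request.urlopen

    def get_user(self, user_id: int) -> dict:
        with self._open(f"{self.base_url}/users/{user_id}") as resp:
            return json.load(resp)
```

Porting this to Swift, Java, or Dart is an afternoon of work; porting a full data access library is not.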
Performance and scalability
Some people may believe that talking to the database directly instead of going through a web service first is faster. If you have only one application, that's certainly true. But in larger systems, I disagree with the sentiment. Eventually, at some level of scale, it's going to be very beneficial to put some kind of cache in front of the database. Maybe you're using Hibernate, and want to install an Infinispan grid as an L2 cache. If you've got a cluster of 4 beefy servers to host your web service separate from your applications, you can afford to have an embedded topology with synchronous replication turned on. If you try to put that on a cluster of 30 application servers, the overhead of turning on replication in that setup will be too much, so you'll either have to run Infinispan in a distributed mode or in some kind of dedicated topology, and suddenly Hibernate has to go out over the network in order to read from the cache. Plus, Infinispan only works in Java. If you have other languages, you'll need other caching solutions. The network overhead of having to go from your application to your web service before reaching the database is quickly offset by the need to use much more complicated caching solutions that generally come with overhead of their own.
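The shape of the cache a dedicated data service can get away with is worth seeing. This is a minimal read-through cache sketch (not Infinispan, just the idea): with only a few service nodes in front of the database, a plain in-process store like this is viable, whereas spread across 30 application servers it would need replication or a network hop, which is exactly the overhead described above.

```python
class ReadThroughCache:
    """Minimal read-through cache a dedicated data service might keep in
    front of the database. On a cache miss, the loader hits the database
    and the result is kept for subsequent reads."""

    def __init__(self, loader):
        self._loader = loader  # function that actually queries the database
        self._store = {}
        self.misses = 0

    def get(self, key):
        if key not in self._store:
            self.misses += 1
            self._store[key] = self._loader(key)
        return self._store[key]

    def invalidate(self, key):
        # Called when the service itself writes -- easy, because all writes
        # flow through this one service rather than 30 app servers.
        self._store.pop(key, None)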
Additionally, that HTTP layer of your REST API provides another valuable caching mechanism. Your servers for your REST API can put caching headers on their responses, and these responses can be cached at the network layer, which scales exceptionally well. In a small setup, with one or two servers, your best bet is to just use an in-memory cache in the application when it talks to the database, but in a large platform with many applications running on many servers, you want to leverage the network to handle your caching, because when properly configured, something like Squid or Varnish or nginx can scale out to insane levels on relatively small hardware. Hundreds of thousands or millions of requests per second of throughput is a lot cheaper to do from an HTTP cache than it is from an application server or a database.
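Concretely, letting Squid/Varnish/nginx do the caching is mostly a matter of the API putting the right headers on its responses. A small sketch of building those headers (the specific values here are illustrative, not a recommendation):

```python
def cache_headers(max_age_seconds: int, etag: str = None, public: bool = True) -> dict:
    """Build response headers that let network-layer caches serve repeat
    reads without the request ever reaching the API servers or database."""
    headers = {
        "Cache-Control": f"{'public' if public else 'private'}, max-age={max_age_seconds}",
    }
    if etag is not None:
        # An ETag lets caches revalidate cheaply with a conditional request
        # instead of refetching the full body.
        headers["ETag"] = f'"{etag}"'
    return headers
```

Per-user responses would be marked `private` (or not cached at all), while shared reference data can sit in the network cache for as long as the business can tolerate staleness.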
On top of that, having a ton of clients all pointed at your database, instead of having them all pointed at a few servers which in turn point to the database, can make tuning the database and connection pooling a lot harder. In general, most of the actual workload on an application server is application stuff; waiting for data to come back from the database is often time consuming, but generally not very computationally expensive. You may need 40 servers to handle your application's workload, but you probably don't need 40 servers to orchestrate fetching the data from the database. If you dedicate that task to a web service, the web service will probably be running on far fewer servers than the rest of the application, which means you'll need far fewer connections to the database. Which is important, because databases generally don't perform as well when they're servicing tons of concurrent connections.
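The arithmetic here is simple but striking. Using the rule of thumb from the HikariCP pool-sizing wiki linked in the Sources (connections = cores × 2 + effective spindle count), a sketch of how total connection counts multiply per server:

```python
def suggested_pool_size(core_count: int, spindle_count: int = 1) -> int:
    """Rule of thumb from the HikariCP pool-sizing wiki (see Sources):
    connections = (cores * 2) + effective spindle count."""
    return core_count * 2 + spindle_count


def total_db_connections(server_count: int, pool_size: int) -> int:
    # Every server keeps its own pool, so connection counts multiply
    # with the size of the fleet pointing at the database.
    return server_count * pool_size
```

For an 8-core box with one disk, the rule suggests a pool of 17. Forty application servers each holding such a pool means 680 connections at the database; four dedicated web-service servers means 68. The database will notice the difference.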
TLDR; It's easier to tune, scale and cache your data access when it's something that happens inside of a single dedicated web service than it is when it's something that happens across many different applications using different languages and technologies
Final thoughts
Please don't come away from this thinking "Oh wow, I should always be using REST APIs to get my data" or "This idiot is trying to say we're doing it wrong because our web app talks to the database directly, but our stuff works fine!". The major point I'm trying to make is that different systems and different businesses have different requirements; in a lot of cases, putting a REST API in front of your database really doesn't make sense. It is a more complicated architecture, and that complexity has to be justified. But when the complexity is warranted, there's a ton of benefits to having the REST API. Being able to weigh the different concerns and choose the right approach for your system is what makes a good engineer.
Additionally, if the REST API is getting in the way of debugging things, there's likely something wrong or missing in that picture. I don't believe having that added abstraction layer intrinsically makes debugging harder. When I work with large, n-tier systems, I like to make sure I have a distributed logging context. Perhaps when a user initiates a request, generate a GUID for that request and log the username of that user and the request they made. Then, pass that GUID on as your application talks to other systems. With proper log aggregation and indexing, you can query your entire platform for the user reporting the issue, and have visibility into all of their actions as they trickle through the system, to quickly identify where things went wrong. Again, it's a more complicated architecture, so you should probably have more complicated infrastructure in place to facilitate supporting that architecture.
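The GUID-per-request idea can be sketched in a few lines with the standard library. Carrying the id on outbound calls as an `X-Request-ID` header is a common convention, not something prescribed here:

```python
import logging
import uuid


def new_request_context(username: str) -> dict:
    # Minted once, at the edge, when the user's request first arrives.
    return {"request_id": str(uuid.uuid4()), "user": username}


def get_logger(context: dict) -> logging.LoggerAdapter:
    # LoggerAdapter stamps the context onto every record this service emits.
    # The same request_id would also ride along on outbound HTTP calls
    # (e.g. in an X-Request-ID header) so downstream services log it too.
    return logging.LoggerAdapter(logging.getLogger("platform"), context)
```

With every tier logging the same `request_id`, one indexed query in your log aggregator reconstructs the user's entire path through the platform.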
Sources:
http://alistair.cockburn.us/Hexagonal+architecture
https://github.com/brettwooldridge/HikariCP/wiki/About-Pool-Sizing