I think the question is really about who is responsible for the quality of the data.
The answer depends on how you see the system.
If you see the database as an independent, distinct, and autonomous service separate from the application, then the database is responsible for ensuring the consistency and quality of the data it contains. This is essentially because the database could be used by a different application, and it cannot rely on that other application having the same consistency and quality behaviours. In these circumstances the database needs to be designed to expose an API and to behave autonomously. In this view there are at least two applications: one of them is the database and the other is the application using it.
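As a rough illustration of this first view, here is a minimal sketch in Python using the built-in sqlite3 module, where the schema itself rejects bad data regardless of which client wrote it. The `orders` table and its rules are invented for the example.

```python
# A sketch of the "database as an autonomous service" view: the constraints
# live in the schema, so any client that connects is held to the same rules.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE orders (
        id       INTEGER PRIMARY KEY,
        quantity INTEGER NOT NULL CHECK (quantity > 0),
        email    TEXT    NOT NULL CHECK (email LIKE '%_@_%')
    )
""")

# A well-behaved client inserts valid data.
conn.execute("INSERT INTO orders (quantity, email) VALUES (?, ?)",
             (3, "alice@example.com"))

# A badly-behaved client is still stopped by the database itself.
try:
    conn.execute("INSERT INTO orders (quantity, email) VALUES (?, ?)",
                 (-1, "not-an-email"))
except sqlite3.IntegrityError as err:
    print("rejected by the database:", err)
```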
Conversely, the database could be considered a complicated form of file that is under the direct and total control of the application. In this sense the database devolves into a pure serialisation and document-navigation tool. It may provide some advanced behaviours to support querying and document maintenance (as JSON or XML tools do), but it does not have to (just as most file streams do not). In this case it is purely the program's responsibility to maintain the correct format and content within the file. In this view there is one application.
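A corresponding sketch of this second view, where the "database" is just a JSON file and every quality rule lives in application code. The `Order` shape and the `validate` rules are hypothetical.

```python
# The storage layer knows nothing about validity; the application decides
# what "valid" means and enforces it before serialising anything.
import json
from dataclasses import dataclass, asdict

@dataclass
class Order:
    quantity: int
    email: str

def validate(order: Order) -> None:
    if order.quantity <= 0:
        raise ValueError("quantity must be positive")
    if "@" not in order.email:
        raise ValueError("email looks malformed")

def save(orders: list[Order], path: str) -> None:
    for order in orders:
        validate(order)                     # enforce quality before persisting
    with open(path, "w", encoding="utf-8") as f:
        json.dump([asdict(o) for o in orders], f)
```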
In both views the next question is how to support using the database as either a fancy file or a separate service. You could achieve this by:
- using the tools that the database platform provides in the form of tables/views/stored procedures/triggers/etc...
- wrapping the database itself within a service that all clients must use in order to access the database
- wrapping the database in a library which must be used by all clients in order to access the data (a rough sketch of this option follows below).
Each comes with its own pros and cons, and the right choice will depend on the architectural constraints of the environment the system operates within.
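As promised above, here is a rough sketch of the library-wrapper option: a small repository class that every client must go through, so validation is centralised in one place. The `OrderRepository` API is illustrative, not a real library.

```python
import sqlite3

class OrderRepository:
    """The only sanctioned way to read or write orders."""

    def __init__(self, conn: sqlite3.Connection):
        self._conn = conn
        self._conn.execute(
            "CREATE TABLE IF NOT EXISTS orders ("
            "id INTEGER PRIMARY KEY, quantity INTEGER, email TEXT)"
        )

    def add(self, quantity: int, email: str) -> None:
        # Validation lives here, so every client gets it for free.
        if quantity <= 0:
            raise ValueError("quantity must be positive")
        if "@" not in email:
            raise ValueError("email looks malformed")
        self._conn.execute(
            "INSERT INTO orders (quantity, email) VALUES (?, ?)",
            (quantity, email),
        )

repo = OrderRepository(sqlite3.connect(":memory:"))
repo.add(2, "bob@example.com")        # accepted
try:
    repo.add(0, "bob@example.com")    # rejected before it reaches the database
except ValueError as err:
    print("rejected by the library:", err)
```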
Regardless of which view you take, it always pays to validate data at boundaries (a small example follows this list).
- Validate the fields on a UI that a user enters
- Validate the network/API request before it leaves the client
- Validate the network/API request in the server before doing anything
- Validate the data being passed into business rules
- Validate the data before being persisted
- Validate the data after being retrieved from persistence
- so on and so on
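As an example of one of these boundaries, here is a small sketch of validating an incoming API request on the server before any business rules run. The payload shape and error messages are invented for illustration.

```python
# Reject bad requests at the server boundary so business rules never see them.
def validate_request(payload: dict) -> list[str]:
    """Return a list of problems; an empty list means the request is usable."""
    errors = []
    if not isinstance(payload.get("quantity"), int) or payload["quantity"] <= 0:
        errors.append("quantity must be a positive integer")
    if "@" not in str(payload.get("email", "")):
        errors.append("email looks malformed")
    return errors

def handle_request(payload: dict) -> dict:
    errors = validate_request(payload)
    if errors:
        return {"status": 400, "errors": errors}   # stopped at the boundary
    return {"status": 200}                         # safe to hand to business rules

print(handle_request({"quantity": 3, "email": "alice@example.com"}))
print(handle_request({"quantity": -1}))
```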
How much validation is warranted at each boundary depends on how risky it is not to validate at that boundary. For example (the regex case is sketched after this list):
- multiplying two numbers together?
  - If you get the wrong number, is that a problem?
- invoking a procedure on a given memory location?
  - What is in that memory location?
  - What happens if the object does not exist, or is in a bad state?
- using a regex on a string containing kanji?
  - Can the regex module handle unicode?
  - Can the regex pattern itself handle unicode?
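To make that last case concrete, here is a small sketch probing those questions in Python, whose `re` module operates on unicode strings by default. The sample text and patterns are just for illustration.

```python
import re

text = "注文番号 12345 を確認してください"

# \d matches unicode decimal digits by default; pass re.ASCII if only [0-9]
# is acceptable. Both calls find the same digits in this sample.
print(re.findall(r"\d+", text))              # ['12345']
print(re.findall(r"\d+", text, re.ASCII))    # ['12345']

# \w matches kanji in unicode mode, which may or may not be what the
# pattern's author intended; that assumption is worth checking explicitly.
print(bool(re.search(r"\w+", text)))         # True
```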