You need a test environment where the solution can be executed without affecting whatever is in production. If they have a procedure for that, somebody can do it for you; otherwise, you have to figure it out. And I suspect this case will be on the figure-it-out side.
Set up the test environment
That test environment includes a database. You can set up a local database for that purpose.
Ideally the solution knows the schema it needs... Although that is not common, so you will probably have to ask whoever handles the production database to export the schema so you can set it up in the test environment.
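For illustration, here is a minimal sketch of rebuilding a throwaway database from that schema export, assuming PostgreSQL and psycopg2 – the database name, credentials, and `schema.sql` path are placeholders for whatever your setup actually uses:

```python
import psycopg2

def create_test_database(schema_path="schema.sql"):
    """Create a throwaway database and load the exported schema into it."""
    admin = psycopg2.connect(dbname="postgres", user="test",
                             password="test", host="localhost")
    admin.autocommit = True  # CREATE DATABASE cannot run inside a transaction
    with admin.cursor() as cur:
        cur.execute("DROP DATABASE IF EXISTS solution_test")
        cur.execute("CREATE DATABASE solution_test")
    admin.close()

    test_db = psycopg2.connect(dbname="solution_test", user="test",
                               password="test", host="localhost")
    with test_db.cursor() as cur, open(schema_path) as f:
        cur.execute(f.read())  # replay the exported schema dump
    test_db.commit()
    return test_db
```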
Next you need to figure out the connection. Ideally it is not hard-coded, so you can edit a configuration file to point at your test database... Although sometimes that is not the case. Well, first thing to change, I guess. Do the change on your branch – I'm assuming they have version control; if they don't... oh boy – and that change should not go to production before your test environment is working (in case you broke something) and you have the approval of the system admin (who needs to double-check the configuration for the release environment).
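If the connection string really is hard-coded, the safest minimal change is to make it overridable while keeping the old value as the default, so nothing breaks for existing deployments. A sketch in Python, assuming an environment variable as the override (all names here are invented for illustration):

```python
import os

# The old hard-coded value stays as the fallback so production behavior
# is unchanged; DATABASE_URL is the test-environment override.
DEFAULT_DATABASE_URL = "postgresql://app@prod-db/solution"

def get_database_url():
    """Read the connection string from the environment, with a fallback."""
    return os.environ.get("DATABASE_URL", DEFAULT_DATABASE_URL)
```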
Once it is connecting to the database, you need data. Ideally the solution can start with an empty database... Although, again, that is not common. See if there is any documentation of what the solution needs to start (usually it will be just a few entries; perhaps there is a script to run somewhere). If there is no documentation, you will have to figure it out from the error logs or messages. I'm assuming there are logs; if there aren't, put "add error logs" on a TODO list, or the issue tracker, or kanban board, or whatever they use. And document whatever the solution needs to run.
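Once you have figured out the minimal entries, capture them in a seed script so the next person does not have to repeat the archaeology. A sketch – the tables and values are made up; use whatever your solution actually complains about on first start:

```python
# Hypothetical seed data discovered from the error logs.
SEED_STATEMENTS = [
    "INSERT INTO settings (key, value) VALUES ('version', '1')",
    "INSERT INTO users (name, role) VALUES ('admin', 'administrator')",
]

def seed(connection):
    """Insert the minimal rows the solution needs to start."""
    with connection.cursor() as cur:
        for statement in SEED_STATEMENTS:
            cur.execute(statement)
    connection.commit()
```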
Note: you may have to do similar work for other external systems. For example, is the code supposed to send email? Then you need a mail server to connect to and a mailbox you can check programmatically.
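One way to get a programmatically checkable mailbox is a capture-only SMTP server such as MailHog. A sketch, assuming MailHog is running locally on its default ports (SMTP on 1025, HTTP API on 8025):

```python
import json
import smtplib
import urllib.request
from email.message import EmailMessage

def send_test_mail():
    """Send a message through the local capture server instead of a real MTA."""
    msg = EmailMessage()
    msg["From"] = "test@example.com"
    msg["To"] = "user@example.com"
    msg["Subject"] = "integration test"
    msg.set_content("hello")
    with smtplib.SMTP("localhost", 1025) as smtp:
        smtp.send_message(msg)

def captured_messages():
    """Fetch captured mail from MailHog's HTTP API."""
    with urllib.request.urlopen("http://localhost:8025/api/v2/messages") as resp:
        return json.load(resp)["items"]
```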
Getting ready to write tests
All that was just setting up the environment. You have not written a single test yet. Even though you are going to be writing integration tests, an automated test framework is a good idea (despite the fact that plenty of them say "unit" in their name). So pick one.
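If the stack allows Python, pytest is one such framework. A minimal skeleton could look like this, reusing the helpers sketched earlier – the `./run-solution --check` command is a placeholder for however you actually drive the system:

```python
import subprocess
import pytest

@pytest.fixture
def test_db():
    """Fresh, seeded database per test (helpers sketched earlier)."""
    db = create_test_database()
    seed(db)
    yield db
    db.close()  # runs even if the test fails

def test_solution_starts(test_db):
    # Placeholder command: replace with whatever launches your solution.
    result = subprocess.run(["./run-solution", "--check"])
    assert result.returncode == 0
```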
Oh, by the way, depending on the project you could need a tool that lets you move the mouse pointer and send keystrokes, or perhaps a tool that lets you manipulate a browser programmatically, or something like that.
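Selenium is one example of the browser kind. A sketch – the URL and element id are placeholders:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

def test_login_page_renders():
    driver = webdriver.Chrome()  # needs a matching chromedriver installed
    try:
        driver.get("http://localhost:8080/login")      # placeholder URL
        assert driver.find_element(By.ID, "username")  # placeholder element id
    finally:
        driver.quit()
```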
Your initial tests will treat the system as a black box: you do something and check that the expected thing happens. If you are lucky, plenty of the operations are idempotent, so repeating a test is harmless. The usual recommendation is to undo during arrange (in the Arrange-Act-Assert pattern); however, given that the system was not designed for testing, you may need a teardown after the asserts. The AAA pattern is meant for unit tests, and sometimes integration tests have to step out of line.
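In pytest terms, "undo during arrange" versus "teardown after asserts" can both live in one fixture. A sketch – `delete_all_orders` is a hypothetical cleanup helper:

```python
import pytest

@pytest.fixture
def clean_orders(test_db):
    # delete_all_orders is a made-up cleanup helper for illustration.
    delete_all_orders(test_db)  # undo during arrange: start from a known state
    yield test_db
    # Teardown after the asserts: needed when leftover state would
    # break the system or the next test.
    delete_all_orders(test_db)
```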
What to test
Test the documentation. There are documented requirements for the solution, right? Well, there is a chance there aren't any, or they are in somebody's head. If there are, write tests to assert every part of them, including – and especially – the exceptional paths.
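An exceptional-path test is just as mechanical as a happy-path one. A sketch, assuming a documented rule along the lines of "unknown users must be rejected" – `authenticate` and `AuthError` are hypothetical names standing in for whatever your system exposes:

```python
import pytest

def test_unknown_user_is_rejected():
    # authenticate and AuthError are invented names for illustration.
    with pytest.raises(AuthError):
        authenticate("no-such-user", "some-password")
```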
For the parts with no documentation, try to characterize the behavior of the system. Start with a guess of what it should do, based on experimentation, reading the code, and talking to people. If you can figure out what it should do, write a test for that. If you can't figure it out... I suggest writing a test anyway: one that passes when the system does what it currently does, and put it in a category apart so you know you were not sure the next time you see it.
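pytest markers are one way to keep that category apart. A sketch – `apply_discount` is a made-up function, and 7 stands for whatever the system returns today, correct or not:

```python
import pytest

# Characterization test: pins down current behavior, not known-correct behavior.
# Register "characterization" under markers in pytest.ini so pytest does not
# warn, then select these tests with: pytest -m characterization
@pytest.mark.characterization
def test_discount_rounds_down_currently():
    assert apply_discount(price=10, percent=25) == 7
```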
Test the interaction with other systems. If something should affect an external system, check that it does.
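Combining the earlier pieces, an external-system check might look like this – `trigger_password_reset` is a made-up entry point, and the message layout follows MailHog's v2 API as I understand it:

```python
def test_password_reset_sends_email():
    trigger_password_reset("user@example.com")  # hypothetical entry point
    items = captured_messages()                 # MailHog helper from earlier
    recipients = [addr for m in items
                  for addr in m["Content"]["Headers"].get("To", [])]
    assert "user@example.com" in recipients
```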
Test everything that is exposed, and try to increase coverage. You do not need 100% coverage; however, some coverage on all public parts is good.
I could go on about refactoring or change management... but I would ramble a lot. I know, because I wrote it and then deleted it. So, hopefully I covered what you are interested in.