Most optimization algorithms (including heuristics) work on configurations (in your example, a route) by applying operations to them. The operations themselves should guarantee that they only produce valid configurations, so first there should be a unit test for each of them. Once you know for sure that the optimization algorithm uses only those operations, there is typically no need for a separate validity test on the algorithm's result.
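As a minimal sketch (the `two_opt_swap` move operator here is hypothetical; substitute whatever operations your algorithm actually uses), such a test checks the operation's invariant, namely that it still returns a valid route:

```python
import unittest

def two_opt_swap(route, i, k):
    """Reverse the segment route[i:k+1] -- a classic 2-opt move.
    (Hypothetical example operation; use your own move operators.)"""
    return route[:i] + route[i:k + 1][::-1] + route[k + 1:]

class TwoOptSwapTest(unittest.TestCase):
    def test_result_is_still_a_valid_route(self):
        route = ["A", "B", "C", "D", "E"]
        result = two_opt_swap(route, 1, 3)
        # A valid route visits exactly the same points, each exactly once.
        self.assertCountEqual(result, route)

    def test_segment_is_reversed(self):
        self.assertEqual(two_opt_swap(["A", "B", "C", "D", "E"], 1, 3),
                         ["A", "D", "C", "B", "E"])

if __name__ == "__main__":
    unittest.main()
```

If every operation passes such tests, any composition of them keeps the configuration valid.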
To create good unit tests for any kind of more complex algorithm, you actually have to know the algorithm itself in detail. For simple heuristics like hill climbing, you can typically predict the outcome for small inputs: for example, given an initial route of 3 to 5 points in a certain order, you can work out by hand what the result will be. This holds for most deterministic heuristic algorithms I know of, so that's probably a good place to start.
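For instance (a sketch, assuming a deterministic first-improvement hill climber over 2-opt neighbours; `hill_climb` and `route_length` are illustrative names, not from your code): with four points on a line, you can predict by hand that the search must end at the sorted route.

```python
import math

def route_length(route, coords):
    """Total length of an open route through the given coordinates."""
    return sum(math.dist(coords[a], coords[b])
               for a, b in zip(route, route[1:]))

def hill_climb(route, coords):
    """First-improvement hill climbing over 2-opt neighbours.
    Deterministic because neighbours are scanned in a fixed order."""
    improved = True
    while improved:
        improved = False
        for i in range(len(route) - 1):
            for k in range(i + 1, len(route)):
                candidate = route[:i] + route[i:k + 1][::-1] + route[k + 1:]
                if route_length(candidate, coords) < route_length(route, coords):
                    route, improved = candidate, True
                    break
            if improved:
                break
    return route

# Four collinear points: the optimum A-B-C-D is easy to predict by hand,
# so the assertion pins down the exact expected result.
coords = {"A": (0, 0), "B": (1, 0), "C": (2, 0), "D": (3, 0)}
assert hill_climb(["A", "C", "B", "D"], coords) == ["A", "B", "C", "D"]
```

Because both the scan order and the input are fixed, the test is exact rather than probabilistic.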
For more complex algorithms and larger inputs, if you just feed the input into the algorithm and check the output, you are no longer writing a unit test; you are writing an acceptance or integration test. The reason it is hard to "unit test" such an algorithm is that it typically consists of a handful of smaller parts (the actual units). So to really unit test it, you have to identify those parts and test them individually. Additionally, you can use code coverage or branch coverage techniques to make sure you have enough test cases.
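Concretely, a local-search algorithm usually decomposes into a cost function, a neighbourhood generator, and a search loop, and each part can get its own focused tests. A sketch (these sub-units are hypothetical stand-ins for whatever your algorithm is built from):

```python
import math
import unittest

def route_length(route, coords):
    """Cost function: total length of an open route."""
    return sum(math.dist(coords[a], coords[b])
               for a, b in zip(route, route[1:]))

def neighbours(route):
    """Neighbourhood generator: all 2-opt neighbours (segment reversals)."""
    for i in range(len(route) - 1):
        for k in range(i + 1, len(route)):
            yield route[:i] + route[i:k + 1][::-1] + route[k + 1:]

class CostFunctionTest(unittest.TestCase):
    def test_known_small_value(self):
        # 3-4-5 triangle: the distance is exactly 5.
        coords = {"A": (0, 0), "B": (3, 4)}
        self.assertEqual(route_length(["A", "B"], coords), 5.0)

class NeighbourhoodTest(unittest.TestCase):
    def test_every_neighbour_is_a_valid_route(self):
        route = ["A", "B", "C", "D"]
        for n in neighbours(route):
            self.assertCountEqual(n, route)

    def test_neighbourhood_size(self):
        # For n=4 there are C(4,2) = 6 segment reversals.
        self.assertEqual(len(list(neighbours(["A", "B", "C", "D"]))), 6)

if __name__ == "__main__":
    unittest.main()
```

When each unit is pinned down like this, a bug in the full algorithm usually points directly at one failing part.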
If you are not looking for unit tests but for automated acceptance or integration tests, you can try what @Phillip suggested under (2) or (3).
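One common shape for such an automated acceptance test is a property check: feed many random inputs to the algorithm and assert invariants of the output rather than exact values. A sketch (assuming only that your optimizer takes and returns a route; `optimize` and `cost` are placeholders for your own functions):

```python
import random

def is_valid_route(route, points):
    """A valid route visits exactly the given points, each exactly once."""
    return sorted(route) == sorted(points)

def acceptance_check(optimize, points, cost, trials=100):
    """Property-style acceptance test: for many random starting routes,
    the optimizer must return a valid route that is no worse than its input."""
    rng = random.Random(42)  # fixed seed keeps the test reproducible
    for _ in range(trials):
        start = points[:]
        rng.shuffle(start)
        result = optimize(start)
        assert is_valid_route(result, points)
        assert cost(result) <= cost(start)
```

This does not prove optimality, but it catches the two failure modes that matter most in practice: corrupting the configuration and making it worse.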