If the test suite literally takes hours to run, that is the first thing to fix: focus on optimizing your tests.
When it comes to testing in the context of SOA, it is important to be able to:
Test the internals of the service before deploying it. The tests here will be very granular, white-box tests which run in isolation. You will usually have a lot of them, and each one runs in a matter of milliseconds. The whole set should take a few seconds (in the context of a microservice; for large applications, it may be a few minutes).
Test the interfaces (this includes the REST/SOAP interface provided by the service under test, but also the way it consumes the underlying services; you don't want to deploy a service which behaves badly towards the services it uses). These tests are slightly slower, but still run in isolation. They may take a few minutes.
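The first category can be sketched as an ordinary fast unit test: no I/O, no network, pure logic. The function under test here (`apply_discount`) is purely illustrative, standing in for some internal rule of an e-commerce service.

```python
# Hypothetical example of a granular white-box test: it exercises
# internal logic only, touches no network or disk, and so runs in
# milliseconds -- thousands of these still finish in seconds.

def apply_discount(price_cents: int, percent: int) -> int:
    """Return the discounted price, rounded down to whole cents."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price_cents * (100 - percent) // 100

def test_apply_discount():
    assert apply_discount(1000, 25) == 750
    assert apply_discount(999, 10) == 899   # rounds down
    try:
        apply_discount(1000, 150)
        assert False, "expected ValueError"
    except ValueError:
        pass

test_apply_discount()
```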
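An interface test on the consuming side might look like the sketch below: the downstream dependency is replaced by a mock, so the test stays isolated while still pinning down *how* the service talks to that dependency. The names (`reserve_stock`, the `/reservations` endpoint, the payload shape) are illustrative, not a real API.

```python
# Sketch of a consumer-side interface test: the inventory service is
# stubbed with a mock, and the test asserts on the exact request our
# service sends -- i.e. on the contract, not on the implementation.
from unittest.mock import Mock

def reserve_stock(client, sku: str, quantity: int) -> bool:
    """Ask the inventory service to reserve stock; True on success."""
    response = client.post("/reservations", json={"sku": sku, "qty": quantity})
    return response["status"] == "reserved"

def test_reserve_stock_respects_the_contract():
    client = Mock()
    client.post.return_value = {"status": "reserved"}

    assert reserve_stock(client, "SKU-42", 3) is True
    # The contract: exactly one POST, with the agreed payload shape.
    client.post.assert_called_once_with(
        "/reservations", json={"sku": "SKU-42", "qty": 3}
    )

test_reserve_stock_respects_the_contract()
```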
Once those tests pass, you may deploy the service to a staging environment which closely mirrors the production environment. Here, more things can be tested:
Test the service in interaction with other services. The goal of those tests is to determine whether the interfaces are actually compatible. It should go without saying that for those tests to have any value, the other services must match exactly the versions deployed in production. Those tests can take a few minutes.
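The compatibility part of those staging tests can be made explicit: check that the response of a dependency (deployed at its production version) still matches the contract your service relies on. In the sketch below the dependency's response is simulated so the example is self-contained; in staging it would come from a real call. Field names and types are illustrative.

```python
# Minimal contract-compatibility check run against staging: verify that
# a dependency's response contains the fields (and types) this service
# relies on. Empty result means the interfaces are compatible.

REQUIRED_FIELDS = {"order_id": str, "status": str, "total_cents": int}

def check_contract(payload: dict) -> list:
    """Return a list of contract violations (empty means compatible)."""
    problems = []
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in payload:
            problems.append(f"missing field: {field}")
        elif not isinstance(payload[field], expected_type):
            problems.append(f"wrong type for {field}")
    return problems

# Simulated response from the dependency as deployed in production:
response = {"order_id": "A-1", "status": "shipped", "total_cents": 1999}
assert check_contract(response) == []
# An incompatible response is flagged field by field:
assert check_contract({"order_id": 1}) == [
    "wrong type for order_id",
    "missing field: status",
    "missing field: total_cents",
]
```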
Test a few end-to-end scenarios to ensure the whole chain works well. Choose the scenarios carefully: since each test may take up to a minute, you don't want too many tests here. There is no need to test every possible case, since you should already have enough coverage (and confidence) from all the tests performed previously.
Now that you're confident that the service behaves well with the services deployed in production, the service can be pushed to a few production machines. From this moment:
Test how the service is actually running. Keep it running for five minutes on the selected servers, and check whether the number of errors in the logs has increased, whether other services see a spike in requests, or anything else that could indicate something is wrong. If this happens, the service should be rolled back.
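The "has the error count increased?" decision can be automated as a simple canary check: compare the error rate during the five-minute window with the pre-deployment baseline, and roll back if it rose beyond a threshold. The counters and the threshold below are illustrative assumptions, not values from the original answer.

```python
# Hypothetical canary check: after five minutes on the selected
# machines, compare the error rate with the pre-deployment baseline
# and decide whether to roll back. Threshold is illustrative.

def should_roll_back(baseline_errors: int, canary_errors: int,
                     requests: int, max_increase: float = 0.01) -> bool:
    """Roll back if the error rate rose by more than `max_increase`
    (absolute: 0.01 means one extra error per 100 requests)."""
    if requests == 0:
        return False  # nothing observed yet; keep watching
    baseline_rate = baseline_errors / requests
    canary_rate = canary_errors / requests
    return canary_rate - baseline_rate > max_increase

assert should_roll_back(5, 6, 10_000) is False   # noise-level increase
assert should_roll_back(5, 300, 10_000) is True  # clear regression
```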
Test the few most important end-to-end scenarios, to be sure that the whole chain works in production as well. For instance, for an e-commerce website, a single test which registers a user, adds a product to a cart, makes a purchase, pays, then asks for a refund is largely enough: you don't need additional tests to know whether a user can unregister or compare products—those scenarios are too minor and should already have been tested previously.
Once you're confident that the service runs well in production, it can be deployed to the remaining production machines.
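The e-commerce scenario above can be written as one linear script: every step must succeed, or the whole chain is considered broken. The `FakeApi` stand-in and the endpoint paths are illustrative assumptions; in reality the client would issue HTTP calls against production.

```python
# The register / add-to-cart / purchase / pay / refund scenario as a
# single production smoke test. `FakeApi` is a stand-in for a real
# HTTP client so the sketch is self-contained; all paths are made up.

class FakeApi:
    """Illustrative stand-in for the production HTTP client."""
    def post(self, path, **payload):
        return {"ok": True, "path": path}

def run_purchase_scenario(api) -> bool:
    steps = [
        ("/users", {"email": "smoke-test@example.com"}),
        ("/cart/items", {"sku": "SKU-42", "qty": 1}),
        ("/orders", {}),
        ("/payments", {"method": "test-card"}),
        ("/refunds", {}),
    ]
    for path, payload in steps:
        if not api.post(path, **payload)["ok"]:
            return False    # abort on the first failing step
    return True

assert run_purchase_scenario(FakeApi()) is True
```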
> It's unclear how to run test suites for changes from branches other than master.
This depends entirely on your continuous integration strategy. In many cases, you do run unit tests on branches, but branches are not continuously deployed, which means that code from branches is never pushed to the staging environment. This, in turn, encourages the team to integrate often, either by not using branches at all, or by merging them on a regular basis (that is, several times per day, or at least once per day).
Note that a specificity of a microservices ecosystem is that your product is not the whole set of services; your product is the individual service. This means that when you change its implementation, the only thing you need to care about is not changing its interface; as long as the interface stays the same, all the services which use the one you modified are expected to keep working.
In fact, you don't even have to know which services are using yours.
In the same way, when Twilio or Amazon change their services, they don't run tests to ensure that an application you wrote on top of their services still works. Similarly, you don't test your app every time Amazon redeploys S3 (nor would you even know they redeployed it).
This also means that a lot of your effort should be spent on carefully designing interfaces:
- Which won't need to constantly change.
- Which will be detailed, unambiguous and well documented.
- Which won't leak implementation details.
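The last point can be illustrated with a small sketch: keep the internal representation (tied to storage, free to change) separate from the public shape you document and promise to consumers. All names below are illustrative.

```python
# "Don't leak implementation": the internal record carries storage
# details, while the public interface exposes a stable, documented
# shape. Renaming a column or swapping the storage layer later does
# not change what consumers see.
from dataclasses import dataclass

@dataclass
class InternalOrderRow:          # tied to storage; free to change
    pk: int
    usr_fk: int
    amt_cents: int
    st: str                      # cryptic internal status code

PUBLIC_STATUSES = {"P": "pending", "S": "shipped", "R": "refunded"}

def to_public(row: InternalOrderRow) -> dict:
    """Translate the internal row into the documented interface;
    only this shape is promised to consumers."""
    return {
        "order_id": str(row.pk),
        "status": PUBLIC_STATUSES[row.st],
        "total_cents": row.amt_cents,
    }

assert to_public(InternalOrderRow(7, 3, 1999, "S")) == {
    "order_id": "7", "status": "shipped", "total_cents": 1999,
}
```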