The normal way people handle this is to devise a model for how the degradation rate varies with parameters that could reasonably be expected to influence the failure rate: temperature, input current or luminosity, for instance.
They then test this model by driving devices to failure at high temperature or 200% rated output, and checking whether the graph of failure time against aggravating factor follows the model's predictions. This is known as 'accelerated life testing'.
Then, having gained confidence in the model, they extrapolate from it to predict the failure rate under less severe conditions.
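To make the shape of that workflow concrete, here is a minimal Python sketch. It assumes an Arrhenius-style life model; the temperatures, failure times and the resulting fitted values are invented for illustration, not taken from any real test.

```python
# Sketch only: fits a hypothetical Arrhenius life model to invented
# accelerated-test data, then extrapolates to normal operating temperature.
import numpy as np

K_B = 8.617e-5  # Boltzmann constant, eV/K

# Hypothetical median times-to-failure (hours) at elevated temperatures (K)
temps_k = np.array([398.0, 423.0, 448.0])       # 125, 150, 175 degC
life_hours = np.array([9000.0, 3500.0, 1500.0])

# Arrhenius model: life = A * exp(Ea / (k*T)), so ln(life) is linear in 1/T
slope, intercept = np.polyfit(1.0 / temps_k, np.log(life_hours), 1)
ea_ev = slope * K_B  # activation energy in eV

# Extrapolate (the alarm-bell step!) down to a 55 degC operating point
t_use = 328.0
predicted = np.exp(intercept + slope / t_use)
print(f"Fitted Ea = {ea_ev:.2f} eV, predicted life at 55 degC = {predicted:.0f} h")
```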
Obviously, the validity of this approach depends on the quality of the model. The word that should ring alarm bells is 'extrapolate': is the fit still a straight line in the region where you haven't made any measurements? You can boost public confidence in your predictions by publishing the model, and in particular the activation energy you derive from any Arrhenius-type laws you manage to fit. It's only after many years have passed that you'll see whether your model was correct, if you are still running and observing the appropriate experiments, that is!
Look on Wikipedia for 'accelerated life testing' and 'Arrhenius plot' to see how this sort of approach models temperature dependence, and to see what I mean by activation energy.
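For reference, the relation those articles describe is usually written as below, where \(t_f\) is the time to failure, \(A\) an empirical constant, \(E_a\) the activation energy, \(k\) Boltzmann's constant and \(T\) the absolute temperature. An 'Arrhenius plot' is just \(\ln t_f\) against \(1/T\); the slope gives \(E_a/k\).

```latex
t_f = A \exp\!\left(\frac{E_a}{kT}\right)
\quad\Longrightarrow\quad
\ln t_f = \ln A + \frac{E_a}{k}\cdot\frac{1}{T}
```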
A difficulty in your particular case is that I would expect separate dependences on both operating temperature and luminosity, which will complicate both building and verifying the model.
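One common textbook way to handle two stresses (and I stress this is a generic form, not something your devices are guaranteed to follow) is to combine an Arrhenius term in temperature with an inverse power law in the non-thermal stress, and fit both at once. A sketch, again with invented numbers:

```python
# Sketch of a hypothetical two-stress fit: Arrhenius in temperature,
# inverse power law in drive level (current or luminosity). The test
# matrix and model choice here are illustrative assumptions only.
import numpy as np

K_B = 8.617e-5  # Boltzmann constant, eV/K

# Invented test matrix: (temperature K, drive as fraction of rating, life h)
data = np.array([
    (398.0, 1.0, 9000.0), (398.0, 2.0, 2600.0),
    (423.0, 1.0, 3500.0), (423.0, 2.0, 1000.0),
    (448.0, 1.0, 1500.0), (448.0, 2.0,  430.0),
])
T, S, life = data[:, 0], data[:, 1], data[:, 2]

# Model: life = A * exp(Ea/(k*T)) * S^(-n), so ln(life) is linear in
# both 1/T and ln(S); fit all coefficients at once by least squares.
X = np.column_stack([np.ones_like(T), 1.0 / T, np.log(S)])
coeffs, *_ = np.linalg.lstsq(X, np.log(life), rcond=None)
ln_a, ea_over_k, neg_n = coeffs
print(f"Ea = {ea_over_k * K_B:.2f} eV, stress exponent n = {-neg_n:.2f}")

# Predict life at 55 degC and 80% rated drive; both points lie outside
# the test grid, so the extrapolation caveat now applies twice over.
pred = np.exp(ln_a + ea_over_k / 328.0 + neg_n * np.log(0.8))
print(f"Predicted life: {pred:.0f} h")
```

Note the verification burden this adds: to check the model rather than just fit it, you need failures across a grid of temperature and drive combinations, not a single swept parameter.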