When you bias a transistor you set its gain, its \$V_{be}\$, and its collector current.
Beta is in some circles considered a very poor design parameter, as it can vary by as much as 400% due to slight process variations. Beta is the collector current divided by the base current, but the base current is the result of several different physical effects, each of which varies considerably. In addition, there are DC and AC betas (hFE, or h21 from the h-parameters, and hfe respectively); both vary with current, and hfe (the "AC beta") also varies with frequency. (hfe is measured with \$v_{ce} = 0\$, i.e. into an AC short circuit.)
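To make the 400% spread concrete, here is a minimal sketch (with assumed, illustrative component values) of what happens to the collector current of a fixed-base-current bias stage when beta varies over a realistic production range:

```python
# Sketch with assumed values: a base resistor RB fixes the base current,
# so the collector current IC = beta * IB scales directly with beta.
VCC = 12.0      # supply voltage (assumption)
VBE = 0.7       # nominal base-emitter drop
RB = 1.1e6      # base resistor chosen for IC ~ 1 mA at beta = 100 (assumption)

IB = (VCC - VBE) / RB   # base current, fixed by VCC and RB

for beta in (100, 200, 400):        # a plausible production spread
    IC = beta * IB
    print(f"beta = {beta:3d}  ->  IC = {IC * 1e3:.2f} mA")
```

With the base current pinned, a 4x spread in beta gives a 4x spread in collector current: the operating point is entirely at the mercy of the device lottery.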
My education in electronics included this bias against using beta in the design of BJT circuits.
(The base current is the sum of the hole diffusion current, the base recombination current, and the base-emitter depletion-layer recombination current; see the ECEE Colorado course notes.)
In the case of the common-emitter circuit:
\$V_{be}\$ varies with temperature (roughly \$-2\ \mathrm{mV}/^{\circ}\mathrm{C}\$, as seen from the Ebers-Moll model). To counter this temperature dependence, an emitter voltage is set up with an emitter resistor \$ R_E \$ (often bypassed with a large capacitor to preserve AC gain). This provides negative feedback: as the current increases, the voltage drop across \$ R_E \$ rises, which reduces \$V_{be}\$ and pulls the collector current back down, effectively stabilizing it.
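The stabilizing effect of \$R_E\$ can be shown numerically. Below is a sketch of the classic voltage-divider bias (all component values are assumptions for illustration): the divider fixes the base voltage via its Thevenin equivalent, and \$R_E\$ converts that voltage into a collector current that barely depends on beta.

```python
# Sketch with assumed values: voltage-divider bias plus emitter
# degeneration. Solving KVL around the base-emitter loop:
#   VTH = IB*RTH + VBE + (beta + 1)*IB*RE
VCC = 12.0
R1, R2 = 33e3, 10e3   # bias divider (assumption)
RE = 1.0e3            # emitter degeneration resistor (assumption)
VBE = 0.7

VTH = VCC * R2 / (R1 + R2)    # Thevenin voltage seen by the base
RTH = R1 * R2 / (R1 + R2)     # Thevenin source resistance

for beta in (100, 200, 400):
    IB = (VTH - VBE) / (RTH + (beta + 1) * RE)
    IC = beta * IB
    print(f"beta = {beta:3d}  ->  IC = {IC * 1e3:.2f} mA")
# Here IC changes by only about 6% while beta varies 4x.
```

The design rule hiding in the math: as long as \$(\beta + 1) R_E \gg R_{TH}\$, beta nearly cancels out of the expression for \$I_C\$, which is exactly the negative feedback described above.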
Stabilizing the collector current, while at the same time designing for a roughly constant base current, using this emitter degeneration resistor, could be called "stabilizing" the beta. But in my mind that misrepresents the term, as beta is only specified by the manufacturer at a particular \$V_{ce}\$ and collector current \$I_c\$.
A final note on beta: it might seem subjective to say that it should not be used as a design parameter, but several published works take the same view.
I hope this explains how the BJT is stabilized through biasing, and also why calling it "beta stabilization" is not really a good description of what biasing does.