> What determines this turn-on time?
How much charge needs to be pumped into the FET gate to turn it on, divided by how much current the driver outputs:
\$ i = dq/dt \$ is valid for anything that can hold charge, like a capacitor or a MOSFET gate. The gate is more complicated than a plain capacitor because of the Miller effect, but that doesn't change the fact that to switch the MOSFET, an adequate amount of charge must be put into the gate. With a roughly constant drive current, the switching time is simply the required gate charge divided by that current.
FET datasheets usually specify a "total gate charge" \$Q_g\$, but be aware that it depends on \$V_{GS}\$ and \$V_{DS}\$. Two MOSFETs whose \$Q_g\$ is specified at the same \$V_{DS}\$ and \$V_{GS}\$ can be compared directly (the one with the lower \$Q_g\$ will switch faster), but that is not necessarily true if they are specified at different \$V_{DS}\$ and \$V_{GS}\$. Likewise, if your application uses lower \$V_{GS}\$ and \$V_{DS}\$, \$Q_g\$ will be lower too.
Switching will also be slower if parasitic inductance becomes significant, especially in the gate drive loop.
Current delivered by the driver will usually not be constant across the output voltage range either.
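To put a number on it, here is a minimal sketch of that division, with made-up values for the gate charge and drive current (and ignoring the gate-loop inductance and the non-constant driver current mentioned above):

```python
# Rough turn-on time estimate: t = Q / I, from i = dq/dt with a roughly
# constant gate drive current. All values are hypothetical; take Qg from
# your FET's datasheet (at your Vgs/Vds) and the source/sink current from
# your driver's datasheet.

Q_g = 20e-9    # total gate charge, 20 nC (hypothetical)
I_drive = 1.0  # average gate drive current, 1 A (hypothetical)

t_switch = Q_g / I_drive
print(f"estimated switching time: {t_switch * 1e9:.1f} ns")  # -> 20.0 ns
```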
> What is this minimum on-time?
No circuit can output an infinitely short pulse, whether it's a logic gate or a MOSFET driver. They all have a speed limit, and therefore a minimum output pulse width below which the output starts heading back the other way before it has reached the flat top of the pulse, turning a nice rectangular pulse into something more like a triangle. So there is a minimum on-time imposed by the driver chip itself: it can't go faster and still output a clean pulse.

However, the MOSFET adds another constraint: if the FET turns off immediately after turning on, or worse, doesn't even have time to turn on completely before being turned off again, then it incurs switching losses while doing almost no useful work.
In a DC-DC converter, the FET is only doing useful work while it is conducting current. In fact, in an ideal converter the FET would switch instantly, so it would always be either fully on or fully off, never in between, and there would be no switching losses.
Basically, when the on-time is large relative to the switching time, it's worth paying the switching losses in exchange for passing a good amount of energy through the FET. Otherwise, it is more efficient to avoid the switching losses by not switching at all, and instead skip cycles or go to sleep, operating the DC-DC converter in discontinuous mode. This also saves the energy required to drive the gate.
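A crude back-of-the-envelope comparison, with invented numbers, shows why very short on-times aren't worth switching for (the triangular-transition loss model is a common approximation, not anything specific to a particular chip):

```python
# Compare energy lost in the two switching transitions of a cycle against
# the energy passed through the FET during the on-time. All numbers are
# invented for illustration.

V = 12.0      # voltage across the FET while it is off, V (hypothetical)
I = 2.0       # current through the FET while it is on, A (hypothetical)
t_sw = 20e-9  # duration of each switching transition, s (hypothetical)

E_sw = 2 * 0.5 * V * I * t_sw  # rough loss for one turn-on + one turn-off

for t_on in (50e-9, 200e-9, 2e-6):
    E_through = V * I * t_on   # energy moved while the FET conducts
    print(f"t_on = {t_on * 1e9:6.0f} ns: "
          f"switching loss / energy moved = {E_sw / E_through:.2f}")

# As the ratio approaches 1, switching the FET for such a short pulse wastes
# almost as much energy as it moves, and skipping the cycle becomes the
# better option.
```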
DC-DC chip designers take this into account and will usually pick a minimum on-time that is not necessarily the shortest pulse the driver could output, but rather a value chosen to keep the converter efficient.
There are other constraints too. For example, if the chip samples the current through the FET only while it is on, that measurement takes some time, so the FET has to stay on at least that long for the current sensing to work.
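Putting it all together, the effective minimum on-time of a hypothetical controller is simply the most restrictive of these limits; a sketch with placeholder values:

```python
# The effective minimum on-time is set by whichever constraint is most
# restrictive. All figures are placeholders, not from any real datasheet.

t_driver_min  = 30e-9   # shortest clean pulse the driver can produce
t_fet_switch  = 20e-9   # time for the FET to actually turn on (Qg / I_drive)
t_sense_blank = 100e-9  # time needed to sample the FET current while it is on
t_efficiency  = 200e-9  # below this, switching losses dominate (see above)

t_on_min = max(t_driver_min, t_fet_switch, t_sense_blank, t_efficiency)
print(f"effective minimum on-time: {t_on_min * 1e9:.0f} ns")  # -> 200 ns
```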