If I have a small antenna, the near field is said to extend to \$\frac\lambda{2\pi}\$:
The outer boundary of the reactive near-field region is commonly considered to be a distance of \${\textstyle {\frac {1}{2\pi }}}\$ times the wavelength (i.e., \${\textstyle {\frac {\lambda }{2\pi }}}\$ or approximately 0.159λ) from the antenna surface.
From https://en.wikipedia.org/wiki/Near_and_far_field.
I do not understand why the distance is \$\frac\lambda{2\pi}\$. I know the boundary between near field and far field is not clear cut, but I would still like to understand where this particular distance comes from. All the literature I can find glosses over it, only mentioning that the reactive field becomes destructive.
The best speculative, hand-wavy explanation I can come up with is to look at the high-impedance electric field close to an antenna modelled as two opposite point charges. In my mind, changes to the electric field propagate along the field lines at a speed of c. The electric field lines form roughly circular loops, tangent to the two charges, with circumference \$O=\pi D\$ meters. For the field to loop back and interfere destructively, it would have to travel \$O=\frac\lambda{2}\$ meters (i.e., shifted by 180 degrees). So the near field would, by this definition, extend to the diameter of this circle, given by \$\frac\lambda{2}=\pi D\$, hence \$D=\frac\lambda{2\pi}\$.
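For what it's worth, the arithmetic of this hand-wavy picture does reproduce the quoted 0.159λ. A quick numeric sanity check (variable names are mine, purely for illustration):

```python
from math import pi

wavelength = 1.0  # meters; any example value works, the result scales linearly

# Path length for a 180-degree phase shift along the loop: O = lambda / 2
loop_path = wavelength / 2

# The loop circumference is O = pi * D, so solve for the diameter D
diameter = loop_path / pi  # D = lambda / (2 * pi)

print(diameter / wavelength)  # ~0.159, matching the 0.159 * lambda figure
```

So the geometry at least gives the right number; my question is whether this is actually the physical reasoning behind the boundary.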
Is this what's going on, or am I barking up the wrong tree here?