I can't find any source online that explains this differently from the others. I'd like to think I'm not just being slow, but it seems I might be just that when it comes to the op-amp.
So the summing-point constraint claims that the differential input voltage \$V_{DIFF}\rightarrow 0\$ if we assume the op-amp to be ideal. To me, this makes no sense when you consider the characteristics of an ideal op-amp.
Suppose that, due to \$V_{in}\$, a positive voltage \$V_{DIFF}\$ appears at the inverting input. Then a large negative voltage results at the output:
\$V_{OUT}\rightarrow-\infty\$
(Here's where I don't agree with explanations anymore)
A fraction of this is fed back through \$R_F\$, meaning that \$V_{DIFF}\rightarrow 0\$ over time.
I just don't get this. To use numbers to explain my confusion: suppose that \$V_{DIFF} = 0.1\,\text{V}\$ initially, and then \$V_{OUT}=-10\,000\,\text{V}\$. If a fraction of this is fed back, say \$0.1\%\$, then \$V_{DIFF}=0.1+0.001\cdot V_{OUT}=-9.9\,\text{V}\$.
And it continues to spiral out of control. How does it really work? Why does \$V_{DIFF}\rightarrow 0\$?
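To make my reasoning concrete, here is a small sketch of the step-by-step iteration I have in mind (the open-loop gain of \$100\,000\$ and the \$0.1\%\$ feedback fraction are just the hypothetical numbers from above):

```python
A = 100_000       # assumed open-loop gain of the ideal op-amp
beta = 0.001      # assumed feedback fraction (0.1 %)
v_in = 0.1        # initial differential voltage, in volts

v_diff = v_in
for step in range(5):
    v_out = -A * v_diff           # op-amp: large inverting gain
    v_diff = v_in + beta * v_out  # feed a fraction of the output back
    print(step, v_diff)
```

Running this, \$V_{DIFF}\$ alternates in sign and grows without bound at every step, which is exactly the spiral I described, rather than converging to zero.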