When I read about photonics, I always see that they can be used for linear transformations (just matrix multiplications), and that this is a limitation that makes them unsuitable for building a complete photonic microprocessor. Why are linear transformations insufficient? What kind of computations require nonlinear transformations, and is there a subtopic in EECS that tackles the separation between applications of linear and nonlinear operations?
-
Binary addition, for starters. – hobbs Jun 19 '22 at 01:04
-
@hobbs, could you elaborate as an answer? I would follow up from the comments, but I feel that I am missing a rather simple explanation. – egemen404 Jun 19 '22 at 01:08
-
Fundamentally the reason that we use nonlinear elements is that most of the things we want to compute are ultimately nonlinear. Think about the basics of any programming language: how would you implement simple things like loops if you can't AND conditions together? – user1850479 Jun 19 '22 at 02:02
-
@user1850479 Most things in general are nonlinear. Linearity is a crutch because it is easy to comprehend. – DKNguyen Jun 19 '22 at 02:06
-
This question may be even better served at quantum computing SE: https://quantumcomputing.stackexchange.com/ where they think a lot about novel computing architectures, or even computer science SE https://cs.stackexchange.com/ – Jagerber48 Jun 19 '22 at 16:23
-
An old professor shared a bit of wisdom (it goes back to at least the '90s): non-linear physics is like non-elephant biology. (In other words, don't confuse the special case with the general case.) – MSalters Jun 20 '22 at 09:03
-
While the accepted answer is certainly a good one, @user1850479's comment is really the fundamental answer. You can't make a non-linear device out of only linear components. There must be some non-linearity in at least one component. So if you want a circuit to perform an operation that is non-linear, you *must* have nonlinear components. – Cort Ammon Jun 20 '22 at 15:25
-
As for the last question, about a subtopic in EECS: having been near the field for several decades, I'd argue that anything I have seen in EE worth being paid to do has sat right on that ugly, murky, hateful juncture between linear and non-linear, and has done so for probably a hundred years. We just get better at exploring its murkiness, so we dive deeper. – Cort Ammon Jun 20 '22 at 15:27
-
This is a follow-up question rather than an answer, is it really impossible to create non-linear photonic microelectronics? – Abhigyan Jun 20 '22 at 16:44
-
@Abhigyan Nonlinear optics is a broad topic and definitely exists, but it's hard and typically requires high powers, which is somewhat at odds with the goal of lower power consumption through photonics. – user1850479 Jun 20 '22 at 17:00
-
@user1850479 Any recommended reading? – Abhigyan Jun 21 '22 at 08:17
-
The simplest possible logic, P and Q, is non-linear. Photonics can do some interesting things, for instance 'compute' a Discrete Fourier Transform. When a general purpose digital computer does this, it will generally perform hosts of non-linear operations for the practical business of program flow control. However, the transform itself is linear. A sufficiently special purpose digital processor could compute the DFT with only linear operations, just as the photonic lens arrangement for DFT computation is very special purpose. – Neil_UK Jun 21 '22 at 08:50
4 Answers
Most computers use digital logic. Digital circuits are 'restoring' and minimize (eliminate) signal level error propagation. Analog circuits (which are used for linear transformations) generally add noise (errors) and decrease accuracy as signals propagate, making them unsuitable for complex multi-stage calculations.
Basically, if a logic gate has a signal (voltage level) that is high, or 'nearly high', its output will be even closer to 'perfect'. Similarly for low signals. This means that a logic level will propagate through the logic (with some delays), but at each step it doesn't lose quality -- in fact it improves the precision of the signal.
This comes about because digital logic gates are non-linear. For input signals close to the transition point, a gate has very high gain -- so the output signal lands further from the transition point. Since input and output signal levels span the same range, the corollary is that as the input level moves away from the transition point, the output saturates -- i.e. asymptotically approaches the ideal level, which implies the gain there is very low. High gain in one region and very low gain in another: the circuit is non-linear.
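To illustrate this restoring property, here is a toy simulation (not a circuit model; the noise amplitude and threshold are arbitrary choices for the sketch). A chain of linear stages lets noise accumulate as a random walk, while a chain of saturating stages snaps the level back to an ideal 0 or 1 at every step:

```python
import random

def linear_stage(v, noise):
    # a linear stage passes the signal through unchanged, so noise accumulates
    return v + random.uniform(-noise, noise)

def restoring_stage(v, noise):
    # a saturating (nonlinear) stage snaps the noisy signal back toward 0 or 1
    v = v + random.uniform(-noise, noise)
    return 1.0 if v > 0.5 else 0.0

random.seed(0)
v_lin = v_dig = 0.9   # a 'high' logic level
for _ in range(100):
    v_lin = linear_stage(v_lin, 0.05)
    v_dig = restoring_stage(v_dig, 0.05)

print(v_dig)  # 1.0 -- still a clean logic high after 100 stages
print(v_lin)  # has random-walked away from 0.9
```

After 100 linear stages the analog level has drifted by an unpredictable amount; the restored digital level is exactly 1.0, because each stage's output is at least 0.95 away from the low rail before the next stage's noise is added.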
This means that even to implement linear calculations (e.g. matrix multiplication) only the most basic manipulations are practical (i.e. accurate enough) to implement with analog circuits (e.g. opamp gain or integrators). When linear algorithms are implemented digitally (e.g. floating point computation) it is practical to have millions of variables and billions of calculations -- that is not feasible with an analog implementation.
There are a few cases where analog (linear) implementations can be better -- these are ultra low power circuits where a few transistors can implement a calculation that would require 1000's of digital gates; also extremely high speed circuits (multi-GHz) where digital logic isn't fast enough and some noise (inaccuracy) is tolerated. Examples include the front end of RADAR systems where the initial signal processing and filtering is implemented with analog circuits.
To clarify -- even when the computation itself is linear, it is generally most robustly implemented with digital electronics (e.g. a computer), likely using floating-point representations, where the underlying computation is carried out with non-linear functions (binary logic operations).

-
That's an interesting perspective. The brittle nature of digital signals means the frequency spectrum of the noise isn't propagated through, so although resolution is not infinite, precision is maintained. – DKNguyen Jun 19 '22 at 01:51
I think you're confusing linear mathematics with linear electronics by attempting to view it all through the lens of abstract mathematics. Or perhaps confusing the method (the logic of the math being performed) with the means (the physical representation of that logic and how the calculation is physically carried out).
Our digital computing is built on ones and zeroes because switches that are either conducting or not conducting are easy to construct in real life. That's it. That's the reason we do things the way we do. No need to think about mathematical concepts like linear transformations.
But it happens that a switch is a nonlinear device, because the frequency spectrum of the signal you get out of the switch is not limited to the frequency spectrum of the signal you use to control it. That's not to say we use switches because we need nonlinear devices in computing -- it's the reverse. We use switches for much less abstract reasons, and they just happen to be nonlinear devices.

-
I don't think he is confusing the concept of linearity. Rather, he is thinking about optical computing, where elements are typically VERY linear (photons REALLY like to obey superposition) and thus the types of computation you can do are limited by the need to map them to interactions photons (rather than electrons) can perform. A lot of optical computing thus reduces to trying to engineer practical nonlinear optical interactions. I think he is asking why you need that and can't run a typical computer program using only linear operations. – user1850479 Jun 19 '22 at 02:34
-
@user1850479 I see. That's even simpler then... you're boxed in with only linearity since it is a very small subset of everything. You can't get any frequencies out other than those you put in, so in a sense you can't make anything "new" signal-wise. If everything non-linear is the name of the game you can get close-to-linear, which is still nonlinear, but if you have only linear that's the end of the road. There are many ways to be non-linear but only one way to be linear. I guess in most cases non-linearity runs rampant, so we usually strive for linearity because it is easy to analyze. – DKNguyen Jun 19 '22 at 03:19
We ask computers to calculate arbitrary functions of their inputs. Most functions are nonlinear. The simplest example is a comparison: `if (a > b) ...`. A linear computer can't make decisions.
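One way to see why a comparison is nonlinear (a small sketch, not part of the original answer): a linear function must satisfy additivity, f(x + y) = f(x) + f(y), and the comparison visibly breaks it:

```python
def gt(a, b):
    # the decision 'a > b' as a 0/1 output
    return 1 if a > b else 0

# A linear function f must satisfy f(x + y) = f(x) + f(y).
# The comparison violates this:
print(gt(2, 1) + gt(1, 2))  # 1  (sum of the outputs for two separate inputs)
print(gt(2 + 1, 1 + 2))     # 0  (output for the summed input) -- not equal
```

Since no assignment of weights can repair this, no purely linear machine can produce the step-shaped output a decision requires.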

Consider adding two numbers in binary. They can be 1-bit numbers, to start with:
| +  | 0  | 1  |
|----|----|----|
| 0  | 00 | 01 |
| 1  | 01 | 10 |
There are actually two circuits/functions here, one for each bit of the output. The least-significant bit of the sum is represented by the function XOR:
| XOR | 0 | 1 |
|-----|---|---|
| 0   | 0 | 1 |
| 1   | 1 | 0 |
and the "carry" output is represented by the function AND:
| AND | 0 | 1 |
|-----|---|---|
| 0   | 0 | 0 |
| 1   | 0 | 1 |
Both of these are nonlinear. AND is at least monotone: you could represent it using a linear function of its two inputs and then a thresholding operation — but of course thresholding is nonlinear as detailed by jp314. XOR isn't even that nice: you can never draw a neat line between the inputs that get a 0 output and the ones that get a 1 output.
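The "linear function plus thresholding" claim for AND, and its failure for XOR, can be checked by brute force (a toy sketch added for illustration; the weight grid is an arbitrary choice):

```python
import itertools

def threshold_separable(truth):
    # brute-force search for weights w1, w2 and a threshold t such that
    # (w1*a + w2*b >= t) reproduces the entire truth table
    grid = [k / 2 for k in range(-8, 9)]   # -4.0 .. 4.0 in steps of 0.5
    return any(
        all((w1 * a + w2 * b >= t) == bool(out)
            for (a, b), out in truth.items())
        for w1, w2, t in itertools.product(grid, repeat=3)
    )

AND = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}
XOR = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

print(threshold_separable(AND))  # True  -- e.g. a + b >= 1.5 works
print(threshold_separable(XOR))  # False -- no line separates the outputs
```

The search finds a separating line for AND almost immediately, but exhausts the grid for XOR: the 1-outputs sit on opposite corners of the input square, so no single linear boundary can isolate them.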
Adding two numbers of more than one bit requires chaining together the carry-outs of one adder to the input of another adder, so the absence or presence of a one bit at a given point in the output is a very nonlinear function of every less-significant bit of both of the inputs.
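That chaining is easy to write down explicitly. Here is a minimal ripple-carry adder sketch (hypothetical helper names, not from the answer) built from nothing but the XOR, AND, and OR gates discussed above:

```python
def full_adder(a, b, cin):
    # sum bit and carry-out built only from XOR, AND and OR gates
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def ripple_add(x, y, n_bits=8):
    # chain full adders: each stage's carry-out feeds the next stage's carry-in
    carry, total = 0, 0
    for i in range(n_bits):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        total |= s << i
    return total

print(ripple_add(23, 45))  # 68
```

Every output bit depends, through the carry chain, on every less-significant input bit -- which is exactly the deeply nonlinear dependence described above.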
Now, is this a consequence of our insistence on binary representation? Sort of. What about analog computers? Well, they're called that because the voltages inside the computer are analogous to the values in the computation. An analog computer with only linear elements could only solve linear problems, and would therefore be rather unexciting.

-
I agree with your answer. Funnily enough, XOR is considered linear in the context of GF(2) arithmetic, cryptanalysis, etc. – Nayuki Jun 21 '22 at 02:36