
I am interested in robotics programming. Hardware control often involves calculating the derivative or the integral of a signal, and local state seems unavoidable for such calculations, since the past n samples must be stored.

Referential transparency seems attractive for the safe and reliable operation of robots, but it is hard to maintain in this setting.

However, suppose one keeps only the requirement that the output is determined solely by the inputs (with no side effects), and that this property holds for each part of the code; then the whole program would seem to meet the property as well. This kind of loosened referential transparency seems insufficient for substituting an expression with its value, but sufficient for substituting a signal-processing unit with the resulting signal source.

Is there any name or discussion regarding this concept?

chanwoo ahn
    I am inclined to answer "gibberish". Please rewrite this to make sense. There may be a question in here but I can't find it. What signal(s) are you on about? Introduce the general problem, make the link to robotics, explain why and how in your view referential transparency (in what code exactly?) would be advantageous in a robot control system. – Martin Maat Jan 01 '23 at 08:17

1 Answer


Referential transparency is a functional programming (FP) concept. However, most real-world FP is not “pure” FP, and still deals with state somewhere – it's just minimized, avoided, and made more explicit.

For example, a more “object-oriented” solution might design objects that keep track of some internal state. E.g. here we might have:

class Derivative {
  RingBuffer<float> state = ...;

  float next(float x) {
    state.push(x);
    ...
  }
}

Derivative d = ...;
while (var x = takeMeasurement()) {
  printDerivative(d.next(x));
}
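For concreteness, here is a minimal runnable Python sketch of the same stateful-object idea (assuming unit sample spacing and a simple backward difference over the last two samples; the class and its window size are illustrative, not a standard API):

```python
from collections import deque

class Derivative:
    """Stateful object: keeps a bounded history of samples internally."""

    def __init__(self, window=2):
        # deque with maxlen plays the role of the RingBuffer
        self._state = deque(maxlen=window)

    def next(self, x):
        self._state.append(x)
        if len(self._state) < 2:
            return 0.0  # not enough history yet
        return self._state[-1] - self._state[-2]

# usage
d = Derivative()
for x in [0.0, 1.0, 3.0]:
    print(d.next(x))  # 0.0, then 1.0, then 2.0
```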

A more FP approach would push the state outwards. The actual derivative calculation might be a pure function that just takes a list of values:

float derivative(Iterable<float> measurements) { ... }

However, the caller might still manage local mutable state:

RingBuffer<float> measurements = ...;
while (var x = takeMeasurement()) {
  measurements.push(x);
  printDerivative(derivative(measurements));
}
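A minimal runnable sketch of this split in Python, assuming unit sample spacing and a backward difference over the last two samples: `derivative` is a pure function of the measurement history, while the caller owns the only mutable buffer.

```python
from collections import deque

def derivative(measurements, dt=1.0):
    """Pure: backward difference of the last two samples, 0.0 if too few."""
    xs = list(measurements)
    if len(xs) < 2:
        return 0.0
    return (xs[-1] - xs[-2]) / dt

# caller manages the local mutable state
buffer = deque(maxlen=8)  # bounded history, plays the RingBuffer role
for x in [0.0, 1.0, 3.0, 6.0]:
    buffer.append(x)
    print(derivative(buffer))  # 0.0, 1.0, 2.0, 3.0
```

Because `derivative` never mutates its argument, it can be tested in isolation by passing plain lists.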

We could also create a FP solution that abstracts over the state management, by taking an old state and returning a new state. How to do this depends a lot on the programming language. For example, in Rust I'd define opaque types for my state and return tuples from my function:

#[derive(Default)]
pub struct State {
  buffer: RingBuffer<f32>
}

// takes old "State" by value
pub fn derivative(mut state: State, x: f32) -> (State, f32) {
  // local mutation, does not affect caller
  state.buffer.push(x);
  let d = ...;
  // returns new state
  (state, d)
}

// usage
let mut state = State::default();
while let Some(x) = take_measurement() {
  let (new_state, d) = derivative(state, x);
  state = new_state; // local mutation, this is fine
  print_derivative(d);
}

If you wanted to avoid the local mutation in the caller, this could be dressed up with fancy iterators avoiding the while-loop, but it's semantically equivalent.
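Sketched in Python for brevity (in Rust this would be `Iterator::scan`), and assuming the `(state, x) -> (new state, output)` shape from above; `scan` and `derivative_step` are illustrative names, not a standard API:

```python
def scan(step, state, xs):
    """Generator that threads state through `step`, yielding each output."""
    for x in xs:
        state, out = step(state, x)
        yield out

def derivative_step(prev, x):
    """State is just the previous sample (or None before the first one)."""
    d = 0.0 if prev is None else x - prev
    return x, d  # new state is the current sample

print(list(scan(derivative_step, None, [0.0, 1.0, 3.0, 6.0])))
# [0.0, 1.0, 2.0, 3.0]
```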

In languages like Java or Python this would be more difficult to do cleanly, as they use reference semantics instead of value semantics. It is not possible to be the exclusive owner of an object, making in-place mutation potentially unsafe. An FP solution would likely have to avoid local mutation and instead make copies. For example, in Python:

from typing import Optional, TypeAlias

State: TypeAlias = list[float]

def derivative(state: Optional[State], x: float) -> tuple[State, float]:
  # create new state
  if not state:
    new_state = [x]
  else:
    new_state = [*state[1:], x]  # copy part of old state

  # calculate derivative
  d = ...

  return new_state, d

# usage, with local mutation of the "state" variable
state = None
while (x := take_measurement()):
  state, d = derivative(state, x)
  print_derivative(d)

All in all, I'd recommend not thinking too much in terms of referential transparency. It is quite useful to write mostly-pure functions, since they are easier to reason about and to test. But a lot of actual code concerns stateful interaction and has to deal with the outside world. It is not practical to have a completely pure program; you can only deal with mutable state in different ways – encapsulating the state in objects, or making the state change explicit via old state → new state style functions. Local mutable state is also far less problematic than shared mutable object graphs. You can mix and match techniques as you see fit to achieve a useful design.
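To illustrate composing such old state → new state blocks, here is a hedged sketch chaining two of them (a backward difference followed by a running sum); the combined state is just a tuple of the parts, and all names are illustrative:

```python
def diff_step(prev, x):
    """Block 1: backward difference; state is the previous sample or None."""
    d = 0.0 if prev is None else x - prev
    return x, d

def sum_step(acc, x):
    """Block 2: running sum; state is the accumulator."""
    acc = acc + x
    return acc, acc

def pipeline_step(state, x):
    """Two blocks chained into one pure (state, x) -> (state, out) function."""
    s1, s2 = state
    s1, d = diff_step(s1, x)
    s2, out = sum_step(s2, d)
    return (s1, s2), out

# usage: the caller threads one combined state through the pipeline
state = (None, 0.0)
outs = []
for x in [0.0, 1.0, 3.0]:
    state, y = pipeline_step(state, x)
    outs.append(y)
print(outs)  # [0.0, 1.0, 3.0]
```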

amon
  • I appreciate your detailed explanation with kind examples, I really do. The last concept is indeed one I have also figured out. However, isn't such a "pulling the state out" approach hard to reuse? For example, if I have N such blocks concatenated, is there any way I can see them, as a whole, as a pure function with an external state? It seems this is the core question of whether one can build complex programs or not. – chanwoo ahn Jan 01 '23 at 14:29
  • @chanwooahn If you want pure functions, that means the state is ultimately updated somewhere externally. You can stick the state + function together in an immutable object if that helps, but that won't substantially change how the code works (aside from allowing you to encapsulate contents of the state with `private` visibility, if desired). I don't think there are downsides to reuse – if anything, pure functions are safer to reuse since they don't have unexpected effects. But it's more difficult to evolve the externalized state, since it's managed across different places. – amon Jan 01 '23 at 16:11