Even for your 1-node scenario, if you want the above to work correctly in the face of machine/network failure, then what you are asking for is to solve consensus. This is impossible to achieve together with "availability" in its technical sense; that is the CAP theorem. (Beware the misleading wording describing "availability" on that page. A better wording is: "any valid request will receive a successful response no matter what". In particular, your system is not available if it returns an error message for every request, or in fact for any valid request. An invalid request, like reading a key that doesn't exist, can obviously get an error response.) By "high availability", though, you probably mean something like "will keep working even if N servers fail". This is achievable and is what algorithms like Paxos, Viewstamped Replication, and Raft do.
That said, you really don't want to implement these algorithms yourself for production use. It's very easy to make very subtle errors when implementing them. Luckily, others have already built production-ready implementations. In particular, I recommend using Apache ZooKeeper, or possibly Apache Kafka (which was built on top of ZooKeeper). If you decide to use ZooKeeper "directly", then you will probably want to look at Apache Curator.
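For reference, here's a minimal sketch of what bootstrapping a Curator client against a ZooKeeper ensemble looks like; the connection string and retry settings are placeholders you'd tune for your deployment:

```java
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class CuratorBootstrap {
    public static void main(String[] args) throws Exception {
        CuratorFramework client = CuratorFrameworkFactory.newClient(
                "zk1:2181,zk2:2181,zk3:2181",           // assumed ZooKeeper ensemble
                new ExponentialBackoffRetry(1000, 3));   // base sleep 1s, max 3 retries
        client.start();
        client.blockUntilConnected();
        // Curator recipes (leader election, locks, etc.) are built on top of this client.
        client.close();
    }
}
```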
If your operations are idempotent (and it sounds like they may well be) and the order in which jobs are executed on different nodes doesn't matter, then having the master be a Kafka cluster and having the nodes subscribe to independent topics handles this nicely and with pretty good efficiency: simply commit the offset after handling each request. If order does matter between nodes, use a single request topic that all nodes subscribe to. All the nodes then send their responses to a separate Kafka topic, and every node consumes that response topic as well, only processing the next request once a response to the previous request has been consumed. Duplicate responses can easily be ignored and compacted away.
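Here's a rough sketch of the simpler per-node-topic case using the Java client; the topic and group names and the String payloads are made up for illustration. Committing after each poll batch gives at-least-once delivery, which is fine precisely because the handlers are idempotent:

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class RequestWorker {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka1:9092");
        props.put("group.id", "node-42");                       // assumed per-node group
        props.put("enable.auto.commit", "false");               // commit manually, after handling
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("requests-node-42"));    // assumed per-node topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    handleRequest(record.value());              // must be idempotent
                }
                consumer.commitSync();                          // offsets advance only after handling
            }
        }
    }

    static void handleRequest(String request) { /* application-specific work */ }
}
```

If you want to commit after literally every record rather than every batch, the overload `commitSync(Map<TopicPartition, OffsetAndMetadata>)` lets you commit a specific offset, at the cost of more round trips.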
To get exactly-once semantics, you need to be able to atomically commit the log offset together with any persistent state of the node. If your node's state is a deterministic function of the history of requests it has received, then exactly-once semantics is trivial: all the necessary information is in the Kafka log, and you just replay it. This scenario is extremely desirable, fits very naturally with Kafka, and simplifies things dramatically. It also makes replicating the node trivial: just run multiple copies; you can spin them up whenever you like, they'll recreate their state, and no coordination between them is needed.
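A sketch of what "just replay it" might look like with the Java consumer; the topic name, the single partition, and the toy numeric state are all assumptions:

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class StateReplayer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka1:9092");
        props.put("enable.auto.commit", "false");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        TopicPartition tp = new TopicPartition("requests", 0);   // assumed single-partition topic
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.assign(List.of(tp));
            consumer.seekToBeginning(List.of(tp));

            long state = 0;                                      // stand-in for the real node state
            long end = consumer.endOffsets(List.of(tp)).get(tp); // replay up to "now"
            while (consumer.position(tp) < end) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    state = apply(state, record.value());        // deterministic transition
                }
            }
            // state is now reconstructed; switch over to normal consumption from `end`.
        }
    }

    static long apply(long state, String request) { return state + request.length(); } // placeholder
}
```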
If your nodes' state is not a deterministic function of the history of requests they receive, then you are in the potentially hairy position of essentially implementing a replication log. The good news is that ZooKeeper and Kafka can handle a lot of the trickier aspects. You can use ZooKeeper to elect a leader and Kafka to store and distribute the replication log. The leader consumes the request log, performs any non-deterministic actions (such as reading the local clock or making web requests to other servers), calculates a delta between the current state and the tentative new state, and then publishes that delta, which can be applied deterministically, to a Kafka topic representing the replication log. The nodes (both the followers and the leader) then consume the replication log, applying the deltas to their local state. The leader publishes its response to the appropriate topic when it applies the delta it read from the replication log. It's possible that multiple deltas get published for the same request if there is a leadership change during processing; that's not a problem, since the replication log is strictly ordered and any deltas after the first can be ignored. (In the scenario of the previous paragraph, the request log essentially was the replication log, which is why so much was simplified.)
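Here's a very condensed sketch of that pattern, using Curator's LeaderLatch recipe for the election and two made-up Kafka topics, "requests" and "deltas". It glosses over the subtle parts (clean leadership handoff, tying request offsets to the replication log, deduplicating repeated deltas), so treat it as the shape of the design rather than an implementation:

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.leader.LeaderLatch;
import org.apache.curator.retry.ExponentialBackoffRetry;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ReplicatedNode {
    public static void main(String[] args) throws Exception {
        String nodeId = args[0];

        // Leader election via ZooKeeper (Curator's LeaderLatch recipe).
        CuratorFramework zk = CuratorFrameworkFactory.newClient(
                "zk1:2181", new ExponentialBackoffRetry(1000, 3));
        zk.start();
        LeaderLatch latch = new LeaderLatch(zk, "/myapp/leader");  // assumed znode path
        latch.start();

        try (KafkaConsumer<String, String> requests =
                     new KafkaConsumer<>(consumerProps("request-log-readers")); // shared group: offsets survive leader changes
             KafkaConsumer<String, String> replicationLog =
                     new KafkaConsumer<>(consumerProps("replication-" + nodeId)); // per-node group: every node reads every delta
             KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps())) {

            requests.subscribe(List.of("requests"));
            replicationLog.subscribe(List.of("deltas"));    // assumed single partition, so the log is strictly ordered
            long state = 0;                                 // stand-in for the real node state

            while (true) {
                // Only the current leader turns requests into deltas. Non-deterministic work
                // (clocks, calls to other services) happens here; the published delta itself
                // must be deterministically applicable by every node.
                if (latch.hasLeadership()) {
                    for (ConsumerRecord<String, String> req : requests.poll(Duration.ofMillis(200))) {
                        producer.send(new ProducerRecord<>("deltas", req.key(), computeDelta(state, req.value())));
                    }
                    producer.flush();
                    requests.commitSync();  // a leadership change here can duplicate deltas; duplicates are ignored below
                }
                // Every node, leader included, applies the strictly ordered replication log.
                for (ConsumerRecord<String, String> delta : replicationLog.poll(Duration.ofMillis(200))) {
                    state = applyDelta(state, delta.value()); // drop deltas already seen; the leader also responds here
                }
                replicationLog.commitSync();
            }
        }
    }

    static Properties consumerProps(String groupId) {
        Properties p = new Properties();
        p.put("bootstrap.servers", "kafka1:9092");
        p.put("group.id", groupId);
        p.put("enable.auto.commit", "false");
        p.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        p.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        return p;
    }

    static Properties producerProps() {
        Properties p = new Properties();
        p.put("bootstrap.servers", "kafka1:9092");
        p.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        p.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        return p;
    }

    static String computeDelta(long state, String request) { return request; }          // placeholder
    static long applyDelta(long state, String delta) { return state + delta.length(); } // placeholder
}
```

Note the single-partition assumption on the "deltas" topic: Kafka only guarantees ordering within a partition, and the strict ordering of the replication log is exactly what makes ignoring duplicate deltas safe.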
The final concern is that you'll probably want to do some form of checkpointing so you don't need to keep the full log around and replay it from the beginning. Another nice thing about this log-centered design is that checkpointing can be done as a background process if desired.
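One way to sketch that, assuming the replication log lives in a single-partition "deltas" topic and the checkpoint is just a local file pairing the state with the next offset to read:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class Checkpointer {
    static final Path CHECKPOINT = Path.of("checkpoint.txt");   // assumed location

    // A real system would write to a temp file, fsync, and rename for atomicity.
    static void writeCheckpoint(long nextOffset, long state) throws Exception {
        Files.writeString(CHECKPOINT, nextOffset + " " + state);
    }

    static long[] readCheckpoint() throws Exception {
        if (!Files.exists(CHECKPOINT)) return new long[] {0L, 0L};
        String[] parts = Files.readString(CHECKPOINT).trim().split(" ");
        return new long[] {Long.parseLong(parts[0]), Long.parseLong(parts[1])};
    }

    public static void main(String[] args) throws Exception {
        long[] cp = readCheckpoint();
        long nextOffset = cp[0];
        long state = cp[1];                                     // stand-in for the real node state

        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka1:9092");
        props.put("enable.auto.commit", "false");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        TopicPartition tp = new TopicPartition("deltas", 0);    // assumed replication-log topic
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.assign(List.of(tp));
            consumer.seek(tp, nextOffset);                      // resume from the checkpoint, not offset 0

            long sinceCheckpoint = 0;
            while (true) {
                for (ConsumerRecord<String, String> delta : consumer.poll(Duration.ofSeconds(1))) {
                    state += delta.value().length();            // placeholder for applying the delta
                    nextOffset = delta.offset() + 1;
                    sinceCheckpoint++;
                }
                if (sinceCheckpoint >= 1000) {                  // arbitrary checkpoint interval
                    writeCheckpoint(nextOffset, state);
                    sinceCheckpoint = 0;
                }
            }
        }
    }
}
```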