I have provided a reference (a) below to a method which describes how to combine single-output maps to produce solutions with multiple outputs. Examples with 2 outputs and 3 outputs are given, and the method extends to any number of outputs.
Here is a larger and possibly more complete treatment of the same chapter, with more examples, perhaps more text, and a sample exam question.
All this MAY be from this book:

Introduction to Logic Design
Alan Marcovitz
ISBN13: 9780073191645
ISBN10: 0073191647
Division: Higher Education
Pub Date: MAR-09
Publish Status: In Print
Pages: 656
Edition: 3
Price: A$172.95 / NZ$192 (Incl. GST)

Also - Introduction to Logic Design with CD
Alan Marcovitz
ISBN13: 9780071123990
ISBN10: 0071123997
Division: Higher Education
Pub Date: AUG-01
Publish Status: Out of Print
Edition: 1
Price: A$149.95 / NZ$175 (Incl. GST)
Also referenced, as (b), is a method that claims to allow a wholly procedure-based derivation of solutions with 2 (or more?) outputs.
It looks like (a) will do what you want without making your head spin too much, and that (b) may be superb if you can spend the effort wading through it. In addition, (b) provides terms which are liable to produce useful search-engine leads, e.g. PPIs, Quine-McCluskey, Petrick, MOPIs.
I suspect that you may be able to go a long way towards a 2-output solution by, for each output, using the other output as an input in a single-output Karnaugh map. I.e., if you have inputs A, B, C and outputs y, z, you may get useful results by considering the maps A B C y -> z and A B C z -> y. But, maybe not. I'll have to have a play with this. Failing that, the following methods appear to meet your need.
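To make that "feed one output back in as an input" idea concrete, here is a rough sketch. The functions y = A·B and z = B + C are made-up examples of my own, not from either reference. The interesting point it shows: for each A, B, C combination only one value of the borrowed variable is actually reachable, so half the cells of the 4-variable map come out as don't-cares, which is exactly the extra freedom you'd hope to exploit.

```python
from itertools import product

# Made-up example functions (assumptions, not from the references):
# y = A AND B, z = B OR C, over inputs A, B, C.
def f_y(a, b, c):
    return a & b

def f_z(a, b, c):
    return b | c

# Build the augmented map A B C y -> z.  For each input combination the
# value of y is fixed by f_y, so only half of the 16 four-variable cells
# are reachable; the unreachable half become don't-cares (None).
def augmented_map(f_other, f_target):
    cells = {}
    for a, b, c in product((0, 1), repeat=3):
        reachable = f_other(a, b, c)
        cells[(a, b, c, reachable)] = f_target(a, b, c)
        cells[(a, b, c, 1 - reachable)] = None  # don't-care
    return cells

zmap = augmented_map(f_y, f_z)
specified = sum(1 for v in zmap.values() if v is not None)  # 8 of 16 cells
```

Whether the extra don't-cares actually buy you a smaller combined circuit will depend on the functions; this only shows how the augmented map would be set up.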
(a) Rule-based method for obtaining multiple-output solutions from multiple single-output Karnaugh maps.
This is a free-access chapter from a McGraw Hill book with an excellent treatment of Karnaugh maps, including dealing with systems with multiple outputs. Unfortunately they make it difficult to determine the book's identity. Examples of 2-output and 3-output solutions are given.
(b) A purely map procedure for two-level multiple-output logic minimization
Here is a complete paper describing "A purely map procedure for two-level multiple-output logic minimization"
An (attempted :-) ) de-densification of their main points yields the following claims:
- a class-tested pedagogical treatment of the problem of collective two-level multiple-output logic minimization through a brief exposition of a novel efficient procedure for tackling this problem.
- This procedure is a purely map heuristic which generalizes the Karnaugh map (K-map) procedure used in single-output minimization.
- In fact, the K-map procedure can be viewed as a short-cut technique for obtaining the minimal sum of a switching function without deriving the set of all its prime implicants or its complete sum.
- The procedure is a short-cut technique for obtaining a minimal collective cover without the need to determine the set of all MOPIs or even its subset of PPIs.
- The procedure retains a pure map nature as it avoids any resort to algebraic or tabular techniques in the form of a presence function or cover matrix, respectively.
- It is of good pedagogical value because of the pictorial insight it provides, and because it leads to a natural understanding of pertinent terminology such as MOPIs and PPIs.
- All it requires of students is some repeated application of the familiar map heuristic and the implementation of a simple algorithm which handles visual interactions between various maps.
- Maps are grouped at distinct levels of a Hasse diagram, which is conveniently drawn in a K-map layout so that any parent map is easily visualized as adjacent to all it ...
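For what it's worth, in the classical framework that the paper generalizes, the MOPIs of a multiple-output system are the prime implicants of the AND-products of the non-empty subsets of the outputs, and the levels of the Hasse diagram correspond to subset size. A rough sketch of how those product functions would be enumerated for a 2-output case (the example functions y = A·B and z = B + C are my own assumption, not taken from the paper):

```python
from itertools import combinations, product

# Assumed toy outputs over inputs A, B, C (not from the paper):
outputs = {
    "y": lambda a, b, c: a & b,
    "z": lambda a, b, c: b | c,
}

# One truth table (as a set of true minterms) per non-empty subset of
# outputs - the levels of the Hasse diagram.  Level 1 holds the single
# outputs y and z; level 2 holds the product y.z; and so on.
def subset_products(outputs):
    names = sorted(outputs)
    tables = {}
    for r in range(1, len(names) + 1):
        for subset in combinations(names, r):
            minterms = set()
            for a, b, c in product((0, 1), repeat=3):
                if all(outputs[n](a, b, c) for n in subset):
                    minterms.add((a, b, c))
            tables[subset] = minterms
    return tables

tables = subset_products(outputs)
```

The prime implicants of each of these product maps, taken together, would be the MOPI candidates; the paper's contribution, as I read it, is avoiding having to enumerate them all.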
In addition, if you can wade through the language, it's useful to note what they claim to be providing an alternative to, as this describes other ways of solving the problem and provides search terms for further investigation. Viz. -
- In many texts (e.g. [15]) students are not burdened with any particular details of collective
minimization. Instead, they are instructed to apply separate or individual minimization and
then try to identify obvious common product terms so as to share these between the pertinent
output functions. Other texts [7, 10] introduce students to the classical method of collective
minimization through a map derivation of all paramount prime implicants (PPIs), followed by
the construction of a Quine–McCluskey cover matrix and/or a presence or Petrick function
for selecting a minimal subset of the set of all PPIs. This method requires further checking
for irredundant connections if exact minimization is to be achieved. Some texts present a
simplified version of the classical method in which the final solution is generated directly
from the set of multiple-output prime implicants (MOPIs) without reducing the MOPI set to
its subset of PPIs [8, 12, 13].
In this paper ...
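The first approach the quoted passage mentions ("apply separate or individual minimization and then try to identify obvious common product terms") can be sketched with a toy Quine-McCluskey prime-implicant generator. The example functions y = A·B + A·C' and z = A·B + A'·C are my own, chosen so that a shareable term actually exists:

```python
# Tiny Quine-McCluskey pass (a sketch, not a production minimizer).
# Cubes are strings over {'0', '1', '-'}; repeatedly merge cubes that
# differ in exactly one position; cubes that never merge are prime.
def primes(minterms, nvars):
    cubes = {format(m, f"0{nvars}b") for m in minterms}
    prime = set()
    while cubes:
        merged, used = set(), set()
        for a in cubes:
            for b in cubes:
                diff = [i for i in range(nvars) if a[i] != b[i]]
                if len(diff) == 1:
                    i = diff[0]
                    merged.add(a[:i] + "-" + a[i + 1:])
                    used.update((a, b))
        prime |= cubes - used
        cubes = merged
    return prime

# Individual minimization of y = A.B + A.C' (minterms 4, 6, 7 of ABC)
# and z = A.B + A'.C (minterms 1, 3, 6, 7), then look for product
# terms prime for both outputs, which can be shared between them.
py = primes({4, 6, 7}, 3)
pz = primes({1, 3, 6, 7}, 3)
shared = py & pz  # the cube "11-", i.e. the product term A.B
```

Here the term A·B (cube "11-") is prime for both outputs, so one AND gate can feed both output OR gates. As the passage notes, this ad-hoc sharing is not guaranteed to find the minimal collective cover; that is what the PPI/MOPI machinery is for.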
It sounds like, as well as investigating the method described here, PPIs, Quine-McCluskey, Petrick, MOPIs and Google may be your friends :-).