
Method to minimize a Boolean function in SOP form

I'm dealing with Boolean functions of which I can only (but safely) assume that they come in as an SOP and contain no negations (e.g. (A && B && C) || (A && B && D) ). The number of disjuncts is usually > 5, the number of conjuncts within each usually > 10.

Because in my case computing the value of each variable is expensive and the result is to be considered ephemeral, I need to be able to minimize said functions with respect to variable occurrence. The result of this minimization does not need to be in any normal form and may be nested arbitrarily deep.

Having asked a similar question before, SO points to general solutions using fanout minimization, Karnaugh maps, Quine–McCluskey (QM), or BDDs. Before dealing with these approaches, which would blow up my code considerably, I'd like to double-check whether the a priori known facts about the input function allow a smaller, less general minimization approach.

AFAICS, applying the absorption and distributivity laws will always provide the minimal form. Is there a way to exploit the fact that the functions come in as SOPs and have no negations? It appears to me that there should be a recursive algorithm of simple intersection and union operations on the variables that yields the desired result.

Can one describe that algorithm?
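For instance, the absorption law alone (X + XY = X) reduces to a subset test once each product term is represented as a set of variable names. A minimal Python sketch (my own illustration, not an established routine):

```python
# Sketch: absorption on a negation-free SOP. Each product term is a
# frozenset of variable names; a term is absorbed by any strict subset of it.
def absorb(terms):
    """Drop every term that is a strict superset of another term."""
    terms = [frozenset(t) for t in terms]
    kept = [t for t in terms if not any(other < t for other in terms)]
    return list(dict.fromkeys(kept))  # drop duplicate terms, keep order

# (A && B && C) || (A && B && C && D)  ->  (A && B && C)
print([sorted(t) for t in absorb([{"A", "B", "C"}, {"A", "B", "C", "D"}])])
# [['A', 'B', 'C']]
```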

Edit: Request for comments: Having done some research on the topic, it appears to me that the question asked here is equivalent to finding the optimal variable ordering of the reduced BDD of the given functions.


Background: The minimized function is passed on to a job queue to figure out the value of all required variables. The function is evaluated afterwards. Consider these application examples:

  • The input function (A && B && C) || (A && B && D) can be written as A && B && (C || D) , which eliminates having to evaluate A and B twice. Evaluation of C and D is serialized in the job queue because only one of them needs to be proven true.
  • (A && B && C) || (A && B && C && D) || (A && B && X && E) is reduced to A && B && (C || (X && E)) . The evaluation of X && E is considered harder and is therefore placed behind the evaluation of C in the queue; the evaluation of D is dropped entirely.
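The factoring step in both examples is just the intersection of the terms' variable sets; a small Python sketch (function name is mine, not from the question):

```python
# Sketch: pull the variables shared by every term out in front.
def factor_common(terms):
    """Split an SOP into (common_vars, remaining_terms), e.g.
    (A && B && C) || (A && B && D)  ->  ({A, B}, [{C}, {D}])."""
    terms = [set(t) for t in terms]
    common = set.intersection(*terms)
    return common, [t - common for t in terms]

common, rest = factor_common([{"A", "B", "C"}, {"A", "B", "D"}])
print(sorted(common), [sorted(t) for t in rest])  # ['A', 'B'] [['C'], ['D']]
```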

Here is a simple algorithm:

Let's consider an example: ABC+ABD

There are

  • 2 terms: T1 = ABC and T2 = ABD
  • 4 vars: A, B, C and D

First convert your expression to a 2D table (it's not a K-map):

    T1  T2
A   1   1 
B   1   1   
C   1   0
D   0   1 
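One way (names are mine) to build this variables-by-terms table: rows are variables, columns are terms, and a cell is 1 iff the variable occurs in that term.

```python
# Build the table for ABC+ABD: a dict mapping each variable to its row.
terms = [{"A", "B", "C"}, {"A", "B", "D"}]       # T1 = ABC, T2 = ABD
variables = sorted(set().union(*terms))          # ['A', 'B', 'C', 'D']
table = {v: [1 if v in t else 0 for t in terms] for v in variables}
for v in variables:
    print(v, table[v])
# A [1, 1]
# B [1, 1]
# C [1, 0]
# D [0, 1]
```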

**begin**
**while** the table is not empty **do**:
    **if** a row or a column has only zeros **then**
        remove it from the table and continue
    **end if**
    **if** there are one or more rows with only ones **then**
        factor out the vars corresponding to those rows
        and remove the rows from the table
    **else**
        take the rows having the max number of ones
        and compute their scalar product;
        from the scalar product obtained,
        take the columns (terms) corresponding to the zeros,
        put aside the one having the min number of ones,
        and remove its column from the table
    **end if**
**end while**

close brackets
**end**

Application to the table above :

    T1  T2
A   1   1 
B   1   1   
C   1   0
D   0   1 

iteration 1: there are two rows having only ones, A and B; factor them out and remove them from the table:

the expression will begin with AB(... and the table is now:

    T1  T2  
C   1   0
D   0   1 

iteration 2: no rows have only ones. Two rows have a max number of ones equal to 1; their scalar product is 0 0. Two columns have a zero, T1 and T2, and both have the same number of ones, so there is no min; take one of them aside, say T1, and remove it from the table. The expression will begin with AB(T1+ and T1 is 1*C + 0*D = C, so the expression will begin with AB(C+... The table is now:

    T2  
C   0
D   1 

iteration 3: the row C has only zeros, so we remove it; the row D has only ones, so we factor it out and remove it from the table

the expression is now : AB(C+D(...

the table is now : empty

iteration 4: the table is empty -> end of while

close brackets :

the expression is AB(C+D)

It's not an optimal algorithm, but it is less general than K-maps because it takes into consideration the fact that the expression is an SOP without negations.
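The same idea can also be written as a recursion on sets of variable names: factor out the intersection of all terms, and when nothing is common, split on the most frequent variable. A Python sketch of that idea (my own compact variant, not a line-by-line transcription of the table algorithm; it assumes single-letter variable names for printing):

```python
def minimize(terms):
    """Recursively factor a negation-free SOP into a nested expression.

    `terms` is a list of sets of single-letter variable names, e.g.
    [{"A","B","C"}, {"A","B","D"}] for ABC+ABD. Variables within a
    product are emitted in sorted order.
    """
    terms = [frozenset(t) for t in terms]
    # absorption: ABC + ABCD = ABC (drop strict supersets of another term)
    terms = [t for t in terms if not any(o < t for o in terms)]
    terms = list(dict.fromkeys(terms))               # dedupe, keep order
    if len(terms) == 1:
        return "".join(sorted(terms[0]))
    common = frozenset.intersection(*terms)
    if common:                                       # factor: AB(...)
        rest = [t - common for t in terms]
        return "".join(sorted(common)) + "(" + minimize(rest) + ")"
    # no common factor: split on the variable occurring in the most terms
    counts = {}
    for t in terms:
        for v in t:
            counts[v] = counts.get(v, 0) + 1
    pivot = max(sorted(counts), key=lambda v: counts[v])
    with_pivot = [t for t in terms if pivot in t]
    without = [t for t in terms if pivot not in t]
    if not without:
        return minimize(with_pivot)
    return minimize(with_pivot) + "+" + minimize(without)

print(minimize([{"A", "B", "C"}, {"A", "B", "D"}]))  # AB(C+D)
```

On the second background example it reproduces the expected shape as well: ABC+ABCD+ABXE becomes AB(C+EX).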

According to your assumptions, you'll need a function to evaluate your signature before executing the required function.

There's no ready-made algorithm that will do this for you, at least not in Java, so you'll need to codify it and keep iterating until you find the most general abstraction.

Boolean algebra

There you have all the properties applied in logic; the first three are the most useful for you, since you don't want to use the NOT operation. I hope this helps.

I would do it with a "common sense" algorithm; I am not sure it is optimal, but "optimality" is difficult to define in this case. I assume that you don't have any preference for the order in which the clauses are evaluated, but this could be included in the procedure without difficulty.

Let x_1 ... x_n be your decision variables and y_1 ... y_m be the conjunctive clauses, of the form y_j = prod_{i in I_j} x_i for each j: the expression you wish to minimize is then the sum of the y_j for j = 1 to m.

The decision variables can be "partitioned" first:

  • if they appear in all the I_j , they need to be evaluated anyway; do this first (and remove them from the sets I_j afterward)
  • if they appear in none of the sets I_j , they do not need to be evaluated (remove them from the sets I_j too).

If one of the x_i that appeared in all the clauses is false, then the expression is false; END.

Otherwise, the objective is to find one of the sets I_j such that all of its x_i are true (or to prove that none exists).

Order the I_j by increasing cardinality, to minimize the number of evaluations. Keep an array (say z_i ) such that z_i = 1 if x_i has already been evaluated to true, and 0 otherwise. For each of the sets I_j in that ordered list:

For each i in I_j :

  • evaluate x_i (if z_i is false);

    • if x_i is false, remove I_j and all the sets that contain i.
    • if it is true, store 1 in z_i and continue
  • if this loop ends (all the x_i were true), the expression is true. END.

  • if the list of the sets I_j is empty, the expression is false. END.
  • otherwise, go to the next I_j .

It has the advantage of being really simple to implement, and I believe it should be quite efficient.
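A Python sketch of the procedure (identifiers like `evaluate_sop` are mine; it omits the initial partitioning step for brevity, and `evaluate` stands in for the expensive per-variable computation):

```python
def evaluate_sop(clauses, evaluate):
    """clauses: list of sets of variable names (the I_j).
    Returns True iff some clause has all of its variables true.
    Each variable is evaluated at most once (memoized in `known`)."""
    clauses = sorted((set(c) for c in clauses), key=len)  # cheapest first
    known = {}                       # memoized variable values (the z_i)
    while clauses:
        clause = clauses[0]
        failed = None
        for v in sorted(clause):
            if v not in known:
                known[v] = evaluate(v)
            if not known[v]:
                failed = v
                break
        if failed is None:
            return True              # every variable in the clause is true
        # drop every clause containing the falsified variable
        clauses = [c for c in clauses if failed not in c]
    return False

calls = []
def ev(v):
    calls.append(v)
    return v != "C"                  # pretend only C evaluates to false

print(evaluate_sop([{"A", "B", "C"}, {"A", "B", "D"}], ev))  # True
print(calls)  # ['A', 'B', 'C', 'D'] — each variable evaluated once
```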

From a complexity standpoint, I think there are some partly related results that would seem to suggest that this problem is hard.

According to "On the Readability of Monotone Boolean Formulae" by Elbassioni, Makino, & Rauf (pdf link), it is NP-hard to determine whether a Boolean formula in CNF or DNF can be rewritten as a formula where each variable appears at most k times (for k >= 2). Note that this result does not match the problem statement exactly, because the original formula there is not monotone (i.e., it may contain negations).

According to "Complexity of DNF and Isomorphism of Monotone Formulas" by Goldsmith, Hagen, & Mundhenk (pdf link), it is NP-hard to compute the minimal DNF for an arbitrary monotone Boolean function. This result doesn't match exactly either, because here the original formula is not given in DNF and the output formula is restricted to DNF.
