
Revised simplex method enters endless loop

I'm trying to implement the revised simplex method (RSM) in Python with numpy. I'm stuck: it either works only for maximization (correct on tiny matrices like 2x4 or 5x5, wrong on larger ones) or enters an endless loop in most minimization cases. The code below is my attempt at minimization:

    def __init__(self, A: np.ndarray, b: np.ndarray, c: np.ndarray):
        base_size = A.shape[0]  # Number of basis vectors
        I = np.eye(base_size)  # Identity matrix, an initial basis 
        self.extended_matrix = np.concatenate((A, I), axis=1)  # Extended matrix of the system
        self.free_size = A.shape[1]  # Number of free vectors
        self.b = b  # Right parts of the constraints
        self.base_inv = I  # Initial inverse basis matrix
        self.c = np.concatenate((c, np.zeros(base_size)))  # Objective function coefficients, including those of the basis variables
        self.c_i = list(range(len(self.c)))  # Indices of all variables
        
    @property
    def c_f_indices(self):
        """
        Indices of the free variables.
        """
        return self.c_i[:self.free_size]
    
    @property
    def c_T_B(self):
        """
        Objective function coefficients related to the basis variables.
        """
        c_B_indices = self.c_i[self.free_size:]  # Basis variables indices.
        return self.c[c_B_indices]
    
    @property
    def c_T_f(self):
        """
        Objective function coefficients related to the free variables.
        """
        return self.c[self.c_f_indices]
        
    @property
    def free(self):
        """
        Free vectors.
        """
        return self.extended_matrix[:, self.c_f_indices]
    
    @property
    def y_T(self):
        """
        Lagrange multipliers.
        """
        return self.c_T_B @ self.base_inv
    
    @property
    def deltas(self):
        """
        Net evaluations. 
        """
        return (self.y_T @ self.free) - self.c_T_f 
    


    def _swap(self, exits: int, enters: int) -> None:
        """
        In-/excluding respective vectors into/from the basis.
        """
        self.c_i[enters], self.c_i[exits + self.free_size] = self.c_i[exits + self.free_size], self.c_i[enters]
    
    def optimize(self):
        while any([delta > 0 for delta in self.deltas]): # < 0 in case of maximization
            x_B = self.base_inv @ self.b  # Current basis variables
            enters_base = np.argmax(self.deltas)  # Vector to enter the basis; argmin in case of maximization
            
            # Finding the vector to leave the basis:
            alpha = self.base_inv @ self.free[:, enters_base]

            # Ratio test; if no entry of alpha is positive, the LP is unbounded:
            ratios = [x / a if a > 0 else np.inf for x, a in zip(x_B, alpha)]
            exits_base = int(np.argmin(ratios))
            if not np.isfinite(ratios[exits_base]):
                raise Exception("The problem is unbounded")
            
            # Finding the E_r matrix, which current inverse basis will be left-multiplied with in order to achieve the new inverse basis:
            zetas = [1 / alpha[exits_base] if j == exits_base else -alpha[j] / alpha[exits_base] for j in range(len(alpha))]
            E = np.eye(self.free.shape[0])
            E[:, exits_base] = zetas
            self.base_inv = E @ self.base_inv
            
            # In-/excluding respective vectors into/from the basis:
            self._swap(exits_base, enters_base)
            
        return self.c_T_B @ self.base_inv @ self.b # Final objective function value

I've also tried sorting c_f_indices, but I still get an endless loop. A similar RSM implementation also yields wrong results on larger matrices (16x8, for example) while working fine on tiny ones.

Where is the root of the problem?

As already mentioned by Erwin Kalvelagen, the standard Dantzig pivot rule can lead to cycling and stalling, where the objective value does not improve for long periods.

In general, this phenomenon is known as degeneracy of the LP. Limited numerical accuracy and round-off errors can also contribute to degeneracy, which is why it is typically more prevalent in large LPs.
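To see what degeneracy looks like concretely, here is a minimal sketch with made-up iteration data (the numbers are illustrative, not taken from your code): several rows tie in the ratio test, so after the pivot one of the tied basic variables is driven to zero, and subsequent pivots can take zero-length steps.

```python
import numpy as np

# Hypothetical iteration data: every row ties in the ratio test,
# a hallmark of degeneracy.
x_B = np.array([4.0, 2.0, 2.0])    # current basic solution
alpha = np.array([2.0, 1.0, 1.0])  # pivot column in the current basis

ratios = np.where(alpha > 0, x_B / alpha, np.inf)
print(ratios)  # [2. 2. 2.] -- a three-way tie

# Whichever tied row leaves, the other tied basic variables become 0
# after the pivot; later iterations can then make zero-length steps
# (no objective improvement), which is where cycling starts.
```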

There are a few ways to combat this problem. As Erwin mentioned, perturbation is most often used. However, if you're doing this as a learning project, I would suggest using a more refined pivoting rule such as Zadeh's Rule or Cunningham's Rule, where you keep a table or variable to ensure you pick a different variable to enter the basis once you've cycled.
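The simplest anti-cycling rule to drop into your code is Bland's rule (a different rule from the two above, but the easiest to implement): instead of `np.argmax(self.deltas)`, pick the improving column whose *original variable index* is smallest. A minimal sketch, assuming your sign convention for minimization (enter while some delta is positive); the function name `blands_entering` is my own, not from your code:

```python
import numpy as np

def blands_entering(deltas, free_indices, tol=1e-9):
    """Bland's rule: among all columns with a positive net evaluation,
    choose the one whose original variable index is smallest,
    rather than the one with the largest delta (Dantzig's rule)."""
    candidates = [j for j, d in enumerate(deltas) if d > tol]
    if not candidates:
        return None  # no improving column -> current basis is optimal
    # position of the candidate with the smallest original variable index
    return min(candidates, key=lambda j: free_indices[j])

# Toy example: three free columns whose original variable indices are [5, 0, 2]
deltas = np.array([0.5, 0.3, 0.0])
print(blands_entering(deltas, [5, 0, 2]))  # 1 -- variable 0 enters, not the largest delta
```

With ties broken the same way in the ratio test (smallest basic-variable index among the tied rows), Bland's rule provably prevents cycling, at the cost of more iterations on average.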
