
Problems Solving XOR with Genetic Algorithm

I am trying to solve the XOR problem with a neural network. For training, I am using a genetic algorithm.

Population size: 200

Max generations: 10000

Crossover rate: 0.8

Mutation rate: 0.1

Number of weights: 9

Activation function: sigmoid

Selection method: high proportion of the fittest

Code:

    def crossover(self,wfather,wmother):
        r = np.random.random()
        if r <= self.crossover_perc:
            new_weight= self.crossover_perc*wfather+(1-self.crossover_perc)*wmother
            new_weight2=self.crossover_perc*wmother+(1-self.crossover_perc)*wfather
            return new_weight,new_weight2
        else:
            return wfather,wmother

    def select(self,fits):
        percentuais = np.array(fits) / float(sum(fits))
        vet = [percentuais[0]]
        for p in percentuais[1:]:
            vet.append(vet[-1] + p)
        r = np.random.random()
        #print(len(vet), r)
        for i in range(len(vet)):
            if r <= vet[i]:
                return i


    def mutate(self, weight):
        r = np.random.random()
        if r <= self.mut_perc:
            mutr=np.random.randint(self.number_weights)
            weight[mutr] = weight[mutr] + np.random.normal()
        return weight

    def activation_fuction(self, net):
        return 1 / (1 + math.exp(-net))
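The `select` method above implements fitness-proportionate (roulette-wheel) selection by building a cumulative-probability vector and scanning it with a loop. As a sketch (not part of the question's code), the same idea can be written more compactly with numpy's `cumsum` and `searchsorted`:

```python
import numpy as np

def select_roulette(fits, rng=np.random.default_rng()):
    """Fitness-proportionate selection: return the index of one individual."""
    cum = np.cumsum(fits) / np.sum(fits)     # cumulative selection probabilities
    r = rng.random()                          # uniform draw in [0, 1)
    # searchsorted finds the first slot whose cumulative share exceeds r;
    # the min() clamp guards against floating-point rounding in cum[-1]
    return min(int(np.searchsorted(cum, r, side="right")), len(cum) - 1)
```

Note that the loop version in the question can return `None` when rounding leaves `vet[-1]` slightly below `r`; the clamp above avoids that edge case.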

Problem:

~5 out of 10 runs work correctly

Expected output:

0,0 0

0,1 1

1,0 1

1,1 0

Test:

It is inconsistent; sometimes I get four 0s, sometimes three 1s, all sorts of results. Can you help me find the error?

**Edit**

Full code:

    def create_initial_population(self):
        population = np.random.uniform(-40, 40, [self.population_size, self.number_weights])  
        return population

    def feedforward(self, inp1, inp2, weights):
        bias = 1
        x = self.activation_fuction(bias * weights[0] + (inp1 * weights[1]) + (inp2 * weights[2]))
        x2 = self.activation_fuction(bias * weights[3] + (inp1 * weights[4]) + (inp2 * weights[5]))
        out = self.activation_fuction(bias * weights[6] + (x * weights[7]) + (x2 * weights[8]))
        print(inp1, inp2, out)
        return out

    def fitness(self, weights):
        y1 = abs(0.0 - self.feedforward(0.0, 0.0, weights))
        y2 = abs(1.0 - self.feedforward(0.0, 1.0, weights))
        y3 = abs(1.0 - self.feedforward(1.0, 0.0, weights))
        y4 = abs(0.0 - self.feedforward(1.0, 1.0, weights))
        error = (y1 + y2 + y3 + y4) ** 2
        # print("Error: ", 1/error)
        return 1 / error

    def sortpopbest(self, pop):
        pop_with_fit = [(weights, self.fitness(weights)) for weights in pop]
        sorted_population = sorted(pop_with_fit, key=lambda weights_fit: weights_fit[1])  # worst -> best
        fits = []
        pop = []
        for i in sorted_population:
            pop.append(i[0])
            fits.append(i[1])
        return pop, fits

    def execute(self):
        pop = self.create_initial_population()
        for g in range(self.max_generations):  # maximum number of generations
            pop, fits = self.sortpopbest(pop)
            nova_pop = []
            for c in range(int(self.population_size / 2)):
                weights = pop[self.select(fits)]
                weights2 = pop[self.select(fits)]
                new_weights, new_weights2 = self.crossover(weights, weights2)
                new_weights = self.mutate(new_weights)
                new_weights2 = self.mutate(new_weights2)
                #print(fits)
                nova_pop.append(new_weights)   # append to the new population
                nova_pop.append(new_weights2)
            pop = nova_pop
            print(len(fits), fits)
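One way to sanity-check the feedforward wiring above, independently of the GA, is to push a hand-crafted set of nine weights through the same computation. The values below are a hypothetical known XOR solution (hidden unit 1 acts as OR, hidden unit 2 as AND, and the output computes OR AND NOT AND), not taken from the question:

```python
import math

def sigmoid(net):
    return 1 / (1 + math.exp(-net))

def feedforward(inp1, inp2, weights):
    # same wiring as the question's network: two hidden sigmoid units, one output
    bias = 1
    x  = sigmoid(bias * weights[0] + inp1 * weights[1] + inp2 * weights[2])
    x2 = sigmoid(bias * weights[3] + inp1 * weights[4] + inp2 * weights[5])
    return sigmoid(bias * weights[6] + x * weights[7] + x2 * weights[8])

# hand-crafted weights (hypothetical values):
xor_weights = [-10, 20, 20,   # bias and input weights of hidden unit 1 (~OR)
               -30, 20, 20,   # bias and input weights of hidden unit 2 (~AND)
               -10, 20, -20]  # output bias and hidden->output weights

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, round(feedforward(a, b, xor_weights)))
```

If the feedforward is wired correctly, the rounded outputs are 0, 1, 1, 0 for the four input pairs; any other result points at a bug in the forward pass rather than in the GA.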

Some input:

  • XOR is a simple problem. With a few hundred random initializations, you should have a few lucky individuals that solve it immediately (if "solve" means the outputs are correct after applying a threshold). This is a good test to check that your initialization and feedforward pass are correct, without having to debug the whole GA at once. Alternatively, you can hand-craft a correct set of weights and biases and check that it works.
  • Your initial weights (uniform -40…+40) are far too large. For XOR this may happen to work, but in general initial weights should be small enough that most neurons do not saturate, yet not so small that they sit entirely in the linear region of the sigmoid.
  • Once your implementation works, look at a numpy implementation of a neural network's feedforward pass to see how to do the same with much less code.
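The last point can be sketched as follows: a vectorized feedforward that evaluates all four XOR inputs in one matrix product. This is a sketch under the assumption that the flat nine-weight layout matches the one used in the question (bias, input 1, input 2 for each hidden unit, then the output layer):

```python
import numpy as np

def feedforward_batch(X, weights):
    """Evaluate the 2-2-1 sigmoid network on every row of X at once."""
    w = np.asarray(weights, dtype=float)
    W1 = np.array([[w[1], w[4]],
                   [w[2], w[5]]])          # input -> hidden weights, shape (2, 2)
    b1 = np.array([w[0], w[3]])            # hidden biases
    w2 = np.array([w[7], w[8]])            # hidden -> output weights
    b2 = w[6]                              # output bias
    h = 1 / (1 + np.exp(-(X @ W1 + b1)))   # hidden activations for all rows
    return 1 / (1 + np.exp(-(h @ w2 + b2)))

# all four XOR input pairs as one batch
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
```

Besides being shorter, the batched form makes the fitness function a single call per individual instead of four, which matters when evaluating hundreds of individuals per generation.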
