
Toy Neural Network or fancy squiggly line generator?

I wrote this code to better understand machine learning, but I'm not sure whether I'm on the right track. So far it uses Python 3.7 to draw random squiggly lines all over the screen.

import turtle
import random

# Sets the Turtle main screen color 
turtle.bgcolor("pink")

# Settings for bug sprite
bug = turtle.Turtle()
bug.penup()
bug.color("red")
bug_x = bug.setx(-150)
bug_y = bug.sety(12)
bug.pendown()

# Settings for food sprite
food = turtle.Turtle()
food.penup()
food.color("green")
food_x = food.setx(160)
food_y = food.sety(59)
food.pendown()

# Main Loop
while True:

    # X and Y coordinate of Food
    destination = [160,59]

    # X and Y coordinate of Bug
    x_1 = bug.xcor()
    y_1 = bug.ycor()
    origin = [x_1,y_1]

    learn = .10
    bias = 0

    # Weights
    wghts = [random.uniform(-1,1),random.uniform(-1,1),random.uniform(-1,1),
             random.uniform(-1,1),random.uniform(-1,1),random.uniform(-1,1)]
    #print(wghts)

    # Output Neurons
    output_1 = (wghts[0] * origin[0]) + (wghts[1] * origin[1]) + bias
    output_2 = (wghts[2] * origin[0]) + (wghts[3] * origin[1]) + bias
    output_3 = (wghts[4] * origin[0]) + (wghts[5] * origin[1]) + bias

    #Relu Function
    if output_1 >= 0.1:
        output_1 = output_1
    else:
        output_1 = 0

    if output_2 >= 0.1:
        output_2 = output_2
    else:
        output_2 = 0

    if output_3 >= 0.1:
        output_3 = output_3
    else:
        output_3 = 0

    # Compares food/destination X and Y with bug/origin X and Y.
    # applies update ("learn") to all weights
    if origin[0] != destination[0] and origin[1] != destination[1]:
        wghts[0] = wghts[0] + learn
        wghts[1] = wghts[1] + learn
        wghts[2] = wghts[2] + learn
        wghts[3] = wghts[3] + learn
        wghts[4] = wghts[4] + learn
        wghts[5] = wghts[5] + learn
    else:
        wghts[0] = wghts[0] 
        wghts[1] = wghts[1] 
        wghts[2] = wghts[2] 
        wghts[3] = wghts[3] 
        wghts[4] = wghts[4] 
        wghts[5] = wghts[5]

    #print(wghts)
    #print("\n")

    # Creates a barrier for turtle
    bug_1a = int(bug.xcor())
    bug_2a = int(bug.ycor())

    if bug_1a > 300 or bug_2a > 300:
        bug.penup()
        bug.setx(5)
        bug.sety(5)
        bug.pendown()
    if bug_1a < -300 or bug_2a < -300:
        bug.penup()
        bug.setx(5)
        bug.sety(5)
        bug.pendown()

    # Output values applied to turtle direction controls
    bug.forward(output_1)
    bug.right(output_2)
    bug.left(output_3)

Problems I see in your program:

The wghts never learn anything from the previous iteration; they are randomly reset on every pass through the loop (see the sketch after the next point).

output_1, output_2 and output_3 are computed from the just-reinitialized wghts, so the changes made by:

if origin[0] != destination[0] and origin[1] != destination[1]:
    wghts[0] = wghts[0] + learn
    ...
    wghts[5] = wghts[5] + learn

are never reflected in the output_* variables.
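To make both points concrete, here's a minimal standalone sketch (my illustration only, not a fix for your design): initialize the weights once, outside the loop, and compute the outputs after the update so the change actually shows up:

import random

destination = (160, 59)   # food position, as in your code
origin = (-150, 12)       # bug's starting position
learn = 0.10
bias = 0

# Initialize the weights ONCE, outside the loop, so each iteration
# builds on the previous one instead of starting over at random.
wghts = [random.uniform(-1, 1) for _ in range(6)]

for step in range(3):   # a few iterations instead of `while True`, just to demo
    # Update the weights first...
    if origin != destination:
        wghts = [w + learn for w in wghts]

    # ...then compute an output, so the update is reflected in it.
    output_1 = wghts[0] * origin[0] + wghts[1] * origin[1] + bias
    print(step, [round(w, 2) for w in wghts], round(output_1, 1))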

You're adding the bug's weighted X and Y coordinates and using the result as a number of degrees to turn. Twice. I don't see what sense that makes, but I guess it's a neural network thing.
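For scale (made-up numbers, purely illustrative): turtle treats those outputs as degrees, coordinates in the hundreds routinely produce more than a full rotation, and the right() followed by left() collapses into a single net turn anyway:

# Hypothetical post-ReLU outputs when the coordinates are in the hundreds:
output_2 = 172.0   # degrees passed to bug.right() (clockwise)
output_3 = 431.5   # degrees passed to bug.left() (counterclockwise)

# bug.right(output_2); bug.left(output_3) amounts to one net heading change,
# and everything past 360 degrees is just a wasted spin on screen:
net_turn = output_3 - output_2   # +259.5 degrees counterclockwise
print(net_turn % 360)            # 259.5 -> the effective change of heading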

You do your barrier check so late in the code that it's out of sync with the movement that follows it. The bug doesn't move in the meantime, so check earlier.

The code cleanup below won't make your bug any less random; it's only intended to make your code easier to work with:

from turtle import Screen, Turtle
from random import uniform

# Sets the Turtle main screen color
screen = Screen()
screen.bgcolor("pink")

# X and Y coordinate of Food
destination = (160, 59)

# Settings for food sprite
food = Turtle()
food.color("green")
food.penup()
food.setposition(destination)
food.pendown()

start = (-150, 12)

# Settings for bug sprite
bug = Turtle()
bug.color("red")
bug.penup()
bug.setposition(start)
bug.pendown()

LEARN = 0.1
BIAS = 0

# Main Loop
while True:

    # X and Y coordinate of Bug
    x, y = bug.position()

    # Creates a barrier for turtle
    if not -300 <= x <= 300 or not -300 <= y <= 300:
        bug.penup()
        bug.goto(start)
        bug.pendown()
        origin = start
    else:
        origin = (x, y)

    # Weights
    wghts = [uniform(-1, 1), uniform(-1, 1), uniform(-1, 1), uniform(-1, 1), uniform(-1, 1), uniform(-1, 1)]

    # Compares food/destination X and Y with bug/origin X and Y.
    # applies update ("LEARN") to all weights
    if origin != destination:
        wghts[0] += LEARN
        wghts[1] += LEARN
        wghts[2] += LEARN
        wghts[3] += LEARN
        wghts[4] += LEARN
        wghts[5] += LEARN

    # Output Neurons
    output_1 = (wghts[0] * origin[0]) + (wghts[1] * origin[1]) + BIAS
    output_2 = (wghts[2] * origin[0]) + (wghts[3] * origin[1]) + BIAS
    output_3 = (wghts[4] * origin[0]) + (wghts[5] * origin[1]) + BIAS

    # Relu Function
    if output_1 < 0.1:
        output_1 = 0

    if output_2 < 0.1:
        output_2 = 0

    if output_3 < 0.1:
        output_3 = 0

    # Output values applied to turtle direction controls
    bug.forward(output_1)
    bug.right(output_2)
    bug.left(output_3)
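
To be clear about what's still missing: "learning" normally means adjusting the weights in proportion to an error signal, not by a constant. Purely as a sketch of that idea (not part of the cleanup above; target_heading and the tiny learning rate are my own choices), here's a delta-rule update that trains one linear neuron to output the bearing from the bug's start to the food:

import math
import random

destination = (160, 59)
origin = (-150, 12)

# The "right answer" for this position: the bearing from bug to food.
target_heading = math.degrees(math.atan2(destination[1] - origin[1],
                                         destination[0] - origin[0]))

learn = 0.000001   # tiny rate because the inputs are in the hundreds
wghts = [random.uniform(-1, 1), random.uniform(-1, 1)]

for _ in range(1000):
    # Forward pass: one linear neuron proposes a heading in degrees.
    heading = wghts[0] * origin[0] + wghts[1] * origin[1]

    # Delta rule: nudge each weight against the error, scaled by its input.
    error = target_heading - heading
    wghts[0] += learn * error * origin[0]
    wghts[1] += learn * error * origin[1]

# The learned heading should now be close to the true bearing (about 8.6 degrees).
print(round(target_heading, 1), round(wghts[0] * origin[0] + wghts[1] * origin[1], 1))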
