
Toy Neural Network or fancy squiggly line generator?

I have written this code to better understand machine learning, but I am unsure whether I am on the right track. So far it just draws random squiggly lines all over the screen. I am using Python 3.7.

import turtle
import random

# Sets the Turtle main screen color 
turtle.bgcolor("pink")

# Settings for bug sprite
bug = turtle.Turtle()
bug.penup()
bug.color("red")
bug_x = bug.setx(-150)
bug_y = bug.sety(12)
bug.pendown()

# Settings for food sprite
food = turtle.Turtle()
food.penup()
food.color("green")
food_x = food.setx(160)
food_y = food.sety(59)
food.pendown()



# Main Loop
while True:


    # X and Y coordinate of Food
    destination = [160,59]

    # X and Y coordinate of Bug
    x_1 = bug.xcor()
    y_1 = bug.ycor()
    origin = [x_1,y_1]

    learn = .10
    bias = 0

    # Weights
    wghts = [random.uniform(-1,1),random.uniform(-1,1),random.uniform(-1,1),
             random.uniform(-1,1),random.uniform(-1,1),random.uniform(-1,1)]
    #print(wghts)




    # Output Neurons
    output_1 = (wghts[0] * origin[0]) + (wghts[1] * origin[1]) + bias
    output_2 = (wghts[2] * origin[0]) + (wghts[3] * origin[1]) + bias
    output_3 = (wghts[4] * origin[0]) + (wghts[5] * origin[1]) + bias

    #Relu Function
    if output_1 >= 0.1:
        output_1 = output_1
    else:
        output_1 = 0

    if output_2 >= 0.1:
        output_2 = output_2
    else:
        output_2 = 0

    if output_3 >= 0.1:
        output_3 = output_3
    else:
        output_3 = 0

    # Compares food/destination X and Y with bug/origin X and Y.
    # applies update ("learn") to all weights
    if origin[0] != destination[0] and origin[1] != destination[1]:
        wghts[0] = wghts[0] + learn
        wghts[1] = wghts[1] + learn
        wghts[2] = wghts[2] + learn
        wghts[3] = wghts[3] + learn
        wghts[4] = wghts[4] + learn
        wghts[5] = wghts[5] + learn
    else:
        wghts[0] = wghts[0] 
        wghts[1] = wghts[1] 
        wghts[2] = wghts[2] 
        wghts[3] = wghts[3] 
        wghts[4] = wghts[4] 
        wghts[5] = wghts[5]

    #print(wghts)
    #print("\n")

    # Creates a barrier for turtle
    bug_1a = int(bug.xcor())
    bug_2a = int(bug.ycor())

    if bug_1a > 300 or bug_2a > 300:
        bug.penup()
        bug.setx(5)
        bug.sety(5)
        bug.pendown()
    if bug_1a < -300 or bug_2a < -300:
        bug.penup()
        bug.setx(5)
        bug.sety(5)
        bug.pendown()

    # Output values applied to turtle direction controls
    bug.forward(output_1)
    bug.right(output_2)
    bug.left(output_3)

Issues I see with your program:

The wghts learn nothing from the previous iteration -- they are randomly reset each time through the loop.
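To see that point in isolation (plain Python, no turtle -- this is my own sketch, not your code): move the initialization outside the loop, and the updates start to accumulate.

```python
from random import uniform

LEARN = 0.1

# Initialize the weights ONCE, before the loop, so updates
# accumulate across iterations instead of being discarded by
# a fresh random draw on every pass.
wghts = [uniform(-1, 1) for _ in range(6)]

for step in range(3):  # stand-in for the main loop body
    # ... compute outputs, move the bug ...
    wghts = [w + LEARN for w in wghts]  # this update now persists

# After 3 iterations every weight has grown by 3 * LEARN = 0.3,
# which is impossible when the weights are re-randomized each pass.
```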

The output_1, output_2 and output_3 values are calculated from the freshly reinitialized wghts, so the changes made by:

    if origin[0] != destination[0] and origin[1] != destination[1]:
        wghts[0] = wghts[0] + learn
        ...
        wghts[5] = wghts[5] + learn

are never reflected in the output_* variables.
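A minimal demonstration of that ordering problem, with made-up numbers rather than your actual weights: the update has to happen before the outputs are computed, or it cannot affect this iteration's movement.

```python
LEARN = 0.1
wghts = [0.5, -0.2]
origin = (10, 20)

# Original ordering: the output is computed first, the update after,
# so the +LEARN never reaches this iteration's output.
stale_output = wghts[0] * origin[0] + wghts[1] * origin[1]  # 1.0

# Fixed ordering: update first, then compute the output.
wghts = [w + LEARN for w in wghts]
fresh_output = wghts[0] * origin[0] + wghts[1] * origin[1]  # ~4.0

# The two differ by exactly LEARN * (x + y) = 0.1 * 30 = 3.0
```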

You're taking a weighted sum of the bug's X and Y coordinates and using it as a number of degrees to turn -- twice, once right and once left. I don't see how that makes any sense, but I guess it's a neural network thing.
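If you do want to keep feeding raw weighted sums into right() and left(), one option (my suggestion, not anything in your code) is to wrap them into a sensible angle range first:

```python
def to_angle(raw):
    """Wrap an arbitrary weighted sum into the range [0, 360)."""
    return raw % 360

# A huge weighted sum becomes an equivalent small turn:
print(to_angle(725.0))   # 5.0
print(to_angle(-30.0))   # 330.0
```

Calling `bug.right(to_angle(output_2))` then keeps each turn in a readable range without changing the direction it represents.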

You do your barrier check too late in the loop: origin has already been read, so this iteration's outputs are still computed from the out-of-bounds position. The bug doesn't move between reading its position and the check, so do the check earlier.

The following code cleanup won't make your bug any less random -- hopefully it will just make your code easier to work with:

from turtle import Screen, Turtle
from random import uniform

# Sets the Turtle main screen color
screen = Screen()
screen.bgcolor("pink")

# X and Y coordinate of Food
destination = (160, 59)

# Settings for food sprite
food = Turtle()
food.color("green")
food.penup()
food.setposition(destination)
food.pendown()

start = (-150, 12)

# Settings for bug sprite
bug = Turtle()
bug.color("red")
bug.penup()
bug.setposition(start)
bug.pendown()

LEARN = 0.1
BIAS = 0

# Main Loop
while True:

    # X and Y coordinate of Bug
    x, y = bug.position()

    # Creates a barrier for turtle
    if not -300 <= x <= 300 or not -300 <= y <= 300:
        bug.penup()
        bug.goto(start)
        bug.pendown()
        origin = start
    else:
        origin = (x, y)

    # Weights
    wghts = [uniform(-1, 1), uniform(-1, 1), uniform(-1, 1), uniform(-1, 1), uniform(-1, 1), uniform(-1, 1)]

    # Compares food/destination X and Y with bug/origin X and Y.
    # applies update ("LEARN") to all weights
    if origin != destination:
        wghts[0] += LEARN
        wghts[1] += LEARN
        wghts[2] += LEARN
        wghts[3] += LEARN
        wghts[4] += LEARN
        wghts[5] += LEARN

    # Output Neurons
    output_1 = (wghts[0] * origin[0]) + (wghts[1] * origin[1]) + BIAS
    output_2 = (wghts[2] * origin[0]) + (wghts[3] * origin[1]) + BIAS
    output_3 = (wghts[4] * origin[0]) + (wghts[5] * origin[1]) + BIAS

    # Relu Function
    if output_1 < 0.1:
        output_1 = 0

    if output_2 < 0.1:
        output_2 = 0

    if output_3 < 0.1:
        output_3 = 0

    # Output values applied to turtle direction controls
    bug.forward(output_1)
    bug.right(output_2)
    bug.left(output_3)
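If you later want the weights to actually learn, the usual next step is an error-driven update (the delta rule) rather than adding a constant to every weight. Here is a minimal sketch with made-up inputs and a made-up target -- none of these numbers come from your program:

```python
from random import seed, uniform

seed(42)          # deterministic, for the demonstration only
LEARN = 0.01

# One linear neuron: out = w[0]*x[0] + w[1]*x[1]
w = [uniform(-1, 1), uniform(-1, 1)]
x = (1.0, 2.0)    # hypothetical input, e.g. scaled coordinates
target = 5.0      # hypothetical desired output

for _ in range(200):
    out = w[0] * x[0] + w[1] * x[1]
    error = target - out
    # Nudge each weight in proportion to the error AND its own input,
    # so the output moves toward the target instead of drifting.
    w[0] += LEARN * error * x[0]
    w[1] += LEARN * error * x[1]

# out has converged very close to target
```

The key difference from the `wghts[i] += LEARN` lines above is that each step is scaled by how wrong the output currently is, so the updates shrink to zero as the neuron gets it right.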
