Toy Neural Network or fancy squiggly line generator?
I have written this code to better understand machine learning, but I am unsure whether I am on the right track. So far it draws random squiggly lines all over the screen using Python 3.7.
import turtle
import random
# Sets the Turtle main screen color
turtle.bgcolor("pink")
# Settings for bug sprite
bug = turtle.Turtle()
bug.penup()
bug.color("red")
bug_x = bug.setx(-150)
bug_y = bug.sety(12)
bug.pendown()
# Settings for food sprite
food = turtle.Turtle()
food.penup()
food.color("green")
food_x = food.setx(160)
food_y = food.sety(59)
food.pendown()
# Main Loop
while True:
    # X and Y coordinate of Food
    destination = [160, 59]
    # X and Y coordinate of Bug
    x_1 = bug.xcor()
    y_1 = bug.ycor()
    origin = [x_1, y_1]
    learn = .10
    bias = 0
    # Weights
    wghts = [random.uniform(-1, 1), random.uniform(-1, 1), random.uniform(-1, 1),
             random.uniform(-1, 1), random.uniform(-1, 1), random.uniform(-1, 1)]
    #print(wghts)
    # Output Neurons
    output_1 = (wghts[0] * origin[0]) + (wghts[1] * origin[1]) + bias
    output_2 = (wghts[2] * origin[0]) + (wghts[3] * origin[1]) + bias
    output_3 = (wghts[4] * origin[0]) + (wghts[5] * origin[1]) + bias
    # Relu Function
    if output_1 >= 0.1:
        output_1 = output_1
    else:
        output_1 = 0
    if output_2 >= 0.1:
        output_2 = output_2
    else:
        output_2 = 0
    if output_3 >= 0.1:
        output_3 = output_3
    else:
        output_3 = 0
    # Compares food/destination X and Y with bug/origin X and Y.
    # applies update ("learn") to all weights
    if origin[0] != destination[0] and origin[1] != destination[1]:
        wghts[0] = wghts[0] + learn
        wghts[1] = wghts[1] + learn
        wghts[2] = wghts[2] + learn
        wghts[3] = wghts[3] + learn
        wghts[4] = wghts[4] + learn
        wghts[5] = wghts[5] + learn
    else:
        wghts[0] = wghts[0]
        wghts[1] = wghts[1]
        wghts[2] = wghts[2]
        wghts[3] = wghts[3]
        wghts[4] = wghts[4]
        wghts[5] = wghts[5]
    #print(wghts)
    #print("\n")
    # Creates a barrier for turtle
    bug_1a = int(bug.xcor())
    bug_2a = int(bug.ycor())
    if bug_1a > 300 or bug_2a > 300:
        bug.penup()
        bug.setx(5)
        bug.sety(5)
        bug.pendown()
    if bug_1a < -300 or bug_2a < -300:
        bug.penup()
        bug.setx(5)
        bug.sety(5)
        bug.pendown()
    # Output values applied to turtle direction controls
    bug.forward(output_1)
    bug.right(output_2)
    bug.left(output_3)
Issues I see with your program:
The wghts learn nothing from the previous iteration -- they are randomly reset each time through the loop.
The output_1, output_2 and output_3 are calculated from the freshly reinitialized wghts, so the changes made by:
if origin[0] != destination[0] and origin[1] != destination[1]:
    wghts[0] = wghts[0] + learn
    ...
    wghts[5] = wghts[5] + learn
are never reflected in the output_* variables.
You're adding weighted sums of the bug's X and Y coordinates and using that as the number of degrees to turn. Twice. I don't see how that makes any sense, but I guess it's a neural network thing.
You do your barrier check too late in the code, so it's out of sync with what follows. The bug isn't moving at that point, so do the check earlier.
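To see the first point concretely, here is a minimal sketch (plain Python, no turtle) of what moving the weight initialization outside the loop changes: each iteration's nudge now accumulates instead of being thrown away on the next random reset:

```python
from random import uniform

# initialize ONCE, before the loop, so updates persist across iterations
wghts = [uniform(-1, 1) for _ in range(6)]
initial = list(wghts)          # snapshot for comparison
LEARN = 0.1

for step in range(3):          # stand-in for the drawing loop
    # this nudge now survives into the next iteration
    wghts = [w + LEARN for w in wghts]

# every weight has drifted by 3 * LEARN from where it started
```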
The following code cleanup won't make your bug any less random -- just hopefully make your code easier to work with:
from turtle import Screen, Turtle
from random import uniform
# Sets the Turtle main screen color
screen = Screen()
screen.bgcolor("pink")
# X and Y coordinate of Food
destination = (160, 59)
# Settings for food sprite
food = Turtle()
food.color("green")
food.penup()
food.setposition(destination)
food.pendown()
start = (-150, 12)
# Settings for bug sprite
bug = Turtle()
bug.color("red")
bug.penup()
bug.setposition(start)
bug.pendown()
LEARN = 0.1
BIAS = 0
# Main Loop
while True:
    # X and Y coordinate of Bug
    x, y = bug.position()

    # Creates a barrier for turtle
    if not -300 <= x <= 300 or not -300 <= y <= 300:
        bug.penup()
        bug.goto(start)
        bug.pendown()
        origin = start
    else:
        origin = (x, y)

    # Weights
    wghts = [uniform(-1, 1), uniform(-1, 1), uniform(-1, 1), uniform(-1, 1), uniform(-1, 1), uniform(-1, 1)]

    # Compares food/destination X and Y with bug/origin X and Y.
    # applies update ("LEARN") to all weights
    if origin != destination:
        wghts[0] += LEARN
        wghts[1] += LEARN
        wghts[2] += LEARN
        wghts[3] += LEARN
        wghts[4] += LEARN
        wghts[5] += LEARN

    # Output Neurons
    output_1 = (wghts[0] * origin[0]) + (wghts[1] * origin[1]) + BIAS
    output_2 = (wghts[2] * origin[0]) + (wghts[3] * origin[1]) + BIAS
    output_3 = (wghts[4] * origin[0]) + (wghts[5] * origin[1]) + BIAS

    # Relu Function
    if output_1 < 0.1:
        output_1 = 0
    if output_2 < 0.1:
        output_2 = 0
    if output_3 < 0.1:
        output_3 = 0

    # Output values applied to turtle direction controls
    bug.forward(output_1)
    bug.right(output_2)
    bug.left(output_3)
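If you want the weights to actually learn something, one standard approach (a sketch of plain gradient descent on a distance error, not part of the original code -- the two-output layout and names here are hypothetical) is to keep the weights across iterations and nudge each one against its error gradient instead of by a constant:

```python
from random import uniform

# fixed input (the bug's start) and target (the food)
origin = (-150.0, 12.0)
destination = (160.0, 59.0)
LEARN = 1e-5  # must be tiny: raw screen coordinates are large

# two outputs (dx, dy), each a linear function of the origin
w = [uniform(-1, 1) for _ in range(4)]

for _ in range(200):
    dx = w[0] * origin[0] + w[1] * origin[1]
    dy = w[2] * origin[0] + w[3] * origin[1]
    # error: where the proposed step lands vs. where the food is
    ex = (origin[0] + dx) - destination[0]
    ey = (origin[1] + dy) - destination[1]
    # gradient step on the squared error 0.5 * (ex**2 + ey**2)
    w[0] -= LEARN * ex * origin[0]
    w[1] -= LEARN * ex * origin[1]
    w[2] -= LEARN * ey * origin[0]
    w[3] -= LEARN * ey * origin[1]

# after training, one step from origin lands (nearly) on the food
```

The key difference from the constant `+= LEARN` update: the sign and size of each change come from how wrong the output was, so the weights converge instead of growing forever.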