
File not found / How to make Python recognize all the files in my folder?

I'm an inexperienced programmer and I had to make a chatbot for a school project. I managed to build it, but I'm having a problem with the files I need to import.

Using the full file path works, but I can't do that because I have to send the project to my teacher for review and then present it at school, so I need it to recognize the files in its own folder.

I put all the files in one folder, everything is named correctly and I haven't misspelled anything (pasting the full file path in front of the name I'm trying to import fixes it, which proves the names are right). It just keeps saying the file can't be found. I've found several questions about this, with various workarounds and fixes, but none of them worked for me. How do I tell my code directly, "Look here, idiot! It's in the folder you're in!"?

CHATBOT (the one I'm having trouble with):

import random
import json
import pickle
import numpy as np

import nltk
from nltk.stem import WordNetLemmatizer

from keras.models import load_model

lemmatizer = WordNetLemmatizer()
intentii = json.loads(open('intentii.json').read())

cuvinte = pickle.load(open('cuvinte.pkl', 'rb'))
clase = pickle.load(open('clase.pkl', 'rb'))
model = load_model('chatbotmodel.h5')

def clean_up_sentence(sentence):
    sentence_cuvinte = nltk.word_tokenize(sentence)
    sentence_cuvinte = [lemmatizer.lemmatize(cuvant) for cuvant in sentence_cuvinte]
    return sentence_cuvinte

def pachet_de_cuvinte(sentence):
    sentence_cuvinte = clean_up_sentence(sentence)
    pachet = [0] * len(cuvinte)
    for w in sentence_cuvinte:
        for i, cuvant in enumerate(cuvinte):
            if cuvant == w:
                pachet[i] = 1
    return np.array(pachet)

def predict_class(sentence):
    bow = pachet_de_cuvinte(sentence)
    res = model.predict(np.array([bow]))[0]
    LIMITA_EROARE = 0.25
    rezultate = [[i, r] for i, r in enumerate(res) if r > LIMITA_EROARE]

    rezultate.sort(key=lambda x: x[1], reverse=True)
    return_list = []
    for r in rezultate:
        return_list.append({'intentie': clase[r[0]], 'probabilitate': str(r[1])})
    return return_list

def primeste_raspuns(lista_intentii, intentii_json):
    tag = lista_intentii[0]['intentie']
    lista_de_intentii = intentii_json['intentii']
    for i in lista_de_intentii:
        if i['tag'] == tag:
            result = random.choice(i['raspunsuri'])
            break
    return result

print("V.A.S.I.L.E a fost initializat!")

while True:
    message = input("")
    ints = predict_class(message)
    res = primeste_raspuns(ints, intentii)
    print(res)

TRAINING (the one where I use the same kind of imports for my files and it works):

from random import random

import random
import json
import pickle
import numpy as np

import nltk
from nltk.stem import WordNetLemmatizer

from keras.models import Sequential
from keras.layers import Dense, Activation, Dropout
from keras.optimizers import SGD

lematizare = WordNetLemmatizer()

intentii = json.loads(open('intentii.json').read())

cuvinte = []
clase = []
documente = []
ignora_simboluri = ['?', '!', ',', '.']

for intentie in intentii['intentii']:
    for pattern in intentie['patterns']:
        word_list = nltk.word_tokenize(pattern)
        cuvinte.extend(word_list)
        documente.append((word_list, intentie['tag']))
        if intentie['tag'] not in clase:
            clase.append(intentie['tag'])
        
cuvinte = [lematizare.lemmatize(cuvant) for cuvant in cuvinte if cuvant not in ignora_simboluri]
cuvinte = sorted(set(cuvinte))

clase = sorted(set(clase))

pickle.dump(cuvinte, open('cuvinte.pkl', 'wb'))
pickle.dump(clase, open('clase.pkl', 'wb'))

training = []
output_gol = [0] * len(clase)

for document in documente:
    pachet = []
    cuvant_patterns = document[0]
    cuvant_patterns = [lematizare.lemmatize(cuvant.lower()) for cuvant in cuvant_patterns]
    for cuvant in cuvinte:
        pachet.append(1) if cuvant in cuvant_patterns else pachet.append(0)
    
    output_row = list(output_gol)
    output_row[clase.index(document[1])] = 1
    training.append([pachet, output_row])

random.shuffle(training)
training = np.array(training)

train_x = list(training[:, 0])
train_y = list(training[:, 1])

model = Sequential()
model.add(Dense(512, input_shape=(len(train_x[0]),), activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(len(train_y[0]), activation='softmax'))

sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])

tmodel = model.fit(np.array(train_x), np.array(train_y), epochs=200, batch_size=5, verbose=1)
model.save('chatbotmodel.h5', tmodel)
print("Gata!")

I'm not completely sure I understand your question, but I think you're asking about ./filename; if you're on Windows, it's .\ instead (.\filename).

To get the file path relative to the script itself, use __file__:

from pathlib import Path

this_file = Path(__file__).absolute()
this_file_directory = this_file.parent
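
Applied to the files from the question, that could look like the sketch below (assuming intentii.json, cuvinte.pkl, clase.pkl and chatbotmodel.h5 sit in the same folder as the chatbot script; BASE_DIR is just an illustrative name):

import json
import pickle

from pathlib import Path
from keras.models import load_model

BASE_DIR = Path(__file__).absolute().parent  # the folder that contains this script

# open() accepts Path objects, so the files are found no matter
# which directory the script is launched from
intentii = json.loads(open(BASE_DIR / 'intentii.json').read())
cuvinte = pickle.load(open(BASE_DIR / 'cuvinte.pkl', 'rb'))
clase = pickle.load(open(BASE_DIR / 'clase.pkl', 'rb'))
model = load_model(str(BASE_DIR / 'chatbotmodel.h5'))  # str() keeps older Keras versions happy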

So if your structure is:

top_dir
├── code_dir
|    └── main.py
└── data
    └── something.json

main.py

from pathlib import Path

top_dir = Path(__file__).absolute().parent.parent
data_dir = top_dir / 'data'
data_file = data_dir / 'something.json'

This will work for anyone, no matter where they run it from.
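
An alternative sketch, not part of the answer above, that keeps the original open('intentii.json') calls unchanged: switch the working directory to the script's own folder at start-up.

import os
from pathlib import Path

# Make the folder containing this script the current working directory,
# so every bare relative filename used later is resolved against it.
os.chdir(Path(__file__).absolute().parent)

Resolving each path explicitly, as shown earlier, is usually the safer choice, since os.chdir changes the working directory for the whole process.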
