
Python multithreading raw_input

I'm currently doing some work with multithreading and I'm trying to figure out why my program isn't working as intended.

def input_watcher():
    while True:
        input_file = os.path.abspath(raw_input('Input file name: '))
        compiler = raw_input('Choose compiler: ')

        if os.path.isfile(input_file):

            obj = FileObject(input_file, compiler)

            with file_lock:
                files.append(obj)

            print 'Adding %s with %s as compiler' % (obj.file_name, obj.compiler)
        else:
            print 'File does not exist'

This is running in one thread and it works fine until I start adding the second file object.

This is the output from the console:

Input file name: C:\Users\Victor\Dropbox\Private\multiFile\main.py
Choose compiler: aImport
Adding main.py with aImport as compiler
Input file name: main.py updated
C:\Users\Victor\Dropbox\Private\multiFile\main.py
Choose compiler: Input file name: Input file name: Input file name: Input file name:

The 'Input file name:' prompt started popping up the second I added the second file name, right as it asked for a compiler. The program keeps printing 'Input file name:' until it crashes.

I have other code running in a different thread. I don't think it has anything to do with the error, but tell me if you need to see it and I will post it.

The full code:

import multiprocessing
import threading
import os
import time


file_lock = threading.Lock()
update_interval = 0.1


class FileMethods(object):
    def a_import(self):
        self.mod_check()




class FileObject(FileMethods):
    def __init__(self, full_name, compiler):

        self.full_name = os.path.abspath(full_name)
        self.file_name = os.path.basename(self.full_name)
        self.path_name = os.path.dirname(self.full_name)

        name, extension = os.path.splitext(full_name)
        self.concat_name = name + '-concat' + extension

        self.compiler = compiler
        self.compiler_methods = {'aImport': self.a_import}

        self.last_updated = os.path.getatime(self.full_name)

        self.subfiles = []
        self.last_subfiles_mod = {}

    def exists(self):
        return os.path.isfile(self.full_name)

    def mod_check(self):
        if self.last_updated < os.path.getmtime(self.full_name):
            self.last_updated = os.path.getmtime(self.full_name)
            print '%s updated' % self.file_name
            return True
        else:
            return False

    def sub_mod_check(self):
        for s in self.subfiles:
            if self.last_subfiles_mod.get(s) < os.path.getmtime(s):
                self.last_subfiles_mod[s] = os.path.getmtime(s)
                return True

        return False


files = []


def input_watcher():
    while True:
        input_file = os.path.abspath(raw_input('Input file name: '))
        compiler = raw_input('Choose compiler: ')

        if os.path.isfile(input_file):

            obj = FileObject(input_file, compiler)

            with file_lock:
                files.append(obj)

            print 'Adding %s with %s as compiler' % (obj.file_name, obj.compiler)
        else:
            print 'File does not exist'


def file_manipulation():
    if __name__ == '__main__':
        for f in files:
            p = multiprocessing.Process(target=f.compiler_methods.get(f.compiler)())
            p.start()
            #f.compiler_methods.get(f.compiler)()

def file_watcher():
    while True:
        with file_lock:
            file_manipulation()
        time.sleep(update_interval)


iw = threading.Thread(target=input_watcher)
fw = threading.Thread(target=file_watcher)

iw.start()
fw.start()

This is happening because you're not using an if __name__ == "__main__": guard, while also using multiprocessing.Process on Windows. Windows needs to re-import your module in the child processes it spawns, which means it will keep creating new threads to handle inputs and watch files. This, of course, is a recipe for disaster. Do this to fix the issue:

if __name__ == "__main__":
    iw = threading.Thread(target=input_watcher)
    fw = threading.Thread(target=file_watcher)

    iw.start()
    fw.start()

See the "Safe importing of the main module" section in the multiprocessing docs for more info.
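To see the re-import behaviour in isolation, here is a minimal, self-contained sketch (Python 3 syntax, and it forces the 'spawn' start method to mimic Windows even on Unix; `worker` is just an illustrative name). Because the child re-imports the module before running the target, any module-level code without the guard would execute again in every child:

```python
import multiprocessing


def worker(q):
    # Runs in the child process. Before this is called, the child
    # re-imports the parent module -- so any unguarded module-level
    # code (like starting threads) would run again here.
    q.put('child done')


if __name__ == '__main__':
    # Force 'spawn' to mimic Windows behaviour (Python 3.4+).
    multiprocessing.set_start_method('spawn', force=True)
    q = multiprocessing.Queue()
    p = multiprocessing.Process(target=worker, args=(q,))
    p.start()
    print(q.get())  # blocks until the child puts its message
    p.join()
```

Without the `if __name__ == '__main__':` guard, the `Process(...).start()` line itself would run again during the child's re-import, spawning children recursively, which is exactly the runaway behaviour in the question.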

I also have a feeling file_watcher isn't really doing what you want it to (it will keep re-spawning processes for files you've already processed), but that's not really related to the original question.
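For what it's worth, here is an untested sketch of how `file_manipulation` could avoid that, keeping the question's names plus a hypothetical `processed` set that I'm introducing for illustration. It also passes the callable itself as `target` (note the original code wrote `target=f.compiler_methods.get(f.compiler)()`, which calls the method immediately and hands `multiprocessing` its return value instead of the function):

```python
import multiprocessing


def file_manipulation(files, processed):
    # Spawn one process per file that hasn't been dispatched yet.
    # 'processed' is a set of full_name strings (a hypothetical
    # addition, not part of the original code). Returns the
    # Process objects started on this tick.
    started = []
    for f in files:
        if f.full_name in processed:
            continue  # already dispatched on an earlier tick
        method = f.compiler_methods.get(f.compiler)
        if method is None:
            continue  # unknown compiler name, nothing to run
        # Note: no trailing () -- pass the callable, don't call it here.
        p = multiprocessing.Process(target=method)
        p.start()
        started.append(p)
        processed.add(f.full_name)
    return started
```

The caller would keep one `processed` set alive across `file_watcher` iterations so each file is handed to a process at most once.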
