
Running a script multiple times simultaneously in Python 2.7

Hello, I am trying to run a script multiple times, but I would like this to take place at the same time. From what I understood, I was to use subprocess and threading together; however, when I run it, it still looks like it is being executed sequentially. Can someone help me get it to run the same script over and over, but at the same time? Or is it in fact working and just really slow?

Edit: I forgot the last piece of code; it is now at the bottom.

Here is what I have so far:

import os
import datetime
import threading
from subprocess import Popen

today = datetime.date.today()
os.makedirs("C:/newscript_image/" + str(today))

class myThread(threading.Thread):
    def run(self):
        for filename in os.listdir('./newscript/'):
            if '.htm' in filename:
                name = filename[:-len('.htm')]  # note: str.strip('.htm') would remove characters, not the suffix

                dbfolder = "C:/newscript/db/" + name
                os.makedirs(dbfolder)

                Popen("python.exe C:/execution.py" + ' ' + filename + ' ' + name + ' ' + str(today) + ' ' + dbfolder)
myThread().start()

Personally, I'd use multiprocessing. I'd write a function that takes a filename and does whatever the main guts of execution.py does (probably by importing execution and running some function within it):

import multiprocessing
import os
import datetime
import execution

# assume execution.py provides a function:
# execution.run_main_with_args(filename, name, today_str, dbfolder)

today = datetime.datetime.today()

def my_execute(filename):
    if '.htm' in filename:
        name = filename[:-len('.htm')]
        dbfolder = "C:/newscript/db/" + name
        os.makedirs(dbfolder)
        execution.run_main_with_args(filename, name, str(today), dbfolder)

p = multiprocessing.Pool()
p.map(my_execute, list_of_files_to_process)  # list_of_files_to_process: your list of filenames
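One caveat worth adding (my note, not part of the original answer): list_of_files_to_process is a placeholder you must build yourself, and on Windows (where the question's paths live) multiprocessing re-imports the main module in each child process, so the Pool setup should sit behind an if __name__ == '__main__': guard. A minimal sketch of those last two lines:

import os

if __name__ == '__main__':
    # build the placeholder list from the folder used in the question
    list_of_files_to_process = [f for f in os.listdir('C:/newscript/')
                                if f.endswith('.htm')]
    p = multiprocessing.Pool()
    p.map(my_execute, list_of_files_to_process)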

Ran some quick tests. Using the framework of your script:

#!/usr/bin/env python

import os
import threading
from subprocess import Popen

class myThread(threading.Thread):
    def run(self):
        for filename in os.listdir("./newscript/"):
            if '.htm' in filename:
                Popen("./busy.sh")

myThread().start()

I then populated the "newscript" folder with a bunch of ".htm" files against which to run the script.

Where "busy.sh" is basically:

#!/usr/bin/env bash
while :
do
    uptime >> $$
    sleep 1
done

The code you have does indeed fire off multiple processes running in the background. I tried this with a "newscript" folder containing 200 files, and I saw 200 processes all running in the background.

You noted that you want them all to run in the background at the same time.

For the most part, parallel processes run "roughly" in parallel, but because of the way most common operating systems are set up, "parallel" is more like "nearly parallel", or what is more commonly called asynchronous. If you look at the access times very closely, the various processes spawned this way will each take a turn, but they will never all do something at the exact same instant.

That is something to be aware of, especially since you are accessing files controlled by the OS and the underlying filesystem.

For what you are trying to do, processing a batch of inbound files, you are essentially spawning a background process for each file that appears.

There are a couple of issues with the logic as presented:

  1. High risk of a fork bomb, since the spawning is unbounded and nothing keeps track of what has already been spawned (see the bounded sketch after the suggestion below).
  2. Spawning by calling out to and executing another program creates a full OS-level process for each file, which is more resource intensive.

Suggestion:

Instead of spawning off jobs, take the file-processing code you would be spawning and turn it into a Python function. Then rewrite your code as a daemonized process that watches the folder and keeps track of how many jobs are in flight, so that the number of background processes handling file conversion stays managed.

When processing a file, you would spin off a Python thread to handle it, which is a lighter-weight alternative to spawning an OS-level process.
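To make that concrete, here is a minimal sketch (my own illustration, not code from the question), assuming Python 2.7 as in the question: a fixed number of worker threads consume filenames from a queue, so the number of jobs in flight is capped and nothing is spawned unboundedly. process_file is a hypothetical placeholder for the real per-file work.

import os
import threading
import Queue  # the module is named "queue" on Python 3

MAX_WORKERS = 4  # cap on concurrent jobs (arbitrary choice)

def process_file(filename):
    # placeholder for the real per-file work
    print("processing %s" % filename)

def worker(q):
    while True:
        filename = q.get()
        if filename is None:  # sentinel: no more work for this worker
            return
        process_file(filename)

q = Queue.Queue()
threads = [threading.Thread(target=worker, args=(q,)) for _ in range(MAX_WORKERS)]
for t in threads:
    t.start()

for filename in os.listdir('./newscript/'):
    if filename.endswith('.htm'):
        q.put(filename)

for _ in threads:
    q.put(None)  # one sentinel per worker so each one exits
for t in threads:
    t.join()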

A little elaboration on mgilson's answer:

Let's say we have a folder, example1.
Inside example1 we have two Python scripts:
execution.py and main.py

The contents of execution.py look like this:

from __future__ import print_function  # lets print(..., file=...) work on Python 2.7


def run_main_with_args(filename, name, today, dbfolder):
    print('\nfilename: {}'.format(filename))
    print('name: {}'.format(name))
    print('today: {}'.format(today))
    print('dbfolder: {}'.format(dbfolder))

    outfile = dbfolder + '/' + name + '.txt'
    with open(outfile, 'w') as fout:
        print(name, file=fout)  # write the name itself, not the literal string 'name'

Also, the contents of main.py look like this:

#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# Author      : Bhishan Poudel; Physics Graduate Student, Ohio University
# Date        : Aug 29, 2016
#

# Imports
import multiprocessing
import os
import datetime
import execution  # file: execution.py

# assume execution.py provides a function:
# execution.run_main_with_args(filename, name, today_str, dbfolder)

today = datetime.datetime.today()

def my_execute(filename):
    if '.txt' in filename:
        name = filename[:-len('.txt')]
        dbfolder = "db/" + name
        if not os.path.exists(dbfolder):
            os.makedirs(dbfolder)
        execution.run_main_with_args(filename, name, str(today), dbfolder)


if __name__ == '__main__':
    p = multiprocessing.Pool()
    p.map(my_execute, ['file1.txt', 'file2.txt'])

Then, if we run main.py, it will create the required files in the required directories, in parallel!
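For reference (my reading of the code above, not part of the original answer): running python main.py should create db/file1/file1.txt and db/file2/file2.txt, each containing the corresponding name. The input files file1.txt and file2.txt do not need to exist beforehand, because run_main_with_args only writes output and never reads the input file.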
