
Pass argument to python script running in a docker container

Suppose the following setup:

  • Website written in PHP / Laravel
  • User uploads a file (either text / doc / pdf)
  • We have a docker container which contains a python script for converting text into a numpy array.

I want to take this uploaded data and pass it to the python script.

I can't find anything which explains how to pass dynamically generated inputs into a container.

Can this be done by executing a shell script from inside the Laravel app that passes the uploaded file as a variable to the Dockerfile's ENTRYPOINT?

Are there any other ways of doing this?

One way to do this would be to upload the files to a directory the Docker container can access, and then poll that directory for new files from the Python script. You can share local directories with Docker containers using "bind mounts". Search for something like "How to share data between a Docker container and host system" to read more about bind mounts and shared volumes.
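A minimal sketch of the polling side, assuming a hypothetical bind-mounted directory such as /data/incoming (the directory name and the handler are illustrative, not from the question):

```python
import os
import time

WATCH_DIR = "/data/incoming"  # hypothetical bind-mounted upload directory

def poll_once(watch_dir, seen):
    """Return newly appeared filenames, updating the `seen` set in place."""
    current = set(os.listdir(watch_dir))
    new_files = sorted(current - seen)
    seen |= current
    return new_files

def watch(watch_dir, handler, interval=1.0):
    """Call `handler(path)` for every file that shows up in `watch_dir`."""
    seen = set()
    while True:
        for name in poll_once(watch_dir, seen):
            handler(os.path.join(watch_dir, name))
        time.sleep(interval)
```

Note that plain polling never notices deletions and may see a file before the uploader has finished writing it; writing to a temporary name and renaming into the watched directory is the usual workaround.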

I would strongly recommend using TCP/IP for this. With this approach you also benefit from:

  • You can detect whether your Python service is online
  • You can move the Python container to another machine

The implementation is simple. You can choose any framework; Twisted works well for me. Implement your Python script as follows:

from twisted.internet import reactor
from twisted.internet.protocol import Factory
from twisted.protocols.basic import LineReceiver

class DataProcessor(LineReceiver):
  def lineReceived(self, line):
    # line contains your data
    pass

factory = Factory()
factory.protocol = DataProcessor
reactor.listenTCP(8080, factory)
reactor.run()
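To illustrate the wire format the server above expects (LineReceiver is newline-delimited, with \r\n as the default delimiter), here is a self-contained sketch using only the standard library; the echo handler stands in for the Twisted server, and send_line is what the web-application side would do:

```python
import socket
import socketserver
import threading

class LineHandler(socketserver.StreamRequestHandler):
    """Stand-in for the Twisted server: read lines, echo an acknowledgement."""
    def handle(self):
        for raw in self.rfile:                        # one iteration per received line
            line = raw.rstrip(b"\r\n")
            self.wfile.write(b"got:" + line + b"\r\n")

def send_line(host, port, payload):
    """Connect, send one newline-terminated line, read the one-line reply."""
    with socket.create_connection((host, port)) as conn:
        conn.sendall(payload + b"\r\n")
        return conn.makefile("rb").readline().rstrip(b"\r\n")

# Demo: serve on an OS-assigned port in a background thread.
server = socketserver.ThreadingTCPServer(("127.0.0.1", 0), LineHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]
reply = send_line("127.0.0.1", port, b"uploaded file contents")
server.shutdown()
```

The PHP side would open an equivalent TCP socket and write one line per request; because the protocol is line-delimited, any payload containing newlines would need to be encoded (e.g. base64) first.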

> ... a python script for converting text into a numpy array ...

Just run it; don't package it into a Docker container. That's doubly true if its inputs and outputs are both local files, and it expects to do its thing and exit promptly: the filesystem isolation Docker provides works against you here.


This is, of course, technically possible. Depending on how exactly the support-program container is set up, the "command" at the end of docker run will be visible to the Python script in sys.argv, like any other command-line arguments. You can use a docker run -v option to publish parts of the host's filesystem into the container. So you might be able to run something like

docker run --rm -v $PWD/files:/data \
  converter_image \
  python convert.py /data/in.txt /data/out.pkl

where all of the /data paths are in the container's private filesystem space.
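A sketch of how convert.py might read those two container paths from the command line (argparse and the argument names are illustrative; only the script name and the /data paths come from the answer above):

```python
import argparse

def parse_args(argv=None):
    # The docker run command above invokes this as:
    #   python convert.py /data/in.txt /data/out.pkl
    parser = argparse.ArgumentParser(description="Convert text to a numpy array")
    parser.add_argument("infile", help="input text file (container path)")
    parser.add_argument("outfile", help="where to write the converted output")
    return parser.parse_args(argv)

if __name__ == "__main__":
    args = parse_args()
    # ... read args.infile, build the array, write args.outfile ...
```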

There are two big caveats:

  1. The host paths in the docker run -v option are paths specifically on the physical host. If your HTTP service is also running in a container, you need to know some host-system path you can write to that's also visible in your container's filesystem.

  2. Running any docker command at all effectively requires root privileges. If any of the filenames or paths involved are dynamic, shell injection attacks can compromise your system. Be very careful with how you run this from a network-accessible script.
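One way to blunt the shell-injection risk in caveat 2 is to build the docker run command as an argument list and run it without a shell. A sketch (docker_cmd, the host directory, and the filenames are hypothetical; converter_image and convert.py come from the example above):

```python
import subprocess

def docker_cmd(host_dir, in_name, out_name):
    """Assemble the docker run argv as a list.

    With a list and shell=False (subprocess's default), each user-supplied
    filename arrives as exactly one argument: no word splitting, globbing,
    or `;` command chaining can occur.
    """
    return [
        "docker", "run", "--rm",
        "-v", f"{host_dir}:/data",
        "converter_image",
        "python", "convert.py", f"/data/{in_name}", f"/data/{out_name}",
    ]

# Even a hostile filename stays a single, inert argument:
cmd = docker_cmd("/srv/uploads", "evil; rm -rf ~.txt", "out.pkl")
# To actually run it: subprocess.run(cmd, check=True)
```

Validating filenames against an allow-list (and generating the output name server-side) is still worthwhile, since the container itself can see everything under the mounted directory.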
