
How can I make a working upstart job with yas3fs?

I've got a very simple upstart config for maintaining a yas3fs mount.

start on filesystem
stop on runlevel [!2345]

respawn
kill timeout 15
oom never

script
    . /etc/s3.env
    export AWS_ACCESS_KEY_ID
    export AWS_SECRET_ACCESS_KEY
    exec /opt/yas3fs/yas3fs.py /mnt/something --url=s3://something --cache-path=/mnt/s3fs-cache --mp-size=5120 --mp-num=8
end script
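
For reference, /etc/s3.env just defines the two credential variables (the values below are placeholders):

AWS_ACCESS_KEY_ID=YOUR_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY=YOUR_SECRET_ACCESS_KEY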

What happens is that I get two copies of yas3fs.py running. One appears to mount the S3 bucket correctly, but the other is CONSTANTLY respawned by upstart (presumably because it errors out while the first one already holds the mount).

If I throw in an "expect fork", the job never starts correctly. I just want this simple mount to be safely startable, stoppable, and restartable as an upstart job. Ideas?

I'm not an upstart expert, but this script should work:

start on (filesystem and net-device-up IFACE=eth0)
stop on runlevel [!2345]

env S3URL="s3://BUCKET[/PREFIX]"
env MOUNTPOINT="/SOME/PATH"

respawn
kill timeout 15
oom never

script
    # Clean up a stale mount left behind if a previous instance
    # died uncleanly (e.g. kill -9).
    MOUNTED=$(mount | grep " $MOUNTPOINT " | wc -l)
    if [ "$MOUNTED" -ge 1 ]; then
        umount "$MOUNTPOINT"
    fi
    # Keep yas3fs in the foreground (-f) so upstart tracks the right PID.
    exec /opt/yas3fs/yas3fs.py "$MOUNTPOINT" --url="$S3URL" --mp-size=5120 --mp-num=8 -f
end script

pre-stop script
    # Unmount cleanly when the job is stopped.
    umount "$MOUNTPOINT"
end script

The trick is to keep yas3fs in the foreground with the '-f' option; otherwise there seem to be too many forks for upstart to track.
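
If you really wanted to let yas3fs daemonize instead, upstart's "expect" stanza has to match the exact number of forks the process makes. Assuming yas3fs does a classic double fork when it backgrounds itself (an unverified assumption on my part), the matching stanza would be "expect daemon" rather than "expect fork":

# Hypothetical variant: only correct if yas3fs forks exactly twice
# when daemonizing (unverified); if the fork count is wrong, upstart
# tracks the wrong PID, which is why foreground with -f is safer.
expect daemon

script
    exec /opt/yas3fs/yas3fs.py "$MOUNTPOINT" --url="$S3URL" --mp-size=5120 --mp-num=8
end script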

I added a check to clean up (i.e. unmount) the mount point in case yas3fs dies uncleanly (e.g. from a kill -9).
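
Assuming you save this as /etc/init/yas3fs.conf (the job name "yas3fs" is just the filename; pick whatever you like), the mount can then be managed like any other upstart job:

sudo start yas3fs      # mounts the bucket
sudo status yas3fs     # shows the PID upstart is tracking
sudo restart yas3fs    # remounts cleanly
sudo stop yas3fs       # runs pre-stop, which unmounts the path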

