
Production Thin Best Practices

I'm using Thin as a server for Faye. To do that, I use something like this:

require 'faye'
bayeux = Faye::RackAdapter.new(:mount => '/faye', :timeout => 25)
bayeux.listen(9292)
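
Since the God config below launches Thin with "thin start -R faye.ru -p <port>", the rackup file hands the adapter to Thin with run rather than calling listen itself. A minimal faye.ru sketch (the :mount and :timeout values are taken from the snippet above; anything else is an assumption):

# faye.ru -- Thin supplies the port via -p, so no listen() call is needed here
require 'faye'

bayeux = Faye::RackAdapter.new(:mount => '/faye', :timeout => 25)
run bayeux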

The Thin process is monitored by God and all is well in Development.

However, I'm not sure if this is the right setup for production. What I would like to know is how this setup (with no Nginx or HAProxy in front) is going to perform in a production environment.

This is my God config:

# Faye
ports = [9292, 9293, 9294, 9295]
ports.each do |port|
  God.watch do |w|
    w.dir      = "#{rails_root}"
    w.name     = "faye-#{port}"
    w.group    = "faye"
    w.interval = 30.seconds
    w.start    = "thin start -R #{rails_root}/faye.ru -e production -p #{port} -P #{rails_root}/tmp/pids/faye-#{port}.pid"
    w.stop     = "thin stop -P #{rails_root}/tmp/pids/faye-#{port}.pid"
    w.log      = "#{rails_root}/log/god_node.log"

    #w.uid = 'server'
    #w.gid = 'server'

    # restart if memory usage is > 500mb
    w.transition(:up, :restart) do |on|
      on.condition(:memory_usage) do |c|
        c.above = 500.megabytes
        c.times = 2
      end
    end

    # determine the state on startup
    w.transition(:init, { true => :up, false => :start }) do |on|
      on.condition(:process_running) do |c|
        c.running = true
      end
    end

    # determine when process has finished starting
    w.transition([:start, :restart], :up) do |on|
      on.condition(:process_running) do |c|
        c.running = true
        c.interval = 10.seconds
      end

      # failsafe
      on.condition(:tries) do |c|
        c.times = 5
        c.transition = :start
        c.interval = 10.seconds
      end
    end

    # start if process is not running
    w.transition(:up, :start) do |on|
      on.condition(:process_running) do |c|
        c.running = false
      end
    end
  end
end

And I'm using Nginx for load balancing.
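
For reference, fronting the four Thin/Faye ports from the God config with Nginx could look roughly like this (a sketch only: the upstream name and timeouts are illustrative, and proxying WebSockets needs Nginx 1.3.13 or later):

upstream faye_cluster {
    server 127.0.0.1:9292;
    server 127.0.0.1:9293;
    server 127.0.0.1:9294;
    server 127.0.0.1:9295;
}

server {
    listen 80;

    location /faye {
        proxy_pass http://faye_cluster;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;     # pass WebSocket upgrades through
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_read_timeout 86400;                   # keep long-lived connections open
    }
}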

I have been using Thin to run Faye with Redis, making sure to set:

Faye::WebSocket.load_adapter('thin')
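
That line typically goes in the rackup file before the adapter is created. A sketch of how it fits together with the Redis-backed engine, assuming the faye-redis gem (the timeout and Redis host are illustrative):

require 'faye'
require 'faye/redis'

# register Thin's async extensions before Faye handles any connections
Faye::WebSocket.load_adapter('thin')

bayeux = Faye::RackAdapter.new(
  :mount   => '/faye',
  :timeout => 25,
  :engine  => {
    :type => Faye::Redis,
    :host => 'localhost'
  }
)

run bayeux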

All traffic goes through HAProxy (the frontend is named proxied because I redirect all traffic to HTTPS):

frontend proxied
    bind 127.0.0.1:81 accept-proxy
    timeout client 86400000
    default_backend nginx_backend
    acl is_websocket hdr(Upgrade) -i WebSocket
    acl is_websocket hdr_beg(Host) -i ws
    use_backend socket_backend if is_websocket


backend nginx_backend
    balance roundrobin
    option forwardfor #except 127.0.0.1 # This sets X-Forwarded-For
    timeout server 30000
    timeout connect 4000
    server nginx1 localhost:8081 weight 1 maxconn 20000 check

backend socket_backend
    balance roundrobin
    option forwardfor except 127.0.0.1 # This sets X-Forwarded-For
    timeout queue 5000
    timeout server 86400000
    timeout connect 86400000
    server socket1 localhost:3100 weight 1 maxconn 20000 check
    server socket2 localhost:3101 weight 1 maxconn 20000 check
    server socket3 localhost:3102 weight 1 maxconn 20000 check
    server socket4 localhost:3103 weight 1 maxconn 20000 check
    ...

If it is HTTP traffic, I route it through Nginx, which forwards it to the same set of Thin instances when the path includes /faye.
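
On the Nginx side, that can be a location block proxying /faye to the same Thin ports HAProxy uses for WebSockets. A sketch (only the ports 3100-3103 and the 8081 listener come from the configs above; everything else is illustrative):

upstream faye_thin {
    server 127.0.0.1:3100;
    server 127.0.0.1:3101;
    server 127.0.0.1:3102;
    server 127.0.0.1:3103;
}

server {
    listen 8081;    # the port HAProxy's nginx_backend points at

    location /faye {
        proxy_pass http://faye_thin;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_read_timeout 60;    # should comfortably exceed Faye's long-poll timeout
    }
}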

I am not an HAProxy expert, but this is working for both WebSocket and long-polling connections.
