
Rails + Unicorn + Nginx + Capistrano 3 + Linode VPS: Not Recognizing Staging Environment

I've been racking my brain for a few days now and have exhausted my research on this issue.

A little background: I have a rails app working completely fine in production. I added a staging environment to the same server under a different directory. I'm able to get to the staging site.

What I've Noticed:

  • My code changes (in staging) are showing correctly on the staging site.
  • My staging database has been created successfully.

The Problem:

It seems that my staging site thinks it's a production site. I feel like I'm not properly setting the staging environment somewhere. Some weird things are happening:

  • it writes to production.log
  • it uses the production database
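One quick way to confirm what the running process actually thinks is a throwaway debug action that prints the environment and the database in use. This is only a sketch: the controller and route names are made up, and ActiveRecord::Base.connection_config is the Rails 4-era API (newer Rails uses connection_db_config).

```ruby
# app/controllers/debug_controller.rb -- temporary, hypothetical controller;
# remove it once the environment question is settled.
class DebugController < ApplicationController
  def env
    # Shows both the Rails environment and the database the app is wired to,
    # which is exactly the mismatch described above.
    render plain: "Rails.env=#{Rails.env} " \
                  "database=#{ActiveRecord::Base.connection_config[:database]}"
  end
end

# config/routes.rb -- hypothetical route for the action above:
# get "/debug/env", to: "debug#env"
```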

The Code: (I've replaced actual domain names/IP's)

config/deploy.rb

# config valid only for current version of Capistrano
lock '3.3.5'

set :stages, %w(production staging)
set :default_stage, 'staging'

set :repo_url, 'git@github.com:test/test.git'
set :user, 'deploy'
set :linked_dirs, %w{log tmp/pids tmp/cache tmp/sockets public/system/members}

namespace :deploy do

  %w[start stop restart].each do |command|
    desc 'Manage Unicorn'
    task command do
      on roles(:app), in: :sequence, wait: 1 do
        execute "/etc/init.d/unicorn_#{fetch(:application)} #{command}"
      end      
    end
  end

  after :publishing, :restart

end

config/deploy/staging.rb

set :rails_env, 'staging'
set :application, 'test_staging'
set :deploy_to, '/var/www/staging.test.co'
set :branch, 'staging'

role :app, %w{deploy@IP_HERE}
role :web, %w{deploy@IP_HERE}
role :db,  %w{deploy@IP_HERE}

config/unicorn.rb

if ENV["RAILS_ENV"] == "production"
    root = "/var/www/test.co/current"
else
    root = "/var/www/staging.test.co/current"
end
working_directory root
pid "#{root}/tmp/pids/unicorn.pid"
stderr_path "#{root}/log/unicorn.log"
stdout_path "#{root}/log/unicorn.log"

if ENV["RAILS_ENV"] == "production"
    listen "/tmp/unicorn.test.sock"
else
    listen "/tmp/unicorn.test_staging.sock"
end
worker_processes 1
timeout 30

[On Server] /etc/init.d/unicorn_test_staging

#!/bin/sh
### BEGIN INIT INFO
# Provides:          unicorn
# Required-Start:    $remote_fs $syslog
# Required-Stop:     $remote_fs $syslog
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Manage unicorn server
# Description:       Start, stop, restart unicorn server for a specific application.
### END INIT INFO
set -e

# Feel free to change any of the following variables for your app:
TIMEOUT=${TIMEOUT-60}
APP_ROOT=/var/www/staging.test.co/current
PID=$APP_ROOT/tmp/pids/unicorn.pid
CMD="cd $APP_ROOT; bundle exec unicorn -D -c $APP_ROOT/config/unicorn.rb -E staging"
AS_USER=deploy
set -u

OLD_PIN="$PID.oldbin"

sig () {
  test -s "$PID" && kill -$1 `cat $PID`
}

oldsig () {
  test -s $OLD_PIN && kill -$1 `cat $OLD_PIN`
}

run () {
  if [ "$(id -un)" = "$AS_USER" ]; then
    eval $1
  else
    su -c "$1" - $AS_USER
  fi
}

case "$1" in
start)
  sig 0 && echo >&2 "Already running" && exit 0
  run "$CMD"
  ;;
stop)
  sig QUIT && exit 0
  echo >&2 "Not running"
  ;;
force-stop)
  sig TERM && exit 0
  echo >&2 "Not running"
  ;;
restart|reload)
  sig HUP && echo reloaded OK && exit 0
  echo >&2 "Couldn't reload, starting '$CMD' instead"
  run "$CMD"
  ;;
upgrade)
  if sig USR2 && sleep 2 && sig 0 && oldsig QUIT
  then
    n=$TIMEOUT
    while test -s $OLD_PIN && test $n -ge 0
    do
      printf '.' && sleep 1 && n=$(( $n - 1 ))
    done
    echo

    if test $n -lt 0 && test -s $OLD_PIN
    then
      echo >&2 "$OLD_PIN still exists after $TIMEOUT seconds"
      exit 1
    fi
    exit 0
  fi
  echo >&2 "Couldn't upgrade, starting '$CMD' instead"
  run "$CMD"
  ;;
reopen-logs)
  sig USR1
  ;;
*)
  echo >&2 "Usage: $0 <start|stop|force-stop|restart|reload|upgrade|reopen-logs>"
  exit 1
  ;;
esac

config/database.yml

development:
  adapter: postgresql
  encoding: unicode
  database: test_dev
  host: localhost
  pool: 5
  username: test
  password: password

staging:
  adapter: postgresql
  encoding: unicode
  database: test_staging

production:
  adapter: postgresql
  encoding: unicode
  database: test_production

Please let me know if you need any other code to help me find the issue here. I appreciate anyone's help on this.

Thanks!

It is quite possible that your staging server is calling into routes of your production server because they are on the same host.

You need to make sure you set up your subfolder with config.relative_url_root or the RAILS_RELATIVE_URL_ROOT environment variable.

Look up relative_url_root in Configuring Rails Applications
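For the subfolder case this answer describes, a minimal sketch would look like the following. The "/staging" mount point is an assumption for illustration, and note this only applies if the staging app really is served under a sub-URI rather than its own vhost.

```ruby
# config/environments/staging.rb -- hypothetical sub-URI "/staging"
Rails.application.configure do
  # Tell Rails the app is mounted under a sub-URI so generated URLs,
  # redirects, and asset paths get the correct prefix.
  config.relative_url_root = "/staging"
end
```

Alternatively, export RAILS_RELATIVE_URL_ROOT=/staging in the environment that starts the app server.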

Finally figured this out. I wrote Rails.env to the screen and it was returning "production" on my staging site. This is why it was using my production database and log.

I had initially forgotten to change my unicorn init script in /etc/init.d/unicorn_myapp to use -E staging when I first created my staging site. I set it to staging a few days ago, but that alone didn't solve the issue.

So out of desperation today, I went onto my server and tried a full unicorn stop and start:

sudo service unicorn_myapp stop
sudo service unicorn_myapp start

and that cleared up the issue! My Rails.env started returning "staging" and all is working correctly now.

TL;DR: If you change your unicorn init script, always make sure to fully stop and start the unicorn app. A simple restart (which sends HUP to the master) didn't work for me, presumably because command-line flags like -E are only read when the master process first starts.
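A way to double-check which flags the running master was actually started with is to read its command line via the pidfile. This is a Linux-only diagnostic sketch (it relies on /proc), and the pidfile path is the one assumed by the init script above; adjust it to your deploy.

```ruby
# Reads a Unicorn pidfile and returns the command line the master process
# was started with, so you can see whether "-E staging" is really in effect.
def unicorn_cmdline(pidfile)
  pid = File.read(pidfile).to_i
  # /proc/<pid>/cmdline stores arguments NUL-separated; join for readability.
  File.read("/proc/#{pid}/cmdline").split("\0").join(" ")
end

pidfile = "/var/www/staging.test.co/current/tmp/pids/unicorn.pid" # assumed path
puts unicorn_cmdline(pidfile) if File.exist?(pidfile)
```

If the printed line lacks -E staging, the master is still running with its old arguments and needs a full stop/start, exactly as described above.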
