
Rails production mode not working behind an AWS load balancer

My Rails 6 app works fine in development mode on EC2 instances, but when it is configured to use production mode, the load balancer is unable to pass its health check and cannot reach the app.

My health check:

(screenshot)

Security: Load Balancer

(screenshots)

Security: Rails App(s)

(screenshots)

Load balancer worked in development

(screenshot)

Here is the development setup that works with the load balancer.

Start rails:

rails s -p 3000 -b 0.0.0.0

which responded with:

=> Booting Puma
=> Rails 6.0.3.2 application starting in development 
=> Run `rails server --help` for more startup options
Puma starting in single mode...
* Version 4.3.5 (ruby 2.6.3-p62), codename: Mysterious Traveller
* Min threads: 5, max threads: 5
* Environment: development
* Listening on tcp://0.0.0.0:3000

config/environments/development.rb

Rails.application.configure do
  config.hosts << "xxxxxxxx.us-east-2.elb.amazonaws.com" # public DNS of the load balancer
  config.cache_classes = false
  config.eager_load = false
  config.consider_all_requests_local = true
  if Rails.root.join('tmp', 'caching-dev.txt').exist?
    config.action_controller.perform_caching = true
    config.action_controller.enable_fragment_cache_logging = true

    config.cache_store = :memory_store
    config.public_file_server.headers = {
      'Cache-Control' => "public, max-age=#{2.days.to_i}"
    }
  else
    config.action_controller.perform_caching = false

    config.cache_store = :null_store
  end
  config.action_mailer.raise_delivery_errors = false
  config.action_mailer.default_url_options = { :host => 'localhost:3000' }

  config.action_mailer.perform_caching = false
  config.active_support.deprecation = :log
  config.assets.debug = true
  config.assets.quiet = true
  config.file_watcher = ActiveSupport::EventedFileUpdateChecker
end

Below is the production setup (which does not work).

Start rails:

RAILS_ENV=production rails s -p 3000 -b 0.0.0.0

which responded with:

=> Booting Puma
=> Rails 6.0.3.2 application starting in production 
=> Run `rails server --help` for more startup options
Puma starting in single mode...
* Version 4.3.5 (ruby 2.6.3-p62), codename: Mysterious Traveller
* Min threads: 5, max threads: 5
* Environment: production
* Listening on tcp://0.0.0.0:3000


config/environments/production.rb

Rails.application.configure do
  config.hosts << "xxxxxxxx.us-east-2.elb.amazonaws.com" # public DNS of the load balancer
  config.hosts << "3.14.65.84"
  config.cache_classes = true
  config.eager_load = true
  config.consider_all_requests_local       = false
  config.action_controller.perform_caching = true
  config.public_file_server.enabled = ENV['RAILS_SERVE_STATIC_FILES'].present?
  config.assets.compile = false
  config.log_level = :debug
  config.log_tags = [ :request_id ]
  config.action_mailer.perform_caching = false
  config.i18n.fallbacks = true
  config.active_support.deprecation = :notify
  config.log_formatter = ::Logger::Formatter.new
  if ENV["RAILS_LOG_TO_STDOUT"].present?
    logger           = ActiveSupport::Logger.new(STDOUT)
    logger.formatter = config.log_formatter
    config.logger    = ActiveSupport::TaggedLogging.new(logger)
  end
end

Load Balancer: Health check fails!

(screenshot)

I also tried:

  1. Copy config/environments/development.rb to production.rb and run in the production environment =====> health check fails!
  2. Copy config/environments/production.rb to development.rb and run in the development environment =====> health check passes!

So it seems to be nothing about the Rails config itself, but about the way the app behaves in production mode on AWS.

Help: how do I make this Rails 6 app work in production on AWS EC2 behind a load balancer?

My company just ran into a very similar sounding issue. Once ECS spun up the tasks, we were able to access the Rails app through the ELB, but the health checks would fail and it would automatically shut down each container it tried to spin up.

We ended up adding our IP range to the hosts configuration. Completely disabling it in production didn't feel right, so we arrived at something akin to this:

config.hosts = [
  "publicdomain.com",
  "localhost",
  IPAddr.new("10.X.X.X/23")
]

The whitelisted IP address matches the range that will be used by ECS when creating and slotting in the containers. Hopefully that helps!
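For reference, this is how the IPAddr range matching behaves; Rails host authorization performs the same membership check against the request IP when the entry is an IPAddr. The 10.0.0.0/23 subnet here is a hypothetical stand-in, since the real range above is redacted:

```ruby
require "ipaddr"

# A /23 block covers 10.0.0.0 through 10.0.1.255 (512 addresses).
# Host authorization tests the request IP for membership in the range.
range = IPAddr.new("10.0.0.0/23") # hypothetical stand-in for 10.X.X.X/23

puts range.include?(IPAddr.new("10.0.1.200")) # => true  (inside the /23)
puts range.include?(IPAddr.new("10.0.2.1"))   # => false (outside the /23)
```

So as long as every ECS container IP falls inside the whitelisted CIDR block, the health-check requests pass.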

Instead of pinging the root path, I think it is better to create your own health-check route in the application, like this:

# controller
class HealthCheckController < ApplicationController
  def show
    render body: nil, status: 200
  end
end

# routes
get '/health_check', to: 'health_check#show'

Then update the ping path in the LB health check to /health_check.
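The endpoint's behavior can be sketched outside Rails as a bare Rack-style app, just to show the contract the load balancer relies on: an empty 200 response on the health-check path, regardless of the Host header.

```ruby
# Minimal Rack-style sketch of the health-check endpoint above: return
# 200 with an empty body for /health_check, 404 for everything else.
health_check = lambda do |env|
  if env["PATH_INFO"] == "/health_check"
    [200, { "Content-Type" => "text/plain" }, []]
  else
    [404, { "Content-Type" => "text/plain" }, ["not found"]]
  end
end

status, _headers, _body = health_check.call("PATH_INFO" => "/health_check")
puts status # => 200
```

Note that in a real Rails app the request still has to clear the host-authorization middleware before it reaches the controller, which is what the edit below addresses.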

Edit:

Replace config.hosts << "xxxxxxxx.us-east-2.elb.amazonaws.com" with config.hosts.clear in the production config file to make Rails accept the requests.

The missing information here is that Rails by default does not set config.hosts in production. The purpose of config.hosts is to protect against DNS rebinding in development environments, due to the presence of web-console.

This is the best article I found on the topic: https://prathamesh.tech/2019/09/02/dns-rebinding-attacks-protection-in-rails-6/

For us, we have set config.hosts in application.rb for our primary domain and subdomain and then customized it in all the other environments. As such, this causes config.hosts to be enforced in production and then fail AWS health checks as observed by the OP.

You have two options:

  1. Remove config.hosts completely in production. Since this is not set by Rails by default, the presumption is that DNS rebinding attacks are not an issue in prod.
  2. Determine the request IP in production.rb. The above solutions tie the app to the infrastructure, which is not good: what if you want to deploy your app to a new region? You can do this statically or dynamically.
    1. Static: set an environment variable to pull in the ELB request IP addresses. If you're using AWS, hopefully you're using CloudFormation, so you can pass the appropriate values through as an ENV or Parameter Store variable.
    2. Dynamic: use the AWS Ruby SDK to pull in the ELB IP addresses.
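The static option can be sketched as follows; the variable names PRIMARY_DOMAIN and ELB_CIDR are assumptions, not a convention, and in production.rb these entries would be appended to config.hosts:

```ruby
require "ipaddr"

# Build the host allowlist from infrastructure-provided environment
# variables (hypothetical names). A CIDR entry becomes an IPAddr so the
# whole ELB subnet is matched, not a single address.
allowed_hosts = [ENV.fetch("PRIMARY_DOMAIN", "publicdomain.com")]
allowed_hosts << IPAddr.new(ENV["ELB_CIDR"]) if ENV["ELB_CIDR"]

puts allowed_hosts.inspect
```

This keeps the infrastructure-specific values out of the codebase, so deploying to a new region only means injecting different variables.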

Another relevant thread for those who come across this: https://discuss.rubyonrails.org/t/feature-proposal-list-of-paths-to-skip-when-checking-host-authorization/76246

Had the same issue today. In my case, I simply lowered the bar and accepted 403 as healthy. It's not ideal, but we shouldn't sacrifice host protection, nor open it up to a wide range of predictable IPs.

Health check configuration (screenshot)


Update 1:

Rails has supported an exclude option since 6.1:

config.host_authorization = { exclude: ->(request) { request.path =~ /healthcheck/ } }

Ref: https://api.rubyonrails.org/classes/ActionDispatch/HostAuthorization.html
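The exclude option takes any callable. Here is the predicate from the config above exercised in isolation, with a minimal stand-in for the request object (only the #path method is needed for this check):

```ruby
# Stand-in for ActionDispatch::Request exposing only #path, to show
# which requests the exclusion lambda lets skip host authorization.
Request = Struct.new(:path)

exclude = ->(request) { request.path =~ /healthcheck/ }

puts exclude.call(Request.new("/healthcheck")) ? "skipped" : "checked" # => skipped
puts exclude.call(Request.new("/orders")) ? "skipped" : "checked"      # => checked
```

Any request whose path matches the regexp bypasses the host check, so the target group can probe by IP while all other traffic still gets host protection.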

The main reason is that the health-check connection from the target group to the container uses an IP address, not a domain name, so Rails responds with a 403. Either accept 403 as healthy or exclude the path from host authorization.

