Dockerize PHP Application for Production

We have a PHP (specifically Laravel) application that should be dockerized for the production environment, but there is a problem with sharing the application source code between the web server and PHP-FPM containers.

Both Nginx and PHP-FPM need access to the application source code, so here are the workarounds suggested on the web:

  1. Have two separate containers for Nginx and PHP-FPM, keep the source code on the host machine, and create a volume from it. Then assign this volume to both containers. This solution is not desirable because every time the application code changes, the entire stack has to be rebuilt and the created volume has to be flushed. Also, these tasks have to be executed on every one of our servers, which may waste a lot of time.
  2. Run both PHP-FPM and Nginx in the same container and keep their processes running with supervisor or an entrypoint script. In this solution, when the source code changes, we build the image once and, hopefully, there is no shared volume to flush, so it seems a good workaround. But the main problem with this solution is that it violates the idea behind containerization. Docker says in its documentation:

    You should have one concern (or running process) per container.

    But here, we have two running processes!

Is there any other solution that may work in the production environment? I should mention that we are going to use Swarm or Kubernetes in the near future.

Thanks.

In general, both approaches should be avoided in production, but if I compare volume mounting with two processes per container, I would go for two processes per container rather than mounting host code into the container.

There are some cases where the first approach fails, for example on Fargate, which is a kind of serverless platform where there is no host to mount from; in that situation you will definitely go for running two processes per container.

The main issue with running multiple processes per container is "what if php-fpm is down while the Nginx process is still running?". But you can handle this case in several ways; you can look at the approach suggested by the Docker documentation.

docker-multi-service_container

The Docker documentation covers this scenario with a custom script or supervisord.

If you need to run more than one service within a container, you can accomplish this in a few different ways.

  • Put all of your commands in a wrapper script, complete with testing and debugging information. Run the wrapper script as your CMD. This is a very naive example. First, the wrapper script:
#!/bin/bash

# Start the first process
./my_first_process -D
status=$?
if [ $status -ne 0 ]; then
  echo "Failed to start my_first_process: $status"
  exit $status
fi

# Start the second process
./my_second_process -D
status=$?
if [ $status -ne 0 ]; then
  echo "Failed to start my_second_process: $status"
  exit $status
fi

# Naive check runs checks once a minute to see if either of the processes exited.
# This illustrates part of the heavy lifting you need to do if you want to run
# more than one service in a container. The container exits with an error
# if it detects that either of the processes has exited.
# Otherwise it loops forever, waking up every 60 seconds

while sleep 60; do
  ps aux |grep my_first_process |grep -q -v grep
  PROCESS_1_STATUS=$?
  ps aux |grep my_second_process |grep -q -v grep
  PROCESS_2_STATUS=$?
  # If the greps above find anything, they exit with 0 status
  # If they are not both 0, then something is wrong
  if [ $PROCESS_1_STATUS -ne 0 -o $PROCESS_2_STATUS -ne 0 ]; then
    echo "One of the processes has already exited."
    exit 1
  fi
done
  • Use a process manager like supervisord. This is a moderately heavy-weight approach that requires you to package supervisord and its configuration in your image (or base your image on one that includes supervisord), along with the different applications it manages. Then you start supervisord, which manages your processes for you. Here is an example Dockerfile using this approach, which assumes the pre-written supervisord.conf, my_first_process, and my_second_process files all exist in the same directory as your Dockerfile.
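A sketch of that Dockerfile, following the example in the Docker documentation (the Ubuntu base image and file names are only illustrative):

# Dockerfile -- multi-service image managed by supervisord
FROM ubuntu:latest
RUN apt-get update && apt-get install -y supervisor
RUN mkdir -p /var/log/supervisor
# supervisord.conf defines a [program:...] section for each managed process
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
COPY my_first_process my_first_process
COPY my_second_process my_second_process
# supervisord runs in the foreground and becomes the container's single entry point
CMD ["/usr/bin/supervisord"]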

But if you do go with supervisord, you can check "shutdown supervisor once one of the programs is killed" and other similar approaches to monitor the processes.

You can create two separate Docker images, one with only your static assets and one with the runnable backend code. The static-asset image could be as minimal as:

# Dockerfile.nginx
FROM nginx:latest
COPY . /usr/share/nginx/html
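The backend image is not shown in the answer; a minimal sketch of what it could look like, assuming a stock php-fpm base image and a Composer-based Laravel project (the base image, paths, and flags are illustrative):

# Dockerfile -- hypothetical PHP-FPM backend image
FROM php:8.2-fpm
WORKDIR /var/www/html
# Bring in Composer and install dependencies before copying the full source,
# so the dependency layer stays cached across code-only changes
COPY --from=composer:2 /usr/bin/composer /usr/bin/composer
COPY composer.json composer.lock ./
RUN composer install --no-dev --no-interaction --prefer-dist --no-scripts
# Copy the application source into the image -- no bind mounts anywhere
COPY . .
RUN composer dump-autoload --optimize
# php:fpm images listen on port 9000 and run php-fpm in the foreground by default
EXPOSE 9000
CMD ["php-fpm"]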

Don't bind-mount anything anywhere. Do have your CI system build both images:

TAG=20191214
docker build -t myname/myapp-php:$TAG .
docker build -t myname/myapp-nginx:$TAG -f Dockerfile.nginx .

Now you can run two separate containers (not violating the one-process-per-container guideline), scale them independently (3 Nginx but 30 PHP), and not have to manually copy your source code around.
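Outside an orchestrator, wiring the two containers together could look like this (a sketch; it assumes the nginx.conf baked into the Nginx image reaches the PHP container by name, e.g. fastcgi_pass myapp-php:9000, and the container and network names are made up):

# Run the two images as separate containers on a shared user-defined network
docker network create myapp
docker run -d --name myapp-php --network myapp myname/myapp-php:$TAG
docker run -d --name myapp-nginx --network myapp -p 80:80 myname/myapp-nginx:$TAG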

Another useful technique is to publish your static assets to some external hosting system; if you're running in AWS anyway, S3 works well here. You will still need some kind of proxy to forward requests to either the asset store or your backend service, but that can now just be an Nginx with a custom config file; it doesn't need any of your application code in it. (In Kubernetes you could run this with an Nginx Deployment pointing at a ConfigMap holding the nginx.conf file.)
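The two halves of that setup might be driven by commands like these (the bucket and ConfigMap names are hypothetical):

# Publish the static assets to an S3 bucket
aws s3 sync public/ s3://myapp-static-assets/

# In Kubernetes, expose the custom nginx.conf to the proxy Deployment via a ConfigMap
kubectl create configmap nginx-conf --from-file=nginx.conf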

When you set up your CI system, you definitely should not bind-mount code into your containers at build or integration-test time. Test what's actually in the containers you're building, not some other copy of your source code.
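A sketch of such a CI step for the Laravel image (the tag scheme and test command are illustrative, and assume the image under test still contains the test suite; older projects may run vendor/bin/phpunit instead of artisan test):

# Build the image, then run the test suite against the image itself
TAG=$(date +%Y%m%d)
docker build -t myname/myapp-php:$TAG .
docker run --rm myname/myapp-php:$TAG php artisan test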
