How to handle node_modules with docker-compose in a monorepo project

I'm running a Node.js monorepo project using yarn workspaces. The file structure looks like this:

workspace_root
    node_modules
    package.json
    apps
        appA
            node_modules
            package.json
        appB
            node_modules
            package.json
    libs
        libA
            dist
            node_modules
            package.json

All apps are independent, but they all require libA.

I'm running all these apps with docker-compose. My question is how to handle all the dependencies properly, as I don't want the node_modules folders to be synchronized with the host. Locally, when I run yarn install at the workspace root, it installs the dependencies for all projects, populating the different node_modules folders. In docker-compose, ideally each app should not be aware of the other apps.
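
For context, a minimal root package.json enabling such a workspace might look like the sketch below (the workspace globs are assumptions matching the tree above):

{
  "name": "workspace_root",
  "private": true,
  "workspaces": [
    "apps/*",
    "libs/*"
  ]
}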

My approach so far, which works but is not ideal and not very scalable:

version: "3.4"

services:
  # appA acts as the core: it installs the dependencies for ALL services. Each service
  # must wait for the core and then just do its job, not having to handle the install.
  appA:
    image: node:14-alpine
    volumes: # We must mount every volume for the install
        - .:/app  # Mount the whole workspace structure
        - root_node_modules:/app/node_modules
        - appA_node_modules:/app/apps/appA/node_modules
        - appB_node_modules:/app/apps/appB/node_modules
        - libA_node_modules:/app/libs/libA/node_modules
    working_dir: /app/apps/appA
    command: [sh, -c, "yarn install && yarn run start"]

  appB:
    image: node:14-alpine
    volumes: # We must mount every volume for the install
        - .:/app  # Mount the whole workspace structure
        - root_node_modules:/app/node_modules
        - appB_node_modules:/app/apps/appB/node_modules
    working_dir: /app/apps/appB
    command: [sh, -c, "/scripts/wait-for-it.sh appA:4001  -- yarn run start"]

    # And so on for all apps....
  
volumes:
    root_node_modules:
        driver: local
    appA_node_modules:
        driver: local
    appB_node_modules:
        driver: local
    libA_node_modules:
        driver: local

The main drawbacks I see:

  • Service appA is responsible for installing the dependencies of ALL apps.
  • I have to create a volume for each app, plus one for the root node_modules.
  • The whole project is mounted in each service, even though each one only uses a specific folder.

I would like to avoid a build step for development: it has to be repeated each time you add a dependency, which is cumbersome and slows you down.

I'm attaching at the bottom of this answer an example repository I have created.

Basically, utilizing yarn workspaces, I have created a common Dockerfile for each of the packages/modules to use when built.

The entire repository is copied into each of the docker images (this is not a good practice for later releasing the product; you would probably want to create a different flow for that).

So if the entire repository is mounted into each of the running services, you can watch for changes in the libraries (in the repository I have configured nodemon so it will also watch the lib files).
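
The real Dockerfile is in the repository linked below; a minimal sketch of such a common development Dockerfile could look like this (the PACKAGE_DIR default and the dev script are assumptions):

FROM node:14-alpine
WORKDIR /app
# Copy the whole workspace so yarn can link the workspace packages (dev flow only)
COPY . .
RUN yarn install
# Each package selects its own directory at build time
ARG PACKAGE_DIR=apps/appA
WORKDIR /app/${PACKAGE_DIR}
CMD ["yarn", "run", "dev"]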

To sum this up:

  1. Hot reload even when libraries change, because the entire project is mounted into each service's docker container.
  2. Utilizing yarn workspaces to manage the packages easily with convenience commands.
  3. To be rebuilt each time they change, the libraries should each have their own docker container raised by docker-compose (see the sketch after this list).
  4. This development process is not a good practice for anything production-related, like releasing the docker images later, since the whole repository is available in the image.
  5. Once the libraries are added as docker services, each with hot reload, they will be rebuilt every time you make a change, so there is no need to run docker-compose build repeatedly.
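
As a sketch of point 3, a library could be raised as its own service in docker-compose along these lines (the watch script is an assumption, e.g. tsc --watch rebuilding dist/ on change):

  libA:
    image: node:14-alpine
    volumes:
      - .:/app
    working_dir: /app/libs/libA
    command: [sh, -c, "yarn run watch"]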

Anyway, I would not worry much about the repeated docker-compose build: once the libraries are settled and changes become less frequent, you will find yourself rebuilding less often (but I gave a solution for that as well).
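
As mentioned above, nodemon can watch the lib files too; a rough nodemon.json for appA could look like this (the watched paths and entry point are assumptions):

{
  "watch": ["src", "../../libs/libA/dist"],
  "ext": "js,json",
  "exec": "node src/index.js"
}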

GitHub repository example

I believe that in your case, the best thing to do is to build your own Docker image instead of using the stock node image. So, let's do some coding. First of all, you should tell Docker to ignore the node_modules folders. In order to do that, you'll need to create a .dockerignore and a Dockerfile for each of your apps. So, your structure might look like this:

workspace_root
node_modules
package.json
apps
    appA
        .dockerignore
        node_modules
        Dockerfile
        package.json
    appB
        .dockerignore
        node_modules
        Dockerfile
        package.json
libs
    libA
        .dockerignore
        dist
        node_modules
        Dockerfile
        package.json

In each .dockerignore file, you can repeat the same two entries below.

node_modules/
dist/

That will make docker ignore those folders during the build. And now to the Dockerfile itself. In order to make sure your project runs fine inside your container, the best practice is to build your project in the container, not outside it. It avoids lots of "works fine on my computer" problems. That said, one example of a Dockerfile could look like this:

# build stage
FROM node:14-alpine AS build-stage
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build

# production stage
FROM nginx:stable-alpine AS production-stage
COPY --from=build-stage /app/dist /usr/share/nginx/html
COPY prod_nginx.conf /etc/nginx/nginx.conf
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]

In this case I also used nginx, to make sure users reach the container through a proper web server. At the end I'll include the prod_nginx.conf as well. But the point here is that you can just build that image and push it to Docker Hub, and from there use it in your docker-compose.yml instead of using a raw node image.
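
For example, from the workspace root (the account/image name is illustrative; note that Docker image names must be lowercase):

docker build -t mydockeraccount/appa ./apps/appA
docker push mydockeraccount/appa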

The docker-compose.yml would look like this:

version: "3.4"

services:
  appA:
    image: mydockeraccount/appa
    container_name: container-appA
    ports:
      - "8080:80"
    ....

Now, as promised, the prod_nginx.conf:

user                    nginx;
worker_processes        1;
error_log               /var/log/nginx/error.log warn;
pid                     /var/run/nginx.pid;
events {
    worker_connections  1024;
}

http {
    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;
    log_format          main '$remote_addr - $remote_user [$time_local] "$request" '
                             '$status $body_bytes_sent "$http_referer" '
                             '"$http_user_agent" "$http_x_forwarded_for"';
    access_log          /var/log/nginx/access.log main;
    sendfile            on;
    keepalive_timeout   65;
    server {
        listen          80;
        server_name     _ default_server;
        index           index.html;
        location / {
            root        /usr/share/nginx/html;
            index       index.html;
            try_files   $uri $uri/ /index.html;
        }
    }
}

Hope it helps. Best regards.
