
Issue with docker-compose console output

The problem

When developing, I run docker-compose up so I only need a quick look at the terminal (the integrated VS Code terminal) to check that my unit tests, lint job, and everything else are running fine.

Likewise, if I put a console.log in the API, it just pops up in the terminal and I can debug from it.

However, since this afternoon I only get logs from the transpiler, kibana, and apm-server containers, not from all of them.

What I want to fix

I used to hit ctrl + s to trigger the linter and mocha containers (both use nodemon, so modifying a file makes them produce output) and to build the TypeScript files into JS (the transpiler runs in watch mode), with everything they print showing up in the terminal.

Even though I put some console.log calls in the code, there is no output from api, mocha, or linter.

I did not do any major updates; I just switched computers (both run Ubuntu Linux with docker installed), and I cannot figure out how to fix this.

The docker-compose.yml file

version: "3.3"
services:

  api:
    container_name: api
    build: .
    env_file:
      - .env
    volumes:
      - .:/app
      - /app/node_modules
    ports:
      - 9000:9000
    restart: always
    depends_on:
      - mongo
      - elasticsearch
    command: sh -c "mkdir -p dist && touch ./dist/app.js && yarn run start"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/api/v1/ping"]
      interval: 1m30s
      timeout: 10s
      retries: 3

  transpiler:
    container_name: transpiler
    build: .
    restart: always
    volumes:
      - .:/app
      - /app/node_modules
    command: yarn run transpile -w

  linter:
    container_name: linter
    build: .
    restart: always
    volumes:
      - .:/app
      - /app/node_modules
    # https://github.com/yarnpkg/yarn/issues/5457 --silent not working
    command: nodemon --delay 500ms --exec yarn run lint

  mongo:
    container_name: mongo
    image: mongo:4.0
    restart: always
    ports:
      - 27017:27017
    command: mongod
    volumes:
      - ./db/mongodb:/data/db

  mongo_express:
    container_name: mongo_express
    restart: always
    image: mongo-express
    ports:
      - 8081:8081
    depends_on:
      - mongo
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8081"]
      interval: 2m30s
      timeout: 10s
      retries: 3

  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.6.0
    container_name: elasticsearch
    restart: always
    volumes:
      - ./db/elasticsearch:/usr/share/elasticsearch/data
    environment:
      - bootstrap.memory_lock=true
      - ES_JAVA_OPTS=-Xms512m -Xmx512m
      - discovery.type=single-node
    ports:
      - 9300:9300
      - 9200:9200
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9200"]
      interval: 1m30s
      timeout: 10s
      retries: 3

  kibana:
    container_name: kibana
    restart: always
    image: docker.elastic.co/kibana/kibana:6.6.0
    ports:
      - 5601:5601
    depends_on:
      - elasticsearch
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:5601"]
      interval: 1m30s
      timeout: 10s
      retries: 3

  logstash:
    container_name: logstash
    restart: always
    image: docker.elastic.co/logstash/logstash:6.6.0
    ports:
      - 9600:9600
    environment:
      - KILL_ON_STOP_TIMEOUT=1
    volumes:
      - ./logstash/settings/:/usr/share/logstash/config/
      - ./logstash/pipeline/:/usr/share/logstash/pipeline/
    depends_on:
      - elasticsearch
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9600"]
      interval: 1m30s
      timeout: 10s
      retries: 3

  apm-server:
    container_name: apm_server
    restart: always
    image: docker.elastic.co/apm/apm-server:6.6.0
    ports:
      - 8200:8200
    volumes:
      - ./apm_settings/apm-server.yml:/usr/share/apm-server/apm-server.yml
    depends_on:
      - elasticsearch
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8200"]
      interval: 1m30s
      timeout: 10s
      retries: 3

  mocha:
    container_name: mocha
    restart: always
    build: .
    volumes:
      - .:/app
      - /app/node_modules
    command: nodemon --delay 500ms --exec yarn run test-coverage
    env_file:
      - .env
    environment:
      NODE_ENV: 'test'

volumes:
  esdata:

Dockerfile

# Node 10 on Alpine
FROM mhart/alpine-node:10
ADD . /app
WORKDIR /app

# Install build dependencies for native modules, install the packages and
# nodemon, then remove the build dependencies to keep the image small
RUN apk add --no-cache --virtual .gyp g++ libtool make python curl &&\
    yarn &&\
    yarn global add nodemon &&\
    apk del .gyp
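
Since api, transpiler, linter, and mocha are all built from this same Dockerfile, a one-off run can confirm that the underlying scripts still work when executed outside nodemon's watch loop. A diagnostic sketch, using the service names from the compose file above:

# Run the lint script once in a throwaway container; if this prints the
# usual lint output, the script itself is fine and the watch trigger is suspect:
docker-compose run --rm linter yarn run lint

# Same idea for the test suite:
docker-compose run --rm mocha yarn run test-coverage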

Sample data

When I run docker-compose up, all the output is fine:

mongo            | 2019-03-22T23:11:26.048+0000 I NETWORK  [conn6] end connection 172.22.0.8:52266 (3 connections now open)
apm_server       | 2019-03-22T23:11:26.048Z     INFO    [request]       beater/v2_handler.go:96 error handling request  {"request_id": "77b88109-c7c0-41a2-a28c-2343a82862bd", "method": "POST", "URL": "/intake/v2/events", "content_length": -1, "remote_address": "172.22.0.8", "user-agent": "elastic-apm-node/2.6.0 elastic-apm-http-client/7.1.1", "error": "unexpected EOF"}
api              | [nodemon] app crashed
api              | error Command failed with exit code 1.
api              | info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
mocha            | 
mocha            | 
mocha            | Express server listening on port 9000, in test mode
mocha            |   GET PING ressource
mocha            |     GET /api/v1/ ping/
mongo            | 2019-03-22T23:11:27.951+0000 I NETWORK  [listener] connection accepted from 172.22.0.2:39956 #8 (4 connections now open)
mongo            | 2019-03-22T23:11:27.961+0000 I NETWORK  [conn8] received client metadata from 172.22.0.2:39956 conn8: { driver: { name: "nodejs", version: "3.1.13" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "4.20.7-042007-generic" }, platform: "Node.js v10.15.3, LE, mongodb-core: 3.1.11" }
mongo            | 2019-03-22T23:11:28.051+0000 I NETWORK  [listener] connection accepted from 172.22.0.2:39958 #9 (5 connections now open)
mongo            | 2019-03-22T23:11:28.197+0000 I NETWORK  [listener] connection accepted from 172.22.0.2:39962 #10 (6 connections now open)
mocha            |       ✓ ping api (154ms)

Yes, I know those logs show some errors, but my main concern is still getting them printed in the terminal.

But hitting ctrl + s just shows this (this is my real problem):

transpiler       | [10:59:15 PM] File change detected. Starting incremental compilation...
transpiler       | 
transpiler       | [10:59:15 PM] Found 0 errors. Watching for file changes.
transpiler       | 
apm_server       | 2019-03-22T22:59:40.309Z     INFO    [request]       beater/common_handlers.go:272   handled request {"request_id": "5948c9ee-c6fd-42ad-bd1e-acc259e1634c", "method": "POST", "URL": "/intake/v2/events", "content_length": -1, "remote_address": "172.22.0.11", "user-agent": "elastic-apm-node/2.6.0 elastic-apm-http-client/7.1.1", "response_code": 202}
kibana           | {"type":"response","@timestamp":"2019-03-22T22:59:44Z","tags":[],"pid":1,"method":"get","statusCode":302,"req":{"url":"/","method":"get","headers":{"user-agent":"curl/7.29.0","host":"localhost:5601","accept":"*/*"},"remoteAddress":"127.0.0.1","userAgent":"127.0.0.1"},"res":{"statusCode":302,"responseTime":7,"contentLength":9},"message":"GET / 302 7ms - 9.0B"}
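
One way to tell apart "the watchers are not firing" from "docker-compose is not displaying the output" is to change a file and read a single container's log stream directly. A sketch (the source file path is hypothetical):

# Touch a watched source file from the host, then read the container's
# log directly, bypassing compose's multiplexed output:
touch src/app.ts
docker logs --tail 20 mocha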

What I tried (that did not work)

  • Removing all the containers
  • Removing all the containers and their volumes
  • Removing all the containers, their volumes, and all the images
  • Rebooting
  • Rebuilding after removing everything (docker-compose build)
  • Running the docker-compose up command from a plain terminal, to make sure it was not an issue with the VS Code integrated terminal
  • Restarting the docker service (sudo systemctl restart docker)
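
Something that could still be checked is whether the silent containers are writing anything to Docker's log stream at all. A sketch; if docker logs shows output that docker-compose up does not, the problem is on the compose/display side:

# Follow one silent container's log stream directly:
docker logs -f --tail 50 api

# Verify the container uses a logging driver whose output can be read back
# (json-file by default); `docker logs` only works with some drivers:
docker inspect --format '{{.HostConfig.LogConfig.Type}}' api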

When you rebuild everything, the most likely thing to have changed is an npm package somewhere (one you did not realize you had a dependency on).

Also, you said you switched computers. Does it still work as expected on the previous computer and OS?
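
A sketch of the kind of from-scratch rebuild that comment suggests, which also discards the anonymous /app/node_modules volumes so no stale dependencies are reused:

# Stop everything and remove the containers plus their named and anonymous volumes:
docker-compose down -v

# Rebuild the images without using the build cache:
docker-compose build --no-cache

# Recreate the containers even if their configuration looks unchanged:
docker-compose up --force-recreate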

