
Docker-Compose can't resolve names of containers?

I have a weird problem: everything seems to have been working fine until today, and I can't tell what has changed since then. I run docker-compose up --build --force-recreate and the build fails, saying that it can't resolve the host name.

The issue is specifically caused by the curl commands inside one of the Dockerfiles:

USER logstash
WORKDIR /usr/share/logstash
RUN ./bin/logstash-plugin install logstash-input-beats

WORKDIR /tmp
COPY templates/winlogbeat.template.json winlogbeat.template.json
COPY templates/metricbeat.template.json metricbeat.template.json

RUN curl -XPUT -H 'Content-Type: application/json' http://elasticsearch:9200/_template/metricbeat-6.3.2 -d@metricbeat.template.json
RUN curl -XPUT -H 'Content-Type: application/json' http://elasticsearch:9200/_template/winlogbeat-6.3.2 -d@winlogbeat.template.json

Originally, I had those commands running inside the Elasticsearch container, but it stopped working, reporting: Could not resolve host: elasticsearch; Unknown error

I thought maybe it was trying to run the RUN commands too soon, so I moved the process to the Logstash container, but the issue remains. Logstash depends on Elasticsearch, so Elasticsearch should be up and running by the time the Logstash container tries to run this.

I've tried deleting images, containers, networks, etc., but nothing seems to let me run these curl commands during the build process.

I'm thinking that perhaps the Docker daemon is caching DNS names, but I can't figure out how to reset it, as I've already deleted and recreated the network several times.

Can anyone offer any ideas?

Host: Ubuntu Server 18.04

SW: Docker-CE (current version)

ELK stack: All are the official 6.3.2 images provided by Elastic.

docker-compose.yml:

version: '2'

services:

  elasticsearch:
    build:
      context: elasticsearch/
    volumes:
      - esdata:/usr/share/elasticsearch/data
      - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro
#    ports:
#      - "9200:9200"
#      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xmx512m -Xms512m"
      HOSTNAME: "elasticsearch"
    networks:
      - elk

  logstash:
    build:
      context: logstash/
    volumes:
      - ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml:ro
      - ./logstash/pipeline:/usr/share/logstash/pipeline:ro
    ports:
      - "5000:5000"
      - "5044:5044"
      - "5045:5045"
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk
    depends_on:
      - elasticsearch

  kibana:
    build:
      context: kibana/
    volumes:
      - ./kibana/config/:/usr/share/kibana/config:ro
# Port 5601 is not exposed outside of the container
# Can be accessed through Nginx Reverse Proxy only
#    ports:
#      - "5601:5601"
    networks:
      - elk
    depends_on:
      - elasticsearch

  nginx:
    build:
      context: nginx/
    environment:
      - APPLICATION_URL=http://docker.local
    volumes:
      - ./nginx/conf.d/:/etc/nginx/conf.d:ro
    ports:
      - "80:80"
    networks:
      - elk
    depends_on:
      - elasticsearch

  fouroneone:
    build:
      context: fouroneone/
# No direct access, only through Nginx Reverse Proxy
#    ports:
#      - "8181:80"
    networks:
      - elk
    depends_on:
      - elasticsearch

networks:
  elk:
    driver: bridge

volumes:
  esdata:

Running curl against elasticsearch here is the wrong shortcut: RUN steps execute at build time, when the build container is not attached to the Compose network, so the name elasticsearch cannot be resolved at all; and even at run time Elasticsearch may not be up yet. A Dockerfile may be the wrong place for this altogether.

Also, I would not put this script in the Dockerfile; if I really wanted to stay with the Dockerfile, I would use it to alter the ENTRYPOINT of the image instead (and again, I would not advise it).

The best thing to do here is to have a logstash service in the compose file whose image only adds the updated input plugin, and to remove all the other lines from the Dockerfile. You could then have a logstash_setup service that does the setup bits (using the logstash image, or, even cleaner, a basic centos image, which has bash and curl installed - since all you do is run a couple of curl commands passing in some files); a sketch of such a service is shown below.
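For illustration only, a minimal sketch of what such a logstash_setup service could look like, assuming the two template files and a setup.sh script (like the one further below) live in a ./logstash_setup/ directory - the image, paths, and file names here are assumptions, not part of the original setup:

  logstash_setup:
    image: centos:7                  # any small image with bash and curl will do
    volumes:
      - ./logstash_setup/:/setup:ro  # setup.sh plus the .template.json files
    working_dir: /setup
    command: ["bash", "setup.sh"]
    networks:
      - elk
    depends_on:
      - elasticsearch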

The script I am talking about might look something like this:

#!/bin/bash
set -euo pipefail
es_url=http://elasticsearch:9200
# Wait for Elasticsearch to start up before doing anything.
until curl -s "$es_url" -k -o /dev/null; do
    sleep 1
done
# then put your code here, e.g. the template uploads from the question:
curl -XPUT -H 'Content-Type: application/json' "$es_url/_template/metricbeat-6.3.2" -d@metricbeat.template.json
curl -XPUT -H 'Content-Type: application/json' "$es_url/_template/winlogbeat-6.3.2" -d@winlogbeat.template.json
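Wired up like that, docker-compose up starts the setup container on the elk network, where the elasticsearch name does resolve, and the container simply exits once the templates have been uploaded. Since a PUT to _template just overwrites the template, re-running it on every docker-compose up is harmless.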
