
Elastic Stack [docker-elk] docker entry-point index creation using API calls

I'm working on a project using the Elastic Stack: Elasticsearch, Logstash and Kibana. The goal now is to run the stack with Docker. Based on https://github.com/deviantony/docker-elk, I configured the docker-compose file to run my own pipelines with my own parsing.

The problem I'm facing is automatic configuration (creating index templates and/or indices). I know I can use curl to make the API calls, and after some searching I found out about the entrypoint shell script. I tried copying the script from the official image and adding my `curl -XPUT` calls to it, but it doesn't work. My next reflex was to open the CLI and type in my curl calls manually, and that works fine.

So my question is: how can I run my API calls automatically when the container starts? I'm pretty new to Docker and the Elastic Stack.

Thank you!

EDIT:

I got it working with a basic bash image, with curl installed via a `RUN` command in my Dockerfile.

Now I have an issue with getting the services to communicate (I put them all on the same network). I'm probably doing it wrong, because I'm getting:

curl: (7) Failed to connect to localhost port 9200: Connection refused

EDIT 2:

I updated my scripts to call `elasticsearch:9200` without exposing ports, and it works fine now!

I also think that the script is not waiting for Elasticsearch to be ready before executing the calls.
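For reference, a minimal sketch of what such a configurator entrypoint could look like, waiting for Elasticsearch before issuing its calls (the template name, JSON file path and environment defaults are made-up examples, not taken from the question):

```shell
#!/bin/sh
# Hypothetical configurator entrypoint: wait for Elasticsearch, then configure it.
ES_URL="${ES_URL:-http://elasticsearch:9200}"
MAX_TRIES="${MAX_TRIES:-60}"

# Poll the cluster-health endpoint until Elasticsearch answers, or give up.
wait_for_es() {
  tries=0
  until curl --silent --fail "$ES_URL/_cluster/health" >/dev/null 2>&1; do
    tries=$((tries + 1))
    if [ "$tries" -ge "$MAX_TRIES" ]; then
      echo "Elasticsearch did not come up at $ES_URL" >&2
      return 1
    fi
    sleep 2
  done
}

# Idempotent: only PUT the index template if it does not exist yet.
create_template() {
  if ! curl --silent --fail --head "$ES_URL/_template/my-template" >/dev/null 2>&1; then
    curl --silent --fail -XPUT "$ES_URL/_template/my-template" \
      -H 'Content-Type: application/json' \
      -d @/templates/my-template.json
  fi
}

# Only run automatically when executed as the container's entrypoint script.
if [ "$(basename "$0")" = "entrypoint.sh" ]; then
  wait_for_es && create_template
fi
```

The `--head` existence check keeps the script safe to re-run on every container start.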

Here is my docker-compose file:

version: '3.2'

services:
  elasticsearch:
    build:
      context: elasticsearch/
      args:
        ELK_VERSION: $ELK_VERSION
    volumes:
      - type: bind
        source: ./elasticsearch/config/elasticsearch.yml
        target: /usr/share/elasticsearch/config/elasticsearch.yml
        read_only: true
      - type: volume
        source: elasticsearch
        target: /usr/share/elasticsearch/data
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"
      ELASTIC_PASSWORD: changeme
      # Use single node discovery in order to disable production mode and avoid bootstrap checks.
      # see: https://www.elastic.co/guide/en/elasticsearch/reference/current/bootstrap-checks.html
      discovery.type: single-node
    networks:
      - elk

  configurator:
    build:
      context: configurator/
    networks:
      - elk
    depends_on:
      - elasticsearch

  logstash:
    build:
      context: logstash/
      args:
        ELK_VERSION: $ELK_VERSION
    volumes:
      - type: bind
        source: ./logstash/config
        target: /usr/share/logstash/config
        read_only: true
      - type: bind
        source: ./logstash/pipeline
        target: /usr/share/logstash/pipeline
        read_only: true
      - type: bind
        source: ./logstash/data
        target: /usr/share/logstash/data
    ports:
      - "5044:5044"
      - "5000:5000/tcp"
      - "5000:5000/udp"
      - "9600:9600"
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk
    depends_on:
      - configurator

  kibana:
    build:
      context: kibana/
      args:
        ELK_VERSION: $ELK_VERSION
    volumes:
      - type: bind
        source: ./kibana/config/kibana.yml
        target: /usr/share/kibana/config/kibana.yml
        read_only: true
    ports:
      - "5601:5601"
    networks:
      - elk
    depends_on:
      - configurator

networks:
  elk:
    driver: bridge

volumes:
  elasticsearch:

Create a new hand-made Docker image (or start from a suitable base image) and add the API calls you need to issue on startup to its startup script. Make sure the script checks whether the calls still need to be executed, or whether everything was already done by a previous run.

Next, add the image to your Compose file and make it depend on the elasticsearch service with the `depends_on` option. This ensures that your cluster is running before your 'init container' starts.

Now, when starting up, the required services will be started (if not already running), and after that the init container will start.
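Such an init-container image can stay very small, since it only needs curl to talk to the Elasticsearch REST API. A sketch (base image, paths and file names are examples, not prescribed by the question):

```dockerfile
# Hypothetical configurator/Dockerfile for the init container
FROM alpine:3.18

# curl is all the configurator needs for its API calls
RUN apk add --no-cache curl

# the entrypoint script issues the index-template / index creation calls
COPY entrypoint.sh /entrypoint.sh
COPY templates/ /templates/
RUN chmod +x /entrypoint.sh

ENTRYPOINT ["/entrypoint.sh"]
```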

Optional: make the init container not start by default, but only when explicitly started. For an example, have a look at the monitor service and the x-enabled option here.
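A similar opt-in behaviour can also be achieved with Compose profiles (supported since docker-compose 1.28 and the Compose Specification); a sketch, reusing the service names from the compose file above:

```yaml
# configurator only starts when the "setup" profile is enabled, e.g.:
#   docker-compose --profile setup up -d
configurator:
  build:
    context: configurator/
  profiles: ["setup"]
  networks:
    - elk
```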

It's good practice to add healthchecks to your services. If you don't, Docker judges service health purely by container state (running = healthy). But Elasticsearch has a longer bootstrap procedure that we need to wait for.

Add the following to elasticsearch:

healthcheck:
  test: ["CMD-SHELL", "curl --silent --fail localhost:9200/_cluster/health || exit 1"]
  interval: 30s
  timeout: 30s
  retries: 5

And add this to the configurator service (note: the long `depends_on` form with `condition` was dropped in the version 3 Compose file format, so with `version: '3.2'` it only works on newer Compose releases that implement the Compose Specification):

depends_on:
  elasticsearch:
    condition: service_healthy
