
cannot connect to container with docker-compose

I'm using docker 1.12 and docker-compose 1.12, on OSX.

I created a docker-compose.yml file which runs two containers:

  • the first, named spark, builds and runs a sparkjava application
  • the second, named behave, runs some functional tests on the API exposed by the first container.

```yaml
version: "2"
services:
  behave:
    build:
      context: ./src/test
    container_name: "behave"
    links:
      - spark
    depends_on:
      - spark
    entrypoint: ./runtests.sh spark:9000
  spark:
    build:
      context: ./
    container_name: "spark"
    ports:
      - "9000:9000"
```

As recommended by the Docker Compose documentation, I use a simple shell script to test whether the spark server is ready. This script is named runtests.sh and runs inside the container named "behave". It is launched by docker-compose (see above):

```bash
#!/bin/bash

# This script waits for the API server to be ready before running functional tests with Behave
# the parameter should be the hostname for the spark server
set -e

host="$1"
echo "runtests host is $host"

until curl -L "http://$host"; do
  >&2 echo "Spark server is not ready - sleeping"
  sleep 5
done

>&2 echo "Spark server is up - starting tests"
behave
```
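As an aside, the unbounded `until curl` loop above waits forever if the server never comes up, and `curl` without `-f` treats any HTTP response (including a proxy's error page) as success. A minimal sketch of a bounded variant, written as a reusable function (the name `wait_for_http` and the retry defaults are my own, not from the original script):

```shell
# wait_for_http URL [MAX_TRIES] [DELAY]
# Polls URL with curl until it answers with a 2xx/3xx status (-f makes curl
# fail on HTTP error responses, so a proxy error page no longer counts as
# "up"), or gives up and returns 1 after MAX_TRIES attempts.
wait_for_http() {
  local url="$1" max_tries="${2:-30}" delay="${3:-5}" try=0
  until curl -fsSL --max-time 5 "$url" >/dev/null 2>&1; do
    try=$((try + 1))
    if [ "$try" -ge "$max_tries" ]; then
      echo "gave up waiting for $url after $max_tries attempts" >&2
      return 1
    fi
    sleep "$delay"
  done
}
```

runtests.sh could then call `wait_for_http "http://$host" && behave`, so the container exits with an error instead of hanging when the server is unreachable.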

The DNS resolution does not seem to work. curl makes a request to spark.com instead of a request to my container named "spark".
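For what it's worth, with the version "2" compose file format both services join a default network and can resolve each other by service name, so the `links:` entry is not needed for name resolution; `depends_on` alone controls start order. A minimal sketch of the same file without `links`:

```yaml
version: "2"
services:
  behave:
    build:
      context: ./src/test
    depends_on:
      - spark           # start order only; DNS comes from the default network
    entrypoint: ./runtests.sh spark:9000
  spark:
    build:
      context: ./
    ports:
      - "9000:9000"
```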

UPDATE:

By setting an alias for my link (`links: - spark:myserver`), I've seen that the DNS resolution is not done by Docker: I received an error message from a piece of corporate network equipment (I'm running this from behind a corporate proxy, with Docker for Mac). Here is an extract of the output:

```
Recreating spark
Recreating behave
Attaching to spark, behave
behave    | runtests host is myserver:9000
behave    |   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
behave    |                                  Dload  Upload   Total   Spent    Left  Speed
100   672  100   672    0     0    348      0  0:00:01  0:00:01 --:--:--   348
behave    | <HTML><HEAD>
behave    | <TITLE>Network Error</TITLE>
behave    | </HEAD>
behave    | <BODY>
behave    | ...
behave    | <big>Network Error (dns_unresolved_hostname)</big>
behave    | Your requested host "myserver" could not be resolved by DNS.
behave    | ...
behave    | </BODY></HTML>
behave    | Spark server is up - starting tests
```

Note that the wait loop reported "Spark server is up" because curl received the proxy's error page as a successful response. To solve this, I added a no_proxy environment variable naming the container I wanted to reach.

In the Dockerfile for the behave container, I have:

```dockerfile
ENV http_proxy=http://proxy.mycompany.com:8080
ENV https_proxy=http://proxy.mycompany.com:8080
ENV no_proxy=127.0.0.1,localhost,spark
```
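An alternative (a sketch, assuming the same proxy values as in the Dockerfile above) is to set these variables from docker-compose.yml instead of baking them into the image, which keeps the image itself proxy-agnostic:

```yaml
services:
  behave:
    environment:
      http_proxy: "http://proxy.mycompany.com:8080"
      https_proxy: "http://proxy.mycompany.com:8080"
      no_proxy: "127.0.0.1,localhost,spark"
```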
