Not able to connect to Elasticsearch from docker container (node.js client)
I have set up an elasticsearch/kibana docker configuration and I want to connect to elasticsearch from inside of a docker container using the @elastic/elasticsearch client for node. However, the connection is timing out.

The project takes inspiration from Patrick Triest: https://blog.patricktriest.com/text-search-docker-elasticsearch/

However, I have made some modifications in order to connect kibana, use a newer ES image, and the new elasticsearch node client.

I am using the following docker-compose file:
version: "3"
services:
  api:
    container_name: mp-backend
    build: .
    ports:
      - "3000:3000"
      - "9229:9229"
    environment:
      - NODE_ENV=local
      - ES_HOST=elasticsearch
      - PORT=3000
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.5.1
    container_name: elasticsearch
    environment:
      - node.name=elasticsearch
      - cluster.name=es-docker-cluster
      - discovery.type=single-node
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "http.cors.allow-origin=*"
      - "http.cors.enabled=true"
      - "http.cors.allow-headers=X-Requested-With,X-Auth-Token,Content-Type,Content-Length,Authorization"
      - "http.cors.allow-credentials=true"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data01:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - elastic
  kibana:
    image: docker.elastic.co/kibana/kibana:7.5.1
    ports:
      - "5601:5601"
    links:
      - elasticsearch
    networks:
      - elastic
    depends_on:
      - elasticsearch
volumes:
  data01:
    driver: local
networks:
  elastic:
    driver: bridge
When building/bringing the containers up, I am able to get a response from ES: curl -XGET "localhost:9200" returns "You Know, for Search"... And kibana is running and able to connect to the index.

I have the following file located in the backend container (connection.js):
const { Client } = require("@elastic/elasticsearch");
const client = new Client({ node: "http://localhost:9200" });

/* Check the elasticsearch connection */
async function health() {
  let connected = false;
  while (!connected) {
    console.log("Connecting to Elasticsearch");
    try {
      const health = await client.cluster.health({});
      connected = true;
      console.log(health.body);
      return health;
    } catch (err) {
      console.log("ES Connection Failed", err);
    }
  }
}

health();
If I run it outside of the container then I get the expected response:

node server/connection.js
Connecting to Elasticsearch
{
  cluster_name: 'es-docker-cluster',
  status: 'yellow',
  timed_out: false,
  number_of_nodes: 1,
  number_of_data_nodes: 1,
  active_primary_shards: 7,
  active_shards: 7,
  relocating_shards: 0,
  initializing_shards: 0,
  unassigned_shards: 3,
  delayed_unassigned_shards: 0,
  number_of_pending_tasks: 0,
  number_of_in_flight_fetch: 0,
  task_max_waiting_in_queue_millis: 0,
  active_shards_percent_as_number: 70
}
However, if I run it inside of the container:

docker exec mp-backend "node" "server/connection.js"

Then I get the following response:
Connecting to Elasticsearch
ES Connection Failed ConnectionError: connect ECONNREFUSED 127.0.0.1:9200
    at onResponse (/usr/src/app/node_modules/@elastic/elasticsearch/lib/Transport.js:214:13)
    at ClientRequest.<anonymous> (/usr/src/app/node_modules/@elastic/elasticsearch/lib/Connection.js:98:9)
    at ClientRequest.emit (events.js:223:5)
    at Socket.socketErrorListener (_http_client.js:415:9)
    at Socket.emit (events.js:223:5)
    at emitErrorNT (internal/streams/destroy.js:92:8)
    at emitErrorAndCloseNT (internal/streams/destroy.js:60:3)
    at processTicksAndRejections (internal/process/task_queues.js:81:21) {
  name: 'ConnectionError',
  meta: {
    body: null,
    statusCode: null,
    headers: null,
    warnings: null,
    meta: {
      context: null,
      request: [Object],
      name: 'elasticsearch-js',
      connection: [Object],
      attempts: 3,
      aborted: false
    }
  }
}
So, I tried changing the client connection to (I read somewhere that this might help):

const client = new Client({ node: "http://172.24.0.1:9200" });

Then I am just "stuck" waiting for a response. There is only one console.log of "Connecting to Elasticsearch".

I am using the following version:

"@elastic/elasticsearch": "7.5.1"
As you probably see, I do not have a full grasp of what is happening here... I have also tried to add:

links:
  - elasticsearch
networks:
  - elastic

to the api service, without any luck.

Does anyone know what I am doing wrong here? Thank you in advance :)
EDIT:

I did a "docker network inspect" on the *_elastic network. There I see the following:
"IPAM": {
    "Driver": "default",
    "Options": null,
    "Config": [
        {
            "Subnet": "172.22.0.0/16",
            "Gateway": "172.22.0.1"
        }
    ]
},
Changing the client to connect to the "Gateway" IP:

const client = new Client({ node: "http://172.22.0.1:9200" });

Then it works! I am still wondering why, as this was just "trial and error". Is there any way to obtain this IP without having to inspect the network?
In Docker, localhost (or the corresponding IPv4 address 127.0.0.1, or the corresponding IPv6 address ::1) generally means "this container"; you can't use that host name to access services running in another container.

In a Compose-based setup, the names of the services: blocks (api, elasticsearch, kibana) are usable as host names. The caveat is that all of the services have to be on the same Docker-internal network. Compose creates one for you and attaches containers to it by default. (In your example api is on the default network but the other two containers are on a separate elastic network.) Networking in Compose in the Docker documentation has some more details.
So to make this work, you need to tell your client code to honor the environment variable you're setting that points at Elasticsearch:
const esHost = process.env.ES_HOST || 'localhost';
const esUrl = 'http://' + esHost + ':9200';
const client = new Client({ node: esUrl });
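The same URL construction can be pulled into a small helper to show the behavior in both environments. (`esUrl` is a hypothetical name introduced for illustration; the `env` parameter just makes it easy to demonstrate both cases in one run.)

```javascript
// Sketch: derive the Elasticsearch URL the same way in and out of Docker.
// Inside the api container, docker-compose sets ES_HOST=elasticsearch, so the
// client dials http://elasticsearch:9200; on the host, ES_HOST is unset and
// the fallback keeps the published localhost:9200 port working.
function esUrl(env = process.env) {
  const host = env.ES_HOST || "localhost";
  return `http://${host}:9200`;
}

console.log(esUrl({ ES_HOST: "elasticsearch" })); // http://elasticsearch:9200
console.log(esUrl({}));                           // http://localhost:9200
```

You would then create the client with `new Client({ node: esUrl() })`.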
In your docker-compose.yml file, delete all of the networks: blocks to use the provided default network. (While you're there, links: is unnecessary and Compose provides a reasonable container_name: for you; api can reasonably depends_on: [elasticsearch].)
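Applying those deletions, the compose file could be trimmed to something like this sketch (images, ports, and environment kept from the question; this is an illustration, not a tested file):

```yaml
version: "3"
services:
  api:
    build: .
    ports:
      - "3000:3000"
      - "9229:9229"
    environment:
      - NODE_ENV=local
      - ES_HOST=elasticsearch   # resolves to the elasticsearch service below
      - PORT=3000
    depends_on:
      - elasticsearch
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.5.1
    environment:
      - discovery.type=single-node
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    volumes:
      - data01:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
  kibana:
    image: docker.elastic.co/kibana/kibana:7.5.1
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
volumes:
  data01:
    driver: local
```

With no networks: blocks, all three services land on the Compose-provided default network, so api can reach http://elasticsearch:9200 directly.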
Since we've provided a fallback for $ES_HOST, if you're working in a host development environment, it will default to using localhost; outside of Docker, where it means "the current host", it will reach the published port of the Elasticsearch container.
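One more caveat: depends_on only orders container startup; it does not wait for Elasticsearch to finish booting, so a retry loop like the health() in connection.js is still useful. A sketch with a delay between attempts (unlike the tight while loop in the question); `check` here is a stand-in for the real `client.cluster.health()` call:

```javascript
// Sketch: retry an async readiness check with a fixed delay.
// retries and delayMs are arbitrary illustrative defaults.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function waitFor(check, retries = 5, delayMs = 100) {
  for (let attempt = 1; attempt <= retries; attempt++) {
    try {
      return await check(); // success: return whatever the check resolves to
    } catch (err) {
      if (attempt === retries) throw err; // out of retries: propagate
      await sleep(delayMs);
    }
  }
}

// Demo with a fake check that fails twice before succeeding.
let calls = 0;
waitFor(async () => {
  calls += 1;
  if (calls < 3) throw new Error("not ready");
  return "green";
}).then((status) => console.log("status:", status, "after", calls, "tries"));
```

In the real app you would pass `() => client.cluster.health({})` as the check.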