ElasticSearch Java API: NoNodeAvailableException: No node available
public static void main(String[] args) throws IOException {
    Settings settings = ImmutableSettings.settingsBuilder()
            .put("cluster.name", "foxzen")
            .put("node.name", "yu").build();
    Client client = new TransportClient(settings)
            .addTransportAddress(new InetSocketTransportAddress("XXX.XXX.XXX.XXX", 9200));
    // XXX.XXX.XXX.XXX is my server's IP address
    IndexResponse response = client.prepareIndex("twitter", "tweet")
            .setSource(XContentFactory.jsonBuilder()
                    .startObject()
                    .field("productId", "1")
                    .field("productName", "XXX")
                    .endObject())
            .execute().actionGet();
    System.out.println(response.getIndex());
    System.out.println(response.getType());
    System.out.println(response.getVersion());
    client.close();
}
I can access the server from my computer:
curl -get http://XXX.XXX.XXX.XXX:9200/
and get this:
{
"status" : 200,
"name" : "yu",
"version" : {
"number" : "1.1.0",
"build_hash" : "2181e113dea80b4a9e31e58e9686658a2d46e363",
"build_timestamp" : "2014-03-25T15:59:51Z",
"build_snapshot" : false,
"lucene_version" : "4.7"
},
"tagline" : "You Know, for Search"
}
Why do I get an error when using the Java API?
EDIT
Here is the cluster and node configuration from elasticsearch.yml:
################################### Cluster ###################################
# Cluster name identifies your cluster for auto-discovery. If you're running
# multiple clusters on the same network, make sure you're using unique names.
#
cluster.name: foxzen
#################################### Node #####################################
# Node names are generated dynamically on startup, so you're relieved
# from configuring them manually. You can tie this node to a specific name:
#
node.name: yu
Some suggestions:
1 - Use port 9300. The range [9300-9400] is for node-to-node communication; [9200-9300] is for HTTP traffic.
2 - Ensure the version of the Java API you are using matches the version of Elasticsearch running on the server.
3 - Ensure that the name of your cluster is foxzen (check the elasticsearch.yml on the server).
4 - Remove put("node.name", "yu"). You aren't joining the cluster as a node since you are using the TransportClient, and even if you were, your server node is already named yu, so you would want a different node name in any case.
You need to change your code to use port 9300; the corrected line is:
Client client = new TransportClient(settings)
.addTransportAddress(new InetSocketTransportAddress("XXX.XXX.XXX.XXX", 9300));
The reason is that the Java API uses the internal transport layer for inter-node communication, which defaults to port 9300; port 9200 is the default for the REST API interface. This is a common issue to run into. Check the sample code toward the bottom of this page, under Transport Client:
http://www.elasticsearch.org/guide/en/elasticsearch/client/java-api/current/client.html
// on startup
Client client = new TransportClient()
.addTransportAddress(new InetSocketTransportAddress("host1", 9300))
.addTransportAddress(new InetSocketTransportAddress("host2", 9300));
// on shutdown
client.close();
I met this error too. I use Elasticsearch 2.4.1 as a standalone server (single node) in Docker, programming with Grails 3/spring-data-elasticsearch. My fix was setting client.transport.sniff to false. Here is my core configuration:
application.yml
spring.data.elasticsearch:
  cluster-name: "my-es"
  cluster-nodes: "localhost:9300"
  properties:
    "client.transport.ignore_cluster_name": true
    "client.transport.nodes_sampler_interval": "5s"
    "client.transport.ping_timeout": "5s"
    "client.transport.sniff": false # XXX: notice here
  repositories.enabled: false
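For readers not using Spring Data, the same setting can be applied directly when building the transport client. This is a sketch assuming the Elasticsearch 2.x client API; the cluster name and address are placeholders:

```java
import java.net.InetAddress;
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.InetSocketTransportAddress;

// Disable sniffing so the client talks only to the address you give it,
// instead of the internal addresses the cluster nodes publish.
Settings settings = Settings.settingsBuilder()
        .put("cluster.name", "my-es")
        .put("client.transport.sniff", false)
        .build();
TransportClient client = TransportClient.builder().settings(settings).build()
        .addTransportAddress(new InetSocketTransportAddress(
                InetAddress.getByName("localhost"), 9300));
```

With sniffing enabled, the client replaces the address you configured with the addresses the nodes advertise, which often fail from outside a Docker network.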
I assume that you are running the ES server on a remote host? In that case you will need to bind the publish address to the host's public IP address.
On your ES host, edit /etc/elasticsearch/elasticsearch.yml and add its public IP after network.publish_host:
# Set the address other nodes will use to communicate with this node. If not
# set, it is automatically derived. It must point to an actual IP address.
#
network.publish_host: 192.168.0.1
Then, in your code, connect to this host on port 9300. Note that you need the IP address, not the domain name (at least in my experience on Amazon EC2).
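Before changing code, it can help to confirm that the transport port is actually reachable from the client machine. A quick check with netcat (the IP mirrors the example above and is a placeholder):

```shell
# -z: scan without sending data, -v: verbose.
# Succeeds only if something is listening on the transport port.
nc -zv 192.168.0.1 9300
```

If this fails while port 9200 works, the problem is network configuration (publish address, firewall) rather than your client code.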
If you are still having issues even when using port 9300, and everything else seems to be configured correctly, try using an older version of Elasticsearch.
I was getting this same error while using Elasticsearch 2.2.0, but as soon as I rolled back to version 1.7.5 my problem magically went away. Here's a link to someone else having this issue: older version solves problem
For folks with similar problems: I received this error because I had not set cluster.name in the TransportClient builder. I added the property and everything worked.
Another reason could be that your Elasticsearch Java client is a different version from your Elasticsearch server.
The Elasticsearch Java client version is simply the version of the elasticsearch jar in your code base.
For example, in my code it is elasticsearch-2.4.0.jar.
To verify the Elasticsearch server version:
$ /Users/kkolipaka/elasticsearch/bin/elasticsearch -version
Version: 5.2.2, Build: f9d9b74/2017-02-24T17:26:45.835Z, JVM: 1.8.0_111
As you can see, I had downloaded the latest version of the Elastic server (5.2.2) but forgot to update the ES Java API client (2.4.0). https://www.elastic.co/guide/en/elasticsearch/client/java-api/current/client.html
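To check the client side, you can print the version constant bundled in the elasticsearch jar on your classpath (this constant has been present in the classic client jars; a sketch):

```java
// Prints the version of the elasticsearch jar on the classpath, e.g. "2.4.0".
// Compare it with the version the server reports.
System.out.println(org.elasticsearch.Version.CURRENT);
```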
Another solution may be to include io.netty.netty-all in the project dependencies explicitly.
In addTransportAddresses, the method nodesSampler.sample() is executed, and the added addresses are checked for availability there. In my case, a try-catch block swallowed a ConnectTransportException because the method io.netty.channel.DefaultChannelId.newInstance() could not be found, so the added node was simply not treated as available.
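For a Maven build, the explicit dependency might look like the following sketch (the version shown is only an example; it should match the netty version your Elasticsearch client expects):

```xml
<!-- Hypothetical example; pick the netty-all version your ES client requires. -->
<dependency>
    <groupId>io.netty</groupId>
    <artifactId>netty-all</artifactId>
    <version>4.1.13.Final</version>
</dependency>
```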