Hyperledger Fabric: test-network client failed to connect
I am currently learning Hyperledger Fabric. I ran the test-network locally on my Windows and Ubuntu machines, wrote some shell scripts to add new Org3 and Org4 organizations to different channels as well as to the default mychannel, and deployed a different chaincode to the channel than the default fabcar.
Then I tried to deploy the test-network to a CentOS server we have. I installed the latest go, node, npm, docker and docker-compose. As a second step I opened all the necessary ports used by the test-network with firewall-cmd.
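The question doesn't list which ports were opened; a minimal sketch of what that firewall-cmd step might look like for the stock test-network (the exact port list — orderer 7050, peers 7051/9051, CAs 7054/8054/9054 — is an assumption based on the default fabric-samples docker-compose files, not something stated in the question) would be:

```shell
# Open the default test-network ports (port list is an assumption based on the
# stock fabric-samples docker-compose files; adjust to your own setup):
for port in 7050 7051 9051 7054 8054 9054; do
  sudo firewall-cmd --permanent --add-port=${port}/tcp
done
sudo firewall-cmd --reload
sudo firewall-cmd --list-ports   # verify the ports are now open
```

Note that opening host ports is only relevant for remote clients; the `peer channel create` command in the question targets `localhost:7050`, which normally bypasses the firewall's external zone entirely.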
Then I tried:
1. ./network.sh up
2. ./network.sh createChannel
But while creating the channel I got an error:
+ peer channel create -o localhost:7050 -c mychannel --ordererTLSHostnameOverride orderer.example.com -f ./channel-artifacts/mychannel.tx --outputBlock ./channel-artifacts/mychannel.block --tls true --cafile /root/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem
+ res=1
+ set +x
Error: failed to create deliver client for orderer: orderer client failed to connect to localhost:7050: failed to create new connection: context deadline exceeded
To fix it I tried many of the suggested tips. I also tried downloading the latest binaries, as suggested in the tutorial:
docker ps -aq | xargs -n 1 docker stop
docker ps -aq | xargs -n 1 docker rm -v
docker volume prune
docker network prune
docker rmi -f $(docker images -q)
rm -rf fabric-samples
curl -sSL https://raw.githubusercontent.com/hyperledger/fabric/master/scripts/bootstrap.sh | bash -s -- 2.0.1 1.4.6 0.4.18
I also looked into the orderer docker container logs. While the peer channel create -o command is running, no new entries appear in the orderer container's logs at all.
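Since no log entries appear at all, the connection apparently never reaches the orderer process. A few hedged diagnostic commands (the container name `orderer.example.com` is the test-network default; adjust if yours differs) to check where the connection is being dropped:

```shell
# Is the orderer container up, and is 7050 published to the host?
docker ps --filter name=orderer.example.com --format '{{.Names}} {{.Ports}}'

# Is anything actually listening on 7050 on the host?
ss -tlnp | grep 7050

# Can we open a raw TCP connection to localhost:7050?
timeout 3 bash -c '</dev/tcp/localhost/7050' \
  && echo "port 7050 reachable" \
  || echo "port 7050 NOT reachable"
```

If the raw TCP connection fails even though the port is published, the packets are most likely being dropped by the host firewall or by broken Docker iptables rules rather than rejected by the orderer itself.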
Finally I tried setenforce 0, again with no luck.
Thanks in advance for any suggestions.
The log from the orderer container is:
2020-05-05 08:15:11.389 UTC [localconfig] completeInitialization -> WARN 001 General.GenesisFile should be replaced by General.BootstrapFile
2020-05-05 08:15:11.389 UTC [localconfig] completeInitialization -> INFO 002 Kafka.Version unset, setting to 0.10.2.0
2020-05-05 08:15:11.389 UTC [orderer.common.server] prettyPrintStruct -> INFO 003 Orderer config values:
General.ListenAddress = "0.0.0.0"
General.ListenPort = 7050
General.TLS.Enabled = true
General.TLS.PrivateKey = "/var/hyperledger/orderer/tls/server.key"
General.TLS.Certificate = "/var/hyperledger/orderer/tls/server.crt"
General.TLS.RootCAs = [/var/hyperledger/orderer/tls/ca.crt]
General.TLS.ClientAuthRequired = false
General.TLS.ClientRootCAs = []
General.Cluster.ListenAddress = ""
General.Cluster.ListenPort = 0
General.Cluster.ServerCertificate = ""
General.Cluster.ServerPrivateKey = ""
General.Cluster.ClientCertificate = "/var/hyperledger/orderer/tls/server.crt"
General.Cluster.ClientPrivateKey = "/var/hyperledger/orderer/tls/server.key"
General.Cluster.RootCAs = [/var/hyperledger/orderer/tls/ca.crt]
General.Cluster.DialTimeout = 5s
General.Cluster.RPCTimeout = 7s
General.Cluster.ReplicationBufferSize = 20971520
General.Cluster.ReplicationPullTimeout = 5s
General.Cluster.ReplicationRetryTimeout = 5s
General.Cluster.ReplicationBackgroundRefreshInterval = 5m0s
General.Cluster.ReplicationMaxRetries = 12
General.Cluster.SendBufferSize = 10
General.Cluster.CertExpirationWarningThreshold = 168h0m0s
General.Cluster.TLSHandshakeTimeShift = 0s
General.Keepalive.ServerMinInterval = 1m0s
General.Keepalive.ServerInterval = 2h0m0s
General.Keepalive.ServerTimeout = 20s
General.ConnectionTimeout = 0s
General.GenesisMethod = "file"
General.GenesisFile = "/var/hyperledger/orderer/orderer.genesis.block"
General.BootstrapMethod = "file"
General.BootstrapFile = "/var/hyperledger/orderer/orderer.genesis.block"
General.Profile.Enabled = false
General.Profile.Address = "0.0.0.0:6060"
General.LocalMSPDir = "/var/hyperledger/orderer/msp"
General.LocalMSPID = "OrdererMSP"
General.BCCSP.ProviderName = "SW"
General.BCCSP.SwOpts.SecLevel = 256
General.BCCSP.SwOpts.HashFamily = "SHA2"
General.BCCSP.SwOpts.Ephemeral = true
General.BCCSP.SwOpts.FileKeystore.KeyStorePath = ""
General.BCCSP.SwOpts.DummyKeystore =
General.BCCSP.SwOpts.InmemKeystore =
General.Authentication.TimeWindow = 15m0s
General.Authentication.NoExpirationChecks = false
FileLedger.Location = "/var/hyperledger/production/orderer"
FileLedger.Prefix = "hyperledger-fabric-ordererledger"
Kafka.Retry.ShortInterval = 5s
Kafka.Retry.ShortTotal = 10m0s
Kafka.Retry.LongInterval = 5m0s
Kafka.Retry.LongTotal = 12h0m0s
Kafka.Retry.NetworkTimeouts.DialTimeout = 10s
Kafka.Retry.NetworkTimeouts.ReadTimeout = 10s
Kafka.Retry.NetworkTimeouts.WriteTimeout = 10s
Kafka.Retry.Metadata.RetryMax = 3
Kafka.Retry.Metadata.RetryBackoff = 250ms
Kafka.Retry.Producer.RetryMax = 3
Kafka.Retry.Producer.RetryBackoff = 100ms
Kafka.Retry.Consumer.RetryBackoff = 2s
Kafka.Verbose = true
Kafka.Version = 0.10.2.0
Kafka.TLS.Enabled = false
Kafka.TLS.PrivateKey = ""
Kafka.TLS.Certificate = ""
Kafka.TLS.RootCAs = []
Kafka.TLS.ClientAuthRequired = false
Kafka.TLS.ClientRootCAs = []
Kafka.SASLPlain.Enabled = false
Kafka.SASLPlain.User = ""
Kafka.SASLPlain.Password = ""
Kafka.Topic.ReplicationFactor = 1
Debug.BroadcastTraceDir = ""
Debug.DeliverTraceDir = ""
Consensus = map[SnapDir:/var/hyperledger/production/orderer/etcdraft/snapshot WALDir:/var/hyperledger/production/orderer/etcdraft/wal]
Operations.ListenAddress = "127.0.0.1:8443"
Operations.TLS.Enabled = false
Operations.TLS.PrivateKey = ""
Operations.TLS.Certificate = ""
Operations.TLS.RootCAs = []
Operations.TLS.ClientAuthRequired = false
Operations.TLS.ClientRootCAs = []
Metrics.Provider = "disabled"
Metrics.Statsd.Network = "udp"
Metrics.Statsd.Address = "127.0.0.1:8125"
Metrics.Statsd.WriteInterval = 30s
Metrics.Statsd.Prefix = ""
2020-05-05 08:15:11.404 UTC [orderer.common.server] initializeServerConfig -> INFO 004 Starting orderer with TLS enabled
2020-05-05 08:15:11.509 UTC [fsblkstorage] NewProvider -> INFO 005 Creating new file ledger directory at /var/hyperledger/production/orderer/chains
2020-05-05 08:15:11.515 UTC [orderer.common.server] extractSysChanLastConfig -> INFO 006 Bootstrapping because no existing channels
2020-05-05 08:15:11.526 UTC [orderer.common.server] Main -> INFO 007 Setting up cluster for orderer type etcdraft
2020-05-05 08:15:11.526 UTC [orderer.common.server] reuseListener -> INFO 008 Cluster listener is not configured, defaulting to use the general listener on port 7050
2020-05-05 08:15:11.526 UTC [fsblkstorage] newBlockfileMgr -> INFO 009 Getting block information from block storage
2020-05-05 08:15:11.763 UTC [orderer.consensus.etcdraft] HandleChain -> INFO 00a EvictionSuspicion not set, defaulting to 10m0s
2020-05-05 08:15:11.764 UTC [orderer.consensus.etcdraft] createOrReadWAL -> INFO 00b No WAL data found, creating new WAL at path '/var/hyperledger/production/orderer/etcdraft/wal/system-channel' channel=system-channel node=1
2020-05-05 08:15:11.813 UTC [orderer.commmon.multichannel] Initialize -> INFO 00c Starting system channel 'system-channel' with genesis block hash abfe6f42b6e7d524b6ba93e7961ae73f6d0859ea9d77ef093c152f0efb5f006d and orderer type etcdraft
2020-05-05 08:15:11.813 UTC [orderer.consensus.etcdraft] Start -> INFO 00d Starting Raft node channel=system-channel node=1
2020-05-05 08:15:11.813 UTC [orderer.common.cluster] Configure -> INFO 00e Entering, channel: system-channel, nodes: []
2020-05-05 08:15:11.813 UTC [orderer.common.cluster] Configure -> INFO 00f Exiting
2020-05-05 08:15:11.813 UTC [orderer.consensus.etcdraft] start -> INFO 010 Starting raft node as part of a new channel channel=system-channel node=1
2020-05-05 08:15:11.813 UTC [orderer.consensus.etcdraft] becomeFollower -> INFO 011 1 became follower at term 0 channel=system-channel node=1
2020-05-05 08:15:11.813 UTC [orderer.consensus.etcdraft] newRaft -> INFO 012 newRaft 1 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0] channel=system-channel node=1
2020-05-05 08:15:11.813 UTC [orderer.consensus.etcdraft] becomeFollower -> INFO 013 1 became follower at term 1 channel=system-channel node=1
2020-05-05 08:15:11.813 UTC [orderer.common.server] Main -> INFO 014 Starting orderer:
Version: 2.0.1
Commit SHA: 1cfa5da
Go version: go1.13.4
OS/Arch: linux/amd64
2020-05-05 08:15:11.813 UTC [orderer.common.server] Main -> INFO 015 Beginning to serve requests
2020-05-05 08:15:11.813 UTC [orderer.consensus.etcdraft] run -> INFO 016 This node is picked to start campaign channel=system-channel node=1
2020-05-05 08:15:11.837 UTC [orderer.consensus.etcdraft] apply -> INFO 017 Applied config change to add node 1, current nodes in channel: [1] channel=system-channel node=1
2020-05-05 08:15:12.814 UTC [orderer.consensus.etcdraft] Step -> INFO 018 1 is starting a new election at term 1 channel=system-channel node=1
2020-05-05 08:15:12.814 UTC [orderer.consensus.etcdraft] becomePreCandidate -> INFO 019 1 became pre-candidate at term 1 channel=system-channel node=1
2020-05-05 08:15:12.814 UTC [orderer.consensus.etcdraft] poll -> INFO 01a 1 received MsgPreVoteResp from 1 at term 1 channel=system-channel node=1
2020-05-05 08:15:12.814 UTC [orderer.consensus.etcdraft] becomeCandidate -> INFO 01b 1 became candidate at term 2 channel=system-channel node=1
2020-05-05 08:15:12.814 UTC [orderer.consensus.etcdraft] poll -> INFO 01c 1 received MsgVoteResp from 1 at term 2 channel=system-channel node=1
2020-05-05 08:15:12.814 UTC [orderer.consensus.etcdraft] becomeLeader -> INFO 01d 1 became leader at term 2 channel=system-channel node=1
2020-05-05 08:15:12.814 UTC [orderer.consensus.etcdraft] run -> INFO 01e raft.node: 1 elected leader 1 at term 2 channel=system-channel node=1
2020-05-05 08:15:13.178 UTC [orderer.consensus.etcdraft] run -> INFO 01f Raft leader changed: 0 -> 1 channel=system-channel node=1
2020-05-05 08:15:13.178 UTC [orderer.consensus.etcdraft] run -> INFO 020 Start accepting requests as Raft leader at block [0] channel=system-channel node=1
2020-05-05 08:15:13.178 UTC [orderer.consensus.etcdraft] run -> INFO 021 Leader 1 is present, quit campaign channel=system-channel node=1
After some digging I found that I get a lot of errors from firewalld when running systemctl status firewalld:
May 07 08:40:22 firewalld[1966]: 2020-05-07 08:40:22 ERROR: COMMAND_FAILED: '/sbin/iptables -w2 -t nat -C POSTROUTING -s 172.18.0.0/16 ! -o br-39898f55b0a1 -j MASQUERADE' failed: iptables: No chain/targe...ch by that name.
May 07 08:40:22 firewalld[1966]: 2020-05-07 08:40:22 ERROR: COMMAND_FAILED: '/sbin/iptables -w2 -t nat -C DOCKER -i br-39898f55b0a1 -j RETURN' failed: iptables: Bad rule (does a matching rule exist in that chain?).
May 07 08:40:22 firewalld[1966]: 2020-05-07 08:40:22 ERROR: COMMAND_FAILED: '/sbin/iptables -w2 -D FORWARD -i br-39898f55b0a1 -o br-39898f55b0a1 -j DROP' failed: iptables: Bad rule (does a matching rule ...in that chain?).
May 07 08:40:22 firewalld[1966]: 2020-05-07 08:40:22 ERROR: COMMAND_FAILED: '/sbin/iptables -w2 -t filter -C FORWARD -i br-39898f55b0a1 -o br-39898f55b0a1 -j ACCEPT' failed: iptables: Bad rule (does a ma...in that chain?).
May 07 08:40:22 firewalld[1966]: 2020-05-07 08:40:22 ERROR: COMMAND_FAILED: '/sbin/iptables -w2 -t filter -C FORWARD -i br-39898f55b0a1 ! -o br-39898f55b0a1 -j ACCEPT' failed: iptables: Bad rule (does a ...in that chain?).
May 07 08:40:22 firewalld[1966]: 2020-05-07 08:40:22 ERROR: COMMAND_FAILED: '/sbin/iptables -w2 -t filter -C FORWARD -o br-39898f55b0a1 -j DOCKER' failed: iptables: No chain/target/match by that name.
May 07 08:40:22 firewalld[1966]: 2020-05-07 08:40:22 ERROR: COMMAND_FAILED: '/sbin/iptables -w2 -t filter -C FORWARD -o br-39898f55b0a1 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT' failed: iptab...in that chain?).
May 07 08:40:22 firewalld[1966]: 2020-05-07 08:40:22 ERROR: COMMAND_FAILED: '/sbin/iptables -w2 -D FORWARD -i docker0 -o docker0 -j DROP' failed: iptables: Bad rule (does a matching rule exist in that chain?).
May 07 08:40:22 firewalld[1966]: 2020-05-07 08:40:22 ERROR: COMMAND_FAILED: '/sbin/iptables -w2 -t filter -C DOCKER-USER -j RETURN' failed: iptables: Bad rule (does a matching rule exist in that chain?).
May 07 08:40:22 firewalld[1966]: 2020-05-07 08:40:22 ERROR: COMMAND_FAILED: '/sbin/iptables -w2 -t filter -C FORWARD -j DOCKER-USER' failed: iptables: No chain/target/match by that name.
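The "No chain/target/match by that name" errors above suggest the DOCKER and DOCKER-USER iptables chains are missing. One plausible cause (an assumption based on these symptoms, not something confirmed in the question) is that firewalld was restarted or reloaded after Docker started, which flushes the chains Docker had created, silently dropping all container traffic. A sketch of the usual recovery:

```shell
# Restarting Docker makes it recreate its iptables chains
# (DOCKER, DOCKER-USER, per-bridge forwarding rules):
sudo systemctl restart docker

# Optionally tell firewalld to trust the Docker bridge interfaces so
# container traffic is not filtered. Interface names here are the
# defaults from the logs above; adjust to your bridges:
sudo firewall-cmd --permanent --zone=trusted --add-interface=docker0
sudo firewall-cmd --permanent --zone=trusted --add-interface=br-39898f55b0a1
sudo firewall-cmd --reload
```

After this, bring the test-network down and up again so the containers attach to freshly created bridges.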
I got a similar error to yours; the difference is that my Fabric client can't connect to the peer node, while you can't connect to the orderer node. I've tried v1.4, v2.0 and v2.1.
➜ javascript node invoke.js
Wallet path: /Users/mutexlock/code/fabric-samples/fabcar/javascript/wallet
2020-05-05T11:34:53.385Z - error: [Channel.js]: Error: 14 UNAVAILABLE: failed to connect to all addresses
2020-05-05T11:34:53.660Z - error: [Channel.js]: Error: 14 UNAVAILABLE: failed to connect to all addresses
2020-05-05T11:34:53.664Z - error: [Network]: _initializeInternalChannel: Unable to initialize channel. Attempted to contact 2 Peers. Last error was Error: 14 UNAVAILABLE: failed to connect to all addresses
at Object.exports.createStatusError (/Users/mutexlock/code/fabric-samples/fabcar/javascript/node_modules/grpc/src/common.js:91:15)
at Object.onReceiveStatus (/Users/mutexlock/code/fabric-samples/fabcar/javascript/node_modules/grpc/src/client_interceptors.js:1209:28)
at InterceptingListener._callNext (/Users/mutexlock/code/fabric-samples/fabcar/javascript/node_modules/grpc/src/client_interceptors.js:568:42)
at InterceptingListener.onReceiveStatus (/Users/mutexlock/code/fabric-samples/fabcar/javascript/node_modules/grpc/src/client_interceptors.js:618:8)
at callback (/Users/mutexlock/code/fabric-samples/fabcar/javascript/node_modules/grpc/src/client_interceptors.js:847:24) {
code: 14,
metadata: [Metadata],
details: 'failed to connect to all addresses',
peer: [Object]
}
Failed to submit transaction: Error: Unable to initialize channel. Attempted to contact 2 Peers. Last error was Error: 14 UNAVAILABLE: failed to connect to all addresses
at Object.exports.createStatusError (/Users/mutexlock/code/fabric-samples/fabcar/javascript/node_modules/grpc/src/common.js:91:15)
at Object.onReceiveStatus (/Users/mutexlock/code/fabric-samples/fabcar/javascript/node_modules/grpc/src/client_interceptors.js:1209:28)
at InterceptingListener._callNext (/Users/mutexlock/code/fabric-samples/fabcar/javascript/node_modules/grpc/src/client_interceptors.js:568:42)
at InterceptingListener.onReceiveStatus (/Users/mutexlock/code/fabric-samples/fabcar/javascript/node_modules/grpc/src/client_interceptors.js:618:8)
at callback (/Users/mutexlock/code/fabric-samples/fabcar/javascript/node_modules/grpc/src/client_interceptors.js:847:24) {
code: 14,
metadata: [Metadata],
details: 'failed to connect to all addresses',
peer: [Object]
}