I am running an experiment to compare the runtimes of different algorithms on a 3-node Hadoop cluster with Pig installed. I found a Docker image (fluddeni/hadoop-pig) that meets these needs, and it appears to be running when I check with docker ps, but I can't reach it on any of my ports. I am on Windows, and when I browse to the address where my other Docker images run (the docker-machine IP) on port 9000, as indicated in my core-site.xml file, I get "page not found". Any ideas on where to find the master page for Hadoop? Let me know if you need any more information!
Resources:
/usr/local/hadoop/etc/hadoop/core-site.xml
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://3b85d55c5080:9000</value>
    </property>
</configuration>
command docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3b85d55c5080 fluddeni/hadoop-pig "/etc/bootstrap.sh -…" 19 hours ago Up 2 seconds 2122/tcp, 8020/tcp, 8030-8033/tcp, 8040/tcp, 8042/tcp, 8088/tcp, 9000/tcp, 10020/tcp, 19888/tcp, 49707/tcp, 50010/tcp, 50020/tcp, 50070/tcp, 50075/tcp, 50090/tcp boring_ptolemy
Did you publish the ports you expect? The PORTS column in your docker ps output lists only exposed container ports, with no mappings to the host, so nothing is reachable from outside the container. Publish them explicitly:

docker run -p 9000:9000 -p 50070:50070 <rest image details>

(In Hadoop 2.x, 50070 is the NameNode web UI, which is the "master page" you are looking for; 9000 is the HDFS RPC port, not a web page.)
While port forwarding is one problem, another is that the container ID in your fs.defaultFS (3b85d55c5080) is not stable; it changes every time the container is recreated. You should use Docker service DNS names instead, ideally via a Compose network bridge. Then you would always refer to hdfs://namenode:9000, for example.
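As a minimal sketch of that Compose setup: the service name namenode below is an assumption (pick any stable name and use the same one in core-site.xml), and whether fluddeni/hadoop-pig works unmodified as a single-service Compose container is not verified.

```yaml
# docker-compose.yml -- hypothetical sketch, not a verified config.
# The service name "namenode" is an assumption; Compose puts services on a
# shared network where each service is resolvable by its name.
version: "3"
services:
  namenode:
    image: fluddeni/hadoop-pig
    hostname: namenode
    ports:
      - "9000:9000"    # HDFS NameNode RPC (matches fs.defaultFS)
      - "50070:50070"  # NameNode web UI in Hadoop 2.x
```

With this, core-site.xml would set fs.defaultFS to hdfs://namenode:9000, which keeps working across container restarts, and the web UI is reachable from the host at the docker-machine IP on port 50070.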