
Google Compute Engine: add firewall rule for Hadoop dashboard

I installed a Hadoop cluster using bdutil (instead of Click to Deploy). I am not able to access the JobTracker page at localhost:50030/jobtracker.jsp (following https://cloud.google.com/hadoop/running-a-mapreduce-job ).

I am checking it locally using lynx instead of from my client browser (so localhost instead of the external IP).

The setting in my bdutil config file is:

MASTER_UI_PORTS=('8088' '50070' '50030')

but after deploying the Hadoop cluster, when I run gcloud compute firewall-rules list I get the following:

NAME                    NETWORK  SRC_RANGES     RULES                         SRC_TAGS  TARGET_TAGS
default-allow-http      default  0.0.0.0/0      tcp:80,tcp:8080                         http-server
default-allow-https     default  0.0.0.0/0      tcp:443                                 https-server
default-allow-icmp      default  0.0.0.0/0      icmp
default-allow-internal  default  10.240.0.0/16  tcp:1-65535,udp:1-65535,icmp
default-allow-rdp       default  0.0.0.0/0      tcp:3389
default-allow-ssh       default  0.0.0.0/0      tcp:22

Now, I don't see port 50030 in the list of rules. Why is that?

So I ran a command to add it manually:

gcloud compute firewall-rules create allow-http --description "Incoming http allowed." --allow tcp:50030 --format json

Now it gets added, and I can see it in the output of the firewall-rules list command.
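For reference, the full details of the new rule (allowed ports, source ranges, target tags) can be inspected by name:

gcloud compute firewall-rules describe allow-http --format json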

But still, when I do lynx localhost:50030/jobtracker.jsp, I get "unable to connect". Then I ran a Hadoop job so that there would be some output to view, and ran the lynx command again, but I still see "unable to connect".

Can someone tell me where I am going wrong in this whole process?

An ephemeral IP is an external IP. The difference between an ephemeral IP and a static IP is that a static IP can be reassigned to another virtual machine instance, while an ephemeral IP is released when the instance is destroyed. An ephemeral IP can be promoted to a static IP through the web UI or the gcloud command-line tool.
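As a sketch, promoting the ephemeral IP shown in the example below with gcloud (the address name hadoop-master-ip and the region us-central1 are placeholders; use the region your instance actually lives in):

gcloud compute addresses create hadoop-master-ip \
    --addresses 107.178.223.11 \
    --region us-central1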

You can obtain the external IP of your host by querying the metadata API at http://169.254.169.254/0.1/meta-data/network. The response will be a JSON document that looks like this (pretty-printed for clarity):

{
   "networkInterface" : [
      {
         "network" : "projects/852299914697/networks/rabbit",
         "ip" : "10.129.14.59",
         "accessConfiguration" : [
            {
               "externalIp" : "107.178.223.11",
               "type" : "ONE_TO_ONE_NAT"
            }
         ]
      }
   ]
}
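For example, from inside the instance (curl is assumed to be available; the gcloud alternative can be run from any machine with gcloud configured, with your-instance-name as a placeholder):

# Query the legacy v0.1 metadata API from the instance itself:
curl http://169.254.169.254/0.1/meta-data/network

# Or read the NAT IP directly from the instance description:
gcloud compute instances describe your-instance-name \
    --format 'value(networkInterfaces[0].accessConfigs[0].natIP)'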

The firewall rule command seems reasonable, but you may want to choose a more descriptive name. If I saw a rule named allow-http, I would assume it meant port 80. You may also want to restrict it to a target tag placed on your Hadoop dashboard instance; as written, your rule will allow access on that port to all instances in the current project.
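A hedged example of both steps, using placeholder names (allow-hadoop-jobtracker for the rule, hadoop-master for the tag, and your-master-instance for the instance):

# Create a descriptively named rule restricted to instances tagged hadoop-master:
gcloud compute firewall-rules create allow-hadoop-jobtracker \
    --description "Incoming JobTracker UI traffic allowed." \
    --allow tcp:50030 \
    --target-tags hadoop-master

# Tag the Hadoop master instance so the rule applies to it:
gcloud compute instances add-tags your-master-instance --tags hadoop-master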
