
docker-compose.yml spark/hadoop/hive for three data nodes
This docker-compose.yml, with a single datanode, seems to work fine:
```yaml
version: "3"
services:
  namenode:
    image: bde2020/hadoop-namenode:2.0.0-hadoop3.2.1-java8
    container_name: namenode
    restart: always
    ports:
      - 9870:9870
      - 9010:9000
    volumes:
      - hadoop_namenode:/hadoop/dfs/name
    environment:
      - CLUSTER_NAME=test
      - CORE_CONF_fs_defaultFS=hdfs://namenode:9000
    env_file:
      - ./hadoop.env
  datanode:
    image: bde2020/hadoop-datanode:2.0.0-hadoop3.2.1-java8
    container_name: datanode
    restart: always
    volumes:
      - hadoop_datanode:/hadoop/dfs/data
    environment:
      SERVICE_PRECONDITION: "namenode:9870"
      CORE_CONF_fs_defaultFS: hdfs://namenode:9000
    ports:
      - "9864:9864"
    env_file:
      - ./hadoop.env
  resourcemanager:
    image: bde2020/hadoop-resourcemanager:2.0.0-hadoop3.2.1-java8
    container_name: resourcemanager
    restart: always
    environment:
      SERVICE_PRECONDITION: "namenode:9000 namenode:9870 datanode:9864"
    env_file:
      - ./hadoop.env
  nodemanager1:
    image: bde2020/hadoop-nodemanager:2.0.0-hadoop3.2.1-java8
    container_name: nodemanager
    restart: always
    environment:
      SERVICE_PRECONDITION: "namenode:9000 namenode:9870 datanode:9864 resourcemanager:8088"
    env_file:
      - ./hadoop.env
  historyserver:
    image: bde2020/hadoop-historyserver:2.0.0-hadoop3.2.1-java8
    container_name: historyserver
    restart: always
    environment:
      SERVICE_PRECONDITION: "namenode:9000 namenode:9870 datanode:9864 resourcemanager:8088"
    volumes:
      - hadoop_historyserver:/hadoop/yarn/timeline
    env_file:
      - ./hadoop.env
  spark-master:
    image: bde2020/spark-master:3.0.0-hadoop3.2
    container_name: spark-master
    depends_on:
      - namenode
      - datanode
    ports:
      - "8080:8080"
      - "7077:7077"
    environment:
      - INIT_DAEMON_STEP=setup_spark
      - CORE_CONF_fs_defaultFS=hdfs://namenode:9000
  spark-worker-1:
    image: bde2020/spark-worker:3.0.0-hadoop3.2
    container_name: spark-worker-1
    depends_on:
      - spark-master
    ports:
      - "8081:8081"
    environment:
      - "SPARK_MASTER=spark://spark-master:7077"
      - CORE_CONF_fs_defaultFS=hdfs://namenode:9000
  hive-server:
    image: bde2020/hive:2.3.2-postgresql-metastore
    container_name: hive-server
    depends_on:
      - namenode
      - datanode
    env_file:
      - ./hadoop-hive.env
    environment:
      HIVE_CORE_CONF_javax_jdo_option_ConnectionURL: "jdbc:postgresql://hive-metastore/metastore"
      SERVICE_PRECONDITION: "hive-metastore:9083"
    ports:
      - "10000:10000"
  hive-metastore:
    image: bde2020/hive:2.3.2-postgresql-metastore
    container_name: hive-metastore
    env_file:
      - ./hadoop-hive.env
    command: /opt/hive/bin/hive --service metastore
    environment:
      SERVICE_PRECONDITION: "namenode:9870 datanode:9864 hive-metastore-postgresql:5432"
    ports:
      - "9083:9083"
  hive-metastore-postgresql:
    image: bde2020/hive-metastore-postgresql:2.3.0
    container_name: hive-metastore-postgresql
  presto-coordinator:
    image: shawnzhu/prestodb:0.181
    container_name: presto-coordinator
    ports:
      - "8089:8089"
volumes:
  hadoop_namenode:
  hadoop_datanode:
  hadoop_historyserver:
```
I want to modify it so that it uses three datanodes. I tried adding the following directly below the original datanode section, but it doesn't seem to like it. It is basically the same service with a new name and new ports:
```yaml
  datanode1:
    image: bde2020/hadoop-datanode:2.0.0-hadoop3.2.1-java8
    container_name: datanode1
    restart: always
    volumes:
      - hadoop_datanode:/hadoop/dfs/data
    environment:
      SERVICE_PRECONDITION: "namenode:9870"
      CORE_CONF_fs_defaultFS: hdfs://namenode:9000
    ports:
      - "9865:9865"
    env_file:
      - ./hadoop.env
  datanode2:
    image: bde2020/hadoop-datanode:2.0.0-hadoop3.2.1-java8
    container_name: datanode2
    restart: always
    volumes:
      - hadoop_datanode:/hadoop/dfs/data
    environment:
      SERVICE_PRECONDITION: "namenode:9870"
      CORE_CONF_fs_defaultFS: hdfs://namenode:9000
    ports:
      - "9866:9866"
    env_file:
      - ./hadoop.env
```
Is this feasible, and if not, what do I need to change to get three datanodes?
Check your ports settings; the port mappings look wrong. You have "9865:9865" (datanode1) and "9866:9866" (datanode2).
Try setting them to "9865:9864" and "9866:9864" respectively, since 9864 is the default port the datanode listens on inside the container, and the first number in the mapping defines how the datanode is reached from outside the Docker network.
With the suggested configuration, your datanodes will be reachable at datanode:9864 (datanode1:9864, datanode2:9864) from inside the Docker network, and at :9864 (and :9865, :9866 respectively) from outside it.
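A minimal sketch of the two extra services with the corrected port mappings could look like this. Note two assumptions beyond the answer above: each datanode is given its own named volume (hadoop_datanode1, hadoop_datanode2), since pointing several containers at the same hadoop_datanode volume would make them share one /hadoop/dfs/data directory; those extra volume names would also need to be declared in the top-level volumes: section.

```yaml
  datanode1:
    image: bde2020/hadoop-datanode:2.0.0-hadoop3.2.1-java8
    container_name: datanode1
    restart: always
    volumes:
      # assumption: a dedicated volume per datanode, so the nodes
      # don't collide on a shared /hadoop/dfs/data directory
      - hadoop_datanode1:/hadoop/dfs/data
    environment:
      SERVICE_PRECONDITION: "namenode:9870"
      CORE_CONF_fs_defaultFS: hdfs://namenode:9000
    ports:
      - "9865:9864"   # host port 9865 -> default datanode port 9864 in the container
    env_file:
      - ./hadoop.env
  datanode2:
    image: bde2020/hadoop-datanode:2.0.0-hadoop3.2.1-java8
    container_name: datanode2
    restart: always
    volumes:
      - hadoop_datanode2:/hadoop/dfs/data
    environment:
      SERVICE_PRECONDITION: "namenode:9870"
      CORE_CONF_fs_defaultFS: hdfs://namenode:9000
    ports:
      - "9866:9864"   # host port 9866 -> default datanode port 9864 in the container
    env_file:
      - ./hadoop.env
```

The original datanode service keeps its "9864:9864" mapping; all three containers listen on 9864 internally, and only the host-side port differs.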