OpenStack: Make network port unique for every instance in OS::Heat::ResourceGroup (count)
The problem is that the stack won't build when count is greater than 1.
The reason is that - port: { get_resource: test_port } is not unique for each instance created.
Error received: CREATE_FAILED Conflict: resources.compute_nodes.resources[3]: Port XXX is still in use.
Question: How can I make - port: { get_resource: test_port } unique for each instance?
compute_nodes:
  type: OS::Heat::ResourceGroup
  properties:
    count: 3
    resource_def:
      type: OS::Nova::Server
      properties:
        name: test-%index%
        key_name: { get_param: key_name }
        image: "Ubuntu Server 18.04 LTS (Bionic Beaver) amd64"
        flavor: m1.small
        networks:
          - port: { get_resource: test_port }

test_port:
  type: OS::Neutron::Port
  properties:
    network_id: { get_resource: private_net }
    security_groups: { get_param: sec_group_lin }
    fixed_ips:
      - subnet_id: { get_resource: private_subnet }

test_floating_ip:
  type: OS::Neutron::FloatingIP
  properties:
    floating_network: { get_param: public_net }
    port_id: { get_resource: test_port }
Make use of depends_on to align the flow of execution of the template:
compute_nodes:
  type: OS::Heat::ResourceGroup
  depends_on: [test_port, test_floating_ip]
  properties:
    count: 3
    resource_def:
      type: OS::Nova::Server
      properties:
        name: test-%index%
        key_name: { get_param: key_name }
        image: "Ubuntu Server 18.04 LTS (Bionic Beaver) amd64"
        flavor: m1.small
        networks:
          - port: { get_resource: test_port }

test_port:
  type: OS::Neutron::Port
  properties:
    network_id: { get_resource: private_net }
    security_groups: { get_param: sec_group_lin }
    fixed_ips:
      - subnet_id: { get_resource: private_subnet }

test_floating_ip:
  type: OS::Neutron::FloatingIP
  depends_on: [test_port]
  properties:
    floating_network: { get_param: public_net }
    port_id: { get_resource: test_port }
Your stack tries to attach the same port to different Nova servers, so it fails. The solution is to create a nested stack that creates your three resources (Nova server, Neutron port, and Neutron floating IP); the main stack then implements a resource group to "scale" your servers:
Nested stack: nested_stack.yaml
parameters:
  index:
    type: string
  sec_group_lin:
    type: string
  key_name:
    type: string
  public_net:
    type: string
  private_net:
    type: string
  private_subnet:
    type: string

resources:
  compute_nodes:
    type: OS::Nova::Server
    depends_on: [test_port, test_floating_ip]
    properties:
      name: { list_join: ['-', ['test', { get_param: index }]] }
      key_name: { get_param: key_name }
      image: "Ubuntu Server 18.04 LTS (Bionic Beaver) amd64"
      flavor: m1.small
      networks:
        - port: { get_resource: test_port }

  test_port:
    type: OS::Neutron::Port
    properties:
      network_id: { get_param: private_net }
      security_groups: { get_param: sec_group_lin }
      fixed_ips:
        - subnet_id: { get_param: private_subnet }

  test_floating_ip:
    type: OS::Neutron::FloatingIP
    depends_on: [test_port]
    properties:
      floating_network: { get_param: public_net }
      port_id: { get_resource: test_port }
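If the main stack needs to read back, say, each server's floating IP, the nested stack can expose it through an outputs section. This is a hypothetical addition not present in the original answer; the output name server_public_ip is an assumption:

```yaml
# Hypothetical outputs section for nested_stack.yaml: each nested stack
# exposes its floating IP so the parent stack can collect it.
outputs:
  server_public_ip:
    description: Floating IP address of this server
    value: { get_attr: [test_floating_ip, floating_ip_address] }
```

The parent stack can then gather all of them with { get_attr: [compute_nodes, server_public_ip] }, which on a ResourceGroup returns a list with one entry per group member.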
Then your main stack would look like:
parameters:
  key_name:
    type: string
  public_net:
    type: string
  sec_group_lin:
    type: string
  private_net:
    type: string
  private_subnet:
    type: string

resources:
  compute_nodes:
    type: OS::Heat::ResourceGroup
    properties:
      count: 3
      resource_def:
        type: nested_stack.yaml
        properties:
          index: "%index%"
          key_name: { get_param: key_name }
          public_net: { get_param: public_net }
          sec_group_lin: { get_param: sec_group_lin }
          private_net: { get_param: private_net }
          private_subnet: { get_param: private_subnet }
This will create x servers (here x = 3, since count is set to 3), each with its own test port and test floating IP.
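To deploy, the parameters can be supplied through an environment file. This is a sketch; the filenames and all parameter values below are placeholders, not values from the original post:

```yaml
# env.yaml -- example environment file; every value is a placeholder
parameters:
  key_name: my-keypair
  public_net: public
  sec_group_lin: default
```

The stack would then be created with the standard client command, e.g. openstack stack create -t main_stack.yaml -e env.yaml test-stack (template filename assumed).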