
How to attach a Cloud Block Storage volume to an OnMetal server with pyrax?

I would like to automate attaching a Cloud Block Storage volume to an OnMetal server running CentOS 7 by writing a Python script that uses the pyrax Python module. Do you know how to do this?

Attaching a Cloud Block Storage volume to an OnMetal server is a bit more complicated than attaching it to a normal Rackspace virtual server. You will notice this when you try to attach a Cloud Block Storage volume to an OnMetal server in the Rackspace web interface, the Cloud Control Panel, where the following text is shown:

Note: When attaching volumes to OnMetal servers, you must log in to the OnMetal server to set the initiator name, discover the targets and then connect to the targets.

So you can attach the volume in the web interface, but in addition you need to log in to the OnMetal server and run a few commands. The actual commands can be copied and pasted from the web interface into a terminal on the OnMetal server.

Likewise, before detaching the volume, you need to run a few commands.

The web interface is actually not needed, though. It can all be done with the Python module pyrax.

First, install the RPM package iscsi-initiator-utils on the OnMetal server:

[root@server-01 ~]# yum -y install iscsi-initiator-utils

Assuming the volume_id and server_id are known, the following Python code first attaches and then detaches the volume. Unfortunately, the mountpoint argument of attach_to_instance() does not work for OnMetal servers, so we need to run the command lsblk -n -d before and after attaching the volume; by comparing the two outputs we can deduce the device name given to the newly attached volume. (The full script below does not include this device-name deduction, but a sketch of it follows here.)
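For illustration, a minimal sketch of that comparison could look like this. The helper deduce_new_device() is hypothetical and not part of the script below; it just diffs the device names in the two lsblk outputs:

# Hypothetical helper: deduce the device name of a newly attached
# volume by diffing `lsblk -n -d` output captured before and after
# the attach. Returns e.g. "/dev/sdb", or None if nothing changed.
def deduce_new_device(lsblk_before, lsblk_after):
    before = {line.split()[0] for line in lsblk_before.splitlines() if line.strip()}
    after = {line.split()[0] for line in lsblk_after.splitlines() if line.strip()}
    added = after - before
    return ("/dev/" + added.pop()) if added else None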

#!/usr/bin/python
# Disclaimer: Use the script at your own risk!
import json
import os
import paramiko
import pyrax

# Replace server_id and volume_id
# with your own values
server_id = "cbdcb7e3-5231-40ad-bba6-45aaeabf0a8d"
volume_id = "35abb4ba-caee-4cae-ada3-a16f6fa2ab50"
# Just to demonstrate that the mount_point argument for                                                                                                                                                 
# attach_to_instance() is not working for OnMetal servers                                                                                                                                               
disk_device = "/dev/xvdd"

def run_ssh_commands(ssh_client, remote_commands):
    for remote_command in remote_commands:
        stdin, stdout, stderr = ssh_client.exec_command(remote_command)
        print("")
        print("command: " + remote_command)
        for line in stdout.read().decode().splitlines():
            print(" stdout: " + line)
        exit_status = stdout.channel.recv_exit_status()
        if exit_status != 0:
            raise RuntimeError("The command:\n{}\n"
                               "exited with exit status: {}\n"
                               "stderr: {}".format(remote_command,
                                                   exit_status,
                                                   stderr.read().decode()))

pyrax.set_setting("identity_type", "rackspace")
pyrax.set_default_region('IAD')
creds_file = os.path.expanduser("~/.rackspace_cloud_credentials")
pyrax.set_credential_file(creds_file)
server = pyrax.cloudservers.servers.get(server_id)
vol = pyrax.cloud_blockstorage.find(id=volume_id)
vol.attach_to_instance(server, mountpoint=disk_device)
pyrax.utils.wait_until(vol, "status", "in-use", interval=3, attempts=0,
                       verbose=True)

ssh_client = paramiko.SSHClient()
ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh_client.connect(server.accessIPv4, username='root', allow_agent=True)

# The new metadata is only available if we get() the server once more                                                                                                                                   
server = pyrax.cloudservers.servers.get(server_id)

metadata = server.metadata["volumes_" + volume_id]
parsed_json = json.loads(metadata)
target_iqn = parsed_json["target_iqn"]
target_portal = parsed_json["target_portal"]
initiator_name = parsed_json["initiator_name"]

run_ssh_commands(ssh_client, [
    "lsblk -n -d",
    "echo InitiatorName={} > /etc/iscsi/initiatorname.iscsi".format(initiator_name),
    "iscsiadm -m discovery --type sendtargets --portal {}".format(target_portal),
    "iscsiadm -m node --targetname={} --portal {} --login".format(target_iqn, target_portal),
    "lsblk -n -d",
    "iscsiadm -m node --targetname={} --portal {} --logout".format(target_iqn, target_portal),
    "lsblk -n -d"
])

vol.detach()
pyrax.utils.wait_until(vol, "status", "available", interval=3, attempts=0,
                                    verbose=True)

Running the Python code looks like this:

user@ubuntu:~$ python attach.py 2> /dev/null
Current value of status: attaching (elapsed:  1.0 seconds)
Current value of status: in-use (elapsed:  4.9 seconds)

command: lsblk -n -d
 stdout: sda    8:0    0 29.8G  0 disk

command: echo InitiatorName=iqn.2008-10.org.openstack:a24b6f80-cf02-48fc-9a25-ccc3ed3fb918 > /etc/iscsi/initiatorname.iscsi

command: iscsiadm -m discovery --type sendtargets --portal 10.190.142.116:3260
 stdout: 10.190.142.116:3260,1 iqn.2010-11.com.rackspace:35abb4bb-caee-4c5e-ad53-a16f6f12ab50
 stdout: 10.69.193.1:3260,1 iqn.2010-11.com.rackspace:35abb4bb-caee-4c5e-ad53-a16f6f12ab50

command: iscsiadm -m node --targetname=iqn.2010-11.com.rackspace:35abb4bb-caee-4c5e-ad53-a16f6f12ab50 --portal 10.190.142.116:3260 --login
 stdout: Logging in to [iface: default, target: iqn.2010-11.com.rackspace:35abb4bb-caee-4c5e-ad53-a16f6f12ab50, portal: 10.190.142.116,3260] (multiple)
 stdout: Login to [iface: default, target: iqn.2010-11.com.rackspace:35abb4bb-caee-4c5e-ad53-a16f6f12ab50, portal: 10.190.142.116,3260] successful.

command: lsblk -n -d
 stdout: sda    8:0    0 29.8G  0 disk
 stdout: sdb    8:16   0   50G  0 disk

command: iscsiadm -m node --targetname=iqn.2010-11.com.rackspace:35abb4bb-caee-4c5e-ad53-a16f6f12ab50 --portal 10.190.142.116:3260 --logout
 stdout: Logging out of session [sid: 5, target: iqn.2010-11.com.rackspace:35abb4bb-caee-4c5e-ad53-a16f6f12ab50, portal: 10.190.142.116,3260]
 stdout: Logout of [sid: 5, target: iqn.2010-11.com.rackspace:35abb4bb-caee-4c5e-ad53-a16f6f12ab50, portal: 10.190.142.116,3260] successful.

command: lsblk -n -d
 stdout: sda    8:0    0 29.8G  0 disk
Current value of status: detaching (elapsed:  0.8 seconds)
Current value of status: available (elapsed:  4.7 seconds)
user@ubuntu:~$

Please note:

Although it is not mentioned in the official Rackspace documentation,

https://support.rackspace.com/how-to/attach-a-cloud-block-storage-volume-to-an-onmetal-server/

Rackspace Managed Infrastructure Support, in a forum post from August 5, 2015, also recommends running

iscsiadm -m node -T $TARGET_IQN -p $TARGET_PORTAL --op update -n node.startup -v automatic

to make the connection persistent, so that the iSCSI session is automatically restarted at boot.
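If you want the script itself to take care of this, the same command could be run over SSH as well. A sketch, reusing ssh_client, target_iqn and target_portal as defined in the script above:

# Sketch: make the iSCSI session restart automatically at boot.
# Reuses ssh_client, target_iqn and target_portal from the script above.
run_ssh_commands(ssh_client, [
    "iscsiadm -m node -T {} -p {} --op update "
    "-n node.startup -v automatic".format(target_iqn, target_portal),
])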

Update

Regarding deducing the new device name: Major Hayden writes in a blog post that

[root@server-01 ~]# ls /dev/disk/by-path/

can be used to find the path of the new device. If you want to resolve any symlinks, I guess this would work:

[root@server-01 ~]# find -L /dev/disk/by-path -maxdepth 1 -mindepth 1 -exec realpath {} \;
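For reference, the same symlink resolution can also be done directly in Python. A minimal sketch, assuming it runs on the OnMetal server itself:

import os

by_path = "/dev/disk/by-path"
# Resolve every by-path symlink to its underlying device node,
# mirroring the `find -L ... -exec realpath` one-liner above.
devices = sorted({os.path.realpath(os.path.join(by_path, name))
                  for name in os.listdir(by_path)})
print(devices)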

