
Nova instance throws an error on launch - "failed to perform requested operation on instance"

A Nova instance throws an error on launch - "Failed to perform requested operation on instance ... The server has either erred or is incapable of performing the requested operation (HTTP 500)." See the screenshot below.

[Screenshot: instance creation error]

Surprisingly, it works fine when the volume is attached separately after the instance has launched. When creating the instance, you need to set "Create New Volume" to "No".
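As a sketch, the workaround above can also be done from the OpenStack CLI (the flavor, image, network, and resource names below are placeholders, not values from this deployment):

```shell
# Boot the instance directly from the image, i.e. without asking Nova
# to create a boot volume ("Create New Volume" = No in Horizon).
openstack server create \
    --flavor m1.medium \
    --image centos7 \
    --network private \
    demo-instance

# Once the instance is ACTIVE, create and attach the volume separately.
openstack volume create --size 40 demo-vol
openstack server add volume demo-instance demo-vol
```

This mirrors the Horizon behavior described above: the failure only occurs on the boot-from-volume path, not on a plain attach.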

We restarted the Cinder services, but that did not fix the problem.

From the API logs, we found HTTP 500 errors during the API interaction between the service endpoints (Nova and Cinder). The logs are pasted below.

Can someone help resolve this issue?

Thanks in advance.

OpenStack - details

It is a 3-node setup: one controller plus two computes. The controller runs CentOS 7 with the OpenStack Ocata release, Cinder client version 1.11.0 and Nova client version 7.1.2. The list of Nova and Cinder RPMs is below.

==> api.log <==

2019-01-30 04:16:28.785 275098 ERROR cinder.api.middleware.fault [req-634abf81-df79-42b5-b8f4-8f19488c0bba a1c4c0232896400ba6fddf1cbcb54dd8 2db5c111414e4d2bbc14645e6f0931db - default default] Caught error: <class 'oslo_messaging.exceptions.MessagingTimeout'> Timed out waiting for a reply to message ID bf2f80590a754b59a720405cd0bc1ffb
2019-01-30 04:16:28.785 275098 ERROR cinder.api.middleware.fault Traceback (most recent call last):
2019-01-30 04:16:28.785 275098 ERROR cinder.api.middleware.fault   File "/usr/lib/python2.7/site-packages/cinder/api/middleware/fault.py", line 79, in __call__
2019-01-30 04:16:28.785 275098 ERROR cinder.api.middleware.fault     return req.get_response(self.application)
2019-01-30 04:16:28.785 275098 ERROR cinder.api.middleware.fault   File "/usr/lib/python2.7/site-packages/webob/request.py", line 1299, in send
2019-01-30 04:16:28.793 275098 INFO cinder.api.middleware.fault [req-634abf81-df79-42b5-b8f4-8f19488c0bba a1c4c0232896400ba6fddf1cbcb54dd8 2db5c111414e4d2bbc14645e6f0931db - default default] http://10.110.77.2:8776/v2/2db5c111414e4d2bbc14645e6f0931db/volumes/301f71f0-8fb5-4429-a67c-473d42ff9def/action returned with HTTP 500
2019-01-30 04:16:28.794 275098 INFO eventlet.wsgi.server [req-634abf81-df79-42b5-b8f4-8f19488c0bba a1c4c0232896400ba6fddf1cbcb54dd8 2db5c111414e4d2bbc14645e6f0931db - default default] 10.110.77.4 "POST /v2/2db5c111414e4d2bbc14645e6f0931db/volumes/301f71f0-8fb5-4429-a67c-473d42ff9def/action HTTP/1.1" status: 500  len: 425 time: 60.0791931
2019-01-30 04:16:28.813 275098 INFO cinder.api.openstack.wsgi [req-53d149ac-6e60-4ddd-9ace-216d12122790 a1c4c0232896400ba6fddf1cbcb54dd8 2db5c111414e4d2bbc14645e6f0931db - default default] POST http://10.110.77.2:8776/v2/2db5c111414e4d2bbc14645e6f0931db/volumes/301f71f0-8fb5-4429-a67c-473d42ff9def/action
2019-01-30 04:16:28.852 275098 INFO cinder.volume.api [req-53d149ac-6e60-4ddd-9ace-216d12122790 a1c4c0232896400ba6fddf1cbcb54dd8 2db5c111414e4d2bbc14645e6f0931db - default default] Volume info retrieved successfully.

Nova log:

2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [req-a4b94c35-2532-4e82-864c-ff33b972a3b2 a1c4c0232896400ba6fddf1cbcb54dd8 2db5c111414e4d2bbc14645e6f0931db - - -] [instance: aba62cf8-0880-4bf7-8201-3365861c8079] Instance failed block device setup
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079] Traceback (most recent call last):
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1588, in _prep_block_device
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079]     wait_func=self._await_block_device_map_created)
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079]   File "/usr/lib/python2.7/site-packages/nova/virt/block_device.py", line 512, in attach_block_devices
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079]     _log_and_attach(device)
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079]   File "/usr/lib/python2.7/site-packages/nova/virt/block_device.py", line 509, in _log_and_attach
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079]     bdm.attach(*attach_args, **attach_kwargs)
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079]   File "/usr/lib/python2.7/site-packages/nova/virt/block_device.py", line 408, in attach
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079]     do_check_attach=do_check_attach)
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079]   File "/usr/lib/python2.7/site-packages/nova/virt/block_device.py", line 48, in wrapped
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079]     ret_val = method(obj, context, *args, **kwargs)
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079]   File "/usr/lib/python2.7/site-packages/nova/virt/block_device.py", line 258, in attach
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079]     connector)
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079]   File "/usr/lib/python2.7/site-packages/nova/volume/cinder.py", line 168, in wrapper
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079]     res = method(self, ctx, *args, **kwargs)
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079]   File "/usr/lib/python2.7/site-packages/nova/volume/cinder.py", line 190, in wrapper
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079]     res = method(self, ctx, volume_id, *args, **kwargs)
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079]   File "/usr/lib/python2.7/site-packages/nova/volume/cinder.py", line 391, in initialize_connection
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079]     exc.code if hasattr(exc, 'code') else None)})
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079]   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079]     self.force_reraise()
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079]   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079]     six.reraise(self.type_, self.value, self.tb)
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079]   File "/usr/lib/python2.7/site-packages/nova/volume/cinder.py", line 365, in initialize_connection
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079]     context).volumes.initialize_connection(volume_id, connector)
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079]   File "/usr/lib/python2.7/site-packages/cinderclient/v2/volumes.py", line 404, in initialize_connection
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079]     {'connector': connector})
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079]   File "/usr/lib/python2.7/site-packages/cinderclient/v2/volumes.py", line 334, in _action
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079]     resp, body = self.api.client.post(url, body=body)
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079]   File "/usr/lib/python2.7/site-packages/cinderclient/client.py", line 167, in post
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079]     return self._cs_request(url, 'POST', **kwargs)
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079]   File "/usr/lib/python2.7/site-packages/cinderclient/client.py", line 155, in _cs_request
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079]     return self.request(url, method, **kwargs)
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079]   File "/usr/lib/python2.7/site-packages/cinderclient/client.py", line 144, in request
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079]     raise exceptions.from_response(resp, body)
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079] ClientException: The server has either erred or is incapable of performing the requested operation. (HTTP 500) (Request-ID: req-dcd4a981-8b22-4c3d-9ba7-25fafe80b8f5)
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079]
2019-01-30 03:58:04.811 5642 DEBUG nova.compute.claims [req-a4b94c35-2532-4e82-864c-ff33b972a3b2 a1c4c0232896400ba6fddf1cbcb54dd8 2db5c111414e4d2bbc14645e6f0931db - - -] [instance: aba62cf8-0880-4bf7-8201-3365861c8079] Aborting claim: [Claim: 4096 MB memory, 40 GB disk] abort /usr/lib/python2.7/site-packages/nova/compute/claims.py:124
2019-01-30 03:58:04.812 5642 DEBUG oslo_concurrency.lockutils [req-a4b94c35-2532-4e82-864c-ff33b972a3b2 a1c4c0232896400ba6fddf1cbcb54dd8 2db5c111414e4d2bbc14645e6f0931db - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.abort_instance_claim" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:270
2019-01-30 03:58:04.844 5642 INFO nova.scheduler.client.report [req-a4b94c35-2532-4e82-864c-ff33b972a3b2 a1c4c0232896400ba6fddf1cbcb54dd8 2db5c111414e4d2bbc14645e6f0931db - - -] Deleted allocation for instance aba62cf8-0880-4bf7-8201-3365861c8079

Output of some sanity-check commands from OpenStack:

[root@controller ~(keystone_admin)]# cinder service-list
+------------------+----------------+------+---------+-------+----------------------------+-----------------+
| Binary           | Host           | Zone | Status  | State | Updated_at                 | Disabled Reason |
+------------------+----------------+------+---------+-------+----------------------------+-----------------+
| cinder-backup    | controller     | nova | enabled | up    | 2019-01-31T10:27:20.000000 | -               |
| cinder-scheduler | controller     | nova | enabled | up    | 2019-01-31T10:27:13.000000 | -               |
| cinder-volume    | controller@lvm | nova | enabled | up    | 2019-01-31T10:27:12.000000 | -               |
+------------------+----------------+------+---------+-------+----------------------------+-----------------+


[root@controller yum.repos.d]# rpm -qa | grep cinder
openstack-cinder-10.0.5-1.el7.noarch
puppet-cinder-10.4.0-1.el7.noarch
python-cinder-10.0.5-1.el7.noarch
python2-cinderclient-1.11.0-1.el7.noarch
[root@controller yum.repos.d]# rpm -qa | grep nova
openstack-nova-conductor-15.1.0-1.el7.noarch
openstack-nova-novncproxy-15.1.0-1.el7.noarch
openstack-nova-compute-15.1.0-1.el7.noarch
openstack-nova-cert-15.1.0-1.el7.noarch
openstack-nova-api-15.1.0-1.el7.noarch
openstack-nova-console-15.1.0-1.el7.noarch
openstack-nova-common-15.1.0-1.el7.noarch
openstack-nova-placement-api-15.1.0-1.el7.noarch
python-nova-15.1.0-1.el7.noarch
python2-novaclient-7.1.2-1.el7.noarch
openstack-nova-scheduler-15.1.0-1.el7.noarch
puppet-nova-10.5.0-1.el7.noarch
[root@controller yum.repos.d]#

[root@controller yum.repos.d]# rpm -qa | grep ocata
centos-release-openstack-ocata-1-2.el7.noarch
[root@controller yum.repos.d]# uname -a
Linux controller 3.10.0-862.2.3.el7.x86_64 #1 SMP Wed May 9 18:05:47 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
[root@controller yum.repos.d]#
centos-release-openstack-ocata-1-2.el7.noarch

[root@controller yum.repos.d]# cinder --version
1.11.0
[root@controller yum.repos.d]# nova --version
7.1.2
[root@controller yum.repos.d]#

I found a fix for this issue. I observed that a few volumes in OpenStack were stuck in an error state, showing "error_deleting", when they were deleted. I explicitly changed the volume state in the Cinder DB using "cinder reset-state --state available <volume-id>".

This allowed me to delete the volume successfully. After that I restarted the Cinder services, and everything worked as usual.
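The recovery steps above, sketched as CLI commands (the volume ID is a placeholder taken from the logs above; the systemd unit names are as packaged on CentOS 7 / RDO and may differ on other distributions):

```shell
# Find volumes stuck in the "error_deleting" state.
cinder list | grep error_deleting

# Reset a stuck volume back to "available", then delete it.
cinder reset-state --state available 301f71f0-8fb5-4429-a67c-473d42ff9def
cinder delete 301f71f0-8fb5-4429-a67c-473d42ff9def

# Restart the Cinder services on the controller.
systemctl restart openstack-cinder-api \
                  openstack-cinder-scheduler \
                  openstack-cinder-volume
```

Note that `cinder reset-state` only changes the state recorded in the database; it does not touch the backend storage, so it should be used only when the volume is known to be in a consistent state.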
