
Kernel panic when loading snapshot of VM that uses image generated with i586_qemu config by ptxdist

I built the i586_qemu platform (with some changes to the package selection) using ptxdist 2012.12.0. Everything works fine on my laptop (Ubuntu 12.04.2, Linux 3.5.0-23-generic, in VirtualBox running on an MBP). However, when I copied the images to a server (running Ubuntu 12.04.4, Linux 3.11.0-19-generic) and tried to use the savevm and loadvm commands, I got a kernel panic. Here's the output:

(qemu) savevm vm0  
(qemu) Clocksource tsc unstable (delta = 5441725078 ns)  
Switching to clocksource jiffies  
(qemu) info snapshots  
ID        TAG                 VM SIZE                DATE       VM CLOCK  
1         vm0                     16M 2014-04-19 00:36:32   00:04:12.923 

It seems savevm runs a little longer than it does on my laptop. But when I restart the VM, the problem appears:

sudo kvm -nographic -m 256 -M pc -no-reboot -kernel ./images/linuximage  -hda ./images/hd.img.qcow2 -device e1000,netdev=net0,mac='DE:AD:BE:EF:12:03' -netdev tap,id=net0,script=qemu-ifup.sh -append "root=/dev/sda1 rw console=ttyS0,115200 debug" -loadvm vm0
+ switch=br0
+ ovs-vsctl del-port br0 tap0
+ [ -n tap0 ]
+ whoami
+ /usr/bin/sudo /usr/sbin/tunctl -u root -t tap0
sudo: /usr/sbin/tunctl: command not found
+ /usr/bin/sudo /sbin/ip link set tap0 up
+ sleep 0.1s
+ /usr/bin/sudo ovs-vsctl add-port br0 tap0
+ exit 0
divide error: 0000 [#1] PREEMPT 
Modules linked in:

Pid: 0, comm: swapper Not tainted 3.0.0-pengutronix #1 Bochs Bochs
EIP: 0060:[<c01067e8>] EFLAGS: 00000246 CPU: 0
EAX: 00000000 EBX: c02e6a74 ECX: 00000096 EDX: 00000003
ESI: 00020800 EDI: c02b4000 EBP: c02b3ff8 ESP: c02b3fe8
 DS: 007b ES: 007b FS: 0000 GS: 0000 SS: 0068
Process swapper (pid: 0, ti=c02b2000 task=c02ba480 task.ti=c02b2000)
Stack:
 c0101448 c02cc5a3 c02e6a74 00000800 0052b003 00000000
Call Trace:
 [<c0101448>] ? 0xc0101448
 [<c02cc5a3>] ? 0xc02cc5a3
Code: 0f 01 c8 e8 41 ff ff ff 85 c0 75 07 89 c1 fb 0f 01 c9 c3 fb c3 83 3d 98 c6 2f c0 00 75 1c 80 3d c5 9c 2c c0 00 74 13 eb 15 fb f4 <eb> 01 fb 89 e0 25 00 e0 ff ff 83 48 0c 04 c3 fb f3 90 c3 89 e0 
EIP: [<c01067e8>]  SS:ESP 0068:c02b3fe8
---[ end trace 6fe899157eb8f58b ]---
Kernel panic - not syncing: Attempted to kill the idle task!
Clocksource tsc unstable (delta = 5233522621 ns)

The most obvious thing to me is the clocksource unstable warning. According to "What does 'clocksource tsc unstable' mean?", the problem could be a difference in TSC between cores (the server I am using has 48). So, what should be done to stop the kernel panic? Or are there other possible causes?
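One thing worth trying, if the unstable TSC is indeed the trigger, is to tell the guest kernel not to rely on the TSC at all via its command line. This is a hedged sketch, not something from the original post: `clocksource=jiffies` and `tsc=unstable` are standard Linux kernel parameters, and the image paths and flags are copied from the invocation shown above.

```shell
# Sketch: same invocation as in the question, but with the guest kernel
# pinned to a non-TSC clocksource via the -append kernel command line.
# clocksource=jiffies forces the jiffies clocksource; tsc=unstable marks
# the TSC as unstable from boot so the kernel never trusts it.
sudo kvm -nographic -m 256 -M pc -no-reboot \
    -kernel ./images/linuximage \
    -hda ./images/hd.img.qcow2 \
    -append "root=/dev/sda1 rw console=ttyS0,115200 debug clocksource=jiffies tsc=unstable" \
    -loadvm vm0
```

On a many-core host it can also help to pin the VM to a single core (e.g. `taskset -c 0 kvm ...`) so the guest's TSC reads always come from the same physical core; whether that avoids the panic here is untested.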

The problem goes away when I use the TCG accelerator (which is the default accelerator on my laptop) instead of the KVM kernel module. The clocksource problem still occurs, but it seems to have no influence on the VM.
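For anyone wanting to reproduce this workaround explicitly: the `kvm` wrapper implies KVM acceleration, so to get TCG you either run the plain QEMU binary or request the accelerator by name. A hedged sketch of the equivalent TCG invocation, reusing the paths from the question (the binary name `qemu-system-i386` is an assumption matching the i586 target):

```shell
# Sketch: same VM started with the TCG (pure emulation) accelerator instead
# of KVM. "-machine pc,accel=tcg" is the standard QEMU syntax for selecting
# the accelerator; everything else mirrors the original command.
qemu-system-i386 -nographic -m 256 -machine pc,accel=tcg -no-reboot \
    -kernel ./images/linuximage \
    -hda ./images/hd.img.qcow2 \
    -device e1000,netdev=net0,mac='DE:AD:BE:EF:12:03' \
    -netdev tap,id=net0,script=qemu-ifup.sh \
    -append "root=/dev/sda1 rw console=ttyS0,115200 debug" \
    -loadvm vm0
```

Note that TCG is much slower than KVM, so this is more a diagnostic (it points the finger at KVM's handling of the restored TSC state) than a long-term fix.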
