What Domains (VMs) Are Using What Hard Drive Volumes in KVM/Libvirt?

I noticed in my well-used test KVM virtual environment that a few volumes had names that didn’t match the existing domain (VM) names. I also noticed that I had more volumes than domains. So how do I tell which volumes are being used by which VM? And how can I tell which volumes are orphaned? Note that for path purposes, my host is running Ubuntu 18.04.

I tried several “virsh” commands and spent time looking through the man pages. Nothing stood out, so here is what I ended up doing.

First: I needed to list and locate the volumes in the “default” pool:

virsh vol-list --pool default
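
With a couple of made-up volume names, the output looks roughly like this; the second column is the path:

 Name              Path
------------------------------------------------------------
 test-vm1.qcow2    /var/lib/libvirt/images/test-vm1.qcow2
 mystery.qcow2     /var/lib/libvirt/images/mystery.qcow2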

This also provides the path to the “default” pool where the volumes are located. I also noticed that I had a volume listed that was not shown when I did “ls -l” on the “/var/lib/libvirt/images/” directory. No problem, just update the pool listing:

virsh pool-refresh default

Solved that problem. Now I need to tie the volumes back to the VMs:

virsh dumpxml "my vm name"
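
For a single VM, the output can be narrowed to just the disk lines by piping it through grep (nothing fancy, just filtering the XML for the same “source file” string used below):

virsh dumpxml "my vm name" | grep -i "source file"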

That works but is cumbersome for multiple VMs and multiple volumes. So where are the VM configuration files located? A quick look around the directories where the volumes are stored turned up nothing, so best guess is /etc:

grep -ir "my vm name" /etc

That returns “/etc/libvirt/qemu/” and looking in that directory indicates we have found the VM config files. So now I need a list of VMs and their associated volume drives. Note that I am literally grepping for “source file” here:

grep -i "source file" /etc/libvirt/qemu/*
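
Each match is prefixed with the config file it came from, so mismatched names stand out right away. With the same made-up names as above, a line looks something like this:

/etc/libvirt/qemu/test-vm1.xml:      <source file='/var/lib/libvirt/images/mystery.qcow2'/>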

That allows me to tie the VM names to volume names that don’t match.

Now I can double-check whether a volume is or is not associated with a VM:

grep -i "name.qcow2" /etc/libvirt/qemu/*
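
Putting the two lists together, a rough sketch like the one below prints volumes in the “default” pool that no VM config references. The /tmp file names are arbitrary, and it assumes volume paths contain no spaces:

# Volume paths known to the default pool (skip the header lines)
virsh vol-list --pool default | awk 'NR > 2 && NF { print $2 }' | sort > /tmp/pool-vols.txt

# Volume paths referenced by the VM config files
grep -ho "source file='[^']*'" /etc/libvirt/qemu/*.xml | cut -d"'" -f2 | sort -u > /tmp/vm-vols.txt

# Anything only in the first list is a candidate orphan
comm -23 /tmp/pool-vols.txt /tmp/vm-vols.txt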

Locating a Preconfigured “ls” Alias in CentOS Using grep

Linux distributions sometimes configure “ll” (lowercase L’s) as an alias for “ls -l”. Debian-based OSes typically do this in the user’s ~/.bashrc file. This is not the case with Red Hat/CentOS-based OSes. I couldn’t remember where CentOS sets the alias and I needed to locate it. As a non-root user, this proved to be a little more challenging than I first expected. Yes, I could have looked it up on the web, but I decided to make an exercise out of it.
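
On Debian/Ubuntu, the alias typically shows up in the user’s own ~/.bashrc as a line along these lines (the exact ls flags vary by release, shown here only for illustration):

alias ll='ls -l'

On CentOS it is set system-wide somewhere under /etc, which is what the search below turns up.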

grep -riE "alias ?ll" /etc/ 2> /dev/null

I’m sure there are other ways to do it, but this worked. In short: I knew I needed to search for “ll” in /etc, but “ll” is also common inside many words, which is why the pattern anchors on “alias” with an optional space. Also, I’m running as a normal user, so I redirected stderr to avoid the “Permission denied” and other errors that cluttered the search.

Note that the following also returns the same result:

 grep -riE "alias ?ll" /etc/ 2>&1 | grep -v "Permission" 
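
Once you know where it is set, the bash builtins below confirm which definition the current shell actually picked up:

type ll
alias ll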

RAID – Mark Failure and Replace Drive

I have wanted to get this posted for a while but have been busy with SANS FOR500 material, work, etc.

What I try to do when transferring my old notes to the blog is to go out and work through the steps first, correcting my notes as I step through them. With this post, I have not done that because of the time it would take to set up and run through the steps. But as I always warn, these are notes, not full instructions. They get you in the ballpark, but you have to find the bases yourself.

So here we go…

This posting assumes the RAID and drive layout from this earlier post. Some steps below also refer back to it.

Software RAID 5 with UEFI/GPT via Ubuntu installer – Ubuntu Server 18.04

It might be best to point efibootmgr at an EFI boot partition that is not on the affected drive in case a reboot happens. See steps 7-10 from the post above.
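
Before making changes, the current firmware boot entries and boot order can be listed as root; the -v option shows which disk/partition each entry points at:

efibootmgr -v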

Check on drive state (and other useful items):

cat /proc/mdstat
mdadm --detail /dev/md0
mdadm --detail /dev/md1
mdadm --detail /dev/md2

Since in this case there are three RAID arrays, mark the appropriate drive’s partitions in all three arrays as failed and then remove them (in this case, the drive is sde).

Mark failure:

mdadm --fail /dev/md0 /dev/sde2
mdadm --fail /dev/md1 /dev/sde3
mdadm --fail /dev/md2 /dev/sde4

Mark for removal:

mdadm --remove /dev/md0 /dev/sde2
mdadm --remove /dev/md1 /dev/sde3
mdadm --remove /dev/md2 /dev/sde4
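
Depending on how the drive was replaced, the new disk may still need the same GPT partition layout before anything can be re-added. One way to do that, assuming another healthy array member (sda here, purely as an example; adjust device names to your layout) still has the correct table, is to copy it over with sgdisk and then give the new disk unique GUIDs:

# Copy the partition table from a surviving drive (sda) onto the new drive (sde)
sgdisk -R /dev/sde /dev/sda

# Randomize the GUIDs so the new drive does not clash with the source
sgdisk -G /dev/sde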

Once the drive is replaced and partitioned, add the partitions back into the arrays:

mdadm --add /dev/md0 /dev/sde2
mdadm --add /dev/md1 /dev/sde3
mdadm --add /dev/md2 /dev/sde4

Watch rebuild status:

cat /proc/mdstat
mdadm --detail /dev/md0
mdadm --detail /dev/md1
mdadm --detail /dev/md2
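
To keep an eye on the rebuild without re-running the commands by hand, something like this refreshes every couple of seconds and highlights what changed:

watch -d cat /proc/mdstat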

Go to the link at the beginning of this post and do steps 7-10 if needed.