Skylable has finally released its first beta

April 11th, 2014

After 2 years of hard work, we have finally released the 0.1 beta of Skylable SX.

If it’s true that “if you’re not embarrassed by your beta, you waited too long to go into beta”, then yes, I admit it, we waited too long.

We have been running this release of Skylable SX on our production servers (3 nodes with 2TB each) for about 3 months, and we have not lost any data nor experienced any crashes.

We have also run a great deal of reliability tests on our test farm hosted at Exea, with a total of 40 virtual machines, 190GB of RAM, 60 cores and 72TB of disks.

Many things still need polishing, but the core of the project is ready and very stable. Now it’s a matter of completing the missing parts while keeping reliability and performance intact.

 

 

 


Extending a physical volume nested inside a logical volume: LVM over LVM

December 15th, 2013

I’ve recently hit a nasty problem while setting up a few virtual machines on a KVM host using a local LVM volume group as the storage pool for the virtual machines.

It’s not that easy to extend a VG which is nested inside an LV which is in turn part of a larger VG. Now this might seem like a borderline scenario to you, but it’s actually quite a common situation when you start playing with KVM and LVM.

If you are in a hurry, jump to my proposed solutions: method 1 and method 2.

If you are interested in the gory details, keep reading…

[image: the “yo dawg” meme about nesting LVM inside LVM]

Why using LVM storage pools makes a lot of sense

There are many good reasons to use LVM as the storage pool for your virtual machines. With LVM you can increase the storage assigned to a virtual machine whenever it’s needed. As a bonus, starting from LVM version 2.02.73, you don’t have to worry about properly aligning partitions: everything happens automagically.

My preferred strategy is to create one Logical Volume (LV) on the KVM host for each virtual machine.
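
As a rough sketch of that strategy (the volume group name “vg”, the LV name “vmname1” and the 10GB size are just example values), creating the backing storage for a new virtual machine boils down to:

  • lvcreate -n vmname1 -L 10G vg (creates /dev/vg/vmname1 on the KVM host)
  • point the virtual machine definition at /dev/vg/vmname1 as its disk (for example via the <disk> element of the libvirt domain XML)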

Why using LVM inside virtual machines makes a lot of sense

When you install the operating system on the virtual machine, you face the same dilemma: it’s hard to predict how much space you will need for each partition inside the virtual machine. Do you really need 20GB for /var? Or is it better to have a larger /tmp? Storage is a limited resource: there is simply never enough of it, and you cannot waste it.

Again, the problem goes away if you use LVM inside the virtual machine as well. You create a single LVM partition which takes the bulk of the space available for this virtual machine and then create an LV for each mount point (/home, /var, /usr, /tmp, /var/tmp, …). You can be conservative in assigning the available space because you can extend the logical volumes any time, right?
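
For completeness, here is roughly what that initial setup looks like from inside the virtual machine, assuming the LVM partition is /dev/vda2 and the volume group is called “vmvg” as in the rest of this post (names and sizes are just placeholders):

  • pvcreate /dev/vda2
  • vgcreate vmvg /dev/vda2
  • lvcreate -n home -L 5G vmvg
  • mkfs.ext4 /dev/vmvg/home
  • [...one LV and one filesystem per mount point...]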

What happens when you nest LVM inside LVM

Now here comes the difficult part: if you are doing things the way I do, you will end up with the following situation.

The KVM host has a volume group (let’s call it “vg”) which contains:

  • /dev/vg/root (the root of the KVM host)
  • /dev/vg/swap (the swap of the KVM host)
  • /dev/vg/var (the /var partition of the KVM host)
  • /dev/vg/home (the /home partition of the KVM host)
  • [...more mountpoints here...]
  • /dev/vg/vmname1 (the virtual machine called vmname1)
  • /dev/vg/vmname2 (the virtual machine called vmname2)
  • [...more virtual machines here...]

 

The logical volume called /dev/vg/vmname1 on the KVM host contains the entire virtual machine called “vmname1”.

The virtual machine sees /dev/vg/vmname1 as “/dev/vda” (assuming you are using the virtio driver, it might be “/dev/sda” otherwise).

“/dev/vda” contains a partition of type “8e” for LVM and this partition is used as the physical device for a volume group (let’s call it “vmvg”) which contains the various LVs, like this:

  • /dev/vmvg/root
  • /dev/vmvg/swap
  • /dev/vmvg/var
  • /dev/vmvg/home
  • [...]

 

Now let’s say that you run out of space on /dev/vmvg/home on vmname1. Chances are that you still have some free extents in the volume group vmvg, in which case you can easily extend the logical volume by logging into the virtual machine and running:

lvextend -L+5G /dev/vmvg/home
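
Two things are worth checking around this step (a quick sketch, assuming the ext filesystem and the example names used above):

  • vgs vmvg (the VFree column shows how much free space is left in the volume group before you extend)
  • resize2fs /dev/vmvg/home (after lvextend, so that the filesystem actually uses the new space)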

But what if you don’t have any free space available on “vmvg”? What if you only have 2GB available? In this case you need to do a bit more work…

 

The lame solution

I decided to write this post after digging around the web for a solution to this problem and finding that many blogs and forums recommend the wrong approach.

Their typical recommendation is:

  • create a new LV on the KVM host, call it “/dev/vg/vmname1-i-am-desperate”
  • edit the virtual machine configuration so that it sees “/dev/vg/vmname1-i-am-desperate” as “/dev/vdb” (or “/dev/sdb” if you are not using virtio)
  • reboot the virtual machine
  • from inside the virtual machine:
    • create a PV on /dev/vdb: pvcreate /dev/vdb
    • add the new PV to the existing VG: vgextend vmvg /dev/vdb
    • verify with “vgdisplay” that the disk space from /dev/vdb is now available in “vmvg”

What do you do when the virtual machine runs out of space again? Simple! You create “/dev/vg/vmname1-i-am-desperate-again” on the KVM host, add it to the virtual machine as “/dev/vdc”, and repeat the whole procedure.

Obviously this is not the proper approach: it introduces a lot of fragmentation and it’s not something you can do forever, unless you want to end up with hundreds of PVs…

 

The proper solution

In this post I’m going to present two methods to extend a volume group nested inside a logical volume the proper way.

With both my methods you will have a single physical volume for each virtual machine.

Method #1 is very simple, but it has two downsides:

  1. it works only if the virtual machine uses a primary partition as the LVM partition. In other words: if the physical volume used by LVM inside the virtual machine is on a partition with a number greater than 4 (i.e. a logical partition inside an extended partition), this method won’t work.
    Note to Debian users: if you use LVM guided partitioning when installing a Debian virtual machine, by default LVM is placed on partition 5, therefore method #1 won’t work for these virtual machines.
  2. if the size of the virtual machine is 10GB and you want to extend it to 15GB, you need to have at least 25GB available in the volume group of the KVM host (see the free-space check sketched below). This method involves creating a copy of the logical volume of the virtual machine with the desired size: it doesn’t extend the existing logical volume, it creates an entirely new one. After creating the new LV and verifying that the copy was successful, you can delete the old one.
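
Before starting, it’s worth confirming that the KVM host’s volume group actually has that much free space. A quick check (assuming the host volume group is called “vg” as in the examples above):

  • vgs vg (the VFree column must be at least as large as the new size you want for the virtual machine)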

Method #2 can be used with any kind of setup, but requires many more steps.

 

Method #1:

This method uses a nifty tool called “virt-resize” which is part of guestfs-tools.

On the KVM host, install the guestfs-tools:

  • apt-get install libguestfs-tools
  • apt-get update (without this, update-guestfs-appliance might fail)
  • update-guestfs-appliance

Verify the current disk layout of the virtual machine (NOTE: this command must be run on the KVM host!):

  • virt-df /dev/vg/vmname1

Shutdown the virtual machine:

  • virsh shutdown vmname1

Extend the partition holding the LVM (e.g. /dev/vda2) by running the following commands on the KVM host:

  • lvrename /dev/vg/vmname1 /dev/vg/vmname1-backup
  • lvcreate -n vmname1 -L 20G /dev/vg
  • virt-resize /dev/vg/vmname1-backup /dev/vg/vmname1 --expand /dev/vda2

Start the virtual machine:

  • virsh start vmname1

Log into the virtual machine, then extend the PV, the LV and the filesystem:

  • pvresize /dev/vda2
  • lvextend -L+5G /dev/vmvg/home
  • resize2fs /dev/vmvg/home (if you are using the ext filesystem)

If everything works as expected, delete the old LV. Log into the KVM host and run:

  • lvremove /dev/vg/vmname1-backup

 

Method #2:

First of all you need to extend the logical volume /dev/vg/vmname1, by running the following command on the KVM host:

  • lvextend -L+5G /dev/vg/vmname1

Reboot the virtual machine and log into it. All the following commands must be run inside the virtual machine!
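
One caveat here: a reboot initiated from inside the guest may not be enough for the virtual machine to see the new size of its disk, because the underlying QEMU process keeps running. If the extra space does not show up, a full power cycle from the KVM host should do it (recent libvirt versions also provide “virsh blockresize” as an alternative that avoids the downtime):

  • virsh shutdown vmname1
  • virsh start vmname1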

Identify the device holding the PV of the virtual machine with:

# pvs
  PV         VG   Fmt  Attr PSize  PFree
  /dev/vda2  vmvg lvm2 a--  15.00g 1.00g

Assuming the name of the device is /dev/vda, run:

  • apt-get install gnu-fdisk
  • gfdisk /dev/vda

(Note that I omitted the partition number, for obvious reasons)

Check the size of the device (use the ‘p’ command) and verify that it is as big as /dev/vg/vmname1 on the KVM host. If that’s not the case, most likely you missed one of the previous steps.

Command (m for help): p
Disk /dev/vda: 21.0 GB, 20971520000 bytes
255 heads, 63 sectors/track, 2549 cylinders, total 40960000 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000ef979

Device Boot      Start         End      Blocks   Id  System
/dev/vda1   *        2048    39192575    19595264   83  Linux
/dev/vda2        39194624    40957951      881664   8e LVM

 

Now it’s time to let LVM know about the new space available on the device of the virtual machine. To make that happen, you first need to grow the partition which holds the PV in the virtual machine and then you need to make LVM aware that the PV has grown.

Make sure that gnu-fdisk is using sectors as the unit (not cylinders); use the “u” command to switch between the two.

Identify the partition holding LVM, in this example it’s partition number 2.

Write down the starting sector of the LVM partition (39194624 in this example), then delete the partition (type ‘d’, then enter ‘2’).

Recreate the partition with the same starting sector. Do NOT accept the default suggested by gnu-fdisk, use the starting sector that you wrote down in the previous step!!!

Accept the default as the ending sector (i.e. the last sector of the device).
Print the partition table again and, if everything looks good, write it with the “w” command.
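
Putting the steps above together, the whole gfdisk session looks more or less like this (the prompts are paraphrased and may differ slightly between gnu-fdisk versions; 39194624 is the starting sector from the example above):

Command (m for help): u          (switch the units to sectors, if currently in cylinders)
Command (m for help): d
Partition number (1-4): 2
Command (m for help): n
Partition number (1-4): 2        (re-create it as a primary partition with the same number)
First sector: 39194624           (the original starting sector, NOT the suggested default)
Last sector: <Enter>             (accept the default, i.e. the last sector of the device)
Command (m for help): p          (double-check the new layout)
Command (m for help): w

If the re-created partition shows up with type 83 (Linux) instead of 8e in the output of “p”, you can set it back with the “t” command before writing; LVM will normally still find its physical volume either way, but keeping the type consistent avoids confusion later.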

Reboot the virtual machine and then run:

  • pvresize /dev/vda2

This will extend the size of the physical volume on /dev/vda2 to take all the space available in the /dev/vda2 partition (which has grown from 15GB to 20GB in this example).

If the command is successful, you will see the additional 5GB available in the vmvg volume group:

  • vgdisplay

You can now extend the /home partition of the virtual machine with:

  • lvextend -L+5G /dev/vmvg/home
  • resize2fs /dev/vmvg/home (if you are using the ext filesystem)

 

Note to Debian users:

As previously mentioned, if you install Debian using “Guided partitioning with LVM support”, by default the LVM partition is placed on a logical partition inside the extended partition, i.e. /dev/vda5.

In this case you must (a sketch of the full session follows the list):

  1. make sure that gfdisk is displaying sectors and not cylinders
  2. write down the starting sectors of the extended partition (typically /dev/vda2) and of the logical partition (typically /dev/vda5)
  3. delete the logical partition
  4. delete the extended partition
  5. re-create the extended partition with the same starting sector as the original /dev/vda2 and the default ending sector (i.e. the last sector of the disk)
  6. re-create the logical partition with the same starting sector as the original /dev/vda5 and the default ending sector (i.e. the last sector of the disk)
  7. Reboot and then run: pvresize /dev/vda5
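
A sketch of how that gfdisk session might look, using S2 and S5 as placeholders for the starting sectors written down in step 2 (prompts paraphrased, the exact wording may differ):

Command (m for help): d
Partition number (1-5): 5        (delete the logical partition first)
Command (m for help): d
Partition number (1-4): 2        (then delete the extended partition)
Command (m for help): n
Partition type: e                (re-create the extended partition as number 2)
First sector: S2                 (the original starting sector of /dev/vda2, NOT the default)
Last sector: <Enter>             (accept the default, i.e. the last sector of the disk)
Command (m for help): n
Partition type: l                (re-create the logical partition, it becomes /dev/vda5 again)
First sector: S5                 (the original starting sector of /dev/vda5, NOT the default)
Last sector: <Enter>             (accept the default)
Command (m for help): w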

Cheap usb-ethernet adapter for MacOSX

February 15th, 2013

I recently purchased a cheap USB-Ethernet adapter for my MacBook Air for just 5 EUR.
The model name is KY-LANRD9700: it’s a USB 2.0 to 100Mbps Ethernet adapter and it uses the AX88772 chipset.
The USB id is: 0x9700:0x0fe6

This chipset is fully supported under MacOSX. You can download the driver from the ASIX website.

No need to spend 30 EUR on the official USB-Ethernet adapter from Apple!
