I have looked around for good information on VM disk images and their support for sparse storage and compression. After experimenting a bit, I decided to compile a few key results. There are two attributes I care about here: storage footprint and network transfer.
For an image to be stored efficiently, it must take up as little space on disk as possible. If there is a run of no data in an image (free space that has been filled with zeros), it should not consume blocks on the filesystem. The key term here is "sparse image." But when you transfer a sparse image to another host naively, the "sparse bits" actually get transmitted as long strings of zeros... so you transfer 10 GB of data for a 10 GB sparsely allocated image holding (say) only 6 GB of actual filesystem data.
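A quick way to see sparseness in action (hypothetical file name): create a file that is one big hole and compare its apparent size to its actual disk usage:

truncate -s 10G sparse-demo.img
ls -lh sparse-demo.img
du -h sparse-demo.img

ls reports the apparent size (10G) while du reports the blocks actually allocated (essentially zero).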
To be efficient for a network transfer, the image has to actually be small, which ends up meaning compression (e.g. gzip, bzip2, ...).
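As a sketch (hypothetical file names): once the free space has been zeroed, a raw image compresses down to roughly its real contents, so you can gzip it before the copy:

gzip image.raw
scp image.raw.gz otherhost:/staging/

gzip replaces image.raw with image.raw.gz; decompress it on the far side before use.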
I used an LVM-backed instance for my source data. To fill the free space of its block storage with zeros, I ran this inside the instance:
dd if=/dev/zero of=/tmp/zeros bs=4M; rm /tmp/zeros; halt
This just writes zeros into a file under /tmp until the filesystem is full, removes the file, and then halts the instance. You may need to use a different location inside the guest because /tmp is sometimes mounted as a separate filesystem (e.g. tmpfs), in which case the zeros would never touch the block device.
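Two refinements worth considering (sketches, untested here): sync before removing the file so the zeros are actually flushed to the block device, and on guests whose storage stack passes discards through, fstrim can free the space without writing zeros at all:

dd if=/dev/zero of=/zeros bs=4M; sync; rm /zeros; halt
fstrim -v /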
Now my LV is ready to be converted to different image types. I used qemu-img to read from the block device and create images of several varieties:
qemu-img convert -c -O qcow2 /dev/vg/kvm-woof.snap host-woof.compressed.qcow2
qemu-img convert -O qcow2 /dev/vg/kvm-woof.snap host-woof.qcow2
qemu-img convert -O vdi /dev/vg/kvm-woof.snap host-woof.vdi
qemu-img convert -O vmdk /dev/vg/kvm-woof.snap host-woof.vmdk
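(The raw image in the listing below was presumably produced the same way, with something like: qemu-img convert -O raw /dev/vg/kvm-woof.snap host-woof.raw)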
The differences in size somewhat surprised me:
root@os-ph1:/staging# ls -lh host-woof.*
-rw-r--r-- 1 root root 2.1G Feb 25 17:01 host-woof.compressed.qcow2
-rw-r--r-- 1 root root 6.3G Feb 25 16:52 host-woof.qcow2
-rw-r--r-- 1 root root 10G Feb 25 16:42 host-woof.raw
-rw-r--r-- 1 root root 6.9G Feb 25 16:49 host-woof.vdi
-rw-r--r-- 1 root root 6.3G Feb 25 16:49 host-woof.vmdk
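You can cross-check these numbers with qemu-img info, which reports both the virtual size of an image and its actual size on disk:

qemu-img info host-woof.compressed.qcow2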
A raw image is pretty much the worst thing you can do. It turns out that, of the formats I tried, only qcow2 supports compression within the image itself (the -c flag above). qcow2 has some performance downsides, but if you are creating and destroying VMs a lot, you can certainly save yourself some network bandwidth by transferring smaller images. Creating the compressed qcow2 image took roughly twice as long as the uncompressed version (no hard numbers here). YMMV, as not all data is created equal (or equally compressible).
It's also interesting to note that a compressed qcow2 image may slowly expand through use, since new writes to the image may not be compressed.
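If a compressed image has fattened up over time, you can shrink it again offline by running it through another compressing convert (this writes a brand-new file; hypothetical output name):

qemu-img convert -c -O qcow2 host-woof.qcow2 host-woof.recompressed.qcow2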
-- EDIT --
I use LVM to back instances on hosts in OpenStack. It turns out that regardless of the format chosen, the whole (raw) image still has to be written out to the logical volume. Unless you have really fast disks, this is by far the slowest part of the process. At least in my environment (with spinning rust), all I gain is some space on my image store and less overall network bandwidth used. Also, images are cached in raw format on each node, and the cached image behaves something like a sparse file:
root@os-ph12:/var/lib/nova/instances/_base# ls -lh abcca*
-rw-r--r-- 1 nova nova 10G Feb 25 21:05 abcca5bbe40c5b147f8a110bf81dab8bbb65db25
root@os-ph12:/var/lib/nova/instances/_base# du -h abcca*
6.2G abcca5bbe40c5b147f8a110bf81dab8bbb65db25
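One practical consequence: if you ever move these cached files around by hand, use sparse-aware tools, or the holes get expanded to full size in transit (destination paths are illustrative):

cp --sparse=always abcca5bbe40c5b147f8a110bf81dab8bbb65db25 /backup/
rsync --sparse abcca5bbe40c5b147f8a110bf81dab8bbb65db25 otherhost:/var/lib/nova/instances/_base/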