Qcow2 image creation with preallocation options to boost IO performance

While evaluating KVM IO performance, I leveraged the qcow2 preallocation capabilities, which helped me boost IO performance. It is generally a good idea to preallocate an image for better performance, for example when attaching disks to instances in OpenStack.

Earlier, if we wanted to preallocate a qcow2 image, we had to do it manually with fallocate, as shown below:

qemu-img create -f qcow2 -o preallocation=metadata /tmp/test.qcow2 8G
fallocate -l 8591507456 /tmp/test.qcow2

Now qemu-img supports these very useful options directly:

qemu-img create -f qcow2 /tmp/test.qcow2 -o preallocation=falloc 1G

preallocation=falloc
Formatting '/tmp/test.qcow2', fmt=qcow2 size=1073741824 encryption=off cluster_size=65536 preallocation='falloc' lazy_refcounts=off refcount_bits=16

"falloc" mode preallocates space for the image by calling posix_fallocate().

preallocation=full

qemu-img create -f qcow2 /tmp/test.qcow2 -o preallocation=full 1G
Formatting '/tmp/test.qcow2', fmt=qcow2 size=1073741824 encryption=off cluster_size=65536 preallocation='full' lazy_refcounts=off refcount_bits=16

"full" mode preallocates space for the image by writing zeros to the underlying storage, similar to what dd does.
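The difference between a sparse file and a fallocate-style preallocated file can be seen with plain filesystem tools; a minimal sketch (file names are illustrative, and this assumes a filesystem that supports fallocate, such as ext4 or XFS):

```shell
# Sparse 1 GiB file: apparent size is 1G, but no blocks are allocated yet
truncate -s 1G sparse.img

# Preallocated 1 GiB file, analogous to qcow2's preallocation=falloc
fallocate -l 1G falloc.img

# ls -l shows both as 1G; du shows the blocks actually allocated
ls -l sparse.img falloc.img
du -h sparse.img falloc.img   # sparse.img is near 0, falloc.img is ~1.0G

rm -f sparse.img falloc.img
```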

Ref: https://bugzilla.redhat.com/show_bug.cgi?id=1232570

Thanks to my IO guru Stefan Hajnoczi for clarifying some of the IO internals.

Memory hotplug support in PowerKVM

Bharata B Rao's Blog

Introduction
Pre requisites
Basic hotplug operation
More options
Driving via libvirt
Debugging aids
Internal details
Future

Introduction

Memory hotplug is a technique or a feature that can be used to dynamically increase or decrease the amount of physical RAM available in the system. In order for the dynamically added memory to become available to the applications, memory hotplug should be supported appropriately at multiple layers like in the firmware and operating system. This blog post mainly looks at the emerging support for memory hotplug in KVM virtualization for PowerPC sPAPR virtual machines (pseries guests). In case of virtual machines, memory hotplug is typically used to vertically scale up or scale down the guest’s physical memory at runtime based on the requirements. This feature is expected to be useful for supporting vertical scaling of PowerPC guests in KVM Cloud environments.
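In upstream QEMU, the basic hotplug operation consists of adding a memory backend and then plugging a DIMM device backed by it; a sketch from the QEMU monitor, assuming the guest was started with hotpluggable memory slots (e.g. `-m 4G,slots=4,maxmem=8G`) and using illustrative object and device IDs:

```
(qemu) object_add memory-backend-ram,id=mem1,size=1G
(qemu) device_add pc-dimm,id=dimm1,memdev=mem1
```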

In KVM virtualization, an alternative way to dynamically increase or decrease…


Install Apache Cassandra on CentOS

Apache Cassandra is a NoSQL database intended for storing large amounts of data in a decentralized, highly available cluster.

yum -y update

yum -y install java

vim /etc/yum.repos.d/datastax.repo

[datastax]
name = DataStax Repo for Apache Cassandra
baseurl = http://rpm.datastax.com/community
enabled = 1
gpgcheck = 0

yum -y install dsc20

systemctl start cassandra

systemctl status cassandra

systemctl enable cassandra

cqlsh
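Once `cqlsh` connects, a quick sanity check confirms the node is serving queries; a minimal sketch (the `demo` keyspace and `users` table are illustrative):

```
cqlsh> CREATE KEYSPACE demo WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};
cqlsh> USE demo;
cqlsh> CREATE TABLE users (id int PRIMARY KEY, name text);
cqlsh> INSERT INTO users (id, name) VALUES (1, 'alice');
cqlsh> SELECT * FROM users;
```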

systemctl restart cassandra

Ref: http://www.liquidweb.com/kb/how-to-install-cassandra-on-centos-7/

Enable virtio-blk data plane in libvirt for high performance

Usage:

The previous post was based on the old method. The new format uses libvirt's IOThreads allocation:

https://libvirt.org/formatdomain.html#elementsIOThreadsAllocation
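With the new format, the data plane is configured natively through IOThreads in the domain XML rather than qemu passthrough arguments; a minimal sketch of the relevant elements (disk path and thread count are illustrative):

```xml
<domain type='kvm'>
  <iothreads>1</iothreads>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' iothread='1'/>
      <source file='/var/lib/libvirt/images/guest.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
  </devices>
</domain>
```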

Will share performance results with data-plane soon.

 

Old Post.

Replace <domain type='kvm'>

with

<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>

And add qemu-commandline options

</devices>
<qemu:commandline>
<qemu:arg value='-set'/>
<qemu:arg value='device.virtio-disk0.scsi=off'/>
<qemu:arg value='-set'/>
<qemu:arg value='device.virtio-disk0.config-wce=off'/>
<qemu:arg value='-set'/>
<qemu:arg value='device.virtio-disk0.x-data-plane=on'/>
</qemu:commandline>
</domain>

Access RHEVM with CLI

Connect to the RHEVM CLI:

[RHEVM shell (connected)]#  connect --url "https://" --user "@internal" --password "" --insecure

Deploy VM:

[RHEVM shell (connected)]#  add vm --name --cluster-name --template-name --placement_policy-host-name

Start/stop a VM:

[RHEVM shell (connected)]#  action vm start

[RHEVM shell (connected)]#  action vm stop

Run a script:

Include all RHEV commands in a file and run it from the RHEVM shell:

For example, to start 100 VMs:

[RHEVM shell (connected)]#  file rhev_cmd.sh
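Such a script file simply contains one RHEVM shell command per line. A small sketch that generates it (the `vmNNN` naming scheme is illustrative):

```shell
# Generate a script with one 'action vm <name> start' line per VM,
# assuming VMs are named vm001 .. vm100
for i in $(seq -w 1 100); do
  echo "action vm vm$i start"
done > rhev_cmd.sh
```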

 

Trace System calls using perf

This example shows how to trace sys_enter_io_submit, sys_exit_io_submit using perf.

1) Record while the workload is running (my example runs IO in a KVM VM):

perf record -ag -e syscalls:sys_enter_io_submit -e syscalls:sys_exit_io_submit

2) View the recorded events with perf script:

qemu-kvm 22423 [000] 35377.973736: syscalls:sys_enter_io_submit: ctx_id:  0x7fb1ee52c000, nr: 0x000000d1, iocbpp: 0x7fb1f7513110

7fb1ec79c697 io_submit (/usr/lib64/libaio.so.1.0.1)

2300000023 [unknown] ([unknown])

4e0000004e [unknown] ([unknown])

qemu-kvm 22423 [000] 35377.973751: syscalls:sys_exit_io_submit: 0x1

7fb1ec79c697 io_submit (/usr/lib64/libaio.so.1.0.1)

2300000023 [unknown] ([unknown])

4f0000004e [unknown] ([unknown])

Subtract the enter timestamp from the exit timestamp to find the syscall duration:

35377.973751 – 35377.973736 = 0.000015
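The same subtraction can be scripted; a small sketch using the timestamps from the trace above:

```shell
# Syscall latency = exit timestamp - enter timestamp (seconds)
t_enter=35377.973736
t_exit=35377.973751
awk -v a="$t_enter" -v b="$t_exit" 'BEGIN { printf "%.6f\n", b - a }'
# prints 0.000015
```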

Thanks to Stefan Hajnoczi for the help.