As of the Linux 2.6 kernel, the default I/O scheduler is Completely Fair Queuing (CFQ). This scheduler is an effective solution for nearly all workloads.
With the release of Linux 2.6, the kernel includes these schedulers as configurable options:
Completely Fair Queuing (cfq): CFQ is the default under many Linux distributions. CFQ places synchronous requests submitted by processes into a number of per-process queues and then allocates time slices for each of the queues to access the disk. The length of the time slice and the number of requests a queue is allowed to submit depends on the I/O priority of the given process. Asynchronous requests for all processes are batched together in fewer queues, one per priority.
NOOP (noop): NOOP is the simplest I/O scheduler for the Linux kernel based upon the FIFO queue concept. The NOOP scheduler inserts all incoming I/O requests into a simple FIFO queue and implements request merging. The scheduler assumes I/O performance optimization will be handled at some other layer of the I/O hierarchy.
Anticipatory (anticipatory): Anticipatory is an algorithm for scheduling hard disk I/O. It seeks to increase the efficiency of disk utilization by "anticipating" synchronous read operations.
Deadline (deadline): The goal of the Deadline scheduler is to guarantee a start service time for a request. It does this by imposing a deadline on all I/O operations to prevent starvation of requests.
The default scheduler will affect all disk I/O for VMDK and RDM-based virtual storage solutions. In virtualized environments, it is often not beneficial to schedule I/O at both the host and guest layers. If multiple guests use storage on a filesystem or block device managed by the host operating system, the host may be able to schedule I/O more efficiently because it is aware of requests from all guests and knows the physical layout of storage, which may not map linearly to the guests' virtual storage.
Testing has shown that NOOP or Deadline perform better for virtualized Linux guests. ESX uses an asynchronous intelligent I/O scheduler, and for this reason virtual guests should see improved performance by allowing ESX to handle I/O scheduling.
To implement this change, please refer to the documentation for your Linux distribution.
Note: All scheduler tuning should be tested under normal operating conditions, as synthetic benchmarks typically do not accurately reflect the performance of systems that share resources in virtual environments.
For example, the change can be implemented as follows:
The scheduler can be set for each hard disk unit. To check which scheduler is being used for a particular drive, run this command:
cat /sys/block/disk/queue/scheduler
For example, to check the current I/O scheduler for sda:
# cat /sys/block/sda/queue/scheduler
[noop] anticipatory deadline cfq
In this example, the sda drive scheduler is set to NOOP.
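If the guest has several virtual disks, the same check can be run across all of them at once. This is a small sketch, assuming the disks appear as sda, sdb, and so on:
# for d in /sys/block/sd*/queue/scheduler; do echo "$d: $(cat $d)"; done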
To change the scheduler on a running system, run this command:
# echo scheduler > /sys/block/disk/queue/scheduler
For example, to set the sda I/O scheduler to NOOP:
# echo noop > /sys/block/sda/queue/scheduler
Note: This command will not change the scheduler permanently. The scheduler will be reset to the default on reboot. To make the system use a specific scheduler by default, add an elevator parameter to the default kernel entry in the GRUB boot loader menu.lst file.
For example, to make NOOP the default scheduler for the system, the /boot/grub/menu.lst kernel entry would look like this:
title CentOS (2.6.18-128.4.1.el5)
root (hd0,0)
kernel /vmlinuz-2.6.18-128.4.1.el5 ro root=/dev/VolGroup00/LogVol00 elevator=noop
initrd /initrd-2.6.18-128.4.1.el5.img
With the elevator parameter in place, the system will set the I/O scheduler to the one specified on every boot.
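Note: Newer distributions use GRUB 2 rather than the legacy GRUB shown above. On those systems the same idea applies, although the exact file locations depend on the distribution: append elevator=noop to the GRUB_CMDLINE_LINUX line in /etc/default/grub and then regenerate the boot configuration, for example with:
# grub2-mkconfig -o /boot/grub2/grub.cfg
On Debian or Ubuntu systems, update-grub performs the same regeneration step.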
Tuesday, July 17, 2012
How to activate CDP and LLDP in a vSwitch in ESXi
From esxcli
To activate both on vSwitch0, enter:
esxcli network vswitch standard set -c both -v vSwitch0
To list the CDP and LLDP status, enter:
esxcli network vswitch standard list -v vSwitch0
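For reference, the -c (--cdp-status) option accepts down, listen, advertise, or both. For example, to set the vSwitch back to listen-only mode, enter:
esxcli network vswitch standard set -c listen -v vSwitch0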
Monday, July 16, 2012
Forcing a link state up or down for a vmnic interface on ESXi 5.x
To change the link state of a physical uplink (vmnic) interface, log in to the ESXi 5.0 host using Tech Support Mode with root account privileges.
Change the link status of the uplink vmnic with one of these commands:
esxcli network nic down -n vmnicX
esxcli network nic up -n vmnicX
Where X is the vmnic number (for example, vmnic0).
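To verify the result, list the physical NICs and check the link state column, for example:
esxcli network nic list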
Tuesday, July 10, 2012
Issues related to updating the VMware vCenter Server Appliance
The vCenter Server Appliance fails to install the update to 5.0 U1, and the log file /opt/vmware/var/log/vami/updatecli.log displays the following error message:
"ERROR: Not enough space on disk to migrate embedded database."
The solution to this problem is as follows:
"ERROR: Not enough space on disk to migrate embedded database."
The solution to this problem is this:
- Create a new 20 GB virtual disk and attach it to the vCenter Server Appliance.
- Log in to the vCenter Server Appliance's console and format the new disk as follows:
- At the command line, run "echo "- - -" > /sys/class/scsi_host/host0/scan" to rescan the SCSI bus so the new disk is detected.
- Run "parted -s /dev/sdc mklabel msdos" to create a partition table on the new disk.
- Run "parted -s /dev/sdc mkpartfs primary ext2 0 22G" to create and format the partition.
- Mount the new partition under /storage/db/export:
- Run "mkdir -p /storage/db/export".
- Run "mount /dev/sdc1 /storage/db/export".
- Repeat the update process.
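Before repeating the update, it may be worth a quick check (not part of the original steps) that the new partition is mounted and has free space:
df -h /storage/db/export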