VMXNET3 vs. E1000 on RHEL5.7 on ESXi4.1

Some brief stats:

Same ESXi box (DVS switch):

E1000:

[root@rhel5utility ~]# ttcp -st -n 1000000 10.0.4.202
ttcp-t: buflen=8192, nbuf=1000000, align=16384/0, port=5001
ttcp-t: sockbufsndsize=16384, sockbufrcvsize=87380, sockbufsize=51882,  # tcp -> 10.0.4.202 #
ttcp-t: connect
ttcp-t: 8192000000 bytes in 19.569 real seconds = 408815.576 KB/sec +++
ttcp-t: 1000000 I/O calls, msec/call = 0.020, calls/sec = 51101.947
ttcp-t: 0.120user 1.522sys 0:19real 8% 0i+0d 258maxrss 0+3pf 13654+3csw

VMXNET3:

[root@rhel5utility ~]# ttcp -st -n 1000000 10.0.5.202
ttcp-t: buflen=8192, nbuf=1000000, align=16384/0, port=5001
ttcp-t: sockbufsndsize=16384, sockbufrcvsize=87380, sockbufsize=51882,  # tcp -> 10.0.5.202 #
ttcp-t: connect
ttcp-t: 8192000000 bytes in 5.551 real seconds = 1441207.732 KB/sec +++
ttcp-t: 1000000 I/O calls, msec/call = 0.006, calls/sec = 180150.967
ttcp-t: 0.103user 2.119sys 0:05real 39% 0i+0d 252maxrss 0+3pf 1688+5csw
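As a quick sanity check on these figures, the KB/sec value ttcp prints is just the total bytes divided by 1024 and by the elapsed time; recomputing the intra-box E1000 rate:

```shell
# 8192000000 bytes moved in 19.569 seconds (first E1000 run above):
awk 'BEGIN { printf "%.0f KB/sec\n", 8192000000 / 1024 / 19.569 }'
```

This lands within a few KB/sec of the reported 408815.576; the small residual is just ttcp printing the elapsed time to three decimals.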

Different ESXi boxes (DVS switch), boxes with Gigabit networking:

E1000:

[root@rhel5utility ~]# ttcp -st -n 1000000 10.0.4.202
ttcp-t: buflen=8192, nbuf=1000000, align=16384/0, port=5001
ttcp-t: sockbufsndsize=16384, sockbufrcvsize=87380, sockbufsize=51882,  # tcp -> 10.0.4.202 #
ttcp-t: connect
ttcp-t: 8192000000 bytes in 69.909 real seconds = 114433.805 KB/sec +++
ttcp-t: 1000000 I/O calls, msec/call = 0.072, calls/sec = 14304.226
ttcp-t: 0.143user 2.086sys 1:09real 3% 0i+0d 254maxrss 0+3pf 5671+6csw

VMXNET3:

[root@rhel5utility ~]# ttcp -st -n 1000000 10.0.5.202
ttcp-t: buflen=8192, nbuf=1000000, align=16384/0, port=5001
ttcp-t: sockbufsndsize=16384, sockbufrcvsize=87380, sockbufsize=51882,  # tcp -> 10.0.5.202 #
ttcp-t: connect
ttcp-t: 8192000000 bytes in 69.963 real seconds = 114346.730 KB/sec +++
ttcp-t: 1000000 I/O calls, msec/call = 0.072, calls/sec = 14293.341
ttcp-t: 0.114user 1.873sys 1:09real 2% 0i+0d 254maxrss 0+3pf 5773+4csw
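For anyone reproducing these numbers: each transmit run above needs a ttcp receiver already listening on the target box. A minimal sketch using the classic BSD ttcp flags (flag spellings vary slightly between ttcp builds, so check the local man page):

```shell
# On the target (10.0.4.202 or 10.0.5.202): receive and sink the data
ttcp -rs

# On the sender, as in the runs above: transmit 1000000 x 8192-byte buffers
ttcp -st -n 1000000 10.0.4.202
```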

Conclusion: the adapter choice makes a real difference for intra-box traffic (VMXNET3 pushed roughly 3.5x the throughput of E1000 here), while between boxes both adapters are limited to the same rate by the Gigabit link.

 


Updating Solaris 11 without fear

Quickly, here’s how I upgrade my Solaris 11 box, which has 4 SATA disks. The first 3 disks are part of the rpool, and the 4th disk is a scratch disk holding a previous backup.

Before I did anything, my disk config looked like this, a three-way RAID 1 mirror:

  pool: rpool
 state: ONLINE
config:

        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            c3t0d0s0  ONLINE       0     0     0
            c3t1d0s0  ONLINE       0     0     0
            c3t2d0s0  ONLINE       0     0     0

and c3t3d0s0 is a fourth disk with a previous backup.

Step 1, split off the third disk in case of problems:

# zpool split -R /mnt rpool rpoolbackup c3t2d0s0
# zpool export rpoolbackup
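Before moving on, it's cheap to confirm the split-off disk really carries an importable pool; running `zpool import` with no arguments only lists candidate pools, it does not import anything:

```shell
# rpoolbackup should show up here with c3t2d0s0 as its single device
zpool import
```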

Step 2, mount the update ISO from Oracle:

# mount -o ro,loop -F hsfs /root/SOL11_1111_SRU6_06_INCR_REPO.iso /mnt

Step 3, perform the update:

# /usr/bin/pkg set-publisher -g file:///mnt/repo solaris
# /usr/bin/pkg update
# umount /mnt
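`pkg update` clones the current boot environment rather than updating it in place (the clone shows up as `solaris-1` here). A quick look before activating anything (sketch):

```shell
# The freshly created clone carries the newest creation date;
# the N/R flags show what is active now vs. active on reboot.
beadm list
```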

Step 4, rename as desired, activate, and reboot:

# beadm activate solaris
# beadm rename solaris-1 solaris11-sru6
# beadm activate solaris11-sru6
# init 6

Step 5, remove the upgrade repo:

# /usr/bin/pkg unset-publisher solaris
# /usr/bin/pkg set-publisher -g http://pkg.oracle.com/solaris/release/ solaris
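A quick check that the publisher now points back at Oracle's release repository (sketch):

```shell
# The origin listed should be http://pkg.oracle.com/solaris/release/
pkg publisher solaris
```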

Step 6, swap the target 2 and 3 disks, so that the previous backup disk ends up in the third mirror slot and the freshly split-off mirror ends up in the fourth slot:

# cfgadm -a | grep c3
sata6/0::dsk/c3t0d0            disk         connected    configured   ok
sata6/1::dsk/c3t1d0            disk         connected    configured   ok
sata6/2::dsk/c3t2d0            disk         connected    configured   ok
sata6/3::dsk/c3t3d0            disk         connected    configured   ok

# cfgadm -c unconfigure sata6/2
Unconfigure the device at: /devices/pci@0,0/pci1458,b002@11:2
This operation will suspend activity on the SATA device
Continue (yes/no)? y
# cfgadm -c unconfigure sata6/3
Unconfigure the device at: /devices/pci@0,0/pci1458,b002@11:3
This operation will suspend activity on the SATA device
Continue (yes/no)? y

[swap, exchange disks]

Step 7, rescan the disks:

# cfgadm -c configure sata6/2
# cfgadm -c configure sata6/3

Step 8, reattach the third mirror:

# zpool attach -f rpool c3t1d0s0 c3t2d0s0

Make sure to wait until the resilver is done before rebooting.
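The resilver can be watched with `zpool status`; a simple polling sketch (the exact scan-line wording can vary between ZFS releases):

```shell
# Poll once a minute until the scan line stops reporting a running resilver
while zpool status rpool | grep -q 'resilver in progress'; do
    sleep 60
done
zpool status rpool
```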

Step 9, if necessary, we can import the previous rpool from the fourth disk:

# zpool import -R /mnt rpoolbackup