VMXNET3 vs. E1000 on RHEL5.7 on ESXi4.1
2012/05/31
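Only the transmitting side is captured below. The target VM (10.0.4.202 for the E1000 test, 10.0.5.202 for VMXNET3) would presumably have been running ttcp in receive/sink mode, something along these lines (the prompt and hostname here are illustrative, not from the original capture):

[root@target ~]# ttcp -rs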
Some brief stats:
Same ESXi box (DVS switch):
E1000:
[root@rhel5utility ~]# ttcp -st -n 1000000 10.0.4.202
ttcp-t: buflen=8192, nbuf=1000000, align=16384/0, port=5001
ttcp-t: sockbufsndsize=16384, sockbufrcvsize=87380, sockbufsize=51882, # tcp -> 10.0.4.202 #
ttcp-t: connect
ttcp-t: 8192000000 bytes in 19.569 real seconds = 408815.576 KB/sec +++
ttcp-t: 1000000 I/O calls, msec/call = 0.020, calls/sec = 51101.947
ttcp-t: 0.120user 1.522sys 0:19real 8% 0i+0d 258maxrss 0+3pf 13654+3csw
VMXNET3:
[root@rhel5utility ~]# ttcp -st -n 1000000 10.0.5.202
ttcp-t: buflen=8192, nbuf=1000000, align=16384/0, port=5001
ttcp-t: sockbufsndsize=16384, sockbufrcvsize=87380, sockbufsize=51882, # tcp -> 10.0.5.202 #
ttcp-t: connect
ttcp-t: 8192000000 bytes in 5.551 real seconds = 1441207.732 KB/sec +++
ttcp-t: 1000000 I/O calls, msec/call = 0.006, calls/sec = 180150.967
ttcp-t: 0.103user 2.119sys 0:05real 39% 0i+0d 252maxrss 0+3pf 1688+5csw
Different ESXi boxes (DVS switch), hosts connected over Gigabit networking:
E1000:
[root@rhel5utility ~]# ttcp -st -n 1000000 10.0.4.202
ttcp-t: buflen=8192, nbuf=1000000, align=16384/0, port=5001
ttcp-t: sockbufsndsize=16384, sockbufrcvsize=87380, sockbufsize=51882, # tcp -> 10.0.4.202 #
ttcp-t: connect
ttcp-t: 8192000000 bytes in 69.909 real seconds = 114433.805 KB/sec +++
ttcp-t: 1000000 I/O calls, msec/call = 0.072, calls/sec = 14304.226
ttcp-t: 0.143user 2.086sys 1:09real 3% 0i+0d 254maxrss 0+3pf 5671+6csw
VMXNET3:
[root@rhel5utility ~]# ttcp -st -n 1000000 10.0.5.202
ttcp-t: buflen=8192, nbuf=1000000, align=16384/0, port=5001
ttcp-t: sockbufsndsize=16384, sockbufrcvsize=87380, sockbufsize=51882, # tcp -> 10.0.5.202 #
ttcp-t: connect
ttcp-t: 8192000000 bytes in 69.963 real seconds = 114346.730 KB/sec +++
ttcp-t: 1000000 I/O calls, msec/call = 0.072, calls/sec = 14293.341
ttcp-t: 0.114user 1.873sys 1:09real 2% 0i+0d 254maxrss 0+3pf 5773+4csw
Conclusion: VMXNET3 makes a substantial difference for intra-host traffic, roughly 1.4 GB/s vs. 400 MB/s (about 3.5x) between VMs on the same ESXi host. Once the traffic has to cross the physical Gigabit link between hosts, both adapters hit the same wire-speed ceiling (~112 MB/s either way), so the choice of virtual NIC makes no measurable difference there.
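Not shown in the original capture, but a quick sanity check to confirm which driver a guest interface is actually bound to (the interface name eth0 is an assumption) is:

ethtool -i eth0

The "driver:" line in its output should read e1000 for the emulated adapter and vmxnet3 for the paravirtual one.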