Executive summary for Windows 2008 R2 and Broadcom multifunction adapters:
1. TCP offload - problems. Not OK for production use.
2. iSCSI offload - 10G: still not supported; 1G: problems. Not OK for production use.
A lot of time has passed since the last blog post. There is still no good news about Windows 2008, but today that is an OS of the past. After the new Windows Server release, in-depth testing of TCP offload and iSCSI offload was done on Windows 2008 R2 with the latest drivers for the Broadcom multifunction chips. QLogic add-on iSCSI adapters were used for comparison. Processor performance impact was not measured; of course, the more offloads, the less loaded the processor, but the goal of this test was to measure throughputs and stability.
The equipment was the same as that used for the W2008 tests described in the previous post.
•BL495 G5, tested:
–Broadcom (LOM) network interfaces (FLEX-10 VC)
(custom throughput rates 9Gbps, 5Gbps, 2Gbps and 1Gbps)
(Separate tests with TCP offload enabled and iSCSI offload enabled)
–QLOGIC network interfaces (separate tests with 2 and 4 interfaces)
–QLOGIC network interfaces with ISCSI offload enabled (separate tests with 2 and 4 interfaces)
–Broadcom (LOM) network interfaces (1Gbps, Virtual Connect 1/10)
(separate tests with TCP offload enabled and iSCSI offload enabled)
•DL385 G5, tested:
–Broadcom LOM NIC
–QLOGIC network interfaces (separate tests with 2 and 4 interfaces), used as network adapters
–QLOGIC network interfaces with iSCSI offload enabled (separate tests with 2 and 4 interfaces)
•Windows 2008 R2 Enterprise
•In all tests the same dual-controller MSA was used (MSA 2324i)
•QLogic QMH4062, two cards placed in BL495G5 (4 1Gbps ports total)
•QLogic QLE4062C used in DL385G5
•Latest drivers from hp.com at the time of the tests (5.0.32 for 10G, 5.0.16 for 1G)
•Duplicated HP ProCurve 2910 switches were used to connect the iSCSI target and initiator network interfaces.
•Jumbo frames were NOT used in the tests. We ran several tests with jumbo frames enabled but saw no noticeable difference (the packets-per-second rate possible with one MSA connected is far below the PPS the switches can achieve).
•During the tests, switches, MSA and servers were not used in any other environment and were not loaded with any other tasks except iSCSI Performance tests.
•10 LUNs (2 VDISKs, 24 x SAS 73.4GB 15k RPM disks)
•IOMETER.org 2006.07.27; all tests 100% sequential (as the purpose was to test iSCSI performance). Processor load was low during all the tests; the purpose of this test was not to measure the processor impact of iSCSI offload (that is a separate question).
•When testing with 4 adapters, one iSCSI session was opened from each server-side port to each MSA2324i port (4x1Gbps total). When testing with 2 adapters, two iSCSI sessions were opened from each server-side adapter to two separate physical ports of the MSA2324i.
•N/O means no offloads. TCP offload means the OS initiator with TCP offload ON. iSCSI offload means the OS initiator was not used (except for configuration) and disks were presented to the OS by the iSCSI card drivers.
The lab test environment configuration is here (click for a full-sized picture):
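To illustrate the jumbo-frames note above, here is a back-of-the-envelope frame-rate calculation (a sketch only; it computes line-rate PPS for one 1Gbps link, which is tens of thousands of frames per second, far below what gigabit switches forward):

```python
# Back-of-the-envelope: frames per second needed to saturate one 1 Gbps
# iSCSI link at full-MTU frames, with and without jumbo frames.
# On-wire overhead per frame: Ethernet header + FCS (18 bytes) plus
# preamble/SFD + inter-frame gap (20 bytes).

LINK_BPS = 1_000_000_000  # 1 Gbps

def pps_at_mtu(mtu_bytes: int) -> float:
    """Frames/second at line rate when every frame carries a full MTU."""
    on_wire_bytes = mtu_bytes + 18 + 20
    return LINK_BPS / (on_wire_bytes * 8)

standard = pps_at_mtu(1500)  # ~81,000 pps
jumbo = pps_at_mtu(9000)     # ~14,000 pps

print(f"standard frames: {standard:,.0f} pps")
print(f"jumbo frames:    {jumbo:,.0f} pps")
```

Either way the frame rate is orders of magnitude below a switch's forwarding capacity, which matches the observation that enabling jumbo frames changed nothing in this setup.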
Test No.1. The BL495G5 Flex-10 test results were as follows (click to see full slide):
Improvements over W2008:
• TCP offload gives a noticeable speed increase in the Flex-10 configuration (in the cases where it works).
• No stability problems (iSCSI time-outs and dropped disks) with W2008 R2. With 2008, iSCSI from the LOM is not stable: iSCSI disks are lost from the OS under heavy traffic, and data is corrupted under heavy writes. Customers must be advised to use W2008 R2 or W2003, otherwise they will lose their data. Only the QLogic adapters are stable under W2008.
• iSCSI offload is still not available in Flex-10. The FAQ states that this is a Flex-10 limitation not yet addressed in firmware. This needs to be solved. It can be classified as a missing feature, but customers have been waiting years for the Broadcom iSCSI offload firmware/drivers to be fixed. NO RESULT YET; this severely hurts the Flex-10 architecture as such, and hurts the whole iSCSI ecosystem in general.
• In all BL495G5+Flex-10 tests, write performance was hard-limited at 126MBps, regardless of I/O request size or NIC interface speed. This is a performance problem that needs to be solved.
• Problem: with TCP offload ON, write traffic FAILS with I/O request sizes larger than 124kB (still OK at 124kB, drops to nearly zero at IOMETER size 125kB and above). This is not an IOMETER problem: Microsoft Data Protection Manager 2007, for example, never finishes backup jobs in this configuration, as it probably uses blocks larger than 124kB.
Note: a Flex-10 traffic limiter configured at 1Gbps allows at least 1.2Gbps of read traffic at small request sizes. Not a problem for production at all, just a note.
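The 124kB boundary is easy to probe outside of IOMETER. A minimal sketch (the target path is a hypothetical file on a disk backed by the iSCSI volume under test; on a healthy stack throughput should stay roughly flat across these block sizes, while on the affected configuration it collapses above 124kB):

```python
import os
import tempfile
import time

def probe_write_rates(path: str, sizes_kb=(64, 96, 124, 125, 128, 256),
                      total_mb: int = 16) -> dict:
    """Write total_mb of zeros at each block size and report MB/s per size.
    Note: on Windows, opening the volume with FILE_FLAG_NO_BUFFERING would
    give cleaner numbers; plain os-level writes are used here for brevity."""
    rates = {}
    for kb in sizes_kb:
        block = b"\0" * (kb * 1024)
        count = max(1, (total_mb * 1024) // kb)
        fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
        t0 = time.perf_counter()
        for _ in range(count):
            os.write(fd, block)
        os.fsync(fd)  # include flush in the measurement
        elapsed = time.perf_counter() - t0
        os.close(fd)
        rates[kb] = (count * kb / 1024) / elapsed  # MB/s
    return rates

if __name__ == "__main__":
    # Hypothetical path: point this at a file on the iSCSI disk under test.
    target = os.path.join(tempfile.gettempdir(), "iscsi_probe.bin")
    for kb, mbps in probe_write_rates(target).items():
        print(f"{kb:>4} kB blocks: {mbps:8.1f} MB/s")
    os.remove(target)
```

The interesting comparison is 124 versus 125: a sharp cliff between those two sizes reproduces the offload bug without involving any backup software.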
Test No.2. Broadcom LOM versus QLogic iSCSI performance (click for full size slide):
Improvements over W2008:
• No stability problems (iSCSI time-outs and dropped disks) with Windows 2008 R2. With W2008+Broadcom, iSCSI is not stable: iSCSI disks are lost from the OS under heavy traffic, and data is corrupted under heavy writes.
• Virtual Connect 1/10 gives only a slight decrease in performance at small I/O sizes and a very small increase at large I/O sizes. In short: no performance impact. Good.
• 1Gbps Broadcom LOM read performance becomes unstable and worse with TCP offload ON, and substantially worse with iSCSI offload ON (performance drops more than 2x). The server cannot be the problem, as the Broadcom adapters behave exactly the same in the BL465 and in the DL385. Traffic becomes very unstable; the network utilization charts look like a seismograph during a heavy earthquake. Traffic from the QLogic adapters is always even (very stable at all times).
• With TCP offload ON, write traffic FAILS with I/O request sizes larger than 124kB (still OK at 124kB, drops to nearly zero at IOMETER request size 125kB and above). This is not an IOMETER problem: Microsoft Data Protection Manager 2007, for example, never finishes backup jobs in this configuration, as it probably uses blocks larger than 124kB. This is the same bug as with the 10Gbps adapters under Flex-10.
Executive summary: the Broadcom multifunction adapter drivers and/or firmware on ProLiant G5 servers are LOW QUALITY and DO NOT WORK PROPERLY. HP needs to either fix the firmware/drivers or replace Broadcom with a different vendor in new HP servers.
• Write performance problems with Flex-10: traffic from a 10Gbps port is limited to ~126MBps with no TCP offload, and fails at I/O sizes over 124kB with TCP offload.
• TCP offload cannot be used in a production environment for either 10G or 1G Broadcom multifunction adapters. Write operations fail under Flex-10, and read operations fail under Blade/Virtual Connect 1/10 or DL385G5/ProCurve. It is a Broadcom chip/driver problem.
• iSCSI offload cannot be used in a production environment. It is not possible on 10G, where it would be most relevant, due to Flex-10 limitations, and read performance is three times lower with both blade and non-blade 1Gbps LOMs.
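One observation on the Flex-10 write ceiling (our speculation only, not a confirmed root cause): converting units shows the ~126MBps limit is almost exactly one gigabit per second of payload, which hints the write path may be throttled to a 1Gbps internal limit regardless of the configured FlexNIC speed.

```python
# The observed Flex-10 write ceiling, converted from megabytes per
# second to gigabits per second. 126 MB/s * 8 = 1008 Mbps ~ 1 Gbps,
# suspiciously close to an exact 1 Gbps internal limit.

ceiling_mbytes = 126  # observed hard limit, MB/s
gbps = ceiling_mbytes * 8 / 1000
print(f"{ceiling_mbytes} MB/s = {gbps:.3f} Gbps")
```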
What the customer expects from HP:
Priority No.1: fix the Flex-10 problems with a new firmware/driver version. Due to the write performance problems, Flex-10 cannot be used for iSCSI (even with no offloads at all).
Priority No.2: fix the write operation failures in the Broadcom TCP offload driver when the I/O size is larger than 124kB (both 10G and 1G multifunction adapters).
Priority No.3: fix the TCP offload and iSCSI offload READ problems. Until they are fixed, support for TCP offload and iSCSI offload must be withdrawn from all QuickSpecs. ON is the current default in the drivers.
Priority No.4: enable iSCSI offload in Flex-10.
Overall, iSCSI and TCP offloads on ProLiant via Broadcom-based LOMs or PCI cards have never been stable, and have not been fixed for years. The only properly working offloaded iSCSI solution at 1Gbps is to purchase expensive QLogic cards, which HP resells for blades. Low (1Gbps) speed and a high price, but stable. But since, starting with Windows 2008 R2, the Broadcoms can be used properly as simple NICs with the OS iSCSI initiator, there is no reason to invest in QLogic. There is no iSCSI offload solution at all for 10G. It seems HP will try to abandon this good disk-access technology and rush to the next new technology: FCoE, converged adapters, and so on.
Additional unneeded complexity is being pushed onto customers. Focus is shifting from the existing and still unfixed to the new, more complex, and untested. It is hard to believe that a company as big as HP cannot properly update the drivers for its multifunction network adapters. For years. The situation is better with W2008 R2: at least we can use the 1Gbps LOM adapters with the OS iSCSI initiator. But on Flex-10 there is still no properly working combination of settings, and TCP and iSCSI offloads do not work properly on any Broadcom adapter.
Are they really testing "thoroughly", as stated in the QuickSpecs, before selling?
We have put a ban on purchases of HP blades, Flex-10, and "malfunction" Broadcom network adapters until HP fixes the problems. Two years of waiting, and still low-quality drivers with problems.