With G7 blades HP switched from Broadcom to Emulex, and no problems were detected with the Emulex chips.
Here are the measurements with the BL465G7 for those who followed the test schemas in the previous posts.
With processors getting faster, the SW vs HW performance difference is not so large anymore. An additional benefit is lower processor utilisation when iSCSI HW offload is used. Offload now only matters at 10G speeds.
In short - G7 blade servers and iSCSI are OK. The problematic BL495G5 with Broadcom-based 10G chips was partially improved by driver updates over these years but never fully fixed; small-block writes still do not work properly. But it is time to retire such old hardware anyway, as running it is a waste of electricity nowadays.
With HP ProLiant G8 servers it is a new level of choice. You can select from many HP network adapters based on Broadcom, Emulex, QLogic and other chips; the chip and wattage are listed at the end of the QuickSpecs sheet. If you have no particular requirements, just pick an inexpensive model, for example the 554, which will do HW iSCSI if needed without extra licensing. A higher number in the name does not always mean a better adapter, just a different chip manufacturer.
With the G8 series of ProLiants, HP offers several form factors for network adapters:
- FLR - a special slot on rackmount G8 servers,
- FLB - a special slot on blade G8 servers,
- SFP+ - ordinary PCI-Express cards.
FLR and FLB mean the following: Flexible LAN on Motherboard (FlexLOM), with the last letter standing for Rack or Blade. FLR - FlexLOM Rack, FLB - FlexLOM Blade. The last letter makes it easy to tell whether a particular model is for rackmount or blade servers.
It is no longer possible to test them all :) . I will share here only what I have tested and am using without problems:
- Emulex 554FLR, 554FLB - work properly with HW iSCSI under Windows 2008R2, 2012 and 2012R2. Wire speed at 10 Gbps for HW iSCSI. The systems under my responsibility are currently standardised on Emulex for iSCSI applications. I do not know whether QLogic and the new Broadcom-based adapters work - not tested, and no reason to invest in testing as their price is the same.
- Intel 560FLR, 560FLB or 560SFP+ - these Intel network adapters do not have iSCSI HW offload, but they have the highest UDP packet send-out performance with the lowest processor utilisation. Under RedHat 6.4 I can reach full wire speed (10G) with UDP packets with no processor core exceeding 60%. 60 Gbps can be served out of one server if you have a good VOD application. With the Emulex 554, for comparison, the processor core serving the adapter reaches 100% at about 7 Gbps of UDP-only traffic, and wire speed cannot be reached. This was tested only for outgoing UDP traffic, so if you have video-on-demand servers sending out a lot of outgoing UDP video packets, the 560 is a better choice than the 554.
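For illustration, here is a minimal Python sketch of the kind of UDP send-out test described above. The host, port, payload size and duration are my assumptions for a quick local check; the real measurements were done with dedicated traffic tools at full wire speed:

```python
import socket
import time

def udp_send_rate(host="127.0.0.1", port=5005, payload_size=1316, duration=1.0):
    """Blast UDP datagrams at host:port for `duration` seconds and
    return the achieved send rate in Mbit/s.

    payload_size=1316 is an assumption: 7 x 188-byte MPEG-TS packets,
    a typical payload for VOD/IPTV streaming traffic."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    payload = b"\x00" * payload_size
    sent = 0
    start = time.time()
    while time.time() - start < duration:
        sock.sendto(payload, (host, port))
        sent += 1
    elapsed = time.time() - start
    sock.close()
    # bytes -> bits -> megabits per second
    return sent * payload_size * 8 / elapsed / 1e6

if __name__ == "__main__":
    print("UDP send rate: %.1f Mbit/s" % udp_send_rate(duration=0.5))
```

A single-process loop like this will saturate on the sending core long before 10G; watching per-core utilisation (e.g. with `top` pressing 1) while such senders run on each core is how the 60% vs 100% difference between the adapters shows up.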
The conclusion is that a particular application will perform best on a particular adapter. Performance, features, processor utilisation and cost vary.
This "broken iSCSI" saga is now over; iSCSI is stable and safe to use in production on HP ProLiant G7 or G8 servers. I will delete this blog after some time so people do not waste time reading outdated information. More useful would be to open a new blog, for example about building a terabit-bandwidth, petabyte-sized video delivery platform on ProLiants, with all the strange things happening on SmartArrays - for example when two disks fail under RAID6 and replacement disks are inserted in the middle of a rebuild to spare, etc. A lot of illogical things happen, but the cases are with HP support and seem to be handled properly.
Thanks to all, including the HP people who managed to eliminate these problems from the G7 and later ProLiant server generations.