Recently, I was planning, preparing, and executing a network performance test plan that included TCP, UDP, HTTP, and HTTPS throughput benchmarks. The intention of the test plan was to compare network throughput between two particular NICs:
- Intel X710
- QLogic FastLinQ QL41xxx
There was a reason for such an exercise (reproduction of specific NIC driver behavior) and I will probably write another blog post about it, but today I would like to raise a different topic. During the analysis of the test results, I observed very interesting HTTPS throughput numbers in comparison to HTTP throughput. These results were observed on both types of NICs, so they should not be a benefit of any specific NIC hardware or driver.
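The post does not name the benchmarking tool that was used, so purely as an illustration, below is a minimal sketch of how HTTP vs. HTTPS throughput could be measured. The server IP address, test file name, disabled certificate verification, and single-stream approach are all assumptions of this sketch; a real 10Gb benchmark would use a dedicated load generator with multiple parallel streams.

```python
#!/usr/bin/env python3
"""Illustrative sketch: measure HTTP vs. HTTPS download throughput.

Assumptions (not from the original post): the server IP and test file are
hypothetical, certificate verification is disabled for a self-signed lab
certificate, and a single stream is used.
"""
import ssl
import time
import urllib.request

URLS = {
    "HTTP":  "http://192.168.1.10/testfile.bin",    # hypothetical server/file
    "HTTPS": "https://192.168.1.10/testfile.bin",
}
CHUNK = 1 << 20  # read the response in 1 MiB chunks


def measure_gbps(url: str) -> float:
    """Download the file, discard the payload, and return throughput in Gbps."""
    ctx = None
    if url.startswith("https"):
        ctx = ssl.create_default_context()
        ctx.check_hostname = False          # lab certificate is self-signed
        ctx.verify_mode = ssl.CERT_NONE
    total = 0
    start = time.perf_counter()
    with urllib.request.urlopen(url, context=ctx) as resp:
        while True:
            data = resp.read(CHUNK)
            if not data:
                break
            total += len(data)
    elapsed = time.perf_counter() - start
    return (total * 8) / elapsed / 1e9


for proto, url in URLS.items():
    print(f"{proto}: {measure_gbps(url):.2f} Gbps")
```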
Here is the Test Lab Environment:
- 2x ESXi hosts
- Server Platform: HPE ProLiant DL560 Gen10
- CPU: Intel Cascade Lake based Xeon
- BIOS: U34 | Date (ISO-8601): 2020-04-08
- NIC1: Intel X710, driver i40en version: 1.9.5, firmware 10.51.5
- NIC2: QLogic QL41xxx, driver qedentv version: 3.11.16.0, firmware mfw 8.52.9.0 storm
- OS/Hypervisor: VMware ESXi 6.7.0 build-16075168 (6.7 U3)
- 1x Physical Switch
- 10Gb switch ports << intentional network bottleneck, because the customer is using 10Gb switch ports as well
Below are the observed HTTP and HTTPS results.
(HTTP throughput results)
(HTTPS throughput results)
OBSERVATION, EXPLANATION, AND CONCLUSION
We have observed:
- HTTP throughput between 5 and 6 Gbps
- HTTPS throughput between 8 and 9 Gbps
which means roughly 50% higher throughput for HTTPS than for HTTP (about 8.5 Gbps vs 5.5 Gbps on average). Normally, we would expect HTTP transfers to be faster than HTTPS, as HTTPS requires encryption, which should end up as some CPU overhead. How big the encryption overhead really is can be debated, but nobody would expect HTTPS to be significantly faster than HTTP, right? That's the reason I was asking myself,
why did HTTPS outperform HTTP in the HPE lab with the latest Intel CPUs?
Here is my process of troubleshooting the "issue", or better said, the root cause analysis.
- When I execute the same test in my home lab (old Intel CPU model, Intel Xeon E5-2620) with both VMs running on a single ESXi host (everything running in software, with no physical network in the path), I can achieve 22 Gbps for both protocols (HTTP and HTTPS). This means the software can generate enough traffic to saturate 10 Gb NICs. However, the results are almost identical, without a big difference between HTTP and HTTPS. It is worth saying that the CPU in my home lab does not support Intel QuickAssist (one simple way to check for QuickAssist hardware is sketched after this list).
- I did further research and found the following documents:
- Asynchronous Advantages for Web Servers (high-level explanation)
- Programming Intel® QuickAssist Technology Hardware Accelerators for Optimal Performance (low-level explanation)
- Data plane acceleration technologies: realizing the potential of network virtualization (high-level explanation)
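Since the home lab lacks QuickAssist while the HPE lab has it, one quick sanity check is to look for QuickAssist devices on the PCI bus. Below is a minimal sketch, assuming a Linux host or guest with lspci in PATH; QuickAssist is typically exposed as a PCIe device (provided by the chipset or an add-in adapter), so its presence is not visible from the CPU model string alone.

```python
#!/usr/bin/env python3
"""Illustrative sketch: check for Intel QuickAssist (QAT) PCIe devices.

Assumption (not from the original post): run on a Linux box with lspci
available; QAT usually shows up as a PCIe device whose description
mentions "QuickAssist" or "QAT".
"""
import subprocess


def qat_devices() -> list[str]:
    """Return lspci output lines that look like QuickAssist devices."""
    out = subprocess.run(["lspci"], capture_output=True, text=True, check=True)
    return [line for line in out.stdout.splitlines()
            if "quickassist" in line.lower() or "qat" in line.lower()]


devices = qat_devices()
if devices:
    print("QuickAssist device(s) found:")
    for dev in devices:
        print(" ", dev)
else:
    print("No QuickAssist device found on the PCI bus.")
```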
Conclusion
- In my home lab, I have old Intel CPU models (Intel Xeon CPU E5-2620 0 @ 2.00GHz); that's the reason the HTTP and HTTPS throughputs are identical.
- In the HPE test lab, there are the latest Intel CPU models; therefore, HTTPS processing can be offloaded, and client/server communication can leverage the asynchronous advantages for web servers provided by Intel® QuickAssist Technology, introduced with the Intel Xeon E5-2600 v3 product family.
- It is worth mentioning that it is not only about CPU hardware acceleration, but also about the software, which must be written in a form that can leverage hardware acceleration for a positive impact on performance. This is the case with OpenSSL 1.1.0 and NGINX 1.10, which boost HTTPS server efficiency.
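A rough way to check whether the local TLS stack actually benefits from such offload is to compare OpenSSL's built-in speed benchmark with and without the hardware engine. The sketch below is an assumption, not what was run in this test: it presumes OpenSSL 1.1.0 or later (which introduced asynchronous job support) and Intel's QAT_Engine registered under the engine id "qatengine".

```python
#!/usr/bin/env python3
"""Illustrative sketch: compare RSA signing speed with and without a QAT engine.

Assumptions (not from the original post): OpenSSL 1.1.0+ is installed,
Intel's QAT_Engine is built and registered as "qatengine", and the
openssl binary is in PATH.
"""
import subprocess


def openssl_speed(extra_args: list[str]) -> None:
    """Run `openssl speed rsa2048` with optional engine/async arguments."""
    cmd = ["openssl", "speed", "-elapsed", *extra_args, "rsa2048"]
    print("$", " ".join(cmd))
    subprocess.run(cmd, check=True)


# Software-only baseline.
openssl_speed([])
# Hardware offload via the QAT engine with asynchronous jobs.
openssl_speed(["-engine", "qatengine", "-async_jobs", "8"])
```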