Comparison with 400 Clients – UD 5.0 Benchmarks – Part 4
This is a continuation of our series comparing the performance of Universal Driver 5.0 and 4.7. In what is likely to be the last article in the series, we resume testing how the number of simultaneous clients affects the rate at which they are able to read records from the Linear Hash service. Please note, however, that these tests were performed against an earlier build of UD 5.0. Since then, Revelation Software has released two more patches, the latest of which addresses some rather significant performance issues identified in the wild. Because our tests are synthetic, we do not expect the results below to vary much, so we opted not to retest against the latest release.
For an explanation of our test environment configuration, please see our previous article.
5×50 LAN Benchmark – 250 Clients
The graph shows that the cumulative records read per second by the clients on each server was fairly consistent between the two Universal Drivers. The 4.7 average across the three test runs was 33,477 recs/sec; the 5.0 average was 30,940.
The results are almost identical to those of the 50-client test from the previous blog article, which is impressive given that five times as many clients are involved.
Let’s see what happens when the number of clients is once again increased.
5×80 LAN Benchmark – 400 Clients
In this last and final test of the series, we see the same pattern carry through from the 50- and 250-client tests: the 4.7 Universal Driver performs a few hundred recs/sec better on average than UD 5.0. Remarkably, both services run solidly, with no decrease in throughput despite the increase in clients.
To wrap up these last tests, we present screenshots of the UD Manager showing the number of connected workstations.
| Clients | Test Run | Recs Read | Time (sec) | Recs/Sec | 3-Run Avg |
|---------|--------------|-----------|------------|----------|-----------|
| 250 | UD 4.7 Run 1 | 9,760,015 | 293 | 33,311 | 33,477 |
| 250 | UD 4.7 Run 2 | 9,876,802 | 294 | 33,595 | 33,477 |
| 250 | UD 4.7 Run 3 | 9,856,940 | 294 | 33,527 | 33,477 |
| 250 | UD 5.0 Run 1 | 9,081,695 | 293 | 30,996 | 30,940 |
| 250 | UD 5.0 Run 2 | 9,077,936 | 294 | 30,877 | 30,940 |
| 250 | UD 5.0 Run 3 | 9,067,876 | 293 | 30,948 | 30,940 |
| 400 | UD 4.7 Run 1 | 9,578,567 | 295 | 32,470 | 32,329 |
| 400 | UD 4.7 Run 2 | 9,478,445 | 295 | 32,130 | 32,329 |
| 400 | UD 4.7 Run 3 | 9,522,045 | 294 | 32,388 | 32,329 |
| 400 | UD 5.0 Run 1 | 8,878,239 | 294 | 30,198 | 30,205 |
| 400 | UD 5.0 Run 2 | 8,859,730 | 294 | 30,135 | 30,205 |
| 400 | UD 5.0 Run 3 | 8,902,956 | 294 | 30,282 | 30,205 |
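As a sanity check, the Recs/Sec and 3-Run Avg columns can be recomputed from the raw counts, since throughput is simply records read divided by elapsed seconds. A minimal Python sketch using the 250-client UD 4.7 figures from the table above (the run data is copied from the table; rounding behavior is our assumption, not part of the original tooling):

```python
# Recompute throughput from the raw 250-client UD 4.7 runs.
# (records read, elapsed seconds) taken from the table above.
runs = [
    ("UD 4.7 Run 1", 9_760_015, 293),
    ("UD 4.7 Run 2", 9_876_802, 294),
    ("UD 4.7 Run 3", 9_856_940, 294),
]

# Per-run throughput: records / seconds
rates = [recs / secs for _, recs, secs in runs]
for (name, _, _), rate in zip(runs, rates):
    print(f"{name}: {rate:,.0f} recs/sec")

# Three-run average across the per-run rates
avg = sum(rates) / len(rates)
print(f"3-run average: {avg:,.0f} recs/sec")
```

Rounding the per-run rates and the average reproduces the published 33,311 / 33,595 / 33,527 recs/sec and the 33,477 average; the same arithmetic applies to the other rows.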
In every test since the previous article's 50-client test, we see the same upper throughput limit on reads. Adding more clients did not increase read throughput, yet both the 250- and 400-client configurations ran at a consistent rate despite the added load.
We also see a clear separation between the 4.7 and 5.0 results, with 4.7 consistently running marginally faster in our synthetic tests. Perhaps the new features in UD 5 add a small amount of overhead to each record read that becomes apparent under high load. Nevertheless, it seems clear that the combination of great new features and a negligible performance difference makes UD 5 a solid performer and well worth consideration.
We hope you enjoyed this blog series. It was a labor of love, and it required a lot of retooling to gather the more than 1.2 million data points behind these articles and present them in a meaningful way. We wish to thank Bob Carten for giving this series a shout-out in his Performance Tuning presentation at the 2016 Revelation Users' Conference. There are many other test variations yet to perform, so if you enjoyed this series, please leave your comments and questions; they will help shape future benchmark articles.