I often talk about the importance of well-performing applications. My reasons for focusing on performance are vast, ranging from user experience to the ability to handle load to Search Engine Optimization (SEO), and everything in between. One of the most common issues I encounter when optimizing client sites is that the root of the performance problem resides in a third-party component. Sometimes it is a component the client selected, sometimes one that another consultant recommended, and other times an item that was part of the framework they were using, where they had no choice.
This post is dedicated to a "happy story" about performance optimization and working with a third-party vendor.
The Back Story
While working with a client to scale a DNN 7.x installation to handle 50,000 concurrent users, I was turning over every rock: looking at every aspect of the site and confirming every server setting I could. (We even upgraded the site from 7.0.0 to 7.2.3 to try to resolve the issues.) The initial performance of the application was abysmal. After a few third-party module swaps we were getting closer, supporting around 20% of the target load, when I noticed something funny.
Using a number of tracking tools, I was focused on finding a “quick hit” win on performance. It was at this time that I noticed a number of outbound requests from the website to 51Degrees. 51Degrees is the component bundled with DNN for device detection: it drives user-interface optimization, improves responsive web design (RWD) techniques, and enhances web site analysis. On further review, this system appeared to have a fairly large impact on the overall throughput we were able to achieve once the load built up. To confirm my theory, I manually disabled all aspects of the 51Degrees product and re-tested performance. We saw a radical improvement of more than 45% in throughput under heavy load from this single change. It should be noted that this impact was not directly visible or measurable at very low volumes (less than 5 requests/sec to the server).
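As a rough illustration of the kind of change involved, disabling 51Degrees in an ASP.NET site of that era meant removing its HTTP module registration from web.config. The module name below is illustrative; the exact name and assembly vary by DNN and 51Degrees version, so treat this as a hypothetical sketch rather than the precise steps I took.

```xml
<!-- web.config (sketch): removing the 51Degrees detection module
     so no device-detection or outbound data-sharing code runs.
     The module name is illustrative and varies by version. -->
<configuration>
  <system.webServer>
    <modules>
      <remove name="Detector" />
    </modules>
  </system.webServer>
</configuration>
```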
NOTE: Due to the extreme load, and other performance tweaks yet to be made on the server, the impacts identified above may not be re-creatable and are not necessarily representative of the widespread impact.
With the above information in hand, I decided to reach out to the folks at 51Degrees, provide them with what I had found, and ask for assistance. They thanked me for the insight, in particular the detail I provided about the precise environment configuration, and confirmed they had reports of a suspected high-load performance issue but had so far failed to recreate it in their own tests. A short time later they were back in contact, responding like no other third-party vendor had in the past: they not only wanted to improve, they had put a plan in place and wanted to work with me to sanity-check the fix prior to implementation and to performance-test it upon completion.
I welcomed the collaboration and looked forward to their solution. A few weeks went by and I was provided a test copy of the solution prior to its launch as a public beta for other testers.
Validation of Fix
The key piece of validation for the fix was to find a way to simulate a situation most representative of the average site. I wanted to avoid introducing any “high volume” stresses that could create anomalies, cloud the results, and bring other items to light.
As such, the validation goal was not to prove any specific throughput, but to prove that the 51Degrees modifications had minimal impact on the performance of a DNN installation.
As such I proposed and completed a three-step testing process:
- Test a clean installation of DNN without any 51Degrees systems in place
- Test the same installation but turn on basic 51Degrees detection
- Test the same installation but with 51Degrees detection & Share Usage enabled
This three-pronged approach establishes a clean baseline; with all other elements remaining stable, each additional test should show similar throughput. If any degradation in performance is encountered, it is easy to track it back to the component introduced in that step.
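The comparison logic behind that approach can be sketched as a small script: given the total requests completed in each run, flag any configuration whose throughput falls more than some tolerance below the baseline. The run names, request counts, and 5% tolerance below are all hypothetical, not figures from the actual tests.

```python
# Sketch: flag test runs whose throughput dropped noticeably below the baseline.
# Run names, counts, and the 5% tolerance are illustrative assumptions.

def compare_to_baseline(results, baseline_name, tolerance=0.05):
    """results maps run name -> total HTTP requests completed in a fixed
    test window. Returns runs whose throughput fell more than `tolerance`
    (relative) below the baseline run."""
    baseline = results[baseline_name]
    degraded = {}
    for name, total in results.items():
        if name == baseline_name:
            continue
        change = (total - baseline) / baseline  # relative difference vs baseline
        if change < -tolerance:
            degraded[name] = change
    return degraded

# Hypothetical request counts for the three configurations:
runs = {
    "clean DNN": 100_000,
    "basic detection": 99_200,
    "detection + share usage": 92_000,
}
print(compare_to_baseline(runs, "clean DNN"))
```

With these made-up numbers, only the third run is flagged, since its throughput is 8% below the baseline while the second run is within the noise tolerance.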
As part of this test we used a small Azure VM and a load test of 25 virtual users continuously hitting the site for a period of 20 minutes. We wanted to simulate a good load on the site without overwhelming the system, and to be sure that the test sample size was more than enough to trigger the full range of 51Degrees data communication.
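A load test of this shape can be sketched as a simple driver: N virtual-user threads each issuing requests in a loop until a deadline passes. This is not the commercial load-testing tooling we actually used, just a minimal illustration; the `request_fn` callable is injected (in a real run it would be an HTTP GET against the test site, e.g. via `urllib.request.urlopen`) so the harness itself stays self-contained.

```python
# Minimal sketch of an N-virtual-user load driver, assuming an injected
# request_fn. Not the actual tooling used in the tests described above.

import threading
import time

def run_load(request_fn, users=25, duration_s=20 * 60):
    """Spawn `users` threads that call request_fn() in a loop until
    `duration_s` seconds elapse; return the total completed requests."""
    counts = [0] * users  # one slot per worker, so no lock is needed
    deadline = time.monotonic() + duration_s

    def worker(i):
        while time.monotonic() < deadline:
            request_fn()
            counts[i] += 1

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(counts)
```

Total requests divided by the test duration gives the throughput figure compared across the three configurations.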
Well, the results of the testing were quite impressive. The table below shows the key metric regarding system throughput and overall performance.
| Configuration | Total HTTP Requests |
| --- | --- |
| Baseline (no 51Degrees) | |
| Basic 51Degrees Features | |
| All 51Degrees Features | |
Looking at the server, we were able to validate that all 51Degrees components were operating and that all of the outbound requests were successful. The test runs were a success, with traffic numbers the same as, or slightly better than, the baseline.
With any testing you will see slight variations in average response time and/or total requests due to environmental factors that cannot be controlled; this explains the small deviations noted above.
I had a great experience working with the team at 51Degrees, and thanks to their dedication, DNN installations will see improved mobile web performance under heavy load once the update is applied to the DNN core in the next release. Once I have more information on the release of this change, I will update this blog post.
It should be noted that the same 51Degrees code is used in other .NET-based CMSs and platforms, such as Umbraco, Kentico, Ingeniux, nopCommerce, Sitecore, Sitefinity, Orchard, Ektron, and EPiServer, which will also benefit from the performance enhancements validated by these tests.
This post has been cross-posted to my company blog.