Building your testing practice – A practical guide
In five years of working for Login VSI, I have met countless customers starting on their journey to improving their end-user experience.
This normally comes after the organization has experienced a production stoppage (loss of revenue) or received new compliance requirements, and the status quo is no longer possible.
Often, IT professionals are being asked to “test” their ability to support thousands of users to improve their end-user experience. However, these IT pros are typically not given adequate instruction beyond that generic request. Without a plan in place, this can be a daunting undertaking, and the results may not be as valuable.
From my experience, I think of the process as analogous to building a house. As with a house, each component must be thoroughly verified – otherwise, injury (or failure) can occur. You should apply the same level of rigor to mission-critical systems.
Your boards, cement, beams, and even the nails – Individual VM testing
If you’ve ever deployed an application in an enterprise organization, you are usually provided with many instructions. You will get a list of leading practices from every key stakeholder involved in the deployment. These will include:
- How fast the storage must be, what the block sizes are, where the data should be located relative to the users, and more.
- How many virtual CPUs and how much RAM are necessary to support an individual user or a multi-session machine.
- How your antivirus should be configured.
- How your policies should be set.
- What your network routing must be configured for.
These combine to form a “base” image or configuration necessary to support this application or desktop virtualization profile.
Due to each organization’s unique complexities and workflows, these configurations are often very specific. There is no way for a vendor or software manufacturer to supply recommendations specific to you without setting up a nearly identical deployment to test – there are too many variables involved.
How do we solve this problem? By taking leading practices and watching how the system responds when individuals use it. In less mature testing organizations, this is done by having REAL users log in and perform their typical daily tasks. The issue is that humans are unpredictable and prone to (unintentional) error. An implementation with such pitfalls does not improve one’s likelihood of success – but we can do better.
Our more mature organizations have a well-developed practice of testing changes by combining synthetic users’ behavior and stack monitoring to achieve repeatable results while reducing error margin.
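To make the idea concrete, here is a minimal sketch (in Python, with stand-in `sleep` calls instead of real application actions) of what a synthetic user contributes: the same scripted workflow, timed step by step and repeated identically, so run-to-run variance can actually be measured rather than guessed at.

```python
import statistics
import time

def timed_step(action):
    """Run one scripted user action and return its latency in seconds."""
    start = time.perf_counter()
    action()
    return time.perf_counter() - start

def synthetic_session(workflow):
    """Execute a fixed workflow (list of (name, action) pairs) the same
    way every run, returning per-step latencies."""
    return {name: timed_step(action) for name, action in workflow}

# Stand-in actions; a real harness would launch apps, type, click, etc.
workflow = [
    ("open_app", lambda: time.sleep(0.01)),
    ("open_document", lambda: time.sleep(0.02)),
]

# Repeat the identical session and summarize the spread per step.
runs = [synthetic_session(workflow) for _ in range(5)]
for step in ("open_app", "open_document"):
    samples = [run[step] for run in runs]
    print(f"{step}: mean={statistics.mean(samples):.3f}s "
          f"stdev={statistics.stdev(samples):.4f}s")
```

A human tester would vary the order, pace, and even the steps themselves from run to run; the scripted session does not, which is what makes before/after comparisons meaningful.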
Frame this level of uncertainty at large-enterprise scale, and the problem is magnified significantly. Add the consumption model, where you pay for every minute of the day, and you may experience the consequences of this inconsistency in dollars ($$$).
The first step in our home building is to test an individual VM’s density and user experience. This is also known as individual “node” testing: a single piece of hardware, or multiple individual VMs on a single piece of hardware.
Important Note – It is not uncommon for organizations to want to include ALL of their applications and workflows at this stage. My strong suggestion is to start with bulk applications, crown-jewel applications, and the workflows with the highest return on investment.
Testing YOUR functions – this is your kitchen, bathroom, the roof, etc. – Application Acceptance Testing phase
This is based upon your customization – Ensure the applications and workflows function after the changes to your base image.
Any modification to your delivery or base images can have unintended consequences. Updating one application may cause a completely unrelated application to fail. This is often compounded by an enterprise’s need to support hundreds, if not thousands, of apps. Of course, you would not want to accept the liability of deploying something untested. In the age of automation, there is little justification for not moving towards this goal. Most software solutions support a RESTful API – an open standard for exchanging actions and information.
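As a sketch of what driving such an API might look like, here is a minimal Python example. The endpoint, payload fields, and token are hypothetical – every vendor’s REST API differs – but the pattern is the point: a scripted POST that kicks off acceptance tests whenever the base image changes, instead of a manual checklist.

```python
import json
import urllib.request

# Hypothetical endpoint – substitute your vendor's actual REST API
# (paths, field names, and authentication will differ per product).
BASE_URL = "https://testing.example.internal/api/v1"

def build_test_run_request(image_name, app_names, token):
    """Prepare (but do not send) a POST that would trigger an automated
    application-acceptance test after a base-image change."""
    payload = {
        "image": image_name,
        "applications": app_names,  # workflows to re-validate
        "trigger": "image-update",
    }
    return urllib.request.Request(
        f"{BASE_URL}/test-runs",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
        method="POST",
    )

req = build_test_run_request("win11-base-v42", ["Bloomberg", "SAP"], "demo-token")
print(req.get_method(), req.full_url)
```

In practice this call would live in your image-build pipeline, so a new base image cannot reach production without the acceptance suite having run against it.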
At this stage of testing, you are expanding your application-specific workflows. For example, if you are a bank, you want a set of behaviors inside of Bloomberg. If you are in logistics, maybe this is within SAP.
Testing each of the rooms in the house simultaneously – Live in it – Multi-node testing – maximum capacity of individual hardware chassis
This is the multi-chassis test. Now you are moving multiple VMs around across potentially different sets of hardware.
Can we get everyone through the door? Do we have a lot of furniture (customization)? Our activities are specific to us. Here we test the broker (the front door). Is this an apartment building with 2,500 people living in it? Can everyone get in? This is an enormous task. Consider the following:
- Licensing – Operating system and product licensing
- Necessary hardware resources – Testbed and testing infrastructure components
- Segmentation from production
- Databases – Redundant databases to support testing volume
Not to mention the third-party tie-ins – Citrix / RDSH / Profile Management Solutions / Application Delivery Mechanisms / VMware Horizon / Workspace Solutions:
- Brokering & Balancing
- Delivery Controllers
- Multifactor authentication
- Different geography
But there is undeniable value in testing at capacity. Once everything has been tested, we know that each component is structurally sound, and we’ve eliminated them as points of failure in future tests. We’ve load tested to make sure the roof can withstand any weather without compromising the experience inside the house – and the same holds through the brokers.
I’ve seen several examples in the last five years where configuration deficiencies did not surface until AFTER the full user load had been placed on the systems. These include network routing issues, slow storage for user profile activities, and antivirus competing for resources due to bugs in exclusion settings.
Don’t feel bad if you don’t quite make it to this stage. Not many of our customers have the resources to accommodate it. However, it is important to strive for this outcome.
If you consider your testing objective in its pieces rather than its outcome, it’s much more reasonable. If you would like to discuss your testing methodologies currently implemented or take Login Enterprise for a spin, drop us a line at firstname.lastname@example.org or reach me personally at email@example.com.
One last critical note:
Controlling the application change cycle during testing is essential. I cannot stress enough that being able to isolate the root cause is critical for a mature testing practice. Without knowing ALL of the changes that go into an A/B testing methodology, the consequences can’t be adequately quantified.
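One simple discipline that supports this: snapshot the configuration before and after each change, and diff the snapshots so the candidate causes of a regression are enumerable rather than a guessing game. A minimal Python sketch (the setting names here are purely illustrative):

```python
def config_diff(before, after):
    """Return added, removed, and changed settings between two flat
    configuration snapshots (dicts of setting -> value)."""
    added = {k: after[k] for k in after.keys() - before.keys()}
    removed = {k: before[k] for k in before.keys() - after.keys()}
    changed = {k: (before[k], after[k])
               for k in before.keys() & after.keys()
               if before[k] != after[k]}
    return added, removed, changed

# Illustrative A/B snapshots: the baseline image vs. the candidate.
baseline = {"av_exclusions": "on", "vcpus": 4, "profile_store": "fast-ssd"}
candidate = {"av_exclusions": "off", "vcpus": 4, "profile_store": "fast-ssd",
             "gpu": "enabled"}

added, removed, changed = config_diff(baseline, candidate)
print("added:", added)      # settings present only in the B side
print("changed:", changed)  # the prime suspects for any regression
```

If the B run regresses, the diff above is the complete list of suspects; without that record, every setting in the environment is a suspect, and the root cause can’t be isolated.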