NVIDIA GPU Tech Conference – Login GFX and the unimaginable progress in computing
Imagine a future where everyone is walking around with opaque masks over their eyes, dancing around the floor slicing their hands through the air. You’d think that was pretty crazy, right? Well… that was NVIDIA’s GTC. What an amazing place to be for a geek like me! Seriously though, I have never seen such a concentration of the highest tech in one place. The topics included machine learning, artificial intelligence, unmanned and self-driving vehicles, molecular science, space, and of course virtualization. Every day at this event was visually stunning.
It was very important for Login VSI to be at this event, since it has so much to do with what is top of mind for hosted desktop and application users, administrators, and architects… a great experience, and that means a responsive and rich graphics experience. The BIG question at this event for those interested in NVIDIA GRID is “How many users can I, or should I, get per vGPU-enabled host?” For me, this falls into two classes… benchmarking and sizing.
First, let me convey the general disdain for benchmarking I heard around GTC. Traditional benchmarks are a bad fit for GRID-enabled environments because they were designed for physical workstations (typically gaming) and are meant to push the processors to their limits for a single user. So what happens when you benchmark VDI on GRID this way? You’ll probably spend 2x-4x more money on your solution, because these unrealistic usage patterns saturate a hosted environment 2x-4x faster than real work does. Just as important: games and video are not what your enterprise users should be doing at work anyway.
GPUs are in many ways a shared resource when it comes to virtualization, and instead of fencing off a fixed portion of the processors for each user, NVIDIA has done something really cool: it time-slices them. This means each user benefits from every millisecond other users leave the GPU idle. That’s just human nature… compared to computers, humans wait around a lot, and NVIDIA takes advantage of those wait times to give the cycles to other users who need them.
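To make the benefit concrete, here is a minimal sketch (not NVIDIA’s actual scheduler) that contrasts static partitioning with time-slicing for users who, like real people, only have work pending part of the time. All names and numbers are illustrative assumptions.

```python
# Toy model: compare static partitioning vs. time-slicing of one GPU
# among N users. Each user is "active" only some of the time
# (think-time between actions). This is an illustration, not a model
# of NVIDIA's real scheduling policies.

def simulate(active_patterns, time_slicing):
    """Count useful GPU slices delivered over the simulated ticks.

    active_patterns: one list of booleans per user (True = has work
    pending at that tick). With static partitioning, each user owns a
    fixed 1/N of every tick, so idle users' shares are wasted. With
    time-slicing, each tick goes entirely to whichever users have work.
    """
    n_users = len(active_patterns)
    ticks = len(active_patterns[0])
    useful = 0.0
    for t in range(ticks):
        active = [pattern[t] for pattern in active_patterns]
        if time_slicing:
            # The whole tick is shared among active users only.
            useful += 1.0 if any(active) else 0.0
        else:
            # Each user gets a fixed 1/N share, used or not.
            useful += sum(active) / n_users
    return useful

# Four users, each busy 25% of the time, staggered so that someone
# always has work pending.
patterns = [[(t % 4) == u for t in range(100)] for u in range(4)]
print(simulate(patterns, time_slicing=False))  # 25.0 useful slices
print(simulate(patterns, time_slicing=True))   # 100.0 useful slices
```

In this (deliberately favorable) pattern, time-slicing delivers four times the useful GPU work from the same hardware, which is exactly why per-user, processor-saturating benchmarks misprice a shared environment.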
So where does that leave us? Great question! Login VSI was at GTC to introduce what we call Login GFX. We are grateful to the famous Thomas Poppelgaard for his contribution to the overall discussion and for co-hosting the offsite with our CTO, Jeroen van de Kamp. This is the gist: take our virtual users, give them real, graphically intensive line-of-business applications (e.g. AutoCAD, Revit, Catia, even MS Office, etc…), and let them put a much more “human” and realistic load on the GRID architecture. There are two ways to go about this. The first is to push the resources to their limits and objectively compare the technologies you have narrowed down to the potential candidates for your solution, in other words, benchmark. The second is to perform sizing to determine the configuration that makes the most efficient use of your solution while producing the best end-user experience.
Login GFX workloads in action
As we introduced Login GFX to our closest advisors and vendors, such as NVIDIA, VMware, Citrix, and many more, one thing was clear… opening up a discussion of vGPU-enabled VDI performance and benchmarking will be like opening Pandora’s box, to borrow the words others at GTC kept using to describe this topic. That is exciting, because it means there is an entirely new niche in the VDI market that is desperate for insight and guidance, which is exactly what Login GFX was designed to deliver.
Thanks to our community of experts for shaping this conversation. Stay tuned for more discussions and some GREAT stuff on the way.