Weblog of Mark Vaughn, an IT professional and vExpert specializing in Enterprise Architecture, virtualization, web architecture and general technology evangelism

Tag: Virtualization (Page 1 of 3)

VDI – Adding new life to aging desktops

Around 5 years of age, desktop computers may begin to lack the processing power required to meet the ever-growing demands of operating systems and applications. By that point, many people are looking to swap out desktops. With Windows XP finally reaching its end-of-support date, a lot of organizations are finding themselves with a large number of desktops that cannot be upgraded to Windows 7 due to the age of the hardware.

This was the situation for one of my first VDI projects. Almost 4 years ago, a school district was facing a required hardware refresh to move to Windows 7. The new OS was a requirement for some of their curriculum, and they had thousands of desktops that were over 7 years old and could not be upgraded. However, using VMware View as a VDI solution, they were able to repurpose those old PCs as VDI endpoints. Their preferred endpoint is a Teradici-based zero client, as zero clients are easier to manage, but the plan was to use them only for new purchases and to replace the existing desktops through attrition.

They were early adopters in education, and had a very savvy IT group that was up for the challenge. Today, I received a report from this customer showing off how VDI was allowing them to get outrageous lifespans out of their old PCs. When I saw these numbers, I just had to share them. These are the numbers that make VDI work, and they are why I love working with VDI technologies.

With over 5,000 desktops in the district, 57% are between 6 and 9 years old. Many of these do not meet the minimum hardware requirements for Windows 7, yet they deliver it to users on a daily basis. What is even more impressive is that 32% of their desktops are over 10 years old and still serving up Windows 7 desktops every day.

Wow, talk about return on investment! Our initial ROI analysis assumed that almost all of these desktops would have already been upgraded to zero clients by now. Every year they are able to delay that, they leave money in the budget for other projects.
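To put that deferral in perspective, here is a quick back-of-the-envelope sketch in Python. Every figure in it is a hypothetical placeholder (the endpoint count and the per-seat zero client price are my own illustrative assumptions, not the district's actual numbers), but it shows how quickly deferred replacements add up.

```python
# Rough deferral math (every figure here is a hypothetical placeholder):
reused_pcs = 4000        # hypothetical count of old PCs still in service as VDI endpoints
zero_client_cost = 300   # hypothetical replacement cost per seat, in dollars

deferred_capital = reused_pcs * zero_client_cost
print(f"Replacement capital still deferred: ${deferred_capital:,}")
# At these assumed numbers, roughly $1.2M stays available for other projects
# for as long as the old PCs keep working as endpoints.
```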

A strategy that was meant to delay desktop replacements for a year, to offset the costs of the initial VDI environment, has paid off better than we could have imagined. We have a happy customer, and their technology department has been able to improve the educational tools within their district while lowering costs. #winning

VDC-OS: Is it finally here?

VMware first announced its concept of the virtual data center operating system (VDC-OS) at VMworld 2008. Paul Maritz, VMware Inc.’s CEO at the time, took the stage and began to share his vision of a software-defined data center. Maritz is no longer in the driver’s seat, but this destination of a virtual data center operating system and the software-defined data center is finally coming into view.

At VMworld 2013, the concept of a VDC-OS took two big steps forward. Learn more in my latest TechTarget article, “Not lost, just recalculating: VMware’s route to a VDC-OS has been long,” then come back here and leave a comment.

The Changing Hypervisor Role

Not all hypervisors have reached a level of parity in features, functionality and performance (regardless of what some marketing campaigns might say). However, the virtualization heavyweights are beginning to see real competition, and they realize that the gaps between the leading hypervisors are closing quickly. Given these narrowing feature gaps, how will we compare hypervisors in the future?

As the hypervisor battle evens out, I foresee a kind of stalemate. Vendors will struggle to differentiate their products from the competition, and the short attention span of IT pros will quickly move on to areas that provide greater value.

What can this mean for your organization and your long-term IT strategies? For more on this topic, read my TechTarget article “As feature gaps narrow, how will we compare hypervisors in the future?“.

Storage Landscape is Changing

Virtualization transformed data centers and restructured the IT hardware market. In this time of change, startups seized the opportunity to carve out a niche for products like virtualization-specific storage. But are these newcomers like Nutanix and Fusion-io here to stay or will they struggle to compete as established companies catch up with storage innovations of their own?

For a long time, it appeared storage vendors were growing complacent. A few interesting features would pop up from time to time, and performance was steadily improving, but there were few exciting breakthroughs. Users weren’t demanding new features, and vendors weren’t making it a priority to deliver storage innovations. Virtualization changed that tired routine.

In many ways, it is now the storage vendors that are knocking down technology walls and enabling new technologies to flourish. I discuss this topic in more detail in my TechTarget article “Virtualization storage innovations challenge market leaders.” Please give it a read and come back here to leave any comments.

Virtualization Paying Off?

Several years ago, server virtualization rolled into the data center with all of the outrageous promises and unbelievable claims of a sideshow barker. Experts and vendors claimed it was going to improve server efficiency, shrink your infrastructure, slash bloated power bills, make cumbersome administrative tasks disappear and cure the common cold. The sales pitch was smooth and we all bought in, but has virtualization fulfilled the promises?

Did your infrastructure shrink when you implemented virtualization?

I want to hear more from you about this. Take a minute to read my TechTarget article “Virtualization improved server efficiency, but did it meet the hype?“, then come back here and contribute to the conversation.

Beware: Storage Sizing Ahead

Managing data growth continues to be a struggle, and organizations are beginning to outgrow that first storage array they bought a few years ago. As they do, some are in for a big surprise. For years, the focus has been on adding storage capacity; in fact, the development of a storage strategy is still referred to as a “sizing” exercise. Today, however, the challenge is accessing that huge amount of data in an acceptable amount of time, and the size or capacity of a drive has little or no correlation to its performance. An IT administrator who focuses too narrowly on adding storage capacity can end up with an array that can hold all the data but cannot support the IOPS demanded by applications.
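To make the capacity-versus-IOPS trade-off concrete, here is a rough back-of-the-envelope sketch in Python. The drive counts, capacities and per-spindle IOPS figures are illustrative rule-of-thumb assumptions, not vendor specifications, but they show how two arrays with roughly the same raw capacity can differ wildly in the IOPS they can deliver.

```python
# Back-of-the-envelope comparison: two hypothetical ways to reach ~40 TB raw,
# using rule-of-thumb per-spindle IOPS figures (illustrative assumptions only).

def array_profile(drive_count, tb_per_drive, iops_per_drive):
    """Return (raw capacity in TB, aggregate random IOPS) for a shelf of identical drives."""
    return drive_count * tb_per_drive, drive_count * iops_per_drive

# Option A: a few large 7.2K RPM NL-SAS drives (~75 IOPS each, rule of thumb)
capacity_a, iops_a = array_profile(drive_count=10, tb_per_drive=4, iops_per_drive=75)

# Option B: many small 15K RPM SAS drives (~175 IOPS each, rule of thumb)
capacity_b, iops_b = array_profile(drive_count=68, tb_per_drive=0.6, iops_per_drive=175)

print(f"Option A: {capacity_a:.0f} TB raw, ~{iops_a} IOPS")
print(f"Option B: {capacity_b:.1f} TB raw, ~{iops_b} IOPS")

# Both options hold roughly 40 TB, but Option A tops out around 750 IOPS while
# Option B delivers roughly 11,900 -- sizing on capacity alone hides that gap.
```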

If you are considering a storage upgrade, it is critical that you understand how this can impact your organization. I cover this in more detail in my TechTarget article “Adding storage capacity can actually hurt IOPS.” Please take a minute to read the article, then come back here and leave a comment to contribute to the conversation.

Software-Defined Data Center

When you think about a data center, you likely imagine a large building with diesel generators. Inside, you probably picture racks full of servers and networking equipment, raised floors, cable trays and well-positioned ventilation. After all the hours that I have spent in data centers, thoughts like this actually make me feel cold.

So, what truly defines the physical data center? Is it really defined by what we physically see when we step inside? Thousands of data centers may use the same network switches, but any two are rarely the same.

Purpose-built hardware provides a way to execute code and relay data; what makes it unique is the software or the device configuration. Every data center has a core router, but the configuration of that device is what makes the router unique. The physical device is simply a conduit for executing a function, as defined by the software. Although the physical aspects of a data center have not changed much, the number of end-user systems that directly touch those components has decreased dramatically.

Virtualization changed the way we define the data center. The data center is no longer just a room full of hardware, but a room full of metadata about applications and services. To read more on this shift in data center thinking, please read my TechTarget article “How the software-defined data center changes the virtualization game,” then come back here and leave a comment to tell me what you think.

Virtualizing Hadoop

Large companies have long used big data analytics to comb through vast amounts of data in a short amount of time. Companies with deep pockets and ever-expanding data sets have built large server clusters dedicated to mining that data. Now, Hadoop clusters can help small and medium-sized businesses that lack big budgets benefit from big data as well.

Have you ever wondered how search engines guess what you want to type in the search field before you finish typing it, or offer suggestions of related queries? Maybe you’ve noticed that Facebook or LinkedIn recommend people you may know based on your connections? These are two examples of Hadoop clusters processing large amounts of data in fractions of a second.

Hadoop is an open source Apache Software Foundation project built to run on commodity hardware. The most expensive part of building a Hadoop cluster is the compute and storage resources, and server virtualization can help reduce that cost, bringing big data analytics to budget-constrained organizations.
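To give a feel for how Hadoop spreads simple work across commodity nodes, here is a minimal word-count mapper and reducer written in the Hadoop Streaming style. The file names and the local pipeline shown in the comments are illustrative assumptions for testing outside a cluster, not an actual Hadoop command line.

```python
#!/usr/bin/env python3
# Minimal word count in the Hadoop Streaming style: Hadoop feeds input splits
# to the mapper on many commodity nodes, sorts and shuffles the mapper output
# by key, then streams each sorted key group through the reducer.
#
# Simulate the pipeline locally (illustrative only, not a Hadoop command):
#   cat input.txt | python3 wordcount.py map | sort | python3 wordcount.py reduce
import sys


def mapper():
    # Emit "word<TAB>1" for every word read from stdin.
    for line in sys.stdin:
        for word in line.split():
            print(f"{word.lower()}\t1")


def reducer():
    # Input arrives sorted by key, so all counts for a word are contiguous.
    current_word, count = None, 0
    for line in sys.stdin:
        word, value = line.rstrip("\n").split("\t")
        if word != current_word:
            if current_word is not None:
                print(f"{current_word}\t{count}")
            current_word, count = word, 0
        count += int(value)
    if current_word is not None:
        print(f"{current_word}\t{count}")


if __name__ == "__main__":
    mapper() if len(sys.argv) > 1 and sys.argv[1] == "map" else reducer()
```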

“Big Data” and “Analytics” are more than just buzzwords; they are business gold mines waiting to be discovered. To read more on this topic, visit TechTarget and read my article “Virtualization makes big data analytics possible for SMBs.” As always, please come back here and leave any comments.

Server To Go

Desktop hypervisors, such as VMware Workstation and Parallels Desktop, open up a world of management and troubleshooting possibilities for server virtualization admins.

Whether you are new to server virtualization or a seasoned veteran, there is a very good chance that your first hands-on experience with the technology was a desktop tool such as VMware Workstation, VMware Fusion, Parallels or even Windows Virtual PC. You probably installed it for a chance to kick the virtual tires, or maybe to aid in a major operating system change.

Regardless of the reason, for many of us the virtualization journey began with a desktop hypervisor. In fact, I don’t think we give enough credit to just how great a role these desktop tools play in the world of server virtualization.

Desktop hypervisors may provide more value than you realize, and no IT admin has a good excuse not to be running one. For more on this topic, check out my TechTarget article “Why virtualization admins and desktop hypervisors should be BFFs,” then come back here and leave any comments that you may have.

Rumors of the Data Center’s Demise

As with server virtualization in years past, new data center technology is unsettling for some IT pros. But, if you embrace change, you potentially make yourself more marketable and can vastly improve data center operations.

Several years ago, a hardware vendor ran a commercial that showed someone frantically searching for missing servers in a data center. He eventually learned that the large room full of servers had been consolidated onto a single enterprise server platform.

It was an amusing commercial, although this trend of server consolidation did not play out as the advertisement implied. In fact, it was server virtualization that brought about large-scale consolidation efforts. But the premise was still true: Reducing the data center’s physical footprint caused a good deal of anxiety in IT administrators.

To continue this thought, read my TechTarget article “Embrace, don’t fear, new data center technology“, and then come back here to leave a comment.
