Weblog of Mark Vaughn, an IT professional and vExpert specializing in Enterprise Architecture, virtualization, web architecture and general technology evangelism

Category: Technology

Beware of Free Puppies

One of my co-workers, Chris Reed (www.creedtek.com), uses the analogy that the most expensive pet you can get is a free dog. The initial cost is unbeatable. How can you get cheaper than $0? However, this transaction is followed by the vet bills, the inevitable property damage and the chewed-up slippers. Free dogs rarely have their shots, and puppies need several rounds of them. They usually need to be “fixed”, and I like to add in a location chip. Soon, you have shelled out a significant amount of cash on a free pet.

Many people will adopt a similar approach to virtualization, selecting a free product on the assumption that it will save them money. Can that work? Yes. Are there hidden costs to be aware of, and even to expect in the near future? Definitely. Can those costs be significant? Yes, quite significant.

If you are comparing free hypervisors, then it comes down to features. Microsoft includes more features in its free version than VMware does, but those features are less robust than their VMware counterparts. And as you move into the higher licensing levels toward an enterprise solution, VMware’s features are significantly more robust. That could mean starting on one hypervisor at the free level, then having to change hypervisors as your environment matures. That can be a VERY painful process.

I wrote more on this topic in my last article, “Free virtualization: It’s free for a reason”. I would also recommend visiting www.virtualizationmatrix.com for a great breakdown of features across Citrix, Microsoft and VMware at the various versions of their products. Andreas Groth has done a significant amount of research to build that matrix.

Mind the “Air Gap”

<IMAGE MISSING>

I regularly work with organizations that are wary of mixing public and private workloads in a common virtualization environment. Whether it is mixing public and private workloads, hosting multiple organizations on a common virtual infrastructure or simply combining workloads from various internal networks, there is still a lot of concern around the security aspects of this discussion. Many people still look at one physical server and get uneasy about different workloads sharing it. Logically, they equate sharing a server with sharing an operating system, and that is the root of many concerns. It is an easy misconception to form, since traditional deployments have long been exactly that: one operating system per physical server. If not properly explained, virtualization remains a black box to many people, and old perceptions stay in place.

This is where we, as consultants and virtualization architects, need to do a better job of explaining new technologies. In this case, it is not even a new technology, just a real lack of education in the marketplace. In 2001, the National Security Agency (NSA) worked with VMware on a project called NetTop to develop a platform for mixing secure and non-secure workloads on a common device. Previously, the NSA had maintained an “Air Gap” policy of not letting servers with mixed security needs touch each other. With the NetTop project, the NSA leveraged virtualization to bring these workloads onto a common server or workstation. This was not 2 years ago, but 10 years ago. And the security measures deployed in NetTop have only been improved upon since then.

In fact, in 2007, the NSA came back to VMware to develop their High Assurance Platform (HAP). I won’t pretend to know your security needs, but I know virtualization has long been used for mixing highly sensitive data by people who live and die by data security.

You can read more on this in my latest TechTarget article:
http://searchservervirtualization.techtarget.com/news/2240036024/Mind-the-air-gap-Can-security-and-consolidation-coexist

Lessons from the clouds

In my MBA studies, many classes touched on Herb Kelleher and Southwest Airlines. Mr. Kelleher was an excellent example of how to lead, and he guided Southwest to growth in a very difficult market. As I revisit my previous studies, I now see technology lessons that parallel the business lessons.

<IMAGE MISSING>

Southwest simplified operational costs by selecting a single model of airplane and focusing on high-density routes. They also rode out some short-term spikes in oil prices by leveraging advance purchases of jet fuel.

To read how these lessons relate to IT, visit my article “Successful virtualization strategies learned from the airline industry” at SearchServerVirtualization, then come back here to leave comments.

iPad vCenter Client

Had to throw out a quick comment on the new iPad vCenter Client from VMware.

For over a year now, VMware has offered the vCenter Mobile Access (vCMA) appliance. I have used it internally, but it never caught on as well as I had expected. One drawback was the lack of SSL support, and that was fixed last week. Here are some quick screenshots of vCMA in action (these were taken on an iPad; vCMA is really made to be viewed on a smaller PDA or phone screen, so some screens have excess whitespace):

<IMAGES MISSING>

vCMA was a great tool, but it just got better. VMware has developed a new iPad vCenter Client that leverages the vCMA to provide an even better user interface. Like the vCMA, the iPad vCenter Client can only do about 50% of the standard functions available in the Windows vCenter Client, but VMware is now committed to growing this application and adding more functionality. From some of the pre-launch discussions I took part in, VMware is very excited about this tool and anxious to begin expanding its functionality. The iPad client connects through the vCMA, and I am not sure I will be exposing it to the internet any time soon. I only operate a lab, and the vCMA now has SSL support, but I have VPN access and will likely use that to let vCMA stay behind the firewall…for now. Here are some shots of the iPad client, and you can see how much it improves on the previous vCMA interface:

<IMAGES MISSING>

As you can see in the images above (click on any to enlarge them), you can view the stats for ESXi hosts and for the VMs from the main screen. There is a small stats icon in the upper right corner of each VM’s image that will change its image from a banner representing the OS to a stats chart. Once you drill down to a VM, you can perform start/stop/suspend/restart functions, as well as restore snapshots. You can also view recent events, monitor stats and perform tests (ping and traceroute). Not bad for a convenient app you take with you on an iPad.

Steve Herrod, CTO at VMware, officially announced the iPad vCenter Client this morning, along with a link to this article on VMware’s CTO blog site.

Eric Siebert (virtualization guru and fellow vExpert) also wrote a great post on this at vSphere-Land. Be sure to follow the “full article” and “part 2” links at the bottom of the article to get more information and installation instructions.

As great as this client is, do not feel left out if you do not have an iPad (or if you use one of those inferior tablets…Aaron ;-), you can still use the vCMA from almost any mobile browser on a cell phone or tablet. Though the interface is not as refined, it will provide the same basic functionality.

Managing in Muddy Waters

Virtualization management tools are popping up everywhere, and some are much better than others. In fact, few really shine at this point. Part of the problem is that these tools are attempting to tame a wild animal. Virtualization technologies are expanding and growing at a blinding pace, and no one can truly keep up with the current pace of change…let alone manage it.

Vendors like VKernel, Veeam and Quest are doing a good job, but don’t hold your breath looking for one tool to rule them all. There will always be advanced features within a hypervisor that management tools have not caught up to. You will either have to limit yourself to the features supported by your management platform (better pick a robust platform, or you will be crippling your hypervisor and destroying your ROI), or you will have to accept that you will still use the native hypervisor management tools to manage advanced features (limiting the ROI on the new management tool).

This trade-off is frustrating, but it is one that will not go away until the pace of change within virtualization technologies slows down considerably. In other words, it will not change any time soon.

Though I generally recommend against it, I admit there may be reasonable cases for mixing hypervisors within an environment. As you evaluate decisions like that, be sure to consider the impact on ROI. OpEx can go through the roof in those scenarios and easily wipe out the CapEx savings used to justify the decision. If you are then looking to a management tool to bring the two hypervisors together in a single pane of glass, do not set your expectations too high; few tools can provide real value in that scenario, and the ones that could make any real impact may be cost prohibitive. Before you know it, you have pushed both CapEx and OpEx through the roof trying to manage a mixed environment.

This topic can go pretty deep, and in a hundred directions. I welcome your feedback and comments. I have written an article on this topic at SearchServerVirtualization – “Virtualization management tools: Navigating the muddy waters”. Be sure to check that out, then come back here to leave any comments or contribute to the discussion.

1201 Program Alarm

In July of 1969, the US was in a race for the moon. Astronauts Michael Collins, Buzz Aldrin and Neil Armstrong were entrusted with the Apollo 11 mission, taking the first shot at the momentous achievement. Last night, I caught a great documentary on this mission, walking through the many planning details and challenges involved in going where no man had gone before. In fact, knowing the technology of the time, I am still amazed that we were able to pull this off on the first attempt. These men were truly brave, trusting their lives to such new and largely untested technologies.

<IMAGE MISSING>

One thing that caught my attention was how meticulous Mission Control was, as they faced a number of “go/no go” decisions from the launchpad to the moon and back. As Buzz Aldrin and Neil Armstrong undocked the lunar module and began their descent to the moon’s surface, they were faced with almost impossible odds. There were so many calculations that had to be made in a split second, with no prior experiences to draw from. Gene Kranz, NASA’s Flight Director at Mission Control, was faced with a number of tough decisions as the lunar module approached the surface of the moon.

They did not know exactly when they would touch down, or how much fuel they would burn in the process, but they did have a good idea of how much they would need to relaunch and reconnect with the command module for the return trip to Earth. With every second, fuel consumption was being calculated and measured to ensure this was not a one-way trip. About 30 seconds after beginning their final approach, Neil Armstrong called out “1201 program alarm”. This was a computer error code that simply meant the computer was unable to complete all of the calculations it was attempting and had to move on. Timing was critical, and the programmers of the flight computers knew that should this condition occur, it was more important to simply note that some data was lost and move on. I can imagine the concern this caused, both in Mission Control and in the cramped lunar module. This is where the many eyes and ears monitoring the situation at Mission Control had to step up and ensure that the important information was not dropped.

As I watched this, I noticed how similar this is to the way PCoIP handles data (you knew this had to have a virtualization tie-in, right?). PCoIP uses UDP, which is connectionless and will drop data packets. At the most basic level of the solution, the networking layer, packets can be lost but the data does not stop flowing. UDP is like the flight computers on board the Apollo 11 lunar module. At the application layer, PCoIP becomes Gene Kranz and Mission Control, looking to determine what may have been lost and how that can impact the overall mission. Calculating the 100 feet in front of the lunar module was infinitely more important than continuing calculations for the 100 feet behind it. With PCoIP, the goal is an efficient and pleasant user experience. To achieve that, with a myriad of unknown factors that can come into play, UDP accepts that not all data is critical to the end goal, while PCoIP is the watchful eye that makes the “go/no go” decisions. For USB communications, data input and other critical data, it will request a retransmit to ensure accurate and reliable delivery of information. For audio or video packets, where it would inflict more harm to pause communications and attempt retransmits of the lost packets, it simply makes the decision to move on.

Protocols built on top of TCP, like RDP or ICA/HDX, cannot provide this intelligent decision-making. TCP guarantees delivery of packets at the transport layer, so the application has no option but to pause while waiting for the delivery of data. Sometimes, you really need to allow the software protocol to apply some intelligence to that decision.
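To make the idea concrete, here is a minimal Python sketch of that selective-reliability concept. The packet classes, freshness threshold and decisions are my own hypothetical illustrations of the behavior described above, not the actual PCoIP protocol logic:

```python
# Toy illustration of selective reliability over a lossy transport,
# loosely modeled on the PCoIP idea described above. The packet
# classes and the 50 ms freshness threshold are made-up assumptions.

CRITICAL = {"usb", "keyboard", "mouse", "clipboard"}   # must arrive intact
PERISHABLE = {"audio", "video"}                        # stale data is worthless

def handle_loss(packet_class: str, age_ms: float) -> str:
    """Decide what to do when a packet of the given class is lost."""
    if packet_class in CRITICAL:
        return "retransmit"   # correctness matters more than latency
    if packet_class in PERISHABLE:
        return "drop"         # replaying old frames hurts the experience
    # Anything else: retransmit only if the data is still fresh enough to matter.
    return "retransmit" if age_ms < 50 else "drop"

if __name__ == "__main__":
    for pkt, age in [("usb", 120.0), ("video", 8.0), ("telemetry", 80.0)]:
        print(f"{pkt:10s} lost {age:5.1f} ms ago -> {handle_loss(pkt, age)}")
```

The point is the shape of the decision, not the details: correctness-critical traffic gets retransmitted, perishable traffic gets dropped, and freshness breaks the tie. TCP offers no hook for that choice; everything waits in line.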

As the Apollo 11 lunar module rapidly approached the surface of the moon, with nearly zero tolerance for errors, NASA knew that some data loss was acceptable. In fact, it was preferable to the penalty that could have been incurred by stopping the pending functions to complete previous ones. Had the Apollo 11 flight computers actually stopped measuring fuel consumption and distance to the moon for a few seconds to finish whatever computation triggered that 1201 program alarm, the events of July 20, 1969, might have ended very differently.

Put down the gum, and nobody gets hurt

Virtualization has introduced a HUGE change in how servers are requested and acquired. In many cases, people have begun to think of virtual servers as being a “free” or “cheap” resource that has no lead time in requests and little cost for acquisition. This is very dangerous. Keeping adequate virtual resources available is critical to realizing the value of virtualization, but allowing this “sprawl” of virtual machines to steal these resources can be a serious issue. Replenishing resources for the virtualization environment is not free, so you cannot allow the consumption of those resources to be free.

In some ways, purchasing servers moved from the concept of a server being the large boxed item in the back of the store to being the pack of gum at the checkout counter. The large box requires a considerable financial investment to purchase and some logistical considerations to actually get home. You don’t buy these unless you need them, and there is some pain involved that discourages waste in these purchases. The pack of gum, you buy that on impulse on the way out the door, and you grab a few extras for later. Low investment, little pain, lots of waste.
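To put rough numbers on why those “gum pack” VMs are not actually free, here is a back-of-the-envelope sketch in Python. Every figure here (host cost, lifespan, opex, consolidation ratio) is a made-up assumption for illustration, not data from any real environment:

```python
# Back-of-the-envelope showback math with invented numbers, to show why
# "free" VMs are not free: every VM consumes a slice of real host cost.

HOST_CAPEX = 12_000.00       # hypothetical cost of one virtualization host
HOST_LIFESPAN_MONTHS = 36    # assumed depreciation window
HOST_OPEX_PER_MONTH = 250    # power, cooling, licensing, support (assumed)
VMS_PER_HOST = 25            # assumed average consolidation ratio

monthly_host_cost = HOST_CAPEX / HOST_LIFESPAN_MONTHS + HOST_OPEX_PER_MONTH
cost_per_vm = monthly_host_cost / VMS_PER_HOST

print(f"Each VM really costs about ${cost_per_vm:.2f} per month")
# 40 forgotten VMs quietly burn close to a full host's purchase price per year:
print(f"40 idle VMs waste about ${40 * cost_per_vm * 12:,.2f} per year")
```

Even with generous assumptions, the waste adds up fast, which is exactly why consumption of virtual resources needs a price tag attached to it.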

I go into this topic in much more detail in my recent SearchServerVirtualization article “Closing the VM sprawl floodgates” at http://searchservervirtualization.techtarget.com/news/column/0,294698,sid94_gci1523796,00.html

Please, come back here after reading this article and leave your comments. This is a topic that just won’t go away. Be looking for a future post on VM Stall, and how it relates to VM Sprawl.

Don’t put the cloud cart before the virtualization horse

The other day, I was digging into a discussion on thin provisioning, deduplication, snapshots and all of those great topics. I love these technologies, and I love to speak about them. However, after a few minutes, I came to a sad realization…I had ignored my audience and done more talking than listening.

There are a lot of very talented people in IT who have yet to begin their virtual journey. Sometimes, we can get so wrapped up in a virtual world that we forget that. When we let that happen, we lose value. This is a real paradigm change, and one that people need time to digest.

To read my entire article on this topic, please go to http://searchservervirtualization.techtarget.com/news/column/0,294698,sid94_gci1522243,00.html, then return here to leave feedback.

Where did it all go…

Have you taken a look at your shared storage lately? If you look closely enough, you are likely to begin asking “where did it all go?”. This is a very common situation: many people are finding that their storage bills have gone through the roof. As you start to look around, you will likely find free storage trapped in pockets all over your environment. This storage is not being used to actually store data; it is simply allocated and provisioned based on future projections. In some cases, it is simply poor judgment or bad business practices that have caused these allocations to be grossly overestimated.

Nothing is more frustrating than to be presented with a huge bill for a storage refresh, alongside an analysis report showing that you are only using about 1/2 to 2/3 of the storage you have provisioned. How do you combat this? It is never too late to correct course, and there is a lot that your storage vendor can do to help. Vendors like NetApp, EMC and Compellent have some great product offerings and feature sets to combat this wasted storage, while also helping you streamline your storage tiering and even your overall storage footprint.
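Here is a quick Python sketch of the math behind that kind of analysis report. The volume names and sizes are invented for illustration; the point is simply comparing provisioned capacity against what is actually in use:

```python
# Quick sketch, with invented volume names and sizes, of the
# "where did it all go?" math: provisioned capacity vs. actual use.

volumes = {
    # name:          (provisioned_gb, used_gb)  -- hypothetical figures
    "sql_prod":      (2_000, 900),
    "file_share":    (4_000, 2_600),
    "vm_datastore":  (6_000, 3_100),
}

total_prov = sum(p for p, _ in volumes.values())
total_used = sum(u for _, u in volumes.values())

for name, (prov, used) in volumes.items():
    print(f"{name:13s} {used}/{prov} GB used ({used / prov:.0%}), "
          f"{prov - used} GB trapped")

print(f"Overall utilization: {total_used / total_prov:.0%}; "
      f"{total_prov - total_used} GB allocated but empty")
```

In this made-up example, the environment sits at 55% utilization with 5,400 GB allocated but empty, which is exactly the sort of trapped storage that thin provisioning and reclamation features are built to recover.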

For more on this topic, please check out my SearchServerVM article “Storage: Blinded by the virtualization light” at http://searchservervirtualization.techtarget.com/news/column/0,294698,sid94_gci1520838,00.html

After you read the article, please come back here and leave any comments. There is a lot that can be discussed on this topic.

VMware Express – The Challenge

If you have not had a chance to see the VMware Express, you should stop now and go read up on it HERE. To see it in person, look at the schedule HERE.

This is an amazing vehicle, and the result of a strong commitment by VMware to demonstrate both their own technologies and those of their partners. You have a world-class data center on wheels, traveling the country to promote the game-changing virtualization technologies VMware has to offer.

Since seeing the data center in a trailer at VMWorld 2008, I have loved the concept of how much processing power and functionality you can pack into a mobile data center with virtualization. As a customer, I was excited when VMware announced the VMware Express several months ago and was checking the schedule for my first chance to actually get inside. Now that I am a partner, I see this as a mobile marketing tool (which it really always was). I am now checking the schedule for opportunities to bring customers to see the VMware Express.

In fact, that brings up a second point. Many partners are working on mobile demo platforms, looking to create platforms that can demonstrate core virtualization functionality with a price tag and footprint that actually make the unit feasible. To that point, two people have put together some very impressive mobile demo platforms. Simon Gallagher (@vinf_net) has developed the v.T.A.R.D.I.S., and describes it on his blog HERE and HERE. This is a configuration made of two cheap PC-grade desktops, hosting 4 ESXi servers as VMs and another 60 VMs running on those hosts. With this, you can easily demo vMotion, DRS, SRM, VDI and many other technologies. At VMWorld, Simon also pointed me to Didier Pironet’s (@dpironet) post on a similar setup using slightly more powerful computers. You can read Didier’s post HERE.

So what is the challenge? What if there was a vehicle with a portion of the VMware Express functionality, in a footprint that was more feasible for partners to use? Maybe utilizing a setup similar to Simon’s or Didier’s, throwing in a wireless network and a Cisco Cius, an Apple iPad and a smaller zero client that could be used for a demonstration anywhere in the range of the wireless network. Maybe even throw in one wireless repeater for a little boost. Imagine showing a nice SRM or VDI demo, then saying “And all of this is running from that small vehicle out the window.”

I would recommend the Mini Club-S, taking out the back seats and leveraging the slightly expanded interior room. Besides, if you are making a miniature version of the VMware Express, why not use a “Mini”? Ladies and gentlemen, I present the VMware Mini-Express concept car.

So, that is the challenge. VMware, could you make a VMware Mini-Express, come up with a good competition and award it to a partner at VMware Partner Exchange next spring? Maybe even create a few additional Mini-Expresses to use in a regional role within VMware sales/marketing. Either way, it would make a great showcase for just how much power you can fit in a small package with VMware technologies. In fact, you could almost fit this in the conference room of the VMware Express trailer for transportation and have them tour together. Now I see images of Knight Rider or Smokey and the Bandit, but that is another blog post altogether 😉

If anyone does decide to take up this challenge, even if only for your own use, please let me know. I would love to see the final product!
