Weblog of Mark Vaughn, an IT professional and vExpert specializing in Enterprise Architecture, virtualization, web architecture and general technology evangelism

Category: TechTarget

Posts related to TechTarget articles.

The Server is Dead…Long Live the Server!

IT pros still argue over horizontal vs. vertical scaling, but the evolution of virtualization hardware appears to have subtly shifted the data center design debate to horizontal scaling vs. converged infrastructure.

Virtualization is winning the battle over installing an operating system on bare metal, and I think most people will concede that point. Not everyone has adopted virtualization, but time will soften their defenses and ever-improving virtualization options will win them over, until all are assimilated.

But what will happen to server hardware? Do people still care about the server? Even hypervisors need a physical home, and there are still plenty of discussions on how to best architect that foundation.

To read more on this topic, jump over to my TechTarget article “Virtualization hardware and data center design: did the debate shift?”

Mind the “Air Gap”


I regularly work with organizations that are wary of mixing workloads in a common virtualization environment. Whether it is mixing public and private workloads, hosting multiple organizations on a shared virtual infrastructure or simply combining workloads from various internal networks, there is still a lot of concern around the security implications. Many people look at one physical server and get uneasy about different workloads sharing it, because they mentally equate sharing a server with sharing an operating system. That is the root of many of the concerns, and it is an easy misconception to make: traditional deployments have long been exactly that, one operating system per physical server. If not properly explained, virtualization remains a black box and old perceptions stay in place.

This is where we, as consultants and virtualization architects, need to do a better job of explaining new technologies. In this case, it is not even a new technology, just a real lack of education in the marketplace. In 2001, the National Security Agency (NSA) worked with VMware on a project called NetTop to develop a platform for mixing secure and non-secure workloads on a common device. Previously, the NSA had maintained an “air gap” policy: servers with different security requirements were never allowed to touch each other. With the NetTop project, the NSA leveraged virtualization to bring those workloads onto a common server or workstation. That was not two years ago, but ten years ago, and the security measures deployed in NetTop have only been improved upon since then.

In fact, in 2007, the NSA came back to VMware to develop their High Assurance Platform (HAP). I won’t pretend to know your security needs, but I know virtualization has long been used for mixing highly sensitive data by people who live and die by data security.

You can read more on this in my latest TechTarget article:
http://searchservervirtualization.techtarget.com/news/2240036024/Mind-the-air-gap-Can-security-and-consolidation-coexist

Lessons from the clouds

In my MBA studies, many classes touched on Herb Kelleher and Southwest Airlines. Mr. Kelleher was an excellent example of leadership done right, and he led Southwest to growth in a very difficult market. As I revisit my previous studies, I now see technology lessons that parallel the business lessons.


Southwest simplified operational costs by selecting a single model of airplane and focusing on high-density routes. They also rode out some short-term spikes in oil prices by leveraging advance purchases of airline fuel.

To read how these lessons relate to IT, visit my article “Successful virtualization strategies learned from the airline industry” at SearchServerVirtualization, then come back here to leave comments.

Managing in Muddy Waters

Virtualization management tools are popping up everywhere, and some are much better than others. In fact, few really shine at this point. Part of the problem is that these tools are attempting to tame a wild animal. Virtualization technologies are expanding at a blinding pace, and no one can truly keep up with the current rate of change…let alone manage it.

Vendors like VKernel, Veeam and Quest are doing a good job, but don’t hold your breath waiting for one tool to rule them all. There will always be advanced features within a hypervisor that management tools have not caught up to. You will either have to limit yourself to the features supported by your management platform (better pick a robust platform, or you will be crippling your hypervisor and destroying your ROI), or accept that you will still use the native hypervisor management tools for advanced features (limiting the ROI on the new management tool).

This trade-off is frustrating, but it is one that will not go away until the pace of change within virtualization technologies slows considerably. In other words, it will not change any time soon.

Though I generally recommend against it, I admit there may be reasonable cases for mixing hypervisors within an environment. As you evaluate a decision like that, be sure to consider the impact on ROI. OpEx can go through the roof in those scenarios and easily wipe out the CapEx savings used to justify the decision. If you are then looking to a management tool to bring the two hypervisors together in a single pane of glass, temper your expectations; few tools provide real value in that scenario, and the ones that could make an impact may be cost prohibitive. Before you know it, you have pushed both CapEx and OpEx through the roof trying to manage a mixed environment.
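To put rough numbers on that trade-off, here is a minimal back-of-the-envelope sketch in Python. Every figure in it is a made-up assumption for illustration; substitute your own license, tooling and staffing costs.

```python
# Hypothetical numbers for illustration only -- substitute your own.
# A second hypervisor may cut license costs (CapEx), but it adds ongoing
# management overhead (OpEx): extra tooling, training and duplicated effort.

capex_savings_per_year = 40_000  # assumed yearly license savings from the cheaper hypervisor
extra_tooling_cost = 15_000      # assumed cross-platform management tool subscription
extra_admin_hours = 500          # assumed yearly hours spent on duplicated processes
loaded_hourly_rate = 75          # assumed fully loaded cost per admin hour

extra_opex = extra_tooling_cost + extra_admin_hours * loaded_hourly_rate
net_savings = capex_savings_per_year - extra_opex

print(f"Extra OpEx:  ${extra_opex:,}/year")
print(f"Net savings: ${net_savings:,}/year")  # negative means the mix costs you money
```

With these assumed numbers, the mixed environment loses $12,500 a year; the point is simply that the OpEx side of the ledger deserves as much scrutiny as the CapEx savings that justified the decision.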

This topic can go pretty deep, and in a hundred directions. I welcome your feedback and comments. I have written an article on this topic at SearchServerVirtualization – “Virtualization management tools: Navigating the muddy waters”. Be sure to check that out, then come back here to leave any comments or contribute to the discussion.

Don’t put the cloud cart before the virtualization horse

The other day, I was digging into a discussion on thin provisioning, deduplication, snapshots and all of those great topics. I love these technologies, and I love to speak about them. However, after a few minutes, I came to a sad realization…I had ignored my audience and done more talking than listening.

There are a lot of very talented people in IT who have yet to begin their virtual journey. Sometimes, we can get so wrapped up in the virtual world that we forget that. When we let that happen, we lose value. Virtualization is a real paradigm change, and one that people need time to digest.

To read my entire article on this topic, please go to http://searchservervirtualization.techtarget.com/news/column/0,294698,sid94_gci1522243,00.html, then return here to leave feedback.

Where did it all go…

Have you taken a look at your shared storage lately? If you look closely enough, you are likely to begin asking “where did it all go?” This is a very common situation: storage bills have gone through the roof, and as you start to look around, you find free storage trapped in pockets all over the environment. That storage is not actually being used to store data; it is simply allocated and provisioned based on future projections. In some cases, poor judgment or bad business practices have caused those allocations to be grossly overestimated.

Nothing is more frustrating than being presented with a huge bill for a storage refresh, alongside an analysis report showing that you are only using about half to two-thirds of the storage you have provisioned. How do you combat this? It is never too late to correct course, and there is a lot your storage vendor can do to help. Vendors like NetApp, EMC and Compellent have some great product offerings and feature sets to combat this wasted storage, while also helping you streamline your storage tiering and even your overall storage footprint.
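To see how quickly those pockets add up, here is a minimal sketch of the provisioned-versus-used math behind such an analysis report. The volume list is entirely hypothetical; in practice you would pull these figures from your array’s or hypervisor’s reporting tools.

```python
# Hypothetical volumes -- replace with figures from your storage reporting tools.
volumes = [
    # (name, provisioned_gb, used_gb)
    ("erp_data", 2048, 900),
    ("web_farm", 1024, 300),
    ("dev_test", 4096, 1100),
]

provisioned = sum(prov for _, prov, _ in volumes)
used = sum(used_gb for _, _, used_gb in volumes)
stranded = provisioned - used

print(f"Provisioned: {provisioned} GB")
print(f"Used:        {used} GB ({used / provisioned:.0%} of what you paid for)")
print(f"Stranded:    {stranded} GB allocated but storing no data")
```

Run against real numbers, a report like this is often the starting point for the thin provisioning and reclamation conversation with your vendor.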

For more on this topic, please check out my SearchServerVM article “Storage: Blinded by the virtualization light” at http://searchservervirtualization.techtarget.com/news/column/0,294698,sid94_gci1520838,00.html

After you read the article, please come back here and leave any comments. There is a lot that can be discussed on this topic.

Wire once and walk away

This is a phrase I am hearing more and more lately, and I love the concept behind it. Virtualization has consolidated servers, and blade servers have consolidated the footprints of those virtualization hosts. Now, blade solutions are beginning to achieve even greater efficiencies with creative and flexible wiring strategies. As FCoE and converged networking mature, blade technologies are using them to collapse cabling and leverage a single cable for multiple services (network, plus both block-level and file-level storage).

Physical servers are also leveraging this, though blade solutions will often be more efficient in this arena. While I think Cisco’s UCS has really taken this concept and run with it, others are catching on and making progress as well (e.g., HP’s Blade Matrix technologies). Regardless of your blade solution, this is a concept you need to get familiar with and evaluate against your future technology goals. By reducing infrastructure, you lower costs, ease administration and gain agility. Who doesn’t want that?

For more on this topic, please read my article “‘Wire once and walk away’ boosts data center efficiency” at SearchServerVirtualization, then come back here to leave any comments.

Will Cloud Computing level the playing field?

Lately, we hear about “cloud” everything. But the cloud is not merely forming; it is available today. I admit it is still in its infancy, but you need to be thinking about the impact it will have on business and IT. How do you plan to take advantage of this technology and service offering?

For many, I think cloud computing technologies and concepts will provide greater agility and effectively eliminate the barrier to entry for many technology services. A startup can leverage the cloud to quickly deploy an enterprise-class infrastructure, and the big players need to be preparing for this as well.

I discuss this in more detail in my article “Public cloud computing levels the playing field” on SearchServerVirtualization. Please take a minute to read the article, then come back here to leave any comments.

It’s 10pm, where is your capacity?

You can no longer simply set an alarm on capacity measurements and trust it to keep you clear of capacity problems, especially if you are looking at virtualization and/or cloud computing. Shared resources are a tremendous gain for efficiency, but they can be a double-edged sword when it comes to managing that shared capacity. The extra effort is well worth it, but you need to understand just how critical it is.

It is no longer enough to know where your capacity is; you have to know how it got there, where it is going, why it is going there, WHEN it is going there, and what factors may speed up or slow down that growth rate…I think you get the point. The role of resource and capacity management has just stepped into the spotlight, and you need to adjust your policies and practices to recognize that.
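As a concrete illustration of the difference between a static alarm and actually tracking where your capacity is going, here is a minimal sketch that fits a simple linear growth trend to usage history and projects when the pool runs dry. The samples are hypothetical; feed it real measurements from your monitoring platform.

```python
# Hypothetical daily usage samples (GB) -- replace with real monitoring data.
capacity_gb = 10_000
history_gb = [6200, 6350, 6480, 6650, 6800, 6980, 7150]

# Fit a least-squares linear growth rate over the day index.
n = len(history_gb)
mean_x = (n - 1) / 2
mean_y = sum(history_gb) / n
slope = (
    sum((x - mean_x) * (y - mean_y) for x, y in enumerate(history_gb))
    / sum((x - mean_x) ** 2 for x in range(n))
)

if slope > 0:
    days_left = (capacity_gb - history_gb[-1]) / slope
    print(f"Growing ~{slope:.0f} GB/day; pool full in ~{days_left:.0f} days")
else:
    print("No growth trend detected")
```

With these numbers, an 80% alarm would not even have fired yet, while the pool is on pace to fill in under three weeks; knowing the growth rate, and what is driving it, is what buys you lead time.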

Read more on this topic in the SearchServerVirtualization article “It’s 10p.m. Do you know where your capacity is?“, then come back here to leave your comments. These are big topics; I would love to hear what you think or how you may be adjusting to these changes.

Shiny new IT toys

All that glitters is not gold…sometimes it is simply a distraction. Sometimes, and I have been guilty of this, we let the desire to implement technology get in the way of meeting business needs. It can be very tempting, after evaluating an amazing new technology, to then begin looking for excuses (“opportunities”) to use it. Sometimes you find that true win-win scenario where the technology is an exact fit, and sometimes you end up forcing the fit in the hope that it will show increased value in the future.

Fred Nix hit this point very well with his post on 1/4 inch drill bits. Sometimes we simply need to step back and evaluate why we are looking at a new technology. If you are impressed with a presentation or excited after evaluating a new technology, then make note of that and add it to your toolbox of solutions. Then, when the right opportunity presents itself, reach into your toolbox and pull out the right solution for the problem in front of you.

Read more about this in my article “New IT Trends: Are they right for you?“, then come back here to leave any comments. As always, your thoughts and feedback are encouraged.
