Weblog of Mark Vaughn, an IT professional and vExpert specializing in Enterprise Architecture, virtualization, web architecture and general technology evangelism

Category: TechTarget (Page 2 of 4)

Posts related to TechTarget articles.

Beware: Storage Sizing Ahead

Managing data growth continues to be a struggle, and organizations are beginning to outgrow that first storage array they bought a few years ago. As they do, some are in for a big surprise. For years, the focus has been on adding storage capacity. In fact, the development of a storage strategy is still referred to as a “sizing” exercise. Today, however, the challenge is accessing that huge amount of data in an acceptable amount of time, and a drive’s size or capacity has little or no correlation to the performance of the storage. An IT administrator who focuses too narrowly on adding storage capacity can end up with an array that can hold all the data, but can’t support the IOPS demanded by applications.
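
To make that concrete, here is a back-of-the-envelope sketch. The workload and per-drive numbers are illustrative assumptions, not vendor figures, but they show how capacity-only sizing and IOPS-aware sizing can point to very different arrays.

```python
# Rough sizing sketch: capacity alone vs. capacity plus IOPS.
# The workload numbers and per-drive figures below are illustrative
# assumptions, not vendor specifications.

import math

required_tb = 50          # usable capacity the applications need
required_iops = 20_000    # peak IOPS the applications demand

drive_tb = 4              # assumed 7.2K RPM NL-SAS drive
drive_iops = 80           # rough IOPS a single 7.2K spindle can sustain

drives_for_capacity = math.ceil(required_tb / drive_tb)
drives_for_iops = math.ceil(required_iops / drive_iops)

print(f"Drives needed for capacity alone: {drives_for_capacity}")   # 13
print(f"Drives needed to meet IOPS:       {drives_for_iops}")       # 250
```

An array sized only for the first number would hold every byte and still leave the applications starved for I/O.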

If you are considering a storage upgrade, it is critical that you understand how this can impact your organization. I cover this in more detail in my TechTarget article “Adding storage capacity can actually hurt IOPS”. Please take a minute to read the article, then come back here and leave a comment to contribute to the conversation.

Software-Defined Data Center

When you think about a data center, you likely imagine a large building with diesel generators. Inside, you probably picture racks full of servers and networking equipment, raised floors, cable trays and well-positioned ventilation. After all the hours that I have spent in data centers, thoughts like this actually make me feel cold.

So, what truly defines the physical data center? Is it really defined by what we physically see when we step inside? Thousands of data centers may use the same network switches, but any two are rarely the same.

Purpose-built hardware provides a way to execute code and relay data. What makes it unique is software or the device configuration. Every data center has a core router, but the configuration of that device is what makes the router unique. The physical device is simply a conduit for executing the function, as defined by the software. Although the physical aspects of a data center have not changed much, the number of end-user systems that directly touch those components has decreased dramatically.
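
To picture that idea in code, here is a minimal sketch: the service is described as data, and a generic infrastructure layer turns the description into running machines. The schema and the provision() function are hypothetical stand-ins for whatever orchestration layer you run, not any vendor’s actual API.

```python
# A minimal sketch of "the software defines the function": the data center's
# real content is this description, not the boxes that execute it. Both the
# schema and provision() are hypothetical, not a real product's API.

web_tier = {
    "name": "web-frontend",
    "vcpu": 2,
    "memory_gb": 8,
    "network": "dmz-vlan-110",
    "storage_policy": "gold",
    "replicas": 4,
}

def provision(definition: dict) -> None:
    """Placeholder for whatever orchestration layer consumes the metadata."""
    for i in range(definition["replicas"]):
        print(f"Deploying {definition['name']}-{i} "
              f"({definition['vcpu']} vCPU, {definition['memory_gb']} GB RAM) "
              f"on network {definition['network']}")

provision(web_tier)
```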

Virtualization changed the way we define the data center. The data center is no longer just a room of hardware, but a room of metadata about applications and services. To read more on this shift in data center thinking, please read my TechTarget article “How the software-defined data center changes the virtualization game”, then come back here and leave a comment to tell me what you think.

Virtualizing Hadoop

Large companies have long used big data analytics to comb through vast amounts of data in a short amount of time. Companies with deep pockets and ever-expanding amounts of data have built large server clusters dedicated to mining data. Hadoop clusters can help small and medium-sized businesses lacking big budgets benefit from big data.

Have you ever wondered how search engines guess what you want to type in the search field before you finish typing it, or offer suggestions of related queries? Maybe you’ve noticed that Facebook or LinkedIn recommend people you may know based on your connections? These are two examples of Hadoop clusters processing large amounts of data in fractions of a second.

Created by the Apache Software Foundation, Hadoop is an open source framework built to run on commodity hardware. The most expensive part of building a Hadoop cluster is the compute and storage resources. Server virtualization can help reduce that cost, bringing big data analytics to budget-constrained organizations.
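
For a feel of the programming model behind those examples, here is a toy, single-process illustration of MapReduce, the pattern Hadoop distributes across a cluster. It is a sketch of the concept only; a real Hadoop job would spread the map and reduce phases across many commodity or virtual nodes.

```python
# A toy illustration of the MapReduce model that Hadoop implements: count
# words across "documents". A real cluster distributes each phase across
# many nodes; this single-process version only shows the shape of the job.

from collections import defaultdict

documents = [
    "big data on a small budget",
    "hadoop brings big data to small business",
]

# Map phase: emit (key, value) pairs from each record independently.
mapped = [(word, 1) for doc in documents for word in doc.split()]

# Shuffle phase: group all values by key.
grouped = defaultdict(list)
for word, count in mapped:
    grouped[word].append(count)

# Reduce phase: combine the values for each key.
word_counts = {word: sum(counts) for word, counts in grouped.items()}

print(word_counts)   # e.g. {'big': 2, 'data': 2, 'small': 2, ...}
```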

“Big Data” and “Analytics” are more than just buzzwords; they are business gold mines waiting to be discovered. To read more on this topic, visit TechTarget and read my article “Virtualization makes big data analytics possible for SMBs”. As always, please come back here and leave any comments.

Server To Go

Desktop hypervisors, such as VMware Workstation and Parallels Desktop, open up a world of management and troubleshooting possibilities for server virtualization admins.

Whether you are new to server virtualization or a seasoned veteran, there is a very good chance that your first hands-on experience with the technology was in the form of a desktop tool such as VMware Workstation, VMware Fusion, Parallels or even Windows Virtual PC. You probably installed it for a chance to kick the virtual tires or maybe to aid in a major operating system change.

Regardless of the reason, for many, the virtualization journey began with a desktop hypervisor. In fact, I don’t think we give enough credit to just how great a role these desktop tools play in the world of server virtualization.
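
As one small example of the role these tools can play, a desktop hypervisor turns your laptop into a scriptable lab. The sketch below drives VMware Workstation through its vmrun utility; the VM path and snapshot name are assumptions for your own environment.

```python
# A small sketch of one way a desktop hypervisor earns its keep: scripting a
# throwaway lab VM with the vmrun utility that ships with VMware Workstation.
# The VM path and snapshot name are assumptions, not defaults.

import subprocess

VMX = "/vms/lab-server/lab-server.vmx"   # hypothetical VM on your workstation

def vmrun(*args: str) -> None:
    """Invoke vmrun with the Workstation host type."""
    subprocess.run(["vmrun", "-T", "ws", *args], check=True)

# Roll the VM back to a known-good snapshot, boot it headless, run a test,
# then power it off again -- a repeatable mini-lab for troubleshooting.
vmrun("revertToSnapshot", VMX, "clean-install")
vmrun("start", VMX, "nogui")
# ... exercise whatever you are testing ...
vmrun("stop", VMX, "soft")
```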

Desktop hypervisors may provide more value than you realize, and no IT admin has a good excuse not to be running one. For more on this topic, check out my TechTarget article “Why virtualization admins and desktop hypervisors should be BFFs”, then come back here and leave any comments that you may have.

Rumors of the Data Center’s Demise

As with server virtualization in years past, new data center technology is unsettling for some IT pros. But, if you embrace change, you potentially make yourself more marketable and can vastly improve data center operations.

Several years ago, a hardware vendor ran a commercial that showed someone frantically searching for missing servers in a data center. He eventually learns that the large room full of servers was consolidated into a single enterprise server platform.

It was an amusing commercial, although this trend of server consolidation did not play out as the advertisement implied. In fact, it was server virtualization that brought about large-scale consolidation efforts. But the premise was still true: Reducing the data center’s physical footprint caused a good deal of anxiety in IT administrators.

To continue this thought, read my TechTarget article “Embrace, don’t fear, new data center technology“, and then come back here to leave a comment.

Multiple Hypervisors Ahead: Proceed with Caution

Multi-hypervisor management tools are great for common provisioning and monitoring tasks, but they fall flat when it comes to the deeper functionality that provides the most value.

Look past the feature parity and marketing hype in the hypervisor market and there are compelling reasons to deploy a heterogeneous virtualization environment (e.g., reducing costs, improving compatibility with certain applications, avoiding vendor lock-in). Do you keep fighting it off? Do you concede and draw lines of delineation between the hypervisors? Do you throw your hands up and simply let them all in?

The answer certainly depends on your skillsets, needs and tolerance for pain. But if you’re looking for an easy way out through a multi-hypervisor management tool, you may be disappointed.
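
One way to see why is to sketch what a cross-hypervisor abstraction can actually promise. The class below is hypothetical, not a real product, but it shows the lowest-common-denominator problem: only the operations every hypervisor shares fit cleanly.

```python
# A sketch of the lowest-common-denominator problem in multi-hypervisor
# management. The class and method names are hypothetical illustrations,
# not any vendor's API.

from abc import ABC, abstractmethod

class HypervisorAdapter(ABC):
    """Operations every hypervisor supports -- the easy 80 percent."""

    @abstractmethod
    def create_vm(self, name: str, vcpu: int, memory_gb: int) -> None: ...

    @abstractmethod
    def power_on(self, name: str) -> None: ...

    @abstractmethod
    def report_usage(self) -> dict: ...

    # Deeper, platform-specific features (DRS rules, Hyper-V Replica,
    # distributed switches, fault tolerance) have no common shape, so a
    # cross-hypervisor tool either omits them or wraps each one separately --
    # and that is exactly where the real value, and the real work, lives.
```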

For more on this topic, check out my TechTarget article at “Proceed with caution: Multi-hypervisor management tools“, then come back here and leave a comment.

Circle of Innovation

I was recently talking with a colleague about new technologies and noticed a pattern: Many of last year’s innovations in hardware are very similar to next year’s new software features. The more I thought about it, the more I realized that this IT trend is nothing new.

Ten years ago, for example, server sprawl was a very real concern. As end users demanded more servers for a particular application or task, hardware vendors turned to small-appliance form factors and blade configurations. These new server technologies helped promote denser server environments.

At first, the need for a large number of small servers was met in the physical world. Not far behind this hardware innovation came server virtualization, a software answer to the need for higher server counts. While this need was initially easier to meet with hardware, software vendors soon delivered a more efficient solution.

Fast-forward to today and there are similar IT trends. Many networking devices — from load balancers to firewalls — are now available as virtual appliances. What once required purpose-built hardware can now be run on virtual hardware and deployed almost anywhere. In fact, virtual appliances are actually becoming the preferred format for networking equipment in many data centers.

When you think about it, this leapfrogging of technology really just follows Moore’s Law, which observes that computing power roughly doubles every two years. Initially, when a difficult problem emerges, purpose-built hardware is coupled with custom application code to create a device that fills a very specific need. Over time, computing power and scaling mature to a point where more flexible and dynamic software can replace the hardware solution.
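
A quick back-of-the-envelope calculation shows why the hand-off keeps happening: assuming a fixed two-year doubling period (the baseline of 1.0 is arbitrary), general-purpose capability grows roughly 32-fold over a decade, which is plenty of headroom for software to absorb what once required dedicated silicon.

```python
# Back-of-the-envelope view of the doubling curve behind this leapfrogging.
# Assumes a fixed two-year doubling period and an arbitrary baseline of 1.0.

def relative_capability(years: float, doubling_period: float = 2.0) -> float:
    """Capability relative to today, assuming a fixed doubling period."""
    return 2 ** (years / doubling_period)

for years in (2, 4, 6, 8, 10):
    print(f"{years:2d} years out: ~{relative_capability(years):.0f}x today")
# 10 years out: ~32x today
```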

You can read more on this topic in my article on TechTarget’s Search Server Virtualization site at: http://searchservervirtualization.techtarget.com/news/2240150998/IT-trends-Where-hardware-innovation-leads-software-follows. As always, please come back here and leave any comments that you may have.

Chargeback vs. Showback

While nice on paper, chargeback and showback present complicated challenges for virtualization- and cloud-based data centers. To successfully implement these concepts, organizations must carefully evaluate their company-wide and IT objectives.

First, you need to determine if you are looking to simply report on resource consumption or potentially charge tenants for their resource utilization. If you sell hosting services, the answer is obvious: You need chargeback. But even within the corporate environment, more IT departments are looking to become profit centers, and must evaluate whether they need full-fledged chargeback, or simply showback.

Where showback is used purely to track utilization without billing, chargeback is used to bill tenants for resource consumption. And each model has its own considerations and challenges.
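
A minimal sketch makes the distinction concrete: both models meter the same consumption; chargeback simply attaches a price and produces an invoice. The rates and usage figures below are made-up assumptions, not recommendations.

```python
# A minimal sketch of showback vs. chargeback. Both start from the same
# metered usage; only chargeback prices it. All numbers are illustrative.

usage = {"vcpu_hours": 1_440, "ram_gb_hours": 5_760, "storage_gb_months": 500}
rates = {"vcpu_hours": 0.03, "ram_gb_hours": 0.01, "storage_gb_months": 0.10}

# Showback: report consumption back to the tenant; no money changes hands.
showback_report = dict(usage)

# Chargeback: price the same consumption and bill for it.
chargeback_invoice = {metric: qty * rates[metric] for metric, qty in usage.items()}
total_due = sum(chargeback_invoice.values())

print("Showback:", showback_report)
print("Chargeback:", chargeback_invoice, f"Total due: ${total_due:.2f}")
```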

To explore this topic in more detail, check out my TechTarget article “Chargeback vs. showback: Which method is best for you?”, then come back here to leave any comments.

Saving the Best for Last

Virtualizing tier-one applications has begun to catch on. If your virtualization strategies do not include business-critical workloads, you may want to reevaluate that decision.

Over the past three to five years, a major shift took place in server infrastructure. As budgets tightened and virtualization technologies matured, a perfect storm formed, and server virtualization exploded onto the data center scene.

By the time most IT shops began exploring virtualization, their server environments were rife with low-end servers, which consumed rack space at an unprecedented pace, stressing data center infrastructures to their breaking points.

In many of these IT shops, the return on investment and other benefits of server virtualization, such as reduced power draw and greater resource efficiency, had an immediate effect. But many IT departments found themselves with a daunting list of servers to virtualize. Between migration lists and the constant stream of new servers, IT first attacked the “low hanging fruit,” or the easy-to-virtualize servers that presented little to no risk.

That was a good strategy, but now you are through the “low hanging fruit” and need to take a serious look at how to virtualize your tier-one applications. For more on this topic, read my TechTarget article “Tier-one applications: Part of effective virtualization strategies”. Feel free to come back here and leave comments.

Please Ask Stupid Questions

Too often, people are afraid to ask stupid questions. In my opinion, stupid questions may be the best and most valuable questions.

It is very easy to develop “groupthink”, where people within a group or organization adopt a common view of a situation. This is good for teamwork, but bad for vetting complicated designs or enacting long-term strategies. About the only thing that can be guaranteed in a long-term strategy is that the decision criteria will change over time. However, if you blindly stick with the strategy, even after the criteria used to develop it have changed, you may no longer be going in the right direction for the organization.

Three years into a five-year project, once everyone is fully educated on the goals and the project plan, you need someone who will stand up and ask the stupid question: “Why are we doing this?” Don’t let the question annoy you, and don’t let your pride keep you from truly considering it. Every project and design needs a devil’s advocate, forcing you to revisit decisions and justify why they are (still) relevant.

For more on this topic, check out my article in TechTarget’s Search Server Virtualization site at: http://searchservervirtualization.techtarget.com/news/2240118451/Asking-stupid-questions-can-lead-to-server-virtualization-success. As always, feel free to come back here and leave comments.
