Weblog of Mark Vaughn, an IT professional and vExpert specializing in Enterprise Architecture, virtualization, web architecture and general technology evangelism


Cloud Alignment

As an IT admin, it is often difficult to explain what you do in a single, simple answer. Your job is not simple or one-dimensional; system administration is merely the tip of the iceberg. You manage networks, storage arrays and servers so your company can provide services to customers and reach business goals. When considering cloud strategies for your organization, you can’t lose sight of that bigger picture: the strategy has to serve those ultimate goals.

In the case of cloud strategies, maybe you choose to operate much like an Internet service provider, establishing a private cloud that provides services to business units. You could also employ a public cloud to offload time-consuming maintenance tasks. You might even use cloud services to move non-revenue-generating work out of your data center. All of these are useful cloud strategies, but you must ask yourself whether they truly align with your business goals.

To continue this line of thought, read my TechTarget article “Aligning virtualization and cloud strategies with business goals”, then come back here and leave a comment to start the discussion.

Beware: Storage Sizing Ahead

Managing data growth continues to be a struggle, and organizations are beginning to outgrow that first storage array they bought a few years ago. As they do, some are in for a big surprise. For years, the focus has been on adding storage capacity; in fact, developing a storage strategy is still referred to as a “sizing” exercise. Today, however, the challenge is accessing that huge amount of data in an acceptable amount of time, and a drive’s size or capacity has little or no correlation to its performance. An IT administrator who focuses too narrowly on adding capacity can end up with an array that holds all the data but can’t support the IOPS demanded by applications.
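To see why, it helps to run the numbers both ways. The sketch below is a back-of-the-envelope illustration (the per-drive figures are assumptions for the example, not vendor specifications): sized purely for capacity, a handful of large drives is enough, but sized for the IOPS the application actually demands, the drive count looks very different.

```python
import math

# Hypothetical workload requirements
required_capacity_tb = 50     # usable capacity the application needs
required_iops = 20_000        # peak IOPS the application demands

# Assumed per-drive figures for a large, slow nearline drive (illustrative only)
drive_capacity_tb = 4
drive_iops = 80

drives_for_capacity = math.ceil(required_capacity_tb / drive_capacity_tb)
drives_for_iops = math.ceil(required_iops / drive_iops)

print(f"Drives needed for capacity: {drives_for_capacity}")   # 13
print(f"Drives needed for IOPS:     {drives_for_iops}")       # 250
```

The gap between those two numbers is the surprise: an array sized only on the first figure will hold the data just fine and still leave the applications starved for I/O.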

If you are considering a storage upgrade, it is critical that you understand how this can impact your organization. I cover this in more detail in my TechTarget article “Adding storage capacity can actually hurt IOPS”. Please take a minute to read the article, then come back here and leave a comment to contribute to the conversation.

Software-Defined Data Center

When you think about a data center, you likely imagine a large building with diesel generators. Inside, you probably picture racks full of servers and networking equipment, raised floors, cable trays and well-positioned ventilation. After all the hours that I have spent in data centers, thoughts like this actually make me feel cold.

So, what truly defines the physical data center? Is it really defined by what we physically see when we step inside? Thousands of data centers may use the same network switches, but any two are rarely the same.

Purpose-built hardware provides a way to execute code and relay data. What makes a device unique is its software or configuration. Every data center has a core router, but it is the configuration of that device that makes the router unique. The physical device is simply a conduit for executing the function, as defined by the software. Although the physical aspects of a data center have not changed much, the number of end-user systems that directly touch those components has decreased dramatically.
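To make that idea concrete, here is a purely illustrative sketch (the field names are hypothetical and not tied to any particular product) of a workload described as data: the description captures what the service needs, and the physical hardware is simply whatever can satisfy it.

```python
# Purely illustrative: a service expressed as metadata. In a software-defined
# data center, an automation layer reads a description like this and places
# the workload on whatever physical hardware is available.
web_tier = {
    "name": "web-tier",
    "instances": 4,
    "vcpus_per_instance": 2,
    "memory_gb_per_instance": 8,
    "network": {"vlan": 120, "load_balanced": True},
    "storage": {"tier": "ssd", "size_gb": 40},
    "placement": {"anti_affinity": True},   # keep instances on separate hosts
}

def summarize(service):
    """Report the aggregate footprint this metadata requests from the data center."""
    vcpus = service["instances"] * service["vcpus_per_instance"]
    memory = service["instances"] * service["memory_gb_per_instance"]
    print(f"{service['name']}: {vcpus} vCPUs, {memory} GB RAM requested")

summarize(web_tier)   # web-tier: 8 vCPUs, 32 GB RAM requested
```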

Virtualization changed the way we define the data center. The data center is no longer just a room of hardware, but a room of metadata about applications and services. To read more on this shift in data center thinking, please read my TechTarget article “How the software-defined data center changes the virtualization game”, then come back here and leave a comment to tell me what you think.

Virtualizing Hadoop

Large companies have long used big data analytics to comb through vast amounts of data in a short amount of time. Companies with deep pockets and ever-expanding amounts of data have built large server clusters dedicated to mining data. Hadoop clusters can help small and medium-sized businesses lacking big budgets benefit from big data.

Have you ever wondered how search engines guess what you want to type in the search field before you finish typing it, or offer suggestions of related queries? Or noticed that Facebook and LinkedIn recommend people you may know based on your existing connections? These are two examples of Hadoop clusters processing large amounts of data in fractions of a second.

Hadoop, an open source project from the Apache Software Foundation, is built to run on commodity hardware. Even so, the most expensive part of building a Hadoop cluster is the compute and storage resources. Server virtualization can help reduce that cost, bringing big data analytics to budget-constrained organizations.
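To give a feel for the programming model those clusters run, here is a toy, single-process version of the classic MapReduce word count. This is only a sketch: real Hadoop splits the input across nodes, shuffles the mapper output by key and runs reducers in parallel, and the sample data here is made up.

```python
from collections import defaultdict

def map_phase(line):
    """Mapper: emit (word, 1) for every word in a line of input."""
    for word in line.strip().split():
        yield word, 1

def reduce_phase(word, counts):
    """Reducer: sum every count seen for one word."""
    return word, sum(counts)

lines = ["the quick brown fox", "the lazy dog", "the quick dog"]

# "Shuffle": group mapper output by key, as the framework does between phases
grouped = defaultdict(list)
for line in lines:
    for word, count in map_phase(line):
        grouped[word].append(count)

for word in sorted(grouped):
    print(reduce_phase(word, grouped[word]))   # ('brown', 1), ('dog', 2), ...
```

The logic stays this simple even as the data grows into terabytes; Hadoop’s job is to spread the map and reduce work across many inexpensive, and increasingly virtualized, nodes.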

“Big Data” and “Analytics” are more than just buzzwords; they are business gold mines waiting to be discovered. To read more on this topic, visit TechTarget and read my article “Virtualization makes big data analytics possible for SMBs”. As always, please come back here and leave any comments.

Server To Go

Desktop hypervisors, such as VMware Workstation and Parallels Desktop, open up a world of management and troubleshooting possibilities for server virtualization admins.

Whether you are new to server virtualization or a seasoned veteran, there is a very good chance that your first hands-on experience with the technology was in the form of a desktop tool such as VMware Workstation, VMware Fusion, Parallels or even Windows Virtual PC. You probably installed it to kick the virtual tires, or maybe to aid in a major operating system change.

Regardless of the reason, for many of us the virtualization journey began with a desktop hypervisor. In fact, I don’t think we give enough credit to just how great a role these desktop tools play in the world of server virtualization.

Desktop hypervisors may provide more value than you realize, and no IT admin has a good excuse to not be running one. For more on this topic, check out my TechTarget article “Why virtualization admins and desktop hypervisors should be BFFs”, then come back here and leave any comments that you may have.

Rumors of the Data Center’s Demise

As with server virtualization in years past, new data center technology is unsettling for some IT pros. But, if you embrace change, you potentially make yourself more marketable and can vastly improve data center operations.

Several years ago, a hardware vendor ran a commercial that showed someone frantically searching for missing servers in a data center. He eventually learns that the large room full of servers has been consolidated onto a single enterprise server platform.

It was an amusing commercial, although this trend of server consolidation did not play out as the advertisement implied. In fact, it was server virtualization that brought about large-scale consolidation efforts. But the premise was still true: Reducing the data center’s physical footprint caused a good deal of anxiety in IT administrators.

To continue this thought, read my TechTarget article “Embrace, don’t fear, new data center technology”, and then come back here to leave a comment.

Multiple Hypervisors Ahead: Proceed with Caution

Multi-hypervisor management tools are great for common provisioning and monitoring tasks, but they fall flat when it comes to the deeper functionality that provides the most value.

Looking past the feature parity and marketing hype in the hypervisor market, there are compelling reasons to deploy a heterogeneous virtualization environment (e.g., reducing costs, improving compatibility with certain applications, avoiding vendor lock-in). So what do you do? Do you keep fighting it off? Do you concede and draw lines of delineation between the hypervisors? Do you throw your hands up and simply let them all in?

The answer certainly depends on your skillsets, needs and tolerance for pain. But if you’re looking for an easy way out through a multi-hypervisor management tool, you may be disappointed.

For more on this topic, check out my TechTarget article “Proceed with caution: Multi-hypervisor management tools”, then come back here and leave a comment.

VMWorld 2012 Voting

It is that time again, and I am asking that you take a minute to vote on my submissions to VMWorld 2012. This year, I am focusing on End User Computing/VDI topics. I have worked on a number of VDI projects over the last year, and wanted to share some of those experiences with the community. In particular, I wanted to focus on VDI in education. Having worked on both commercial and education implementations of virtual desktop environments, there are key differentiators that can seriously impact the success of VDI in educational deployments.

My first session is “Tyler ISD: One Year Later” (session 2812). Tyler Independent School District is a large K-12 school district in East Texas and a well-respected leader in educational technology. John Orbaugh, the Tyler ISD Director of Technology, serves on a number of technology boards and committees and presents at conferences and other technology events. Just over a year ago, Tyler began deploying Phase One of a very aggressive VDI project, putting 2,500 VMware View seats on a VCE Vblock. Over the last school year, some conditions changed, software conflicts were discovered and rapid growth led to performance concerns. In this session, John and I will discuss these issues, their impact, how they were addressed, and how they will shape the future phases that take the user count from 2,500 to over 15,000. This is a great session for anyone thinking about or preparing to deploy VDI in an educational environment.

My second session is “VDI in Education” (session 2872). In this session, Chris Reed and I will discuss the many nuances involved in designing and deploying VDI within an educational environment. Chris and I have each worked on a number of VDI projects in both commercial and educational environments. For this session, we will focus on how educational deployments differ from commercial ones, and how even a higher education deployment may differ from a K-12 deployment. We will address all aspects of the project lifecycle, from technology selection, to budgeting and funding considerations, through technical design and final implementation. Each step along the way presents unique challenges for educational institutions, and knowing how to account for those challenges can improve the effectiveness of your future deployments.

Another session that I would recommend is Steve Kaplan’s session “Virtual Desktops: The Gateway to the Cloud” (session 1446). Steve is a very gifted speaker and is extremely knowledgeable. Steve has authored several books and speaks at large conferences and technology events all over the United States.

How To Vote:
To vote, go to http://www.vmworld.com/cfp.jspa. You can sign in with an existing VMWorld account or create a new one.

Once signed in, click on the “Filter Options” button above the sessions on the right-hand side. Simply type the word “vaughn” into the “Keywords” field and click on the Submit button. There you will find my sessions (#2812 and #2872). Please click on the “thumbs up” icon to register a vote for these sessions.

While you are there, I would also recommend the sessions with Chad Sakac of EMC and Vaughn Stewart of NetApp. I am in the vExpert program with both Chad and Vaughn; they are not only experts in storage but also excellent presenters. You will leave one of their sessions entertained, well informed and almost unaware of the fact that they work for competing storage vendors.

After that, go back to the filter options and type “Kaplan” into the Keyword field to find Steve Kaplan’s session and vote for that as well. In fact, you can type “Presidio” into the Keyword field and find sessions from myself, Steve Kaplan and some of our other colleagues at Presidio. Your votes are greatly appreciated, and I will see you at VMWorld!

Circle of Innovation

I was recently talking with a colleague about new technologies and noticed a pattern: yesterday’s hardware innovations keep reappearing as tomorrow’s software features. The more I thought about it, the more I realized that this IT trend is nothing new.

Ten years ago, for example, server sprawl was a very real concern. As end users demanded more servers for a particular application or task, hardware vendors turned to small-appliance form factors and blade configurations. These new server technologies helped promote denser server environments.

At first, the need for a large number of small servers was met in the physical world. Not far behind this hardware innovation came server virtualization, a software answer to the same demand for higher server counts. While the need was initially easier to meet with hardware, software vendors soon delivered a more efficient solution.

Fast-forward to today and there are similar IT trends. Many networking devices — from load balancers to firewalls — are now available as virtual appliances. What once required purpose-built hardware can now run on virtual hardware and be deployed almost anywhere. Virtual appliances are becoming the preferred format for networking equipment in many data centers.

When you think about it, this leapfrogging of technology really just follows Moore’s Law, the observation that computing power roughly doubles every two years. When a difficult problem first emerges, purpose-built hardware is coupled with custom application code to create a device that fills a very specific need. Over time, computing power and scale mature to the point where more flexible and dynamic software can replace the hardware solution.
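As a rough back-of-the-envelope illustration of how quickly that headroom compounds (the two-year doubling period is the commonly quoted approximation, not a precise figure):

```python
# How much relative compute accumulates if capability doubles every two years.
# Illustrative arithmetic only; real-world gains vary by workload and era.
def relative_compute(years, doubling_period=2):
    return 2 ** (years / doubling_period)

for years in (2, 6, 10):
    print(f"After {years:2d} years: ~{relative_compute(years):.0f}x")
# After  2 years: ~2x
# After  6 years: ~8x
# After 10 years: ~32x -- roughly the headroom that lets general-purpose
# software absorb work that once demanded purpose-built hardware
```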

You can read more on this topic in my article on TechTarget’s Search Server Virtualization site at: http://searchservervirtualization.techtarget.com/news/2240150998/IT-trends-Where-hardware-innovation-leads-software-follows. As always, please come back here and leave any comments that you may have.

Chargeback vs. Showback

While nice on paper, chargeback and showback present complicated challenges for virtualization- and cloud-based data centers. To successfully implement these concepts, organizations must carefully evaluate their company-wide and IT objectives.

First, you need to determine if you are looking to simply report on resource consumption or potentially charge tenants for their resource utilization. If you sell hosting services, the answer is obvious: You need chargeback. But even within the corporate environment, more IT departments are looking to become profit centers, and must evaluate whether they need full-fledged chargeback, or simply showback.

Where showback is used purely to track and report utilization without billing, chargeback is used to bill tenants for the resources they consume. Each model has its own considerations and challenges.
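The difference is easy to see in code. The sketch below is hypothetical (the tenants, metrics and rates are made up for illustration): both models start from the same metered usage, but only chargeback applies a rate card and produces an invoice.

```python
# Metered usage per tenant (hypothetical numbers)
usage = {
    "marketing":   {"vcpu_hours": 1200,  "gb_ram_hours": 4800,  "gb_storage": 500},
    "engineering": {"vcpu_hours": 6400,  "gb_ram_hours": 25600, "gb_storage": 4000},
}

# The rate card only matters for chargeback (illustrative prices)
rate_card = {"vcpu_hours": 0.03, "gb_ram_hours": 0.01, "gb_storage": 0.10}

# Showback: report consumption, no bill
for tenant, metrics in usage.items():
    print(f"[showback]   {tenant}: {metrics}")

# Chargeback: turn the same data into an invoice
for tenant, metrics in usage.items():
    bill = sum(metrics[resource] * rate for resource, rate in rate_card.items())
    print(f"[chargeback] {tenant}: ${bill:,.2f}")
```

Showback stops at the first loop; chargeback has to defend everything in the second one, which is where rate setting, billing disputes and budget politics come in.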

To explore this topic in more detail, check out my TechTarget article “Chargeback vs. showback: Which method is best for you?”, then come back here to leave any comments.
