Mitigating Cyber Risks: Part 1

My entire career has been surrounded by risk, mostly in the technology arena.  Regardless of the job role, from technology engineer through risk manager, solutions architect, IT leader, and into my current role as a consulting engineer, one thing has been commonplace – risk must be mitigated.  Today, risk is commonplace in every organization and thrives in the form of cyber threats, among others.  Technology has brought us vast advances in manufacturing, banking, medicine, and retail – but with it comes a significant increase in our risk footprint, leading to financial, data, or reputation loss.

Before we can begin the process of mitigating cyber risks (some call this risk management, which is an incorrect term in my opinion), we need to understand them and their potential impacts.  The risks themselves are varied, but from a high level they can be categorized as:

  • Accidental & Intentional Security Breaches
  • Operational Systems Failures
  • Downline & Upline Risks

Let’s break these down.

  • Security breaches are the exposure of systems or data beyond their intended and authorized access footprints. When looking at accidental security breaches, this can include things like a database backup left unsecured, private data sent to the wrong party, or something as simple as a data center cage left open while the engineer was on a smoke break.  These may seem trivial, but when your data is breached, or your corporate secrets are exposed – you will be left shouldering the responsibility.  Then there are intentional security breaches, those that wreaked havoc on the NSA, Adobe, and the Veterans Administration.  These come in the form of virtual or physical attacks intended to either steal data or disrupt services to an organization or individual.  These are the attacks that most organizations try to prevent first and foremost – often at the expense of other attack vectors.
  • Operational system failures are a form of cyber risk that I see frequently as a direct result of poor systems maintenance, lifecycle management, and a general overuse of the phrase, “If it ain’t broke, don’t fix it.” Remember, just because something is working does not mean it should not be replaced, patched, or upgraded – or in the world of vehicles, have its oil changed – on a routine basis.  A five-year lifecycle is about the maximum you should try to squeeze out of IT systems, and really, three years is where you should be to mitigate risks.  Before we move on, how long can your business run without access to any of its data because you failed to replace your SAN before its drives failed from age?
  • So, what are downline and upline risks? Well, these are risks that you assume by doing business with vendors and suppliers. Surely most of these risks fall to the business side of the world, right? Wrong. What happens when your phone systems go down – or your internet, international circuits, hosted email, CRM, or payroll systems?  People tend to get upset, right?  These are the risks that you cannot control completely, but are still responsible for.

Next up: how do we mitigate security breaches from a high level?  Certainly, there are already thoughts in your mind – areas that you know you need to address.  Well, that is a good place to start until the next post…


IT Strategic Roadmaps: A Commentary

What is an IT Strategic Roadmap?  Most people look at one as a plan that defines the long- and short-term goals for a product or solution within Information Technology.  While this is technically true, I believe it is a rather shallow way of looking at things.  Viewed holistically, a proper IT Strategic Roadmap – or any roadmap for that matter – can be a key driver in the success of the overall business unit.  To utilize these roadmaps in such a way, it becomes necessary to stop looking at them with a focus on the product, and instead focus on the process.  When you do this right, you gain insight into the business and can make better technology decisions with strategy at the core.

How do we do this?  Well, first you must identify the product that will be the focus of the roadmap in question.  When I say, “product”, I mean solution.  In some cases, you may need to be product specific, but it pays to avoid it where possible.  Use the following phrase:

We need a technology solution that provides ________.

Avoid the use of vendor- and/or technology-specific language that can pin you into a box.  We don’t like boxes.  This solution may need to provide more than one feature or function.  For example, let’s fill in that blank (in priority order):

  1. Provides user authentication for systems access across a broad spectrum of environments.
  2. Provides a framework for storing user access control information.
  3. Provides a foundational infrastructure for development of a security boundary to ensure information security.
  4. Allows for easy integration into other systems and environments using industry standard protocols.

Really, we all know what I am talking about here, right?  NetIQ eDirectory.  Perhaps Active Directory, IBM Tivoli Directory Server, OpenLDAP, or Samba4?  Keep it open.  Allow yourself flexibility.

Once you have the required provisos (we all know that my list is short – when you work with the Business Unit, which you should, you will have a much larger list), you can identify the products that may fit the bill.

From this list, you can create a requirements-to-products matrix: identify all the key requirements from the business and technology units, map them to the candidate technology products, and then start eliminating products from the options pool.  Once you have eliminated the poor fits, you can pick your product.  From there – you build out your roadmap to include:
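The map-then-eliminate step is simple enough to sketch in code. A minimal, hypothetical example – the requirement and product names here are placeholders, not a real evaluation:

```python
# Hypothetical requirements-to-products matrix; names are placeholders only.
requirements = [
    "user authentication",
    "access control store",
    "security boundary foundation",
    "standard protocol integration",
]

# Which requirements each candidate satisfies (assumed values for illustration).
products = {
    "Directory A": {"user authentication", "access control store",
                    "security boundary foundation", "standard protocol integration"},
    "Directory B": {"user authentication", "access control store"},
    "Directory C": {"user authentication", "standard protocol integration"},
}

def eliminate(products, requirements):
    """Keep only the products that satisfy every key requirement."""
    return [name for name, caps in products.items()
            if all(req in caps for req in requirements)]

print(eliminate(products, requirements))  # ['Directory A']
```

In practice the matrix will be much larger, and requirements may be weighted rather than pass/fail, but the principle – map first, then eliminate – stays the same.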

  • Key Business & Technology Sponsors
  • A Review Schedule
  • Scope and Boundaries – You have essentially done this already. However, roadmaps are living, breathing documents that should be cyclic in nature, so this data needs to be captured for future review cycles.
  • Implementation Plans
  • Maintenance Plans

Is there more to it than this?  Yes.  Much more.  However, the keys to success with any planning engagement at any level – the ones that are so often forgotten – can be summed up as:

  • Don’t plan, roadmap, purchase, budget, or develop in a box. Information Technology needs to engage the business, and the business needs to engage Information Technology.  Use a consultant to help bridge the gap if needed.
  • Keep an open mind. Be technology agnostic.  Your way may not always be the right way.  Be flexible.
  • Don’t neglect your plan. Keep it up to date.  Review it – with all stakeholders – on a periodic basis.

These apply to all aspects of planning.  Remember, failure to plan is a surefire way to plan to fail.

Boot From SAN? I Don’t Think So…Anymore.

Several years back, when I was a munchkin – okay, not really – there was a new trend called Boot from SAN.  This trend continues today.  I hopped on that bandwagon quickly when UCS was released; the ability to use Service Profiles with Boot from SAN was a brilliant way to recover server environments in a hurry, even automatically, when a server failed. This was High Availability at its finest.

Then, I started thinking about it.  In doing so, I realized that the base reasoning for Boot from SAN, in most environments these days, is no longer necessary.  The ability to abstract the server OS install from the underlying hardware for HA purposes is, in my opinion, the primary reason for Boot from SAN.  Now, if you know me, you will understand that I have stated for a while now that unless you are doing something weird – like using a dongle in your environment – there are very few cases where you cannot virtualize all your systems.  If you can virtualize (the right way), then you have high availability built in – so why do you need it twice?  You don’t.

This leads to my biggest pain point with Boot from SAN: migrations from SAN to SAN.  They are just plain painful and time consuming.  Imagine pulling the toenails out of one foot and moving them to the other foot.  It just does not work.  That is what moving a Boot LUN from SAN to SAN is like, unless you own expensive migration tools and are willing to go through the extra effort and hassle.  Without the tools, you need to:

  1. Build the new SAN
  2. Create new Boot LUNs
  3. Install the OS
  4. Migrate the VMs

Whereas, if you are local disk boot, you just attach the new SAN and VMotion the virtual machines.  Quick, fast, easy-ish.

Now, I will admit that there are a few cases where Boot from SAN may be useful – where you are not running a hypervisor, or not a good one anyway – but if that is the case, perhaps you should rethink that as well.  If you don’t believe me that you can virtualize almost any, if not all, servers in your environment – reach out.  I would be happy to be proven wrong – but I won’t be.


VDI Oh My!

This is a post originally written for the company blog — posted here for posterity.

Have you seen the cost analysis sheets from various entities over the years pointing out how much money you can save with Virtual Desktop Infrastructure (VDI)? In most cases, they’re wrong. But like most things, there are outliers. Today I want to look at VDI and break it down and tell you why you might want to use it – and why you might not. Then we’ll take a look at a few options for VDI, along with their specific advantages and maybe even a few disadvantages thrown in.

Why VDI?

  • Security: I believe that the number one benefit to any organization that VDI brings to the table is security. Security advantages to VDI include:
    • When you abstract the desktop away from the end-user environment, you also have the ability to abstract the data away and into the data center, where you can better manage, back up, and protect that data.
    • When you use VDI, you create a smaller attack surface. It also makes the attack surface easier to patch, update, monitor and audit.
    • Through proper policies, a VDI environment can be centrally controlled and harder to subvert – basically you have the ability to restrict data transfers, unauthorized access, and even revoke unwanted access from miles away. In the simplest terms, you can better control the number one cause of data breaches: people (Source: Baker & Hostetler, LLP. “BakerHostetler 2016 Data Security Incident Response Report”).
  • Application Management: This one may get me in trouble from VDI purists. I tend to look at VDI today as more than just delivering a desktop, and I suspect most consumers do as well. Most major VDI products have the capability to handle application package management, provisioning and access controls. What this allows you to do is maintain a stranglehold on software access and subsequently licensing usage. Licensing costs are HUGE in enterprises, and true-up and/or violation costs can be surprisingly daunting. Avoid them (or get really close) with VDI. It can make a real difference in cost. I won’t tell anyone if you don’t.
  • Availability: When you put your VDI in your data center, you are inherently gaining redundant power, UPS backup, dual connectivity and typically a better hardware class for your VDI infrastructure than you would have with haphazard desktops. Need I say more?
  • Management: Management becomes much easier. While I hinted at it above in the security section, it is worth pointing out that things are easier to manage when you can update a single shared image, application, or host server and roll that out to all your users with the click of a button (or two).

Why Not VDI?

  • Security: If you are looking to invest in VDI and you do not take the time to properly secure the solution, it can be a disadvantage too. Security disadvantages to VDI include:
    • You just allowed all of your users to access their desktops from anywhere…maybe. If you have not properly locked down remote access to the right groups, secured peripheral access, and/or set up security policies, you could be opening some additional risks while eliminating others.
    • When you implement VDI using best practices, your VDI environment will become isolated from your server platforms. If you just throw VDI in without working through proper segregation, you can end up with users in the same network space as the server farms. This is generally not a good thing.
  • Management: It may be easier to manage those desktop images and you won’t need to manually go to desktops as much anymore, but the trade-off is that you’ll likely need a more skilled engineering staff to manage the underlying VDI infrastructure. With the proper staff, training, and/or the right partner (like Sentinel), you can head this off at the pass fairly well.
  • Cost: I don’t deal in money much, but I can tell you that you would be sorely mistaken to think that you will save money with VDI. You may lower either capital or operational expenditures, while increasing the other. The reality is, you are gaining features (security, application management, central management and even controlled costs) while spending the same if not more in some cases. Your mileage will vary.

Which VDI Is Best?

There are two major players in the VDI and published application world: Citrix (XenApp & XenDesktop) and VMware (Horizon/View). Both are fully capable application and desktop delivery platforms. Citrix has the historical install base and decades of experience, but VMware has been making leaps and bounds with a very solid product offering. VMware owns the hypervisor space that most deployments will be installed on, yet there are some bells and whistles in Citrix that advanced VDI deployments may need. The truth is, without sitting down and having a discussion to review your specific needs, no one can tell you which is best. I won’t try here.

Outside of the vendor platform, there is always Desktop-as-a-Service, which is available through Sentinel CloudSelect®.

Bottom Line

The bottom line is this: If you plan it well, implement it on solid technology (check out my previous article on HyperFlex as an example) with the right policies, procedures, and partner, your business and customers will be very happy. Just don’t expect to fill up a piggy bank with the extra savings.

The article here is my opinion; I wrote it.  I work for/with the companies/technologies mentioned here — if you don’t like that, tough.  If you want to learn more about Virtual Desktop Infrastructure (VDI) and determine the best solution for your business, please contact Sentinel; they pay me, and that allows me to keep working on technologies like these and writing these blogs.  If you ask really nicely, you might even be able to work with me.  Never know.  If you really want to help me out, contact me directly — I will get you set up with the right people to help you out.

A Data Center Engineer’s Look At Ransomware Protection

Your worst nightmare just happened.  You have just been presented with a popup telling you that your data is being encrypted and that you have so many days to pay.  It gets worse.  Every day you delay, chunks of your data are deleted.  It happens.  I have been on the outside – helping restore data, remove the threats, salvage business operations, and prevent further damage.  It has given me a unique perspective on ransomware.

We have things like Cisco AMP, a truly robust, state-of-the-art anti-malware system that integrates at every level from the firewall to the endpoint, with telemetry systems and response times beyond anything else in the industry at a crazy fast 13 hours!  However, even Cisco will tell you that it is not possible to rely on prevention alone.  That is why they embed their AMP endpoints into everything – they keep watch for malware that may have slipped through during that 13-hour window.  If you don’t have Cisco AMP, you could have a much larger “window” – an industry average of 100 days.

So, how do you as a data center engineer protect your data when something gets through?  How do you guarantee the ability to recover and restore?  Will you be ready when that popup box shows up?


Backups

This may seem obvious, but backups are critical to recovery.  When your data is encrypted, there is a very strong chance you will not be able to recover it using decryption tools, so having proper backups will help.  So, what is a proper backup?  I like Veeam Backup & Replication, as it has some really handy features that make it, and its backups, a little more resilient to malware.

  • Out of the box, it will back up its own configuration and repository data. If the Veeam Controller is infected, you have the ability to restore the Veeam system quickly.
  • It can replicate itself. In fact, it should if you can.  Having a secondary copy in another location can be handy – especially if you have non-real-time replication scheduled for the Controller.
  • It can back up to a variety of locations, including the cloud, to protect your data from afar.
  • You can back up to deduplication appliances like Dell EMC Data Domain, which use protocols that are not typically prone to malware attacks.

If you don’t have Veeam, you can still achieve some of this, or all of it.  It just may not be as easy.  Either way, multi-location and multi-type backup and/or replication strategies are critical to protecting data.
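That multi-location, multi-type strategy can be expressed as a simple 3-2-1 check (three copies, two media types, one off-site). A hedged sketch – the inventory structure, job names, and media types below are assumptions for illustration, not Veeam’s data model:

```python
# Hypothetical backup inventory; job names, media types, and flags are made up.
backups = [
    {"job": "sql-prod", "media": "disk",  "offsite": False},
    {"job": "sql-prod", "media": "cloud", "offsite": True},
    {"job": "sql-prod", "media": "dedup", "offsite": True},
]

def meets_3_2_1(copies):
    """At least 3 copies, on at least 2 media types, with at least 1 off-site."""
    return (len(copies) >= 3
            and len({c["media"] for c in copies}) >= 2
            and any(c["offsite"] for c in copies))

print(meets_3_2_1(backups))  # True for this inventory
```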

SAN Snapshots

How can snapshots help?  If you are using a snapshot-capable EMC Unity or Nimble Storage array, you can configure snapshots with retention schedules that allow you to quickly roll back to a point in time – BEFORE the malware got into the system.  You might lose a few minutes or an hour of data, but it is much better than the alternative.  If your SAN supports snapshots and you are not using them, set them up as soon as you can.
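The rollback logic itself is simple: pick the newest snapshot taken before the infection. The catalog below is hypothetical – real arrays expose this through their own management interfaces – but it shows the point-in-time reasoning:

```python
from datetime import datetime, timedelta

# Hypothetical snapshot catalog: one snapshot every 15 minutes.
now = datetime(2017, 6, 1, 12, 0)
snapshots = [now - timedelta(minutes=15 * i) for i in range(8)]

def rollback_target(snapshots, infected_at):
    """Newest snapshot taken strictly before the malware arrived."""
    clean = [s for s in snapshots if s < infected_at]
    return max(clean) if clean else None

# Malware hit 20 minutes ago; we lose at most one 15-minute interval of data.
print(rollback_target(snapshots, now - timedelta(minutes=20)))  # the 11:30 snapshot
```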


Patching

Patching is crucial.  Without patching, you are more vulnerable to ransomware and malware attacks.  Security patches for software are typically free – a no-cost or low-cost measure to protect your systems, yet they are so often overlooked.  Why?  Usually it is the lack of a management system or the personnel to perform the patching.  This can be addressed with tools like WSUS, Microsoft System Center, or VMware Update Manager.  Use them.
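Even without a full management system, a scheduled report of patch age goes a long way. A minimal sketch, assuming you keep (or can export) a last-patched date per host – the hostnames and dates here are invented:

```python
from datetime import date

# Hypothetical inventory of last successful patch dates per host.
last_patched = {
    "web01":  date(2017, 5, 20),
    "db01":   date(2017, 1, 3),
    "file01": date(2017, 4, 28),
}

def overdue(inventory, today, max_age_days=30):
    """Flag hosts whose last patch is older than the policy window."""
    return sorted(host for host, patched in inventory.items()
                  if (today - patched).days > max_age_days)

print(overdue(last_patched, date(2017, 6, 1)))  # ['db01', 'file01']
```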

Control System Access

This is about more than making sure employees have passwords – it is often about protecting employees from themselves.  You can invest in, or make use of, several of the following options to restrict access to data – because data access is how ransomware propagates in general.

  1. Grant read-only access. If ransomware can’t write – it can’t encrypt.  Write access should be only as necessary.  This applies to databases, file systems, servers, Active Directory, etc.  You can use things like Microsoft Identity Manager to help control and automate that access.
  2. Use VDI to your advantage. Create an air-gap of sorts between your end user’s local systems and the critical systems.  Lock down folder redirection and USB redirection.  Both Citrix XenDesktop and VMware Horizon with View apply here.
  3. Use Group Policies to lock systems down. Is allowing users to set a screen background worth losing all your data?
  4. Use things like Cisco ISE with posturing to ensure that only secure systems connect to the network.
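Point 1 is auditable. On a POSIX file server, for instance, you can walk a tree and flag anything any user can write to, which is exactly where ransomware running under any account could encrypt. A sketch – the demo builds a throwaway directory, and on Windows you would inspect NTFS ACLs instead of mode bits:

```python
import os
import stat
import tempfile

def world_writable(root):
    """Return paths under root that carry the other-write permission bit."""
    hits = []
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            path = os.path.join(dirpath, name)
            if os.stat(path).st_mode & stat.S_IWOTH:
                hits.append(path)
    return hits

# Demo against a throwaway tree; file names are illustrative.
with tempfile.TemporaryDirectory() as root:
    loose = os.path.join(root, "open.txt")
    with open(loose, "w") as f:
        f.write("data")
    os.chmod(loose, 0o666)    # world-writable: should be flagged
    locked = os.path.join(root, "locked.txt")
    with open(locked, "w") as f:
        f.write("data")
    os.chmod(locked, 0o640)   # read-only for others: ignored
    hits = world_writable(root)

print(hits)  # only the world-writable file appears
```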

You will be attacked.  That attack may not breach your systems – or it might.  Don’t say I did not warn you.  Protect your data.  Thank me later.

For the record, I work a lot with Microsoft, EMC Unity, VMware, Cisco, VCE, EMC Data Domain, Veeam, and the other technologies here.  They are what I know best, so if you feel I am biased — well, I am.  They are what I know best!

Musings On Employment

I love where I work.  I have liked my job before.  I have liked the people I work with before.  I have hated where I worked before.  I have been way underpaid before.  I have been laid off before.  I have been on all sides of the spectrum.  I have never been able to say that I love where I work — until now.  Many times I have mused over why.  I can’t be certain that I speak for everyone — not everyone can thrive in the same work environments — but I can say that there is a distinct set of differences that add up to make my current employer the best place I have ever worked.  If you want to learn a few lessons from them, here are some take-away thoughts:

  • Challenge: The work is very challenging at times.  We get to work on all kinds of technologies, at every skill level.  We are constantly dabbling in a variety of technologies, with a variety of customer types and people.  I have the type of mind that enjoys being tried and challenged — this works for me.  Employees thrive with challenge in my experience.
  • Opportunity:  There is always opportunity to participate in company projects, to help lead change, and to dig in on projects and problems that arise.  I can choose how much — or how little — I volunteer for.  Make those options available, and keep your doors open — literally and figuratively.
  • Work/Life Balance:  I know.  You are thinking that most companies claim to offer this.  Well, I have been at some of those companies and I can tell you, I understand — most don’t.  What I do know is that currently, and for the last several years, I work.  We all do.  However, the company consciously makes efforts to ensure that we are not overworked, that we can take time when we must, and that we are able to put family on the pedestal they deserve.  There is a balance, and while we may not get every request we ask for, the management I deal with tries exceptionally hard to make sure we do.
  • Fairness:  The management team where I work, and who I report up to, have a habit of being fair — and trying hard to be.  Be fair to the company and the company will be fair to you.  Make that your motto.
  • People:  This is the most important one to me.  Everyone in the office has mutual respect for one another.  I think this comes from having some of the best and brightest minds on staff.  When you earn the respect, instead of demand it, it just makes for better relationships.  Learn to make sure the people that work for you are smarter than you are, that you foster teamwork and collaboration, that you allow a little play, and you will be in a great place.
  • Training:  This goes without saying.  Don’t promise it and never deliver.  Take a page out of my employer’s book and deliver on it.  It does not have to be formal training, but there should be some.  Balance on the job training, shadowing, collaboration, video training, classroom, and self-study.  It works.
  • Benefits:  Don’t pinch pennies here.  Ever.

Ultimately, I find that if you keep evaluating yourself (and the company) and look to be better, you will be.  My current employer has earned my service, and I hope to serve them well into the future.  Help your employees feel the same way.  If you don’t, they might just end up working for my employer.

HyperFlex: An Enhanced Look

This is a post originally written for the company blog — posted here for posterity.

In the IT industry, the phrase “we are pretty much a 100% physical shop” is one that you dread to hear – especially from a fast-growing company. Such was the case with a leader in the financial services industry recently when they asked Sentinel to install a Virtual Desktop Infrastructure (VDI) solution for a new call center rollout of around 250 desktops as well as fully re-deploy their physical desktop and server infrastructures. They were pretty set on a hyper-converged solution and were looking for something scalable and easy to manage. To be successful, in the eyes of the business, the solution had to:

  1. Be solid. With internal hesitation toward virtualization from the business, there had to be reliability.
  2. Be fast to deploy. To meet the aggressive deadlines, there could be zero delay on delivery or deployment.
  3. Be lightning fast. To aid in business buy-in and adoption, the solution had to deliver a better end-user experience than the current desktops. Performance was critical to that.

After reviewing the vendor options, the customer ultimately chose Cisco HyperFlex and VMware Horizon for their hyper-converged VDI solution. Aggressive deployment timelines were set and equipment was on the way. From there we moved onto the fun stuff.

The HyperFlex cluster was delivered quickly. Really quickly. Once the gear was on-site it was time to deploy. Before we go there, I want to touch on one particular aspect of the solution. Sentinel knows that maintaining data integrity and availability is essential to our customers as they adopt and adapt to new technology. How the Cisco HyperFlex solution delivers that can be summed up pretty easily:

  • The Cisco HyperFlex product line is a variant of the Unified Computing System (UCS) product line, and with that you have the full redundant design of dual fabric interconnects, full multi-pathing, and server hardware that is designed with zero single point of failure. In this particular deployment, we had four nodes (N+1) with dual fabric interconnects, and two 10Gb paths from each of the HX240c nodes. Everything also ran on fully redundant power. It was a strong platform to begin from.
  • The SpringPath HALO Architecture is a file system – I am simplifying things here a bit – that allows for distribution of writes onto multiple solid-state drives (SSDs) across multiple nodes BEFORE acknowledging the writes. This maintains the data integrity by ensuring that there are multiple copies of the data on separate nodes in the cluster to prevent potential data loss.
  • The HALO Architecture enhances the data integrity by using a Log Structured Distributed Object Store to allocate the data as small objects across multiple servers in a sequential pattern, which are in turn replicated to other pool members to achieve data redundancy. By doing so, they increase not only performance, but the life of the flash layer disk in the servers as well as redundancy overall.
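A toy model helps show why write-before-acknowledge matters. This is not SpringPath’s code – just a sketch of the principle that an acknowledgment only returns once multiple nodes hold the data, so losing one node cannot lose an acknowledged write:

```python
# Toy replicated store: an ack means the data already lives on N nodes.
class Cluster:
    def __init__(self, nodes, replicas=2):
        self.nodes = {n: {} for n in nodes}   # node name -> key/value store
        self.replicas = replicas

    def write(self, key, value):
        """Persist to `replicas` distinct nodes, then acknowledge."""
        names = sorted(self.nodes)
        start = sum(map(ord, key)) % len(names)   # simple deterministic placement
        targets = [names[(start + i) % len(names)] for i in range(self.replicas)]
        for n in targets:                         # every copy lands before the ack
            self.nodes[n][key] = value
        return targets                            # the "ack": where the copies live

    def survives_loss_of(self, node, key):
        """True if some surviving node still holds a copy of the data."""
        return any(key in kv for n, kv in self.nodes.items() if n != node)

cluster = Cluster(["hx1", "hx2", "hx3", "hx4"], replicas=2)
placed = cluster.write("block-7", b"\x00" * 4096)
print(placed, cluster.survives_loss_of(placed[0], "block-7"))
```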

Back to the deployment. In a post on my personal blog, I mentioned that the HyperFlex deployment was pretty fast. Once you rack and cable the cluster, the HX installer is a breeze. What I love about the HX installer is the fact that it really does build the entire UCS deployment and makes adding a node to an existing cluster just as easy. Click. Click. Done. Overall, the deployment of the HX system after rack and cable took less time than installing the vCenter server that was required for the deployment (Note: The vCenter must be on separate hardware but can be moved into the HyperFlex cluster for ongoing operations).

After meeting the first two objectives, we needed to look at the speed. Since this was a VDI cluster, we made one small change (one line in a configuration file) to optimize the cluster’s L3 Cache for a read-heavy environment. Once that small change was made, it was time to run some tests. Since Sentinel doesn’t own the environment I will only include the following observations:

  • During testing of the 4-node cluster with 4 VMs pushing I/O, the cluster achieved well over 125,000 IOPS. Even in the worst-case boot storm of 250 users logging in within a one-minute period, you would only really require 117,500 IOPS, leaving plenty of room to spare. Keep in mind, this was not done in a controlled lab under ideal circumstances.
  • I was able to clone a 100GB (65 Used Thin) VM from template in less than three seconds. Seriously.
  • I deployed 250 linked clone desktops including two boots, customization, and domain join in under seven minutes. The bottleneck was the VDI limit on the maximum concurrent operations sent to vCenter (which I tweaked to 25) and probably the Active Directory domain join tasks as part of the customization. It was fun watching the vCenter task pane roll by so fast I couldn’t keep up with it.
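The boot-storm arithmetic above is worth making explicit. The per-desktop figure of 470 IOPS is inferred from the numbers in the testing (117,500 / 250); treat it as a sizing assumption, not a published benchmark:

```python
# Boot-storm sizing check using the figures from the testing above.
desktops = 250
iops_per_booting_desktop = 470      # inferred: 117,500 / 250 (assumption)
cluster_capacity = 125_000          # observed cluster IOPS during testing

required = desktops * iops_per_booting_desktop
headroom = cluster_capacity - required
print(required, headroom)  # 117500 required, 7500 to spare
```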

The customer was extremely happy with the performance, scalability and easy management of their new infrastructure. The Cisco HyperFlex and VMware Horizon solution met the requirements so well that I better understand the hype around Cisco HyperFlex and the SpringPath HALO Architecture.

Of further interest in terms of scalability comes confirmation from Cisco that node capacity expansion beyond the current self-imposed limitation is in the works and will not be limited to hardware. External storage is also fully supported. This means you will have the capability to hyper-converge your core systems and still make use of external storage area networks (SAN) where business needs dictate.

All in all, HyperFlex is a rock solid platform with a fantastic and robust architecture that you would be wise to evaluate. Couple it with VMware Horizon for desktop deployment, and you have an infrastructure built to help your business achieve unprecedented levels of success. If you would like to learn more about HyperFlex or other converged/hyper-converged infrastructure solutions, please contact Sentinel for more information.

SAN Based Snapshots & P2V Conversion Failure

This is a post moved over from my old blog — still relevant.

I love P2V.  It works like a charm — except when it doesn’t…

When it fails, it usually fails in the strangest ways possible: the errors are obtuse, and finding the underlying cause is a nightmare.  I ran into an issue where the P2V would fail, referencing an error with the snapshot and/or the disk ID.  I assumed this was related to the source VSS snapshot that is taken during the P2V process.  After spending several hours tracking that down, and realizing I was barking up the wrong tree, I started looking at the destination.  There was the problem: a snapshot on the target machine.  I was pretty sure that the P2V process did not use snapshots on the target — I mean, why would it?  Time to look elsewhere.

Well, it turns out that elsewhere was the SAN housing the storage for the target datastore.  The array, in this particular case a Nimble SAN, has a feature where you can quiesce the VMs on a LUN using native VMware snapshots to allow for better point-in-time recovery options with the SAN-based protection.  If this is on, it tries to take a snapshot of every VM on the target LUN.  Now, if you are converting a small VM that will finish between snapshot windows, there is no issue.  If it is a larger machine — turn that feature off during the P2V window and save yourself the headache.

SharePoint Large File Library Via Windows Explorer – Error

This is a post moved over from my old blog — still relevant.

Recently, I had the opportunity to look at an issue with accessing SharePoint file libraries through Windows Explorer UNC shares.  When those file libraries contain HUGE numbers of files, the client will hang for upwards of five minutes and then error out with the following:

“[\\UNCLocation\] is not accessible. You might not have permission to use this network resource. Contact the administrator of this server to find out if you have access permissions.  A device attached to the system is not functioning.”

Frustrating right?

Before I get to the fix, let’s review the why.  When you open a folder on a Windows system (remote or local), the system has to do a few things:

  1. Obtain a listing of all the objects in the folder.
  2. Pull the attributes for every file.
  3. Display the files.

Well, in this case, the number of files pushes the limit of the attributes that the Windows system can load at one time due to restrictions put in place to prevent Denial of Service attacks on WebDAV Clients.  This can also happen when you are downloading VERY LARGE single files due to the same type of restrictions.
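A rough way to see when you will hit the wall: multiply the file count by the per-file property payload and compare it to the limit. The ~1 KB-per-file figure is an assumption for illustration; the 1,000,000-byte figure is the WebClient default for FileAttributesLimitInBytes:

```python
# Back-of-the-envelope check against the WebClient attribute limit.
DEFAULT_LIMIT = 1_000_000    # FileAttributesLimitInBytes default (bytes)
RAISED_LIMIT = 20_000_000    # the value the registry fix sets

def will_error(file_count, bytes_per_file=1_000, limit=DEFAULT_LIMIT):
    """True when a folder's combined WebDAV property data exceeds the limit."""
    return file_count * bytes_per_file > limit

print(will_error(5_000))                      # True: ~5 MB of attribute data
print(will_error(5_000, limit=RAISED_LIMIT))  # False once the limit is raised
```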

The Fix:

For Large File Libraries:

  1. Open Regedit & Go Here: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\WebClient\Parameters\
  2. Edit the “FileAttributesLimitInBytes” value from 1000000 to 20000000

For Opening Large Files:

  1. Open Regedit & Go Here: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\WebClient\Parameters\
  2. Edit the “FileSizeLimitInBytes” value to anything larger than the file you intend to download.  The default value is 50000000.

Cooking Up A Data Center — With A Salmon Recipe To Boot

Technology is an art.  If you use the wrong cables, servers, storage, switches, routers, etc. you can be sure to have a data center that puts a bad taste in your mouth.  You can achieve that bad taste by using poor quality ingredients, by assembling good or bad ingredients in the wrong way, or by designing a great system – but for the wrong purpose.  To illustrate that, let’s take a look at another art — cooking.

Cooking is the preparation of a fantastic meal through the perfect blending of raw ingredients, spices, heat, and cold.  It is also necessary to know that if you are cooking Asian food as opposed to Mexican food, you are generally not going to use cayenne pepper or jalapeños.  Let’s take this to the extreme and cook up a data center, shall we?

Ingredients:

  • 2 Fresh Wild Caught Salmon Fillets (Skin On One Side)
    • AKA: EMC Storage Array & Cisco UCS Servers
  • 2 Cedar Planks – Soaked in Water for 2 Hours & 2 Cedar Wraps W/Ties
    • AKA: Proper 10Gb & FC Network Cabling
  • ¼ Cup Soy Sauce
    • AKA: Solid Up Line Core Network — Cisco 4500-X Switches
  • ¼ Cup Brown Sugar
    • AKA: Solid Storage Area Network – Cisco MDS Fiber Channel Switches
  • 2 Tbsp Sake
    • AKA: Solid & Stable Power
  • Salt & Black Pepper
    • AKA: A Proper, Well Tested Hypervisor Platform – VMware vSphere Baby
  • 1 Tbsp Minced Garlic
    • AKA: Proper Cable Management & Velcro
  • 1 tsp Lemon Juice
    • AKA: A Quality Security Infrastructure – Cisco ASA, FirePOWER, ISE
  • 1 BBQ Grill
    • AKA: Cisco Nexus Data Center Grade Switches


Now, you can go cheap, with less tried-and-true, potentially cheaper solutions – even stuff from the new kids on the block.  But what are you risking?  When you use an oven instead of a grill, you lose the smoked goodness it brings to the dish.  When you skip the lemon juice, it leaves your mouth desiring something more – if you do that with your security, are you leaving a gaping hole in your environment?  Skip the brown sugar and you have a tart dish that won’t move from plate to mouth very fast – kind of like what happens when you skimp on a good fiber network and run iSCSI over your core network instead.

The point is, you have to use quality, tried-and-true ingredients, and mix them in the right proportions to ensure you end up with a data center dish that truly shines.  Sure, there are other brands out there besides Cisco, EMC, & VMware that make good products – okay, not sure you can beat VMware on the hypervisor front – but these are what I know work well most of the time; and when they don’t, I have the knowledge and experience to get the taste back in balance.  Go forth – data center well – and enjoy the fruits of your labor for the next three to five years.  Do it wrong, and you will be making another dish sooner than you like.


For those of you that want to, here is the rest of the recipe:

  1. Preheat your BBQ grill to 350°F (Medium).
  2. Mix soy sauce, brown sugar, sake, garlic, and lemon juice in a bowl, set aside.
  3. Place salmon skin side down on a cedar wrap and lightly dust with salt and pepper.
  4. Place the cedar wrap on a cedar plank. Tie the wrap loosely around the salmon.
  5. Place the plank directly on the grill and BBQ for 12-15 minutes – covered.
  6. While cooking, use a spoon to generously cover the salmon with the sauce mixture. This should be done two or three times during cooking, to build up a nice glaze.
  7. The salmon will flake with a fork when ready.
  8. Eat & Enjoy.


Just like a good data center, this dish is sure to be mouthwatering!