The Challenge of DevOps

Disclaimer: I am not an expert in this area; this is just a blog about the difficulties and conflicts created in the search for DevOps. I will go through some fallacies that make each group look bad, so stick with it if you can. I really would like to start conversations where there weren't any.

The Phoenix Project, one of the greatest books of this IT age, gives a good picture of how Infrastructure and Development work with each other. However, there are a plethora of definitions for "DevOps". How do you take a company that has existed just fine with two (if not more) warring parties and get them not only to work with each other, but to help each other?

Fallacy 1: DevOps is Infrastructure working for Developers

This was my first collision with the term. The idea here, again, is basic: Infrastructure and its core individuals exist to give a foundation and support to the developers, and nothing more. A good example of this is a simple ticket request for a new server with hardware specifics. Any infrastructure worker worth his salt would look at those requirements and ask "why?", but under this version of DevOps the answer is "why not?". Obviously I'm taking things to an extreme here, but I have seen too many server requests with extreme hardware requirements (really… a virtual machine with 64 GB of RAM?) and the inevitable fight that ensues in response to that three-letter question: "why?" There are so many other examples I've seen, from ridiculous resource adjustments to the blatant opening of security holes. Obviously this happens in varying environments, but the idea is the same.

This expands past Infrastructure into Operations. For example, when there is a "priority one" ticket and everyone jumps on the call, the first phrase is, "If Operations did their job and fixed this, we wouldn't have to deal with it. I don't even know why we are here." I've heard this stated multiple times for multiple reasons.

Like all fallacies, there is truth here. Developers are the reason companies make money. Unless your company is living in the past, devs work daily to create and stabilize the products that generate the money for the rest of IT. The greatest example of this is Uber. Of course the top taxi company in the nation is an app; without it there is no real company at all.

For IT workers, that leads to a mindset of denial. Not that they're in denial, but that they deny everything, which brings us to the next fallacy.

Fallacy 2: Developers should come to heel for Infrastructure

So ok, here we go. This is so prevalent in my career that I can't even pick out specific instances, because I hear something along these lines almost daily.

A developer asks to monitor or help maintain resources on their machine: "No". A developer asks to create snapshots for a code roll: "No". A developer asks to be local admin on a box: "No". A developer asks for a new dev environment that mirrors production: "No". And so on.

It's amazing, and a marvel to me, that people can just shut down developers for minimal reasons, and yet this is the normal situation I find myself in. I see a ticket, and instead of investigation or adjustment, the immediate response I hear is either "No", or "We can't do that", or "That's not our problem".

These are just a few examples, and I'm sure everyone could think of their own. One thing that has really stirred this up and made my brain hurt is containers. Developers ask for a container solution from Infrastructure, and they get a big fat, "Why? It's in the OS, so you figure it out." This is a very frustrating stalemate.

This pushes to an extreme called "Shadow IT". Shadow IT is basically this: if Infrastructure won't grant the needed support or help getting things off the ground, Dev will use their own budget and spin up an entire instance in AWS, GCP, Azure, or a basic private cloud. Just a license for VMware Workstation can create Shadow IT; it's that easy (don't get any ideas). I once heard a developer talking about the public cloud say, "It makes me happy knowing no infrastructure people are touching my boxes."

This fallacy again comes from a smidge of truth. Development doesn't know everything about infrastructure. Infrastructure spends all its time on resource management, monitoring, and adjustments, trying to keep the environment running in the best way possible, so they should be the main contributors to the how/what/why of infrastructure.

Fallacy 3: Security CONTROLS ALL

With ransomware and WannaCry still being buzzwords, this is definitely a big deal. Security is extremely needed in this day and age, from blocking things like DNS floods to removing patch vulnerabilities. There are countless reasons to keep security up to date. Where does this come in for Infrastructure or Development? Well, Security is the uber-deny group. Almost every security individual I've met has said something along the lines of, "My job is to make sure we don't do something stupid." I feel like the answer to this is both "yes" and "no"; I'm sure people do stupid stuff all the time that has nothing to do with security. The proper statement would be something like, "My position is to rectify security vulnerabilities throughout the stack." Or something like that.

Security will try to block where networks are, how they are setup, where infrastructure is placed, and the list goes on. Are there legitimate reasons? Of course! Security is the necessary part of IT that helps keep things in line, away from prying eyes, and malicious intent.

The Common Factor

All of these fallacies are connected to an amount of truth. They all start when that truth is twisted and exploited by individuals to make their position greater than it is.

It’s like the problem with DevOps is even more internal than IT. The problem with DevOps is us.



I love this quote so much, because at its core it grasps the internal hierarchy of DevOps. It all starts with people.

Obviously, each group mentioned is run by people. These people are all shaped by their own past experiences and troubles. Like iron, those experiences and troubles have made us stronger and sharper; but like the fingers of a string musician, we become calloused, and those calluses shape the solutions we generate in both good and bad ways.

There is also the problem of ego. My favorite statement in emergency meetings is, "Everyone check your ego at the door. It has no place here." If only we were able to do that and bring just the strengths of each department, not the calloused overreaching, where would we go?

Finally, there is the problem of listening and understanding.


I know this picture strikes a chord for all of us. How many of us have listened to what seemed like the dumbest idea and kept our mouths shut? DevOps starts with listening, and by listening I mean the whole management stack. Is this easy to do? Absolutely not; in fact, it's extremely hard to listen for nuggets of truth in a whirlwind of ideas. But that's what we need to do. To start, the greatest thing we can do is just listen, and if you don't understand, ask the dumb questions that everyone else is too proud to ask. You are probably not the only one with that same question.

Finally: People… Again

End users… the concept is lost by all groups now and again, but everyone in IT works for end users. The hardest concept to grasp, and the easiest road to DevOps, is how to create a better solution for them. Developing a new patch, a more secure dev infrastructure, or a new storage solution all has a DIRECT impact on an end user, and each group works individually to create a solution for them. Now, how to mix them all together? A good start is a CI/CD pipeline: an automated, secured solution that lets developers run continuous delivery. It lives on infrastructure, it has to run with the best HA and stability, and it has to be secured. This is a great solution that involves all the groups together. There is so, so much more, but that's the journey we are all on.

Call me "optimistic", or "crazy", or "nuts", or "dangerous", but I believe this is the future of our industry. With the Kubernetes, containers, and CI/CD buzzphrases, and the dominance of public cloud, the old standards need to be replaced with the promise of DevOps. Now, how will you do it?

Things I learned this week!

  1. Error 500 in vCenter when deploying an OVF: verify that your vCenter certificate is trusted. On a Mac, download the certificate, then trust it by selecting "Get Info" and setting it to "Always Trust". Needed this for Lifecycle Manager and the NSX deployment.
  2. Install PowerCLI on da Mac (includes Homebrew and PowerShell Core).
  3. Initial setup of vLCM.
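For reference, the Mac side of item 1 can also be done from Terminal. A hedged sketch with a made-up hostname; `security add-trusted-cert` is the CLI equivalent of setting "Always Trust" in Keychain Access:

```shell
# Pull the vCenter certificate (hostname is an example, not a real host)
openssl s_client -connect vcenter.lab.local:443 </dev/null 2>/dev/null \
  | openssl x509 -outform PEM -out vcenter.pem
# Trust it system-wide, same effect as "Get Info" -> "Always Trust"
sudo security add-trusted-cert -d -r trustRoot \
  -k /Library/Keychains/System.keychain vcenter.pem
```

Either way, once the certificate is trusted the OVF deployment stops throwing the 500.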

More HomeLab Pitfalls

This is going to be another one of those blogs about the pitfalls I've run into. I really want to get into how to fix specific errors and troubleshoot them. However, I'm trying my hardest to get this lab set up so I can dig into the new releases that came out a couple weeks ago, so pitfalls it is.

VUM – Update Manager

So if you're like me, you know that the first thing you do when deploying anything new is update it. So why not update your hosts? Well, here are some findings I ran across with my 3-node vSAN cluster.

First, DNS. I know the statement, "99% of all IT issues are DNS issues." But it's true! In my case I built a standalone DC with DNS before I deployed my vCenter, so that I could use my A records to resolve the traffic. This worked great for the vCenter. It didn't work great for VUM. It turns out that when you have a single DNS DC, it must be listed in the first DNS server field on your ESXi hosts. If it's in the second field, VUM will not be able to scan the nodes for compliance or reach them. (If you set up vSAN, you would already see errors showing it can't scan during its vSAN health checks.)

Second, the iSCSI controller: if it's VMware certified, why do we have to deal with the warning before we can run the updates? Well, because it really isn't… There is a great blog on it to look through. Pretty simple fix, but something that should just work, right?
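In my case the cure was reordering the host's DNS entries. Here is a hedged sketch of doing that from the ESXi CLI instead of the host client; both addresses are examples, not my actual lab config:

```shell
# See the current resolver order on the host
esxcli network ip dns server list
# If a public resolver sits in front of the lone DC, move it behind the DC:
esxcli network ip dns server remove --server=8.8.8.8   # example public resolver
esxcli network ip dns server add --server=8.8.8.8      # re-added, now after the DC
esxcli network ip dns server list                      # DC should be first now
```

Re-adding a server appends it to the end of the list, which is what pushes the DC's entry into the first slot.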

The whole Network thing.

I was pretty sure that my decision to run two subnets with the ability to cross-talk was not a bad idea: one for the home lab and one for the house would make a stable environment. Nope… For those out there about to start, I'd say do your homework, or start with your home networking solution first. Ubiquiti has a ton of great solutions to look into and a lot of blogs out there to help set you up. There are, of course, a lot of other options that do what you may want, but try to be as specific as possible and get things set up before your lab devices come in.

For me, I went from one Wi-Fi router with zero ability to create subnets, to a Wi-Fi router and an ER-X from Ubiquiti, to the full Ubiquiti setup.

However, I found out the best practice is to set up gateway -> router -> AP. Because I'm missing that middle piece, my AP is running at a third of the speed of my gateway (i.e., hardlined into the AP I get 450 Mb down; on the AP itself it's roughly a third of that). Obviously the speed just isn't where it should be in this configuration. Once I get my ER-X I'll update as to what the changes look like. For now it's actually stable, so I can't complain. But to state it again: plan your goals, pre-define the architecture, research and verify the solutions, then implement. It's a lot more stressful when it's your $$ and you don't have support available on the phone.

Quick pitfalls

  1. Don't put your vCenter on your vSAN deployment. It "should" keep working if the vCenter crashes, but it's not easy to get back. I found that in my configuration it was actually faster to just rebuild… and that's not too fun.
  2. Remember to get your vCenter off your switch before you disconnect your uplinks… U2 actually made it so that if it fails, it falls back. Not that I've done it on both updates or anything…
  3. When troubleshooting networking issues, having a centralized location for logging makes your life so much easier. Ubiquiti gave me that.
  4. The vROps deployment that's now built into vCenter will only deploy a thick-provisioned VM. This can be annoying when trying to move it off and get it to a thin-provisioned VM for vSAN.

It's sad that all this stuff came about over months of dragging myself through the mire, but now that I'm stable, I hope to start getting into things soon.

I'm thinking of cutting my blogs into shorter, quicker, more technically focused posts. It won't be that crazy, but trying to find the best and newest stuff out there is starting to slow me down in terms of just getting content out. I really want this content to support people and help them in their IT journeys with VMware products. I know I've come a long way because of others.


HomeLab Rookie – Networking Mis-steps-stakes

Going along with the last post about how I am really not great with the administration of vSphere, or the setup (last time was 5.5), it's time to look at the more fundamental stuff, and how bad I am with the network component.

Addendum: I hope these posts help someone out there grow. I know I'm growing in leaps and bounds as I learn through doing.

My goal was to create two subnets, one for home and one for the lab. I want these to be open to each other to an extent (L2) but still be stable (still working on that part).


So I decided to grab some Ubiquiti networking pieces to start. I grabbed the Ubiquiti EdgeSwitch to go along with my Netgear Nighthawk router. I was looking for VLAN capabilities, and my goal was to set up the subnets on the router and then pass them through on the switch.

Learning point 1: VLANs

It's worth pointing out that "VLANs" on a product's feature list doesn't necessarily mean a separate subnet per VLAN. This bit me in the butt for a while, because I created one basic subnet and then tried to create a VLAN with a different subnet… no bueno. In fact, when I created VLANs on the router, the whole thing crashed. However, I found that the Ubiquiti switch I had was a dream to work with (after I updated the firmware). But again: the switch can pass VLANs through, it just can't put each one on its own subnet.

Learning point 2: Devices

For anyone looking into doing this, it's worth checking with your ISP to see if more speed is needed. I found out that I was more than doubling my devices and would need to look at my speed usage. It may not be an issue, but for me, a small fee doubled my speed. So sure! I have a lot of OVAs to download anyway 😉

Learning point 3: Unified Management

So there I was, swapping from Netgear to Ubiquiti and back. Finally I gave in and bought the Ubiquiti EdgeRouter. I went with this one because the price point didn't faze me and the functionality looked tremendous. Well, I learned how much this thing could do. I literally love this little box that could. It does the subnet-per-VLAN setup that I wanted, as well as DHCP servers for both subnets. I went with the WAN+2LAN2 configuration and set my home to a 192 subnet and my lab to a 10. Oh man, I love this thing.
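For anyone wondering what "two subnets plus a DHCP server for each" looks like on this box, here is a rough EdgeOS sketch. The interface names and addresses are examples, not my actual config:

```shell
configure
# home network on eth1, lab on eth2 (example interfaces and ranges)
set interfaces ethernet eth1 address 192.168.1.1/24
set interfaces ethernet eth2 address 10.0.0.1/24
# one DHCP scope per subnet
set service dhcp-server shared-network-name HOME subnet 192.168.1.0/24 default-router 192.168.1.1
set service dhcp-server shared-network-name HOME subnet 192.168.1.0/24 start 192.168.1.100 stop 192.168.1.199
set service dhcp-server shared-network-name LAB subnet 10.0.0.0/24 default-router 10.0.0.1
set service dhcp-server shared-network-name LAB subnet 10.0.0.0/24 start 10.0.0.100 stop 10.0.0.199
commit ; save
```

Firewall rules between the two subnets come on top of this, which is where most of my remaining tinkering lives.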

The firewall, services, NAT: there is just so much for me to learn in this tiny little box. Once I got it set up, I changed my Netgear into an access point and set the ISP connection to static from the router. One thing about the Ubiquiti EdgeRouter, though: these small boxes use a bigger plug, so they take up two to three spots on the UPS. Which leads me to the next point.

Learning point 4: Power

If you saw where I am going, it gets better. Every hour or so my whole network would just crash. I'd lose both my lab and home networks, and it caused some severe anger in my brain (I think I have a couple extra knots in my back from it). I went through SOOO many settings trying to figure out what it was. I reset the firewall settings (which wasn't easy, considering all I'd done before was Windows Firewall). I set specific VLAN subnets and reset. I set up port forwarding even when I couldn't figure out why I needed to. This went on for about a month (which is also why I have been slacking on posts); I just couldn't move forward with an unstable lab. Yesterday I was at the end of my tether, so I troubleshot each device one at a time, my anger boiling with each one. Finally I found that the little box that could was the culprit: it would crash and everything would just die. So I pulled it out to RMA it for another one, or for the gateway (as I hear good things about it). When lo and behold, I realized it… I had plugged the central router into a crappy extension cord. *Le sigh*. Just… no… If you use a UPS like me and find you're missing ports… get these.


I can't express how much I've grown doing these things. I've figured out so much, and I've learned more about architecture in this past month than in the past couple of years.

IT is so segregated right now that we lose sight of the fact that each piece has to be troubleshot differently, and that's really hard, especially for a rookie trying to keep swapping gears. I've learned from the pure Windows standpoint, then PowerShell automation, then vRA; I've never been allowed to play with the other parts. But with this lab, I'm getting to. If you're on the fence about whether a home lab is worth it… it is. Even in a corporate lab, I still wouldn't learn this much. However, if you're not interested in the whole stack, why deal with the trouble, right? (And it is trouble.)


This week vRA 7.6 was released with some EXTREMELY needed updates to Orchestrator; vROps 7.5 was also released, plus ESXi 6.7 U2. Get to downloading and updating, folks! Now go break stuff, and learn how to fix it.


A Good Adjustment

I'm busy working on the homelab, trying my best to duplicate a homelabber and failing miserably. More information will be coming on that later.

For now, I found a great KB that needs some sharing! VMware has been known for some great pointers to fix issues, and this one fell into my lap from an issue I was seeing.

The Problem

Every vRealize Automation environment is different, so let me be straight: this change will only help vRO extensibility actions and workflows. For me it was a good improvement for the vRO XaaS workflows that I had published.

We were seeing timeouts and "Form not found" errors when trying to open workflows that had actions pulling specific information (AD, vSphere, etc.). Because of this, the workflows were in the tank, and sometimes even IaaS deployments would return an error 400.

The KB can be found here:

The Steps:

In Embedded vRealize Orchestrator Server:
  1. Open the /usr/lib/vco/app-server/bin/ file using a text editor.
  2. Modify the memory by setting the Xmx and Xms values to the MB value required. For example:

    2.5 GB of memory is allocated to each of Xmx and Xms (this is the default setting):

    JVM_OPTS="$JVM_OPTS -Xmx2560m -Xms2560m -Xmn896m -XX:MetaspaceSize=512m -XX:MaxMetaspaceSize=1024m -Xss256k"

  3. Edit the /etc/vr/memory-custom file using a text editor.
  4. Add this entry: add_service_mem vco-server *NUMBER*

    Note: *NUMBER* is equal to the sum of -Xmx and -XX:MetaspaceSize as configured in step 2. The memory is in MB.

  5. Stop the vRealize Appliance and increase/decrease its memory to match the increased/decreased memory of vRealize Orchestrator.
  6. Start the vRealize Appliance.
  7. Repeat steps 1 to 6 on the rest of the nodes in the cluster.
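To make the note in step 4 concrete, here is the arithmetic with the default values shown above (a worked example, not part of the KB):

```shell
XMX=2560        # -Xmx, in MB
METASPACE=512   # -XX:MetaspaceSize, in MB
# *NUMBER* for add_service_mem is their sum
echo $(( XMX + METASPACE ))   # 3072
```

So with the defaults, the entry would read: add_service_mem vco-server 3072.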

Just a short, quick blog for today, but this was a very good change for me, and I saw a marked improvement in response time from my embedded vRO.

Hopefully some hilarity from homelabbing is coming, and I hope it helps someone out there. Some highlights:

  1. WAN+2LAN2? or WAN+LAN2??
  2. "It's just making them talk to each other, right?"

The Life of a Home Lab…Rookie

Normally my blogs are more technical and at least get the information across about how to do different things within VMware's toolbox. Today, I'm starting a series(ish) on building a home lab. Yes, this is new for me, and I'm working through some basic things I haven't had to do since ESXi 5.5. So there is some learning to do. Thoughts so far:

  1. Distributed switches… where did they go??
  2. Oh man, the Supermicro boot delay is killing me…
  3. Cable management… this is why I got into coding…
  4. Networking… should the edge go to the router, then to the system, or a different way?

These are all thoughts that went through my brain. Not very helpful, I know, but maybe some of this can help those like me, who spend all their time in already-built enterprise solutions, with a rack/stack team that brings the servers up and a networking team to add the needed networking.

But that's the annoying part. The good part is that I've got some great stuff to dig into and work through. I'm going to be slamming through this now and getting it done. First, though…

The Setup

Current List of assets:

  1. 3 Supermicro E200s
  2. 1 Ubiquiti EdgeSwitch
  3. 3 Samsung NVMe SSDs for storage
  4. A lot of cables


So first things first, I put an SSD into the M.2 slot on each of the machines. There were two Phillips screws on the back, then an overhead plate covering the RAM/SSD slots. Once I removed that, I was able to access the screw that holds the SSD in place.

Cabling the three with the switch wasn't too bad. I purchased a miniature server case from Amazon to put everything in, and it doesn't look too bad. I pulled a 1×6 from the garage and built a table with no top to allow cables to come up from the bottom. Pictures incoming!!


So cabling is completed, and everything is “racked”… lol

ESXI 6.7

YUP, Lets get to imaging.

For those that have done anything with Supermicro, the pain of that 1 ms default boot screen is rough. Immediately save yourself the pain and change it to 10000 once you get into the BIOS.

I won't go through the settings for imaging ESXi because it's pretty well documented and not too difficult.

However, imaging the VCSA has been a bit of a slog for me, but again, there is a lot of already-written documentation. There's a good YouTube video that will lead you the right way.

Which leads us to where we are currently: I'm trying to figure out how to set up the networking properly.



VRealize Deployments: Part Two – The Network

So this is FAR overdue… sorry about that. In Part One we basically looked through setting up an AD structure for the computer object and setting the machine to install and join the domain. Basic stuff really, but going from a purely manual build to this process is pretty sweet. Here is the next part: IPAM and the network.


There are a lot of IPAM solutions out there; from SolarWinds to Infoblox, the IPAM space has a lot of prospects. Some even use an Excel spreadsheet right out of the '90s. In all these cases, the built-in network profiles of vRealize Automation can definitely make your life easier.

The research

The first thing you need to do is get a block of IPs. There are lots of ways to do this. If your IPAM solution is trusted, you would go through it and reserve a block (your resources for the VLAN determine the size of the block). From there, you go to the vRealize portal to create your profile.

  1. From the portal, go to "Infrastructure" -> "Reservations" -> "Network Profiles" and click "New" -> "External".
  2. Create a name for the profile (this will be used later) and a description. The IPAM endpoint will just be the internal IPAM. Select the proper subnet mask from the drop-down and input the gateway.
  3. On the DNS tab, input the primary and secondary DNS, the DNS suffix, the DNS search suffixes, and any appropriate WINS inputs.
  4. Finally, input your IP block: name, description, start IP, and end IP.
  5. Next you need to set this in the reservations.
  6. On the Infrastructure tab, go to Reservations and select the reservation that will be using the profile. On the Network tab of the reservation, click the drop-down to select the newly created profile.
  7. Save your settings.
  8. Now you can go to Design and create a new blueprint to take advantage of your profile.
  9. On the left of the blueprint canvas, select Network & Security and drag over "Existing Network". This puts a new item on the canvas to go with your machine type.
  10. Click the "…" and select the profile that should be utilized for the machine.
  11. Click on the machine item and go to the Network tab.
  12. Select the same profile on the machine, and click OK.
  13. You should now see a link on the machine, and that machine will now pull from that IP block for its IP settings.
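Before typing a range into step 4, I like to sanity-check how many addresses the start/end pair actually covers. A throwaway sketch; the range is made up and assumes both ends sit in the same /24:

```shell
START=10.0.0.100   # example Start IP
END=10.0.0.200     # example End IP
# strip everything up to the last dot, then count inclusively
echo $(( ${END##*.} - ${START##*.} + 1 ))   # 101 addresses
```

Matching that count against the size of your reservation saves an awkward conversation with whoever owns the IPAM.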

Pretty awesome in my opinion, and far superior to five custom properties. VMware has really done a lot to help engineers get the best out of their solution.

The next step depends on what needs to be adjusted in your IPAM solution, whether that is setting the IP to a specific state or calling it and adjusting some notes. This is specific to your environment and your solution. I would suggest an extensibility subscription that runs on deployment/destroy and adjusts as needed. Both Infoblox and SolarWinds have good plugins for vRO, along with custom API calls that can be utilized to solve these issues.

Thanks for reading! Hope this helps!

Docker runs on Windows 2016 Core

So I've read a lot of blogs out there about getting ready for containers, but since I've hated anything dealing with fruit and tech for so long, I've disregarded a whole side of scripting. This is something I'm remediating in further blogs, but for this one, let's focus on Windows Server 2016 non-desktop-experience containers… and throw in some vRA, because it's fun…


So first you need to get your hands on a Windows Server 2016 ISO. From my research, the non-desktop experience is built into the basic ISO (HUZZAH).

Go through your normal setup to create a new machine in vSphere, or whatever IaaS you're using.

Once you can console in, you'll be greeted with a familiar friend. That's right, friends, it's OLD SCHOOL TIME!

So set a password that's totes legit and get to work. Once you're in the cmd prompt, it's time to whip up an old friend:

sconfig

Yup, sconfig will set up everything from here on out in terms of firewall, domain, IP, etc. For this purpose I'll just set the IP on the machine.

The Docker

Now run the following:
Install-Module DockerMsftProvider -Force

This will ask for confirmation, so hit 'Y'.
Now run the next command:
Install-Package Docker -ProviderName DockerMsftProvider -Force

This will finish up the Docker install, and you will probably want to reboot the machine.

GREAT, Docker is now installed and you're ready to go! Not really…

For people like me, you want to get Docker into centralized management (vRA would be nice). For this you need to continue some setup.

The Management

First, create the Docker config file, otherwise known as "daemon.json".

Run the following to create the file:
CD C:\Programdata\docker\config
New-Item -ItemType 'file' -name daemon.json

This creates the blank file. Now to populate it. First, stop the Docker service:
Stop-Service docker

Now run "notepad" and open the file. Insert the following, as a complete JSON object:

{ "hosts": ["tcp://<ip>:2375", "npipe://"] }

where <ip> is your machine's IP address (2375 is Docker's conventional unencrypted port).
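To be clear about what the finished file should look like, here is a sketch that writes the whole thing in one shot and then sanity-checks that it parses; I'm using bash and python purely for illustration, and the IP is a placeholder:

```shell
# Write a complete daemon.json (192.168.1.50 is a placeholder IP)
cat > daemon.json <<'EOF'
{
  "hosts": ["tcp://192.168.1.50:2375", "npipe://"]
}
EOF
# Sanity-check that the file is valid JSON before restarting Docker;
# a stray quote here will keep the docker service from starting at all
python3 -m json.tool daemon.json
```

A malformed daemon.json is one of the more common reasons the docker service refuses to come back up, so the check is worth the extra line.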

Finally, run:
docker trust key generate role

where "role" is the role of the public key. Once created, open the key in "notepad" (the command output shows where it was saved) and copy the whole block, from the "-----BEGIN" line through the "-----END" line. Now log in to vRA.

On the "Containers" tab, go to "Identity Management" and click "+CREDENTIAL". Create a name for the credentials (this will be used later) and paste the public key you copied into the text field.

Now on the "Containers" tab, go to "Container Host Clusters" and click "+CLUSTER". This should create your Docker Container Master.

From here you can pull from the registry, build a "hello-world" container, and deploy it from vRA.


Installing a core OS can be difficult for those that don't remember those days (I had to Google it myself). Installing VMware Tools can be difficult too… Mount the Tools ISO on the OS and run the following from the disk drive:
.\setup64.exe /S /v "/qn REBOOT=R"

Happy containering!