Automation: Getting Started with vRealize Automation (Part 3)

So at this point you should have the following:

  1. An endpoint connection to your vCenter
  2. A Fabric Group granting access to those resources
  3. Business Groups giving specific groups of users access to them
  4. Reservations set up to grant resources to the business groups
  5. Reservation policies so specific Reservations can be called
  6. A network profile defining the IP pools to pull from and push to
  7. An Active Directory policy to place computer objects in specific OUs

You should have all of this from the following blogs:

It sounds insane to say, but now that you have all these building blocks completed, you can set up your blueprints and start your deployments.

The Design

Before you start your blueprint, plot out on paper, in your head, or wherever, what you want the end goal of the deployment to be. Do you want a SaaS solution tacked on? Are you using Enterprise vRA or Advanced? All of these questions lead up to how you deploy your instances. For this purpose we’ll assume the following:

  1. You read my blog, and you did the stuff
  2. You use Customization specifications for vSphere deployments
  3. You utilize Templates for your deployment

With the following assumed, I’m just going to walk through a basic setup for a basic deployment. The goal here isn’t to get fancy. It’s really just to get your feet wet so that the next step can be custom building the server from there.

The Canvas

Once you’re logged back into vRA, go to the Design tab and click the green + for a new blueprint.

The first screen you see is pretty basic:

  • Name is the name of the blueprint you’re creating.
  • ID is the same as the name, but with no spaces.
  • Description is pretty much what it sounds like.
  • Deployment limit – limits the number of deployments per request. This allows users to build multiples of the same machine; however, you will need to have your naming statically set.
  • Lease Days – sets the minimum of a lease. At the end of that lease, if the user does not extend it within the last two days, the machine is shut down; the user can still extend it, but if they don’t it just stays shut down. Once it reaches the Archive day(s) value it will delete the machine (so be careful). The deployment limit, lease, and archive are all optional and do not need to be set.

Now let’s look at the design canvas.

On the left you see the different categories and the assets you can deploy: Machines, Software Components (ONLY IN THE ENTERPRISE LICENSE; Advanced will still see the option but not have the ability to add one), Blueprints for nested deployments, Networks, XaaS, Containers, Config Management, and Other Components. These are all fun and great things to work with, but for this we’re going to keep it simple.

  • Machine Type – For now, drag a vSphere machine type onto the canvas and let that stick. This carries the majority of the properties and is the central hub the additional assets attach to. Once you add the machine type to the canvas it opens up a lot of other properties that we’ll get into (a few common custom properties are sketched after this list).
  • Software Components – These are set up in Enterprise-licensed vRA and can then be attached to machine types in the canvas.
  • Blueprints – Use this if you want to place an already-built blueprint on the canvas and attach multiple pre-built machines to a deployment.
  • Network & Security – Here is where a lot of your NSX automations come in. For now we are just going to use the external network and connect it to our network profile.
  • XaaS – This attaches an automation for basically anything to a machine deployment.
  • Containers – Deploys a container app into a cluster. However, there are prerequisites that need to be in place for it to deploy properly.
  • Configuration Management – This deploys the built-in Ansible and Puppet workflows (I believe this started in 7.5, but perhaps 7.6).
  • Other Components – This is any other resource component or solution published for the deployment workflows.
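As a quick illustration of the kind of thing that shows up there, here are a few machine custom properties. The property names are standard vRA 7.x custom properties, but the values are purely made up for this sketch:

VirtualMachine.Admin.ThinProvision = true
VirtualMachine.Network0.ProfileName = Lab-Network-Profile
VMware.VirtualCenter.Folder = vRA-Deployments

Properties like these can live on the blueprint, the business group, or the reservation; for machine-specific behavior the blueprint is usually where they end up.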

Videos:

Instead of giving a written write-up on how to do this, I figured it’s WAY easier to just watch some of the pros do it in a video. @virtualJad has some amazing stuff that’s older but still useful to get your feet wet, and if you’ve followed this blog the pre-reqs should still be there:

HOLs

Along with videos, there are hands-on labs that VMware uses to teach you how to create a deployment. Here are the labs:

 

I think these videos and HOLs are a great way to pick up where this leaves off and get you going.

I hope this series has helped you get started using vRealize Automation.

 

 

The Positive Side of Failure

Disclaimer: This isn’t another Mental Health post. I’ll share that part of my story sometime later. This is going to be raw, so if you can’t read this, don’t. I am not looking for empathy or sympathy. These experiences brought me to where I am and I don’t resent them. I want this to be something that helps others know they can be better, and push themselves to find it. Even though the road is rough.

The first road

One thing people find strange about me is that I never intended to work with computers. The idea of IT work sounded so tedious and horrible that when I was 7 I knew it wasn’t something I wanted to do. No, 8-year-old Nathan knew he was going to be a pastor, or a missionary, but definitely something to do with the Gospel message of Good News. I taught Sunday school and led children’s church as a teenager. I even led worship once, and while I was in college I was a youth minister for a small church. However, whatever religion or belief you hold, you know that things don’t always turn out how you plan.

While I was in seminary our church went through a very rough time. So rough, in fact, that the pastor had lost the trust of the session. To those that know what that means, it’s a pretty messed-up place for all involved: the church, the session, and most importantly the pastor. The church voted to remove the pastor, a man who had been mentoring me in becoming a better version of myself behind the pulpit, and a strong help in my life.

This was rough, but then I was also flunking one of my classes. I talked to my teacher who told me,

“Just memorize this passage and I’ll pass you.”

Instead of taking the win, my reaction was, “Then why is this class mandatory?” The straw that broke the camel’s back was that we were in seminary housing and had been told we could have a cat. But when they found out we had one, they wanted us out. This led to a perfect storm that taught me one thing: I was in the wrong place.

One thing that came out of it was my wife (who’s still married to me, don’t ask me how) coming home after another day of bringing home the $$ for the family and finding me in the middle of the living room with all the pieces of my computer spread out. “What happened??” I replied, “I found the noise in my computer, I KNEW it wasn’t the fan.” This led my patient wife to ask me later on, “So why don’t you work with computers??”

The second road

“So why don’t I?? It’s always been a hobby. I’ve been ok with it from time to time, and for some reason my wife and others think I’m good at it… Nah, I’ll never be good at it.”

This was my thinking after we moved out of seminary and into a house. I took a job as a barista at Starbucks over a job with Bomgar (which is now BeyondTrust). At the time we were living in Mississippi, and Bomgar headquarters was only 20 minutes from my house. That nagging feeling stayed there. Especially when I couldn’t say what “DNS” stood for.

I worked as a barista for about a year, and man, I was good at it. I loved working with the people, and I REALLY loved my boss, Summer. She was a hard taskmaster but a good listener. She really helped me out once or twice at work. Some of the assistant managers didn’t take too kindly to me when I started, but I worked my way up and became the only barista on the 6 AM drive-through (which meant I took the orders and made the drinks). Man, I was good at that. It helped that I was good at talking to people after about six years in retail, from selling toys, to books, to music.

While I was working at Starbucks my dad told me about one of his friends who needed a tech hand. I asked what was needed and he said, “Oh, you just need to be able to image and re-image machines.” That’s child’s play, I thought. I told my dad I’d be happy to look into the job, all the while that voice was in the back of my head: “You can’t do IT… it’s not your thing…” Arguably it was a jump. At this point I’d only worked on my own computers and sold them at Circuit City (dating myself here). So I picked up the job.

Third Road

The interview process for this job was strange. I came to this man’s house and gasped at how nice it was. I went to the front and rang the doorbell. After meeting his wife and son, I met my dad’s friend. He was a military man, and I could tell from talking to him he was a strict one. After our discussion (which was over an hour), he told me I had the job and to start Monday. This wasn’t a problem, as my manager at Starbucks was stellar and a friend from seminary. She let me off and I took the position.

It’s worth mentioning this was my first time touching a server. It didn’t help that it was a desktop server and not rack-mounted; in fact, I never touched a rack server through this job. I pushed updates, ran CCleaner, and did random tasks. I basically did whatever his clients asked me to do. I started to learn from them and him, but I was distracted. That voice never left. I never could convince myself that I could do well in this environment.

One thing I immediately realized was that I had an issue with checklists: an astigmatism in my right eye made me skip lines when I was reading, so I would miss steps.

I only worked for the man three months. Within the first month and a half we were at odds. I’d ask him for help on the phone and he would talk me through things now and again, but one time, out of tiredness, I stayed on the line after I said goodbye and heard the words,

“What an Idiot!”

To say my confidence was shattered at that job from then on is easy to understand. I kept trying, though. I became unable to think for myself and leaned on him more, which led him to get angrier at me. You see, he wanted me to ONLY do things his way, but my personality was more curious than that. If I had a document to follow, I’d still have questions about it.

In one troubleshooting session I called him, and after working through some issues he said he would reach out to support. When I was back on the phone with him I was greeted by a friendly, “Good to meet you Nathan, let’s take a look.” So I began to talk through what was going on and what the issue was. After about 30 seconds, my boss interjected, “I’ve already told him the issue, you don’t have to go into it.” I shook that off this time, because this time I thought I had found the issue. I let him talk for a second, then asked, “Well, I think the problem is over in…” and was interrupted again,

“Nathan this is the guy who helped code the application. Your opinion is not needed or wanted, you can go home.”

So I did. I never heard what the issue was, and I asked; my boss never told me. It was a week later that my boss said he believed my heart was not in the job. A week after that I told him I was going to quit. I’ll never forget his response.

“That’s good, Nathan. I think you’re a great artist. You should do guitar or choir or something in music, but stay out of IT. You will never be successful in that field. You don’t have what it takes. I just don’t think you can comprehend it.”

The artist statement was because I was a music minor in college. I played classical guitar and sang in the choir mostly, and took a couple of classes here and there. The problem with going into music was that I was in my mid-20s at the time and I knew I didn’t have the skills.

The pit stop

Calling the next couple years a pit stop is a generous statement. I literally had no idea what I was doing or where I was going. I took on odd jobs, played lots of video games.

That awesome Starbucks manager took me back, and even though she already had the staff to run the shop, she found a place for me. So I went back to Starbucks for another 6 months. That time was pretty critical for me because I started re-gaining some very needed confidence.

I began realizing that no matter what people said, I am good at things. I just don’t know what I should be doing.

My amazing wife continued to be my cheerleader, friend and hope each day. She took on some rough jobs to keep the bills paid, but she was just stellar through this whole process. She would come home and talk about the people at work and I would tell her how I played video games each day, and did some studying.

That studying was because I had an amazing cousin who spun up a Hyper-V cluster for me to utilize to learn Microsoft Active Directory and other computer skills. To be fair, I didn’t utilize it enough or appreciate the gift that he had created for me. My mind was in such a state that I just didn’t know what to do, and I needed a good kick to get me out of this funk.

The kick

During this time we had moved from Mississippi to Fort Worth, Texas. We were living in a house that my in-laws owned and wanted kept clean and maintained instead of just sitting dormant. Well, it was summer and we were told the house needed to be sold and we needed to find a new place. Also during this time, we were trying to sell the house in Mississippi that we had bought. That house stayed on the market for over a year, and after swapping realtors we found out that we were asking well over what it was worth. We eventually got an offer and took it, having to pull a $12k loan to get out of that house. With debt piling up and the fear of no roof over my head, the only thing I could do was go back into the job market. I went to a legitimate temp agency just to get something. Surprisingly, they said my skills were mostly fit for IT, and they placed me at RadioShack as a graveyard-shift tech setting up new stores.

It’s kinda funny how that $12-an-hour job was a godsend to us. My income had doubled and I thought, “This is ok for now till I find a better solution.”

The Road I’m On…

Working at RadioShack ended up being a great experience. The people I worked with were fun and hilarious, the bosses were nice (mostly), and some were even tough taskmasters. But one thing they all had that I didn’t know existed in IT was resilience. Failure wasn’t the worst thing, nor was following documents. The worst thing you could do was be lazy and complacent.

While I was there I learned a lot, but I still thought I’d never be great at higher-level IT. I didn’t even think this was IT; I just slung hardware at problems and never really dug into much. Then, after the project was over, they were going to cut the contractors. However, I had worked through the night into the day shift and had grown a good relationship with that team and manager. They had even put me in the day phone queue for a while to help with calls. After all this, my boss’s boss came up to me and said,

“Stop working overtime! I’m not going to pay it if you can’t watch your hours!” That shook me for a second. Then the day-shift manager (who wasn’t my manager) piped up and said, “He’s worth it.” His boss responded, “Well, HIRE him.”

The guy that said that probably had no idea, and still doesn’t know, how big those three words were to me. For the first time, I had worth in the IT field. I was GOOD at it, at a professional level. Even though phone help is seen as lower on the scale, to me it was still a “big boy job”.

RadioShack definitely had its hardships, but I built great relationships with people, learned a lot, and went from a phone tech, to a level 2 server admin, to a network operator. My next job was as a graveyard tech at an oil company. I worked graveyard for about a year, then moved to the day shift doing data analysis with BMC Remedy, and then the VMware guy who was dealing with vRealize Automation asked me to code the automation for creating, adjusting, and modifying tickets and the CMDB.

My issues with checklists went away with automation, and I finally found my home. I’ve been working in automation ever since, pushing either DevOps code-based solutions or GUI-based automation that pushes one thing to the next without having to manually click buttons off a procedural checklist.

The Positive Side of Failure…

This may read very negatively, but I’m very glad for each step I took along the way. I love my wife, and I remember each day that she once did for me what I do for her now. I love that she can be what she wants and has that freedom. I push myself each day to learn and grow, but more importantly, I push myself to ‘fail’.

Out of my career I’ve learned that failure isn’t an option, it’s a part of life. It’s going to happen no matter what you do. It is how you respond to it that defines your success. It took me a long time to learn that lesson. I still know people who deal with failure incorrectly. They think if they fail, they will never be successful. I find that I’m successful, because I fail, and I’m not alone. Here are some quotes by some interesting people:

“Nothing in this world can take the place of persistence. Talent will not: nothing is more common than unsuccessful men with talent. Genius will not; unrewarded genius is almost a proverb. Education will not: the world is full of educated derelicts. Persistence and determination alone are omnipotent. ” – Calvin Coolidge

I have not failed. I’ve just found 10,000 ways that won’t work.  – Thomas A. Edison

As a kid, falling was embarrassing. As I got older, I got used to falling and picking myself back up. There’s not a sense of failure. It’s of disappointment. You train so hard to not make mistakes. When you do, you’re learning from that. How do I improve? How do I get better for the next time? Through every failure, there’s something to be learned. – Michelle Kwan

“Failures, repeated failures, are finger posts on the road to achievement. One fails forward toward success.” – C. S. Lewis

“Around here, however, we don’t look backwards for very long. We keep moving forward, opening up new doors and doing new things, because we’re curious … and curiosity keeps leading us down new paths.” – Walt Disney

Let these be the voices in your head, and each time you fall, remember that fall, accept what happens, take the noise from the fall out of your head, and move on. Each fall is teaching you something; learn from it. Michael J. Fox uses the word “vacuum” very well:

“There’s always failure. And there’s always disappointment. And there’s always loss. But the secret is learning from the loss, and realizing that none of those holes are vacuums.”  – Michael J. Fox

This hits home for me. That voice in my head for so many years was a vacuum, and these take multiple forms. It can be a boss, a job that you lost, a friend’s issues, etc. The challenge of life is to learn from these and move on, because if you don’t, they may suck you in and down a hole that takes entirely too long to climb out of.

If you made it this far, thanks for staying with me. This is a bit of my story, telling how I made it to where I am. I’m not the most amazing person out there, and I don’t have the technical know-how to know everything, but that’s the path I’m on. Whether I learn from failing, or reading, or doing, everything is learning. I may fail in how I teach my kids, or how I fix up the shower, or anything. This is my perspective now in life.

“I will fail. I will fall. But I will learn the best that I can from it.”

Automation: Getting started with vRealize Automation (Part 2)

Coming into part two, we have an endpoint with your vCenter and a Fabric Group to consume those resources. Now that the infrastructure is prepped, it’s time for the squishy element… those dang humans.

In my previous post, we brought in AD, so users from the domain can be used to populate groups. These groups are critical to dividing your resources up and allowing your users to consume them.

Business Groups

So now that users are a part of the solution, let’s divide them into groups called “Business Groups”. These groups have layered roles that will allow individual group management and resource management. Let’s go through that:

In vRA go to “Administration -> Business Groups” and click the green “+ New” sign.

From here you will see the settings for the group. You can include the name, description, an email for capacity alerts, and custom properties (if this business group ALWAYS has the same properties).

The next page will allow you to select the members of the group. This lets you dissect the group and assign layered rights as needed. Here is a snippet from VMware about the rights:

 

  • Group manager role – Can create entitlements and assign approval policies for the group.
  • Support role – Can request and manage service catalog items on behalf of the other members of the business group.
  • Shared access role – Can use and run actions on the resources that other business group members deploy.
  • User role – Can request service catalog items to which they are entitled.

Now, create the roles as needed.

Click Next. You will see the settings for a custom machine name and AD container. You can set these dynamically in the blueprints, which is what I prefer, but if the AD OU is always the same for that group, AND the naming is a standard constant (always DC-APP-SRV*** for all servers), you can utilize these fields.

Now you’ve created a business group. It’s time to create reservations. Let’s start with the Reservation Policy.

Reservation Policy

The reservation policy is kind of like a tag. The policy is used in blueprints to label which Reservations the blueprint is allowed to use. To create one, go to “Infrastructure -> Reservations -> Reservation Policies” and click the “+ New” to add a policy.

Now we have a group and a policy. Let’s make our reservation and grant resources to the users.

Reservations

Reservations are basically what they sound like: they reserve resources for the users to utilize. Once the resources defined in the reservation are exhausted, deployments fail stating “No Resources Available.” Pretty nifty for those that need to put a harness on sprawling server builds. To get to Reservations, go to “Infrastructure -> Reservations -> Reservations” and click “+ New” to see the dropdown of possible endpoints. Of course we only have a vCenter at this point, so select “vSphere (vCenter)”.

Now go to the Resources tab, and here you will see the actual resources in your vCenter. After you select your compute resource (Datacenter), you can set your quota, whether you want a hard quota, the amount of RAM for the reservation, and the amount of storage and which storage cluster (I’m using vSAN) to use.

The next tab is all about the network. Here you will set which VLANs the group is allowed to use, and if you have an IPAM solution in a network profile, it can be selected here as well. I have another blog about networking in vRA here.

The last two tabs (Properties, Alerts) I don’t really use much myself. I can set alerts to notify at specific resource usage, but I don’t normally use them. Maybe I’m a horrible human being? Meh…

So now the framework is all in place: you’ve got resources, you’ve got users, but next it’s time to get blueprints!

Automation: Getting started with vRealize Automation (Part 1)

Acronyms used:

  1. vRA = VMware vRealize Automation
  2. vRO = VMware vRealize Orchestrator
  3. vROPS = VMware vRealize Operations

I was hit with a shocking realization this past week. During a conversation with a VMware representative about automation and the success that we have found within it, he stated, “You know we could sell the cloud suite license to ten customers and probably two of them would use automation, and maybe one would be successful.”

I bypassed this statement and just moved on for the next couple of days, but then a friend asked me for a blueprint on how to get started with vRA. I have spent a huge amount of my time scrounging the internet for blogs to tell me how to do one thing or another, and that is one of the ways I have helped our company be successful: because of blogs, I found the answers I was looking for. I had lost sight of what this blog was supposed to accomplish, which is to help others start. So… let’s start.

You’ve installed vRA… Now What?

So with a fresh install of vRA, you now have a shell. Nothing is being managed, no domain users are able to log in, and no machines are able to be built. It’s kind of a pointless stub when it’s first deployed; it needs someone to start the setup. During the installation you stipulate the administrator password, and this is your first login. Once you’re in, the screen you’re met with is kinda bleak.

Once you’re logged in, go to the “Administration” tab and select “Tenants”. You should be met with your default tenant for vRA, so select it. From here, select “Local Users” and add a new user (most will just name this account “Admin”).

(Pay no heed to the second “Administration” tab to the right in the screenshot; you shouldn’t see it.)

After the account is created, go to “Administrators” and add your new account under “Tenant Administrators” and “IaaS Administrators”. This will grant the needed access to start utilizing vRA. *BONUS POINTS*: you can configure your incoming and outgoing email servers here. Probably a good idea to do that too.

Fabric Groups are basically what allow resources to be consumed by vRA; it really doesn’t do anything until the Fabric Group is created. So let’s do that. The first thing to do is create your endpoint (basically, the resources that are to be consumed). Log out of your “Administrator” account and log in to the account you created above. Go to “Infrastructure -> Endpoints -> New -> vSphere”.


Now you will see the information needed to create your endpoint. Please note the examples that VMware gives you before you start typing; many gung-ho automation enthusiasts have lost hair because they didn’t look first. TAKE NOTE: the name you input here SHOULD MATCH the name you gave your vCenter agent during installation of vRA. If you have forgotten it, you can go onto the agent box and look at the service; most will dupe the name of the agent into the service name. If it doesn’t match you will get the message “The vSphere agent does not exist or may not be running“. With the correct inputs, the test connection should succeed.


So click “OK”. Now that you have your endpoint, we need to create a Fabric Group. Under “Infrastructure -> Fabric Groups -> New” you should see your new vCenter ready to be managed.

Configure the name of the group and its administrators, and select the resource for it to manage. Now we have resources and we have tenant admins; we need users. “Administration -> Directory Management -> Directories -> New” will allow you to create a new domain to sync to vRA so user management at the base level is controlled in AD. Custom groups can still be utilized, but in a different way, after AD is pulled in. You can connect over LDAP, IWA, or Local.

Input the directory name; the Sync Connector will default to the master node. Select your search attribute and your Bind User information (again, note the defaults VMware puts in the fields before you input your data, as they help you). Test your connection and now you have a directory. To sync users, go to the directory and click “Sync Settings”. From here you will see config tabs for managing which users get synced. Sync in the domain users you need.
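If the connection test fails, it can save some hair to sanity-check the bind account from any box with the OpenLDAP client tools before blaming vRA. A minimal sketch; the DC hostname, bind DN, and search base below are made-up values you’d swap for your own:

ldapsearch -x -H ldap://dc01.lab.local -D "CN=svc-vra,OU=Service Accounts,DC=lab,DC=local" -W -b "DC=lab,DC=local" "(sAMAccountName=jdoe)" cn memberOf

If that returns the user, the bind DN and password are good and the problem is on the vRA side.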

Now you have users and resources. You probably want to give yourself God Rights in this environment (doesn’t everyone?). From here, log out of your admin account and back in as the default “Administrator” account. Go back to “Tenants -> Default Tenant -> Administrators”. Now that your account is synced in from the directory, you should be able to add your domain account to “IaaS Administrators” and “Tenant Administrators”.

(vcoadmins is a default built-in “Custom Group” for vRO administration.)

After you add your domain accounts here, log in with your God-Mode domain credentials (the new directory, or domain, will be available on the login screen). Once you’re in, go to “Administration -> Users and Groups” and search for your username.

Select your username and go to the “Add roles to this user” window on the right. Go hog wild, you earned it.

Now you have god rights, you have all the roles your heart could wish for; you have an endpoint and resources. The next steps are Reservations, Network Profiles, and then Blueprints.

I’m going to try to get things out more often. Sickness has plagued my house, but I’d love to help at least one person learn how automation helps life.

Terraform: A Noob continues…

Ok, so you have Terraform installed, well… neat. Remember this is from the perspective of a guy learning Terraform, so come along with me and let’s learn together.

Let’s get into the nitty-gritty and start learning what this whole ‘infrastructure as code’ is really all about. The first thing we are going to do is open a Terminal/cmd prompt and go to the Terraform folder.
Note: On Mac I had to run all Terraform commands with “sudo”. I went through some changes so I could “sudo -s” the terminal to raise the permissions.

touch example.tf

or for cmd prompt
type nul > "example.tf"

Now that you have the example file, it’s time to fill it in with the needed information for a basic apply…

Note: It’s best to go ahead and create a free account with AWS. Here is a great link to start your free account.

Now open the “example.tf” file to edit, using either “vim example.tf” (probably need sudo) or Notepad.

Paste the following:

provider "aws" {
  access_key = "ACCESS_KEY_HERE"
  secret_key = "SECRET_KEY_HERE"
  region     = "us-east-1"
}

resource "aws_instance" "example" {
  ami           = "ami-0338bce19a7cb103e"
  instance_type = "t2.micro"
}

Update the access_key and secret_key with your own values. To find these in AWS, go to IAM and create a user; once the user is created, go back into that user in IAM and, under “Security Credentials”, create an access key.
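Side note: hard-coding keys in the .tf file is fine for a quick test, but the AWS provider will also pick credentials up from the standard AWS environment variables, which keeps them out of files you might accidentally commit. A minimal sketch (the values are placeholders):

export AWS_ACCESS_KEY_ID="ACCESS_KEY_HERE"
export AWS_SECRET_ACCESS_KEY="SECRET_KEY_HERE"
export AWS_DEFAULT_REGION="us-east-1"

With those set, you can drop access_key and secret_key from the provider block entirely.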

I updated the AMI to a newer image; when I tried to run the apply, the original one from the tutorial would not work. In vim run “:wq”, or hit Ctrl+S in Notepad, to save the work. Now run the following command:

terraform init

You should see output confirming that the AWS provider plugin was downloaded and that Terraform has been initialized.


Ok, now the fun begins. You now have the AWS provider, and you have a file you can utilize for deployments. Next run:

terraform plan

Terraform will print the execution plan; for this file it should show one instance to add.

And finally:

terraform apply

Which will deploy the instance to your EC2 account.

Cool! But what’s cooler than building stuff?? DESTROYING IT WHA HA HA… ok..

Terraform makes that easy too:

terraform destroy

This runs in real time and loops until the instance is destroyed. I really liked that as a feature, because once the destroy command finishes, the machine is really gone. Pretty awesome. I plan to move on from this to variables (a rough sketch of where that’s headed is below) and into file structures per provider. Also, I really want to dig into how this will work with vRA and Azure. Stay tuned.
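As a teaser for that variables piece, here’s a minimal sketch of the direction (illustrative only, not part of the walkthrough above): pull the region out of the provider block into a variable with a default.

variable "region" {
  default = "us-east-1"
}

provider "aws" {
  region = "${var.region}"
}

Then terraform apply -var 'region=us-west-2' retargets the deployment without editing the file.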

Terraform: The Noob Starts

Terraform. We’ve all heard the following phrase, or, if you haven’t, here ya go;

“Terraform has become the de-facto platform for infrastructure as code in the public cloud.”

Well, guess that means some old dogs need to learn new tricks.

What is Terraform?

The definition pulled straight from the source:

“HashiCorp Terraform enables you to safely and predictably create, change, and improve infrastructure. It is an open source tool that codifies APIs into declarative configuration files that can be shared amongst team members, treated as code, edited, reviewed, and versioned.”

What this means is you have a document that specifies the blueprint for the deployment, and you can copy-paste the needed preferences (as well as adjust afterwards, which is a total gas). Upon reading this, it made me immediately want to get into it. Let’s be real, no one likes using the cloud client to migrate your blueprints… it’s just not enjoyable (if you enjoy it, why/how?). This is really something I wanted to look into. And well, then this happened.

WHAT IaC and with vRA…..

 


Idk what more needs to be said. Sounds cool, looks cool. Let’s get to it…

Terraform Install

There are a lot of docs out there for installing Terraform. Terraform.io has some great links itself, and obviously their instructions for installing on Linux and Windows work well out of the box. But what about Mac? Well, my recent purchase of a Mac, to prove to others that I wasn’t a normal Windows snob, has driven me to extremes lately. It’s just not the same, and WHY DON’T THE DANG WINDOWS CLOSE!

Anywho, I tried to follow the Linux installation on the Mac. Then I Googled how to get to the elusive /usr/ folder, then I realized I was an idiot, and installed Terraform. Then the path setting was the next thing. I’d set it and try… nope… reset it and try again… nope… I only had about 10 minutes left when God opened the cloudy skies above…

Homebrew…

For those that don’t use Homebrew, here’s Cody De Arkland (anyone with De in the name = De Man).

For a guy that uses the term “dope” a lot, I dig it. So, after installing Homebrew (see link), I typed in “brew install terraform” and life was good again. Did a quick “terraform -version”..
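So the whole Mac install really boils down to two commands (assuming Homebrew is already set up; the exact version it prints depends on when you run it):

brew install terraform
terraform -version

If that spits out a “Terraform vX.Y.Z” line, the binary is on your path and you’re good to go.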


VOILA. Good times man… now to start with this whole Terraform configuration thing…

The Challenge of DevOps

Disclaimer: I am not an expert in this area; this is just a blog about the difficulties and conflicts created in the search for DevOps. I will go through some fallacies that will make each group look bad, so stick with it if you can. I really would like to get conversations started where there weren’t any.

The Phoenix Project, one of the greatest books of this IT age, gives a good model for how infrastructure and development work with each other. However, there are a plethora of definitions for “DevOps”. How do you take a company that has existed just fine with two (if not more) warring parties and get them not only to work with each other, but to help each other?

Fallacy 1: DevOps is Infrastructure working for Developers

This was my first collision with the term. The idea here, again, is basic: infrastructure and its core individuals exist to give a foundation and support to the developers, and nothing more. A good example of this is a simple ticket request for a new server with hardware specifics. Any infrastructure worker worth his salt would look at those requirements and ask “why”, but this adoption of DevOps is “why not?”. Obviously I’m taking things to an extreme here, but I have seen too many server requests with extreme hardware requirements (really… a virtual machine with 64 GB of RAM?) and the inevitable fight that ensues in response to that three-letter question, “why?” There are so many other examples I’ve seen, from ridiculous resource adjustments to the blatant opening of security holes. Obviously this happens in varying environments, but the idea is the same.

This expands past infrastructure into operations. For example, when there is a “priority one” ticket and everyone jumps on the call, the first phrase is, “If operations did their job and fixed this we wouldn’t have to deal with this. I don’t even know why we are here.” I’ve heard that stated multiple times, for multiple reasons.

Like all fallacies, there is truth here. Developers are the reason companies make money. Unless you work in the past, devs work daily to create and stabilize the products that generate revenue for the rest of IT. The greatest example of this is Uber: the top taxi company in the nation is, of course, an app, so without the developers there is no real company at all.

For IT workers, that can lead to a mindset of denial. Well, not denial exactly, but the next fallacy.

Fallacy 2: Developers should come to heel for Infrastructure

So ok, here we go. This is so prevalent in my career that I can’t even think of that many specific instances, because it’s almost daily that I hear something along these lines.

A developer asks to monitor or help maintain resources on their machine: “No.” A developer asks to create snapshots for a code roll: “No.” A developer asks to be local admin on a box: “No.” A developer asks for a new dev environment that mirrors production: “No.” And so on.

It’s amazing, and a marvel to me, that people can just shut down developers for minimal reasons, and yet this is the normal situation I find myself involved in. I see a ticket, and instead of investigation or adjustment, the immediate response I hear is either “No”, or “We can’t do that”, or “That’s not our problem”.

These are just a few examples, and I’m sure everyone could think of their own. One thing that has really stirred this up and made my brain hurt is containers. Developers ask for a container solution from infrastructure, and they get a big fat, “Why? It’s in the OS, so you figure it out.” This is a very frustrating stalemate.

This pushes to an extreme called “Shadow IT”. Shadow IT is basically this: if infrastructure won’t grant the needed support or help getting things off the ground, dev will use their own budget and spin up an entire instance in AWS, GCP, Azure, or a basic private cloud. Just a license for VMware Workstation can create Shadow IT, it’s that easy (don’t get any ideas). I heard a developer talking about the public cloud saying, “It makes me happy knowing no infrastructure people are touching my boxes.”

This fallacy again comes from a smidge of truth. Development doesn’t know everything about infrastructure. Infrastructure spends all their time doing resource management, monitoring, and adjustments trying to keep the infrastructure running in the best way possible, so they should be the main contributors to the how/what/why of infrastructure.

Fallacy 3: Security CONTROLS ALL

With ransomware and WannaCry still being buzzwords in our time, this is definitely a big deal. Security is extremely needed in this day and age, from blocking things like DNS floods to removing patch vulnerabilities. There are countless reasons to keep security up to date. Where does this come in for infrastructure or development? Well, security is the über-deny group. Almost every security individual I’ve met has stated something along the lines of, “My job is to make sure we don’t do something stupid.” I feel like the answer to this is both “yes” and “no”. I’m sure people do stupid stuff all the time that has nothing to do with security. The proper statement would be something like, “My position is to rectify security vulnerabilities throughout the stack,” or something like that.

Security will try to control where networks are, how they are set up, where infrastructure is placed, and the list goes on. Are there legitimate reasons? Of course! Security is the necessary part of IT that helps keep things in line, away from prying eyes and malicious intent.

The Common Factor

All of these fallacies are connected to an amount of truth. They all start when the truth is twisted and exploited by individuals to make their position greater than it is.

It’s like the problem with DevOps is even more internal than IT. The problem with DevOps is us.


People

I love this idea so much, because at its core it grasps the internal hierarchy of DevOps. It all starts with people.

Obviously each group mentioned is run by people. These people are all built around their own past experiences and troubles. Like iron, those experiences and troubles have made us stronger and sharper, but like the fingers of a string musician we become calloused, and those calluses help generate better solutions in both good and bad ways.

There is also the problem of ego. My favorite statement at the start of meetings is, “Everyone check your ego at the door. It has no place here.” If only we were able to do that and bring only the strengths of the department, and not the calloused overreaching, where would we go?

Finally for us, the problem of listening and understanding.

Meetings

I know this hits a chord for all of us. How many of us have listened to what seems like the dumbest idea and kept our mouths shut? DevOps starts with listening, and by listening, I mean the whole management stack. Is this easy to do? Absolutely not; in fact, it’s extremely hard to listen for nuggets of truth in a whirlwind of ideas. However, that’s what we need to do. To start, the greatest thing we can do is just listen, and if you don’t understand, ask the dumb questions that everyone is too proud to ask, so that you do. You are probably not the only one with that same question.

Finally: People… Again

End users… the concept gets lost by all groups now and again, but everyone in IT works for end users. The hardest concept to grasp, and the easiest road to DevOps, is how to create a better solution for them. Developing a new patch, a more secure dev infrastructure, or a new storage solution: all of it has a DIRECT impact on an end user, and each group works individually to create a solution for them. Now, how do you mix them all together? A good start is a CI/CD pipeline and securing an automated solution for developers to run continual delivery. It lives on infrastructure, it has to run with the best HA and stability, and it has to be secured. This is a great solution that involves all the groups together. There is so, so much more, but that’s the journey we are all on.

Call me “optimistic”, or “crazy”, or “nuts”, or “dangerous”, but I believe this is the future of our industry. With the Kubernetes, containers, and CI/CD buzz-phrases, and the dominance of public cloud, the old standards need to be replaced with the promise of DevOps. Now, how will you do it?

Things I learned this week!

  1. Error 500 in vCenter when deploying an OVF? Verify your vCenter certificate is trusted. Here is how to do this on a Mac: download the certs, then trust each one by selecting “Get Info” and setting it to “Always Trust”. Needed this for Lifecycle Manager and NSX deployment.
  2. INSTALL PowerCLI on da MAC (includes Homebrew and PowerShell Core); a rough sketch of the commands is below.
  3. Initial setup of vLCM.
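For item 2, the short version looks roughly like this (assuming Homebrew is already installed; on older Homebrew releases the first command was “brew cask install powershell”, and vcenter.lab.local is just a placeholder for your own vCenter):

brew install --cask powershell
pwsh
Install-Module -Name VMware.PowerCLI -Scope CurrentUser
Connect-VIServer -Server vcenter.lab.local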

More HomeLab Pitfalls

This is going to be another one of those blogs about the pitfalls I’ve run into. I really want to get into how to fix specific errors and into real troubleshooting, but since I’m trying my hardest to get this lab set up to dig into the new releases that came out a couple of weeks ago, pitfalls it is.

VUM – Update Manager

So if you’re like me, you know that the first thing you do when deploying anything new is update it. So why not update your hosts? Well, here are some findings that I ran across with my 3-node vSAN cluster. First, DNS. I know the statement, “99% of all IT issues are DNS issues,” but it’s true! In my case I built a standalone DC with DNS before I deployed my vCenter so that I could use my A records to resolve the traffic. This worked great for the vCenter. It didn’t work great for VUM. Turns out that on your ESXi hosts, when you have a single DNS DC, it must be listed in the first DNS server field. If it’s in the second field, VUM will not be able to scan the nodes for compliance or reach them. (If you set up vSAN you would already see errors about not being able to scan during the vSAN health checks.) Second, the iSCSI controller: if it’s VMware certified, why do we have to deal with a warning before we can run the updates? Well, because it really isn’t… Here is a great blog on it to look through. Pretty simple fix, but something that should just work, right?
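If you’d rather fix the DNS ordering from the ESXi shell than click through the host client, this is the rough idea; 8.8.8.8 and the like are just stand-ins for whatever sits ahead of your DC in the list:

esxcli network ip dns server list
esxcli network ip dns server remove --server=8.8.8.8
esxcli network ip dns server add --server=8.8.8.8

Adds always append to the end of the list, so removing and re-adding the non-DC entry leaves the DC in the first slot.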

The whole Network thing.

I was pretty sure that my decision to go with two subnets with the ability to cross-talk was not a bad idea: one for the home lab and one for the house would make a stable environment. Nope… For those out there about to start, I’d say do your homework, or better, sort out your home networking solution first. Ubiquiti has a ton of great solutions to look into and a lot of blogs out there to help set you up. There are, of course, a lot of other options that do the functions you may want, but try to be as specific as possible and get things set up before your lab devices come in.

For me, I went from one Wi-Fi router with zero ability to create subnets, to a Wi-Fi router and an ER-X from Ubiquiti, to the full Ubiquiti setup.

However, I found out the best practice is to set up gateway -> router -> AP. Because I’m missing that middle piece, my AP is running at about a third of the speed of my gateway (i.e., hardlined I get 450 Mb down; over the AP’s Wi-Fi it’s roughly a third of that). Obviously the speed just isn’t where it should be in this configuration. Once I get my ER-X I’ll update as to what the changes look like. But for now it’s actually stable, so I can’t complain. But to state it again: plan your goals, pre-define the architecture, research and verify the solutions, then implement. It’s a lot more stressful when it’s your $$ and you don’t have support available on the phone.

Quick pitfalls

  1. Don’t put your vCenter on your vSAN deployment. It “should” keep working if the vCenter crashes, but it’s not easy to get back. I found that in my configuration it was actually faster to just rebuild… and that’s not too fun.
  2. Remember to get your vCenter off your switch before you disconnect your uplinks… U2 actually made it so that if it fails it falls back. Not that I’ve done it on both updates or anything…
  3. When troubleshooting networking issues, having a centralized location for logging makes your life so much easier. Ubiquiti gave me that help.
  4. The vROps deployment that’s now built into vCenter will only deploy a thick-provisioned VM. This can be annoying when trying to move it off and convert it to a thin-provisioned VM for vSAN.

It’s sad that all this stuff came about over months of dragging myself through the mire, but now that I’m stable, I hope to start getting into things soon.

I’m thinking of cutting my posts into shorter, quicker, more technically focused pieces. It won’t be anything crazy, but trying to find the best and newest stuff out there is starting to slow me down in terms of just getting content out. I really want this content to support people and help them in their IT journeys with VMware products. I know I’ve come a long way because of others.

 

HomeLab Rookie – Networking Mis-steps-stakes

So, going along with the last post about how I am really not great with the administration or setup of vSphere (last time was 5.5), it’s time to look at the more fundamental stuff, and how bad I am with the network component.

Addendum: I hope these posts help someone out there grow. I know I’m growing by leaps and bounds as I learn through doing.

My goal was to create two subnets, one for home and one for the lab. I want these to be open to each other to an extent (L2) but still stable (still working on that part).

Ubiquiti

So I decided to grab some Ubiquiti networking pieces to start. I grabbed the Ubiquiti EdgeSwitch to go along with my Netgear Nighthawk router. I was looking for VLAN capabilities, and my goal was to set up the subnets on the router and then pass them through on the switch.

Learning point 1: VLANs

So it’s worth pointing out that “VLANs” in a product’s feature list doesn’t necessarily mean subnets per VLAN. This bit me in the butt for a while, because I ended up creating one basic subnet and then tried to create a VLAN with a different subnet… no bueno… In fact, when I created VLANs on the router, the whole thing crashed. I did find that the Ubiquiti switch I had was a dream to work with (after I updated the firmware). However, I looked and looked, and the switch can pass VLANs through but, again, not route separate subnets for them.

Learning point 2: Devices

So for anyone looking into doing this, it’s worth checking with your ISP to see if more speed is needed. I found out that I was more than doubling my devices and would need to look into my speed usage. It may not be an issue, but for me I found out that for a small figure I’d double my speed. So sure! I’ve got a lot of OVAs to download anyway 😉

Learning point 3: Unified Management

So there I was, swapping from Netgear to Ubiquiti and back. Finally I gave in and bought the Ubiquiti EdgeRouter. I went with this one because the price point didn’t faze me and the functionality of the router looked tremendous. Well, I learned how much this thing could do. I literally love this little box that could. It does the per-subnet VLANs that I wanted, as well as DHCP servers for both subnets. I went with the WAN+2LAN2 configuration and set my home to a 192 network and my lab to a 10 network. Oh man, I love this thing.

The firewall, services, NAT… just so much for me to learn in this tiny little box. Once I got this set up, I changed my Netgear into an access point and set the ISP connection to static on the router. One thing about the Ubiquiti EdgeRouter though: these small boxes use a big plug, so they take up like 2-3 spots on the UPS. Which leads me to the next point.

Learning point 4: Power

If you saw where I was going, it gets better. Every hour or so my whole network would just crash. I’d lose both my lab and home networks, and it caused some severe anger in my brain (I think I have a couple of extra knots in my back from it). I went through SOOO many settings to figure out what it was. I reset the firewall settings (which wasn’t easy, considering all I’d done before was Windows Firewall). Set specific VLAN subnets and reset. Set port forwarding when I couldn’t even figure out why I needed to. Well, this went on for about a month (which is also why I have been slacking on posts). I just couldn’t move forward with an unstable lab. Yesterday I was at the end of my tether. I troubleshot each device one at a time, and with each one my anger boiled. Finally I found out the little box that could was the culprit: it would crash and everything would just die. So I pulled it out to RMA it and get another one, or maybe the gateway (as I hear good things about it). When lo and behold, I realized it… I had plugged the central router into a crappy extension cord. *Le sigh*. Just… no… If you use a UPS like me and find yourself missing ports… get these.

Thoughts

I can’t express how much I’ve grown doing these things. I’ve figured out so much, and I’ve learned more about architecture in this past month than in the past couple of years.

IT is so segregated right now that we lose sight of the fact that each piece has to be troubleshot differently, and that’s really hard, especially for a rookie trying to keep swapping gears. I’ve learned from the pure Windows standpoint, then PowerShell automation, then vRA. I’ve never been allowed to play with the other parts, but with this lab, I’m getting to. If you’re on the fence wondering whether a home lab is worth it… it is. Even in a corporate lab, I still wouldn’t learn this much. However, if you’re not interested in the whole stack, why deal with the trouble, right? (And it is trouble.)

RELEASES

This week vRA 7.6 was released with some EXTREMELY needed updates to Orchestrator, vROps 7.5 was also released, plus ESXi 6.7 U2. Get to downloading and updating, folks! Now go break stuff, and learn how to fix it.

 

A Good Adjustment

I’m busy working on the homelab, trying my best to be a proper homelabber and failing miserably. More information will be coming on that later.

For now I found a great KB that needs some sharing! VMware has been known for some great pointers to fix issues. This one fell into my lap from an issue I was seeing.

The Problem

Every vRealize Automation environment is different, so let me be straight: this change will only help vRO extensibility actions and automations. For me it was a good improvement for the vRO XaaS workflows that I had published.

We were seeing timeouts and “Form not found” errors when trying to open workflows that had actions pulling specific information (AD, vSphere, etc.). Because of this the workflows were in the tank, and sometimes even IaaS deployments would return an error 400.

The KB can be found here: https://kb.vmware.com/s/article/2147109

The Steps:

In the embedded vRealize Orchestrator Server:
  1. Open the /usr/lib/vco/app-server/bin/setenv.sh file using a text editor.
  2. Modify the memory by setting the Xmx and Xms values to the MB value required. For example:

    2.5 GB of memory is allocated to each of Xmx and Xms (this is the default setting):

    JVM_OPTS="$JVM_OPTS -Xmx2560m -Xms2560m -Xmn896m -XX:MetaspaceSize=512m -XX:MaxMetaspaceSize=1024m -Xss256k"

  3. Edit the /etc/vr/memory-custom file using a text editor.
  4. Add this entry: add_service_mem vco-server *NUMBER*

    Note: The *NUMBER* is equal to the sum of -Xmx and -XX:MetaspaceSize as configured in step #2, in MB (see the worked example after these steps).

  5. Stop the vRealize Appliance and increase/decrease its memory to match the increased/decreased memory of vRealize Orchestrator.
  6. Start the vRealize Appliance.
  7. Repeat steps #1 to #6 on the rest of the nodes in the cluster.
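As a worked example (numbers purely illustrative, not from the KB): if you bump Xmx/Xms to 4 GB and leave MetaspaceSize at 512 MB, the two files end up looking like this, and the memory-custom number is the sum, 4096 + 512 = 4608 MB.

In /usr/lib/vco/app-server/bin/setenv.sh:

JVM_OPTS="$JVM_OPTS -Xmx4096m -Xms4096m -Xmn896m -XX:MetaspaceSize=512m -XX:MaxMetaspaceSize=1024m -Xss256k"

In /etc/vr/memory-custom:

add_service_mem vco-server 4608

Then raise the appliance’s own memory by roughly the same amount before powering it back on, per step 5.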

Just a short, quick blog for today, but this was a very good change for me, and I saw a marked improvement in response time from my embedded vRO.

Hopefully some hilarity from homelabbing is coming, and I hope it helps someone out there. Some highlights:

  1. ISP MAC LOCKING!
  2. WAN +2LAN2? or WAN+LAN2??
  3.  MODEM TO WHAT ON THE WAAAAAT???
  4. “Its just making them talk to each other right?”