My first VMware Explore was in 2018 (VMworld back then). It was a pinnacle year for me. There were around 23,000 people there, with so many different vendors, parties, and more. It was called the land of free t-shirts, and for good reason; I still have around 20 t-shirts from the VMworlds I went to. I remember waiting in a huge crowd of people to get into the expo, then pushing through the explosion of "Can I scan your badge?" I remember checking into the Excalibur and thinking Vegas was the most amazing, and yet disgusting, city I had ever seen. Every time I went to Mandalay Bay for the sessions, though, I was blown away at how amazing it was. It was so big, with an insane number of sessions around networking, compute (NUMA, DRS, etc.), and storage. Storage was huge that year if I remember correctly, but I was there for automation, the redheaded stepchild of VMware. I still remember the feeling of elation at being part of a community, learning about vExpert, VMUG, and more. It was a core memory, and one I look back on.
To say this Explore is a shell of what it was is an understatement. The expo is half its former size, the sessions are smaller, and the long bathroom lines are much easier to navigate (on the positive side). The problem with VMware isn't the tech, and has never been the tech. It's how VMware deals with its customers and partners, and with only around 4,000 people total at VMware Explore this year, it's easy to see that relationship has been strained. Like a bipolar kid trying to tell their parents where they want to eat, VMware has bounced all over the place, telling customers what licensing they have, what pricing, how it will be deployed/consumed, and more. It's been a wild ride.
But…
VMware is coming back, and some of the already announced integrations and products definitely turn my head. The technology of VMware has always been stellar. Hard times come to us all, and hard times have certainly come for VMware; not from a business perspective, because Hock is making his money back, but on the customer relationship side the problems are plentiful.
Now that Explore is over, there are a ton of things to dig into: VCF Operations and Automation doing some crazy new things (of course I'll be playing with that in the lab), as well as additional fun around Private AI. I know NVIDIA's push toward a Kubernetes framework for AI will push customers to adopt it, allowing new capabilities for their applications and AI integration. Very cool things are coming this year, and I can't wait for 2026.
As I keep working with customers and friends on challenges with VMware, it's clear that VMware has not lost its teeth, nor has it lost its budgetary capabilities. The problem VMware had was "naked vSphere": people utilizing vSphere and vCenter and nothing else. It was truly sad when I saw these customers, because they only cared about the virtualization, and most didn't even use it for HA or vMotion. By moving to VCF, these customers must decide between an enterprise platform or an alternative (Proxmox, KVM, Hyper-V, etc.). This removes the half-way option, but it does increase the cost.
I may be optimistic, and I may see issues that are not prevalent among many customers, but I know the functionality and options available go far beyond what naked vSphere customers even knew was possible.
"Nathan, good to see you're not dead!" "Better than the alternative, Plankers!" For those of you who know the lovely curmudgeon that is Bob Plankers, you can hear his voice in this paragraph. The man KNOWS security in VMware and is a pillar of the community and the company. Just like that, I'm reminded why I love VMware, and I realize my worst fears are not true. The community is still here.
While walking around I got to talk to Jeremy Mayfield, another great person to talk to about different issues and challenges, and just a pinnacle of a human being. Talking to these people reminded me so vividly how VMware brought people together, and keeps doing it.
That's right, I said it keeps doing it. I simply saw a stranger wearing a Texas Tech shirt and mentioned my in-laws were in Ransom Canyon, and just like that, we dove into a VMware discussion about ELAs, VCF, and disaster recovery. Nothing in-depth or shocking, but at this conference it's small talk, like asking, "Where are you from?" or "Did you see the game last night?" The difference is the type of person you're talking to, and what's important to them that week.
Finally, meeting the vSpeaking podcast duo, John Nicholson and Pete Flecha, and being absolutely flabbergasted at the transformation those gentlemen have achieved. It's been two years since I've seen them, but the change they've made is amazing. Perhaps you've seen it, but if not, these humans are amazing.
On LinkedIn I saw a friend saying he was at Explore, and I replied just to say, "Let's meet while you're here," only to have two more friends reach out to me to meet as well.
VMware is home. It may have its problems: a patchy roof, a remade kitchen that didn't need to be touched but is all new anyway. And yes, the electric and water bills are way more than what you thought they were going to be, but we found a path forward, and we are all here together, figuring it out, challenging the company for its decisions, and still advocating for the company that built our careers and capabilities. Now, just like in Landman: "Alright Monday, let's see what you got."
Anyone who listened to the IT Reality US podcast with myself, Vince Wood, and The Legend Richard Kenyon knows how much Apple irks me. Be aware this comes from someone writing this on an iPad Pro with its Magic Keyboard, while looking at his iPhone for notifications and wearing his Apple Watch Ultra 3 (I need help). But the anger and rage I dispel on that podcast has never been out of hate or disdain for the company; far from it. I grew up admiring Bill Gates and Steve Jobs. I remember seeing a promo for Pirates of Silicon Valley and thinking, "Nerds are so cool." I still hold to this day that Apple is a shadow of what it once was. The products they release still have a "beauty" to them that's difficult to put into words (because I can't), and the combination of hardware plus software makes their products so well defined and capable that I just adore them. However, they don't release anything "new." Almost everything new is just a feature someone else did first, reworked into their own "version," and when they do release something new, it's not even at the stage Steve would have released it. I'm not saying Steve Jobs was perfect; he was arrogant, stringent, and a bully. He wanted things his way, and was so narcissistic that it had to be "perfect."
But that's all why I got into Apple products. Why I remain with them is because it's so damn hard to move from them to something else. Every time I do, I find myself coming back to Apple. I swapped my iPhone for Android, Microsoft, Google, and back to Apple. My watch went from Ultra, to Whoop, to Garmin, and back to Apple. Somehow it turns out cheaper for me to just stick with Apple. So in that sense, my anger and rage at Apple is just because they could do so much more, and better, if they just thought, "Do people really need 10% more performance with a battery that lasts 4 hours on a watch?"
It's easy for me to draw this discussion to VMware. My career was built in VMware, and now, as a cloud guy focusing on AWS/Azure/GCP/Oracle, I find myself all over the place with nothing else to think of outside that area. However, VMware always brings me back. The community, the tech; it just feels like a warm blanket that makes me smile and feel cozy and happy. It's weird, I get it, but the feeling is truly there.
The last two and a half years (probably longer; I would say it started when Pat left) have been rough for VMware. The Broadcom acquisition has sent customers spiraling into fits of confusion, anger, competitor evaluations, and sometimes pure rage. I've seen it firsthand, as I'm sure many of my readers have felt and seen. VMware by Broadcom has become my Apple. I find myself continually saying, "WHY ARE YOU DOING THIS? JUST STOP. AND WHY? AND MAYBE DON'T?"
VMware by Broadcom should be coming out of a phase change. With the new release of VCF 9 they have the attention of customers again. Perhaps the next release, whenever that is, will pour more fuel on the fire to get people interested. Customers are interested again, but they are interested in the same way I watch every Apple release with bated breath, only to rage afterwards. No one wants new widgets; they want a company that solves the problems customers have, and have kept having. They want to see something that changes their lives for the better.
This week at Explore, I'm going to voice my opinions and thoughts on what I see, starting with this weekend and continuing with a daily review. I hope you will join me, because we all hope to see a glimpse of the VMware we knew. Just like I still want to see the Apple I grew up with.
I asked a friend yesterday what the next Chinese New Year was about. I was informed it was the Year of the Snake, focused on "transformation." I'd have to say that fits the new year for me in 2025 and my current goals.
When I started this journey about 7 years ago, I was just an automation engineer hoping to build toward a career. It was almost 16 years ago that my first IT job ended with my boss telling me, "You don't have the mental capability to work in this industry." That still echoes in my ears each day. I worry continually whether I'm "enough" or whether I can cut it.
Now I'm a Director, and will soon have engineers I have to manage and work to help grow, develop, and treat with the same kindness and care I was given when I restarted my career after that statement 16 years ago. I find it interesting that the pains that hurt us most in the past still sit with us further and further into a career where people want me to succeed. This past year, I got a couple of things done that most thought were impossible: growing partnerships, earning certifications, and passing several audits. Now in 2025 I, like many others, need to figure out how to fit AI into my work style and use it to move faster, smarter, and harder. I still find AI very curious at this time. It's going to be a major player in 2025 as it was in 2024, but I think many enterprises are working on much different challenges.
Security is still a major issue, with me reading news reels about foreign powers hacking into our own government systems, plus the usual day-to-day headlines about another company fallen to a hack. Broadcom has disrupted the hypervisor field, and every enterprise is asking, "Do we stay with VMware, or do we move to a different one?" With even more changes and challenges from VMC on AWS and other options, I see 2025 continuing those questions, and new hypervisors taking more of the market. I mentioned AI is a curious piece right now, but one thing is sure: there are many tools built with AI that we will see adopted in 2025, from app-dev scripting support, to data analysis and presentation, to computer vision and more. Some dismiss these AI solutions as "legacy" AI, but with LLMs growing in adoption and hypervisors adopting the services even more, I see utilization of these services growing exponentially in 2025.
As for myself, I mentioned that I will be learning a whole new subset of skills around being a manager, and helping people grow as I have been helped to grow myself. But here is the quick list of what I want to do personally in 2025:
Need to renew my professional AWS certification
Need to start growing partnerships with a different cloud provider
Time to certify in a different cloud (Microsoft? Google? Oracle?)
I plan on doing the GenAI bootcamp with Andrew Brown and learning how to build solutions with these toolsets
Outside of the GenAI bootcamp, I want to learn how these new AI solutions can work for enterprises and grow adoption
I need to get back into blogging, and creation of ideas, and maybe start a podcast back up. I miss spending time with smart people talking about what they are doing.
Pretty short list, but a lot to unpack.
As for my curiosity about AI: I've seen cloud and Kubernetes claim to drastically change the workplace, applications, and more. Now AI claims to change even more than both. However, neither cloud nor Kubernetes drastically changed how we work and how we solve problems. Will AI be the solution that changes those things, or will it fall into the same bucket as cloud and Kubernetes, where it works for some but not all? One thing is sure: 2025 will be a fun year. I look forward to it.
For Cloud Field Day 21 we listened to three hours of sessions on new solutions from VMware by Broadcom. I had originally intended to write individual posts, but the information came so hot and heavy from VMware and my fellow delegates that it was hard to keep up, so instead I'll place it all in one centralized location.
I have two other blogs covering the high level of both VMware and VCF. Here I'm going to try to cover the deeper engineering slides of VCF and the different solutions built within it.
VCF Consolidation and Migration
When you are building VCF, you will eventually face the challenge of combining VCF environments. Lots of customers have run into this problem, and now VMware has an answer: SDDC Manager now ships a script to import a cluster so that everything can be managed by a single SDDC Manager.
HCX has been around for the VMC on AWS solution sets, but now it's available for VCF customers. HCX is only a migration tool; it is not a replacement for DRS and does not work the same way. HCX works by creating an interconnect from one site to the other, then setting up the network landing zones between them, and configuring the individual pieces needed to ensure availability across both locations. A cool callout was that most hyperscalers already have an HCX service set up to help customers looking to migrate workloads onto their platform. I've used HCX several times and, to be honest, figuring out the networking pieces is a challenge and difficult to get working, but once it's working it's a solid solution that has very rarely failed me migrating workloads to and from an endpoint.
VCF Monitoring
Monitoring a VCF deployment is a key part of a VI Admin's day-to-day life. After waking up from, hopefully, no PagerDuty alerts, they come into the office, and normally the first thing they look at is the health of their environment. This is all done through Aria Operations within the VCF licensing.
The dashboard provided by Aria Operations shows a quick snapshot of the alerts that need to be investigated, plus things like cost and capacity. From there you can dig into the workbench and dashboards to investigate further.
As a past user of Aria Operations, I really appreciated the ability to dive into alerts and see what caused them, because investigating an alert is difficult for VI Admins, especially when additional alerts arrive from outside their domain. For instance, a CPU/memory alert on a virtual machine is almost always an application issue, but figuring out what caused it and how it needs to be addressed isn't always a question the VI Admin can answer alone, so they will need to work with additional groups. One aspect I enjoyed hearing about on the monitoring side is just how much granularity you have to manage the environments. You can adjust where the moderate-risk and critical-risk levels sit, so thresholds are honed to the organization that Operations is deployed in. On top of monitoring and management, there is the ability to upsize and downsize virtual machines, as well as schedule jobs to support these regular tasks, so VI Admins can get a break from the mundane work they have to do daily.
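The adjustable risk bands can be thought of as a simple threshold ladder. Here is a minimal sketch of that idea; the band names and percentages are made up for illustration, not Aria Operations' actual defaults or API:

```python
# Hypothetical sketch of per-organization risk bands, similar in spirit
# to what Aria Operations lets you tune. Thresholds/labels are invented.
RISK_BANDS = [
    (90.0, "critical"),   # >= 90% utilization
    (75.0, "moderate"),   # >= 75%
    (0.0, "healthy"),     # everything else
]

def classify(cpu_pct: float, bands=RISK_BANDS) -> str:
    """Return the first band whose threshold the utilization meets."""
    for threshold, label in bands:
        if cpu_pct >= threshold:
            return label
    return "healthy"

print(classify(95))   # critical
print(classify(80))   # moderate
print(classify(40))   # healthy
```

Tuning the numbers in `RISK_BANDS` is what "honing it to the organization" amounts to: a shop running hosts hot on purpose can push the critical line up instead of drowning in alerts.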
VMware, and Kubernetes
The first direct statement VMware made is that what we are seeing, and what is illustrated above, is not Tanzu; this is VCF and what is available out of the box with it. VMware Kubernetes Service is the new name for an old friend: it is what is used to build Kubernetes clusters on-premises, with additional integrations within the clusters to allow external connections and solutions to be added to the environment.
From my perspective a lot has been repurposed from TKG here, but the critical piece illustrated above is that there is now a YAML output for deploying the cluster. Letting users understand what is deployed, and giving them a quick, repeatable way to deploy the same cluster again, is powerful enough to change how an organization works day to day. There was some discussion about who would fill out the request to build the cluster, and really, the answer is to work with developers to understand what they are looking to provide, and help build the cluster required to meet those needs based on the available releases and the node requirements. At the end, Kat showed the ability to update the supervisor cluster to deploy newer versions of Kubernetes, and it really showed how simple it now is to update your releases and make them available to customers.
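To make the "repeatable YAML output" point concrete, here is a toy sketch of generating a declarative cluster spec. The API group, kind, and field names are invented for illustration; they are not the actual VKS schema:

```python
# Toy generator for a declarative cluster spec, to illustrate why a
# YAML output is repeatable. Field names are illustrative only, NOT
# the real VKS schema.
def cluster_spec(name: str, release: str, workers: int) -> str:
    return (
        "apiVersion: example.vmware.com/v1  # illustrative API group\n"
        "kind: KubernetesCluster\n"
        "metadata:\n"
        f"  name: {name}\n"
        "spec:\n"
        f"  release: {release}\n"
        f"  workerNodes: {workers}\n"
    )

spec = cluster_spec("dev-cluster", "v1.29", 3)
print(spec)
```

The value is that the same spec file can be re-applied later (or in another environment) to rebuild an identical cluster, instead of clicking through a wizard from memory.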
VMware Automation
The automation area is definitely a passion of mine. I've worked with vRealize Automation and Aria for the past 10 years and have enjoyed the changes along the way. The new VMware Automation integrates a lot of the VKS capabilities shown earlier in the Kubernetes space and expands on them with what Automation can now do. For those who have not worked in the space, there was a real tension between Aria Automation and Tanzu, because they both worked in the same space, and it ended up being a choice based on the customer's culture and what they were comfortable using. Now there is a direct integration between Kubernetes and Automation, bringing self-service to a catalog so users can build clusters themselves, instead of only through the administrator.
Hello canvas my old friend…
In the demo video we were able to see Aria deploy a Kubernetes cluster within a supervisor namespace and then deploy services within Kubernetes. These capabilities have been discussed before, but this is the first time I have seen them working. On top of seeing the solution actually build a cluster and its services, when you go into the resource to look at what was built, you can see the command to log into Kubernetes.
VCF Security
It's a joy to hear Bob Plankers, just because I don't think he knows how to speak down to people; up is the only way it works with him. Speaking to the goal of VCF security at a high level is a good direction for VCF. I work with government groups that need to ensure insane security and compliance, so it's worth the time to discuss what VCF can achieve in its deployment. Security can be a challenge even for air-gapped environments, so the idea of a solution that is secure right out of the box is a wonderful start, from start to finish.
Security out of the box starts with the box. This is why VMware has been working with a number of providers, including CPU manufacturers and other groups, to enable security at the hardware level, followed by additional layers of security, like VIBs and the software run virtually on top of the box.
The next layer after deploying on the hardware is the hypervisor itself. This includes encryption, key persistence, and more, as illustrated above. This is critical for organizations that need to utilize security solutions at the hypervisor level.
The next layer is vCenter: how is it secured? After that come the workloads, because, as Bob put it, "no one really wants to just run vCenter." Finally, Cloud Foundation itself needs to be secured.
Taking a holistic view of security out of the box is a great direction for the leading hypervisor on the market. Bob also discussed using declarative code to state how hosts should be built, as well as fast patching that updates hosts without disruption and downtime in the environment. This has been shown with different operating systems as well; it entails using virtualization to patch services and then migrating back what was updated to ensure uptime. Lifecycle Manager is also enhanced to add and remove components based on the hardware manufacturer, making it easier to handle the firmware and hardware updates needed to run properly. vCenter updates now follow the same methodology: build a new vCenter, update it, then migrate the data over, which solves the downtime problem for vCenter, allowing only blips of downtime rather than potentially long outages. The goal is to ensure customers actually patch and use the security solutions within VMware instead of holding off because of organizational culture.
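The declarative-host idea boils down to "state the desired config, diff it against reality, remediate only the drift." A toy sketch of that loop, with invented setting names (not actual ESXi configuration keys):

```python
# Toy sketch of declarative host configuration: declare what a host
# SHOULD look like, compare with what it IS, remediate only the drift.
# Setting names/values are invented for illustration.
desired = {"ssh": "disabled", "ntp": "pool.example.org", "tls": "1.3"}
actual  = {"ssh": "enabled",  "ntp": "pool.example.org", "tls": "1.2"}

# drift maps each out-of-spec setting to (current value, wanted value)
drift = {k: (actual.get(k), v) for k, v in desired.items()
         if actual.get(k) != v}

for setting, (current, wanted) in drift.items():
    print(f"remediate {setting}: {current} -> {wanted}")
```

The appeal is that remediation is idempotent: run it twice and the second pass finds an empty drift set, which is exactly the property that makes regular patching safe to automate.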
Conclusion:
If it wasn't for VMware, I wouldn't be in tech at the level that I am. I grew as a VI Admin and automation specialist, and that started with learning how to migrate or reboot VMs, then building them, then managing and solving problems within the VMware environment. VMware Cloud Foundation is a solid solution for enterprise customers, and this iteration shows the next level of capability. The newer things shown, like Kubernetes integrated with Aria and the ability to merge clusters within VCF, are great additions to a solid offering. Looking at the Kubernetes side, adding VKS to the stack and decoupling it from vCenter is a solid choice, as that integration has caused vCenter crashes and extra support requests from customers to remove what needed to be removed.
As a whole, from start to finish, VMware came and stood their ground before a hostile group. We all know the challenges customers are running into with VMware and licensing, but the group we saw understood the missteps of the past year and wants to fix them in the next. I think there is hope here for folks anxious about their VMware environment and licensing, but only time will tell.
The second presentation is a deeper dive into VCF itself. What are the questions it is trying to answer? What is the solution really built on?
What is VCF trying to solve?
What is in VCF?
I think what was presented today really answers some specific questions about the direction of VCF, but it still makes massive assumptions about the customer groups they are trying to work with. For instance, the idea that there are builders and consumers assumes that an environment has developers building applications, and isn't a use case where the admins are simply running off-the-shelf software that just needs to be kept up.
I think the challenge is that VVF is for the COTS shops, and VCF is for the enterprise groups that VMware works with directly to help them succeed. This means VMware will be able to help larger customers find success, while smaller customers use less and don't get all the capabilities. But it also means that if you are between the two, you don't have a real path to utilize all the solutions available in VCF, beyond whatever help comes through support and professional services. In this guy's opinion, that's not a great answer, but it's the answer we have for now.
For the second day of #CFD21 we were joined by VMware by Broadcom to help us understand some of the new solutions they are building as a private cloud vendor. The VMware and Broadcom partnership is approaching its first year since the acquisition. I remember a lot of folks back in November wondering how the two would come together and what that would mean for customers. After a year, I think the answer is quite direct and clear: private cloud, in the customer's hands, instead of driving them to a cloud provider outside their engineers' hands.
The story after this acquisition is simplifying the many different licenses VMware had, from NSX, vSAN, vRealize, Tanzu, and many other products, into a single product. The combined licensing is now being moved into a single software release, meaning once all the functionality is under the same software, updates all come under the same software upgrade and management.
“There is no boundaries keeping customers from purchasing VCF vs VVF. This has been addressed by the CEO of the company.”
I'd like to see where this blog is, because I've run into this several times while trying to figure out which option to propose for customers. Lots of groups out there are being told they can only sell VCF to specific customers. If this is true, then it's a good direction for Broadcom, and it should be touted and screamed from the rooftops.
Prashanth spoke specifically to what private cloud means for VMware, and stated that VMware does not see private cloud as *JUST* the on-premises datacenter, but as a configuration of all the individual "clouds" an organization uses. The goal behind VCF is to run the same platform at all deployment locations. See below:
Good to see the OEMs listed here. The hardware vendors clashed with Broadcom early in the transition of their partnerships, but things are getting better now that they are working together again. Also, the portability of VCF licensing, if true, is an amazing way to migrate workloads to and from hyperscalers.
Conclusion:
From the business perspective, the goal behind VMware and Broadcom is a real challenge without direct leadership and clear communication to customers. The meeting with VMware today was a step in the right direction, and if they engage and execute, they will earn a different perception of VMware than we have had in the last year. Today we heard VMware say, directly, that they could have done things differently to help customers. The first year is behind us; perhaps the next could be the year we see a new VMware that customers can get behind again.
Next up was Qumulo, a company figuring out how to handle exploding data growth for customers while managing issues that may not be the usual ones within the customer environment.
A great illustration of the pricelessness of data: "Within this data is the cure for cancer, but how do we ensure the durability of the data long enough to find it?!"
Qumulo is built to remove barriers for customers rather than fix one problem in a way that creates others. The goal is to be everywhere (in hyperscalers and on-premises), to be able to move data everywhere, and to run as a customized, low-usage, cloud-native solution. Finally, the network is a key component of migrating and moving data, so Qumulo is working to fix the required sharing of data by building the engineering needed to radically change how we use data.
Qumulo is wildly expansive in the capabilities they are building, but also in how they are attacking the problems customers have. This starts with the confusion of multiple platforms and extends into how the data is replicated, stored, secured, and more.
If you're wondering how Qumulo is making data easier, here is your picture:
Qumulo Nexus combines all the individual storage solutions across on-premises and in the cloud, allowing customers to utilize a single solution that addresses issues from multiple endpoints.
Qumulo looked at how the cloud is used for the majority of data use cases. The reality is most users just threw data into S3 or Azure Blob, which is object storage, not the file storage that would be used for AI and other solutions. Qumulo then worked with hyperscaler engineers to build a solution on top of the object storage (S3/Azure Blob), figuring out how to integrate multiple buckets and stripe across each bucket as if it were a spinning disk in a datacenter, dramatically improving performance without 10x-ing the cost.
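The "treat buckets like spindles" idea is classic striping: spread consecutive blocks round-robin across backends so I/O fans out in parallel. A minimal sketch of that placement math; the bucket names, stripe unit, and layout are my assumptions for illustration, not Qumulo's actual (proprietary) design:

```python
# Sketch of striping file blocks across object-storage buckets the way
# RAID 0 stripes across disks. Stripe unit and bucket set are invented.
BLOCK_SIZE = 4 * 1024 * 1024  # 4 MiB stripe unit (assumed)
BUCKETS = ["bucket-0", "bucket-1", "bucket-2", "bucket-3"]

def place_block(file_offset: int) -> tuple[str, int]:
    """Map a byte offset in a file to (bucket, block index within bucket)."""
    block = file_offset // BLOCK_SIZE
    return BUCKETS[block % len(BUCKETS)], block // len(BUCKETS)

# Consecutive blocks land on different buckets, so reads and writes
# hit four buckets in parallel instead of hammering one.
print(place_block(0))               # ('bucket-0', 0)
print(place_block(BLOCK_SIZE))      # ('bucket-1', 0)
print(place_block(4 * BLOCK_SIZE))  # ('bucket-0', 1)
```

Because each bucket has its own request-rate and throughput limits, striping across several of them multiplies effective bandwidth without changing the per-GB storage price, which is the "performance without 10x the cost" claim.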
How Qumulo sees their resources
How AWS sees Qumulo in their solution
We were able to see the deployment of CNQ with Terraform, and it was really cool to see what is possible with their IaC to get it up and running. Smaller shops will run automation between production hours and off-hours to bring the storage infrastructure up and down to reduce cost.
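That up-during-business-hours pattern is easy to drive from a cron-style job that picks the Terraform action from the clock. Here is a hedged sketch; the hours and the bare `terraform apply`/`destroy` invocation are my assumptions, not part of the CNQ Terraform module itself:

```python
# Sketch of "storage up only during production hours": decide from the
# clock, then drive Terraform. Business hours and the exact terraform
# invocation are assumptions for illustration.
import subprocess
from datetime import datetime

PROD_START, PROD_END = 7, 19  # assumed 07:00-19:00 local business hours

def storage_should_be_up(hour: int) -> bool:
    return PROD_START <= hour < PROD_END

def reconcile(now: datetime) -> list[str]:
    """Return the terraform command matching the time of day."""
    if storage_should_be_up(now.hour):
        return ["terraform", "apply", "-auto-approve"]
    return ["terraform", "destroy", "-auto-approve"]

if __name__ == "__main__":
    cmd = reconcile(datetime.now())
    print("would run:", " ".join(cmd))
    # subprocess.run(cmd, check=True)  # uncomment inside the real workspace
```

Run it from cron every hour and the environment converges to the right state either way; the destroy path is what actually realizes the cost savings overnight.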
Incredible cost reduction. **NOTE** This is the entire cost of the solution.
After deploying the Terraform script, the end result is a platform with a nice dashboard.
Conclusion
Qumulo had a great presentation at CFD21, really focused on turning difficult solutions into something easy, durable, and usable for customers trying to solve their file storage problems across multiple levels. Qumulo is even looking at how far they can take the solution for GDPR compliance and "no-cloud" customers, allowing the data to be processed in the cloud but not stored there. It's great to see a vendor locked onto customers' issues and dramatically challenging how things work in order to solve the major problems customers face continuously.
Kicking off CFD 21, we got to listen to Platform9. I'll be honest: I don't know much about Platform9, but I know the name "Cloud Director" and have used it briefly. Given what has happened in the past months with Broadcom, hearing an old name from a new company made me very excited.
With the changes at Broadcom, the challenge for customers is dealing with higher costs across both private and public cloud. Enter Private Cloud Director:
With PCD you have the ability to build your virtualization clusters, Kubernetes, and networking, helping customers figure out how they want to build solutions. This is a direct response for those moving away from VMware to something else.
If you're wondering what the goal of this is, a picture is worth 1,000 words.
Next we looked at a demonstration of Platform9 PCD migrating an application. The goal of the demonstration was to move an application while keeping the database in the central vSphere location. I have to admit this isn't something we normally see in the field; most customers moving an application would move the required database at the same time.
The migration of an application in platform9
During the demonstration we were shown the different cutover options for data and for the availability of the VMs involved. I normally see these options in migration tools, but the granularity here is very refreshing; if they are not set, the default is a complete cutover. It's worth noting the migration demonstration failed because of a data copy, but it's also worth noting that the live demo gods do what they want. I've been the guy behind the keyboard trying to build and demonstrate software in front of customers, and when it fails, it tends to fail dramatically.
Architecture of PCD
Looking at the architecture of Platform9 PCD, the main deployment is SaaS, with an agent that calls back to the service. Self-hosted usage is limited to the management plane, while the other components can run outside it. The agent can be deployed by normal automation solutions or through manual processes.
What about Day 2?
The difference between day 0, day 1, and day 2 is quite large in the engineering space. Day 0 is the big move: managing the exchange of data, networking, and compute. Day 1 is validation and continued exchange. Day 2 is when the migration is complete and the machines are ready for production. Day 2 is normally the day "after" automation, so what can PCD do now?
This graph shows Platform9's ability to adjust and manage the solution over time. When I see this I think of all the k8s startups that don't make the software itself but offer production support for it.
Coming back to the demonstration, it's interesting that both migrations showed as completed. Using the migrated application, the website is still up and running and able to show the cart and its contents.
As a major point, this was all done with open-source software found here: https://github.com/platform9/vjailbreak if you want to have some fun with your local VMware lab. It can only migrate from VMware to OpenStack, and will probably not add anything more as it moves forward.
Now, hearing from the engineering side of the cluster:
It's good to mention the callouts on the assumptions here, because that truly shows the maturity of the software and lets us know where we can start. The only hypervisor used is KVM/QEMU, which makes sense with their OpenStack platform, allowing PCD to grow from VMware into OpenStack.
Networking is a big deal too.
The automation pieces for deploying new VMs also fit into the integration story. If you are going to automate deployment of a virtual machine, you have requirements for networking, storage, and normally compute sizing (t-shirt sizes like small, medium, large, etc.).
For deploying new VMs the usual suspects are all here: cloud-init to call in and bootstrap the OS, images to define the type of OS and metadata that gets deployed, and flavors to define the sizes of the virtual machines that can be deployed. This all works together to let administrators deploy virtual machines, with multi-tenancy to support self-service.
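To make the flavor/image/cloud-init relationship concrete, here is a minimal sketch in Python. The size names, specs, and `build_server_request` helper are my own hypothetical illustrations of the OpenStack-style concepts PCD builds on, not PCD's actual catalog or API.

```python
# Hypothetical t-shirt sizes mapped to OpenStack-style flavor specs.
# These numbers are illustrative, not PCD's real catalog.
FLAVORS = {
    "small":  {"vcpus": 2, "ram_mb": 4096,  "disk_gb": 40},
    "medium": {"vcpus": 4, "ram_mb": 8192,  "disk_gb": 80},
    "large":  {"vcpus": 8, "ram_mb": 16384, "disk_gb": 160},
}


def build_server_request(name, image, size, network):
    """Assemble an OpenStack-style server create request for a t-shirt size."""
    if size not in FLAVORS:
        raise ValueError("unknown size: " + size)
    # cloud-init user-data bootstraps the OS on first boot
    user_data = "#cloud-config\npackages:\n  - qemu-guest-agent\n"
    return {
        "name": name,
        "image": image,                  # image defines the OS and metadata
        "flavor": FLAVORS[size],         # flavor defines the VM size
        "networks": [{"name": network}],
        "user_data": user_data,          # consumed by cloud-init at boot
    }
```

An administrator (or a self-service tenant) would feed a request like this to the platform; the pieces are the same ones the session called out: image, flavor, network, and cloud-init user-data.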
HA and resource rebalancing are also available, supporting customers who need to maintain resources but also like to sleep. This includes availability zones, high availability, dynamic resource rebalancing, sophisticated scheduling, and more, giving users coming from VMware a lot of the creature comforts they have relied on for years on a completely different platform.
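The core idea behind dynamic resource rebalancing can be sketched in a few lines. This is my own toy placement function, not PCD's scheduler; real schedulers weigh far more signals (CPU, affinity rules, availability zones) than free memory alone.

```python
# Toy placement sketch: put the VM on the host with the most free RAM.
# Purely illustrative; a real scheduler considers many more constraints.
def pick_host(hosts, vm_ram_mb):
    """hosts maps host name -> free RAM in MB; return the best fit or None."""
    candidates = {h: free for h, free in hosts.items() if free >= vm_ram_mb}
    if not candidates:
        # No host fits; an AZ-aware scheduler would retry in another zone.
        return None
    return max(candidates, key=candidates.get)
```

Run periodically against live utilization numbers, the same kind of logic is what lets a platform rebalance workloads overnight while the admins sleep.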
Let's talk K8s
PCD makes Kubernetes easier to manage by keeping the control plane in the SaaS solution, so PCD manages it for you. This follows the same policy as their virtual machines and hosts, where you get two minor versions of flex before an upgrade is required. Within the controlled SaaS environment there is also an opinionated deployment of Grafana and Prometheus to manage and monitor the pods and deployments in a given area. Migrating between Tanzu and another Kubernetes distribution is more than pointing a CI/CD pipeline at a new endpoint; there is internal opinionation that needs to be ripped out and replaced between one solution and the other.
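The "two minor versions of flex" policy is easy to express in code. This is my own sketch of the idea as I understood it from the session; the version format and policy details are assumptions for illustration.

```python
# Sketch of a "two minor versions of flex" upgrade policy.
# Assumes "major.minor[.patch]" version strings; purely illustrative.
def upgrade_required(current, latest, flex=2):
    """Return True when `current` trails `latest` by more than `flex` minors."""
    cur_major, cur_minor = map(int, current.split(".")[:2])
    new_major, new_minor = map(int, latest.split(".")[:2])
    if cur_major != new_major:
        return True  # a major-version gap always forces an upgrade here
    return (new_minor - cur_minor) > flex
```

So a cluster on 1.29 can coast while 1.30 and 1.31 ship, but once 1.32 lands the upgrade is no longer optional.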
Conclusion
Private Cloud Director is a really strong headline for Platform9. The issue they are addressing directly is the Broadcom problem affecting the 80-90% of enterprise customers using some VMware solution. However, the challenge is going to be how they help customers adopt a new solution they don't understand. OpenStack was called out as a VERY high-touch solution for enterprise organizations, one that requires its own team. Platform9 is able to fill that gap, help customers get into OpenStack, and support them once they're there, effectively attacking the big issues customers need to address. The future is bright with this one, and I look forward to seeing what comes next from Platform9.
On to WEKA: cloud workloads with HPC and AI/ML in the cloud, launched at re:Invent in 2017. WEKA provides a data platform to support the major data-driven solutions around HPC and AI/ML, which is exactly where it fits in.
Now I have to admit I don't know a ton about AI/ML pipelines in enterprise use cases, so a picture is worth 1,000 words.
With the number of steps, all involving data copies as the model learns from its own data, you can truly see how data in these workloads is a pain to manage and keep.
WEKA is available in four cloud marketplaces: AWS, Azure, GCP, and Oracle Cloud. It is deployed via CloudFormation or Terraform and lives within the network cordoned off for it (VPC, virtual network, etc.).
Personally, I believe it's a hybrid world more than anything else. Most organizations I see are not just running things within one or two clouds; they also have on-premises locations and need to be able to connect them into their solutions. If only it were performant…
Oh, it is. Worth mentioning that the 2 TB/s figure is in OCI, and OCI is probably one of the most powerful cloud vendors out there.
It's always great to understand the hidden costs of a solution, so I applaud WEKA for including this slide to show the hidden benefits of adopting WEKA as well. WEKA has even created a guarantee for customers that it can cut their infrastructure bill in half (outcomes may vary).
LET'S TALK NERD KNOBS!
What does WEKA look like in AWS?
A cluster of i3en servers with autoscaling; however, you do not need bare-metal instances to use WEKA. WEKA will assist customers in sizing and scaling their environment and help them understand the total cost of ownership. This also includes networking, as it's a core piece that needs to be validated against the storage needs.
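To show the shape of that sizing conversation, here is a back-of-the-envelope sketch. The per-node throughput figure and minimum cluster size are made-up assumptions for illustration, not WEKA specs; real sizing comes from WEKA working with the customer.

```python
import math

# Back-of-the-envelope sizing sketch. The 2 GB/s-per-node figure and the
# 6-node floor are invented assumptions, NOT WEKA's published numbers.
def nodes_for_throughput(required_gbps, per_node_gbps=2.0, min_nodes=6):
    """Return the node count needed to hit a target aggregate throughput."""
    return max(min_nodes, math.ceil(required_gbps / per_node_gbps))
```

The point is less the arithmetic than the inputs: target throughput, per-node capability, and a minimum cluster size are exactly the knobs that drive both the instance choice and the TCO discussion.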
The world is hybrid, so the ability to replicate that data copy by snapping to object storage is awesome.
When you need WEKA it stands up and runs the solution, but it's also able to scale to zero, removing the infrastructure once the data has been snapped and pushed back on-premises so work on the data can continue.
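The scale-to-zero decision described above boils down to a simple rule. This is my own hypothetical illustration of the flow, not WEKA's autoscaling logic; the function name and node counts are assumptions.

```python
# Hypothetical scale-to-zero rule: once the data is safely snapped to
# object storage and no jobs remain, tear the cluster down to stop spend.
def desired_cluster_size(active_jobs, snapshot_complete, min_nodes=6):
    """Return how many storage nodes should be running (illustrative only)."""
    if active_jobs == 0 and snapshot_complete:
        return 0  # scale to zero: data lives on in object storage
    return min_nodes  # keep the floor of the cluster up while work is in flight
```

When the next job arrives, the inverse happens: the cluster stands back up and rehydrates from the object-storage snapshot.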
I work with a number of incredibly smart individuals who work around AI/ML and the needs for GPUs, performant data structures, and other solutions. With WEKA I have been genuinely shocked by the depth they have gone to in fixing the issues their customers are finding, and in helping them address those problems through multiple avenues. First, they make sure that what they provide is performant and usable for customers that need a high-IOPS data platform. Second, they provide a resilient platform built from a DR and HA perspective so it is durable and maintains the customer's solutions. Third, there are multiple form factors that fit into the customer's ecosystem and let them use WEKA in a way that fits their needs and their budget; the cost performance of WEKA is amazing, with the ability to even scale to zero so there is no consumption in the cloud. Finally, WEKA steps into the customer's environment as a trusted advisor, helping them consume their AI/ML solutions on the WEKA data platform and evaluate and use it in the best way possible. With these standard ways of working with customers, WEKA ensures the customer is successful and continues to grow from that into the future. I have personally seen the value of WEKA, and with the solutions they have presented, I can see how they will keep their place in the AI/ML and HPC ecosystem.
Interested in learning more about WEKA? You can start at https://start.weka.io to see what you can do with the product.