In the first part I described how to prepare your environment to start using the vRealize Build Tools. In this part I describe how to configure your Mac.
Install and configure the Java JDK: I had some issues getting the vRealize Build Tools, Maven and Java to work together properly. This was due to Java 8 being required by the vRealize Build Tools, while I was already running the latest Java version on my Mac.
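On macOS, one way to pin Maven to Java 8 is to point JAVA_HOME at a Java 8 installation using the built-in `java_home` helper. A minimal sketch, assuming a Java 8 JDK is already installed alongside the newer one:

```shell
# List all JDKs installed on this Mac
/usr/libexec/java_home -V

# Point JAVA_HOME at the Java 8 JDK so Maven uses it
export JAVA_HOME="$(/usr/libexec/java_home -v 1.8)"

# Verify that Maven now runs on Java 8
mvn -version
```

Adding the `export` line to `~/.zshrc` or `~/.bash_profile` makes the selection persistent across shell sessions, while leaving the newer JDK installed for other tools.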
People working with vRealize Orchestrator know that it is a powerful orchestration engine. The UI, however, had not evolved until very recently. Another downside of vRO 7.x is that there is no native integration with versioning tools like Git.
Since version 8 of vRO the old Java client is no longer available; it has been replaced by a purely web-based UI. It is a good thing that investments are finally being made to modernize vRO.
In Part 1 I described how to configure the prerequisite AD endpoint and AD policy in vRA. In this part I describe how a custom hostname workflow can be combined with Active Directory policies, and the associated caveats.
The first step is to create a workflow which generates a custom hostname. I will not go into detail on the hostname generation itself, as the associated logic is company specific.
One of my vRA projects had some interesting requirements for its blueprints with regard to Active Directory combined with custom hostnames.
Requirements:

- The customer uses custom hostnames based on parameters provided during the request.
- The computer accounts need to be placed in OUs based on the application installed on the VM.
- They have a multitude of different AD domains.
- Limit the number of blueprints.

The AD account creation could be handled by a custom workflow, but why not use the out-of-the-box Active Directory Policies feature in vRA?
When using the Event Broker Service in vRealize Automation it may be useful to access the AMQP management interface. However, this interface is disabled by default. The following steps allow you to enable the AMQP management interface.
1. Connect to the vRA appliance via SSH and log in as root.
2. Open the RabbitMQ management interface firewall port: iptables -I INPUT -p tcp -m tcp --dport 16572 -j ACCEPT
3. Modify the permissions on the enabled_plugins file.
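Taken together, the steps above can be sketched as a short shell session on the appliance. Note that the enabled_plugins path and the plugin name shown here are the stock RabbitMQ defaults; the actual location and service name on a vRA appliance may differ per version:

```shell
# Open the management interface port in the appliance firewall
iptables -I INPUT -p tcp -m tcp --dport 16572 -j ACCEPT

# Make the enabled_plugins file writable (default RabbitMQ location; verify on your appliance)
chmod 644 /etc/rabbitmq/enabled_plugins

# Enable the standard RabbitMQ management plugin
rabbitmq-plugins enable rabbitmq_management
```

Keep in mind that an `iptables -I` rule added this way does not survive a reboot unless it is persisted in the appliance firewall configuration.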
For one of my customers I had to come up with a solution to automate the vCenter inventory creation. The solution needed to:
- Support multiple vCenters.
- Standardise the vCenter inventory configuration.
- Support multiple vCenter environments, e.g. Production, Lab. vCenter permissions depend on the environment; each environment uses different AD groups.
- Allow engineers without extensive PowerCLI knowledge to make changes to the vCenter configuration.
- Support regular changes, e.g.
I was looking around for some uniform icons for vRealize Automation. I came across Ryan Kelly’s vRA icon pack. This icon pack contains icons for several well-known applications and services. Unfortunately they have neither a uniform design nor a uniform color scheme.
While many commercial icon packs are available, I wanted some free icons for my lab. I came across the @VMwareClarity icons, which have a uniform design. Since they are part of the same UI framework vRA uses, they integrate nicely in vRA.
One of my intentions for 2018 was to relaunch my blog. For several years I have had a blog which I completely neglected. My previous post dates back to 2013 :-(
I decided to migrate my blog away from WordPress, where it was previously hosted. The reason behind this migration is twofold:
- Use the blog to learn additional technologies.
- Economics: I am confident that I can host my blog at a lower price point than is possible with WordPress.
I wrote a PowerCLI script for a customer to organize his datastores into a couple of folders. The script moves each datastore to a folder according to the vSphere host cluster that uses the datastore.
This is done based on the name of the datastore, which contains an identifier for the cluster it is connected to. Datastores which are already organized in folders are not moved, nor are datastores which are part of a datastore cluster.
During failover tests in a stretched metro cluster environment we ran into some problems when recovering from a Permanent Device Loss state (PDL).
The failover tests ran successfully. The vSphere servers reacted as expected when testing a split-brain scenario on the VPLEX cluster. The VMs which were running in the same datacenter as the preferred VPLEX node of their datastore weren’t impacted. The VMs which weren’t running in the datacenter of their preferred VPLEX node were stopped and restarted at the preferred site.