I was trying to create local users and groups on ESXi 6.0 and found that the option (manage users and groups) is not available. The reason: after upgrading to vSphere 5.1 or later, you cannot create a local user group on the host.
So the question that arises is:
How do you create a local user on an ESXi host or vCenter Server via the command line or the GUI?
If you want to create a local user from the vSphere Client on an ESXi host, the steps are below:
1. Log in to the ESXi host via the vSphere Client.
2. Right-click and click Add.
3. Provide the user name (like Sandeep or Shaswati) and a name (like Admin, DCUI user).
4. Your user is created.
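The same local user can also be created from the ESXi Shell instead of the vSphere Client. A minimal sketch, assuming SSH/ESXi Shell access to an ESXi 6.0 host; the user name and password here are examples only:

```shell
# Create a local user on the ESXi host (esxcli namespace introduced in 6.0)
esxcli system account add --id=sandeep \
  --password='VMware1!' --password-confirmation='VMware1!'

# Optionally assign the new user a role on the host
esxcli system permission set --id=sandeep --role=Admin

# Verify the account exists
esxcli system account list
```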
Creating User on vCenter Server via Command line
1. Log in to the vCenter Server via SSH.
2. Run this command and your user is ready:
localaccounts.user.add --role Administrator --username Sandeep --password --fullname Sandeepkaushik --email firstname.lastname@example.org
(There is no need to provide the password inline.)
3. Provide the password when prompted, and your account is created.
In the vCenter Server Appliance 6.0
1. SSH (or use PuTTY) to the vCSA machine.
2. Log in with the user name root and the root password of the vCSA.
3. Type the command to create a user: useradd username, for example: useradd sandeep
4. Set the password for that user with the passwd username command, for example: passwd sandeep
5. Once finished, log in to vCenter, select Permissions, and then configure the user that we just created. Select the desired permissions and then select OK.
When you are installing an ESXi host in production, you have to decide on the following points:
Do you need to install on a SAN LUN, Virtual SAN, USB, HDD, or a diskless host?
First, I will talk about the ESXi hardware requirements.
Make sure the host meets the minimum hardware configurations supported by ESXi 6.0. To install or upgrade ESXi 6.0, your hardware and system resources must meet the following requirements:
ESXi 6.0 requires a host machine with at least two CPU cores.
ESXi 6.0 supports 64-bit x86 processors released after September 2006. This includes a broad range of multi-core processors.
ESXi 6.0 requires the NX/XD bit to be enabled for the CPU in the BIOS.
ESXi requires a minimum of 4GB of physical RAM. It is recommended to provide at least 8 GB of RAM to run virtual machines in typical production environments.
To support 64-bit virtual machines, support for hardware virtualization (Intel VT-x or AMD RVI) must be enabled on x64 CPUs.
One or more Gigabit or faster Ethernet controllers. For a list of supported network adapter models, see the VMware Compatibility Guide.
SCSI disk or a local, non-network, RAID LUN with unpartitioned space for the virtual machines.
For Serial ATA (SATA), a disk connected through supported SAS controllers or supported on-board SATA controllers. SATA disks will be considered remote, not local. These disks will not be used as a scratch partition by default because they are seen as remote.
You cannot connect a SATA CD-ROM device to a virtual machine on an ESXi 6.0 host. To use the SATA CD-ROM device, you must use IDE emulation mode.
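Most of these requirements can be checked quickly from the ESXi Shell of an existing host. A sketch using standard esxcli hardware namespaces (output varies by server model):

```shell
# CPU: package/core counts and hyperthreading status
esxcli hardware cpu global get

# Physical RAM (minimum 4 GB, 8 GB or more recommended)
esxcli hardware memory get

# NICs detected by the host
esxcli network nic list
```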
Now let’s see ESXi Booting Requirements
vSphere 6.0 supports booting ESXi hosts from the Unified Extensible Firmware Interface (UEFI). With UEFI, you can boot systems from hard drives, CD-ROM drives, or USB media. Network booting or provisioning with VMware Auto Deploy requires the legacy BIOS firmware and is not available with UEFI.
ESXi can boot from a disk larger than 2TB provided that the system firmware and the firmware on any add-in card that you are using support it. See the vendor documentation.
Changing the boot type from legacy BIOS to UEFI after you install ESXi 6.0 might cause the host to fail to boot. In this case, the host displays an error message similar to Not a VMware boot bank. Changing the host boot type between legacy BIOS and UEFI is not supported after you install ESXi 6.0.
Storage Requirements for ESXi 6.0 Installation or Upgrade
Installing ESXi 6.0 or upgrading to ESXi 6.0 requires a boot device that is a minimum of 1GB in size. When booting from a local disk, SAN, or iSCSI LUN, a 5.2GB disk is required to allow for the creation of the VMFS volume and a 4GB scratch partition on the boot device. If a smaller disk or LUN is used, the installer attempts to allocate a scratch region on a separate local disk. If a local disk cannot be found, the scratch partition, /scratch, is located on the ESXi host ramdisk, linked to /tmp/scratch.
You can reconfigure /scratch to use a separate disk or LUN. For best performance and memory optimization, do not leave /scratch on the ESXi host ramdisk.
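One way to move /scratch off the ramdisk is to point the ScratchConfig advanced option at a directory on a persistent datastore and reboot. A hedged sketch; the datastore name and directory are examples only:

```shell
# Create a dedicated scratch directory on a persistent VMFS datastore
mkdir /vmfs/volumes/datastore1/.locker-esxi01

# Point the host's scratch location at it (takes effect after a reboot)
vim-cmd hostsvc/advopt/update ScratchConfig.ConfiguredScratchLocation \
  string /vmfs/volumes/datastore1/.locker-esxi01

# Confirm the configured value
vim-cmd hostsvc/advopt/view ScratchConfig.ConfiguredScratchLocation
```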
Due to the I/O sensitivity of USB and SD devices the installer does not create a scratch partition on these devices. When installing or upgrading on USB or SD devices, the installer attempts to allocate a scratch region on an available local disk or datastore. If no local disk or datastore is found, /scratch is placed on the ramdisk.
For environments that boot from a SAN or use Auto Deploy, you need not allocate a separate LUN for each ESXi host. You can co-locate the scratch regions for many ESXi hosts onto a single LUN. The number of hosts assigned to any single LUN should be weighed against the LUN size and the I/O behavior of the virtual machines.
The ESXi installer creates the initial VMFS volumes on the first blank local disk found. To add disks or modify the original configuration, use the vSphere Web Client. This practice ensures that the starting sectors of partitions are 64K-aligned, which improves storage performance.
For SAS-only environments, the installer might not format the disks. For some SAS disks, it is not possible to identify whether the disks are local or remote. After the installation, you can use the vSphere Web Client to set up VMFS.
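Besides the vSphere Web Client, the VMFS datastore can also be created from the command line after installation. A sketch, where the device identifier is a placeholder for your actual disk:

```shell
# List available disks to find the right device identifier
esxcli storage core device list

# Create a VMFS5 datastore on a partition of the chosen disk
# (naa.xxxx:1 is a placeholder -- the disk must already carry a partition,
#  e.g. created with partedUtil, and any data on it will be destroyed)
vmkfstools -C vmfs5 -S datastore2 /vmfs/devices/disks/naa.xxxx:1
```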
How many ways are there to install an ESXi host?
1. Installing ESXi Interactively
2. Installing or Upgrading Hosts by Using a Script
3. Installing ESXi Using vSphere Auto Deploy
4. Using vSphere ESXi Image Builder
Installing ESXi Interactively
Use the interactive installation option for small deployments of less than five hosts. In a typical interactive installation, you boot the ESXi installer and respond to the installer prompts to install ESXi on the local host disk. The installer reformats and partitions the target disk and installs the ESXi boot image. If you have not installed ESXi on the target disk before, all data located on the drive is overwritten, including hardware vendor partitions, operating system partitions, and associated data.
NOTE: To ensure that you do not lose any data, migrate the data to another machine before you install ESXi. If you are installing ESXi on a disk that contains a previous installation of ESXi or ESX, or a VMFS datastore, the installer provides you with options for upgrading.
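For the scripted installation option mentioned above, the installer reads a kickstart file. A minimal sketch of such a ks.cfg; the password and disk selection are example values only, not a production-ready configuration:

```
# Accept the EULA and set the root password (example value)
vmaccepteula
rootpw VMware1!

# Install on the first detected disk, overwriting any existing VMFS
install --firstdisk --overwritevmfs

# DHCP networking on the first uplink
network --bootproto=dhcp --device=vmnic0

reboot
```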
Installing on Virtual SAN
If you select a disk that is in Virtual SAN disk group, the resulting installation depends on the type of disk and the group size:
If you select an SSD, the SSD and all underlying HDDs in the same disk group will be wiped.
If you select an HDD, and the disk group size is greater than two, only the selected HDD will be wiped.
If you select an HDD, and the disk group size is two or less, the SSD and the selected HDD will be wiped.
If you are performing a new installation, or you chose to overwrite an existing VMFS datastore, during the reboot operation, VFAT scratch and VMFS partitions are created on the host disk.
Install ESXi on a Software iSCSI Disk
When you install ESXi to a software iSCSI disk, you must configure the target iSCSI qualified name (IQN). During system boot, the system performs a Power-On Self Test (POST), and begins booting adapters in the order specified in the system BIOS. When the boot order comes to the iSCSI Boot Firmware Table (iBFT) adapter, the adapter attempts to connect to the target, but does not boot from it. See Prerequisites. If the connection to the iSCSI target is successful, the iSCSI boot firmware saves the iSCSI boot configuration in the iBFT. The next adapter to boot must be the ESXi installation media, either a mounted ISO image or a physical CD-ROM.
Verify that the target IQN is configured in the iBFT BIOS target parameter setting. This setting is in the option ROM of the network interface card (NIC) to be used for the iSCSI LUN. See the vendor documentation for your system.
1. Disable the iBFT adapter option to boot to the iSCSI target. This action is necessary to make sure that the ESXi installer boots, rather than the iSCSI target. When you start your system, follow the prompt to log in to your iBFT adapter and disable the option to boot to the iSCSI target.
2. Start an interactive installation from the ESXi installation CD/DVD or mounted ISO image.
3. On the Select a Disk screen, select the iSCSI target you specified in the iBFT BIOS target parameter setting. If the target does not appear in this menu, make sure that the TCP/IP and initiator iSCSI IQN settings are correct. Check the network Access Control List (ACL) and confirm that the adapter has adequate permissions to access the target.
4. Follow the prompts to complete the installation.
5. Reboot the host.
6. In the host BIOS settings, enter the iBFT adapter BIOS configuration, and change the adapter parameter to boot from the iSCSI target.
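After the reboot, you can confirm from the ESXi Shell that the host picked up the iBFT configuration. A sketch, assuming the esxcli iscsi namespace available on ESXi 5.x/6.0:

```shell
# Show the boot values the host imported from the iBFT
esxcli iscsi ibftboot get

# List iSCSI adapters and verify the software initiator is present
esxcli iscsi adapter list
```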
With vSphere Replication, you can configure replication of a virtual machine from a source site to a target site, monitor and manage the status of the replication, and recover the virtual machine at the target site. When you configure a virtual machine for replication, the vSphere Replication agent sends changed blocks in the virtual machine disks from the source site to the target site, where they are applied to the copy of the virtual machine. This process occurs independently of the storage layer. vSphere Replication performs an initial full synchronization of the source virtual machine and its replica copy. You can use replication seeds to reduce the amount of time and bandwidth required for the initial replication.

During replication configuration, you can set a recovery point objective (RPO) and enable retention of instances from multiple points in time (MPIT). As administrator, you can monitor and manage the status of the replication. You can view information for incoming and outgoing replications, source and target site status, replication issues, and for warnings and errors.

When you manually recover a virtual machine, vSphere Replication creates a copy of the virtual machine connected to the replica disk, but does not connect any of the virtual network cards to port groups. You can review the recovery and status of the replica virtual machine and attach it to the networks. You can recover virtual machines at different points in time, such as the last known consistent state. vSphere Replication presents the retained instances as ordinary virtual machine snapshots to which you can revert the virtual machine.

vSphere Replication stores replication configuration data in its embedded database. You can also configure vSphere Replication to use an external database. You can replicate a virtual machine between two sites. vSphere Replication is installed on both source and target sites. Only one vSphere Replication appliance is deployed on each vCenter Server.
You can deploy additional vSphere Replication Servers.
Replication In a Single vCenter Server
The most basic architecture for vSphere Replication is protecting a virtual machine within a datacenter with a single vCenter Server. The administrator deploys a single vSphere Replication Appliance to act as both the replication manager and the recipient and distributor of changed blocks.
The administrator then configures a virtual machine and one or more of its VMDK files to be replicated, giving the local vSphere Replication Appliance as the target, and selecting a different datastore for the replica of the virtual machine.
The vSphere Replication Agent on the appropriate vSphere 5.1 or 5.5 host that holds the running virtual machine then starts tracking changes to disk as they are being written, and in accordance with the configured RPO sends the changed blocks to the vSphere Replication Appliance. The vSphere Replication Appliance passes the changed block bundle through NFC to an ESXi host to write the blocks to the replica VMDK.
This scenario might fit a local campus scenario, with a single cluster spanning two floors of a building where recoverability is within a proximal datacenter. If a floor loses power and the primary hosts and disks are unreachable, the administrator might point to the replica VMDK within vSphere Replication and choose to recover it.
Remote Offices Replicating with a Single vCenter Server
Another common scenario for vSphere Replication is protecting virtual machines in a Remote Office or Branch Office scenario. In this model, hosts at remote sites are not managed by distributed vCenter Server instances, but from a central “head office” data center. A single vCenter Server instance manages both local vSphere instances and remote clusters or hosts. Virtual machines from multiple remote sites must be replicated to the central office in this scenario.
At the remote sites, as long as the hosts are vSphere 5.1 or 5.5, no changes are required. vSphere 5.1 and 5.5 have the necessary vSphere Replication agent built into the kernel.
At the head office data center, at least one vSphere Replication Appliance must be deployed to manage the replication of all the virtual machines (both remote and local targets).
This single appliance will usually be sufficient to handle the incoming replications, but sometimes administrators will want to isolate replication traffic by source, or will need to scale up the number of recipient servers to handle more incoming replications. In that case, administrators can deploy more vSphere Replication servers (not the full vSphere Replication Appliance – there is only one per vCenter) to handle isolating the incoming replication traffic or to adjust for scale. Each additional vSphere Replication Server can be used as a dedicated target for one or more remote sites.
The vSphere Replication Server is the same as the vSphere Replication Appliance. Both are deployed in the same way. But if only being used as a vSphere Replication Server, the appliance is simply not paired with a vCenter Server. Within the main datacenter, the vSphere Replication servers pass the incoming replication data to the recovery cluster through Network File Copy for committing to local replica copies of the remote virtual machines.
Another model is to have a central main data center replicating to multiple remote sites, to offer “fan-out” protection. In this model, a single vSphere Replication appliance is deployed at the main data center as the remote offices are still managed from a central vCenter Server. At each remote office, however, a vSphere Replication Server appliance is deployed to act as a recipient of changed blocks and to ensure that NFC disk writes are done locally to the recovery site rather than being sent across a WAN connection. The vSphere Replication Agents on the central data center track changed blocks and distribute them through the vSphere host’s management network to the vSphere Replication Server defined as the destination for each individual virtual machine. Note that a virtual machine can be replicated only to a single destination. A virtual machine cannot be replicated to multiple remote locations at one time. If, however, the primary data center disappears, the virtual machine that is recovered at the second site can now be configured to be replicated to a third site. This replication must be done manually, as vSphere Replication has no automation or programmatic interface.
Now what are all components of vSphere Replication:
Fundamentally, vSphere Replication is a handful of virtual appliances that allow the vSphere kernel to identify and replicate changed blocks from a source site to a target site. All of these virtual appliance components are bundled into a single standard OVA virtual appliance called the vSphere Replication Appliance, which you can download from the VMware website.
Contents of the vSphere Replication Appliance
The vSphere Replication appliance provides all the components that vSphere Replication requires.
A plug-in to the vSphere Web Client that provides a user interface for vSphere Replication.
An embedded database that stores replication configuration and management information.
A vSphere Replication management server:
Configures the vSphere Replication server.
Enables, manages, and monitors replications.
Authenticates users and checks their permissions to perform vSphere Replication operations.
A vSphere Replication server that provides the core of the vSphere Replication infrastructure.
The vSphere Replication appliance provides a virtual appliance management interface (VAMI) that you can use to reconfigure the appliance after deployment, if necessary. For example, you can use the VAMI to change the appliance security settings, change the network settings, or configure an external database.
Before Replication Start you have to Connect Source and Target Sites
These steps begin with an administrator deploying the vSphere Replication Appliance from the OVF file. After the administrator has deployed the components, it is a matter of pairing a source virtual machine with a destination virtual machine. Lastly, configuration of an individual virtual machine for protection tells vSphere Replication to start replicating its changes, and where to put those changes at the recovery location. This configuration necessitates the definition of a Recovery Point Objective (RPO), a target folder, and a target datastore, host, cluster, or resource pool.
Before you replicate virtual machines between two sites, you must connect the sites. If the sites use different SSO domains, you must provide authentication details for the target site, including the IP or FQDN, user, and password information. When connecting sites, users at both sites must have the VRM remote.Manage VRM privilege. When you connect sites that are part of the same SSO domain, you need to select only the target site, without providing authentication details, as you are already logged in. After connecting the sites, you can monitor the connectivity state between them on the Target Sites tab.
VMware vSphere Replication is the industry’s first and only genuinely hypervisor-level replication engine.
vSphere Replication enables a vSphere platform to protect virtual machines natively by copying their disk files to another location where they are ready to be recovered.
It protects VMs from partial to complete site failures by replicating the VMs between sites.
It does this by employing a software-based replication engine that works at the host level rather than the array level.
Important: Identical hardware is not required between sites.
vSphere Replication can run source virtual machines on any type of storage, even local storage on vSphere hosts, and replicate to any type of storage at a failover site.
Where vSphere Replication can replicate:
From a source site to a target site
Within a single site from one cluster to another
From multiple source sites to a shared remote target site
vSphere Replication provides several benefits as compared to storage-based replication:
Data protection at lower cost per virtual machine.
A replication solution that allows flexibility in storage vendor selection at the source and target sites.
Overall lower cost per replication.
A virtual machine is replicated by components of the hypervisor, removing any dependency on the underlying storage, and without the need for storage-level replication.
Virtual machines can be replicated between any type of storage platform: Replicate between VMFS and NFS, from iSCSI to local disk. Because vSphere Replication works above the storage layer, it can replicate independently of the file systems. It will not, however, work with physical raw device mappings.
Replication is controlled as a property of the virtual machine itself and its VMDKs, eliminating the need to configure storage any differently or to impose constraints on storage layout or management. If the virtual machine is changed or migrated then the policy for replication will follow the virtual machine.
vSphere Replication creates a “shadow virtual machine” at the recovery site, then populates the virtual machine’s data through replication of changed data.
vSphere Replication can revert to multiple points in time so that it can return to a known good point after failover.
You can use vSphere Replication with the vCenter Server Appliance or with a standard vCenter Server Installation. You can have a vCenter Server Appliance on one site and a standard vCenter Server installation on the other.
With vSphere Replication, you can replicate virtual machines from a source datacenter to a target site quickly and efficiently.
You can deploy additional vSphere Replication servers to meet your load balancing needs.
After you set up the replication infrastructure, you can choose the virtual machines to be replicated at a different recovery point objective (RPO). You can enable multi-point in time retention policy to store more than one instance of the replicated virtual machine. After recovery, the retained instances are available as snapshots of the recovered virtual machine.
You can use VMware Virtual SAN datastores as target datastores and choose destination storage profiles for the replica virtual machine and its disks when configuring replications.
NOTE: VMware Virtual SAN is a fully supported feature of vSphere 5.5u1 and later.
VDP is fully integrated with the vSphere Web Client, so it is very easy for the vSphere admin to use.
From this single console, the vSphere admin can create backup jobs and perform restores, rather than investing time to learn a new user interface.
The first time a VM is backed up, all the blocks which make up the VM are backed up. After that, only changed blocks are copied; Changed Block Tracking (CBT) is used for this.
Variable-length segment deduplication is more efficient than fixed-length segment deduplication. Other backup solutions use fixed-length deduplication.
You can restore backups without an agent, which removes complexity.
It is also possible to restore individual files and folders, rather than restoring the whole VM, using just a web browser.
If you want to back up a database, you can install an agent for that application and back it up. Examples: Microsoft SQL databases, Microsoft Exchange.
You can replicate backups between VDP Advanced appliances.
You can replicate backups from VDP Advanced to EMC Avamar.
You can also move data offsite by using VDP.
Replication is very efficient, to minimize network bandwidth utilization.
Replication is encrypted for security purposes.
Let’s see vSphere Data Protection Components.
1. Virtual Machine Appliance: as shown in the image above.
2. vSphere Infrastructure: the VMware vSphere APIs for Data Protection (VADP) are used by vSphere Data Protection Advanced. This includes the Changed Block Tracking (CBT) mechanism. CBT tracks the changes made to a VM at the block level and provides this information to VDP so that only the changed blocks are backed up. This significantly reduces storage consumption and speeds up backup and restore times for VDPA.
** VMware Tools contain VSS support on Windows; it quiesces applications while backing up the Windows OS.
3. Appliance Sizes:
Disk space required: 1600 GB (1.57 TB) or 3100 GB (3.02 TB), depending on the deduplication store size you choose.
In VDP 5.5 one OVA size is available, you will be prompted to choose the size of your deduplication store during configuration.
When a VDP appliance is deployed, additional space cannot be added to an existing appliance. If more destination datastore capacity is needed, a new VDP appliance can be deployed (up to 10 appliances can be deployed per vCenter Server).
When sizing a datastore to house a VDP appliance, consider if the virtual machine swap file will also reside here and allocate sufficient space.
Note: You must use the Web Client for managing VDP. You cannot use the VI Client.
You can install VDP with the following storage and RAM configurations.
Note that the consumed amount of storage is 50% higher than the actual amount of the VDP store.
2 TB — 3 TB Storage – 6 GB RAM ( +2 GB with every 2 TB storage)
4 TB — 6 TB Storage – 8 GB RAM
6 TB — 9 TB Storage – 10 GB RAM
8 TB — 12 TB Storage – 12 GB RAM
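The sizing rule above (roughly 6 GB of RAM at 2 TB of storage, plus 2 GB for every additional 2 TB) can be expressed as a small helper for rough planning. This is my own sketch derived from the table, not an official VMware sizing formula:

```shell
# Approximate VDP RAM sizing: 6 GB of RAM at 2 TB of backup storage,
# plus 2 GB for every completed additional 2 TB, per the table above.
vdp_ram_gb() {
  storage_tb=$1
  extra=$(( (storage_tb - 2) / 2 ))   # completed 2 TB steps beyond the base
  [ "$extra" -lt 0 ] && extra=0
  echo $(( 6 + 2 * extra ))
}

vdp_ram_gb 2   # -> 6
vdp_ram_gb 4   # -> 8
vdp_ram_gb 8   # -> 12
```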
You can use thin provisioning, but the admin needs to monitor the storage.
You can upgrade VDP to VDP Advanced (VDPA), but you cannot downgrade.
Before starting the installation, let's see the prerequisites.
VDP 6.1 requires the following software:
vCenter server 5.5 or later
VDP 6.1 supports the Linux-based vCenter server virtual appliance and the Windows-based vCenter server.
vSphere web client
Web browsers must be enabled with Adobe Flash Player 11.3 or later to access the vSphere web client and VDP functionality.
vSphere host 5.0 or later
** Unsupported Disk Types
When planning for backups, make sure the disks are supported by VDP. Currently, VDP does not support the following virtual hardware disk types:
a. Independent
b. RDM Independent – Virtual Compatibility Mode
c. RDM Physical Compatibility Mode
** Unsupported Virtual Volumes
The VDP appliance version 6.1 does not support backups and restores of virtual machines on Virtual Volumes (VVOLs).
VDP System Requirements
Best Practices for VDP deployment:
Deploy the VDP appliance on shared VMFS5 or later to avoid block size limitations.
Make sure that all virtual machines are running hardware version 7 or later to support Change Block Tracking (CBT).
Install VMware Tools on each virtual machine that VDP will back up. VMware Tools add additional backup capability that quiesces certain processes on the guest OS before the backup.
VMware Tools are also required for some features used in File Level Restore.
When configuring the network for the VDP appliance and the vCenter, do not modify network address information by using NAT or other configuration methods (firewall, IDS, or TSNR). When these unsupported methods are deployed as part of the virtual network, some VDP functionality may not work as designed.
The Update Manager process begins by downloading information (metadata) about a set of patches, extensions, and virtual appliance upgrades. One or more of these patches or extensions are aggregated to form a baseline. You can add multiple baselines to a baseline group.
A baseline group is a composite object that consists of a set of non-conflicting baselines.
You can use baseline groups to combine different types of baselines, and scan and remediate an inventory object against all of them as a whole.
** If a baseline group contains both upgrade and patch or extension baselines, the upgrade runs first.
A collection of virtual machines, virtual appliances, and ESXi hosts, or individual inventory objects, can be scanned for compliance with a baseline or a baseline group and later remediated. You can initiate these processes manually or through scheduled tasks.
1. Configuring the Update Manager Download Source
2. Downloading Updates and Related Metadata
3. Importing ESXi Images
4. Creating Baselines and Baseline Groups
5. Attaching Baselines and Baseline Groups to vSphere Objects
6. Scanning Selected vSphere Objects
7. Reviewing Scan Results
8. Staging Patches and Extensions to Hosts
9. Remediating Selected vSphere Objects
1. Configuring the Update Manager Download Source
You can download patches from the Internet or from a shared repository.
You can also import patches and extensions manually from a ZIP file.
If your deployment system is connected to the Internet, you can use the default settings and links for downloading upgrades, patches, and extensions to the Update Manager repository.
Third-party patches and extensions are applicable only to hosts that are running ESXi 5.0 and later.
If your system is not connected to the Internet, you can use the Update Manager Download Service (UMDS).
NOTE: You can use offline bundles for host patching operations only. You cannot use third-party offline bundles or offline bundles that you generated from custom VIB sets for host upgrade from ESXi 5.x to ESXi 6.0.
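Outside Update Manager, an offline bundle ZIP can also be applied directly on a host from the ESXi Shell. A sketch; the datastore path and bundle file name are placeholders:

```shell
# Put the host into maintenance mode before patching
esxcli system maintenanceMode set --enable true

# Inspect the image profiles contained in the offline bundle
esxcli software sources profile list \
  -d /vmfs/volumes/datastore1/ESXi600-patch-bundle.zip

# Apply the patch bundle, then reboot the host
esxcli software vib update \
  -d /vmfs/volumes/datastore1/ESXi600-patch-bundle.zip
```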
2. Downloading Updates and Related Metadata
Downloading virtual appliance upgrades, host patches, extensions, and related metadata is a predefined automatic process that you can modify.
By default, at regular configurable intervals, Update Manager contacts VMware or third-party sources to gather the latest information (metadata) about available upgrades, patches, or extensions.
VMware provides information about patches for ESXi hosts and virtual appliance upgrades.
Update Manager downloads the following types of information:
Metadata about all ESXi 5.x patches regardless of whether you have hosts of such versions in your environment.
Metadata about ESXi 5.x patches as well as about extensions from third-party vendor URL addresses.
Notifications, alerts, and patch recalls for ESXi 5.x hosts.
Metadata about upgrades for virtual appliances.
Downloading information about all updates is a relatively low-cost operation in terms of disk space and network bandwidth. The availability of regularly updated metadata lets you add scanning tasks for hosts or appliances at any time.
Importing ESXi Images
You can upgrade the hosts in your environment to ESXi 6.0 by using host upgrade baselines. To create a host upgrade baseline, you must first upload at least one ESXi 6.0 .iso image to the Update Manager repository.
With Update Manager 6.0 you can upgrade hosts that are running ESXi 5.x to ESXi 6.0. Host upgrades to ESXi 5.0, ESXi 5.1 or ESXi 5.5 are not supported.
Before uploading ESXi images, obtain the image files from the VMware Web site or another source. You can create custom ESXi images that contain third-party VIBs by using vSphere ESXi Image Builder. You can upload and manage ESXi images from the ESXi Images tab of the Update Manager Administration view.
ESXi images that you import are kept in the Update Manager repository. You can include ESXi images in host upgrade baselines. To delete an ESXi image from the Update Manager repository, first you must delete the upgrade baseline that contains it. After you delete the baseline, you can delete the image from the ESXi Images tab.
Staging Patches and Extensions to Hosts
You can stage patches and extensions before remediation to ensure that the patches and extensions are downloaded to the host. Staging patches and extensions is an optional step that can reduce the time during which hosts are in maintenance mode.
Staging patches and extensions to hosts that are running ESXi 5.0 or later lets you download the patches and extensions from the Update Manager server to the ESXi hosts without applying the patches or extensions immediately. Staging patches and extensions speeds up the remediation process because the patches and extensions are already available locally on the hosts.
IMPORTANT: Update Manager can stage patches to PXE-booted ESXi hosts.
IMPORTANT: You can patch PXE booted ESXi hosts if you enable the setting from the ESX Host/Cluster Settings page of the Configuration tab or from the Remediate wizard.
After you upload ESXi images, upgrades for ESXi hosts are managed through baselines and baseline groups. Typically hosts are put into maintenance mode before remediation if the update requires it. Virtual machines cannot run when a host is in maintenance mode. To ensure a consistent user experience, vCenter Server migrates the virtual machines to other hosts within a cluster before the host is put in maintenance mode. vCenter Server can migrate the virtual machines if the cluster is configured for vMotion and if VMware Distributed Resource Scheduler (DRS) and VMware Enhanced vMotion Compatibility (EVC) are enabled. EVC is not a prerequisite for vMotion. EVC guarantees that the CPUs of the hosts are compatible. For other containers or individual hosts that are not in a cluster, migration with vMotion cannot be performed.
After you have upgraded your host to ESXi 6.0, you cannot roll back to your version ESXi 5.x software. Back up your host configuration before performing an upgrade. If the upgrade fails, you can reinstall the ESXi 5.x software that you upgraded from, and restore your host configuration. Remediation of ESXi 5.0, 5.1 and 5.5 hosts to their respective ESXi update releases is a patching process, while the remediation of ESXi hosts from version 5.x to 6.0 is an upgrade process.
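Backing up the host configuration before the upgrade can be done from the ESXi Shell with vim-cmd. A sketch of the commonly used commands; the restore path is an example:

```shell
# Sync the running configuration to disk, then generate a backup bundle;
# the backup command prints a URL from which configBundle.tgz can be downloaded
vim-cmd hostsvc/firmware/sync_config
vim-cmd hostsvc/firmware/backup_config

# To restore later: enter maintenance mode, copy the bundle back, then run
esxcli system maintenanceMode set --enable true
# vim-cmd hostsvc/firmware/restore_config /tmp/configBundle.tgz
```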
Remediating Virtual Machines and Virtual Appliances
You can upgrade virtual appliances, VMware Tools, and the virtual hardware of virtual machines to a later version. Upgrades for virtual machines are managed through the Update Manager default virtual machine upgrade baselines. Upgrades for virtual appliances can be managed through both the Update Manager default virtual appliance baselines and custom virtual appliance upgrade baselines that you create.
NOTE: Update Manager 6.0 does not support virtual machine patch baselines.
After installing VMware Update Manager, what's next?
How do you access VMware Update Manager?
You have two interfaces:
1. VI Client and
2. Web Client.
1. VI Client view
2. Web Client view
Both client interfaces have two main views, Administration view and Compliance view.
In the Update Manager Client Administration view, you can do the following tasks:
Configure the Update Manager settings
Create and manage baselines and baseline groups
View Update Manager events
Review the patch repository and available virtual appliance upgrades
Review and check notifications
Import ESXi images
In the Update Manager Client Compliance view, you can do the following tasks:
View compliance and scan results for each selected inventory object
Attach and detach baselines and baseline groups from a selected inventory object
Scan a selected inventory object
Stage patches or extensions to hosts
Remediate a selected inventory object
If your vCenter Server system is connected to other vCenter Server systems by a common vCenter Single Sign-On domain, and you have installed and registered more than one Update Manager instance, you can configure the settings for each Update Manager instance.
Configuration properties that you modify are applied only to the Update Manager instance that you specify and are not propagated to the other instances in the group. You can specify an Update Manager instance by selecting the name of the vCenter Server system with which the Update Manager instance is registered from the navigation bar.
For a vCenter Server system that is connected to other vCenter Server systems by a common vCenter Single Sign-On domain, you can also manage baselines and baseline groups as well as scan and remediate only the inventory objects managed by the vCenter Server system with which Update Manager is registered.