
Installing NSX Guest Introspection for NSX Data Security

Guest Introspection is a service deployed from NSX Manager to offload security functions. It installs a new VIB and a service virtual machine on each host in the cluster. Guest Introspection is required for NSX Data Security, Activity Monitoring, and several third-party security solutions.


To install Guest Introspection, first log in to the vSphere Web Client and browse to Networking & Security.
On the left-hand side, click Installation, then click the green plus symbol to add a new service deployment.

After this, select Guest Introspection.

 

Select the storage and network for the service VMs.

Create an IP pool for the VMs that NSX deploys for the AV (antivirus) offload.

 

 

Now select the IP pool and click Next.

The deployment then appears on the last tab.

If you watch the vCenter tasks closely, you will see the service VM being installed.

You can see the status in NSX Manager.

Now your Guest Introspection is ready.
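To double-check from the host side, you can list the installed VIBs over SSH. A minimal sketch; note that the Guest Introspection VIB name (epsec-mux here) can vary between NSX versions, so adjust the filter accordingly:

esxcli software vib list | grep epsec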

 

You can check this in vCenter under the newly created resource pool > ESX Agents.
This resource pool is created automatically by NSX Manager for these VMs only.

 

If a service VM gives you an error or a warning such as Failed, just click Resolve; it will delete the VM and create a new one.


Configuring IPsec VPN within VMware NSX Edge

This article shows you how to create an IPsec VPN between an NSX Edge Gateway (managed by vCloud Director/NSX Manager) and a remote client site.

First, you need some basic details from the client so that you can configure the IPsec VPN from your end, such as the Phase 1 and Phase 2 parameters. (This document relates to NSX Edge 6.3.2.)

 

Image credit: VMware

Note: NSX Edge supports Main Mode for Phase 1 and Quick Mode for Phase 2.

Phase 1 Parameters

Phase 1 sets up mutual authentication of the peers, negotiates cryptographic parameters, and creates session keys. The Phase 1 parameters used by NSX Edge are:

  • Main mode
  • TripleDES / AES [Configurable]
  • SHA-1
  • MODP group 2 (1024 bits)
  • pre-shared secret [Configurable]
  • SA lifetime of 28800 seconds (eight hours) with no kbytes rekeying
  • ISAKMP aggressive mode disabled

In 6.3.2 the UI shows only basic details; you could call it a mixed mode, since Phase 1 and Phase 2 do not have separate tabs or options.

 

Here are the details you have to fill in while configuring the IPsec VPN for a client.

Note: If you are doing this from the HTML5 console, then in “Peer Subnets” you have to provide the IP ranges in descending order, e.g. 192.168.11.0/24 first and then 192.168.10.0/24.

 

I was trying to update this tab from the vCloud Director web UI and was not able to, so I changed it from vCenter > NSX Manager > Edge settings.

The client-side settings must match your Edge settings.

If your client has an old Cisco router, you have to ask them to configure it with supported parameters, which are:

1. SHA-1
2. Diffie-Hellman group: DH5 or DH2 (an old router on IOS 12.4 can only support these)
3. Encryption algorithm: AES-256

NSX Edge to Cisco

  • proposal: encrypt 3des-cbc, sha, psk, group5(group2)
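For reference, here is a rough sketch of what the matching configuration might look like on the Cisco side; the pre-shared key, peer address and transform-set name are placeholders, so verify the syntax against your IOS release:

crypto isakmp policy 10
 encryption 3des
 hash sha
 authentication pre-share
 group 5
 lifetime 28800
crypto isakmp key MY-SHARED-SECRET address <edge-public-ip>
crypto ipsec transform-set NSX-TS esp-3des esp-sha-hmac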

After setup, you can export the settings from the Edge and share them with the client network team so that they can run the script on their router and replicate the setup you have done on your Edge gateway.

Go to vCenter > NSX Manager > NSX Edges > search for your Edge and double-click it > Manage (right-hand side) > VPN tab > IPSec VPN; from here you can download the script for Cisco routers.

 

You can copy it and send it to the client IT team.

Note: The export includes the shared key, so remove it before sending the script to the client IT team.

After this check the Tunnel status.

Go to vCenter > NSX Manager > NSX Edges > Search your Edge and double click > Manage (R.H.S) > VPN Tab > IPSec VPN

 

I was getting the error below, so you may need to check your DH2 or DH14 settings.

  • If the Cisco device does not accept any of the parameters the NSX Edge sent in step one, it sends a message with the NO_PROPOSAL_CHOSEN flag and terminates the negotiation.
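You can also check the tunnel state from the Edge CLI over SSH; as far as I recall, the relevant commands in this release are the following (verify against your Edge version):

show service ipsec
show service ipsec sa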

Upgrade VMware vCloud Director for Service Providers v8.20 to v9.0.0

Before starting this activity you need to take backups of a few VMs; you can check my last blog about the same.

Here is the link http://www.dtechinspiration.com/upgrade-vmware-vcloud-director-for-service-providers-v8-1-0-to-v8-20/

Now, on to the activity.

 

Steps to upgrade

We are running only one vCD cell server. Here is the process:

  1. Restrict access to vCD and display a maintenance message for the duration of the upgrade.
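One way to do this is with the same cell-management-tool commands used in the 8.20 upgrade below: quiesce the cell and put it into maintenance mode so that incoming requests get the maintenance message.

cd /opt/vmware/vcloud-director/bin/
./cell-management-tool -u administrator cell --quiesce true
./cell-management-tool -u administrator cell --maintenance true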

 


Upgrade VMware vCloud Director for Service Providers v8.1.0 to v8.20

Today we are upgrading our vCloud Director from 8.10 to 8.20.

Before starting this activity, take backups of a few VMs as follows:

  • Take Snapshot of vCloud Director VM
  • Take Snapshot of vCenter VM
  • Take Snapshot of NSX VM
  • Do the FTP backup for NSX VM
  • Take vCloud Database backup
  • Take vCenter Database backup
  • Log in to the VMware website and download the upgrade .bin file
  • Copy it to the /tmp folder on the vCloud Director server using WinSCP

 

Steps to upgrade

We are running only one vCD cell server. Here is the process:

  1. Restrict access to vCD and display a maintenance message for the duration of the upgrade.


 

2. Quiesce the vCD cell servers from the Cell Management Tool.

  • cd /opt/vmware/vcloud-director/bin/
    ./cell-management-tool -u username -p password cell --status
    ./cell-management-tool -u username -p password cell --quiesce true
    ./cell-management-tool -u username -p password cell --maintenance true

3. Upgrade the vCD software on all cell servers (if you have more than one). Do not restart the vCD services until the database is upgraded.

Log in to the target server as root.

  • Go to the /tmp folder and make the vCloud bin file executable:
  • chmod u+x vmware-vCloud-director-distribution-8.20.0-5515092.bin

After that, run the installer: ./vmware-vCloud-director-distribution-8.20.0-5515092.bin

If the installer detects a version of vCloud Director installed on this server that is equal to or later than
the version in the installation file it displays an error message and exits. Otherwise, it prompts you to
confirm that you are ready to proceed to upgrade this server.

 


 

Upgrade the vCD database.

Note: Make sure you have a database backup before upgrading the database schema.


 

Go to this location: cd /opt/vmware/vcloud-director/bin/

Type ./upgrade


 

After that it will ask you to restart the vCD server; press Y.


 

Restart the vCD services on the cell servers.


 

If you had the vCD web console open during the upgrade, log out to clear the browser cache.


Errors during and after the upgrade:

 

1. The first issue after upgrading vCloud Director: the vCD VM was not getting an IP address.

To resolve this issue you have to bounce the NIC on CentOS/Linux:

  • ifdown eth0 and then ifup eth0
  • This is a known error (due to a gratuitous ARP issue)

2. After that, vCenter was not connecting in vCloud Director.

We discovered the connection issue was due to vCenter using TLSv1, which we needed to enable on the vCD cell by running

  • /opt/vmware/vcloud-director/bin]# ./cell-management-tool ssl-protocols -d SSLv3,SSLv2Hello

and then restarting the cell services.
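If you want to confirm which protocols are allowed before and after the change, the same tool can list them; a quick sketch (the -l flag is from memory, so check the tool's --help output on your cell):

./cell-management-tool ssl-protocols -l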

To perform this activity, follow the steps below.

  • A. Login to the vCloud Director cell via SSH as root.
  • B. Change directory to where the cell-management-tool is located.
    ># cd /opt/vmware/vcloud-director/bin
  • C. Run the tool to verify the job count and active state (true):
    ># ./cell-management-tool -u administrator cell -t
    Please enter the administrator password:
    Job count = 2
    Is Active = true
  • D. Use the quiesce feature to ensure the cell takes no new jobs
    ># ./cell-management-tool -u administrator cell -q true
    Please enter the administrator password:
  • E. Run the tool to verify the job count and active state (false):
    ># ./cell-management-tool -u administrator cell -t
    Please enter the administrator password:
    Job count = 0
    Is Active = false
  • F. Shut down the cell once the Job count = 0 and Is Active = false:
    ># ./cell-management-tool -u administrator cell -s
    Please enter the administrator password:
  • G. Stop the vCloud Director services:
    ># service vmware-vcd stop
    Stopping vmware-vcd-watchdog:        [  OK  ]
    Stopping vmware-vcd-cell:                   [  OK  ]

3. Here you need to access your vCloud Director SQL database and run these commands (run them in one go):

  • update jobs set status = 3 where status = 1;
    update last_jobs set status = 3 where status = 1;
    delete from busy_object;
    delete from QRTZ_SCHEDULER_STATE;
    delete from QRTZ_FIRED_TRIGGERS;
    delete from QRTZ_PAUSED_TRIGGER_GRPS;
    delete from QRTZ_CALENDARS;
    delete from QRTZ_TRIGGER_LISTENERS;
    delete from QRTZ_BLOB_TRIGGERS;
    delete from QRTZ_CRON_TRIGGERS;
    delete from QRTZ_SIMPLE_TRIGGERS;
    delete from QRTZ_TRIGGERS;
    delete from QRTZ_JOB_LISTENERS;
    delete from QRTZ_JOB_DETAILS;
    delete from compute_resource_inv;
    delete from custom_field_manager_inv;
    delete from ccr_drs_vm_host_rule_inv;
    delete from cluster_compute_resource_inv;
    delete from datacenter_inv;
    delete from datacenter_network_inv;
    delete from datastore_inv;
    delete from dv_portgroup_inv;
    delete from dv_switch_inv;
    delete from folder_inv;
    delete from managed_server_inv;
    delete from managed_server_datastore_inv;
    delete from managed_server_network_inv;
    delete from network_inv;
    delete from resource_pool_inv;
    delete from storage_pod_inv;
    delete from task_inv;
    delete from task_activity_queue;
    delete from activity;
    delete from activity_parameters;
    delete from failed_cells;
    delete from lock_handle;
    delete from vm_inv;
    delete from property_map;

4. Start the vCloud Director services:
># service vmware-vcd start
Starting vmware-vcd-watchdog: [ OK ]
Starting vmware-vcd-cell: [ OK ]
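After starting the services, it is worth watching the cell log until the cell reports that application initialization is complete; for example:

service vmware-vcd status
tail -f /opt/vmware/vcloud-director/logs/cell.log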


VMware vSphere NSX 6.3.3 Step-by-Step Installation: Howto

 

NSX Manager Deployment

1. NSX Manager: it will automatically install all the required components on request, such as the NSX Controllers.

2. NSX Controllers: need to be deployed in odd numbers, e.g. 3 or 5.

3. Edge

4. Distributed Logical Router Control VMs

 

First, you need to create an NSX cluster. This is not compulsory but suggested; I created one resource pool with the Expandable option checked.


Download the OVA file from the VMware website.


 

NSX components and the layer where they are installed:

Data Plane (on the ESXi hosts):

a) Kernel modules: VXLAN (16 million VXLAN segments)

b) Distributed Logical Router (DLR)

c) Distributed Firewall (DFW)

d) VDS switch and Edge Services appliances

Control Plane:

a) NSX Controllers

b) User World Agent

c) Logical Router Control VM

Management Plane:

a) vCenter Server

b) NSX Manager

c) Message Bus

 

Deployment of NSX Manager:

1. Only one NSX Manager per vCenter Server.

2. Deployed as a VM.

3. What if my NSX Manager is shut down? No worries: everything continues to work, but you will not be able to change anything. There is no service impact.

Solution: protect it with vSphere HA.

NSX Manager virtual machine requirements (reference: VMware docs):

1. 4 vCPUs

2. 16 GB of RAM

3. 60 GB of Storage

Steps for Installation:

1. Right-Click on the cluster in which you would like to deploy the NSX Manager OVA and choose “Deploy OVF Template”


2. Browse to the file and click Next.


3. Select the Resources


4. Review the details and click Next.


5. Accept the License agreements and click next.


6. Select Storage and click next.


7. Now provide a network from the VDS.


8. Provide the IP details, CLI password and DNS server, then click Next and Finish.


After installation you will see a new icon, Networking & Security.


When you click the Dashboard option on the left-hand side, you can see the status of NSX Manager and the host preparation status.


When you click the Installation option, it shows the NSX Manager and its controller nodes' names and statuses.


Under Installation you have multiple tabs: Management, Host Preparation, Logical Network Preparation and Service Deployments.

Click Host Preparation to see whether the NSX modules are installed on each host; the installation is coordinated between vCenter, NSX Manager and EAM (ESX Agent Manager).

These are the VIBs that will be installed:

esx-vsip
esx-vxlan

 


Open a PuTTY session to the ESXi host.

 

[root@esx002291:~] esxcli software vib list
Name         Version              Vendor   Acceptance Level   Install Date
----------   ------------------   ------   ----------------   ------------
esx-vsip     6.5.0-0.0.5534171    VMware   VMwareCertified    2017-07-19
esx-vxlan    6.5.0-0.0.5534171    VMware   VMwareCertified    2017-07-19
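To avoid scrolling through the full VIB list, you can filter for just the NSX VIBs:

esxcli software vib list | grep -E 'esx-vsip|esx-vxlan'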


“Agent VIB module not installed” when installing EAM/VXLAN Agent using VUM

After upgrading NSX Manager to 6.3.2, I faced an issue with NSX Manager and EAM: the VIB installation was continuously failing.

The problem was with ESXi host preparation, specifically the error “Dependencies of the VIB which is updating the ESX image cannot be satisfied by the host”.

At this point all the hosts were in the Ready state.

Even after rebooting NSX Manager and deleting the VUM database, it was still not working.

What is inside the ESXi logs? The error shows in /var/log/esxupdate.log:

===========================================================================
2017-09-04T12:21:51Z esxupdate: 75636: vmware.runcommand: INFO: runcommand called with: args = 'localcli system visorfs ramdisk list | grep /vibtransaction && localcli system visorfs ramdisk remove -t /tmp/vibtransaction', outfile = 'None', returnoutput = 'True', timeout = '0.0'.
2017-09-04T12:21:51Z esxupdate: 75636: Transaction: DEBUG: Populating VIB list from all VIBs in metadata https://vc01.domain.local:443/eam/vib?id=54fbc544-436e-4e06-800d-3e2a95448d92; depots:
2017-09-04T12:21:51Z esxupdate: 75636: downloader: DEBUG: Downloading https://vc01.domain.local:443/eam/vib?id=54fbc544-436e-4e06-800d-3e2a95448d92 to /tmp/tmpgxbeofzh…
2017-09-04T12:21:51Z esxupdate: 75636: Metadata.pyc: INFO: Unrecognized file vendor-index.xml in Metadata file
2017-09-04T12:21:51Z esxupdate: 75636: imageprofile: INFO: Adding VIB VMware_locker_tools-light_6.2.6-0-4463934 to ImageProfile (Updated) ESXi-6.5.0-20170702001-standard
2017-09-04T12:21:51Z esxupdate: 75636: imageprofile: INFO: Adding VIB VMware_bootbank_esx-vsip_6.2.6-0-4977495 to ImageProfile (Updated) ESXi-6.5.0-20170702001-standard
2017-09-04T12:21:51Z esxupdate: 75636: imageprofile: INFO: Adding VIB VMware_bootbank_esx-vxlan_6.5.0-0.0.4463934 to ImageProfile (Updated) ESXi-6.5.0-20170702001-standard
2017-09-04T12:21:51Z esxupdate: 75636: vmware.runcommand: INFO: runcommand called with: args = '['/bin/localcli', 'system', 'maintenanceMode', 'get']', outfile = 'None', returnoutput = 'True', timeout = '0.0'.
2017-09-04T12:21:52Z esxupdate: 75636: HostInfo: INFO: localcli system returned status (0) Output: Enabled Error:
2017-09-04T12:21:52Z esxupdate: 75636: BootBankInstaller.pyc: INFO: Unrecognized value "title=Loading VMware ESXi" in boot.cfg

===========================================================================

I found nothing; something was just blocking the VIB from being installed on the host. After this I started troubleshooting further, checking the logs and errors in both vCenter and NSX Manager.

Thanks to Conor Scolard, who found that NSX Manager was trying to install an old VIB, as you can see in the logs.
After upgrading NSX Manager to 6.3.2 it should be “6.5.0-0.0.4463934”.

===========================================================================

2017-09-04T12:21:51Z esxupdate: 75636: Transaction: DEBUG: Populating VIB list from all VIBs in metadata https://vc01.domain.local:443/eam/vib?id=54fbc544-436e-4e06-800d-3e2a95448d92; depots:
2017-09-04T12:21:51Z esxupdate: 75636: downloader: DEBUG: Downloading https://vc01.domain.local:443/eam/vib?id=54fbc544-436e-4e06-800d-3e2a95448d92 to /tmp/tmpgxbeofzh…
2017-09-04T12:21:51Z esxupdate: 75636: Metadata.pyc: INFO: Unrecognized file vendor-index.xml in Metadata file
2017-09-04T12:21:51Z esxupdate: 75636: imageprofile: INFO: Adding VIB VMware_locker_tools-light_6.5.0-0.23.5969300 to ImageProfile (Updated) ESXi-6.5.0-20170702001-standard
2017-09-04T12:21:51Z esxupdate: 75636: imageprofile: INFO: Adding VIB VMware_bootbank_esx-vsip_6.5.0-0.0.4463934 to ImageProfile (Updated) ESXi-6.5.0-20170702001-standard
2017-09-04T12:21:51Z esxupdate: 75636: imageprofile: INFO: Adding VIB VMware_bootbank_esx-vxlan_6.5.0-0.0.4463934 to ImageProfile (Updated) ESXi-6.5.0-20170702001-standard

===========================================================================

I tried to open the EAM MOB at https://vcenterIP/eam/mob; it was not working, so I rebooted vCenter, after which it started working.

As you can see here, we have an older version of the VIB (NSX Manager is on version 6.3.2).

 

Under NSX Component Installation on Hosts, the Clusters & Hosts entry is known as the “Agency”, and in this case the Agency was corrupted.

===========================================================================

Now the only option is to delete the Agency; a new Agency will be created automatically and will pick up the latest VIBs.

This is easy when you are not using:

1. VXLAN
2. NSX Firewall

But before that, how do you identify that the Agency is corrupted?

Open an SSH session to NSX Manager.

1. Type enable and provide the password.

===========================================================================
admin@nsx.domain.local's password:
nsx.domain.local> en
Password:
===========================================================================

2. After this, if you are working with VMware support, the tech support engineer will type their secret password to enable “Engineering Mode”.

nsx.domain.local# st e
Engineering Mode: The authorized NSX Manager system administrator is requesting a shell which is able to perform lower level unix commands/diagnostics and make changes to the appliance. VMware asks that you do so only in conjunction with a support call to prevent breaking your virtual infrastructure. Please enter the shell diagnostics string before proceeding. Type Exit to return to the NSX shell. Type y to continue: y
Password: (this is the VMware support password)
su: Authentication failure
nsx.domain.local# st e
Engineering Mode: The authorized NSX Manager system administrator is requesting a shell which is able to perform lower level unix commands/diagnostics and make changes to the appliance. VMware asks that you do so only in conjunction with a support call to prevent breaking your virtual infrastructure. Please enter the shell diagnostics string before proceeding. Type Exit to return to the NSX shell. Type y to continue: y
Password:
[root@nsx ~]# cd /home/secureall/secureall/logs/
[root@nsx /home/secureall/secureall/logs]# less vsm.log
[root@nsx /home/secureall/secureall/logs]# psql -U secureall
psql.bin (9.3.15 (VMware Postgres 9.3.15.0-4760484 release))
Type "help" for help.
Cannot read termcap database;
using dumb terminal settings.

===========================================================================

3. After this, run the command below:
secureall=# select * from deployment_unit;

Using this query you can check whether there is an issue with the NSX Manager database.

===========================================================================

Now Solution time:

1. First open https://vcenter/eam/mob

 

2. The item highlighted in yellow is the Agency.

3. Click it, and after that click “DestroyAgency”.

4. Click Invoke Method.

5. Follow the same path again and check the VIB version; this time you will see it has been updated to the newer version.

6. Now go to the NSX Manager view; this time the status will be blank, which means we can install the latest version.

7. Click the Actions gear icon and install the agent.


New vSphere Availability Options and Quarantine Mode

In 6.5 VMware has changed the name of vSphere High Availability (vSphere HA) to vSphere Availability.

In vSphere 6.0 Version


And in 6.5 Version


 

A new setting in vSphere 6.5 is Proactive HA, which works with DRS and has two options.

 

First of all, this is a DRS feature, not an HA one. It collects health data from the physical hardware (RAM, CPU and so on), much like the normal DRS algorithms, and then uses DRS to decide whether it needs to move VMs.

Proactive HA looks at different alerts to see if something is going on with the host, in order to decide whether to move the workload.

To make this possible, the different hardware vendors are creating (or are in the process of creating) OEM health providers.

If you are using Proactive HA with an OEM provider and there is a DIMM or NIC card failure, Proactive HA coordinates with DRS and takes action on that host (vMotion).

Now we have two options in this setting:

1. Automation Level, which works like DRS.

2. Remediation

Quarantine mode means DRS will move all the VMs off the host, provided there is no impact on VM performance and no affinity rules are violated.

When you put a host into quarantine mode, it starts evacuating the VMs, but it considers two basic facts: DRS affinity and anti-affinity rules should not be violated, and there should be no performance degradation.

After this, DRS will not place any VM on that host.

As I said, the second consideration is performance degradation: if DRS sees an impact on VM performance, it may still place VMs on the quarantined host.

A) Quarantine mode for all failures

B) Quarantine mode for moderate failures and maintenance mode for severe failures (mixed)

C) Maintenance mode for all failures

 

Admission Control: the new change is auto-calculation.

It can now automatically calculate the percentage of resources required for cluster failover.

 

Performance degradation VMs tolerate:

Here 100% means the setting is disabled and you accept any degradation in performance, while 0% means you do not want any performance degradation for the VMs.

In the event of a failure, if the restarted VMs do not get sufficient resources and face performance degradation, this setting will alert me.
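A rough worked example of how I read this setting: suppose the VMs in the cluster actively use 90 GB of memory and a host failure leaves only 80 GB of capacity. The restarted VMs can then get only about 89% of their demand, an 11% shortfall, so with the setting at 0% you would get an alert, while at 25% the degradation would be tolerated silently.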

 


How to upgrade NSX Manager from 6.2.6 to 6.3.2

Today I was updating NSX Manager from version 6.2.6 to 6.3.2.

The environment is running vCloud Director 8.20 and ESXi 6.5 U1.

1. Check VMware Product Interoperability Matrices for more information “Link”.
2. Check Update sequence for vSphere “KB“.

Before upgrading to NSX 6.3.2:

1. Take a snapshot of the NSX Manager VM.
2. Take an NSX Manager appliance backup (an FTP server is required).

3. Check “Lookup service” and “vCenter” server status in NSX management service.

4. Download the NSX 6.3.2 release from the VMware website.

5. After this, upload the bundle file into the NSX Manager console from the Upgrade option.

 

 

 

 

It will start uploading the image to NSX Manager and then start installing.

After uploading the bundle it will show a warning: “Please create an NSX Manager backup before proceeding with upgrade”. It will also ask whether you want to enable SSH. Finally, hit the Upgrade button.

After this it will show the new version, 6.3.2.
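To confirm the new version from the NSX Manager CLI, something like the following should work (the exact command set varies slightly between releases):

nsx-manager> show version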

After this you need to upgrade the NSX Controllers, if you have any.

Also install the agents in the Agency.

Final status after the upgrade:


Farewell, vCenter Server for Windows


VMware plans to deprecate vCenter Server for Windows with the next numbered release (not update release) of vSphere. The next version of vSphere will be the terminal release for which vCenter Server for Windows will be available. Why is VMware deprecating vCenter Server for Windows? VMware is consistently striving to simplify data center administration, and … read the full post, “Farewell, vCenter Server for Windows”, on the VMware vSphere Blog.


VMware Social Media Advocacy