Open-Source Network Simulators

This is a fairly comprehensive list of the open-source network simulators available.

REF:http://www.brianlinkletter.com/open-source-network-simulators/


UCS: Forcing a Fabric Interconnect Failover

This operation can only be performed in the Cisco UCS Manager CLI.

You must force the failover from the primary fabric interconnect.

Procedure

Step 1: UCS-A# show cluster state
        Displays the state of the fabric interconnects in the cluster and whether the cluster is HA ready.

Step 2: UCS-A# connect local-mgmt
        Enters local management mode for the cluster.

Step 3: UCS-A(local-mgmt)# cluster {force primary | lead {a | b}}
        Changes the subordinate fabric interconnect to primary using one of the following keywords:

        force primary - Forces the local fabric interconnect to become the primary.
        lead {a | b} - Makes the specified subordinate fabric interconnect the primary.

The following example changes fabric interconnect b from subordinate to primary:

UCS-A# show cluster state
Cluster Id: 0xfc436fa8b88511e0-0xa370000573cb6c04

A: UP, PRIMARY
B: UP, SUBORDINATE

HA READY
UCS-A# connect local-mgmt
Cisco Nexus Operating System (NX-OS) Software
TAC support: http://www.cisco.com/tac
Copyright (c) 2002-2011, Cisco Systems, Inc. All rights reserved.
The copyrights to certain works contained in this software are
owned by other third parties and used and distributed under
license. Certain components of this software are licensed under
the GNU General Public License (GPL) version 2.0 or the GNU
Lesser General Public License (LGPL) Version 2.1. A copy of each
such license is available at
http://www.opensource.org/licenses/gpl-2.0.php and
http://www.opensource.org/licenses/lgpl-2.1.php

UCS-A(local-mgmt)# cluster lead b
UCS-A(local-mgmt)# exit

UCS-A# show cluster state
Cluster Id: 0xfc436fa8b88511e0-0xa370000573cb6c04

A: UP,SUBORDINATE
B: UP,PRIMARY

HA READY

 

The failover has completed.
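
If you want more detail while verifying the failover, the UCS Manager CLI also offers show cluster extended-state, which adds per-service and HA device information to the basic cluster state. Treat this as a pointer rather than part of the documented procedure above, and verify the command is available in your UCS Manager release:

UCS-A# show cluster extended-state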

 

RedHat/CentOS: NTP is now chrony

Installing chrony

The chrony suite is installed by default on some versions of Red Hat Enterprise Linux 7. To ensure that it is installed, run the following command as root:

~]# yum install chrony

The default location for the chrony daemon is /usr/sbin/chronyd. The command line utility will be installed to /usr/bin/chronyc.
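
If you want to confirm the package and binaries before going further, a quick sanity check (nothing chrony-specific beyond the paths mentioned above) is:

~]$ rpm -q chrony
~]$ ls -l /usr/sbin/chronyd /usr/bin/chronyc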

Checking the Status of chronyd

To check the status of chronyd, issue the following command:

~]$ systemctl status chronyd

chronyd.service - NTP client/server

   Loaded: loaded (/usr/lib/systemd/system/chronyd.service; enabled; vendor preset: enabled)
   Active: active (running) since Thu 2018-10-18 04:01:52 EDT; 1 day 3h ago

Starting chronyd

To start chronyd, issue the following command as root:

~]# systemctl start chronyd

To ensure chronyd starts automatically at system start, issue the following command as root:

~]# systemctl enable chronyd
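
To verify that the service is both running and enabled at boot, systemd's query subcommands can be used (standard systemctl, nothing chrony-specific):

~]$ systemctl is-active chronyd
active
~]$ systemctl is-enabled chronyd
enabled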

Stopping chronyd

To stop chronyd, issue the following command as root:

~]# systemctl stop chronyd

To prevent chronyd from starting automatically at system start, issue the following command as root:

~]# systemctl disable chronyd
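
One related caution: chronyd and ntpd should not both be managing the clock at the same time. If you previously used ntpd and are switching to chronyd (assuming the ntp package is installed under its usual ntpd.service unit name), stop and disable it as root:

~]# systemctl stop ntpd
~]# systemctl disable ntpd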

Checking if chrony is Synchronized

To check if chrony is synchronized, make use of the tracking, sources, and sourcestats commands.

Checking chrony Tracking

To check chrony tracking, issue the following command:

~]$  chronyc tracking
Reference ID    : 0A6E480A (core.test.local)
Stratum         : 4
Ref time (UTC)  : Fri Oct 19 11:23:00 2018
System time     : 0.000164141 seconds fast of NTP time
Last offset     : +0.000027329 seconds
RMS offset      : 0.005201139 seconds
Frequency       : 18.160 ppm slow
Residual freq   : +0.005 ppm
Skew            : 0.083 ppm
Root delay      : 0.027721141 seconds
Root dispersion : 0.012964345 seconds
Update interval : 259.0 seconds
Leap status     : Normal

Checking chrony Sources

The sources command displays information about the current time sources that chronyd is accessing. The optional -v (verbose) argument can be specified; in this case, extra caption lines are shown as a reminder of the meanings of the columns.

~]$ chronyc sources
210 Number of sources = 3
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* core1.test.local        3   8   377   189   -748us[ -721us] +/-   37ms
^+ core2.test.local        3   7   377   115   -602us[ -602us] +/-   46ms
^+ core3.test.local        3   9   377   837  +1485us[+1636us] +/-   36ms
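
The sourcestats command mentioned earlier shows drift-rate and offset-estimation statistics for each source rather than the last sample. Run it the same way; the output is omitted here since it depends entirely on your sources, and -v again adds the column captions:

~]$ chronyc sourcestats -v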

Manually Adjusting the System Clock

To step the system clock immediately, bypassing any adjustments in progress by slewing, issue the following command as root:

~]# chronyc makestep

If the rtcfile directive is used, the real-time clock should not be manually adjusted. Random adjustments would interfere with chrony's need to measure the rate at which the real-time clock drifts.
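
If you only want stepping to happen automatically around boot rather than by hand, the same behaviour can come from the makestep directive in /etc/chrony.conf. As an illustration only (the thresholds below are examples; check the values shipped in your distribution's default chrony.conf):

# Step the system clock instead of slewing it if the offset is
# larger than 1 second, but only during the first 3 clock updates
# after chronyd starts.
makestep 1.0 3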

VRA 7.4 HFX – cleanup for install retry

VRA 7.4 HF6 – stuck in a DB sync loop, or just stuck

Upload and install the patch as per the KB (https://kb.vmware.com/s/article/56618).

Complete all the prerequisites as called out in the KB:

Remove all obsolete nodes.
Verify the /etc/hosts file is correct (loopback address and FQDN); a quick check is sketched after this list.
Take snapshots/backups of all vRealize nodes.
Disable traffic and service monitoring to the secondary node if you are using a load balancer.
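
For the /etc/hosts item above, a quick check on each appliance (plain shell, not taken from the KB) is to confirm the FQDN resolves and appears in the file:

hostname -f
grep -i "$(hostname -f)" /etc/hosts
cat /etc/hosts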

Perform the installation as called out in the KB:

We started experiencing issues installing the hot patch; it seems to get stuck in a loop,
possibly due to sync issues between the master and secondary Postgres replicas.
The main error we see in the catalina.out log is as follows:

vcac-config: INFO com.vmware.vcac.cli.configurator.commands.cluster.patch.PatchExecutor.validateCommandStatusForFinishLine:784 – Starting:: Command validation
vcac-config: INFO com.vmware.vcac.cli.configurator.commands.cluster.patch.PatchExecutor.validateCommandStatusForFinishLine:789 – Command status for update-patch-history: COMPLETED
vcac-config: INFO com.vmware.vcac.cli.configurator.commands.cluster.patch.PatchExecutor.validateCommandStatusForFinishLine:789 – Command status for update-patch-history: COMPLETED
vcac-config: INFO com.vmware.vcac.cli.configurator.commands.cluster.patch.PatchExecutor.validateCommandStatusForFinishLine:789 – Command status for discover-components: COMPLETED
vcac-config: last message repeated 8 times
vcac-config: INFO com.vmware.vcac.cli.configurator.commands.cluster.patch.PatchExecutor.validateCommandStatusForFinishLine:789 – Command status for discover-components: QUEUED
vcac-config: INFO com.vmware.vcac.cli.configurator.commands.cluster.patch.PatchExecutor.validateCommandStatusForFinishLine:799 – Processing command discover-components
vcac-config: INFO com.vmware.vcac.cli.configurator.commands.cluster.patch.PatchExecutor.validateCommandStatusForFinishLine:789 – Command status for discover-components: COMPLETED
vra01 [database-failover-agent][4050]: 2018/10/11 11:50:16 — AGENT next iteration —
vra01 [database-failover-agent][4050]: 2018/10/11 11:50:16 getAllVotes():, url suffix: api/master
vra01 [database-failover-agent][4050]: 2018/10/11 11:50:17 ElectMaster(): Votes for Master:
Node vra01.test.local (On: true, Manual failover: false, IsLocalDbMaster: true) has 1 voters: vra01.test.local
Node vra01.test.local (On: true, Manual failover: false, IsLocalDbMaster: false) has 1 voters: vra02.test.local
vra01 [database-failover-agent][4050]: 2018/10/11 11:50:17 ElectMaster(): Votes for Master (aggregate):
Node vra01.test.local (On: true, Manual failover: false, IsLocalDbMaster: false) has 2 voters: vra01.test.local, vra02.test.local
vra01 [database-failover-agent][4050]: 2018/10/11 11:50:17 Elected master: Node vra01.test.local (On: true, Manual failover: false, IsLocalDbMaster: false)
vra01 [database-failover-agent][4050]: 2018/10/11 11:50:17 Refreshing the HotSync information…
vra01 [database-failover-agent][4050]: 2018/10/11 11:50:17 HotSync(): currrent sync replica: ——
vra01 [database-failover-agent][4050]: 2018/10/11 11:50:17 Master database is not in SYNC mode…
vra01.test.local vcac-config: INFO com.vmware.vcac.cli.configurator.commands.cluster.patch.PatchExecutor.validateCommandStatusForFinishLine:784 – Starting:: Command validation

NOTE

Navigating away from the vRA updates status page invalidates the displayed status; it does not reflect the current progress of the install once you navigate to another tab and back.


After several failures and rollbacks, it was determined that each upload of the patch creates a new directory under /usr/lib/vcac/patches/repo/, and after each restarted upgrade the process seems to get confused as to which folder it should be using.

Performing a cleanup of the previous uploads of the HF installation with rm -rf /usr/lib/vcac/patches/repo/* resulted in a clean HF installation on the next attempt.
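
For reference, the cleanup pass we ran on the appliance before re-uploading the patch (take a snapshot first; if more than one node received an upload, repeat there as well):

# list the directories left behind by previous patch uploads
ls -l /usr/lib/vcac/patches/repo/
# remove them so the next upload is the only repo present
rm -rf /usr/lib/vcac/patches/repo/*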