Configure RHEL7/CentOS 7 as a Virtualization Host

smarta

This is a fresh install of RHEL 7.5

First install the packages as shown below.


# yum install qemu-kvm libvirt

Now install the additional recommended virtualization packages.


# yum install virt-install libvirt-python virt-manager libvirt-client

Now restart libvirtd


# systemctl restart libvirtd
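With libvirtd restarted, it is worth confirming that the CPU actually exposes hardware virtualization extensions before going further. A minimal sketch, assuming a Linux host; the helper function name check_virt is mine, not part of any package:

```shell
# check_virt: report whether a cpuinfo-style flags line contains
# vmx (Intel VT-x) or svm (AMD-V)
check_virt() {
  if echo "$1" | grep -qE 'vmx|svm'; then
    echo "virtualization extensions present"
  else
    echo "virtualization extensions missing"
  fi
}

# On the host itself you would run:
#   check_virt "$(grep -m1 '^flags' /proc/cpuinfo)"
check_virt "flags : fpu vme vmx ssse3"   # prints: virtualization extensions present
```

If the extensions are missing, check that VT-x/AMD-V is enabled in the BIOS before blaming the packages.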

 
Now you should be able to launch virt-manager from your remote machine and add a connection to your new virtualization host.
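The same remote connection can be sanity-checked from the command line with plain virsh before involving the GUI. The hostname below is a placeholder, and this assumes root SSH access to the new host:

```
# virsh -c qemu+ssh://root@virthost.example.com/system list --all
```

If this returns a (possibly empty) domain list rather than an error, virt-manager should connect using the same SSH URI.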

Special note: make sure that you stop and disable NetworkManager, as it can interfere with libvirt's bridged networking on RHEL 7.


# systemctl stop NetworkManager
# systemctl disable NetworkManager
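With NetworkManager out of the picture, the legacy network service has to own the interfaces. If you also want guests bridged onto the LAN, an ifcfg-style bridge definition looks roughly like this; the device names (br0, em1) and addressing are hypothetical placeholders for your environment:

```
# /etc/sysconfig/network-scripts/ifcfg-br0 (example values)
DEVICE=br0
TYPE=Bridge
BOOTPROTO=static
IPADDR=192.168.1.10
NETMASK=255.255.255.0
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-em1 (enslaved to the bridge)
DEVICE=em1
TYPE=Ethernet
BRIDGE=br0
ONBOOT=yes
```

Remember to enable the network service (`systemctl enable network`) so the bridge comes up on boot.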

Cisco Catalyst: Configure a Trunk Port with a Native Vlan


This post will show you how to configure a trunk port with a native vlan. In this specific example I am configuring a range of ports, but the commands for an individual interface are the same once you have entered configuration mode for that port.

First enter configuration mode

s3560#conf terminal

Now we configure the range of ports. Here we are configuring the ports as a trunk, using dot1q encapsulation. We are allowing vlans 96-99 and vlan 101, and we are setting vlan 101 as the native vlan (untagged, so no vlan tagging required).

s3560(config)#interface range gigabitEthernet 0/13 - 24
s3560(config-if-range)#description generic trunk ports
s3560(config-if-range)#switchport trunk encapsulation dot1q
s3560(config-if-range)#switchport mode trunk
s3560(config-if-range)#switchport trunk native vlan 101
s3560(config-if-range)#switchport trunk allowed vlan 96-99,101
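Once the range is configured, you can sanity-check the trunk state and allowed vlan list before moving on (output will vary with your switch model and IOS version):

```
s3560#show interfaces trunk
s3560#show running-config interface gigabitEthernet 0/13
```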

Now we enable spanning-tree portfast and bpduguard

s3560(config-if-range)#spanning-tree portfast

%Warning: portfast should only be enabled on ports connected to a single
host. Connecting hubs, concentrators, switches, bridges, etc... to this
interface when portfast is enabled, can cause temporary bridging loops.
Use with CAUTION

%Portfast will be configured in 13 interfaces due to the range command
but will only have effect when the interfaces are in a non-trunking mode.

s3560(config-if-range)#spanning-tree bpduguard enable
s3560(config-if-range)#end
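Note the warning above: plain portfast only takes effect while an interface is in non-trunking mode, so on these trunk ports it is effectively a no-op. If you genuinely want edge behavior on a trunk (for example, a link to a hypervisor carrying multiple vlans), many IOS versions support an explicit variant; use it with the same caution the warning describes:

```
s3560(config-if-range)#spanning-tree portfast trunk
```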

Save your config changes

s3560#copy run start
Destination filename [startup-config]?
Building configuration...
[OK]
0 bytes copied in 1.443 secs (0 bytes/sec)
s3560#

Cisco Catalyst: Reset/Clear Port Config


Reset Port Config to Default for One Port

Use the command below to clear the configuration on a port.

s3560(config)#default interface gigabitEthernet 0/3
Interface GigabitEthernet0/3 set to default configuration

Use the command below to verify that the running config for the port has been reset to the default configuration.

s3560#show running-config interface gigabitEthernet0/3
Building configuration...

Current configuration : 36 bytes
!
interface GigabitEthernet0/3
end

Now write your changes to memory.

s3560#copy run start
Destination filename [startup-config]?
Building configuration...
[OK]
0 bytes copied in 1.427 secs (0 bytes/sec)

Reset Port Config to Default for a Range of Ports

Below we are going to clear/delete the config from a range of ports.

s3560#conf terminal

s3560(config)#default interface range gigabitEthernet 0/13 - 24

Again, don’t forget to save your changes.

 

 

OpenStack: 9 tips to properly configure your OpenStack Instances


Qcow vs Raw, Performance Tweaks, Cloud-init, and a short guide on Kernel Tuning – courtesy of redhatstackblog.redhat.com

via 9 tips to properly configure your OpenStack Instance

OpenStack: Deleting Zombie Cinder Volumes and VMs


First off let me start by saying that the new Cinder logo is wonderful. Nothing helps me think of backend storage better than the backend of a horse.

In an environment I am working in, we have a large number of cinder volumes that are in error state, due to the backend storage being ripped out. The volumes were not deleted, nor were they detached from the VMs.

End result: you cannot delete the zombie VM (as it has an attached volume), and you cannot delete the zombie/orphaned volume (as it is attached to a VM).

The following process allows you to work around the chicken-and-egg scenario above.

First we get a list of all volumes in error state.

# openstack volume list --all | grep -i error

Then we take a closer look at the volume to see if it exists/existed on the backend that was removed.

# openstack volume show 05b372ef-ee45-499b-9676-72cc4170e1b3

Next we check the host attribute to confirm the volume lives on the affected backend; in this case it does.

| os-vol-host-attr:host | hostgroup@dellsc#openstack_dellsc

We also check for any current attachments. Below we see that this volume is attached to a VM with the UUID shown.

| attachments | [{u'server_id': u'd142eb4b-823d-4abd-95a0-3b02a3194c9f',

Now we reset the state of the volume so that it is no longer in an error state.

# cinder reset-state --state available 05b372ef-ee45-499b-9676-72cc4170e1b3

Now we detach the volume via cinder.

# cinder reset-state --attach-status detached 05b372ef-ee45-499b-9676-72cc4170e1b3

Now we are free to delete the volume

# openstack volume delete 05b372ef-ee45-499b-9676-72cc4170e1b3

Confirm volume deletion

# openstack volume show 05b372ef-ee45-499b-9676-72cc4170e1b3
No volume with a name or ID of '05b372ef-ee45-499b-9676-72cc4170e1b3' exists.

Now we can delete the VM.

# openstack server delete d142eb4b-823d-4abd-95a0-3b02a3194c9f

And now we confirm its deletion.

# openstack server show d142eb4b-823d-4abd-95a0-3b02a3194c9f
No server with a name or ID of 'd142eb4b-823d-4abd-95a0-3b02a3194c9f' exists.
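If you have many volumes stuck in this state, the per-volume steps above can be wrapped in a small loop. This is an untested sketch, not a supported tool: it assumes the openstack and cinder clients are installed, the usual OS_* credentials are exported, and it permanently deletes volumes, so review the list it would act on before running it for real.

```
#!/bin/bash
# Bulk-clean Cinder volumes stuck in error state (destructive!)
for vol in $(openstack volume list --all --status error -f value -c ID); do
  echo "Cleaning volume ${vol}"
  cinder reset-state --state available "${vol}"
  cinder reset-state --attach-status detached "${vol}"
  openstack volume delete "${vol}"
done
```

Any VMs that were only blocked by these attachments can then be removed with `openstack server delete` as shown above.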