Solaris 10 and Net-SNMP

Net-SNMP is included in Solaris 10 by default, but it is not enabled by default. To use it, you must first disable Sun's existing SNMP agent (snmpdx).

#svcadm disable svc:/application/management/snmpdx:default

Net-SNMP is part of Solaris' System Management Agent (SMA). More about that here:
Solaris System Management Agent Developer’s Guide

The Net-SNMP daemon is located at /usr/sfw/sbin/snmpd, and its configuration files live in /etc/sma/snmp.

You will need to use svcadm to start and stop Net-SNMP:

#svcadm enable svc:/application/management/sma:default
#svcadm disable svc:/application/management/sma:default

There are two logs you can reference for Net-SNMP:

The SMA service's SMF log is /var/svc/log/application-management-sma:default.log, while the default log file of snmpd itself is /var/log/snmpd.log.

You also have an init script in /etc/init.d called init.sma.
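Once the sma service is online, a quick way to confirm the agent is answering is to walk the system subtree with the bundled Net-SNMP client. A minimal hedged sketch, assuming the default 'public' read community in /etc/sma/snmp/snmpd.conf is still in place:

#/usr/sfw/bin/snmpwalk -v 1 -c public localhost system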


Solaris Package Management Cheat Sheet

Basic Solaris Package Management Commands

Show description about a package

pkginfo -l <pkg>

Install a package

pkgadd -d <pkg-file>

Remove a package

pkgrm <pkg>

Find package which owns a file

pkgchk -l -p <file>

List files installed by a package

pkgchk -l <pkg>|awk '/^Pathname/ {print $2}'

List files in package file

pkgchk -l -d <pkg-file>|awk '/^Pathname/ {print $2}'

List all installed packages

pkginfo

Verify integrity of installed files from a package

pkgchk <pkg>

Verify integrity of installed files from all packages

pkginfo|awk '{print $2}'|xargs pkgchk
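As a quick worked example that combines a couple of the commands above (on a typical system /usr/bin/ls is reported as belonging to SUNWcsu, but go by whatever your own output says), first find the package that owns a file, then verify that package's installed files:

pkgchk -l -p /usr/bin/ls
pkgchk SUNWcsu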

ZFS Quick Start

ZFS is a filesystem designed by Sun and implemented in Solaris 10.
Unlike traditional filesystems, which reside on single disk devices and
thus require a volume manager to span more than one device, ZFS
filesystems are built on top of virtual storage pools called zpools, which eliminates the need to create individual volumes.
When creating a filesystem you do not need to specify a size, as filesystems grow automatically within the zpool. When new storage is added, all filesystems within the pool can immediately
use the additional space without additional work. Most importantly, ZFS is simple and easy to administer. See my notes below, which outline how to get started.


1. First create your pool. I called mine database_pool

-bash-3.00# zpool create database_pool c1t14d0s6 c1t13d0s6 c1t12d0s6

2. Now admire your new pool. Yours may differ in size 🙂

-bash-3.00# zpool list

NAME                    SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
database_pool          50.2G   60.5K   50.2G     0%  ONLINE     -

3. Create your volumes. In this example I create two similar to what we will be doing in Philly.
-bash-3.00# zfs create database_pool/apdru
-bash-3.00# zfs create database_pool/apdi2

4. Take a look at your new mounts
-bash-3.00# df -k

Filesystem            kbytes    used   avail capacity  Mounted on
database_pool         51867648      25 51867511     1%    /database_pool
database_pool/apdru   51867648      24 51867511     1%    /database_pool/apdru
database_pool/apdi2   51867648      24 51867511     1%    /database_pool/apdi2

5. Make the directory structure for your new volumes if the above path is not where you want them mounted.
-bash-3.00# mkdir -p /db1/v1

-bash-3.00# mkdir -p /db2/v1

6. Mount your new volumes where you want them to go using zfs set. Note there is no leading / on the pool name.

-bash-3.00# zfs set mountpoint=/db1/v1 database_pool/apdi2
-bash-3.00# zfs set mountpoint=/db2/v1 database_pool/apdru

7. Admire your work. You do not have to create a filesystem, it's
already there, and each mount point/volume will grow as needed up to the
total capacity of the pool. So there is no need to worry about sizing each
volume for each database.

bash-3.00# df -k

Filesystem            kbytes    used   avail capacity  Mounted on
database_pool         51867648      25 51867498     1%    /database_pool
database_pool/apdi2   51867648      24 51867498     1%    /db1/v1
database_pool/apdru   51867648      24 51867498     1%    /db2/v1
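If you later outgrow the pool, you can add another device to it and every filesystem in the pool sees the new space immediately; and even though you never have to size filesystems, you can still cap one with a quota if you want to. A hedged sketch, where the extra device c1t15d0s6 and the 10G figure are purely illustrative:

-bash-3.00# zpool add database_pool c1t15d0s6
-bash-3.00# zfs set quota=10G database_pool/apdru
-bash-3.00# zfs get quota database_pool/apdru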

Solaris Fair Share Scheduling and Projects


Solaris FSS is a method of resource sharing and allocation based on the importance of a process; specifically, a share is a portion of a CPU's resources. In the instance below I was asked to set up FSS and projects so that two low-priority test databases could run alongside production databases without any risk of impact.

First off, it's worth noting that FSS is not always installed by default, so you need to ensure that you have the package SUNWfss:

pkginfo SUNWfss

You are looking for the response below

system      SUNWfss        Fair Share Scheduler

Ok, since I have several databases running on each box, I am going to make this easy on myself and create two NIS groups: one that the production database users will belong to and one that the non-production, or low-priority, database users will belong to. The two groups that I created are ora-high and ora-low, which should be self-explanatory. Once they are created in NIS and given access to the box, the next step is to add our projects.

The two examples below define projects by groups.

#projadd -c "Oracle high priority project" group.ora-high

#projadd -c "Oracle low priority project" group.ora-low

 

Now view your new projects.

#projects -l

 

Use the command below to see what projects a user id is part of.

#id -p <userid>

Now that the projects are created, go ahead and change the system's default scheduling class to FSS and assign CPU shares to each project, as sketched below. The scheduler change will require a reboot.
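A hedged sketch of those remaining steps (the share values 80 and 10 are illustrative, not taken from this particular setup): dispadmin sets the system's default scheduling class, and projmod attaches a cpu-shares resource control to each project.

#dispadmin -d FSS
#projmod -s -K "project.cpu-shares=(privileged,80,none)" group.ora-high
#projmod -s -K "project.cpu-shares=(privileged,10,none)" group.ora-low

After the reboot, running dispadmin -d with no argument should report FSS as the default scheduling class.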


Changing a Solaris Server’s IP Address

The following steps may be used to change the IP address of a
Solaris system.

1. Change the host’s IP in /etc/hosts for the change to take effect
after reboot. If you are using Solaris 10, you must also change the host’s IP in /etc/inet/ipnodes for the change to take effect after reboot.

2. Run ifconfig interface ip_address netmask broadcast_address for the
IP address change to take effect immediately. The netmask and broadcast_address should be specified if you are using variable length subnet masks (VLSM), but may be omitted otherwise.


3. If you are using variable length subnet masks (VLSM), add the host’s network number and subnet mask to /etc/netmasks.

If the new IP address places the system on a different network:

4. Place the host's new default gateway in /etc/defaultrouter, and place the new netmask in /etc/netmasks.

5. Run route add default new_gateway for the new default gateway to take effect immediately.

6. Run route delete default old_gateway to delete the old default gateway from the routing table.
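A concrete, hedged run-through of steps 2, 5, and 6 (the interface name bge0 and all of the addresses below are made up for illustration):

#ifconfig bge0 192.168.20.15 netmask 255.255.255.0 broadcast 192.168.20.255
#route add default 192.168.20.1
#route delete default 192.168.10.1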

Verify Solaris 10 Multipathing/Configure SAN Disk


I was attempting to troubleshoot an issue where a user was complaining about slow performance on a SAN disk. The first thing that I did was check to make sure that there were not any disk performance problems that might have been causing this user's issues.

A quick iostat verified that everything was looking fine:
iostat -cxzn 1

 

This box is running Veritas, so let's check out the disks. vxdisk list shows one Sun6140 disk.

# vxdisk list
DEVICE       TYPE            DISK         GROUP        STATUS
Disk_0       auto:none       –            –            online invalid
Disk_1       auto:none       –            –            online invalid
SUN6140_0_1  auto:cdsdisk    diskname_dg02  diskname_dg online nohotuse

luxadm is a utility that discovers FC devices (luxadm probe), shuts
down devices (luxadm shutdown_device …), runs firmware upgrades
(luxadm download_firmware …), and much more. In this instance I use luxadm to get the true device name for my disk.


# luxadm probe
No Network Array enclosures found in /dev/es

Found Fibre Channel device(s):
Node WWN:200600a0b829a7a0  Device Type:Disk device
Logical Path:/dev/rdsk/c4t600A0B800029A7A000000DC747A8168Ad0s2

I then run luxadm display on the device. Below you can see that I do indeed have two paths to the device
(one controller = one path, two controllers = two paths).

# luxadm display /dev/rdsk/c4t600A0B800029A7A000000DC747A8168Ad0s2
DEVICE PROPERTIES for disk: /dev/rdsk/c4t600A0B800029A7A000000DC747A8168Ad0s2
Vendor:               SUN
Product ID:           CSM200_R
Revision:             0619
Serial Num:           SG71009283
Unformatted capacity: 12288.000 MBytes
Write Cache:          Enabled
Read Cache:           Enabled
Minimum prefetch:   0x1
Maximum prefetch:   0x1
Device Type:          Disk device
Path(s):

/dev/rdsk/c4t600A0B800029A7A000000DC747A8168Ad0s2
/devices/scsi_vhci/ssd@g600a0b800029a7a000000dc747a8168a:c,raw
Controller           /devices/pci@1f,4000/SUNW,qlc@5,1/fp@0,0
Device Address              203700a0b829a7a0,1
Host controller port WWN    210100e08bb370ab
Class                       secondary
State                       STANDBY
Controller           /devices/pci@1f,4000/SUNW,qlc@5/fp@0,0
Device Address              203600a0b829a7a0,1
Host controller port WWN    210000e08b9370ab
Class                       primary
State                       ONLINE

Had I only had one path, I would have run cfgadm and seen that one of the fc-fabric devices was unconfigured. I then could have used cfgadm to configure it and enable my multipathing, as sketched after the output below.

# cfgadm
Ap_Id                          Type         Receptacle   Occupant     Condition
c0                             scsi-bus     connected    configured   unknown
c1                             scsi-bus     connected    unconfigured unknown
c2                             fc-fabric    connected    configured   unknown
c3                             fc-fabric    connected    configured   unknown
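Had that been the case, a hedged example of the configure step would look like the following (c3 is just one of the fc-fabric attachment points from the listing above; use whichever one shows up as unconfigured on your system):

# cfgadm -c configure c3
# cfgadm -al c3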

MPXIO Primer

Solaris I/O multipathing gives you the ability to set up multiple
redundant paths to a storage system and gives you the benefits of load
balancing and failover.

Need to enable MPXIO

Solaris 10 is the easiest, because the MPxIO capability is
built in. You just need to turn it on!

To enable it, edit /kernel/drv/fp.conf. At the end it should say:

mpxio-disable="yes";

Just change yes to no and it will be enabled:

mpxio-disable="no";

Before multipathing, you should see two copies of each disk in
format. Afterwards, you’ll just see the one copy.

MPxIO assigns the next available controller ID and makes up some
horrendously long target number. For example:

Filesystem kbytes used avail capacity Mounted on
/dev/dsk/c6t600C0FF000000000086AB238B2AF0600d0s5
697942398 20825341 670137634 4% /test
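A hedged alternative worth noting: the stmsboot(1M) utility can flip the same fp.conf switch for you, update the device paths in /etc/vfstab, and, after the reboot, list the mappings from the old device names to the new scsi_vhci ones.

# stmsboot -e
# stmsboot -L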

Solaris: How to Resize a Swap Partition….on a disk without any free slices

First, let's detach the swap partition (submirror) on the secondary disk, where d1 is the mirror metadevice and d21 is the submirror.

#metadetach d1 d21
d1: submirror d21 is detached

metaclear d21
d21: Concat/Stripe is cleared
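If you want to confirm the mirror and submirror layout before (or after) touching anything, metastat will show it. A quick hedged sanity check, using the d1 swap mirror from this example:

metastat d1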

Now let's format the secondary disk (c1t1d1 in this example).

Below is the partition table of the disk that I am working with. Slice one will be resized using unallocated disk space. Add 1 to the last cylinder of slice 7 to determine your starting cylinder. In this case my new starting cylinder is 2691.

Total disk cylinders available: 14087 + 2 (reserved cylinders)
Part Tag Flag Cylinders Size Blocks
0 root wm 0 - 824 4.00GB (825/0/0) 8395200
1 swap wu 825 - 1031 1.00GB (207/0/0) 2106432
2 backup wm 0 - 14086 68.35GB (14087/0/0) 143349312
3 unassigned wm 1032 - 1038 34.78MB (7/0/0) 71232
4 var wm 1039 - 1657 3.00GB (619/0/0) 6298944
5 home wm 1658 - 2070 2.00GB (413/0/0) 4202688
6 stand wm 2071 - 2277 1.00GB (207/0/0) 2106432
7 stand wm 2278 - 2690 2.00GB (413/0/0) 4202688

Now resize the partition.
In this case I am going to use the rest of the disk. According to the information for partition two, the disk has 14087 cylinders (0 through 14086), so subtract the starting cylinder from the total to get the slice size: 14087 - 2691 = 11396 cylinders.

Enter partition id tag[swap]:
Enter partition permission flags[wu]:
Enter new starting cyl[825]: 2691
Enter partition size[2106432b, 207c, 1028.53mb, 1.00gb]: 11396c

Now save this new partition scheme with the name command in the partition menu:

Enter table name (remember quotes): "my_table"

Then label your disk. As you can see below, my swap partition is now huge.

Part Tag Flag Cylinders Size Blocks
0 root wm 0 - 824 4.00GB (825/0/0) 8395200
1 swap wu 2691 - 14086 55.30GB (11396/0/0) 115965696
2 backup wm 0 - 14086 68.35GB (14087/0/0) 143349312
3 unassigned wm 1032 - 1038 34.78MB (7/0/0) 71232
4 var wm 1039 - 1657 3.00GB (619/0/0) 6298944
5 home wm 1658 - 2070 2.00GB (413/0/0) 4202688
6 stand wm 2071 - 2277 1.00GB (207/0/0) 2106432
7 stand wm 2278 - 2690 2.00GB (413/0/0) 4202688

Now let's swap the swap


First, reinitialize the swap submirror.
metainit d21
d21: Concat/Stripe is setup

Add swap to new submirror

swap -a /dev/md/dsk/d21

View swap

swap -l
swapfile dev swaplo blocks free
/dev/md/dsk/d1 85,1 16 2106416 2106416
/dev/md/dsk/d21 85,21 16 115965680 115965680

Remove the metadevice from swap

swap -d /dev/md/dsk/d1

Verify that previous step worked

swap -l
swapfile dev swaplo blocks free
/dev/md/dsk/d21 85,21 16 115965680 115965680

Clear Metadevice/Mirror

metaclear d1
Mirror is cleared
Clear Metadevice/submirror
metaclear d11
d11: Concat/Stripe is cleared

Format Primary Disk (c1t1d0 in this example)

Select a predefined table

partition> select
0. original
1. my_table
Specify table (enter its number)[0]: 1
View the partition table, and make sure it looks right.
Total disk cylinders available: 14087 + 2 (reserved cylinders)

Part Tag Flag Cylinders Size Blocks
0 root wm 0 - 824 4.00GB (825/0/0) 8395200
1 swap wu 2691 - 14086 55.30GB (11396/0/0) 115965696
2 backup wm 0 - 14086 68.35GB (14087/0/0) 143349312
3 unassigned wm 1032 - 1038 34.78MB (7/0/0) 71232
4 var wm 1039 - 1657 3.00GB (619/0/0) 6298944
5 home wm 1658 - 2070 2.00GB (413/0/0) 4202688
6 stand wm 2071 - 2277 1.00GB (207/0/0) 2106432
7 stand wm 2278 - 2690 2.00GB (413/0/0) 4202688

Label disk
partition> label
Ready to label disk, continue? y

Put things back the way they were.

Reinitialize the swap submirror on the primary disk

metainit d11
d11: Concat/Stripe is setup

Reinitialize Metadevice Mirror

metainit d1
d1: Mirror is setup

Add the metadevice to swap
swap -a /dev/md/dsk/d1

… and verify that it took

swap -l
swapfile dev swaplo blocks free
/dev/md/dsk/d21 85,21 16 115965680 115965680
/dev/md/dsk/d1 85,1 16 115965680 115965680

Remove the submirror that you added to swap in the step above

swap -d /dev/md/dsk/d21

Reattach Submirror d21 to Mirror d1

metattach d1 d21
d1: submirror d21 is attached

Redefine dump device

dumpadm -d swap
Dump content: kernel pages
Dump device: /dev/md/dsk/d1 (swap)
Savecore directory: /var/crash/blahblah
Savecore enabled: yes
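As a final hedged check, swap -s summarizes the total swap space that is allocated, reserved, used, and available:

swap -s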
