
Solaris 10 Quick Reference (Work In Progress)

 

This document is a quick reference to the major differences and new technologies in Solaris 10. It is not exhaustive and covers only the following topics:

 

     ZFS

     Solaris Containers (Zones)

     Resource Management

     Predictive Self Healing

     DTrace

     Misc

 

ZFS – Solaris Zettabyte File System

Features

     128-bit Filesystem

     File System and Volume Manager Integrated

     RAID 0, RAID 1, RAID-Z (single parity, similar to RAID 5) & RAID-Z2 (double parity)

     Snapshot and clone support

     Compression

     Self-healing

     Checksums on all data blocks

     Does not use /etc/vfstab as standard (see the legacy-mount sketch after this list)

     NFS shares configured through zfs itself

     Does not support ufsdump/ufsrestore (use zfs send/receive; see the sketch at the end of the ZFS tasks)
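
ZFS mounts file systems automatically from pool metadata, so /etc/vfstab is normally left alone. Where a vfstab entry is genuinely required, a dataset can be switched to legacy mounting. A minimal sketch, reusing the pool and mount point from the examples below:

zfs set mountpoint=legacy mypool/home       # hand mounting over to mount/vfstab
mount -F zfs mypool/home /export/home       # manual mount, or add an equivalent zfs-type vfstab entry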

 

Commands

zpool

Manages Pools (disks)

zfs

Manages file systems 

 

Pool tasks

zpool create mypool c0t1d0

Create mypool with whole disk

zpool create mypool c0t1d0s7

Create mypool with slice

zpool create mypool c1t0d0 c2t0d0

Create mypool with stripe

zpool create mypool mirror c1t0d0 c2t0d0

Create mypool with mirror

zpool create mypool raidz c1t0d0 c1t1d0 c1t2d0

Create mypool with RAID-Z

zpool create mypool raidz2 c1t0d0 c1t1d0 c1t2d0

Create mypool with RAID-Z Double Parity

zpool create -m /export/home mypool c2t0d0

Create mypool with whole disk with mount point /export/home

mkfile 2g /disk1 ; mkfile 2g /disk2 ; mkfile 2g /disk3 ; mkfile 2g /disk4

zpool create mypool mirror /disk1 /disk2

zpool add mypool mirror /disk3 /disk4

 Create a mirrored mypool backed by files (testing only!)

zpool destroy mypool

DESTROY POOL AND FILESYSTEM

zpool list

List pools

zpool status -x

Display pool health (only pools with errors)

zpool replace mypool c1t0d0 c1t4d0

Replace drive c1t0d0 with c1t4d0

zpool clear mypool c1t2d0

Clears transient errors from drive

zpool online mypool c1t3d0

Bring c1t3d0 back online

zpool scrub mypool

Disk scrubber (checks drives & data)

zpool iostat [-v] [<pool>] <interval>

Display pool I/O statistics every <interval> seconds

 

ZFS File system Tasks

zfs create mypool/home

Create filesystem home

zfs set mountpoint=/export/home mypool/home

Set the mount point to /export/home

zfs create mypool/home/user

Create user directory

zfs destroy mypool/home

Delete home filesystem and below (users)

zfs set quota=20G mypool/home/user

Set a 20 GB quota on user

zfs set compression=on mypool/home

Enable compression on home and its descendants

zfs get all mypool/home

View settings

zfs set sharenfs=on mypool/home/user

Create NFS share

zfs set sharenfs=ro mypool/home/jumpstart

Create read-only NFS share

zfs set sharenfs=root=192.168.10.162 rpool/export/zones

NFS share granting root access to 192.168.10.162

zfs unshare mypool/home/user

disable NFS share

zfs list

List ZFS file systems

zfs list -o name,quota,mountpoint

List selected properties of each file system

zfs snapshot mypool/home@today

Create readonly snapshot of home called today

zfs set snapdir=visible mypool

Enable access to snapshot

ls /export/home/.zfs/snapshot/today

Access to snapshot

zfs rollback mypool/home@today

Rollback to snapshot

zfs clone mypool/home@today mypool/home_new

Clone home snapshot and create a writable mypool/home_new

zfs promote mypool/home_new

Make clone the primary data source

zfs rename mypool/home mypool/home_old ; zfs rename mypool/home_new mypool/home

Rename the clone to replace home; move the old file system to home_old

zfs destroy mypool/home@today

Destroy snapshot
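
Since ufsdump/ufsrestore cannot be used on ZFS, the usual replacement is snapshots plus zfs send/receive. A sketch reusing the names above; the backup pool "backuppool" and remote <host> are assumptions:

zfs snapshot mypool/home@backup                            # point-in-time copy to send
zfs send mypool/home@backup | zfs receive backuppool/home  # full copy into another pool

An incremental stream (zfs send -i @today mypool/home@backup) can be piped to ssh <host> zfs receive to keep a remote copy up to date.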

 

Solaris Containers (Zones)

 

Features

     Software partition, single kernel!

     Shared or individual packages and patches

     chroot on steroids!

     Consolidation

     Test and development

     Resource Management

     BrandZ (Solaris 8/9 and Linux)

     IP Share & Exclusive Mode

     Zone-aware commands (ps / prstat ...)

     Unbundled application Zonestat

Configuration Guidelines

Network & Routing

The global zone must be connected to the network, with its routing table correctly configured.

DHCP

Not supported in a zone in shared-IP mode; only available in exclusive-IP mode (see the sketch below)

 

NFS Server

A non-global zone cannot export (share) file systems
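
Where DHCP or a zone-private IP stack is needed, a zone can be given exclusive use of a NIC (Solaris 10 8/07 or later). A sketch; the zone name and spare interface e1000g1 are assumptions, and no address is set because the zone configures its own:

zonecfg -z dhcpzone
zonecfg:dhcpzone> create
zonecfg:dhcpzone> set zonepath=/export/zones/dhcpzone
zonecfg:dhcpzone> set ip-type=exclusive
zonecfg:dhcpzone> add net
zonecfg:dhcpzone:net> set physical=e1000g1
zonecfg:dhcpzone:net> end
zonecfg:dhcpzone> commit
zonecfg:dhcpzone> exit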

 

Commands

zonecfg

Manages zones configuration

zoneadm

Manages Zones

zlogin

Login into a zone

~. (drops out of zlogin, but may also kill an enclosing SSH session)

Change the SSH escape character so ~. only affects zlogin:

ssh -e ^ <user>@<globalzone>

 

Create Zone

zonecfg -z myzone

zonecfg:myzone> create

zonecfg:myzone> set zonepath=/export/zones/myzone

zonecfg:myzone> set autoboot=true

zonecfg:myzone> add net

zonecfg:myzone:net> set address=10.10.25.33

zonecfg:myzone:net> set physical=e1000g0

zonecfg:myzone:net> end

zonecfg:myzone> verify

zonecfg:myzone> commit

zonecfg:myzone> exit

Basic Zone

zonecfg -z webzone

zonecfg:webzone> set zonepath=/export/zones/webzone

zonecfg:webzone> set autoboot=true

zonecfg:webzone> add net

zonecfg:webzone:net> set address=10.10.25.34

zonecfg:webzone:net> set physical=e1000g0

zonecfg:webzone:net> end

zonecfg:webzone> add fs

zonecfg:webzone:fs> set dir=/export/home

zonecfg:webzone:fs> set type=lofs

zonecfg:webzone:fs> set special=/export/home

zonecfg:webzone:fs> end

zonecfg:webzone> info

zonecfg:webzone> verify

zonecfg:webzone> commit

zonecfg:webzone> exit

Zone with home directories shared with Global zone

zonecfg -z ownzone

zonecfg:ownzone> create

zonecfg:ownzone> set zonepath=/export/zones/ownzone

zonecfg:ownzone> set autoboot=true

zonecfg:ownzone> add net

zonecfg:ownzone:net> set address=10.10.25.35

zonecfg:ownzone:net> set physical=e1000g0

zonecfg:ownzone:net> end

zonecfg:ownzone> remove inherit-pkg-dir dir=/lib

zonecfg:ownzone> remove inherit-pkg-dir dir=/platform

zonecfg:ownzone> remove inherit-pkg-dir dir=/sbin

zonecfg:ownzone> remove inherit-pkg-dir dir=/usr

zonecfg:ownzone> verify

zonecfg:ownzone> commit

zonecfg:ownzone> exit

Zone with no inherited packages

zonecfg -z poolzone

zonecfg:poolzone> create

zonecfg:poolzone> set zonepath=/export/zones/poolzone

zonecfg:poolzone> set autoboot=true

zonecfg:poolzone> set pool=qa-pool

zonecfg:poolzone> add net

zonecfg:poolzone:net> set address=10.10.25.33

zonecfg:poolzone:net> set physical=e1000g0

zonecfg:poolzone:net> end

zonecfg:poolzone> verify

zonecfg:poolzone> commit

zonecfg:poolzone> exit

Zone bound to resource pool "qa-pool"; see Resource Management below.

 

sysidcfg file: copy to <zonepath>/root/etc/sysidcfg to pre-answer the first-boot questions

name_service=DNS

       {domain_name=<domain>

       name_server=<DNS server IP>}

system_locale=en_GB.ISO8859-1

terminal=vt100

network_interface=primary {

                hostname=<hostname>}

security_policy=NONE

nfs4_domain=LOCAL.com

timezone=GB-Eire

root_password=<encrypted password>


Administer Zone

zoneadm -z myzone install

Install a configured zone

zoneadm -z myzone boot

Boot a zone

zoneadm -z myzone boot && zlogin -C myzone

Boot and watch console of a zone

zoneadm -z myzone halt

Stop a zone immediately (like pulling the power)

zoneadm -z myzone reboot

Reboot a zone

zoneadm -z myzone uninstall -F

Uninstall a zone (delete its installed files)

zonecfg -z myzone delete -F

Removes a zone config

zoneadm list -civ

Display Zones Status

zoneadm -z myzone detach

Detach zone (ready to move)

zoneadm -z myzone attach

Attach zone

zoneadm -z myzone attach -F

Attach zone without validation

zoneadm -z myzone attach -u

Attach zone, updating its patches to match the global zone

zoneadm -z myzone attach -u -b <patch-id>

Attach zone, updating patches to match the global zone while backing out <patch-id>

zlogin -C myzone

Login to the zone console

~. to drop out

zlogin -l sysadmin myzone

Login as sysadmin user

zlogin myzone shutdown -i 0

Shutdown zone gracefully

zlogin -S myzone

Login in safe mode for diagnostics

zonename

Tell me my zonename

 

Clone myzone to dolly (ZFS)

zlogin myzone shutdown -i 0

Shut down zone gracefully

zonecfg -z myzone export -f /export/zones/myzone.cfg

Export the zone configuration as a template

mkdir /export/zones/dolly ; chmod 700 /export/zones/dolly

Create the new zonepath with strict permissions

vi /export/zones/myzone.cfg

Change path and IP address

zonecfg -z dolly -f /export/zones/myzone.cfg

Create zone from master template

zoneadm -z dolly clone myzone

Clone myzone to dolly.

zoneadm -z dolly boot

Boot dolly

 

Move myzone to different server

Source Host

1

zlogin myzone shutdown -i 0

Shut down zone gracefully

2

zonecfg -z myzone export -f /export/zones/myzone.cfg

Create export file

3

zoneadm -z myzone detach

Detach the zone from the source host

4

cd /export/zones ; tar cf myzone.tar myzone

Archive the zonepath (relative path, so it can be extracted elsewhere)

5

scp myzone.tar myzone.cfg <user>@<target-host>:

Copy the archive and config to the target host

 

Target Host

1

tar xvf myzone.tar

Untar inside the new zonepath's parent directory (e.g. /export/zones).

2

Edit myzone.cfg to reflect new zonepath

 

3

zonecfg -z myzone -f myzone.cfg

Create zone from master template

4

zoneadm -z myzone attach

Attach zone

5

zoneadm -z myzone boot

Boot the zone

 

Resource Management (CPU)

 

Features

     Fixed number of CPUs per zone

     Variable CPUs per zone

     FSS (Fair Share Scheduler): weighted zones, recommended for most applications

     Mixed workloads

     Solaris 10 8/07 and later add the dedicated-cpu feature, which can be an effective alternative to pools (see the Oracle example below).

Commands

pooladm

Administer pools

poolcfg

Configure pools

dispadmin

Dispatcher (scheduler) administration

 

 

Create Fixed CPU Zone Pool

pooladm -e

Enable pools

svcadm enable pools

Enable pools service via SMF (equivalent)

pooladm -s

Save configuration

poolcfg -c 'create pset db-pset (uint pset.min=10; uint pset.max=10)'

Processor set "db-pset" with 10 CPUs

poolcfg -c 'create pool db-pool'

Create resource pool db-pool

poolcfg -c 'associate pool db-pool (pset db-pset)'

Associate the processor set with the pool

pooladm -c

Activate configuration

zonecfg -z dbzone

zonecfg:dbzone> set pool=db-pool

zonecfg:dbzone> verify

zonecfg:dbzone> commit

zonecfg:dbzone> exit

Associate the zone with a resource pool
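
To confirm the binding took effect, a quick check from the global zone (a sketch; the CPU count assumes the 10-CPU pset above):

poolstat                  # pool sizes and load
poolbind -q <pid>         # show which pool a given process is bound to
zlogin dbzone psrinfo     # the zone should now see only the pset's 10 CPUs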

 

Create FSS Zone (Pool)

pooladm -e

Enable pools

svcadm enable pools

Enable pools

pooladm -s

Save configuration

poolcfg -c 'create pool db-pool ( string pool.scheduler = "FSS" )'

Resource pool using the FSS scheduler

poolcfg -c 'create pool ap-pool ( string pool.scheduler = "FSS" )'

Resource pool using the FSS scheduler

pooladm -c

Activate configuration

pooladm

Display configuration

zonecfg -z dbzone

zonecfg:dbzone> set pool=db-pool

zonecfg:dbzone> add rctl

zonecfg:dbzone:rctl> set name=zone.cpu-shares

zonecfg:dbzone:rctl> add value (priv=privileged,limit=3,action=none)

zonecfg:dbzone:rctl> end

zonecfg:dbzone> verify

zonecfg:dbzone> commit

zonecfg:dbzone> exit

Associate the zone with a resource pool and set FSS CPU share to 3

zonecfg -z apzone

zonecfg:apzone> set pool=ap-pool

zonecfg:apzone> add rctl

zonecfg:apzone:rctl> set name=zone.cpu-shares

zonecfg:apzone:rctl> add value (priv=privileged,limit=2,action=none)

zonecfg:apzone:rctl> end

zonecfg:apzone> verify

zonecfg:apzone> commit

zonecfg:apzone> exit

Associate the zone with a resource pool and set FSS CPU share to 2

zlogin apzone init 6 && zlogin dbzone init 6

Reboot zones

dispadmin -d

Display default scheduler

dispadmin -d FSS

Set scheduler to FSS

priocntl -s -c FSS -i all

Set scheduler to FSS now

prctl -n zone.cpu-shares -i zone global

Display Global zone CPU shares

prctl -n zone.cpu-shares -v 2 -r -i zone global

Set Global Zone to FSS CPU share to 2

* not persistent after reboots

prctl -n zone.cpu-shares -r -v 3 -i zone <zone>

Dynamically change zone CPU shares
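
Shares are relative weights, not percentages: with dbzone holding 3 shares and apzone 2, a fully contended machine splits CPU time 3/(3+2) = 60% to dbzone and 40% to apzone, while an idle apzone leaves dbzone free to use everything.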

 

Oracle Database example using dedicated-cpu (file systems for /u01 etc. still need to be added)

In the global zone:

mkdir /usr/local

 

zonecfg -z orazone

zonecfg:orazone> set zonepath=/zones/orazone

zonecfg:orazone> set max-shm-memory=4G

zonecfg:orazone> add dedicated-cpu

zonecfg:orazone:dedicated-cpu> set ncpus=12-16

zonecfg:orazone:dedicated-cpu> set importance=2

zonecfg:orazone:dedicated-cpu> end

zonecfg:orazone> add net

zonecfg:orazone:net> set address=10.10.25.35

zonecfg:orazone:net> set physical=e1000g0

zonecfg:orazone:net> end

zonecfg:orazone> add fs

zonecfg:orazone:fs> set dir=/usr/local

zonecfg:orazone:fs> set type=lofs

zonecfg:orazone:fs> set special=/usr/local

zonecfg:orazone:fs> end

zonecfg:orazone> verify

zonecfg:orazone> commit

zonecfg:orazone> exit

Create zone with 12-16 dedicated CPUs and a 4 GB shared-memory cap

zonecfg -z orazone

zonecfg:orazone> set zonepath=/zones/orazone

zonecfg:orazone> set max-shm-memory=4G

zonecfg:orazone> add capped-cpu

zonecfg:orazone:capped-cpu> set ncpus=3.75

zonecfg:orazone:capped-cpu> end

zonecfg:orazone> add net

zonecfg:orazone:net> set address=10.10.25.35

zonecfg:orazone:net> set physical=e1000g0

zonecfg:orazone:net> end

zonecfg:orazone> add fs

zonecfg:orazone:fs> set dir=/usr/local

zonecfg:orazone:fs> set type=lofs

zonecfg:orazone:fs> set special=/usr/local

zonecfg:orazone:fs> end

zonecfg:orazone> verify

zonecfg:orazone> commit

zonecfg:orazone> exit
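
Create zone with a CPU cap: ncpus=3.75 limits the zone to 375% of one CPU's worth of execution time. Unlike dedicated-cpu above, capped-cpu only sets an upper bound and reserves nothing.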

 

Predictive Self Healing & Service Management Facility (SMF)

 

Features

      Predictive hardware monitoring

      Fault isolation and deactivation of faulty components

      Fault Management Architecture FMA / Service Management Facility SMF

      Message Identifier lookup @ http://www.sun.com/msg/

      RC scripts, /etc/inetd.conf & /etc/inittab are legacy

      Milestones as well as run levels

      FMRI – example svc:/network/ssh:default

      Methods and Manifest (XML)

             

Commands and Logs

svcs

Display service status and dependencies

svcadm

Enable/Disable services

inetconv

Convert inetd.conf entries to SMF services

svccfg

Manifest Management

/var/adm/messages

System logs

/var/svc/log

Service Logs

/etc/svc/volatile

Pre-single-user boot logs

 

SMF Tasks

svcadm disable system/cron:default

Disable cron

svcadm enable system/cron:default

Enable cron

svcadm refresh network/ssh:default

Re-read ssh configuration

svcadm restart network/ssh:default

Restart ssh

svcadm -v enable -r nfs/server

Enables all services required to start nfs

svcadm -v enable -r -t nfs/server

Enables all services required to start nfs, but only until the next reboot

svcs -a

List all services

svcs -p ssh

Show processes attached to ssh server

svcs -d /network/smtp

Show which services smtp depends on

svcs -D /network/smtp:sendmail

Show which services depend on smtp

svcs -xv

Display failed services

boot -m verbose

Display each service as it starts during boot (from the OBP ok prompt)

svcadm milestone -d milestone/single-user:default

Set the default milestone (run level)

svcadm milestone milestone/multi-user

change run level to multi-user

ok boot -m milestone=single-user

Boot into the single-user milestone
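
svccfg and inetconv from the Commands table handle legacy services and custom manifests. A short sketch; the site manifest path is a hypothetical example:

inetconv                                          # convert remaining /etc/inet/inetd.conf entries to SMF
svccfg import /var/svc/manifest/site/myapp.xml    # import a hand-written manifest
svccfg export ssh > /tmp/ssh.xml                  # dump an existing manifest as a starting point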

 


Security

RBAC

                   /etc/user_attr                              # User and role information

                   /etc/security/prof_attr                     # Predefined profiles (collections of rights)

                   /etc/security/policy.conf                   # User defaults

                   /etc/security/exec_attr                     # Rights profiles and their execution attributes

 

profiles <username>

Display security profiles assigned to user

profiles -l <username>

Displays individual commands within a profile

pfexec <cmd>

Executes commands with correct privileges

roles <username>

Display roles assigned to user
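
Roles are created like users, granted profiles, and then assumed with su. A sketch; the role name, home directory and profile choice are assumptions:

roleadd -m -d /export/home/secadm -P "Primary Administrator" secadm    # hypothetical role
passwd secadm                    # a role needs its own password
usermod -R secadm <username>     # let <username> assume the role
su secadm                        # assume the role (direct login as a role is refused)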

 

Solaris Security Toolkit

                   Download from http://www.sun.com/software/security/jass/

                   Installed into /opt/SUNWjass

                   Run manually or integrated with JumpStart

                   Ready-made templates in /opt/SUNWjass/Drivers

                   Always have console access, as the toolkit often blocks remote logins.

./jass-execute -d secure.driver

Hardens Solaris

./jass-execute -a secure.driver

Audits Solaris against template

./jass-execute -c

Clear previous edits.

 

Miscellaneous Settings

/etc/ssh/sshd_config

SSH Settings

Disable root login and allow only the SSH-2 protocol.

Consider allowing only named users,

or at least denying well-known accounts (oracle, admin, ...).

vi /etc/security/policy.conf

change CRYPT_DEFAULT to 1 (BSD MD5)

CRYPT_DEFAULT=1

Change password encryption

 

 

Solaris IP Filter

                   Not enabled by default

                   Packet filtering available between zones

                   NAT

                   Stateful

                   Manual configuration only

 

vi /etc/ipf/ipf.conf

Edit rules file

ipf -f /etc/ipf/ipf.conf

Load the rules file

ipf -Fa

Flush all rules

ipf -Fi

Flush the incoming rule set

ipfstat

IP Filter statistics

svcadm enable network/ipfilter

Enable the IP Filter

 

Rule Examples

#/etc/ipf/ipf.conf

 

# pass and log everything by default

pass in log on elxl0 all

pass out log on elxl0 all

 

# Disable SSH access to this machine from  192.168.10.254

block in quick proto tcp from 192.168.10.254/32 to any port = 22

 

# block, but don't log, incoming packets from other reserved addresses

block in quick on elxl0 from 10.0.0.0/8 to any

block in quick on elxl0 from 172.16.0.0/12 to any

 

# block and log untrusted internal IPs. <thishost> stands for the address

# of the machine running Solaris IP Filter (0/32 notation can also be used).

block in log quick from 192.168.1.15 to <thishost>

block in log quick from 192.168.1.43 to <thishost>

 

# block and log X11 (port 6000) and remote procedure call

# and portmapper (port 111) attempts

block in log quick on elxl0 proto tcp from any to elxl0/32 port = 6000 keep state

block in log quick on elxl0 proto tcp/udp from any to elxl0/32 port = 111 keep state

Note – example taken from Sun IP Security Manual

 

DTrace

 

A dynamic tracing facility that provides a comprehensive view of operating system and application behaviour. It has functionality similar to truss, apptrace, prex and mdb, bundled into a single scriptable tool that can examine both userland activity and the kernel. DTrace can be used on live production servers, often with negligible impact on performance.

 

Example D-scripts are provided in /usr/demo/dtrace
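
A few one-liners give the flavour (run as root; Ctrl-C prints the aggregated results):

dtrace -n 'syscall:::entry { @[execname] = count(); }'              # system calls per process
dtrace -n 'proc:::exec-success { trace(curpsinfo->pr_psargs); }'    # every command executed, live
dtrace -l | wc -l                                                   # number of available probes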

 

DTrace Toolkit        http://www.opensolaris.org/os/community/dtrace/dtracetoolkit

DTrace Manual         http://docs.sun.com/app/docs/doc/817-6223

Quick Reference       http://developers.sun.com/solaris/articles/dtrace_quickref/dtrace_quickref.html

How To                http://www.sun.com/software/solaris/howtoguides/dtracehowto.jsp

DTrace Cheatsheet     http://blogs.sun.com/brendan/entry/dtrace_cheatsheet

 

 

Solaris 10 minor differences

 

routeadm -e ipv4-forwarding

routeadm -d ipv4-routing

routeadm -e ipv4-routing

Routing commands for IP forwarding and routing (in.routed); apply changes to the running system with routeadm -u

echo "server IP_ADDRESS" >> /etc/inet/ntp.conf

svcadm enable /network/ntp

Enable NTP


svccfg -s x11-server setprop options/tcp_listen = true

Allow remote X11 connections (Solaris 10)

dumpadm -d /dev/dsk/<device>

Set a dedicated crash dump device (supports ZFS root)

dumpadm -s /var/crash/<hostname>

Set the directory where crash dumps are saved

 

Live Upgrade (ZFS)

lucreate -c <be_name> -n <new-be_name>

Name current boot environment and create New BE.

lucreate  -n <new-be_name>

Create new environment

lustatus

Display boot environment

luupgrade -u -n <new-be_name> -s /net/<ip address>/export/install

Live OS Upgrade

cd 10_Recommended

luupgrade -t -n <new-be_name> -O "-t" -s . ./patch_order

Patch a live environment from a downloaded patch cluster.

luactivate <new-be_name>

Activate boot environment; takes effect after the next reboot.

DON'T use the "reboot" command, always use init or shutdown.
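
If the new BE misbehaves, fall back by activating the previous one. A sketch using the names above:

luactivate <be_name>    # re-activate the original BE
init 6                  # reboot into it (again, never plain reboot)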

Reference Material

 

Little Known Solaris Features http://www.c0t0d0s0.org/pages/lksfbook.html

Solaris Security Tool Kit http://docs.sun.com/app/docs/prod/61ec778c-5688-47ba-b4bf-af20b140731a

Solaris Patching Best Practice http://www.sun.com/bigadmin/features/articles/patch_management.jsp

Solaris Zones FAQ http://hub.opensolaris.org/bin/view/Community+Group+zones/faq

Zonestat Util http://hub.opensolaris.org/bin/view/Project+zonestat/

SVC SMF https://www.sun.com/offers/docs/smfmanifest_howto.pdf

 

 

 

 

 

 

Andy Paton


7/11/09

Version 2.1
