Monday, April 26, 2010

Sun Advanced Lights Out Manager system controller (ALOM SC) - only for SPARC system

ALOM System Controller enables you to remotely manage and administer a server.

It comes preinstalled on the machine, so as soon as you plug in the power cable, it works.

Yes, it uses the server's standby power, which enables you to remotely power the server off and on (very useful when someone schedules a power outage in a remote office and you want to gracefully bring the server down and power it back on after the outage).
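
For example (a quick sketch using the commands from the help listing further below; poweroff attempts a graceful OS shutdown, add -f to force it):

sc> poweroff
sc> poweron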

ALOM monitors hardware in the server, like CPUs, RAM, power supplies, etc., and much more, like voltages and the status of alarms.

Of course, all of this assumes you have configured ALOM's network parameters. Try to have a dedicated management subnet for this.

If you access ALOM and stay idle for 1 minute, it will switch to the serial console.

Or you can type console and reach the serial console of the remote system from your cube or your living room at home.

sc> console

To go back to ALOM type #. (pound and dot)




This can be changed (if you really want) from ALOM with sc> setsc sc_escapechars ## (for example)

hostname console login:

#.  ---> switch from console to ALOM

Copyright 2004 Sun Microsystems, Inc.  All rights reserved.
Use is subject to license terms.
Sun(tm) Advanced Lights Out Manager 1.5.4 (hostname)
Please login: admin
Please Enter password: **********
Cannot log in to ALOM? Try the default password, which is the last 8 digits of the chassis serial number. The username is admin.

Okay, I don't want to write much more about this, since the commands are intuitive and easy to understand (note: they may vary between ALOM versions). So find out about them by yourself and see what help offers you.
sc> help
Available commands
------------------
poweron [-c] {FRU}
poweroff [-y] [-f]
removefru [-y] {FRU}
reset [-y] [-x] [-c]
break [-y] [-c]
bootmode [normal|reset_nvram|diag|skip_diag|bootscript="string"]
console [-f]
consolehistory [-b lines|-e lines] [-g lines] [-v] [boot|run]
showlogs [-b lines|-e lines] [-g lines] [-v]
setlocator [on|off]
showlocator
showenvironment
showfru
showplatform [-v]
showsc [-v] [param]
shownetwork [-v]
setsc [param] [value]
setupsc
showdate
setdate [[mmdd]HHMM | mmddHHMM[cc]yy][.SS]
resetsc [-y]
flashupdate [-s IPaddr -f pathname] [-v]
setdefaults [-y] [-a]
useradd 
userdel [-y] 
usershow [username]
userpassword 
userperm  [c][u][a][r]
password
showusers [-g lines]
logout
help [command]
I guess some of the most used commands will be:
sc> setsc if_network true
sc> setsc netsc_dhcp false
sc> setsc netsc_ipaddr 192.168.etc.etc
sc> setsc netsc_ipnetmask 255.255.255.etc
sc> setsc netsc_ipgateway 192.168.etc.etc
sc> resetsc
Are you sure you want to reset the SC [y/n]?  y
sc> shownetwork
SC network configuration is:
IP Address: 192.168.etc.etc
Gateway address: 192.168.etc.etc
Netmask: 255.255.255.etc
Ethernet address: 00:14:4f:64:b2:6f
sc> showplatform
SUNW,Sun-Fire-v240
Domain Status
------ ------
hostname  OS Running
sc> showenvironment

=============== Environmental Status ===============
--------------------------------------------------------------------------------
System Temperatures (Temperatures in Celsius):
--------------------------------------------------------------------------------
Sensor         Status    Temp LowHard LowSoft LowWarn HighWarn HighSoft HighHard
--------------------------------------------------------------------------------
MB.P0.T_CORE    OK         56     --      --      --      84       89       96
MB.P1.T_CORE    OK         53     --      --      --      84       89       96
MB.T_ENC        OK         23     -6      -3       5      40       48       51

--------------------------------------
Front Status Panel:
--------------------------------------
Keyswitch position: NORMAL

--------------------------------------------------------
System Indicator Status:
--------------------------------------------------------
MB.LOCATE            MB.SERVICE           MB.ACT
--------------------------------------------------------
OFF                  OFF                  ON

--------------------------------------------
System Disks:
--------------------------------------------
Disk   Status            Service  OK2RM
--------------------------------------------
HDD0   OK                OFF      OFF
HDD1   OK                OFF      OFF
HDD2   NOT PRESENT       OFF      OFF
HDD3   NOT PRESENT       OFF      OFF

----------------------------------------------------------
Fans (Speeds Revolution Per Minute):
----------------------------------------------------------
Sensor           Status           Speed   Warn    Low
----------------------------------------------------------
F0.RS            OK                6490     --   1000
F1.RS            OK                6750     --   1000
F2.RS            OK                6553     --   1000
MB.P0.F0.RS      OK               16071   2000   2000
MB.P0.F1.RS      OK               15697   2000   2000
MB.P1.F0.RS      OK               15697   2000   2000
MB.P1.F1.RS      OK               13235   2000   2000

--------------------------------------------------------------------------------
Voltage sensors (in Volts):
--------------------------------------------------------------------------------
Sensor         Status       Voltage LowSoft LowWarn HighWarn HighSoft
--------------------------------------------------------------------------------
MB.P0.V_CORE   OK             1.46      --    1.26    1.54       --
MB.P1.V_CORE   OK             1.47      --    1.26    1.54       --
MB.V_VTT       OK             1.24      --    1.17    1.43       --
MB.V_GBE_+2V5  OK             2.49      --    2.25    2.75       --
MB.V_GBE_CORE  OK             1.19      --    1.08    1.32       --
MB.V_VCCTM     OK             2.53      --    2.25    2.75       --
MB.V_+2V5      OK             2.49      --    2.34    2.86       --
MB.V_+1V5      OK             1.51      --    1.35    1.65       --
MB.BAT.V_BAT   OK             2.95      --    2.70      --       --

--------------------------------------------
Power Supply Indicators:
--------------------------------------------
Supply    Active  Service  OK-to-Remove
--------------------------------------------
PS0       ON      OFF      OFF
PS1       ON      OFF      OFF

------------------------------------------------------------------------------
Power Supplies:
------------------------------------------------------------------------------
Supply  Status          Underspeed  Overtemp  Overvolt  Undervolt  Overcurrent
------------------------------------------------------------------------------
PS0     OK              OFF         OFF       OFF       OFF        OFF
PS1     OK              OFF         OFF       OFF       OFF        OFF

----------------------
Current sensors:
----------------------
Sensor          Status
----------------------
MB.FF_SCSI       OK
sc> usershow
Username         Permissions      Password
admin            cuar             Assigned
sc> showfru
FRU_PROM at MB.SEEPROM
  Manufacturer Record
  Timestamp: THU NOV 23 02:06:23 UTC 2006
  Description: FRUID,INSTR,M'BD,2X1.5GHZ,CPU,RoHS
  Manufacture Location: Shunde,China
  Sun Part No: 3753467
  Sun Serial No: 1U0J7I
  Vendor JEDEC code: 3E5
  Initial HW Dash Level: 01
  Initial HW Rev Level: 50
  Shortname: MOTHERBOARD
  Etc, etc, etc
Prefer working from the OS? For some platforms, like the SunFire V240 and V440, ALOM can be configured from the OS level using the scadm utility. Unfortunately, this is not supported on the SunFire T2000. Examples:
/usr/platform/SUNW,Sun-Fire-V240/sbin/scadm set if_network true

Enable Ethernet link integrity test: 

/usr/platform/SUNW,Sun-Fire-V240/sbin/scadm set netsc_tpelinktest true

Enable backup of the local user database (usernames, passwords, permissions) to the system configuration card.

/usr/platform/SUNW,Sun-Fire-V240/sbin/scadm set sc_backupuserdata true

/usr/platform/SUNW,Sun-Fire-V240/sbin/scadm set netsc_dhcp false

/usr/platform/SUNW,Sun-Fire-V240/sbin/scadm set netsc_ipaddr 192.168.etc.etc

/usr/platform/SUNW,Sun-Fire-V240/sbin/scadm set netsc_ipnetmask 255.255.255.etc

/usr/platform/SUNW,Sun-Fire-V240/sbin/scadm set netsc_ipgateway 192.168.etc.etc

/usr/platform/SUNW,Sun-Fire-V240/sbin/scadm resetrsc

/usr/platform/SUNW,Sun-Fire-V240/sbin> ./scadm shownetwork
IP Address: 192.168.etc.etc
Gateway address: 192.168.etc.etc
Netmask: 255.255.255.etc
Ethernet address: 00:00:00:00:00:00


iSCSI

Brief introduction

The iSCSI protocol allows SCSI commands to be used over a TCP/IP network.
The main reason people want to use iSCSI is reduced cost, since they don't need to buy FC HBAs and the network infrastructure is already set up.
The default port for iSCSI targets is 3260.
iSCSI versus NFS
Both are used for accessing a storage device over the network, so what's the difference?

1. NFS is used for accessing remote FILE SYSTEM data.
Many people can access the data, so there is locking functionality: while data is in use by someone, others have to wait.

2. iSCSI is used for accessing BLOCKS on a remote disk.
In this case many users cannot share the access, because there is no lock functionality for block-level access.

In this example I have SunFire T2000 directly connected to StorEdge 3510 (with FC cables).
The server is running Solaris 10 update 7.
This server is the iSCSI target and exports a block device (a ZFS volume) to be accessed over the network by another Solaris box (the iSCSI initiator).
Configuring iSCSI Target - no Authentication
You need the following packages installed:

system SUNWiscsir Sun iSCSI Device Driver (root)
system SUNWiscsitgtr Sun iSCSI Target (Root)
system SUNWiscsitgtu Sun iSCSI Target (Usr)
system SUNWiscsiu Sun iSCSI Management Utilities (usr)

Enable the service svc:/system/iscsitgt:default
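
For example, with SMF:

# svcadm enable svc:/system/iscsitgt:default
# svcs iscsitgt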

1. Create a base directory

The base directory is used to store the iSCSI target configuration data and needs to be defined prior to using the iSCSI target for the first time.

# iscsitadm modify admin -d /var/iscsi


2. Configure a backing store

The backing store contains the physical storage that is exported as the iSCSI target.
On Solaris, the backing store can be: a flat file, a physical device, an SVM metadevice, or a ZFS volume.

Let's create ZFS volumes on the StorEdge 3510, one on RAID1 and one on RAID5.
> format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c1t40d0
/pci@7c0/pci@0/pci@1/pci@0,2/SUNW,qlc@1/fp@0,0/ssd@w216000c0ff89cacc,0
1. c1t40d1
/pci@7c0/pci@0/pci@1/pci@0,2/SUNW,qlc@1/fp@0,0/ssd@w216000c0ff89cacc,1
2. c2t0d0
/pci@7c0/pci@0/pci@1/pci@0,2/LSILogic,sas@2/sd@0,0
3. c2t1d0
/pci@7c0/pci@0/pci@1/pci@0,2/LSILogic,sas@2/sd@1,0

> zpool create -f drum-raid5 c1t40d0

> zpool create -f drum-raid1 c1t40d1

> zpool status
pool: drum-raid1
state: ONLINE
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
drum-raid1 ONLINE 0 0 0
c1t40d1 ONLINE 0 0 0

pool: drum-raid5
state: ONLINE
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
drum-raid5 ONLINE 0 0 0
c1t40d0 ONLINE 0 0 0

> zpool list
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
drum-raid1 68G 94K 68.0G 0% ONLINE -
drum-raid5 136G 111K 136G 0% ONLINE -

> zfs create -V 30g drum-raid1/volume-no-CHAP

> zfs list
NAME USED AVAIL REFER MOUNTPOINT
drum-raid1 30.0G 36.9G 18K /drum-raid1
drum-raid1/volume-no-CHAP 30G 66.9G 30K -
drum-raid5 112K 134G 18K /drum-raid5


The ZFS Volume (drum-raid1/volume-no-CHAP) is now created.

3. Create a target

Let's now create the iSCSI target.
> iscsitadm create target
iscsitadm: at least one option required
iscsitadm create target
OPTIONS:
-t, --type
-u, --lun
-z, --size
-a, --alias
-b, --backing-store
For more information, please see iscsitadm(1M)

> iscsitadm create target -b /dev/zvol/dsk/drum-raid1/volume-no-CHAP testors-no-CHAP


If you change your mind and want to remove the iSCSI target, do:
> iscsitadm delete target --lun 0 testors-no-CHAP


4. Verify the target configuration
> iscsitadm list target -v
Target: testors-no-chap
iSCSI Name: iqn.1986-03.com.sun:02:92b0edad-52cd-ca06-93bd-d39a31259a2b.testors-no-chap
Connections: 0
ACL list:
TPGT list:
LUN information:
LUN: 0
GUID: 0
VID: SUN
PID: SOLARIS
Type: disk
Size: 30G
Backing store: /dev/zvol/dsk/drum-raid1/volume-no-CHAP
Status: online


An iSCSI configuration data file is also created:
# ls /etc/iscsi
total 12
drwxr-xr-x 2 root sys 512 Mar 18 15:23 .
drwxr-xr-x 58 root sys 4096 Mar 18 15:37 ..
-rw------- 1 root root 548 Mar 18 15:23 iscsi_v1.dbc


Note about the iSCSI name:

It can be in one of two formats: IQN or EUI.

1. IQN - includes a date, domain, and node identification, like in my example.
2. EUI - has 16 hexadecimal digits and resembles the WWN of an FC node.
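
For example, an EUI-format name would look something like eui.02004567a425678d (an illustrative value), while the IQN format can be seen in the listing above.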

5. Once the iSCSI initiator is configured (done in the next steps), you can list it:
> iscsitadm list initiator
Initiator: cs2
iSCSI Name: iqn.1986-03.com.sun:01:0003ba3559b8.4ba69545
CHAP Name: cs2

Configuring iSCSI Initiator - no Authentication
You need the following packages installed:

system SUNWiscsir Sun iSCSI Device Driver (root)
system SUNWiscsiu Sun iSCSI Management Utilities (usr)

Enable service svc:/network/iscsi_initiator:default
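
Again with SMF:

# svcadm enable svc:/network/iscsi_initiator:default
# svcs iscsi_initiator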

1. Configure a discovery method

Use only static discovery (good for a small number of targets, or to restrict which targets the initiator can access):
# iscsiadm modify discovery
iscsiadm: at least one option required
iscsiadm modify discovery
OPTIONS:
-s, --static
-t, --sendtargets
-i, --iSNS
For more information, please see iscsiadm(1M)

# iscsiadm modify discovery -i disable
# iscsiadm modify discovery -t disable

# iscsiadm list discovery
Discovery:
Static: enabled
Send Targets: disabled
iSNS: disabled

# iscsiadm add static-config iqn.1986-03.com.sun:02:92b0edad-52cd-ca06-93bd-d39a31259a2b.testors-no-chap,192.168.24.35


2. Verify the targets
> iscsiadm list target -vS
Target: iqn.1986-03.com.sun:02:92b0edad-52cd-ca06-93bd-d39a31259a2b.testors-no-chap
Alias: testors-no-chap
TPGT: 1
ISID: 4000002a0000
Connections: 1
CID: 0
IP address (Local): 192.168.20.222:32784
IP address (Peer): 192.168.24.35:3260
Discovery Method: Static
Login Parameters (Negotiated):
Data Sequence In Order: yes
Data PDU In Order: yes
Default Time To Retain: 20
Default Time To Wait: 2
Error Recovery Level: 0
First Burst Length: 65536
Immediate Data: yes
Initial Ready To Transfer (R2T): yes
Max Burst Length: 262144
Max Outstanding R2T: 1
Max Receive Data Segment Length: 8192
Max Connections: 1
Header Digest: NONE
Data Digest: NONE
LUN: 0
Vendor: SUN
Product: SOLARIS
OS Device Name: /dev/rdsk/c4t2d0s2

> iscsiadm list target-param -v
Target: iqn.1986-03.com.sun:02:92b0edad-52cd-ca06-93bd-d39a31259a2b.testors-no-chap
Alias: testors-no-chap
Bi-directional Authentication: disabled
Authentication Type: NONE
Login Parameters (Default/Configured):
Data Sequence In Order: yes/-
Data PDU In Order: yes/-
Default Time To Retain: 20/-
Default Time To Wait: 2/-
Error Recovery Level: 0/-
First Burst Length: 65536/-
Immediate Data: yes/-
Initial Ready To Transfer (R2T): yes/-
Max Burst Length: 262144/-
Max Outstanding R2T: 1/-
Max Receive Data Segment Length: 8192/-
Max Connections: 1/-
Header Digest: NONE/-
Data Digest: NONE/-
Configured Sessions: 1

> iscsiadm list initiator-node
Initiator node name: iqn.1986-03.com.sun:01:0003ba3559b8.4ba6c502
Initiator node alias: -
Login Parameters (Default/Configured):
Header Digest: NONE/-
Data Digest: NONE/-
Authentication Type: NONE
RADIUS Server: NONE
RADIUS access: unknown
Configured Sessions: 1


3. Initialize and use the new targets

The target is visible; now the device entries need to be created with the command:
> devfsadm -Cv -i iscsi


> format
AVAILABLE DISK SELECTIONS:
0. c1t0d0
/pci@1f,0/pci@1/scsi@8/sd@0,0
1. c1t1d0
/pci@1f,0/pci@1/scsi@8/sd@1,0
2. c4t2d0
/iscsi/disk@0000iqn.1986-03.com.sun%3A02%3A92b0edad-52cd-ca06-93bd-d39a31259a2b.testors-no-chapFFFF,0

4. Create file systems
> zpool create iscsi-no-chap c4t2d0

> zfs list
NAME USED AVAIL REFER MOUNTPOINT
iscsi-no-chap 89.5K 29.3G 1K /iscsi-no-chap

A. In case you want to remove an already configured iSCSI target

Find your targets.
> iscsiadm list target
Target: iqn.1986-03.com.sun:02:92b0edad-52cd-ca06-93bd-d39a31259a2b.testors-no-chap
Alias: testors-no-chap
TPGT: 1
ISID: 4000002a0000
Connections: 1


B. Remove the target
> iscsiadm remove static-config iqn.1986-03.com.sun:02:92b0edad-52cd-ca06-93bd-d39a31259a2b.testors-no-chap

The iSCSI authentication
iSCSI authentication may be required, since the iSCSI target cannot determine whether a connection request comes from a valid host.
For example, someone else can connect to an already connected (in use) block device, type 'newfs', and you are screwed.

The target authenticates an initiator by using the Challenge-Handshake Authentication Protocol (CHAP).

Authentication can be:

1. Unidirectional: only the target identifies the initiator.
2. Bidirectional: the initiator also identifies the target.

Let's create a new volume for testing an iSCSI setup with bidirectional authentication:

> zfs create -V 40g drum-raid5/volume-yes-CHAP
> zfs list
NAME USED AVAIL REFER MOUNTPOINT
drum-raid5 40.0G 93.9G 18K /drum-raid5
drum-raid5/volume-yes-CHAP 40G 134G 16K -
An iSCSI initiator configuration for Bidirectional authentication
1. Set a secret key (between 12 and 16 characters)
> iscsiadm modify initiator-node --CHAP-secret
Enter secret:
Re-enter secret:


2. Set the iSCSI initiator CHAP name (let's make it the same as the hostname of the iSCSI initiator)
> iscsiadm modify initiator-node --CHAP-name counterstrike2


3. Tell iSCSI initiator to use CHAP
> iscsiadm modify initiator-node --authentication CHAP

> iscsiadm list initiator-node
Initiator node name: iqn.1986-03.com.sun:01:0003ba3559b8.4ba6c502
Initiator node alias: -
Login Parameters (Default/Configured):
Header Digest: NONE/-
Data Digest: NONE/-
Authentication Type: CHAP
CHAP Name: counterstrike2
RADIUS Server: NONE
RADIUS access: unknown
Configured Sessions: 1


Just as a reminder, here is how to add the iSCSI target in case it's not already added.
> iscsiadm add static-config iqn.1986-03.com.sun:02:504a7bec-02bb-603a-cb19-ea22f593a799.testors-yes-chap,192.168.24.35


4. We want to enable bidirectional authentication for the iSCSI target.
> iscsiadm modify target-param --bi-directional-authentication enable iqn.1986-03.com.sun:02:504a7bec-02bb-603a-cb19-ea22f593a799.testors-yes-chap

> iscsiadm list target-param -v
Target: iqn.1986-03.com.sun:02:504a7bec-02bb-603a-cb19-ea22f593a799.testors-yes-chap
Alias: testors-yes-chap
Bi-directional Authentication: enabled
Authentication Type: NONE
Login Parameters (Default/Configured):
Data Sequence In Order: yes/-
Data PDU In Order: yes/-
Default Time To Retain: 20/-
Default Time To Wait: 2/-
Error Recovery Level: 0/-
First Burst Length: 65536/-
Immediate Data: yes/-
Initial Ready To Transfer (R2T): yes/-
Max Burst Length: 262144/-
Max Outstanding R2T: 1/-
Max Receive Data Segment Length: 8192/-
Max Connections: 1/-
Header Digest: NONE/-
Data Digest: NONE/-
Configured Sessions: 1


5. And set up the authentication method, which is CHAP.
> iscsiadm modify target-param --authentication CHAP iqn.1986-03.com.sun:02:504a7bec-02bb-603a-cb19-ea22f593a799.testors-yes-chap


6. The iSCSI target must also know the secret key we already set up on the iSCSI initiator in step 1.
> iscsiadm modify target-param --CHAP-secret iqn.1986-03.com.sun:02:504a7bec-02bb-603a-cb19-ea22f593a799.testors-yes-chap
Enter secret:
Re-enter secret:


7. The final bidirectional config is below.
> iscsiadm list target-param -v
Target: iqn.1986-03.com.sun:02:504a7bec-02bb-603a-cb19-ea22f593a799.testors-yes-chap
Alias: testors-yes-chap
Bi-directional Authentication: enabled
Authentication Type: CHAP
CHAP Name: iqn.1986-03.com.sun:02:504a7bec-02bb-603a-cb19-ea22f593a799.testors-yes-chap <-- see this
Login Parameters (Default/Configured):
Data Sequence In Order: yes/-
Data PDU In Order: yes/-
Default Time To Retain: 20/-
Default Time To Wait: 2/-
Error Recovery Level: 0/-
First Burst Length: 65536/-
Immediate Data: yes/-
Initial Ready To Transfer (R2T): yes/-
Max Burst Length: 262144/-
Max Outstanding R2T: 1/-
Max Receive Data Segment Length: 8192/-
Max Connections: 1/-
Header Digest: NONE/-
Data Digest: NONE/-
Configured Sessions: 1

The iSCSI target configuration for Bidirectional authentication
Quick reminder on how to create the iSCSI target:
> iscsitadm create target -b /dev/zvol/dsk/drum-raid5/volume-yes-CHAP testors-yes-CHAP

> iscsitadm list target
Target: testors-yes-chap
iSCSI Name: iqn.1986-03.com.sun:02:504a7bec-02bb-603a-cb19-ea22f593a799.testors-yes-chap
Connections: 0

> iscsitadm list target -v
Target: testors-yes-chap
iSCSI Name: iqn.1986-03.com.sun:02:504a7bec-02bb-603a-cb19-ea22f593a799.testors-yes-chap
Connections: 0
ACL list:
TPGT list:
LUN information:
LUN: 0
GUID: 0
VID: SUN
PID: SOLARIS
Type: disk
Size: 40G
Backing store: /dev/zvol/dsk/drum-raid5/volume-yes-CHAP
Status: online

1. Set iSCSI target CHAP name as its own hostname
> iscsitadm modify admin --chap-name testors


2. Set the secret (same secret as for iSCSI initiator)
> iscsitadm modify admin --chap-secret
Enter secret:
Re-enter secret:


3. Create an initiator object (this will be associated with one or more targets)
> iscsitadm create initiator --iqn iqn.1986-03.com.sun:01:0003ba3559b8.4ba69545 cs2

> iscsitadm list initiator -v
Initiator: cs2
iSCSI Name: iqn.1986-03.com.sun:01:0003ba3559b8.4ba69545
CHAP Name: Not set
CHAP Secret: Not set


4. Create CHAP name
> iscsitadm modify initiator --chap-name cs2 cs2

> iscsitadm list initiator -v
Initiator: cs2
iSCSI Name: iqn.1986-03.com.sun:01:0003ba3559b8.4ba69545
CHAP Name: cs2 <-- see this
CHAP Secret: Not set

5. Set the CHAP secret (same secret as for the iSCSI initiator)
> iscsitadm modify initiator --chap-secret cs2
Enter secret:
Re-enter secret:


On iSCSI initiator:

> format
Searching for disks...done
c4t4d0: configured with capacity of 23.91GB
AVAILABLE DISK SELECTIONS:
0. c1t0d0
/pci@1f,0/pci@1/scsi@8/sd@0,0
1. c1t1d0
/pci@1f,0/pci@1/scsi@8/sd@1,0
2. c4t4d0
/iscsi/disk@0000iqn.1986-03.com.sun%3A02%3A504a7bec-02bb-603a-cb19-ea22f593a799.testors-yes-chapFFFF,0

Sun Fire T2000 firmware upgrade

At the time of writing this article, the latest firmware for the T2000 is version 6.6.7.

I am doing this exercise since my current firmware (or hypervisor) doesn't support Logical Domains (LDoms).

Basically, the hypervisor is software sitting right above the hardware and virtualizing it, so the machine can host multiple LDoms (each LDom looks like a separate piece of hardware).

So I downloaded patch 139434-01 (from the Sun site); once you extract it, there are README files that tell you everything.

To determine the current firmware version, I go to the ALOM System Controller.
sc> showhost
Sun-Fire-T2000 System Firmware 6.3.1  2006/12/04 09:35

Host flash versions:
   Hypervisor 1.3.0 2006/11/10 06:35
   OBP 4.25.0 2006/11/07 23:24
   POST 4.25.0 2006/11/08 00:08

sc> showsc version -v
Advanced Lights Out Manager CMT v1.3.1
SC Firmware version: CMT 1.3.1
SC Bootmon version: CMT 1.3.1

VBSC 1.3.0
VBSC firmware built Nov 10 2006, 06:38:31

SC Bootmon Build Release: 01
SC bootmon checksum: 6C563271
SC Bootmon built Dec  4 2006, 09:29:14

SC Build Release: 01
SC firmware checksum: D2D367DA

SC firmware built Dec  4 2006, 09:29:21
SC firmware flashupdate SUN APR 22 17:48:32 2007

SC System Memory Size: 32 MB
SC NVRAM Version = 12
SC hardware type: 4

FPGA Version: 4.2.4.7 
The utility sysfwdownload is used for this exercise. The requirement is that the current firmware is above version 6.1.10, so we are good to go.

For example, go to the directory /tmp/images/. From the bunch of files extracted from the patch, copy two of them to /tmp/images/: Sun_System_Firmware-6_7_0-Sun_Fire_T2000.bin and sysfwdownload. The firmware image is the file Sun_System_Firmware-6_7_0-Sun_Fire_T2000.bin.

If needed, add executable permission to the tool:

# chmod u+x sysfwdownload

The sysfwdownload tool downloads this image to the ALOM System Controller. It takes 10-15 minutes for the download to complete.
# /tmp/images/sysfwdownload Sun_System_Firmware-6_7_0-Sun_Fire_T2000.bin
.......... (9%).......... (18%).......... (27%).......... (37%).......... 
(46%).......... (55%).......... (64%).......... (74%).......... 
(83%).......... (92%)......... (100%)
Download completed successfully.
Power off the system and go to ALOM SC.
sc> showplatform
SUNW,Sun-Fire-T2000
Chassis Serial Number: 0716NNN06S

Domain Status
------ ------
S0     OS Standby
Make sure that your virtual keyswitch setting is not in the LOCKED position.
sc> showkeyswitch
Keyswitch is in the NORMAL position.
And finally, run the firmware upgrade.
sc> flashupdate -s 127.0.0.1

SC Alert: System poweron is disabled.
............. will see many dots .............

Update complete. Reset device to use new software.
SC Alert: SC firmware was reloaded
A reset of the ALOM SC is needed as the last step of this upgrade.
sc> resetsc
Are you sure you want to reset the SC [y/n]?  y
User Requested SC Shutdown
Let’s see what version we have now:
sc> showhost
Sun-Fire-T2000 System Firmware 6.7.0  2008/12/11 14:54

Host flash versions:
   OBP 4.30.0 2008/12/11 12:15
   Hypervisor 1.7.0 2008/12/11 13:43
   POST 4.30.0 2008/12/11 12:41

sc> showsc version -v
Advanced Lights Out Manager CMT v1.7
SC Firmware version: CMT 1.7.0
SC Bootmon version: CMT 1.7.0

VBSC 1.7.0
VBSC firmware built Dec 11 2008, 13:51:17

SC Bootmon Build Release: 01
SC bootmon checksum: 89A76C05
SC Bootmon built Dec 11 2008, 14:01:32

SC Build Release: 01
SC firmware checksum: 97A9A75C

SC firmware built Dec 11 2008, 14:01:47
SC firmware flashupdate FRI NOV 20 20:55:50 2009

SC System Memory Size: 32 MB
SC NVRAM Version = 14
SC hardware type: 4

FPGA Version: 4.2.4.7

Console server

Working on the console is something that we all need to do sometimes.
And this is not always fun, especially if the server room is cold (and there is no chair to sit on).

So connecting to a serial port from your desk or home really makes your life much easier.

Basically, it's always a good idea for each Sun server to have three connections:

1. Data connection (one or more)
2. Management connection {ALOM (SPARC) or ILOM (x86)} for out-of-band access and management – put this on a separate subnet.
3. Console connection (on the serial port), also for out-of-band access.

I have been working with Cyclades console server, series TS-2000 and TS-3000.

These appliances support multiple users accessing the same console simultaneously (only one person can work and has write access, while the rest are in read-only mode), which is great since someone can monitor what you are doing and provide help.

The software I use is CSWconserver (a Solaris package easily downloaded from blastwave.org). It also allows you to log serial traffic.

Client program

Let's first say something about the client program. This is the command console.

It reads the system-wide configuration file console.cf (if needed, there is also a per-user config file, $HOME/.consolerc).

The console.cf file on console server can look like:
config * {
master server-name.company.com;
}

Basically, the console client knows the primary conserver host and connects to it.
If there are more servers, the primary one can refer the client to another one that is responsible for the specific console.

Some commands to introduce you to the console command.

Connect to each console server and show version information.
# console -r
192.168.etc.etc: version `conserver.com version 8.1.11'
server-1: version `conserver.com version 8.1.11'
server-1: version `conserver.com version 8.1.11'

Connect only to primary server.
# console -R
version `conserver.com version 8.1.11'

Show who is currently using a console (good to know if you want to reset a machine).
# /opt/csw/etc> console -w
root@hostname-0.domain.ca attach 40days server-1
username@hostname-1.domain.ca attach 21days server-2

Show the PID of the master daemon on all servers.
# console -P
192.168.etc.etc: 569
server-1: 846

Show list of all consoles with status and attached users.
# console -u
machine-1 up
machine-2 up
machine-3 up root@server.company.com

Show a list of consoles and devices.
# console -x
machine-1 on cyclades-1/7009 at Netwk
machine-2 on cyclades-2/7008 at Netwk
machine-3 on cyclades-3/7020 at Netwk
machine-4 on cyclades-4/7022 at Netwk

Exiting and manipulating a console connection is done with Ctrl-E c, followed by a command character. Some of the most used ones are listed here (an example session follows the list):
. disconnect

; select another console

l0 send break signal

? display list of commands

z suspend connection

f force to connect with write mode (push other connected users to spy/read mode)

b send broadcast message to all users on this console
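
Putting it together, a typical session (using machine-1 from the listing above) might look like this:

# console machine-1
[Enter `^Ec?' for help]

hostname console login:
... do your work, then press Ctrl-E c followed by . to disconnect ...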


Daemon program

Conserver is the daemon that talks with the console client and reads the file conserver.cf.
# pgrep -l cons
569 conserver
571 conserver
25508 console
26725 conserver

Conserver categorizes consoles into 2 groups:

1. consoles to actively manage
2. consoles to know about and reference client to other servers

If "master" value in configuration file points to local machine, conserver will manage consoles.

How it works, in short:

1. Conserver creates a child process for each group of consoles it has to manage and assigns it a port number (PIDs 571 and 26725 in the example above).
2. The client program "console" talks to the master conserver process (PID 569 above, as console -P confirms) and is told which port to connect to.

The file conserver.cf can look like:
default * { logfile /logs/console/&; rw *; }
default cyclade { type host; host cyclade; master con-server; portbase 7000; portinc 1; }

access * {
trusted 127.0.0.1;
trusted con-server-1;
trusted con-server-2;
}
console host-1 { include cyclade; port 1; }
console host-2 { include cyclade; port 2; }
console host-3 { include cyclade; port 3; }
console host-4 { include cyclade; port 4; }

Tips and explanation:

1. Form of this file is basically BLOCK_TYPE NAME { keyword value; .. }

2. Block "default" with name * defines logfile directory /logs/console and & is replaced with console name. Also everyone has read/write access.

3. Block "default" with name cyclade defines console type "host" for TCP connection, "host" is hostname of Cyclades appliance,
"master" is server that manages Cyclades, "portbase" is base value for port calculation formula,
"portinc" is increment value for port calculation.

4. Block "access" with name * is for all conserver hosts. Trusted host can connect without user authentication.

5. Block "console" with name of console use "include" to include previously defined block "cyclade".
But for each console also defines port number (formula for final port is final_port = portbase + portinc x port).
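
For example, with the file above, host-2 gets final_port = 7000 + 1 x 2 = 7002 on the Cyclades appliance.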

Jumpstart server setup - for SPARC client

The Jumpstart installation method is a command-line-driven way to automatically install the OS on remote systems.

It uses:
1. a profile text file (defines the software to be installed)
2. a rules text file (defines rules/steps for the install)
3. scripts for pre- and post-install tasks
4. sysidcfg (system configuration information)

Note: Sun provides a 'check' script that you run to verify the rules file; if everything is okay, the script will generate the rules.ok file.

Basically, this works like:

1. Jumpstart reads the rules.ok file
2. Jumpstart finds the profile in rules.ok and uses it for the installation
3. If no rule matches, a regular interactive installation occurs.

Let's do some exercises and examples in creating a Jumpstart server.

One machine will be used as the Boot, Install, and Profile server.

NOTE: this doc assumes the client is a SPARC, not an x86 machine!

Jumpstart Directory
First you need a Jumpstart directory that is accessible over the network.

Say it is /jumpstart

Share it (add this line to the /etc/dfs/dfstab file):
share -F nfs -o ro,anon=0 -d "Jumpstart_Share" /jumpstart
Run command:
# shareall

Installing Jumpstart Server
Mount the Solaris distribution (DVD or NFS location) and run the command:
# /mnt/Solaris_10/Tools> ./setup_install_server /jumpstart/sparc
Verifying target directory...
Calculating the required disk space for the Solaris_10 product
Calculating space required for the installation boot image
Copying the CD image to disk...
Copying Install Boot Image hierarchy...
Copying /boot netboot hierarchy...
Install Server setup complete


Bootparams file
A SPARC client uses the /etc/bootparams file (located on the Jumpstart server) to boot.

This file is populated with clients by the script add_install_client (located in the same directory as the setup_install_server script).

Example of how the file looks:
client_hostname \
root=servername:/jumpstart/sparc/Solaris_10/Tools/Boot \
install=servername:/jumpstart/sparc \
boottype=:in \
sysid_config=servername:/jumpstart/config/client_hostname \
install_config=servername:/jumpstart/config/client_hostname \
rootopts=:rsize=8192


Rules file
Example of the file – use && (logical AND) to join keywords/values in same rule:
# rules keyword & values begin script profile finish script
#-----------------------------------------------------------------
hostname unixlab && arch sparc - profile finish_script

And verify the rules file:
/jumpstart/config/client_hostname # ./check
Validating rules...
Validating profile profile...
The custom JumpStart configuration is ok.

Note: check the Sun docs for other rule keywords/values.

Begin (Bourne shell) script
Here you place actions to be performed before the Solaris software is installed.

After installation, logs will be in /var/sadm/system/logs/begin.log

Profile file
This file defines how to install the software. Check the Sun docs for the many profile keywords/values.

Example of profile with comments:
# keyword value
# ============================

# Keyword install_type is MANDATORY
install_type initial_install

# If system_type is omitted, standalone will be used
system_type standalone

# A cluster is a collection of packages that makes a functional unit.
# A meta-cluster is a collection of clusters and packages that creates a configuration.
# They are listed in /mnt/Solaris_10/Product/.clustertoc file.
# If add/delete is not specified, ‘add’ is used
cluster SUNWCreq
cluster SUNWCdhcp add
cluster SUNWC-name add nfs server:/servername/path

# partitioning - defines how the disk is divided into slices
# Must be combined with the keyword 'filesys'
partitioning explicit
filesys rootdisk.s0 4096 /
filesys rootdisk.s1 4096 swap
filesys rootdisk.s3 4096 /var
filesys rootdisk.s4 4096 /usr
filesys rootdisk.s5 free /backup
filesys rootdisk.s7 256

# Creating SVM Mirror and State Database Replica
#filesys mirror:d10 c1t0d0s0 c1t1d0s0 4096 /
#filesys mirror c1t0d0s1 c1t1d0s1 4096 swap
#filesys mirror:d30 c1t0d0s3 c1t1d0s3 4096 /usr
#filesys mirror:d40 c1t0d0s4 c1t1d0s4 4096 /var
#filesys mirror:d50 c1t0d0s5 c1t1d0s5 free /backup
#metadb c1t0d0s7 count 3
#metadb c1t1d0s7 count 3


Finish (Bourne shell) script
This performs actions after the Solaris software is installed but before the system reboots.
It's ideal for installing third-party software, setting root's password, etc. (a tiny sketch follows below).

After installation, logs will be in /var/sadm/system/logs/finish.log

Note that the file system remains mounted on /a until the system reboots.
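
A minimal finish script sketch (hypothetical content; SI_CONFIG_DIR is the Jumpstart-provided variable that points at the mounted config directory):

#!/bin/sh
# Runs on the client after Solaris is laid down, before the reboot.
# The new root file system is still mounted on /a.
echo 'set noexec_user_stack=1' >> /a/etc/system
# Hypothetical file drop from the config directory:
cp ${SI_CONFIG_DIR}/files/motd /a/etc/motd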

Required services on the Jumpstart server
Make sure that the three necessary services are running on the Jumpstart server:

svc:/network/rarp:default
svc:/network/rpc/bootparams:default
svc:/network/tftp/udp6:default
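
If any of them is disabled, enable it the usual SMF way:

# svcadm enable svc:/network/rarp:default
# svcadm enable svc:/network/rpc/bootparams:default
# svcadm enable svc:/network/tftp/udp6:default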

Note:
The server must be on the same subnet as the client, since the client's bootparams request packet has TTL=1 (it can't cross a gateway).
Also, RARP doesn't transmit network/router information.

SPARC client installation
See the link on how to add a SPARC client to the Jumpstart server.

Then go to the OpenBoot (ok) prompt and boot from network:

ok boot net -v - install

- more info to be added later?!

Adding SPARC Jumpstart client

Quick intro:

Jumpstart is a Solaris tool for OS installation over the LAN.

The installation is faster than using a DVD, and you can do consistent installations for many clients (same system configuration and third-party software).
It is also unattended.

Basically, there are two processes here: network booting and installation.

But here, I'll just talk about how to add a Jumpstart SPARC client.

I assume the server is set up properly
(I will talk about this in another document, but I may also mention some steps here; no harm if I repeat stuff more than once).

What I do first is (on Jumpstart server):

1.) Add MAC address and host name to /etc/ethers file

(example: 0:14:4f:f:11:28 citrus)

2.) Add IP and host name in the /etc/hosts file

(example: 192.168.etc.etc citrus)

The RARP protocol will be used for mapping between IP and MAC addresses and for booting the client.

3.) Export the jumpstart directory (I assume this was done before, unless this is the first client you are adding):

Say that your jumpstart directory is /export/jumpstart/.
Yes, you need to share this over NFS, read-only, with root access for the anonymous user.

Have this entry in the /etc/dfs/dfstab file:

share -F nfs -o ro,anon=0 -d "Jumpstart_Share update 5" /export/jumpstart

# shareall

# showmount -e
export list for jumpstartserver: 
/export/jumpstart (everyone)
4.) Finally add SPARC client:

Go to the directory below and add the client – you can see an example of my directory structure.
Run the command without arguments to see its usage.
# /export/jumpstart/distrib/sparc/5.10/Solaris_10/Tools> ./add_install_client
ERROR: Either client name or client platform group is not specified.

Usage: ./add_install_client [-i ipaddr] [-e ethernetid] [-s server:path]
[-c server:path] [-p server:path]
[-n [name_server]:name_service[(netmask)]]
[-t install boot image path] client_name platform_group

DHCP clients:
./add_install_client -d [-s server:path] [-c server:path]
[-p server:path] [-t install boot image path]
[-f boot file name] platform_name platform_group

./add_install_client -d -e ethernetid [-s server:path]
[-b "property=value"] (i86pc platform only)
[-c server:path] [-p server:path]
[-t install boot image path] [-f boot file name]
platform_group

# ./add_install_client -i 192.168.etc.etc -e 0:14:4f:f:11:28 -s servername:/export/jumpstart/distrib/sparc/5.10 citrus sun4v
You may see some cleaning, removing, updating messages.
Note: in this example I use Sun Fire T2000, so platform group is sun4v.

I like doing some manual work, but you can avoid that by using more options. Let me explain.

As you can see, I use only:
-s = path to the installation media on the boot server.

This command will add the entry to the /etc/bootparams file. Then I manually add two lines:
install_config=servername:/export/jumpstart/config/citrus
sysid_config=servername:/export/jumpstart/config/citrus

You can avoid adding lines to the /etc/bootparams file if you use the options below when adding a client (the full command is shown after the list).

-c servername:/export/jumpstart/config/citrus (path to the client's configuration directory)
-p servername:/export/jumpstart/config/citrus (path to the client's sysidcfg file)
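
So the all-in-one version of the earlier command would be (same paths as above, just with -c and -p added):

# ./add_install_client -i 192.168.etc.etc -e 0:14:4f:f:11:28 -s servername:/export/jumpstart/distrib/sparc/5.10 -c servername:/export/jumpstart/config/citrus -p servername:/export/jumpstart/config/citrus citrus sun4v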

Here is example of /etc/bootparams file:
citrus \
root=servername:/export/jumpstart/distrib/sparc/5.10u7/Solaris_10/Tools/Boot \
install=servername:/export/jumpstart/distrib/sparc/5.10 \
boottype=:in \
sysid_config=servername:/export/jumpstart/config/citrus \
install_config=servername:/export/jumpstart/config/citrus \
rootopts=:rsize=8192

Note: the /etc/bootparams file can have more than one client. Instead of the server name you can have its IP.
The client uses this file for booting.

5.) The directory /tftpboot has a small net kernel for booting, and you should have similar files for each client that has to boot:
rm.192.168.28.202
C0A81CCA -> inetboot.SUN4U.Solaris_10-1
C0A81CCA.SUN4U -> inetboot.SUN4U.Solaris_10-1
inetboot.SUN4U.Solaris_10-1

6.) Also make sure that the three necessary services are running on the Jumpstart server.

svc:/network/rarp:default
svc:/network/rpc/bootparams:default
svc:/network/tftp/udp6:default

And you are done on the server side!

7.) On the SPARC client side:

You want to start the network installation from the OpenBoot {ok} prompt.

{0} ok boot net -v - install

SC Alert: Host System has Reset
-

Sun Fire T200, No Keyboard
Copyright 2008 Sun Microsystems, Inc. All rights reserved.
OpenBoot 4.30.0, 8064 MB memory available, Serial #68096296.
Ethernet address 0:14:4f:f:11:28, Host ID: 840f1128.

Boot device: /pci@780/pci@0/pci@1/network@0 File and args: -v - install
1000 Mbps full duplex Link up
Requesting Internet Address for 0:14:4f:f:11:28 client MAC
Requesting Internet Address for 0:14:4f:f:11:28
1000 Mbps full duplex Link up
Using RARP/BOOTPARAMS...
Internet address is: 192.168.etc.etc Client IP
hostname: client-hostname client hostname
domainname: company.com domain
Found 192.168.24.123 @ 0:14:4f:6a:c0:c4 found the jumpstart server and its MAC
root server: jumpstart-server (192.168.etc.etc)
root directory: /export/jumpstart/distrib/sparc/5.10/Solaris_10/Tools/Boot Boot directory
module /platform/sun4v/kernel/sparcv9/unix: text at [0x1000000, 0x10b373d] data at 0x1800000
module /platform/sun4v/kernel/sparcv9/genunix: text at [0x10b3740, 0x127420f] data at 0x1899a00
module /platform/SUNW,Sun-Fire-T200/kernel/misc/sparcv9/platmod: text at [0x1274210, 0x12743f7] data at 0x18efd88
module /platform/sun4v/kernel/cpu/sparcv9/SUNW,UltraSPARC-T1: text at [0x1274400, 0x1278247] data at 0x18f0580
SunOS Release 5.10 Version Generic_139555-08 64-bit
Copyright 1983-2009 Sun Microsystems, Inc. All rights reserved.
Use is subject to license terms.
Ethernet address = 0:14:4f:f:11:28
Using default device instance data
mem = 8257536K (0x1f8000000)
avail mem = 8009105408
root nexus = Sun Fire T200
etc, etc

Mirroring the operating system in Solaris

In the steps below, I'm using DiskSuite to mirror the active root disk (c0t0d0) to a mirror (c0t1d0). I'm assuming that partitions five and six of each disk have a couple of cylinders free for DiskSuite's state database replicas.

Introduction

First, we start with a filesystem layout that looks as follows:

Filesystem kbytes used avail capacity Mounted on
/dev/dsk/c0t0d0s0 6607349 826881 5714395 13% /
/proc 0 0 0 0% /proc
fd 0 0 0 0% /dev/fd
mnttab 0 0 0 0% /etc/mnttab
/dev/dsk/c0t0d0s4 1016863 8106 947746 1% /var
swap 1443064 8 1443056 1% /var/run
swap 1443080 24 1443056 1% /tmp

We're going to be mirroring from c0t0d0 to c0t1d0. When the operating system was installed, we created unassigned slices five, six, and seven of roughly 10 MB each. We will use slices five and six for the DiskSuite state database replicas. The output from the "format" command is as follows:

# format
Searching for disks...done


AVAILABLE DISK SELECTIONS:
0. c0t0d0
/pci@1f,4000/scsi@3/sd@0,0
1. c0t1d0
/pci@1f,4000/scsi@3/sd@1,0
Specify disk (enter its number): 0

selecting c0t0d0
[disk formatted]
...
partition> p
Current partition table (original):
Total disk cylinders available: 5266 + 2 (reserved cylinders)

Part Tag Flag Cylinders Size Blocks
0 root wm 0 - 3994 6.40GB (3995/0/0) 13423200
1 swap wu 3995 - 4619 1.00GB (625/0/0) 2100000
2 backup wm 0 - 5265 8.44GB (5266/0/0) 17693760
3 unassigned wu 0 0 (0/0/0) 0
4 var wm 4620 - 5244 1.00GB (625/0/0) 2100000
5 unassigned wm 5245 - 5251 11.48MB (7/0/0) 23520
6 unassigned wm 5252 - 5258 11.48MB (7/0/0) 23520
7 unassigned wm 5259 - 5265 11.48MB (7/0/0) 23520

DiskSuite Mirroring

Note that much of the process of mirroring the root disk has been automated with the sdsinstall script. With the exception of the creation of device aliases, all of the work done in the following steps can be achieved via the following:

# ./sdsinstall -p c0t0d0 -s c0t1d0 -m s5 -m s6

1. Ensure that the partition tables of both disks are identical:

# prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t1d0s2

2. Add the state database replicas. For redundancy, each disk has two
state database replicas.

# metadb -a -f c0t0d0s5
# metadb -a c0t0d0s6
# metadb -a c0t1d0s5
# metadb -a c0t1d0s6

Note that there appears to be a lot of confusion regarding the recommended number and location of state database replicas. According to the DiskSuite reference manual:

State database replicas contain configuration and status information for all metadevices and hot spares. Multiple copies (replicas) are maintained to provide redundancy. Multiple copies also prevent the database from being corrupted during a system crash (at most, only one copy of the database will be corrupted).

State database replicas are also used for mirror resync regions. Too few state database replicas relative to the number of mirrors may cause replica I/O to impact mirror performance.

At least three replicas are recommended. DiskSuite allows a maximum of 50 replicas. The following guidelines are recommended:

• For a system with only a single drive: put all 3 replicas in one slice.
• For a system with two to four drives: put two replicas on each drive.
• For a system with five or more drives: put one replica on each drive.

In general, it is best to distribute state database replicas across slices, drives, and controllers, to avoid single points-of-failure.
Each state database replica occupies 517 KB (1034 disk sectors) of disk storage by default. Replicas can be stored on: a dedicated disk partition, a partition which will be part of a metadevice, or a partition which will be part of a logging device.

Note - Replicas cannot be stored on the root (/), swap, or /usr slices, or on slices containing existing file systems or data.

Starting with DiskSuite 4.2.1, an optional /etc/system parameter exists which allows DiskSuite to boot with just 50% of the state database replicas online. For example, if one of the two boot disks were to fail, just two of the four state database replicas would be available. Without this /etc/system parameter (or with older versions of DiskSuite), the system would complain of "insufficient state database replicas", and manual intervention would be required on bootup. To enable the "50% boot" behaviour with DiskSuite 4.2.1, execute the following command:

# echo "set md:mirrored_root_flag=1" >> /etc/system

3. Define the metadevices on c0t0d0 (/):

# metainit -f d10 1 1 c0t0d0s0
# metainit -f d20 1 1 c0t1d0s0
# metainit d0 -m d10

The metaroot command edits the /etc/vfstab and /etc/system files:

# metaroot d0

Define the metadevices for c0t0d0s1 (swap):

# metainit -f d11 1 1 c0t0d0s1
# metainit -f d21 1 1 c0t1d0s1
# metainit d1 -m d11

Define the metadevices for c0t0d0s4 (/var):

# metainit -f d14 1 1 c0t0d0s4
# metainit -f d24 1 1 c0t1d0s4
# metainit d4 -m d14

4. Edit /etc/vfstab so that it references the DiskSuite metadevices instead of simple slices:

#device device mount FS fsck mount mount
#to mount to fsck point type pass at boot options
#
fd - /dev/fd fd - no -
/proc - /proc proc - no -
/dev/md/dsk/d1 - - swap - no -
/dev/md/dsk/d0 /dev/md/rdsk/d0 / ufs 1 no logging
/dev/md/dsk/d4 /dev/md/rdsk/d4 /var ufs 1 no logging
swap - /tmp tmpfs - yes -

5. Reboot the system:

# lockfs -fa

# sync;sync;sync;init 6

6. After the system reboots from the metadevices for /, /var, and swap, set up mirrors:

# metattach d0 d20
# metattach d1 d21
# metattach d4 d24

The process of synchronizing the data to the mirror disk will take a
while. You can monitor its progress via the command:

# metastat | grep -i progress

7. Capture the DiskSuite configuration in the text file md.tab. With Solaris 2.6 and Solaris 7, this text file resides in the directory /etc/opt/SUNWmd; however, more recent versions of Solaris place the file in the /etc/lvm directory. We'll assume that we're running Solaris 8 here:

# metastat -p | tee /etc/lvm/md.tab

8. In order for the system to be able to dump core in the event of a panic, the dump device needs to reference the DiskSuite metadevice:

# dumpadm -d /dev/md/dsk/d1

9. If the primary boot disk should fail, make it easy to boot from the mirror. Some sites choose to alter the OBP "boot-device" variable; in this case, we choose to simply define the device aliases "sds-root" and "sds-mirror". In the event that the primary boot device ("disk" or "sds-root") should fail, the administrator simply needs to type "boot sds-mirror" at the OBP prompt.

Determine the device path to the boot devices for both the primary and mirror:

# ls -l /dev/dsk/c0t0d0s0 /dev/dsk/c0t1d0s0

lrwxrwxrwx 1 root root 41 Oct 17 11:48 /dev/dsk/c0t0d0s0 -> ../..
/devices/pci@1f,4000/scsi@3/sd@0,0:a
lrwxrwxrwx 1 root root 41 Oct 17 11:48 /dev/dsk/c0t1d0s0 -> ../..
/devices/pci@1f,4000/scsi@3/sd@1,0:a

Use the device paths to define the sds-root and sds-mirror device aliases (note that we use the label "disk" instead of "sd" in the device alias path):

# eeprom "nvramrc=devalias sds-root /pci@1f,4000/scsi@3/disk@0,0
devalias sds-mirror /pci@1f,4000/scsi@3/disk@1,0"

# eeprom "use-nvramrc?=true"

Test the process of booting from either sds-root or sds-mirror.
Once the above sequence of steps has been completed, the system will look as follows:

# metadb
flags first blk block count
a m p luo 16 1034 /dev/dsk/c0t0d0s5
a p luo 16 1034 /dev/dsk/c0t0d0s6
a p luo 16 1034 /dev/dsk/c0t1d0s5
a p luo 16 1034 /dev/dsk/c0t1d0s6

# df -k
Filesystem kbytes used avail capacity Mounted on
/dev/md/dsk/d0 6607349 845208 5696068 13% /
/proc 0 0 0 0% /proc
fd 0 0 0 0% /dev/fd
mnttab 0 0 0 0% /etc/mnttab
/dev/md/dsk/d4 1016863 8414 947438 1% /var
swap 1443840 8 1443832 1% /var/run
swap 1443848 16 1443832 1% /tmp

Trans metadevices for logging

UFS filesystem logging was first supported with Solaris 7. Prior to that release, one could create trans metadevices with DiskSuite to achieve the same effect. For Solaris 7 and up, it's much easier to simply enable ufs logging by adding the word "logging" to the last field of the /etc/vfstab file. The following section is included for those increasingly rare Solaris 2.6 installations.

The following two steps assume that you have an available (<=64 MB) slice 3 on each disk for logging.

1. Define the trans metadevice mirror (c0t0d0s3):

# metainit d13 1 1 c0t0d0s3
# metainit d23 1 1 c0t1d0s3
# metainit d3 -m d13
# metattach d3 d23

2. Make /var use the trans metadevice for logging:

# metainit -f d64 -t d4 d3

Edit vfstab as follows:

/dev/md/dsk/d64 /dev/md/rdsk/d64 /var ufs 1 no -

Ensure that no volumes are syncing before running the following:

# sync;sync;sync;init 6

Building Jumpstart Server with Root disk Mirroring

These steps are based upon the assumption that the boot server, install server, and config server are all on the same machine.

1. Put in the CD.

2. # cd /cdrom/s0/Solaris_9/Tools

3. # ./setup_install_server /jumpstart_dir

/jumpstart_dir is the directory used for storing the CD images.

4. #cd /cdrom/s0/Solaris_9/Misc

5. #cp * /jumpstart_dir

6. Put in the second CD.

7. #cd /cdrom/s0/Solaris_9/Tools

8. #./add_to_install_server /jumpstart_dir

9. This completes copying the boot server image as well as the install server image to the /jumpstart_dir directory.

10. Get the ethernet address of the client, either from the ifconfig -a command or from the
banner command at the ok prompt.

11. Create the /etc/ethers file:

03:12:af:21:12:12 client-name

12. Make an entry in the /etc/hosts file corresponding to the client-name:

10.0.0.1 client-name

13. Create the sysidcfg file. This file contains all the client identification settings. An example would be:

system_locale=US
timezone=Asia/Calcutta
terminal=vt100
name_service=NONE
timeserver=localhost
security_policy=NONE
root_password=And123MpaN
network_interface=primary
{ hostname = client-name
ip_address=10.0.0.1
netmask=255.255.255.0
default_router=10.0.0.30
protocol_ipv6=no }

Here the root_password should be in encrypted format and can be a standard password for fresh installs.

14. NOTE: For the sysidcfg file, the root password must be in the encrypted format, i.e. the format found in /etc/shadow. It can be copied from any of the existing servers.
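
For example, to grab the hash (the second colon-separated field of /etc/shadow) from an existing server:

# awk -F: '$1 == "root" {print $2}' /etc/shadow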

15. Create this sysidcfg file in the /jumpstart_dir directory as well; however, put it inside a directory named after the client:

/jumpstart_dir/client-name/sysidcfg

You need a unique sysidcfg file for each and every client.

16. A file called rules exists among the scripts that we copied from the Misc directory of the first CD. Please edit the file so that it contains only this line:

any - - any_machine -

17. any_machine is the name of the profile file that contains the partition information for the clients.

18. The profile file would be like the example below; this also enables root disk mirroring using SVM.

File system partitioning can be done based upon the following format:

filesys slice start_cylinder:slice_size_by_cylinders file_system

install_type initial_install
system_type standalone
partitioning default
cluster SUNWCall
filesys mirror:d1 c1t0d0s0 c1t1d0s0 15000 /
filesys mirror:d10 c1t0d0s1 c1t1d0s1 32000 swap
filesys mirror:d20 c1t0d0s3 c1t1d0s3 32000 /var
metadb c1t0d0s7
metadb c1t1d0s7

Here we are creating a standard profile file to be used for all clients.

19. We need to verify the rules file along with the profile file for any syntax errors; to do this we require something called the check script.

20. This check script is available on the 1st CD; please copy it from the CD to the /jumpstart_dir directory.

21. After copying, please run the script.

22. If there are no errors in the profile and rules files, this generates a rules.ok file in the /jumpstart_dir directory.

23. Share the /jumpstart_dir through NFS.

share -F nfs -o ro,anon=0 /jumpstart_dir

24. #cd /jumpstart_dir/Solaris_9/Tools/

25. Edit inetd.conf so that tftp works.

26. #./add_install_client -e 03:12:af:21:12:12 -s server-name:/jumpstart_dir -c server-name:/jumpstart_dir -p server-name:/jumpstart_dir/client-name sun4u

This creates the /etc/bootparams file, starts rpc.bootparamd, starts in.rarpd, and fills the /tftpboot directory.

27. The above steps complete the jumpstart server config.

28. Go to the client and type:

{ok} boot net - install

29. See the magic happen.

30. This completes the unattended installation of a client from a jumpstart server.