Friday, 30 May 2014

Some things to consider before changing settings

Some things to take note of:

1)-->/usr/lib/errstop (use carefully, only in special cases)
Stops the error logging daemon and disables diagnostic and recovery functions. Error logging should never be stopped during normal operations.

2)The error log daemon must not be running when the errdead command is run

3)The bos.perf.tools and perfagent.tools filesets must be installed on the system to run the topas command.

4)Procmon tool (graphical)
The following fileset must be installed:
-bos.perf.gtools

5)The error log is never cleared unless it is manually cleared.
[Never use the cp /dev/null command to clear the error log. A zero length error log file
disables the error logging functions of the operating system and must be replaced from a
backup]


6)The traceroute command (it creates load on the system, so do not use it on a production server)


7)Don't run gated & routed at the same time on a host


8)ODM commands
Use the low-level ODM commands only as a last resort, when no other option is left;
if used incorrectly they can crash the system. When in doubt, wait for IBM Support.

9)Do not restart the TCP/IP daemons with the command
-->startsrc -g tcpip
It starts all subsystems defined in the ODM for the tcpip group, which includes both
routed and gated (see the sketch after this list for a safer alternative).

10)Do not run fsck command on mounted filesystem.
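
Regarding item 9: a minimal sketch of the safer alternative, restarting only the individual subsystem that actually needs it (inetd is used here as an assumed example):
-->lssrc -g tcpip      # list the tcpip subsystems and their states
-->stopsrc -s inetd    # stop only the subsystem you need to restart
-->startsrc -s inetd   # start it again
-->refresh -s inetd    # or simply have it re-read its configuration without a restart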

Monday, 26 May 2014

Daily Management


A)User Administration

/etc/passwd
/etc/security/passwd
Attribute - a characteristic of a user or a group that defines the type of function the user or group can perform. These can be extraordinary privileges, restrictions, or processing environments assigned to a user:
-Access rights
-environment
-Authentication
-Account access

Files
1)/etc/security/environ-Environment attributes for users
2)/etc/security/lastlog-Last login attributes for users
3)/etc/security/limits-Process resource limits for users
4)/etc/security/user-Extended attributes for users
5)/usr/lib/security/mkuser.default-Default attributes for new users.
6)/usr/lib/security/mkuser.sys-Customize new user accounts.
7)/etc/passwd-Basic attributes of users
8)/etc/security/passwd-Password information
9)/etc/security/login.cfg-System default login parameters
10)/etc/utmp-Records of users logged into the system.
11)/var/adm/wtmp-Connect time accounting records.
12)/etc/security/failedlogin-Records all failed login attempts.
13)/etc/motd-Message to be displayed, every time a user logs in to the system.
14)/etc/environment-Basic environment settings for all users.
15)/etc/profile-Additional Environment settings  for all users
16)$HOME/.profile-Environment settings for a specific user
17)/etc/group-Basic attributes of groups
18)/etc/security/group-Extended attributes of groups

+ /etc/passwd
Name:Password:UserID:PrincipalGroup:GECOS:HomeDirectory:Shell
* - the account has an invalid password
! - the password is stored in the /etc/security/passwd file

+/etc/utmp
-->who -a

+/etc/profile
First file the OS uses at login time; contains settings such as umask, mail, and tty

+$HOME/.profile - second file the OS uses at login time
-Shells to open
-Environment variables
-Default editor
-Prompt appearance
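
A minimal sketch of what such a $HOME/.profile might contain (all values are assumed examples):
export PATH=$PATH:$HOME/bin      # environment variables
export EDITOR=vi                 # default editor
export ENV=$HOME/.kshrc          # file ksh reads for interactive shell settings
PS1="$(whoami)@$(hostname) $ "   # prompt appearance (evaluated once at login)
export PS1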

B)User Administration tasks

1)Adding a new  user account
-To create the smith account with smith as an administrator
-->mkuser -a smith
Create user account smith, with default values in /usr/lib/security/mkuser.default

-->mkuser smith
-->smitty mkuser

2)-->passwd
Change your passwd
-->smitty passwd

3)Do not use chuser if you have NIS.
-To change the expiration date for smith to 8 a.m. 1 December 1998
-->chuser expires=1201080098 smith (format: MonthDayHourMinuteYear)

To add smith to the group program
-->chuser groups=program smith
-smitty chuser

4)lsuser, smitty users
-->lsuser smith
displays all attributes of user smith in default format.
-Display all attributes of all users
-->lsuser ALL

5)Removing a user account,
-->smitty rmuser
-Remove smith
-->rmuser smith
-Remove smith ,all attributes ,passwd,authentication
-->rmuser -p smith

6)-->who
-->whoami
-->who -r(runlevel)
-Display any active process that was spawned by init
-->who -p

7)/etc/nologin
If this file exists, the system accepts the user's name and password but prevents the user from logging in.

8)-->chsh
change user's login shell attribute.

9)/etc/security/limits - Specifies the process resource limits for each user
default/prashant:
fsize=2097151    largest file (in 512-byte blocks) a user's process can create
core=2097151     largest core file a user's process can create
cpu=-1           maximum number of seconds of system time a user's process can use (-1 turns off the restriction)
data=262144
rss=65536        largest amount of physical memory a user's process can allocate
stack=65536
nofiles=2000     maximum number of files a user's process can have open at one time
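
These limits can be changed per user with the chuser command; a small sketch, assuming user smith and example values:
-->chuser fsize=4194302 nofiles=4000 smith   # raise smith's file-size and open-file limits
-->lsuser -a fsize nofiles smith             # verify the new values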

10)/etc/security/environ - Environment attributes for users.
mkuser creates a user stanza in this file.
Initialization of attributes depends upon their values in the /usr/lib/security/mkuser.default file.
chuser - to change attributes
lsuser- display attributes
rmuser-removes entire record for a user
ex.
-->pg /etc/security/environ
default:
root:
daemon:
bin:
sys:
adm:
uucp:
guest:

11)/usr/lib/security/mkuser.default
Contains the default attributes for new users.
This file holds the default attribute values for users created by the mkuser command.
-->pg /usr/lib/security/mkuser.default
user:
pgrp=staff
groups=staff
shell=/usr/bin/ksh
home=/home/$USER
admin:
pgrp = system
groups= system
shell = /usr/bin/ksh
home=/home/$USER

12)/etc/security/lastlog - Last login attributes for users.
username:
time_last_login=1134081482  (time of the last successful login, in seconds)
tty_last_login=/dev/pts/6   Terminal on which the user last logged in.
host_last_login=...         Host from which the user last logged in.
unsuccessful_login_count=0  The number of unsuccessful login attempts since the last successful login.
This attribute works with the user's loginretries attribute,
specified in the /etc/security/user file, to lock the user's account after a specified number of consecutive unsuccessful login attempts.
-->chsec -f /etc/security/lastlog -s username -a unsuccessful_login_count=0

13)/etc/security/user - Extended attributes for users
mkuser creates a stanza in this file for each new user and initializes its attributes with the default attributes defined in the /usr/lib/security/mkuser.default file.
This file also contains many attributes that let you control how users must manage their passwords, such as histsize, histexpire,
maxage - maximum age (in weeks) of a password,
maxexpired, maxrepeats, etc.
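
Password rules are usually set in the default stanza of /etc/security/user; a minimal sketch with assumed example values:
-->chsec -f /etc/security/user -s default -a maxage=8 -a histsize=4   # passwords expire after 8 weeks; remember the last 4
-->chuser maxage=13 smith                                             # override the rule for a single user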

14)/usr/lib/security/mkuser.sys
shell script that customizes a new user account.
Creates homedir, primary group, profile, for user's shell.

15)/etc/passwd
Basic user attributes
Name:Password:UserID:PrincipalGroup:Gecos:HomeDirectory:Shell

16)/etc/security/passwd-Contains passwd information.
A user who has an invalid password (*) in the /etc/passwd file  will have no entry in the /etc/security/passwd file
ex. root:
    passwd=CHbXMXLTUO1
    lastupdate=1134028556
    flags=
17)/etc/security/login.cfg
System default login parameters,
configuration information for login and user authentication
default:
sak_enabled=false
logintimes=
logindisable=0
logininterval=0
loginreenable=0
logindelay=0
usw:
shells=/bin/sh, /bin/bsh, /bin/csh
maxlogins=32767
logintimeout=60
auth_type=STD_AUTH
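
The values in the usw stanza above can be changed with chsec; a small sketch with assumed values:
-->chsec -f /etc/security/login.cfg -s usw -a logintimeout=30   # give users 30 seconds to enter the password
-->chsec -f /etc/security/login.cfg -s usw -a shells=/bin/sh,/bin/bsh,/bin/csh,/usr/bin/ksh   # add ksh to the valid login shells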

18)/etc/utmp
Record of users logged into the system
-->who -a
The who command processes this file; if the file is missing or corrupted, who produces no output.
+/var/adm/wtmp
Connect time accounting records

19)/etc/security/failedlogin-All failed login attempts
-To change the /dev/tty0 port to automatically lock if five unsuccessful login attempts occur within 60 seconds,
-->chsec -f /etc/security/login.cfg -s /dev/tty0
-a logindisable=5 -a logininterval=60
s-name of the stanza to modify
f-name of the stanza file to modify
-To unlock the /dev/tty
-->chsec -f /etc/security/portlog -s /dev/tty0 -a locktime=0
-To allow logins from 8.00 am, until 5.00 pm for all users
-->chsec -f /etc/security/user -s default -a logintimes=0800-1700
-PS1 Primary prompt
-->echo "$PS1"
-Change prompt
-->export PS1="root@`hostname`#"

*mkgroup
-Create a new  group account called managers and set yourself as the administrator
-->mkgroup -A managers
-Create a new group account called managers & set the list  of administrators to steve & mike
-->mkgroup adms=steve,mike managers

*chgroup -->smit chgroup(dont use if you have NIS)
-Changes attributes for group
-To add sam & carol to the finance group, which currently has only frank as a member
-->chgroup users=sam,carol,frank finance
-->chgroup users=u1,u2,u3 dbm
-To remove frank from the finance group but retain sam and carol, and remove the administrators of the finance group
-->chgroup users=sam,carol adms= finance

*chgrpmem : Changes the administrators of members of a group
-To remove joey as an administrator of the friends group
-->chgrpmem -a -joey friends
-To add members rachel & phoeby to group friends
-->chgrpmem -m + rachel,phoeby friends
-To list members and administrators of group friends
-->chgrpmem friends

acl examples
attributes :SUID
base permissions:
owner (frank):rw_
group (system):r_x
others:_ _ _
extended permissions:
enabled
permit rw_ u:dhs
deny r_ _ u:chas, g:system (user chas is denied read access while he is a member of group system)
specify r_ _ u:john, g:gateway, g:mail (john has read access only, and only while he is a member of both the gateway and mail groups)
permit rw_ g:account, g:finance
-->aclget filename
-Change the shell to /usr/bin/ksh for user prashant
-->chsh prashant /usr/bin/ksh
-To enable user smith to access this system remotely
-->chuser rlogin=true smith

C)Common login errors
1)3004-004: You attempted to log out while processes are still running
2)3004-007: Invalid login name or password
3)3004-008:Failed credentials
4)3004-009:Damaged login shell
5)3004-030:Caps lock on
6)3004-302:Account has expired
7)3004-687:User does not exist

D)Monitoring & Managing processes
-)Display all processes
--> ps -e -f
-)Display processes owned by ross, joey, chandler
-->ps -f -l -u ross,joey,chandler
-)Display info about all processes & kernel threads
-->ps -emo THREAD
-)list all 64 bit processes
--> ps -M
-)kill
-->kill PID
-->kill -kill 2098 1048
Forcibly stop processes 2098 and 1048
-->kill -kill 0
To stop all of your processes and log yourself off
-)To stop all the processes you own
-->kill -9 -1

+ nice & renice
-nice runs another command at a different priority
-renice changes the priority of an already running process
-nice: 0 (highest) to 39 (lowest)
-renice: -20 (highest) to 20 (lowest)
-->renice -n 5 -p 987 32
Processes 987 and 32 get a lower scheduling priority
-->renice -n -4 -p 324 25
Processes 324 and 25 get a higher scheduling priority
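
The nice command itself is not shown above; a minimal sketch, assuming a hypothetical long-running script:
-->nice -n 10 /home/prashant/bigjob.sh &   # start the job 10 nice levels below the default priority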
+ fuser
-To list the process numbers and user login names of processes using  the /etc/filesystems
-->fuser -u /etc/filesystems
-To terminate all of the processes using a given filesystem
-->fuser -k -x -u -c /dev/hd1
or
-->fuser -kxuc /home
You might want to use this command if you are trying to unmount the /dev/hd1 filesystem and a process accessing it prevents the unmount.

-)To list all processes that are using a file that has been deleted from a given filesystem
-->fuser -d /usr
what is still active in the filesystem

-)To return the processID, for all processes that have
open references within a specified filesystem
-->fuser -xc /tmp
fuser shows only user processes, not system or kernel processes
-->find /home -type d -exec fuser -u {} \;

E)File and directory permissions and ownership

+Access Control lists
The major task in administering access control is to define the group memberships of users, because these memberships determine the users' access rights to files they do not own.
With ACL permissions, you can permit or deny file access to specific individuals or groups without changing the base permissions
+Base Permissions
-owner group others
-r,w,x

+Attributes
setuid (SUID): when the SUID bit is set on an executable file, whoever runs the file does so with the effective user ID of the file's owner.

+SUID is meaningful only together with the execute (x) permission
+A lowercase 's' in the ls -l output means the execute permission is also set
+An uppercase 'S' means the execute permission is not set
+SUID applies only to files
-->chmod ug+s filename
+setgid (SGID): the file runs with the effective group ID of the file's group (see the demonstration below)
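
A quick demonstration of how the SUID bit shows up in ls -l output (a sketch using a hypothetical copy of ksh as the test file):
-->cp /usr/bin/ksh /tmp/demo
-->chmod u+s /tmp/demo
-->ls -l /tmp/demo    # owner execute column shows 's'  (-rwsr-xr-x)
-->chmod u-x /tmp/demo
-->ls -l /tmp/demo    # without execute permission it shows 'S'  (-rwSr-xr-x)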

*Extended Permissions
ex. of ACL
attributes : SUID
base permission :
owner(frank):rw_
group(system):r_x
others:_ _ _

extended permissions: optional
enabled extended permissions enable
permit rw_ u:dhs
deny r_ _ u:chas, g:system
specify r_ _ u:john, g:gateway, g:mail
permit rw_ g:account, g:finance
-)To display the access control  information for the status file
-->aclget status

*Setting Access Control Information(aclput)
-)To set  the access control information for the status file with the access control information stored in the acldefs file
-->aclput -i acldefs status
2. To set the access control  information for the status file with the same information used for  the plans file
-->aclget plans | aclput status

*acledit
-)To edit acl info of plans file
-->acledit plans

*chmod
Modifies the mode bits and the extended access control lists (ACLs) of the specified files or directories.
+Permission for directories
r-list
w-create,delete
x-cd

-->chmod go-w+x mydir

-->chmod u=rwx,go= filename
user has all permissions; group and others are denied all access
-)To recursively descend directories and change file and directory permissions for the given tree structure
-->chmod -R  777 f*

*chown
changes the owner of the file
-) How to change the owner of the file program.c
-->chown prashant program.c
-)change the owner & group of all files in the directory /tmp/src to owner john & group build
-->chown -R john:build /tmp/src/

*chgrp
changes the group associated with the specified file or directory
-)Changes the group ownership of the file or directory
named test to production
-->chgrp production test
(the file or directory test now belongs to group production)
-)Change the group ownership of the directory named production, and of all the files and subdirectories under it to test
--> chgrp -R test production
(the directory production and everything under it now belong to group test)

*Cron & crontab
--> crontab -l
Lists the contents of your crontab file from the /var/spool/cron/crontabs directory
-Crontab entry: 0,15,30,45 8-17 * * 1-5 /home/script1
Executes a command called script1 every 15 minutes between 8 AM and 5 PM, Monday through Friday
-->crontab -e
To create and update  the crontab file.
The crontab command invokes the editor.
-->crontab -v
To check the crontab submission time
-->crontab -r prashant
Removes the /var/spool/cron/crontabs/prashant file

+crontab files are kept in /var/spool/cron/crontabs/
Each cron user has a crontab file with their username
as the filename in this dir.

+crontab
minute, hour, day-of-month, month, day-of-week command
+If the cron.allow file exists, only users whose login names appear in it can use the crontab command.
The root user name must appear in the cron.allow
file , if the file exists.
If only the cron.deny file exists, any user whose name does not appear in the file can use the crontab command
+A user cannot use the crontab command if one of the following is true
-The cron.allow file and cron.deny file do not exist.
-cron.allow file exists but the user's login name is not listed in it.
-cron.deny file exists & the user's login name is listed in it.
-->cat > /var/adm/cron/cron.allow
root
deploy

*-->crontab -e
edit
-->crontab -v
check crontab submission time

*Removing crontab file
Avoid running crontab -r when you are logged in as root. It removes the /var/spool/cron/crontabs/root
file.
-->crontab -r
Do not run it  as root
-->mail denise < letter1
send the file letter1 as a message  to user denise
-->echo $PATH > path (output of command directed)
-->cat path

-->cat file1
line1
-->cat file2
line2
-->cat file2>>file1
-->cat file1
line1
line2
+-->cat >test
test
ctrl+D
-->cat test
test

-)The chsec command changes the attributes stored in the security configuration stanza files.
-)To display the current protected-state environment variables
-->setsenv
-)To set the file size limit to 100 (512-byte) blocks
-->ulimit -f 100
Sets or reports user resource limits, as defined in /etc/security/limits
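
A small sketch of how ulimit is typically used interactively (values assumed):
-->ulimit -a      # display all current soft limits for this shell
-->ulimit -f 200  # limit files created by this shell and its children to 200 512-byte blocks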



Backup & Recovery


1)mksysb
mksysb command creates a bootable image of all mounted file systems on the rootvg volume group. You can use this backup command to restore a system to its original state.
User-defined paging spaces, unmounted file systems, and raw devices are not backed up.

+Data layout of a mksysb tape
BOSBOOT image | mkinsttape image | dummy TOC | rootvg data
-BOSBOOT image=kernel,device drivers needed to boot from mksysb tape. created by bosboot.
-mkinsttape image=
./tapeblksz=block size the tape drive was set to when mksysb command was run.
 ./image.data=image data installed during the BOS installation process; includes the sizes, names, maps, and mount points of the logical volumes and filesystems in rootvg. You can customize it before running mksysb, or run
-->mksysb -i
to generate a new ./image.data file on the tape
during the backup
-->mkszfile
generates the ./image.data file.

+The dummy TOC is used so that the mksysb tape contains the same number of images as a BOS install tape.
>Excluding file systems from a backup
-->cat /etc/exclude.rootvg
^./tmp/
Then run
-->mksysb -e  /dev/rmt0
-e exclude the contents of exclude.rootvg

>How to create a bootable system backup
-->smitty mksysb
You cannot run the mksysb command against a user VG; use the savevg, tar, cpio, or backup commands to back up a user VG.

>List content of a mksysb image
To verify the content of an mksysb image
-->smitty lsmksysb

>Restore a mksysb image
An mksysb image enables you to restore the system image onto target systems that might not contain the same hardware devices or adapters, require the same kernel (uniprocessor or multiprocessor),
or be the same hardware platform as the source system.
You have several  possibilities to restore the mksysb image.
1)If restoring on exactly the same machine , you can boot directly from the mksysb media and restore from there
2)If restoring on a different type of machine,  use cloning function -->smit alt_clone
3)If you do not want to interfere with the production environment,use alternate disk install using mksysb
4)If you want to restore only several files from the mksysb image
-->smitty restmksysb
use a leading dot (.) before the filename, e.g. ./etc/hosts

>tctl command
-->tctl -f /dev/rmt0 rewind
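
A few more tctl subcommands that are often useful with mksysb tapes (a sketch; /dev/rmt0.1 is the no-rewind-on-close device):
-->tctl -f /dev/rmt0.1 fsf 3   # skip forward past the first three images to reach the rootvg data
-->tctl -f /dev/rmt0 offline   # rewind and eject the tape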

C)Backup Strategies
3 types of backup methods
1-Full backup
2-Differential backup
3-Incremental backup

1.Full backup
2.Differential backup
Only modified files are backed up, and only if they changed after the latest full backup. Differential backups are cumulative: once a file has been modified, it is included in every differential backup until the next full backup.
Advantages
-To restore, only the latest full backup and the latest differential backup media sets are needed.
-The backup window is smaller than for a full backup.
Disadvantage
-If data changes a lot between full backups, the differential backups grow large very quickly.

3.Incremental backup
Also backs up modified files only; however, an incremental backup compares the modification time of a file with the time of the last backup (whether full or incremental). If the modification date is more recent than the last backup date, the file is backed up.
Advantages
-Backup window is smaller than a full backup
-Only the difference from a previous backup will be written on media.
Disadvantages
-To restore , the latest full backup and all the subsequent incremental backup media sets following that full backup are needed.
-To restore a single file, tape handling operations are intensive
-A damaged or lost medium in the incremental set can mean disaster: the modifications of the files stored on that medium may be lost forever.
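
To make the two strategies concrete, a minimal sketch of root crontab entries implementing the differential scheme with the backup command (the /home filesystem and the /dev/rmt0 tape are assumed examples):
0 22 * * 0   /usr/sbin/backup -0 -u -f /dev/rmt0 /home   # Sunday night: full (level 0) backup
0 22 * * 1-6 /usr/sbin/backup -1 -u -f /dev/rmt0 /home   # other nights: everything changed since the level 0
For an incremental scheme, the level number would be increased each night (1, 2, 3, ...) so that each run captures only the changes since the previous run.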

D)Related backup and restore  commands
1)savevg -->smit  savevg
To backup uservg
The savevg command uses a data file created by the mkvgdata.
-->/tmp/vgdata/vgname/vgname.data
This vgname.data file contains information about a user VG. The savevg command uses this file to create a backup image that the restvg command uses when it restores the VG.
-->savevg  -e /dev/rmt0 datavg
-e excludes the files specified in the /etc/exclude.vgname file
-u (a flag of the backup command) updates the /etc/dumpdates file with the raw device name of the filesystem and the time, date, and level of the backup. You must specify the -u flag if you are making incremental backups.

2)restvg: --> smit restvg
Restores the uservg and all its containers and files
-->restvg -f /dev/rmt0 hdiskn

3)Backup
backup files and filesystems.
-To backup all the files and subdirectories in the /home directory  using full path names
-->find /home -print | backup -i -f /dev/rmt0
Because the files are archived using full path names, they will be written to the same paths when restored.
-To backup the /(root) filesystem
-->backup -0 -u -f /dev/rmt0 /
-0 zerolevel specifies that all files in /(root) filesystem be backed up.
-u update the /etc/dumpdates file
-To backup all the files in the /(root)
filesystem that have been modified since the last level 0 backup
-->backup -1 -u -f /dev/rmt0 /

4)restore
Extracts files from archives created with the backup command
To exclude data that you do not want to restore from a specific path, use the find command to build the list of files and pass the result to restore.
To restore an entire filesystem archive
-->restore -rvqf /dev/rmt0
-f device
-q medium is ready (do not prompt)
-v verbose
Restores the entire filesystem archived on the tape device /dev/rmt0, into the current directory.
To restore a specific directory and the contents of that directory from a archive
-->restore -xdvqf /dev/rmt0 /home/mike/tools
-x extract files by their name
-d extract all files & subdirectories in the /home/mike/tools/ directory.
-->restore -d /vg-backup/latest-backup(file) hdisk2

5)tar
-->tar -cvf /dev/rmt0 /home
-->tar -tvf /dev/rmt0
list contents of file
-->tar -xvf /dev/rmt0
extract in current directory

6)cpio
To copy the files in the current directory whose names end in .c onto diskette
-->ls *.c | cpio -ov > /dev/fd0
This copies all the files in the current directory whose names end with .c
To copy the current directory and all subdirectories onto diskette
-->find . -print | cpio -ov > /dev/fd0
Saves the directory tree that starts with the current directory(.) and includes all of its subdirectories and files
-o Reads file path names from standard input and copies those files to standard output
-v List filenames

7)pax
-->pax -wf /dev/rmt0 .
Copy the contents of the current directory to the tape drive
-->pax -rw file1 /tmp
Copy file1 to /tmp

8)gzip -c file > file.gz
Compress file into file.gz (-c writes to standard output and leaves the original file in place)

9)gzip -d file.gz
Decompress (see also the combined tar + gzip example after the next item)

10)tcopy
-->tcopy /dev/rmt0 /dev/rmt1
Copy one tape to another
-->tcopy /dev/rmt0
Display the layout of the tape (for example, of an mksysb image)
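
The tar and gzip commands above are often combined in a single pipeline; a minimal sketch with assumed paths:
-->tar -cvf - /home | gzip -c > /tmp/home.tar.gz   # archive /home and compress it in one pass
-->gzip -dc /tmp/home.tar.gz | tar -xvf -          # decompress and extract it again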

E)Verify the content of a backup media
It is good practice to verify that backup media are readable, to avoid trouble at recovery time caused by tape incompatibilities, damaged media, or missing files.
If the backup media cannot be read, check the following steps:
1.Media is not damaged , try another media.
2.Verify that you have latest drivers installed for your backup device
3.Check that the backup device is turned on
4.Try the media on another server
5.Change the block_size parameter of the tape drive to 0 (auto-detect)
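
A quick sketch of what a verification pass might look like for the common archive formats (device and paths assumed):
-->tar -tvf /dev/rmt0 > /dev/null && echo "tar archive is readable"
-->restore -Tqf /dev/rmt0 | wc -l    # list a backup-format archive and count its entries
-->smitty lsmksysb                   # verify the contents of an mksysb image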






Problem Determination and Resolution


1)ping
-->ping
determine the status of the network and various remote  foreign hosts
-Tracking and isolating H/W & S/W problems
-Testing, measuring & managing networks

+Display the route buffer on the returned packets
-->ping -R server2

If you cannot reach other computers on the same subnetwork with ping, look for problems in your system's network configuration using arp and ifconfig.

2)arp
Displays and modifies the Internet-to-physical-address (MAC address) translation tables used by ARP (Address Resolution Protocol). The arp command displays the current ARP entry for the host specified by the Hostname variable.
-->ping 9.3.5.193
No response
-->ping 9.3.5.196
Response
-->arp -a | grep 9.3.5.19
9.3.5.193=No MAC
9.3.5.196=MAC - 0:2:55:A8:00:dd
check cable connections, H/W

3)ifconfig
-->ifconfig -a -d
Show only those interfaces that are down.
If an interface is down and you have a problem reaching the subnet on which the interface is configured, run

-->errpt
to check whether any errors have been reported for the interface (for example, a duplicate IP address on the network)

-->diag
Run diagnostics on the interface
If the interfaces have no problems and are in the active state, but your system still cannot reach the computers on the same subnetwork, check that the interface's subnet mask is correct.
For example, to change the subnet mask to 255.255.255.252 for the en1 interface
-->ifconfig en1 netmask 255.255.255.252 up

4)traceroute (it creates load on the system, so do not use it on a production server)
-->traceroute
Traces the route of an IP packet; used for network testing, measurement, and management, primarily for manual fault isolation.

A-2) H/W problems
1)errpt 
Generates an error report from entries in the error log, but does not perform error log analysis.
for analysis use
-->diag
-->errpt -a

+class -General source of  the error
H-H/W
S-S/W
O-informational messages
U-Undetermined

+Type - Severity of the error that has occurred.
PEND-The loss of availability of a device or component is imminent
PERF-The performance of the device has degraded below an acceptable level
PERM-A condition that could not be recovered from
-severe errors, defective hardware or software module
TEMP-A condition that was recovered from after a number of unsuccessful attempts
UNKN-It is not possible to determine the severity of the error
INFO-The error log entry is informational and was not the result of an error

+Resource Name-Name of the resource that has detected the error.
+Location code-Path to the device: drawer, slot, connector, port
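
The summary report can be narrowed with the class and type flags; a small sketch:
-->errpt -d H -T PERM    # only permanent hardware errors
-->errpt -a -d H | more  # detailed report for all hardware errors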

2)diag
Diag uses the errorlog to diagnose H/W problems.
The system deletes hardware error entries older than 90 days
and software entries older than 30 days
-->diag
Diagnostic Routines - System Verification - Problem Determination.

B)Reasons to monitor root mail
1)mail
-->mail
Most of the processes send a mail to the root account with detailed information
-->diagela
Diagnostic Automatic Error log Analysis
Provides the capability to perform error log analysis whenever a permanent hardware error is logged.
It sends a message to your console and to all members of the system group. The message contains an SRN or a corrective action; diagela is enabled by default at BOS installation time.

2)crontab
sends mail to root

3)Other software packages, especially security-related ones, have the ability to notify the administrator.
For example, in case of a security breach, an illegal file permission change, or unauthorized password-file access, the system administrator receives a message.

C)System dump facility
System generates a system dump when a severe error occurs. System dumps can also be user-initiated  by users with root user authority.
A dump creates a picture of your system's memory contents. Sysadmins and programmers can generate  a dump & analyze its contents when debugging new applications.

a)Configuring a dump device
At installation time the primary dump device (/dev/hd6 by default) is created; the secondary dump device is /dev/sysdumpnull.
If your system has 4 GB or more of memory, the default dump device is /dev/lg_dumplv, a dedicated dump device.
A primary dump device is a dedicated dump device; a secondary dump device is a shared dump device.
The dump device can be configured to either tape or a  logical volume on the hard disk to store the system dump.
+To list the current dump destination
-->sysdumpdev -l
+Change primary dump device from /dev/hd6 to logical volume /dev/dumpdev
-->sysdumpdev -P -p /dev/dumpdev

+Info  about previous dump
-->sysdumpdev -L

+Minimum size for the dump space can be determined by
-->sysdumpdev -e

+To increase the size of the dump device, use
-->extendlv (see the sketch below)
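
A minimal sketch of checking and growing a dedicated dump logical volume (the name lg_dumplv is an assumed example):
-->sysdumpdev -e          # estimated dump space needed, in bytes
-->lslv lg_dumplv         # current size of the dump LV (LPs x PP size)
-->extendlv lg_dumplv 2   # add two logical partitions if the estimate exceeds the current size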

1+ Start a system dump
A dump can be system-initiated or user-initiated. If your system stops with the number 888 flashing in the operator panel display, the system has generated a dump and saved it to the primary dump device.

2+Understanding 888 error messages
It means either  a H/W or S/W problem has been detected  and a  diagnostic message is ready to be read.
Record info contained in the 888 sequence message,
-888
-102-unexpected system halt
-mmm-cause of halt-crash code h/w,s/w
-ddd-Dump Status-Dump code
-888
When the system dump completes, the system either halts or reboots, depending on the setting of the autorestart attribute of sys0
-->lsattr -El sys0 -a autorestart
If autorestart is true, the system automatically reboots after a crash
Change this setting
-->chdev -l sys0 -a autorestart=false
sys0 changed
-->lsattr -El sys0 -a  autorestart

+ User initiated dump
-->sysdumpstart -p
write dump to the primary device
-->sysdumpstart -s
to secondary dump device

3+Copy a system dump
-->pax
Allows you to copy, create, and modify files that are greater than 2 GB in size, such as system dumps, from one location to another. This is useful for migrating dumps, as the tar and cpio commands cannot manipulate files larger than 2 GB. pax can also view and modify files in the tar and cpio formats.
To view the contents of the tar file /tmp/test.tar
-->pax -vf /tmp/test.tar
To create a pax command archive on tape that contains two files
-->pax -x pax -wvf /dev/rmt0 /var/adm/ras/cfglog /var/adm/ras/nimlog

To untar the tar file /tmp/test.tar
to the current directory
-->pax -rvf /tmp/test.tar

To copy the file run.pax to the /tmp directory
-->pax -rw run.pax /tmp

4+snap
Used to gather configuration information of the system. It is a method of sending lslpp & errpt output to your service center, for diagnosing problems.
Default directory for the output from the snap command
-->/tmp/ibmsupt
8MB of temporary disk space is required when executing snap.
To copy general system information ,including file system, kernel parameters and dump information to rmt0
-->/usr/sbin/snap -gfkD -o /dev/rmt0
It also copies a test case of the problem into the /tmp/ibmsupt directory

5+ Analysing system dumps
kdb -allows you to examine a system dump or running kernel

D) alog
--/var/adm/ras/bootlog
Boot log contains info generated by cfgmgr & rc.boot
To change the size of the boot log
--> echo " boot log resizing " | alog -t boot -s 8192

Display the bootlog
--> alog -t boot  -o | more

E) Determine Appropriate actions for user problems -commands
1)usrck
Verifies the correctness of the user definitions in the user database files, by checking the definitions for all the users or for the users specified by the User parameter.
This command checks
1>/etc/passwd
entries; duplicate names are reported and removed. Duplicate IDs are reported but not fixed.
If an entry has fewer than six colon-separated fields, the entry is reported.
2>/etc/passwd - /etc/security/user, /etc/security/limits.
usrck verifies that each user name listed in the /etc/passwd file has a stanza in /etc/security/user; it also verifies that each group name listed in /etc/group has a stanza in the /etc/security/group file.

To verify that all the users exist in the user database, and have any errors reported (but not fixed)
-->usrck -n ALL

To delete from the user definitions those users who are not in the user database files, and have any errors reported
-->usrck -y ALL (-y fixes and reports errors)

2) grpck
Verifies the correctness of the group definitions in the user database files by checking the definitions for all the groups or for the groups specified by the Group parameter.
To verify that all the group members and admins exist in the user database ,and have any errors reported (but not fixed)
-->grpck -n ALL

To verify that all group members and admins exist  in the user database & to have errors fixed, but not reported
-->grpck -p ALL

To verify the uniqueness of the group name & group ID defined for the abc group
-->grpck -n abc
(-n only reports errors, it does not correct them)

-->grpck -t abc
(-t reports errors and asks interactively whether they should be fixed)

-->grpck -y abc
(-y fixes errors and reports them)

3)pwdck
Verifies the correctness of passwd info .
verify that all local users have valid passwords
-->pwdck -y ALL
This reports errors and fixes them.

Ensure that user joey has a valid stanza in /etc/security/passwd
-->pwdck -y joey
fixes errors and reports them

4)sysck
Checks file definitions against the files extracted from the installation and update media and updates the SWVPD (Software Vital Product Data).
Used during installation and update of software products.
sysck updates the file name, product name, type, checksum, and size of each file in the SWVPD database.
A product that uses the installp  command to install has an inventory file in  its image.
To add the definitions to the inventory database and check permission ,links,checksums.
-->sysck -i -f smart.rte.inventory smart.rte

To remove any links to files for a product that has been removed from the system and remove the files from the inventory database
-->sysck -u -f smart.rte.inventory smart.rte

5)lsgroup & lsuser
-->lsgroup -f ALL >> /tmp/check
-->lsuser -f ALL >> /tmp/check
write output in file /tmp/check
-->lsuser joey
used by root for a specific user

6)The user limits
The /etc/security/limits file specifies the process resource limits for each user.
-->mkuser
-->chuser
-->lsuser
-->rmuser
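
For example, to look at just the limit-related attributes of one user (user smith is an assumed example):
-->lsuser -a fsize cpu data rss stack nofiles smith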

F)Identifying H/W problems
a)Replacing hot plug devices
-->lsslot -c pci
Displays the number, location, and capabilities of hot plug PCI slots.
Before replacing a hot  plug adapter or disk, you should unconfigure all other devices or interfaces that are dependent on the physical device you want to remove.
-->lsdev -C | grep sis
device in available state
The Hot Plug Task can be started with either SMIT or diagnostic (DIAG) tools menu.
-->diag
-Task Selection (Diagnostics, Advanced Diagnostics, Service Aids)
-Hot Plug Task -PCI HOT PLUG MANAGER
        -RAID HOT PLUG DEVICES
        -SCSI & SCSI RAID HOT PLUG MANAGER
-PCI HOT PLUG MANAGER
-Unconfigure a device-Device name -ent2
Go back to
-PCI HOT PLUG MANAGER MENU
-Replace/remove a PCI HOT PLUG Adapter.
After this option has been selected ,the pci slot will be put into a state that allows the pci adapter to be removed.
A blinking attention light will identify the slot that contains the adapter that has been selected for replacement
change the adapter now
cfgmgr new device
configure IP
-->smitty chinet
A repair action should be logged in the AIX error report against the ent2 device; this shows others that the error logged in the error report has been resolved.
To enter the repair action: diag - Task Selection - Log Repair Action - ent2 device.

G)Failed disk replacement
Reasons to replace a disk
-failed
-report i/o errors and you want to replace it.
-does not satisfy /meet your requirements.

Scenario1
If the disk you are going to replace is mirrored ,then
1. Remove the copies of all logical volumes that were residing on that disk using rmlvcopy or unmirrorvg
2.Remove the disk from vg using reducevg
3.Remove the disk definition using rmdev
4.Physically remove the disk. If the disk is not hot-swappable, you may need to reboot the system.
5)Make replacement disk available , If the disk is hot-swappable ,you can run cfgmgr, otherwise you need to reboot the system.
6)Include newly added disk in vg using extendvg.
7)Recreate & synchronize the copies for all lv using mklvcopy, or mirrorvg.

Scenario2
If the disk you are going to replace is not mirrored and is still functional, then
1. Make the replacement disk available.
If the disk is hot-swappable , you can run cfgmgr; otherwise reboot is required.
2.Include newly added disk to vg using extendvg
3.Migrate all partitions from the failing disk to new disk using migratepv or migratelp.
If the disks are part of rootvg, consider the following
-If old disk contains a copy of the BLV, you have to clear it using --> chpv -c hdiskn
-New BLV must be created on new disk using bosboot
-Bootlist must be updated using bootlist.
-If old disk contains a paging space or a primary dump device you should disable them. After the migratepv command completes , you should reactivate them.
4.Remove old disk, reducevg
5.Remove old disk, definition,rmdev

Scenario3
If the disk is not mirrored ,has failed completely and there are other disks available in the vg then,
1.Identify all logical volumes that have at least one partition located on the failed disk
2. Close the lv, unmount all corresponding fs
3.Remove the file systems & logical volumes using rmfs
4.Remove the failing disk form vg ,using reducevg 
5.Remove disk definition,rmdev
6.Physically remove disk, if it is not HOT-SWAPPABLE
reboot is required.
7.Make replacement disk available, if it is HOT-SWAPPABLE run cfgmgr , if not reboot is required.
8.extendvg new disk
9.Recreate all lv, fs using mklv, crfs.
10.If you have a backup of your data,restore your data from backup.

Scenario4
If the disk is not mirrored, failed completely, no other disks available in the vg(vg has only one disk or all pv failed simultaneously) & the vg is not rootvg then.
1-Export  vg definition from system using exportvg
2-Ensure that /etc/filesystems does not contain any incorrect stanzas
3-Remove the disk definitions using the rmdev command
4-Physically remove disk,cfgmgr or reboot
5-Make new replacement disk available ,cfgmgr or reboot
6-If you have a vg backup , restore it using  restvg
7-If you dont have vg backup, recreate the vg,lv,fs
8-If u have a backup of your data,restore your data from backup

Scenario5
If disk is not mirrored ,has failed completely, no other disk available in vg & vg is rootvg then
-Replace the failing disk
-Boot in maintenance mode
-Restore the system from an mksysb image

I)Troubleshoot graphical problems

1)Full /home filesystem
-users will not be able to log in
-looks like hang
-go through command line

2)Name Resolution problems
-nslookup
-verify your systems network access
-server is up and running
-start and stop server-->smitty spnamerslv

3)export DISPLAY=server3:2.0
On server3: xhost +server2
grants server2 access to connect to the X server on server3
On server3: xhost -server2
denies access

+TTY display problems
-->clear
failed
-->smitty
failed
TERM variable is not set to the correct value
-->export TERM=vt100

J)perfpmr


MONITORING & PERFORMANCE TUNING


 Disk quota
 It controls the use of disk space,
 It is defined for individual users or groups,
 It is maintained for each jfs.

Disk quota establishes limits based on the following parameters
-User's or group's soft limits,
-User's or group's hard limits,
-Quota grace period.

Soft limit- The number  of 1 kB disk blocks or the number of files under which the user must remain.
Hard limit- Maximum amount of disk blocks or files the user can accumulate under the established disk quotas.

Quota grace period - This period allows the user to exceed the soft limit for a short period of time (the default value is one week).

 If the user fails to reduce usage below the soft limit during the specified time, the system will interpret the soft limit as the maximum allocation allowed, & no further storage is allocated to the user.

Typically, only those filesystems that contain user home directories and files require disk quotas .

Consideration when implementing disk quota
-Your  system has limited disk space
-You require more file security
-Your disk usage levels are large
(Apply disk quota when above conditions are true)

The disk quota system can be used only with the journaled filesystem.

Don't establish quotas for /tmp; because many editors and system utilities create temporary files in the /tmp filesystem, it must be free of quotas.

The specified file systems must be defined with quotas in the /etc/filesystems file and must be mounted/remounted.
The quotaon command looks for quota.user and quota.group(default quota files) in the root directory of the associated filesystem.


Display quota
-->quota

Use the chfs command to include the userquota and groupquota configuration attributes in the /etc/filesystems file.
--> chfs -a "quota = userquota" /home
enable user quota on the /home filesystem.

--> chfs -a "quota=userquota, groupquota" /home
both user and group quotas are on for /home.

The related entry in /etc/filesystems
/home
dev = /dev/hd1
vfs = jfs
log = /dev/hd8
mount = true
check = true
quota = userquota, groupquota
options = rw

The quota.user and quota.group file names are the default names, located in the root directory of the filesystem.

To name userquota, myquota.user
and groupquota, myquota.group
-->chfs -a "userquota=/home/myquota.user"
-a "groupquota=/home/myquota.group" /home

Entry in /etc/filesystems
/home
dev = /dev/hd1
vfs = jfs
log = /dev/hd8
mount = true
check = true
quota = userquota, groupquota
userquota = /home/myquota.user
groupquota = /home/myquota.group
options = rw

To duplicate the quotas established for user joey on to user ross
-->edquota -p joey ross

To enable quotacheck and turn on quotas during system startup, add in /etc/rc
-->vi /etc/rc
echo " Enabling filesystem quotas"
/usr/sbin/quotacheck -a
/usr/sbin/quotaon -a


To enable user quotas for the /usr/Tivoli/server/db filesystem
-->quotaon -u /usr/Tivoli/server/db

Disable user and group quotas for all filesystems in the /etc/filesystems file
-->quotaoff -v -a

To display your quotas as user joey
-->quota joey

Display quotas as the root user for user ross
-->quota -u ross



B)Recovering from a full filesystem

1)Fix a full / (root) filesystem

a)Use who command to read the contents of the /etc/security/failedlogin
--> who /etc/security/failedlogin

b) The condition of TTYs respawning too rapidly can create failed login entries.
To clear the file after  reading or saving the output,execute
-->cp /dev/null /etc/security/failedlogin

C)Check the /dev directory for a device name that is typed incorrectly. If rmto is used instead of rmt0, a regular file called rmto is created in /dev.
The command will proceed until / is filled, because /dev is part of the / filesystem.
--> ls -l | pg
-look for the entries that are not valid, that do not have a major or minor number
-wrong filename
-file size greater than 500 bytes

D) If system auditing  is running, the default /audit directory can rapidly fill up and require attention.

E)Check large files, use find
-->find / -xdev -size +1024 -ls | sort -r +6
Finds large files (more than 1024 blocks) and sorts them in reverse order with the largest files first.

F)Before removing any file, check to ensure a file is not currently in use
-->fuser filename
If a file is open at the time of removal , it is only removed from the directory listing. The blocks allocated to that file are not freed until the process holding the file open is killed.

2) Fix a full /var filesystem

check the following

a)--> find /var -xdev -size +2048 -ls | sort -r +6
Look for large files in /var
b)Check for obsolete or leftover files in /var/tmp

c)Check the size of the /var/adm/wtmp file,
which logs all logins, rlogins , & telnet sessions
The log grows indefinitely until system accounting clears it out nightly.
-->cp /dev/null /var/adm/wtmp
To clear /var/adm/wtmp

To edit the /var/adm/wtmp file , first copy the file temporarily with the following command
-->/usr/sbin/acct/fwtmp < /var/adm/wtmp > /tmp/out
Edit the /tmp/out file to remove unwanted entries then replace the original file with the following command
-->/usr/sbin/acct/fwtmp -ic < /tmp/out > /var/adm/wtmp

d)Clear the error log in the /var/adm/ras directory using the following procedure. The error log is never cleared unless it is manually cleared.
[Never use the cp /dev/null command to clear the error log . A zero length errlog file disables the error logging functions of the operating system and must be replaced from a backup]

clear the error log in the /var/adm/ras directory using the following procedure
a)Stop the error daemon
-->/usr/lib/errstop
b)Remove or move the errorlog file to a different file system
--> rm /var/adm/ras/errlog
or
-->mv /var/adm/ras/errlog filename(moved file)
C)Restart error daemon
-->/usr/lib/errdemon
D)Check /var/adm/ras/trcfile, if it is large and trace is not currently being run
-->rm /var/adm/ras/trcfile
e)If your dump device is set to hd6 (the default), there might be a number of vmcore* files in the /var/adm/ras directory. Remove these files if they are old.
f)Check the /var/spool/, which contains the queuing subsystem files
clear the queuing subsystem
-->stopsrc -s qdaemon
-->rm /var/spool/lpd/qdir/*
-->rm /var/spool/lpd/stat/*
-->rm /var/spool/qdaemon/*
-->startsrc -s qdaemon
g)Check /var/adm/acct/ which contains accounting records. If accounting is running ,this directory may contain several large files.
h)Check /var/preserve/ for terminated vi sessions. If a user wants to recover a session , you can use the
-->vi -r
to list all recoverable sessions.
To recover a specific session
--> vi -r filename
i)Check the /var/adm/sulog file, which records the number of attempted uses of the su command and whether each was successful. (Recreated automatically)
j)Check /var/tmp/snmpd.log, which records events from the snmpd daemon. (Recreated automatically)
This file's size can be limited using /etc/snmpd.conf

3) Fix a full user defined filesystem
Fix an overflowing user-defined filesystem
+--> find /fs -xdev -size +2048 -ls | sort -r +6
Check for large files (more than 2048 blocks)

+Remove old backup files and core files.
-->find / \( -name "*.bak" -o -name core -o -name a.out -o -name ed.hup \) -atime +1 -mtime +1 -type f -print | xargs -e rm -f
This removes old backup files and core files
(*.bak, a.out, core, or ed.hup files)

+To prevent files from regularly overflowing the disk, run
--> skulker
as part of the cron process to remove unnecessary or temporary files.

+--> find /var -xdev -mtime 0 -ls
Locate files that have been changed in the last 24 hours.

4) Fix a damaged filesystem
Filesystems get corrupted when i-node or superblock information for the directory structure of the filesystem gets corrupted , due to hardware error or corrupted programs.

Symptom of corrupted fs
-System cannot locate, read ,write data located in the particular filesystem.

Solution.
1)Unmount the damaged filesystem
-->smit unmountfs (for a filesystem on a fixed disk drive)
-->smit unmntdsk(for a filesystem on a removable disk)
2)Assess filesystem damage by running fsck
-->fsck /dev/myfilelv (unmount first)
Checks and repairs inconsistent filesystems.
3)If filesystem cannot be repaired , restore it from backup


c)The system error log
+Error logging is automatically started by the rc.boot script during system initialization, and is automatically stopped by the shutdown script during shutdown.
The errdemon program starts the error logging daemon ,reads error records from the /dev/error file, and writes entries to the system error log. The default system errorlog is /var/adm/ras/errlog file.
The last entry is placed in NVRAM, and when the system reboots, it is written to the error log file.

-->/usr/lib/errdemon
Started at boot, but you can restart it if it fails

-->/usr/lib/errstop (use carefully ,only in special cases)
Stops the error logging daemon and disables diagnostic and recovery functions. Error logging should never be stopped during normal operations

-->/usr/lib/errdemon -l
Determine the path to your system's errorlog file.

-->/usr/lib/errdemon -s 2000000
To change the maximum size of the error log file.

-->/usr/lib/errdemon -B 64000
Change the size of error log device driver's internal buffer.

*errpt command
To retrieve the entries in the error log
1)To display a complete summary report of the errors that have been recorded (errpt does not perform error log analysis)
-->errpt

To display all the errors that have a specific error ID
-->errpt -j 8527F64

To display all the errors logged in a specific period of time
-->errpt -s 1122160405 -e 1123160405 (format mmddhhmmyy: month, day, hour, minute, year)

*The errclear command
To delete entries from the errorlog
To delete all entries from the error log
-->errclear 0

*The errlogger command
The errlogger command allows you to log operator messages to the system error log.
Messages can be up to 1024 bytes in length
-->errlogger "This is a test of the errlogger command"
-->errpt
IDENTIFIER TIMESTAMP T C RESOURCE_NAME DESCRIPTION
AA8AB241 1129134705 T O OPERATOR OPERATOR NOTIFICATION

Now to display the operator notification generated (ID AA8AB241)
-->errpt -a -j AA8AB241
This is a test of the errlogger command.

*Extracting error records from a system dump

    The errdead command extracts error records from a system dump containing the internal buffer maintained by the /dev/error file.
The errdead command extracts the error records from the dump file and adds those error records directly to the error log.
[The error log daemon must not be running when the errdead command is run]
ex. To capture error log info from a dump image that resides in the /dev/hd7 file
-->/usr/lib/errdead /dev/hd7
*Redirecting syslog messages to error log
*Commands for manipulating error messages
errinstall
errupdate
errmsg
ras_logger

D)The system log configuration
/etc/syslog.conf file controls the behaviour of the syslog daemon. syslogd uses /etc/syslog.conf
file to determine where to send the error messages or how to react to different system events.

-The  /etc/syslog.pid file contains the process ID of the running syslogd daemon.

+Format of the configuration file /etc/syslog.conf
There are 3 parts:
-Facilities (which application)
-Priorities (seriousness)
-Destinations (send to whom)
Facilities -- kern-kernel
        user-user
        mail-mail
        daemon, auth, syslog, lpr, news, uucp

Priorities -- message priority
emerg-emergency conditions, broadcast to all users
alert-conditions, such as hardware errors, that need immediate attention
crit-critical conditions, such as improper login attempts
err-errors, such as an unsuccessful disk write
warning-abnormal but recoverable conditions
notice-important informational messages
info-informational messages
debug-debugging messages
none-excludes the selected facility

Destinations--
file Name - Full path name of file opened in append mode.
Host - Host name, start by @

User-- Usernames
*=All users

+Using the system log
After customizing /etc/syslog.conf file
restart syslogd daemon
--> stopsrc -s syslogd
--> startsrc -s syslogd
few eg.
1)To log all mail facility messages at the debug level to the /tmp/mailsyslog file
(facility)mail.(priority)debug  (destination)/tmp/mailsyslog

2)To send all system messages except those from the mail facility to a host named barney
(facilities)*.debug;(facilities)mail.none  (destination)@barney

3)To send messages at the emerg priority level from all facilities and messages at the crit priority level and above from the mail and daemon facilities to users joey and ross
(facility)*.(priority)emerg;(facilities)mail,daemon.(priority)crit  (destination)joey,ross

4)To send all mail facility messages to all users' terminal screens
(facility)mail.(priority)debug  (destination)*
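
Put together, the four examples above might look like the following /etc/syslog.conf lines (a sketch; on AIX the destination file usually has to exist already, e.g. touch /tmp/mailsyslog, before syslogd is restarted):
mail.debug                 /tmp/mailsyslog
*.debug;mail.none          @barney
*.emerg;mail,daemon.crit   joey,ross
mail.debug                 *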

E)Performance tools overview
1) vmstat
Reports statistics about kernel threads, virtual memory, disks,traps,& cpu activity
Used to balance system load activity
-->vmstat
summary of the virtual memory activity since system startup

+Display five  summaries at 1 second interval
-->vmstat 1(interval) 5(reports)
(the first report shows the summary since system startup; the remaining reports cover each interval)

+Display the count of various events
-->vmstat -s

+To display 5 summaries for hdisk0 and hdisk1 at 2 seconds interval
-->vmstat hdisk0 hdisk1 2 5

+Number of forks,since system startup
-->vmstat -f

2)sar
Collects ,report,saves system activity information.
+To report current activity every two seconds, five times
-->sar 2 5

+To report activity for the first two processors, every second, five times
-->sar -u -P 0,1 1 5 (processors 0 and 1, 1-second interval, 5 reports)

3)topas

Vital statistics about the activity on the local system on a character terminal.
It extracts and displays statistics for a system with a default interval of two  seconds
also
-Overall system statistics
-List of Busiest processes
-WLM statistics
The bos.perf.tools and perfagent.tools filesets must be installed on the system to run topas

Parameters shown by topas
-cpu utilization,
usage,by user+systems
wait,idle
-Network interfaces
List of NIC, throughput, data received ,data transmitted
-Physical disks
list,Busy%, kBPS,TPS
-WLM classes
-Processes
Name,id, util-cpu, PS speed
-Events/queues
-File/TTY
-Paging
-Memory
-P.S.
-NFS
-->topas -P
busiest processes
-->topas -D
disk metrics
-->topas -i5 -n0 -p10
view top 10 processes in use while not displaying any network interface  statistics, in 5 seconds intervals

*svmon-
 Captures and analyzes a snapshot of virtual memory; shows current memory usage and helps detect memory leaks
-->svmon -P pid -i 1 3

*netstat
--> netstat -i
Verify the status of all network interfaces
-->netstat -in
MAC+IP
-->netstat -rn
Routing table
-->netstat -Cn
Display route costs if you have multiple routes having different costs to the same destination.
-->netstat -in
MTU size
-->netstat -m
Statistics on the memory buffers (mbufs) the kernel uses for communication purposes
-->netstat -v ent0|more
device driver info
-->netstat -s
statistics for all protocols icmp,udp,tcp,igmp,ip
-->netstat -p icmp/ip
about particular protocol
-->netstat -a (-an also)
for all sockets opened on your system

5)iostat
Reports CPU statistics, asynchronous I/O (AIO) statistics, and I/O statistics for the entire system, adapters, TTY devices, disks, and CD-ROMs
Use iostat when
-Performance problems
-After hardware and software changes to the disk subsystem
-After change to attributes of vg,lv,fs
-After change to OS
-After change to Application

To determine if a physical disk has become a performance bottleneck
-->iostat -T -d 1 60
Monitors disk activity for 60 seconds
check %tm_act & kbps

To display more detailed statistics about a disk, artificially create disk activity on hdisk0 and then generate 10 performance reports at 2-second intervals
-->dd if=/dev/hdisk0 of=/dev/null
-->iostat -D hdisk0 2 10

Display cpu utilization
-->iostat -T -t 1 60
Monitor cpu activity for 60 seconds

AIO utilization
-->iostat -A

List mounted fs
-->iostat -AQ

Adapter utilization
-->iostat -a 1 10 | more
-->iostat -a -D | more

6)Procmon tool(Graphical)
Allows you to view and manage the processes running on a system
Default refresh 5 seconds
For each process: priority, nice value, how long it has been running, how much CPU it is using, how much memory it is using, how much I/O it is performing, and who created it
Must install below filesets
-bos.perf.gtools
-->./opt/perfwb/procmon/procmon/
-->tty
terminal number

To wait for a process to finish and display its status, use procwait
-->find / -type f > /dev/null 2>&1 &
-->procwait -v $!

F)Tuning using the /etc/tunables files
/etc/tunables/nextboot - tunable values to be applied at the next boot
/etc/tunables/lastboot - tunable values that were actually set at the last boot
/etc/tunables/lastboot.log - messages logged while the last boot values were applied
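
A minimal sketch of how a tunable change flows through these files (minperm% is just an example vmo tunable):
-->vmo -o minperm%                     # query the current value
-->vmo -p -o minperm%=10               # change it now and record it in /etc/tunables/nextboot
-->tuncheck -f /etc/tunables/nextboot  # validate the nextboot file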

G)Documenting a system configuration
Listing device attributes
-->lsattr -El ent0
Display status location code for all disk devices
-->lsdev -Cc disk
Display the characteristics and capabilities of hot plug PCI slots
-->lsslot -c pci
Display system machine type,serial number
-->lscfg -vp | grep -ip cabinets
Display info about H/W ,S/W
-->prtconf
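
These commands can be combined into a small script that snapshots the configuration into one file; a sketch (the output path is an assumed example):
#!/usr/bin/ksh
OUT=/tmp/config.$(hostname).$(date +%Y%m%d)
{
  prtconf                        # hardware and software overview
  lsdev -Cc disk                 # disk devices and location codes
  lscfg -vp | grep -ip cabinets  # machine type and serial number
  lsvg -o | lsvg -il             # active volume groups and their logical volumes
  lsfs -a                        # defined filesystems
} > $OUT 2>&1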

FILE SYSTEMS


*File system types: JFS, Enhanced JFS (JFS2), NFS, CD-ROM FS.

A)File system structure
>Superblock
Contains control info about a filesystem
-overall size of fs in 512 byte blocks.
-filesystem name
-filesystem log device - the version number
-number of inodes
-free inodes
-free data blocks
-date & time of creation
-filesystem state

All this data is stored in the first logical block of the filesystem. Corruption of this data may render the filesystem unusable. This is why the system keeps a second copy of the superblock in logical block 31, which can be copied back over the damaged primary copy:
-->dd count=1 bs=4k skip=31 seek=1 if=/dev/hdn of=/dev/hdn

>Allocation group
Allocation group consists of inodes & its corresponding data blocks.

>inodes
Contains information about the file
-type
-size
-owner
-date & time when the file was created, modified, or last accessed
-Contains pointers to data blocks that  store the actual data of the file.

-Every file has a corresponding inode

For a JFS filesystem, the maximum number of inodes, and hence the maximum number of files, is determined by the nbpi value (number of bytes per inode), which is specified when the filesystem is created. For every nbpi bytes of your filesystem, one inode is created.
The total number of inodes is fixed.
The nbpi values needs to be correlated with allocation group size.

JFS restricts all filesystems to 16 M (2^24) inodes.
JFS2 file system manages the necessary space for inodes dynamically so there is no need of any nbpi parameter.

>Data blocks
Data blocks store the actual data of the file or pointers to other data blocks. The default disk block size is 4 KB.

>Fragments -for jfs filesystems only
Fragments of logical blocks can be used to support files smaller than the standard size of the logical block(4KB). This rule applies only to the last block of a file smaller than 32kB.
Also you have the option to use compression to allow all logical blocks of a file to be stored as a sequence of contiguous fragments.
These features can be useful to support a large number of small files. The fragment size must be specified when the filesystem is created.
Different filesystems can have different fragment sizes.
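
A hedged example of creating a JFS filesystem tuned for many small files (the volume group, size and mount point are illustrative; frag and nbpi are the standard JFS attributes of crfs):
-->crfs -v jfs -g testvg -a size=16M -a frag=512 -a nbpi=2048 -m /smallfiles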

>Device logs
The journaled filesystem log stores transactional information about  file system metadata changes.
Data from data blocks is not journaled. Log devices ensure filesystem integrity, not data integrity.
This data can be used to roll back incomplete operations if the machine crashes.
JFS-jfslog
JFS2-jfs2log

After the operating system is installed, all file systems within the rootvg use logical volume hd8 as a common log.
You can create a JFS2 filesystem that can use inline logs. This means the log data is written into  the same logical volume as the  filesystem and not into the log logical volume.
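
As a sketch (names and size are illustrative), a JFS2 filesystem with an inline log can be created by passing logname=INLINE to crfs, so that no separate log logical volume is used:
-->crfs -v jfs2 -g testvg -a size=64M -a logname=INLINE -m /inlinefs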

B)Filesystem differences
Function                         JFS               JFS2
Maximum filesystem size          1TB               4PB
Maximum file size                64GB              4PB
Number of inodes                 fixed             dynamic
Inode size                       128 bytes         512 bytes
Fragment size                    512 bytes         512 bytes
Block size                       4096 bytes        4096 bytes
Directory organization           linear            B-tree
Compression                      yes               no
JFS log                          external (hd8)    external or inline
Default ownership at creation    sys.sys           root.system
SGID of default file mode        sgid=on           sgid=off
Quotas                           yes               yes
File system shrink               not possible      possible (5.3+)

*If you have to migrate data from a JFS filesystem to a JFS2 filesystem, you have to back up the JFS filesystem and restore the data onto the JFS2 filesystem.
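
A minimal sketch of such a migration using backup by name and restore (the paths, size and the temporary archive file are illustrative; in practice the archive often goes to tape):
-->cd /oldjfs ; find . -print | backup -i -q -f /tmp/oldjfs.bkp
-->crfs -v jfs2 -g testvg -a size=512M -m /newjfs2 ; mount /newjfs2
-->cd /newjfs2 ; restore -x -q -f /tmp/oldjfs.bkp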

C)Filesystem management
>Create a filesystem
1)-->crfs -v jfs -g testvg -a size=10M -m /fs1
creates within volume group testvg,
jfs-filesystem-10MB
mount point =/fs1
If there is no existing JFS logical volume, it will be created now.
If there is no existing JFS log device, the system will create it now.

2)-->crfs -v jfs2 -g testvg -a size=10M -p ro -m /fs2
in testvg
jfs2 filesystem of 10MB
mounting point /fs2
permission -read only

If there is no jfs2 logical volume, it will be created now.
-->grep -p /fs1 /etc/filesystems    (AIX grep -p prints the whole stanza; repeat for /fs2)
/fs1
dev=/dev/lv00
vfs=jfs
log=/dev/loglv00
mount=false    (don't mount at reboot)
account=false

3)Use crfs
-->lsvg -l testvg
The logical volume testlv (type jfs2) already exists but is not associated with any filesystem; size = 128MB, 1 PP.
-There is also a jfs2 log device defined that is not attached to any filesystem.
So, using these existing components, we create a jfs2 filesystem located on the existing logical volume testlv, using the jfs2 log device loglv01 and /test as the mount point.
-->crfs -v jfs2 -d /dev/testlv -a logname=loglv01 -m /test -a size=130M
Although we specified a size for the filesystem bigger than the logical volume itself, the size parameter is ignored and the final size of the filesystem is rounded to the size of the logical volume.

>Mounting and unmounting fs
Mounting is the only way a filesystem is made accessible.
When a filesystem is mounted over a directory, the permissions of the root directory of the mounted filesystem take precedence over the permissions of the mount point.
In other words, the permissions of the mounted filesystem automatically apply to the mount point directory.
-->mount /dev/fslv02 /test
-->umount /test

+Display mounted filesystems using the mount command
-->mount

+Display the characteristics of filesystems
-->lsfs -a
-->lsfs -q

>Removing a filesystem
Unmount the filesystem before deletion. The rmfs command deletes the corresponding stanza from /etc/filesystems and the logical volume on which the filesystem resides.
-->rmfs /test
error: returned if the filesystem is still mounted

-->umount /test
-->rmfs /test
-->cat /etc/filesystems | grep test

>Changing the attributes of a filesystem
Use the chfs command to change attributes of a filesystem, such as
        -mount point permission
        -log device
        -size
-->lsfs -a
/dev/fslv00    --/fs2    jfs2    243322    ro no

-->chfs -a size=250M -p rw /fs2
filesystem size changed to 512M

If the new size for the filesystem is larger than the size of the logical volume, the logical volume will be extended to accommodate the filesystem, provided that it does not exceed the maximum number of logical partitions.
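
For example (sizes illustrative), chfs can grow a filesystem by a relative amount, and for JFS2 on AIX 5.3 and later it can also shrink one, using + or - in front of the size:
-->chfs -a size=+100M /fs2    (grow /fs2 by 100 MB)
-->chfs -a size=-50M /fs2     (shrink the JFS2 filesystem /fs2 by 50 MB)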

>Checking filesystem consistency
-fsck command checks filesystem consistency & interactively repairs the filesystem
**Do not run fsck command on mounted filesystem.
-you must be able to read the device file on which the filesystem resides.
-fsck command tries to repair filesystem metadata structure, display information about inconsistencies, prompts you for permission to repair them.
-fsck does not recover the data from datablocks,
If you lost data, you have to restore it from a backup.
-When the system boots, the fsck command is called to verify the /, /usr, /var and /tmp filesystems.
An unsuccessful result prevents the system from booting.
--At boot time
-->fsck -f / /usr /var /tmp
Checks and repairs filesystem metadata
Does not recover data

>Log Devices
+Creating log devices
When the size of your filesystem is increasing, you should consider either increasing the size of the default log or creating new log devices.
Use the mklv command and specify the logical volume type, jfslog or jfs2log (see the sketch below).
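
A minimal sketch (VG and LV names are illustrative) of creating an additional JFS2 log device of one partition; it still has to be initialized with logform, as shown next:
-->mklv -t jfs2log -y loglv02 testvg 1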

+Initializing log devices
The logform command initializes a log device (jfslog, jfs2log, or inline log) by clearing all of its log records.

The logform command does not affect the data itself.
To initialize the jfs2log device named loglv01
-->logform /dev/loglv01

D) Defragmenting a filesystem
The use of fragments and compression, as well as the creation and deletion of a large number of files, can decrease the amount of contiguous free disk space.
-->defragfs /home
To improve the status of contiguous space within a filesystem.
E)Displaying info. about inodes
-->istat filename
-->istat /etc/passwd

F)Troubleshooting filesystem problems
>Superblock errors Recovery
Errors: fsck : not an aix3 fs
    fsck: not an aix4 fs
    fsck : not a recognized filesystem type
    mount:invalid argument
Solution: Restore the backup copy of the superblock over the primary superblock
dd count=1 bs=4k skip=31 seek=1 if=/dev/lv00 of=/dev/lv00

-->fsck -f /dev/lv00
If the problem persists, recreate the filesystem and restore the data from a backup.

>Cannot unmount filesystems
A filesystem cannot be unmounted if any references are still active within that filesystem.
The following situations can leave open references to a mounted filesystem.
+files are open within a filesystem
-->fuser /fs
Shows the running processes within fs

-->kill PID

+Running kernel extension
-->genkex
report on all loaded kernel extensions

+Filesystems are still mounted within that file system
-Unmount all the filesystems that are mounted within the filesystem to be unmounted

+A user is using a directory within the filesystems as their cwd.
-->find /home -type d -exec fuser -u {} \;
/home/prashant: 3548c(prashant)
fuser appends the letter "c" to the process IDs of all processes that are using a directory as their current working directory. The -u flag shows the owner of each process.

>Full filesystems
-->df
-->du
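
A few hedged examples of narrowing down what is filling a filesystem (the /home paths are illustrative):
-->df -k /home                             (used and free space in 1 KB blocks)
-->du -sk /home/* | sort -n | tail -5      (the five largest directories under /home)
-->find /home -xdev -size +2048 -exec ls -l {} \;    (files larger than about 1 MB, staying within this filesystem)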




Disk Storage Management

 Disk Storage Management

*Storage Management Concepts

+VGDA-There is at least one VGDA per physical volume. Information from VGDA's of all disks that are part of the same volume group must be identical.
VGDA location on the disk depends on the type of VG(original, big,scalable)

+VGSA -Volume group status area is used to describe the state of all physical partitions from all physical volumes within a volume group.
The VGSA indicates whether a physical partition contains accurate or stale information.
VGSA is used for monitoring and maintaining data copies synchronization.

+LVCB - A logical volume control block.
Contains important information about the logical volume, such as the number of logical partitions or the disk allocation policy. Its location on the disk depends on the type of volume group it belongs to.
For standard volume groups, the LVCB resides on the first block of user data within the LV. For Big VG there is additional LVCB info in VGDA on the disk.
For scalable VG, all relevant lv control information is kept in the VGDA as  part of the LVCB information area and the LV entry area.

VGDA- info about VG, No. of PV, status, Properties of all lv & pv

VGSA- All about pps, total pp in VG in every pv

LVCB-info about lv,lp

VG TYPE     Max. PV   Max. LV   Max. PP per VG        Max. PP size
Normal      32        256       32512 (1016 x 32)     1GB
Big VG      128       512       130048 (1016 x 128)   1GB
Scalable    1024      4096      2097152               128GB


*Physical Volumes
+For each disk, two device drivers will be created under /dev directory
-Block device driver
-Character device driver

+Disk driver has 32-bit PVID
-->lspv
Displays all physical volumes

+PVID
The following command changes an available disk device to a physical volume by assigning a PVID
-->chdev -l hdisk7 -a pv=yes
This command has no effect if the disk is already a physical volume.

>The following command clears the PVID from the physical volume
-->chdev -l hdisk7 -a pv=clear
(chdev modifies an existing device; -l specifies the logical name of the device whose properties are being changed; -a specifies the attribute=value pair)

*Listing info about physical volumes
-->lspv hdisk2
MAX REQUEST - 256 KB - the LTG size of the PV
VG DESCRIPTORS - 2 - the number of VGDAs located on this PV (the output also shows total, used and free PPs)

-->lspv -l hdisk0
Display all lv on hdisk0

-->lspv -p hdisk0
Displays the allocation of PPs to logical volumes and the position of the PPs on the disk (outer edge/outer middle/center/inner middle/inner edge)

-->lspv -M hdisk0 | more
Displays a detailed map of the disk layout and the relationship between each PP and its LP

-->lsvg -M hdisk0
-->lslv -m lv1
Display the numbers of logical partitions & their corresponding physical partitions.

*Changing the allocation permission for a physical volume
1)We can disable the partition allocation for a physical volume , so no lv can be created on it.
-->chpv -an hdisk2
(no allocation)

-->lspv hdisk2
ALLOCATABLE : no

-->mklv -y test -t jfs2 testvg 10 hdisk2
error: PV hdisk2 is not allocatable

-->chpv -ay hdisk2
Turn on the allocation permission

*Changing the availability of a PV
-->lsvg testvg
vg active
pv -2 active
vgda 3

-->lsvg  -p testvg
hdisk2 active
hdisk3 active

-->lspv hdisk3
active, vgda 2

-->lspv hdisk2
active, vgda1

-->chpv -vr hdisk3
Makes hdisk3 unavailable

-->lspv hdisk3
Confirms that hdisk3 is in the removed state and does not have any VGDA on it (NO VGDA)

-->lspv hdisk2
hdisk2-Active
vgda-2(Because any vg must contain at least one vgda)

-->lsvg -p testvg
hdisk3 has been removed

-->lsvg testvg
Shows the VG is still active, one PV of the two is active, and the total number of VGDAs has changed to 2.

-->chpv -va hdisk3
Makes hdisk3 available again.

-->lspv hdisk3
shows that hdisk3 is active & contains only one vgda.

-->lsvg -p testvg
Confirms that both disks are now active


*Before changing the availability of any physical volume, you have to close any logical volume residing on that disk and ensure that the vg meets quorum requirements after the disk is removed[close filesystem, check quorum]

*To clear the boot record located on physical volume hdisk1
-->chpv -c hdisk1

*Declaring a physical volume hot spare
A hot spare disk must already be part of the VG and should be at least as large as the smallest disk already in the VG.

To define hdisk3 as a hot  spare
-->chpv -y hdisk3

To remove hdisk3 from the hot spare pool of its VG,
-->chpv -hn hdisk3

*Migrating data from physical volumes
PP located on a pv can be moved to one or more physical volumes contained in the same vg
-->lsvg -p rootvg

-->lspv -M hdisk1

-->lspv -M hdisk5

-->migratepv hdisk1 hdisk5

-->lspv -M hdisk1

-->chpv -c hdisk1
If hdisk1 contained a boot image, it has now been transferred to hdisk5, so clear the boot record from hdisk1.

-->lspv -M hdisk5
If you migrate data from a physical volume that contains a boot image, you should also update the boot list.
It is possible to migrate only data from partitions that  belong to a specific logical volume. To migrate only physical partitions that belong to logical volume testlv from hdisk1 to hdisk5
-->migratepv -l testlv hdisk1 hdisk5

*Migrating partitions
Migrating a partition to another partition on a different physical volume
-->lspv -M hdisk1

-->lspv -M hdisk5
-->migratelp testlv/1/2 hdisk5/123
Migrates the second copy of logical partition number 1 of testlv to physical partition 123 on hdisk5

-->lspv -M hdisk1
Map of all pp located on hdisk1

*Finding the LTG size
The Logical Track Group (LTG) is the maximum allowed transfer size for an I/O disk operation.
-->lquerypv -M hdiskn
Find LTG size

LTG value will be equal to the minimum of the transfer size of disks that are part of the vg.

Changing LTG
-->chvg -L 128 testvg

*Volume Groups
All pv are divided in pp's having the same size
a)Creating a volume group
For each vg,  2 device driver files are created under directory /dev. Both files will have the major device number equal to the major number of the vg
i)Creating an original volume group
-->mkvg -y vg1 -s 64 -V 99 hdisk4    (-s: PP size in MB, -V: major number)
mkvg command will automatically vary on the newly created volume group by calling the varyonvg command.

ii)List all vg known to a system
-->lsvg

iii)List all active vg
-->lsvg -o

iv)Details of particular vg
-->lsvg testvg

v)Display lv contained in vg
-->lsvg -l rootvg

vi)When investigating LVM metadata corruption, use the following to obtain information about a volume group as read from the VGDA located on a specific disk.
-->lsvg -n vgname

*Changing VG characteristics
1)Auto varyon flag set to yes
-->chvg -ay vgname

2)Auto varyon no
-->chvg -an vgname

3)Quorum
This attribute determines whether the VG will be varied off or not after losing the simple majority of its physical volumes.
a)Turn off the quorum
-->chvg -Qn testvg

b)Turn on Quorum
-->chvg -Qy testvg

4)Maximum number of Physical partitions per physical volume
You can change  the maximum number of physical partitions per pv
-->lsvg  testvg
MAX PPs per vg 32512
max pp per pv 5080

-->chvg -t 16 testvg
testvg changed. With these characteristics, testvg can include up to 2 PVs, each with up to 16256 PPs.

-->lsvg testvg
max pp per vg 32512
max pv 2
max pp per pv 16256

T factor table
No. of disks        Max number of PPs/disk
1                   32512
2                   16256
4                   8128
8                   4064
16                  2032
32                  1016
32            1016

5)Changing a vg format
Once a vg has been converted to a scalable format, it cannot be changed into a different format. Before changing the format of a vg you must varyoff the vg
-->lsvg tablet
max pp per vg 32512
max pv 32
max pp per pv 1016

-->varyoffvg tablet
-->chvg -G tablet
-->varyonvg tablet
-->lsvg tablet

max pp per vg 32768
max pv 1024
The maximum number of physical partitions is no longer defined on a per disk basis, but rather applies to the entire vg. As a consequence the lsvg command will no longer display the max number of physical partition per pv for scalable vgs.

6)Changing the hot spare policy
a)Displays physical volumes that  are part of testvg
-->lsvg -p testvg
b)Designate hdisk4 as a hot spare
-->chpv -hy hdisk4
c)Change the migrate policy of the vg to migrate data from a failing disk to one spare disk
-->chvg -hy testvg
d)Change the migrate policy of the vg to migrate data from a failing disk to the entire pool of spare disks
-->chvg -hY testvg

7)Changing the synchronization policy
Automatic Synchronization of stale partitions within the vg. This option has significance only for partitions that correspond to mirrored logical volumes
-->chvg -sy testvg
AUTO SYNC: yes

-->lsvg testvg

8)Changing the max. number of pp in vg
-->lsvg testvg
max pp per vg 32768

-->chvg -P testvg
max pp per vg 2097152

9)Changing the maximum number of logical volumes
-->lsvg testvg
max lv 256

-->chvg -v 4096 testvg
maxlv 4096

10)Unlocking a VG after an abnormal termination of an LVM operation
-->chvg -u vgname
Removes the lock

11)Extending a Vg
Extend a VG by adding a PV. Before adding a new disk, you have to make the disk available.
-->extendvg testvg hdisk7
Assigns a PVID to hdisk7 and adds it to the VG

-->extendvg testvg hdisk4
Fails because hdisk4 appears to belong to another volume group that is not varied on, and asks the user to use the force flag.

-->extendvg  -f testvg hdisk4
Forcibly adds hdisk4 to vg.

Also, you cannot add a PV that belongs to another VG which is currently varied on.

12)Reducing a VG
The VG must be varied on when you remove the last physical volume from it; if the VG is not varied on, the PV can still be removed forcefully, as seen above. All logical volumes residing on the disk to be removed have to be closed first; if an LV spans multiple PVs, removing one of them corrupts the LV.
If there are logical volumes on the disk, the disk cannot be removed without unmounting/closing them first.
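
A hedged sketch of the commands involved (disk and VG names are illustrative): reducevg removes a PV from a VG, and the -d and -f flags force deallocation of any LVs on it, which is exactly the destructive case warned about above:
-->reducevg testvg hdisk4          (remove an empty disk from the VG)
-->reducevg -d -f testvg hdisk4    (force removal, deleting any LV partitions still on hdisk4)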

13)Resynchronizing the device configuration database
synclvodm is used to synchronize and rebuild information from the ODM, the device files, and the LVM metadata structures (VGDA, LVCB).
To synchronize odm to contain the latest LVM information for vg testvg
-->synclvodm testvg    (use with caution)

14)Exporting a volume group
The exportvg command only removes the VG definition from the ODM and does not delete any data from the physical disks. It clears the stanzas from /etc/filesystems that correspond to the logical volumes contained in the exported volume group, but it will not delete the mount points. You cannot export a VG that contains an active paging space.
-->exportvg testvg
There are situations when all data from a vg needs to be moved from one system to another system. You will need to delete any reference to that data from the originating system.

15)Importing a vg
Importing a vg means recreating the reference to the vg data and making that  data available
ex.
-->importvg -y testvg hdisk7
Imports the VG testvg using hdisk7
The importvg  command reads the VGDA of one of the pv that is part of vg.
It uses redefinevg to find all other disks that belong to the VG. It adds the corresponding entries into the ODM database and updates /etc/filesystems with the new values (if possible) for the new LVs and their mount points.
If a VG with the same name is already present, the command will fail, so change the name.

imported lvs name is conflicted with other lv then importvg command will automatically assign system default names to those that have been imported & send an error message

-->lsvg -l test2vg
Now suppose we try to import another VG that is also named test2vg; the command would fail, so we import it under a different name.

-->importvg -y test1vg hdisk5
Then run fsck on its filesystems
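
A minimal sketch of moving a whole VG between systems with exportvg/importvg (the VG, filesystem and disk names are illustrative):
On the source system:
-->umount /test ; varyoffvg testvg ; exportvg testvg
(physically move or re-zone the disks to the target system)
On the target system:
-->cfgmgr
-->importvg -y testvg hdisk7
-->mount /test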

16)Varying on a VG
An already defined vg can be activated using the varyonvg
Steps in varyonvg includes
-varyonvg command will open corresponding file from /etc/vg  to obtain a lock for the vg.

-All VGDAs are read and cross-checked, and the VGDA with the latest time stamp is taken as the reference point.

-If the majority of PVs are not accessible, the varyon fails; you will need to vary on forcibly.

-LVM info on all pv are updated with latest info about all pv status

-All pv are updated to contain latest consistent copy of vgda.

-The LVM device driver is updated with the latest info about the VG.

-The syncvg command is called to synchronize stale partitions, if any.

-->varyonvg -f
Forcibly varies on, but with no guarantee of data integrity; use only in an emergency.

-->varyonvg -n
Varies on without automatically synchronizing stale partitions; useful when we need to control disk synchronization manually in case of a disk problem.
ex.
1)-->lsvg testvg
pv 3 active
quorum active

2)Physically remove hdisk7

3)-->varyoffvg testvg
4)-->varyonvg  testvg
With the above two commands, the system learns that hdisk7 is missing

5)-->chvg -Qn testvg
Disable the quorum for testvg(otherwise vg will not restart again)

6)varyoff testvg

7)varyonvg testvg
fails to activate the vg because one of the pv hdisk7 is missing.

8)-->varyonvg -f testvg
forcibly activates vg and declares hdisk7 is missing.

9)-->chvg -Qy testvg
Activate the quorum

10)We physically remove hdisk6 from the system and varyoff vg testvg

11)-->varyonvg testvg
fails, because there are not enough active physical volumes to meet the quorum.

12)-->varyonvg -f testvg
Forcibly activates the vg and puts hdisk6 & hdisk7 in the removed status.

*varyonvg summary:
1)latest VGDA selected as reference point
2)all PVs updated with the latest VGDA
3)LVM device driver updated
4)syncvg takes place
5)a majority of PVs must be available

13)Varying off a vg
Deactivates the VG and the LVs in it. All LVs must be closed, which requires that all filesystems associated with the logical volumes be unmounted.
-lv close
-fs unmount

-->varyoffvg testvg
-->lsvg -l testvg
-->lsvg -o

14)Reorganizing a vg
Reorganizes the PPs within a VG. The PPs will be rearranged on the disks according to the intra-physical volume and inter-physical volume allocation policies of each LV.
Requirements: the VG must have at least one free partition, and the relocatable flag of each LV that you want to reorganize must be set
-vg have one free partition
-lv flags set.

-->lslv -l lv1
-->reorgvg test1vg
-->lslv -l lv1

>To reorganize only logical volumes lv1& lv2
from vg testvg
-->reorgvg testvg lv1 lv2

>To reorganize only the partitions located on physical volumes hdisk6 and hdisk7 that belong to logical volumes lv1 & lv2 from vg testvg
-->echo "hdisk6 hdisk7" | reorgvg -i testvg lv1 lv2

15)Synchronizing a VG
Synchronizes stale partitions; this can be time consuming, depending on hardware characteristics and the total amount of data.
-f forces the sync: an uncorrupted physical copy is chosen and propagated to all other copies of the LP, whether or not they are stale.

>To sync. copies located on pv hdisk4 & hdisk5
-->syncvg -p hdisk4 hdisk5

>To sync all pp from vg testvg
-->syncvg -v testvg

16)Mirroring a vg
Mirror all logical volumes within a vg
a)Extend rootvg to contain a second pv
-->extendvg rootvg hdisk1

b)Create a copy for each lv within rootvg
-->mirrorvg rootvg

Quorum is disabled because all LVs are mirrored.
A new BLV must be created on the newly added disk:
-->bosboot -ad /dev/hdisk1

-->bootlist -m normal hdisk0 hdisk1
Disk included in bootlist

-->shutdown -Fr
reboot

17)Splitting of a VG
To split  a copy of a mirrored vg into a snapshot vg.
To split
-All lv in vg must have a mirror copy
-Mirror must be located on a disk or a set of disks that contain only this set of mirrors.

Splitting must not be used on a VG with active paging spaces.

 (original disk)p----|----p(new disk)

-->splitvg -y newvg -c 1 test1vg
Splits test1vg and creates a snapshot VG named newvg
The original vg will stop using  the disks that are part of the newvg.

New lv & new mounting points will be created in new (snapshot)  vg.
Both vg  will monitor changes of any physical partition so that when the new snapshot vg is rejoined with the original vg, the data will remain consistent.

To rejoin the two copies of the vg test1vg
-->joinvg test1vg

*Managing logical volumes
Each LP has at least one and at most 3 corresponding PPs (mirror copies), which can be located on different physical volumes
each lv --/dev/char file
    --/dev/block file

1)Creating a logical volume
-->mklv -y lv3 -t jfs2 -a im testvg 10 hdisk5

2)Delete lv
-->rmlv -p hdisk7 lv1
(close the LV / unmount the filesystem first)

3)Listing info about lv
-->lslv lv1
-->lslv -l lv1
-->lslv -m lv1  -no. of lp & corresponding pp
-->lslv -n hdisk6 testlv
Info about the LV, read from the VGDA located on hdisk6

-->getlvcb -AT lv1
Displays the LVCB of the LV
-LV creation time
-LV modification time

4)Increasing the size of a lv
-->extendlv lv1 3 hdisk5 hdisk6
increase lv1 with 3 lp located on hdisk5, hdisk6

5)Copying a lv
-->cplv -v dumpvg -y lv8 lv1 (source)
To copy logical volume lv1 to the dumpvg vg under the name of lv8

6)Creating copies of lv
Create & synchronize one extra copy of each of the lp of lv lv1. Newly created copies will be located on hdisk7
-->mklvcopy -k lv1 3 hdisk7 &
Before there were 2 copies per LP; now there are 3.

-->mklvcopy -k lv1 2 hdisk5 &
Before there was 1 copy; now there are 2.

*Changing characteristics of lv
To change, for logical volume lv1, the maximum number of logical partitions to 1000 and the scheduling policy for I/O operations to parallel/round robin:
-->chlv -x 1000 -d pr lv1
-->chlv -x 25 lv1

*Splitting a logical volume
Use the splitlvcopy command to split an LV that has at least two copies of each LP into two different LVs. The newly created LV will have the same characteristics as the original.
It is recommended to close the LV to be split. If the original LV contains a filesystem, the data from the newly created LV will have to be accessed as a different filesystem
-->splitlvcopy -y copylv(new) testlv(original) 2

[-->crfs -v jfs2 -d /dev/copylv -m /copy
creates a new filesystem structure on copylv; this command will destroy any existing filesystem data]

instead
-->mkdir /copy
-->mount /dev/copylv /copy

-->vi /etc/filesystems
Manually add an entry for /copy mount point.

*Removing a copy of logical volumes
-->rmlvcopy testlv 2 hdisk6
Removes the copies located on hdisk6 and leaves two mirror copies (before, there were 3 copies).

1)-->lslv -m testlv
3lp hdisk5,6,7

2)-->lslv -m testlv
2lp hdisk5,7

3)-->rmlvcopy testlv 1 hdisk6
delete the one copy of testlv
from hdisk6
-->smit rmlvcopy

4)-->splitlvcopy -y newlv testlv 2
-->smit splitlvcopy


5)-->chlv -x 1000 -d pr lv1
Sets the max number of LPs to 1000 and the scheduling policy for I/O to parallel/round robin

-->smit chlv

6)-->mklvcopy -k lv1 3 hdisk7 &
Makes 3 copies of lv1; if previously there were 2 copies, a 3rd is created.

7)-->cplv -v dumpvg(destination vg) -y lv8(copy) lv1(original lv)

8)-->extendlv -a ie -e x lv1 3 hdisk5 hdisk6
Extend lv1 with 3 lp located on inner edges of both hdisk5 & hdisk6

9)-->getlvcb -AT lv1
Display LVCB

10)-->lslv -n hdisk6 testlv
info about lv testlv read from vgda located on hdisk6

11)-->rmlv -p hdisk7 lv1
Tries to delete partitions of lv1 located on hdisk7
12)-->rmlv lv7
deletes lv7

13)-->mklv -y lv4 -t sysdump -a c -e x -c3 -L demo-label -x5 test1vg 2 hdisk5 hdisk6 hdisk7
Creates within VG test1vg an LV named lv4, of type sysdump, with 2 LPs each having 3 copies, located in the center of the 3 disks, labeled demo-label, with a maximum of 5 LPs.

14)-->mklv -y lv3 -t jfs2 -a im test1vg 10 hdisk5
Creates lv3 in test1vg, of type jfs2, with 10 LPs on hdisk5

Configuration

 Configuration


3 components of the ODM
ODM - object classes (groups of similar objects)
Objects (a single record in an object class)
Descriptors (describe the object)

*ODM info is divided into 3 parts; this supports diskless/dataless systems
-->/usr/lib/objrepos
contains predefined object classes
-->/usr/share/lib/objrepos
-->/etc/objrepos
Customized devices object classes

*ODM commands
when no option left then only try odm commands
if used falsely the system might crash

-->odmadd
Adds object to an object class.

-->odmchange
Changes specific objects in a specified object class.

-->odmcreate
Creates empty object classes

-->odmdelete
Removes objects from an object class

-->odmdrop
Removes an entire object class

-->odmget
Retrieves objects from object classes and displays them in odmadd input format

-->odmshow
Description of an object class

*Examples of using the ODM

+Device configuration
Predefined Devices (PdDv) contains entries for all devices that can be supported by AIX.
-->odmget -q "type like lv* " PdDv
To search for all objects whose type starts with the letters lv

+Software vital product data
-->odmget lpp | head -30
To list, from the lpp class, the software installed on the system and show the first 30 lines of output

+LVM
The ODM maintains a copy of all data used by the LVM. Commands that affect the LVM are designed so that the data from the VGDAs located on the hard disks is always synchronized with the information stored in the ODM.
-->odmget -q name=hdisk0 CuAt

CuAt contains customized device specific attribute information
The command will find all attributes of hdisk0 from CuAt

*System Management Interface Tool
SMIT runs in two modes:
-ASCII (non-graphical): smitty or smit -a
-X-Window (graphical): smit or smit -m

*System Management Tasks
1)Software Installation & Maintenance
-Installing new Software
-Updating software
-installing  fixes
-listing installed software
-backing up
-restoring the system image

2)Software License Management
-Adding & Deleting node-locked licenses
-Adding & Removing Server Licenses
-Listing Licenses

3)Devices
-Adding , changing, showing, deleting physical & logical devices
-Configuring & unconfiguring devices
-Listing installed devices
-Managing PCI hot plugs

4)System Storage Management (Physical & Logical Storage)
-Managing logical volumes
-Volume groups
-Physical disks
-Paging Space
-Managing file systems
-Managing files & Directories
-Backing up & Restoring the system

5)Security & USer
-Managing user accounts & groups, passwords, login controls, & roles

6)Communications Applications & Services
-Configuring all installed communications options & applications, including TCP/IP, NFS server or client,NIS,DNS.
7)Print Spooling
8)Problem Determination
-Running hardware diagnostic
-Performing system traces
-initiating system dumps
-Printing error logs
-Verifying software installation and requisites.

9)Performance & Resource Scheduling
-Scheduling jobs
-Managing resource processes
-Configuring & enabling power management
-Configuring & using the WLM
-Running System Trace
-Reporting System Activity

10)System Environments
-Starting & Stopping the system
-Configuring & monitoring system environment parameters such as language, date, time, and the user interface; managing system logs, the remote reboot facility, and system hang detection.

11)Processes & subsystems
-Managing subsystems, processes, subservers.

*Linux Applications in AIX
The AIX Toolbox for Linux Applications CD that is shipped with your BOS contains the most commonly used open source applications that you can use with the AIX OS.

Your options for installing from this CD include
1)-->install_software
Install RPM packages from the AIX toolbox for linux application CD

2)-->geninstall

3)Installing a bundle
a)Installing RPM packages using SMIT
-->install_software
b)Install using the geninstall command
-->geninstall -d /dev/cd0 R:cdrecord R:mtools

*Install using the rpm command
To install the bundles required for the GNOME desktop and the bc application package from AIX toolbox for linux applications.
1)Insert the Aix toolbox cd in cd-rom
2)Mount the cd-rom
-->mount -v cdrfs -o ro /dev/cd0 /mnt
-v specifies the virtual file system type (cdrfs).
-o ro mounts the filesystem read-only.

3)-->cd /mnt/ezinstall/ppc

4)Install GNOME by using  following commands
-->rpm -Uhv ezinstall/ppc/base/*
-->rpm -Uhv ezinstall/ppc/desktop.base/*
-->rpm -Uhv ezinstall/ppc/gnome.base/*
-->rpm -Uhv ezinstall/ppc/gnome.apps/*
-U updates any earlier versions of each package that you might have on your system.

If rpm command returns an error , it is probably caused by one of the following.
1)Not enough space in your current filesystem.
2)Package is already installed , with same level, version

A script on the cd installs only those packages from a directory that are not already installed on your system.

-->/mnt/contrib/installmissing.sh ezinstall/ppc/desktop.base/*

3)Failed dependencies
*To use the Linux applications we have installed from commands and shells, we must add their location to PATH
-->print $PATH
/usr/bin:/etc/:/usr/sbin:/usr/ucb:/usr/bin/X11:/sbin

-->nl -?
-->export PATH=/usr/linux/bin:$PATH
-->print $PATH
/usr/linux/bin:/usr/bin:/etc:/usr/sbin:...

-->nl -?
Try '/usr/linux/bin/nl --help ' for more info

*Setting an alias
alias command = absolute path to command with options if any.

-->alias rm=/usr/linux/bin/rm

*Network File  System NFS
>NFS - distributed filesystem
>NFS allows users to access files & directories of remote servers as though they were local.
-You can use OS commands to create, remove, read, write & set file attributes for remote files & directories
>NFS is independent of machine types, operating systems, & network architectures because of its use of remote procedure calls (RPC) for these services.

*For a successful implementation of NFS you need the following
-NFS daemons should be running on the server and the clients
-The filesystems that need to be remotely available have to be exported.
-The exported file systems need to be mounted on the remote (client) systems.

*NFS Services

Major Services provided by NFS are
+Mount -This service is provided by /usr/sbin/rpc.mountd daemon on server
& /usr/sbin/mount command on the client
The mountd daemon is an RPC server that answers client requests to mount a filesystem. The mountd daemon provides a list of currently mounted filesystems & the clients on which they are mounted.

+Remote file access - This service is provided by
/usr/sbin/nfsd daemon on server &
/usr/sbin/biod daemon on client
The biod daemon runs on all NFS client systems; when a user on a client wants to read or write a file on a server, the biod daemon sends the request to the server.

*nfs daemons can be started using
-->smitty mknfs
-->mknfs -N

*-->startsrc -g nfs
start all of the NFS daemons
-->startsrc -s nfsd
Starts an individual NFS daemon

-->lssrc -g nfs
Verify that nfs is already running.

*Exporting NFS directories -using SMIT
1)-->lssrc -g nfs
biod
nfsd
rpc.mountd
rpc.statd
rpc.lockd
(all active)

smitty
-->smitty mknfsexp
-->exportfs -a
-->showmount -e server

command line
-->vi /etc/exports
-->/usr/sbin/exportfs -a

2)-->smitty mknfsexp
/etc/exports file will be updated

3)-->/usr/sbin/exportfs -a (on the server; sends all the information in the /etc/exports file to the kernel)

4)Verify that all file systems have been exported
-->showmount -e Myserver(server name)

*Exporting NFS directory using a text editor
-->vi /etc/exports
/home1
/home2
/home3

-->/usr/sbin/exportfs -a

*Exporting NFS directory temporarily
A file system can be exported when needed, & does not change the /etc/exports file
-->exportfs -i /dirname
The exportfs -i command specifies that the /etc/exports file is not to be checked for the specified directory, and all options are taken directly from the command line.

*Un-exporting an NFS directory
+Using SMIT
On the server: -->smitty rmnfsexp
Enter the name of the directory in the "PATHNAME of exported directory to be removed" field.
+Using text editor
1)-->vi /etc/exports
delete the line

2)If nfs is currently running 
-->exportfs -u dirname
where dirname is the full path name of the directory you just deleted from the /etc/exports file.

*Mounting an NFS directory
3 Types of mounts
a)Predefined
b)explicit
c)automatic

a. Predefined (by default) mounts are specified in the
/etc/filesystems
ex. NFS stanza in the /etc/filesystems file
/home1:
dev="/home1"
vfs=nfs
nodename=Myserver
mount=true
options=bg,hard,intr
account=false


Server start: /etc/rc.nfs --> exportfs --> /etc/exports --> rpc.mountd, nfsd start

Client: /etc/rc.nfs --> biod --> mount command --> /etc/filesystems --> binding

*NFS mounting process
When a client mounts a directory, it does not make a copy of that directory. Rather, the mounting process uses a series of remote procedure calls to enable a client to access the directories on the server transparently.
The following describes the mounting process.
1)When the server starts, /etc/rc.nfs script runs the exportfs command which reads the server /etc/exports file and then tells the kernel which directories are to be exported and which access restrictions they require

2)The rpc.mountd daemon & several nfsd daemons (8, by default) are then started by the /etc/rc.nfs script.

3)When the client starts, the /etc/rc.nfs script starts several biod daemons(8,bydefault), which forward client mount requests to the appropriate server.

4)Then  the /etc/rc.nfs script executes the mount command , which reads the file systems listed in the /etc/filesystem file.

5)The mount command locates one or more servers that export the information the client wants and sets up communication between itself & that server. This process is called binding.

6)The mount command then requests that one or more servers allow the client to access the directories in the client /etc/filesystems file.

7)The  server rpc.mountd daemon receives the client mount request. If the requested directory is available to that client, the rpc.mountd daemon sends the clients kernel an identifier called a file handle.

8)Client kernel then ties the file handle to the mount point.

*-/etc/rc.nfs runs exportfs
-nfsd, rpc.mountd started
-filehandle

/etc/rc.nfs
-biod starts

server binding client

*Establishing predefined NFS mounts
Mounts that are non-interruptible and running in the foreground can hang the client if the network or server is down when the client system starts up. If a client cannot access the network or server, the user must start the machine again in maintenance mode and edit the appropriate mount requests. So to avoid this situation, define the bg(background) and intr(interruptible) options in the /etc/filesystem file when establishing a predefined mount that is to be mounted during system startup.
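
The same options can also be supplied on an explicit command-line mount; a hedged example (server name and paths are illustrative):
-->mount -o bg,hard,intr Myserver:/home1 /home1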

To establish predefined mounts through SMIT client
-->smitty mknfsmnt
This method  creates an entry in the /etc/filesystems file for the desired mount and attempts the mount.

To establish the NFS default mounts by editing the /etc/filesystems file (use this method only under special circumstances)
1)-->vi /etc/filesystems
(client)
/home1:
dev=/home1    remote filesystem name
mount=false    NFS will not be mounted when the system boots
vfs=nfs
nodename=Myserver    the machine on which the remote filesystem resides
options=ro,soft
type=nfs_mount    [nfs_mount--the system attempts to mount  the /home1 filesystem like all defined in nfs_mount group]
(This stanza directs the system to mount the /home1 remote directory over the local mount point of the same name. The FS is mounted read-only; because it is mounted soft, an error is returned in the event the server does not respond.)

/home2:
dev=/home2
mount=true
vfs=nfs
nodename=Myserver
options=ro,soft,bg
type=nfs_mount

2)save & close

3)-->mount -a
mount all directories specified in the /etc/filesystems.
The NFS directory is now ready to use.

*Mounting an NFS directory explicitly
1)-->showmount -e Myserver
/home1
/home2

Verify that the NFS server has exported the directory
2)-->smit mknfsmnt (client)
3)Complete the process, that's it.

*Mounting an NFS directory automatically(how it works)
The automount command sends automatic mount configuration information to the AutoFS kernel extension and starts the automountd daemon.
The kernel extension then automatically and transparently mounts filesystems whenever a file or a directory within such a filesystem is opened. The extension informs the automountd daemon of mount & unmount requests, and the automountd daemon actually performs the requested service.
AutoFS allows filesystems to be mounted as needed. With this method of mounting directories, all the filesystems do not need to be mounted all of the time; only those being used are mounted.
for ex. to mount the /backup NFS directory automatically.

1)Verify that the NFS server has exported the directory by entering
-->showmount -e Myserver
/backup

2)Create an AutoFS map file. AutoFS will mount & unmount the directories specified in this map file. Sample map files can be found in /usr/samples/nfs (a minimal map sketch is shown after this list).

3)Ensure that the AutoFS kernel extension is loaded and the automountd daemon is running.
This can be accomplished in two ways.
a. Using SRC, enter
-->lssrc -s automountd
If it is not running, start it with
-->startsrc -s automountd
b.Using the automount command
Define the map file
-->/usr/sbin/automount -v /backup /tmp/mount.map
where /backup is the AutoFS mountpoint on the client . Now if a user runs the
-->cd /backup
the AutoFS kernel extension will call the automountd daemon, which will mount the /backup directory & then allow the cd command to complete.

4)To stop the automountd
-->stopsrc -s automountd
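
A minimal sketch of an indirect AutoFS map for the /backup mount point used above (the server name, keys and options are illustrative; each key becomes a subdirectory of /backup):
# /tmp/mount.map (illustrative)
data   -rw,hard,intr   Myserver:/backup/data
logs   -ro,soft        Myserver:/backup/logs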

*Changing an exported filesystem
This section explains how you can change an exported NFS directory.
a)Changing an exported NFS directory using SMIT
1)Un-export the filesystem on the server by entering
-->exportfs -u /dirname
/dirname is the name of the filesystem you want to change.

2)On the server
-->smitty chnfsexp
make changes

3)Reexport the filesystem
-->exportfs /dirname

b)Changing an exported NFS directory using a text editor
1)Unexport the filesystem
-->exportfs -u /dirname
2)--> vi /etc/exports
3)Make Changes
4)Reexport
-->exportfs /dirname

*Unmounting a mounted Filesystem
-->umount /directory


1)Start nfs daemon
-->smitty mknfs
-->mknfs -N
-->startsrc -g nfs
-->startsrc -s nfsd

2)Verify that NFS is already running
-->lssrc -g nfs

3)Exporting NFS directories using SMIT
-->smitty mknfsexp
-->exportfs -a
-->showmount -e servername

4)Exporting NFS directory using  command line
-->vi /etc/exports
-->/usr/sbin/exportfs -a

5)Un-exporting an NFS directory using smit
-->smitty rmnfsexp (server)

6)Unexport NFS dir using text editor
--> vi /etc/exports
-->exportfs -u /dirname(which we have deleted from /etc/exports )

7)3 types of mounts
1)Predefined
client -->/etc/filesystem
/home:
dev=/home
vfs=nfs
nodename=ServerName
mount=true
options=bg, hard, intr
account=false

-->smitty mknfsmnt
this will create above entries in /etc/filesystems

2)Explicit
-->smit mknfsmnt
completes the process

3)Automatically
-->lssrc -s automountd
-->startsrc -s automountd
-->/usr/sbin/automount -v /dirname /tmp/mount.map
-->stopsrc -s automountd
stop automountd

4)Default NFS mounts
-->vi /etc/filesystems (enter below stanza in this file)
/dirname:
dev=/dirname
mount=true/false
vfs=nfs
nodename=Servername
options=ro/soft/hard/intr/bg
type=nfs_mount

-->mount -a

8)Changing an exported filesystem /dir using SMIT
-->exportfs -u /dirname
unexport

-->smitty chnfsexp
-->exportfs /dirname

9)Network Configuration
Do not restart TCP/IP daemons, using the command
-->startsrc -g tcpip
It will start all subsystems defined in the ODM for the tcpip group, which includes both routed & gated.

*rc.tcpip
If this file is not run during boot, most TCP/IP daemons will not start. We can telnet and ftp to other hosts, but they cannot connect to us; others can still ping us.
-->telnet host
Remote host refuse connect operation

-->ftp host
Remote host  refuse connect  operation.

10)inetd daemon
The /usr/sbin/inetd daemon provides Internet service management for a network. This daemon invokes other daemons when they are needed.

*Starting and refreshing inetd
+If you change /etc/inetd.conf using SMIT, the inetd daemon is refreshed automatically and reads the new /etc/inetd.conf file.
+If you change the file using an editor,
run
-->refresh -s inetd
or
-->kill -1 inetdPID
this will refresh it.

*Subservers controlled by inted
ftpd,rlogind,rexecd, rshd, talkd, telnetd, uucpd -Started by default

tftpd, fingerd, comsat - not started by default

*Check details of subservers
-->lssrc -ls inetd

*/etc/services
This file contains information about the known services used on the Internet; it is read by inetd.
If you edit the /etc/services file, run
-->refresh -s inetd

*Stopping inetd
-->stopsrc -s inetd
When the inetd daemon is stopped, previously started subserver processes are not affected, but new subservers cannot be started.

*To check attributes of any interface
-->ifconfig lo0
-->lsattr -El lo0

*Portmap
Converts RPC program numbers into port numbers.
When an RPC server starts, it registers with the portmap daemon the port on which it can be reached, and the portmapper saves that port number. This is how the portmapper knows the ports of all RPC programs.
When a client needs to access an RPC application, it asks portmap for the port number of that application.
RPC servers are started before inetd, so portmap is also started before inetd.

If portmap is stopped or restarted, the RPC servers should be restarted as well.
-nfsd is a common RPC server
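
To see which RPC programs (including mountd and nfsd) are registered with portmap and on which ports they listen, rpcinfo can be used; a hedged example (host name illustrative):
-->rpcinfo -p Myserver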

*Loopback interface allows a client & server on the same host to communicate with each other using TCP/IP
-->ifconfig lo0
-->lsattr -El lo0

*A network address is an IP address with all host bits set to 0.
-->netstat -nr

*255.255.255.255 - Limited broadcast address.
An address with all host address and network address bits set to 1. This is used as the destination address for all hosts regardless of their network number.
Routers never forward a limited broadcast, it only appears on the local cable.

*The directed broadcast address is an IP address with all the host address bits set to 1.
It is used to simultaneously address all hosts within the same network.
ex. 195.116.192.2 - class C
195.116.192 - network address
So the directed broadcast address for this network is 195.116.192.255
-->ifconfig en0

*Gateways or routes are systems or network devices that will route information onto other systems or networks.

*The default order in resolving hostnames is
1)BIND/DNS (using /etc/resolv.conf)
2)NIS
3)/etc/hosts file.

When a process receives a symbolic hostname and needs to resolve it into an address, it calls a resolver routine.
The default order can be overridden by creating the configuration file /etc/netsvc.conf
and specifying the desired order.
-->/etc/netsvc.conf
hosts=nis,local,bind(as you like it)

*Both the default order and /etc/netsvc.conf can be overridden with the NSORDER variable
-->export NSORDER=bind,nis,local

*The NSORDER environment variable will override the host name resolution list  in /etc/netsvc.conf
If your /etc/netsvc.conf does not work properly, then check the NSORDER variable
-->echo $NSORDER

*/etc/hosts
This file provides a list of server names or aliases and their IP address.
IP Address     Hostname

*/etc/resolv.conf
-->vi /etc/resolv.conf
nameserver 9.31.4
domain itsc.austin.ibm.com (the domain this host belongs to)

*Troubleshooting resolving a host.
1. Check /etc/resolv.conf
check nameserver IP
check domain-name
2. ping server
3. check nameserver
4. check logs in /etc/syslog.conf
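
A couple of hedged commands for steps 2 and 3 above (the host and name server are illustrative):
-->host Myserver                (resolve using the configured order)
-->nslookup Myserver 9.3.1.4    (query a specific name server directly)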

*Adding network routes
-->smit mkroute
to add a route to the private network through the gateway between two networks.
Destination Type -net
Destination Address- 192.168.1
Default gateway Address- 9.3.1.4

-->route add -net 192.168.1 -netmask 255.255.255.0 9.3.1.4    (destination network, netmask, gateway)

-->traceroute IP

*Change IP address
If you are moving your machine from one network segment to another, and need to change IP address.
-->smit mktcpip
HOSTNAME
IP
NETMASK
NETWORK INTERFACE
NAMESERVER
IP
DOMAIN NAME
DEFAULT GATEWAY

Do not perform this task in a telnet session, as you will lose your connection when the change is made (you can do it over ssh, but be aware the session will drop when the IP changes).

*ifconfig
Identifying network interfaces
-->lsdev -Cc if
or
-->ifconfig -a

*Activating NIC
-->ifconfig tr0 up
-->ifconfig lo0 127.0.0.1
-->ifconfig tr0 10.1.2.3 netmask 255.255.255.0 up

*Deactivating NIC
-->ifconfig tr0 down

*Deleting NIC
-->ifconfig tr0 delete

*Detaching a network interface
To remove an interface from the network interface list, the interface must be detached from the system. This command is used when a network interface card has physically been removed from a system or when an interface no longer needs to be defined within the system.
-->ifconfig interface detach
-->ifconfig tr0 detach

This command removes all network addresses assigned to the interface & removes the interface from the output of the ifconfig -a command.
To add an interface back to  the system, or to add a new interface to the network interface list
-->ifconfig interface (the interface you want to add)

*Creating an IP alias for a network interface
-->ifconfig interface address[netmask Netmask ] alias

-->ifconfig tr0 10.1.2.3 netmask 255.255.255.0 alias    (10.1.2.3 is the alias IP address)
No ODM record of the alias is created by this command; you need to run the same command every time you reboot your system to preserve the alias. If your system configuration has a local startup script defined in the /etc/inittab file, this command should be included in that local script.
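
A minimal sketch of such a local startup script (the script path /etc/rc.local and the interface/address are illustrative; the script must be referenced from /etc/inittab for it to run at boot):
# /etc/rc.local (illustrative) - recreate the IP alias at every boot
/usr/sbin/ifconfig en0 10.1.2.4 netmask 255.255.255.0 alias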

-->ifconfig -a -d
show only those interfaces that are down.

*Delete alias
-->ifconfig tr0 10.1.2.3 netmask 255.255.255.0 delete
Specify exactly which alias you want to remove; otherwise the system will by default remove the primary network address, and the first alias in the list of network addresses then becomes the primary network address. To remove all aliases from an interface, you must delete each alias individually.

*Changing the MTU size of a network interface
By default, a 16 Mbps token-ring interface transmits packets that are 1492 bytes long.
Ethernet packets are 1500 bytes long.

*Determine MTU size for a network interface
-->lsattr -El interface
-->lsattr -El en0

*Change the MTU size
-->ifconfig en0 mtu 2000

*All systems that are on the same LAN must have the same MTU size, so when you change the MTU you must change it on all nodes simultaneously.
Stop the interface and then change it.

*The ntp.conf
The ntp.conf file controls how the Network Time Protocol (NTP) daemon xntpd operates and behaves

*Trusted daemons
-ftpd
-rexecd
-telnetd

*non-trusted
-rshd
-rlogind
-tftpd

*$HOME/.netrc
Info used by the automatic login feature of the rexec and ftp commands.
-login password
-file permission must be 600
entries in the $HOME/.netrc file are like below.
>machine hostname- name of remote host and all other parameters of host

>login UserName-FQDN
>password
>account password
>macdef

*/etc/hosts.equiv
/etc/hosts.equiv along with any local $HOME/.rhosts files, defines the hosts and user accounts that can invoke remote commands on a local host without supplying a password.
A user or host that is not required to supply a password is considered trusted, though the daemons that initiate the connections may be non-trusted in nature(ex. rlogind).
When a local host receives a remote command request, the appropriate local daemon first checks the /etc/hosts.equiv file to determine if the request originates with a trusted user or host.
ex. If the local host receives a remote login request, the rlogind daemon checks for the existence of a hosts.equiv file on the local host. If the file exists, but does not define the host or user, the system checks the appropriate $HOME/.rhosts file.
This file is similar to the /etc/hosts.equiv file except that it is maintained for individual users.
If a remote command request is made by the root user, the /etc/hosts.equiv file is ignored & only the /.rhosts file is read.

*Format of the /etc/hosts.equiv & $HOME/.rhosts file as follows
eg. To allow all the users on the hosts idea and vodafone to log in to the local host,enter
idea
vodafone

eg. To only allow the user bob to log in from the host vodafone , enter
idea
vodafone bob

eg. To allow all users from the host idea to log in , while requesting users joey and ross for a password to log in
vodafone
idea-joey
idea-ross
idea

eg. To deny all members of the forum netgroup(NIS)
from logging in automatically ,enter
-@forum

*Operations on a network adapter
1)Adding a network adapter
You should perform this procedure during a system maintenance window, as this procedure will require the shutdown of the system and may interfere with the work of users on the system.
i)examine what network adapters and interfaces are already on the system by
-->lscfg | grep -i adapter
-->lsdev -Cc if

ii)shut down & power off the system for systems without hot plug cards.

iii)Physically install the new network adapter

iv)Poweron the system in normal mode.

v)When the system is fully up, run the cfgmgr command, This will automatically detect the network adapter and add network interfaces for the adapter

vi)-->cfgmgr

vii)Confirm the network adapter has been properly added to the system
-->lscfg | grep -i adapter
-->lsdev -Cc if

* Removing a network adapter
Do this in System maintenance window because you may need to restart the system.

i)Deactivate all network interface definitions for the network adapter by running
-->ifconfig interface down

ii)Remove (detach)all network interface definitions from the network interface list
-->ifconfig interface detach
This will remove all attributes associated with the network interface from the system, including attributes like IP address & MTU size.

iii)Delete the network interface definitions
-->rmdev -l interface -d

iv)Delete the network adapter definitions
-->rmdev -l adapter -d

v)shutdown, remove the adapter

vi)poweron

*Adapter and interface configuration problems
1)Media speed configuration problems
An incorrect media speed will prevent the system from communicating with other systems or networks.
>Symptoms of a problem due to incorrect media speed
i)-Connection timeouts(telnet)
ii)-No packet transmission or response(ping)
iii)-Unusual pauses & hangs when initializing communication daemons(inetd)

To correct media speed problems
i)Obtain proper media speed from your network admin.

ii)Remove / detach all network interfaces for the network adapter
-->ifconfig interface detach
Detaching removes all configuration for the network interface, so keep a copy of all necessary configuration values for the network interfaces for later reconfiguration.

iii)To change the media speed for an ethernet adapter
-->smitty chgenet

*Cable type configuration problems
Ethernet adapters can use different types of cable connections: bnc, dix, or tp.
If the cable type is set incorrectly, the system may not be able to communicate properly.
To set the cable type
-->smitty tcpip

*Paging Space
A page is a unit of virtual memory; its size is 4 KB
Paging space is also called swap space
-Paging space should be no smaller than 64MB
-A common sizing rule of thumb is 1.5 x RAM to 2 x RAM
-When paging space runs low, fork operations fail

Tips
-Only one paging space per physical volume
-Paging space should not be placed on a heavily active disk/lv
-Every paging space should be roughly the same size
-Do not spread one paging space across many PVs
-Allocate paging spaces on PVs attached to different disk controllers.
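
Creating and activating a new paging space is not shown above; a hedged sketch (the VG, disk and size are illustrative):
-->mkps -s 4 -n -a testvg hdisk2    (4 LPs on hdisk2 in testvg; -n activates it now, -a at every restart)
-->lsps -a                          (verify the new paging space)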

*Managing paging space
>Displaying ps usage
-->lsps -a

>Increase the size of hd6 by 3 LPs
-->chps -s 3 hd6

>Reduce the size of hd6 by 1 LP
-->chps -d 1 hd6

*Moving the hd6 ps to another vg
Not recommended

*Moving a ps within the same vg
Moving a paging space (including hd6) from its default location to a different disk within the same VG does not require a system reboot
-->migratepv -l hd6 hdisk0 hdisk1
Moves the default paging space from hdisk0 to hdisk1

*Removing ps (not hd6)
First deactivate the ps
-->swapoff devicename
-->swapoff /dev/paging03
to deactivate ps paging03

-->rmps paging03
To remove paging03

*Device Configurations
>Determining the existing device configurations
1)lscfg
Displays summary or detailed data about devices: vital product data (VPD), part numbers, serial numbers, and engineering change levels, from either the customized VPD object class or platform-specific areas. Not all devices contain VPD data.
-->lscfg -v -p -l rmt0
Displays the VPD for rmt0

-->lscfg -v -p -l ent2
Obtain the physical location and firmware version of ent2

2)lsdev
Display information about devices in the device configuration database
-->lsdev -C -c disk
Show the disk drives on your system

-->lsdev -C -c tape
Show the tape devices on your system

-->lsdev -C -c adapter
Show all the adapters on your system

3)lsattr
Display attributes of a given device or kind of device.
i)To learn more about a particular processor,
-->lsattr -El proc0

ii)To discover how much memory is installed
-->lsattr -El sys0 | grep realmem

iii)To discover if ent2 supports jumbo frames transmission
-->lsattr -EHl ent2 -a jumbo_frames

iv)To discover if device driver software for the 14100401 class of adapters(gigabit Ethernet) is installed
-->lslpp -l | grep 14100401

*Remove a device configuration.
To unload an existing device from the system, you have two possibilities: either change its state from Available to Defined, or permanently delete its entries from the ODM.
The following example shows the process of changing the state
-->lsdev -Cc tape
rmt0 Available 09-87-.....

-->rmdev -l rmt0
rmt0 Defined

-->lsdev -Cc tape
rmt0 Defined

To unload the device configuration from the ODM
use -d option
-->rmdev -dl rmt0
rmt0 deleted

-->lsdev -Cc tape
rmt0 is completely removed

-->cfgmgr
-->lsdev -Cc tape
rmt0 Available

*Modify an existing device configuration

i)To change one or more attributes of the tok0 token-ring adapter to preset values as described in the changattr file
-->chdev -l tok0 -f changattr
tok0 changed

ii)To change the SCSI ID of the available scsi0 SCSI adapter, which cannot be changed while it is in use (disk drives are connected to it), record the change in the database only so it takes effect at the next reboot (-P)
-->chdev -l scsi0 -a id=6 -P
scsi0 changed

iii)Move the defined tty11 TTY device to port 0 on the sa5 serial adapter
-->chdev -l tty11 -p sa5 -w 0
tty11 changed

iv)To change the maximum number of processes allowed per user to 100,
-->chdev -l sys0 -a maxuproc=100
sys0 changed

v)-->lsattr -El rmt0
detailed output

vi)Change the block size parameter
-->chdev -l rmt0 -a block_size=512
rmt0 changed

*SMIT fast  paths for device configuration
-->smitty devices
-->smitty chdev
-->smitty rmdev

*Special device configurations
i)To turn off simultaneous multithreading immediately, without rebooting
-->smtctl -m off -w now

-->bindprocessor -q
query the available processors.

ii)Turn on simultaneous multithreading at the next reboot
-->smtctl -m on -w boot
-->bootinfo -y
Shows whether your system hardware is 32-bit or 64-bit.
If the result is 32, you cannot use the 64-bit kernel. To use the 64-bit kernel after system installation (on 64-bit hardware), you need to instruct the system to use the 64-bit kernel image stored in /usr/lib/boot/

2.Two kernels are available in /usr/lib/boot/
unix_mp - 32-bit kernel for multiprocessor systems
unix_64 - 64-bit kernel for multiprocessor systems

To enable the 64 bit kernel after system installation
-->ln -sf /usr/lib/boot/unix_64 /unix
-->ln -sf /usr/lib/boot/unix_64 /usr/lib/boot/unix
-->bosboot -ad /dev/ipldevice
-->shutdown -r
After the system has rebooted, it will be running the 64-bit kernel. To reactivate the 32-bit kernel, follow the same procedure, substituting unix_mp for unix_64, depending on your system type.
To verify your settings
-->ls -al /unix