find . -type f | xargs ls -s | sort -rn | awk '{size=$1/1024; printf("%dMb %s\n", size,$2);}' | head
or
du -xak . | sort -n | awk '{size=$1/1024; path=""; for (i=2; i<=NF; i++) { path=path" "$i; } if (size > 50) { printf("%dMb %s\n", size,path); } }'
or
du -a /var | sort -n -r | head -n 10
Deletion by Month
ls -l | awk '$6 == "Mar" {print $9}' | xargs rm -rf
Edit the /home/username/.vnc/xstartup script as:
#!/bin/sh
# Uncomment the following two lines for normal desktop:
unset SESSION_MANAGER
exec /etc/X11/xinit/xinitrc
[ -x /etc/vnc/xstartup ] && exec /etc/vnc/xstartup
[ -r $HOME/.Xresources ] && xrdb $HOME/.Xresources
xsetroot -solid grey
vncconfig -iconic &
xterm -geometry 80x24+10+10 -ls -title "$VNCDESKTOP Desktop" &
twm &
To kill the VNC session and restart it:
vncserver -kill :1
vncserver
Executing vncserver a second time (or running the vncserver :2 command) starts another VNC server that binds to and listens on ports 5802, 5902, and 6002 respectively. To connect to the Linux VNC server over HTTP, just type walkernews.net:5801 (replace walkernews.net with your VNC server IP/hostname) in any JavaScript-enabled web browser, such as Mozilla Firefox, Opera, or Internet Explorer. To connect to the Linux VNC server over the RFB protocol, just type walkernews.net:5901 in the VNC client.
Execute preclone on all tiers of the source system. This includes both the database and application tiers. (For this example, TEST is my source system.)
For the database execute: $ORACLE_HOME/appsutil/scripts/<CONTEXT_NAME>/adpreclone.pl dbTier, where the context name is of the format <SID>_<hostname>.
For the application tier: $ADMIN_SCRIPTS_HOME/adpreclone.pl appsTier
Prepare the files needed for the clone and copy them to the target server.
Take a FULL rman backup and copy the files to the target server, placing them in the identical path, i.e. if your rman backups go to /u01/backup on the source server, place them in /u01/backup on the destination server. To be safe, you may want to copy some of the archive files generated while the database was being backed up. Place them in an identical path on the target server as well.
Application Tier: tar up the application files and copy them to the destination server. The cloning document referenced above asks you to take a copy of the $APPL_TOP, $COMMON_TOP, $IAS_ORACLE_HOME and $ORACLE_HOME. Normally I just tar up the System Base Directory, which is the root directory for your application files.
Database Tier: tar up the database $ORACLE_HOME.
Example from a single-tier system. The first tar file contains the application files and the second contains the database $ORACLE_HOME:
[oratest@myserver TEST]$ pwd
/u01/TEST
[oratest@myserver TEST]$ ls
apps db inst
[oratest@myserver TEST]$ tar cvfzp TEST_apps_inst_myserver.tar.gz apps inst
.
.
[oratest@myserver TEST]$ tar cvfzp TEST_dbhome_myserver.tar.gz db/tech_st
Notice that for the database $ORACLE_HOME I only added the db/tech_st directory to the archive. The reason is that the database files are under db/apps_st and we don't need those.
Copy the tar files to the destination server and create a directory for your new environment, for example /u01/DEV. (For the purpose of this article I will be using /u01/DEV as the system base for the target environment we are building, and myserver is the server name.)
Extract each of the tar files with the command tar xvfzp
Ex. tar xvfzp TEST_apps_inst_myserver.tar.gz
Configure the target system.
On the database tier execute adcfgclone.pl with the dbTechStack parameter.
For example. /u01/DEV/db/tech_st/10.2.0/appsutil/clone/bin/adcfgclone.pl dbTechStack
By passing the dbTechStack parameter we are telling the script to configure only the necessary $ORACLE_HOME files for the new environment, such as the init file, listener.ora, the database environment settings file, etc. It will also start the listener.
You will be prompted with the standard post-cloning questions such as the SID of the new environment, number of DATA_TOPs, Oracle Home location, port settings, etc.
Once this is complete, go to /u01/DEV/db/tech_st/10.2.0 and execute the environment settings file to make sure your environment is set correctly.
[oradev@myserver 10.2.0] . ./DEV_myserver.env
Duplicate the source database to the target.
In order to duplicate the source database you'll need to know the SCN value to recover to. There are two ways to do this. The first is to log in to your rman catalog, find the Ckp SCN of the files in the last backupset of your rman backup and add 1 to it.
Ex. output from rman> list backups
.
.
List of Datafiles in backup set 55729
File LV Type Ckp SCN        Ckp Time  Name
---- -- ---- -------------  --------- ----
7    1  Incr 5965309363843  15-JUN-09 /u02/TEST/db/apps_st/data/owad01.dbf
.
.
So in this case the SCN we would be recovering to is 5965309363843 + 1 = 5965309363844.
The other method is to login to the rman catalog via sqlplus and execute the following query: select max(absolute_fuzzy_change#)+1, max(checkpoint_change#)+1 from rc_backup_datafile;
Use whichever value is greater.
Modify the db_file_name_convert and log_file_name_convert parameters in the target init file. Example:
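(A hypothetical illustration, assuming the source base is /u01/TEST and the target base is /u01/DEV; adjust the paths to your own layout:)
*.db_file_name_convert='/u01/TEST/db/apps_st/data','/u01/DEV/db/apps_st/data'
*.log_file_name_convert='/u01/TEST/db/apps_st/data','/u01/DEV/db/apps_st/data'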
Verify you can connect to the source system from the target as sysdba. You will need to add a TNS entry to the $TNS_ADMIN/tnsnames.ora file for the source system.
Duplicate the database. Before we use rman to duplicate the source database we need to start the target database in nomount mode.
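A sketch of those two steps (starting the target instance in nomount and connecting with rman); the connect strings are illustrative and assume TEST is the source and the DEV environment is set in the current shell:
SQL> startup nomount pfile=$ORACLE_HOME/dbs/initDEV.ora
$ rman target sys/<password>@TEST catalog <rman user>/<password>@<catalog> auxiliary /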
If there are no connection errors duplicate the database with the following script:
run {
set until scn 5965309363844;
allocate auxiliary channel ch1 type disk;
allocate auxiliary channel ch2 type disk;
duplicate target database to DEV;
}
The most common errors at this point are connection errors to the source database and rman catalog. As well, if the log_file_name_convert and db_file_name_convert parameters are not set properly you will see errors. Fix the problems, login with rman again and re-execute the script.
When the rman duplicate has finished the database will be open and ready to proceed with the next steps.
Execute the library update script:
cd $ORACLE_HOME/appsutil/install/DEV_myserver (where DEV_myserver is the context name of the new environment)
sqlplus "/ as sysdba" @adupdlib.sql <libext>
If you're on Linux, replace <libext> with so; on HP-UX, with sl; and for Windows servers leave it blank.
Configure the target database
cd $ORACLE_HOME/appsutil/clone/bin
adcfgclone.pl dbconfig <target context file>
Where <target context file> is $ORACLE_HOME/appsutil/DEV_myserver.xml
Configure the application tier.
cd /u01/DEV/apps/apps_st/comn/clone/bin
perl adcfgclone.pl appsTier
You will be prompted with the standard cloning questions covering the system base directories, which services you want enabled, the port pool, etc. Make sure you choose the same port pool as you did when configuring the database tier in step 3.
Once that is finished, initialize your environment by executing . /u01/DEV/apps/apps_st/appl/APPSDEV_myserver.env
Shutdown the application tier.
cd $ADMIN_SCRIPTS_HOME
./adstpall.sh apps/<apps password>
Login as apps to the database and execute:
exec fnd_conc_clone.setup_clean;
I don't believe this step is necessary but if you don't do this you will see references to your source environment in the FND_% tables. Every time you execute this procedure you need to run autoconfig on each of the tiers (db and application). We will get to that in a second.
Change the apps password. Chances are you don't want to have the same apps password as the source database, so it's best to change it now while the environment is down.
With the apps tier environment initialized:
FNDCPASS apps/<current apps password> 0 Y system/<system password> SYSTEM APPLSYS <new apps password>
Run autoconfig on both the db tier and application tier.
db tier:
cd $ORACLE_HOME/appsutil/scripts/DEV_myserver
./adautocfg.sh
Application tier:
cd $ADMIN_SCRIPTS_HOME
./adautocfg.sh
If there are no errors with autoconfig, start the application. You're already in $ADMIN_SCRIPTS_HOME, so just execute:
./adstrtal.sh apps/<apps password>
Login to the application and perform any post-cloning activities. You may want to override the workflow email address so that notifications go to a test/dev mailbox instead of to users. We always change the colors and site_name profile options, etc.
Source System (PROD):
(a) P4 3.0 GHz system with 2 GB RAM and 200 GB HDD (Red Hat Linux AS 4)
/d01 ------- 40 GB (Application Tier Files)
/d02 ------- 10 GB (10g Oracle Home)
/d03 ------- 80 GB (Data Files)
/backup ---- 100 GB (NFS mount point shared on TEST server)
Hostname: prodserver
Application Version: 11.5.10.2
Database Version: 10.2.0.2
Destination System (TEST):
(b) P4 2.6 GHz system with 1.5 GB RAM and 300 GB HDD (Red Hat Linux AS 4)
/d01 ------- 40 GB (Application Tier Files)
/d02 ------- 10 GB (10g Oracle Home)
/d03 ------- 80 GB (Data Files)
/backup ---- 100 GB (NFS Share Directory)
Hostname: testserver
Application Version: 11.5.10.2
Database Version: 10.2.0.2
Note: This target system was previously cloned with a cold backup. This is the second time cloning, this time with a hot backup from PRODSERVER.
Stage1: Prerequisites:
========> Apply the OUI22 patch 5035661 to every iAS Oracle Home
and RDBMS Oracle Home to be cloned.
If the Oracle Home is 10g, there is no need to apply this patch to it.
You still need to apply this patch on the iAS Oracle Home, and on the RDBMS Oracle Home if the database is not 10g.
A. Applying the patch on the iAS $ORACLE_HOME:
(a) Unzip the patch into the directory: $ unzip -o p5035661_11i_LINUX.zip -d /d01/prodora/iAS
(b) Source the Apps environment file : $. $APPL_TOP/APPSORA.env
(c) Change directory to the /appsoui/setup $cd $IAS_ORACLE_HOME/appsoui/setup
(d) Execute the perl script OUIsetup.pl: $perl OUIsetup.pl
NOTE:
In the case of a multi-node instance, the above process should be repeated on the iAS Oracle Home of each node.
(B) Applying the patch on the RDBMS $ORACLE_HOME:
(This step is not required for my current setup, because my database version is 10g R2)
(a) Unzip the patch into the directory: $ unzip -o p5035661_11i_LINUX.zip -d /u01/proddb/9.2.0
(b) Source the DB environment file : $. $ORACLE_HOME/PROD_prodserver.env
(c) Change directory to the /appsoui/setup $cd $ORACLE_HOME/appsoui/setup
(d) Execute the perl script OUIsetup.pl: $perl OUIsetup.pl
======> Check all other requirements such as Perl, JRE, JDK, and ZIP utilities on the source and target nodes as per
the document "Cloning Oracle Applications Release 11i with Rapid Clone"
=======> Apply the Latest AD Minipack on Application Tier (Latest One is AD.I.5)
=======> Apply the latest AutoConfig template patch and the latest Rapid Clone patches to the Application Tier (check Metalink for these patches)
Stage2: Prepare the Source System (PRODSERVER)
(a) Log in to the database tier as the ORACLE user and run the preclone:
$ cd $ORACLE_HOME/appsutil/scripts/PROD_prodserver
$ perl adpreclone.pl dbTier
(b) Log in to the application tier as the APPLMGR user and run the preclone:
$ cd $COMMON_TOP/admin/scripts/PROD_prodserver
$ perl adpreclone.pl appsTier
Stage3: Put the Database in Begin Backup Mode and copy the Database Files
(a) Log in to the database as sysdba:
$ sqlplus "/ as sysdba"
SQL> alter database begin backup;
(b) Copy Archive log files created during hot backup to /backup directory.
(c) Copy all the data files to the /backup directory.
(d) Back up the control file to trace:
SQL> alter database backup controlfile to trace;
Copy this trace file to /backup directory
(e) Copy the current init.ora file to /backup directory
(f) End backup mode:
SQL> alter database end backup;
Stage4: Copy the Application Tier File System Files
(a) Log in to the application tier as the APPLMGR user and copy the APPL_TOP, COMMON_TOP, iAS Oracle Home and 8.0.6 Oracle Home to the /backup directory.
Stage5: Copy the Source Database files and Application Files to Target server
Copy the parameter file, backup control file and archive log files from /backup directory to /d01, /d02 and /d03 in target server.
Stage 6: Configure the Target Database (TESTSERVER)
Log on to the target system as the ORACLE user
(1) Configure the database technology stack:
cd <RDBMS ORACLE_HOME>/appsutil/clone/bin
perl adcfgclone.pl dbTechStack
(2) Create the target database control file manually. Open the backed-up control file trace and:
a. remove all lines before the STARTUP NOMOUNT statement
b. modify REUSE to SET
c. modify the source DB SID to the target SID (here PROD to TEST)
d. modify NORESETLOGS to RESETLOGS
e. delete all lines after the CHARACTER SET statement
————————————
CREATE CONTROLFILE SET DATABASE "TEST" RESETLOGS ARCHIVELOG...
On the target system, modify the init.ora to have the target SID and the location of the control file, and also make sure the init.ora parameters are set for archive log mode.
On the target system, start up the database in nomount stage:
SQL> startup nomount pfile=<target init.ora path>
SQL> @clone.ctl   (here clone.ctl is the control file script we modified above)
Once the control file is created, the database will be in mount stage. Execute the recover command using the backup control file:
SQL> RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL;
After the last archive log has been applied, issue the following command SQL> alter database open resetlogs;
After opening the database, add temp files to target database
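For example, a minimal sketch (the tablespace name TEMP and the file path are illustrative; use your own locations and sizes):
SQL> alter tablespace TEMP add tempfile '/d03/testdata/temp01.dbf' size 2000M reuse;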
(3) Run the library update script against the database:
cd <RDBMS ORACLE_HOME>/appsutil/install/<context name>
sqlplus "/ as sysdba" @adupdlib.sql <libext>
Where <libext> is "sl" for HP-UX, "so" for any other UNIX platform, and not required for Windows.
(4) Configure the target database (the database must be open):
cd <RDBMS ORACLE_HOME>/appsutil/clone/bin
perl adcfgclone.pl dbconfig <target context file>
where the target context file is <RDBMS ORACLE_HOME>/appsutil/<context name>.xml
Stage 7 : Configure the Target Application Tier
Logon to the target system as the APPLMGR user and type the following commands
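A sketch of the application tier configuration command, following the Rapid Clone steps used earlier in this article (the exact path depends on your COMMON_TOP):
cd <COMMON_TOP>/clone/bin
perl adcfgclone.pl appsTier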
Just run adstrtal.sh/adstpall.sh, addbctl.sh and addlnctl.sh.
Starting is Simple.
addbctl.sh start
addlnctl.sh start SID
adstrtal.sh apps/password
Stopping is also fairly simple, but a little care needs to be taken to avoid critical issues. I start my preparation some time before the scheduled downtime, to let the concurrent requests finish. The following are the steps to bring down the middle-tier services.
Bring down the concurrent manager before maintenance, say 20 minutes before: adcmctl.sh stop apps/<password>
Check if any concurrent request is running. If one is running, check what it is doing, e.g. which SQL it is executing and whether the session is active.
Check how long previous executions of a similar program took, and decide whether it is worth waiting or cancelling the request.
If it is affecting the downtime, log in from the front end and terminate the concurrent program, and make a note of the request id (communicate with the user who submitted the request so they can submit it again).
Check whether the OS process id got terminated or not. If it is still running, it is a runaway process; kill it. I don't like killing, but... SQL> select oracle_process_id from fnd_concurrent_requests where request_id=&request_id;
Bringing down the database tier:
Check whether a hot backup is in progress. To check, look at the alert log file ($ORACLE_HOME/admin/CONTEXT_NAME/bdump/alert_<sid>.log) and also query from sqlplus: SQL> select distinct status from v$backup; If it returns a row containing "ACTIVE", a hot backup is in progress. Wait until it gets over; otherwise the next startup will create problems. Though there are ways and means to overcome that, why go there.
Conditional - If you are using DR, please take care of the following steps:
Check which archive destination is used for DR and enable it. You can find out from show parameter log_archive_dest; say you are using the 3rd destination, then run: SQL> alter system set log_archive_dest_state_3=enable;
Check if the standby is performing managed recovery.
SQL> SELECT PROCESS, STATUS FROM V$MANAGED_STANDBY;

PROCESS STATUS
------- ------------
ARCH    CLOSING
ARCH    CONNECTED
MRP0    WAIT_FOR_LOG
RFS     WRITING
RFS     RECEIVING
RFS     RECEIVING
1. Log in as the Oracle application user.
2. Go to $FND_TOP/secure.
3. java oracle.apps.fnd.security.AdminAppServer apps/<apps password> AUTHENTICATION ON DBC=<DBC file name>
Please check the example below:
4. java oracle.apps.fnd.security.AdminAppServer apps/ffdev21 AUTHENTICATION ON DBC=ffus.com_ffus.dbc
Output will be:
AUTHENTICATION ON executed successfully - ffus.com_ffus.dbc
5. At the command prompt run echo $FORMS60_TRACE_PATH. It will give you the trace path. Make sure the path is set.
6. Open Internet Explorer and type the URL below:
http://<hostname>:<port>/dev60cgi/f60cgi/?&record=collect&log=$FORMS60_TRACE_PATH/<trace file name>
You can even change $FORMS60_TRACE_PATH to your own path, but make sure that path has read and write permission. It is advisable to use the default path. An example is below:
http://ffus.com:8000/dev60cgi/f60cgi/?&record=collect&log=$FORMS60_TRACE_PATH/faoracle.frd
Now you can generate the FRD trace as your situation requires. It will generate the FRD trace file in your $FORMS60_TRACE_PATH directory with the name supplied by you in the URL.
After finishing the entire task make sure that you disable the trace. Note: it has security implications, so disable it. The steps are as follows:
1. Log in as the Oracle application user.
2. Go to $FND_TOP/secure.
3. java oracle.apps.fnd.security.AdminAppServer apps/<apps password> AUTHENTICATION OFF DBC=<DBC file name>
4. java oracle.apps.fnd.security.AdminAppServer apps/ffdev21 AUTHENTICATION OFF DBC=ffus.com_ffus.dbc
Output will be:
AUTHENTICATION OFF executed successfully - ffus.com_ffus.dbc
Alternative steps:
1. Back up and open the $APPL_TOP/admin/<SID>_<hostname>.xml context file.
2. Update the context variable s_appserverid_authentication. By default in 11.5.10 this is set to SECURE; in previous 11i versions it was set to OFF. For debug purposes you can use ON or OFF; make it ON.
3. Run AutoConfig to instantiate the change. You should now be able to access forms directly again using the f60cgi call.
4. After you have finished your Forms debugging, please reset s_appserverid_authentication to SECURE and re-run AutoConfig.
Please find the two commands that I use for bouncing Apache:
$COMMON_TOP/admin/scripts/$TWO_TASK*/adapcctl.sh stop
$COMMON_TOP/admin/scripts/$TWO_TASK*/adapcctl.sh start
Of course this needs to be done on the middle tier of Oracle Applications. In case you have modified any Java or class file in OAF (Oracle Applications Framework), an Apache bounce becomes mandatory for those changes to take effect.
In case you modify and load an XML document in the Oracle Applications Framework, it has been noticed that a complete bounce of the middle tier is required for those XML changes to take effect in Oracle Apps. If your client is still stuck with AK Developer, an Apache bounce will be required after akload has been executed.
Please make sure that the following packages (or later versions) are successfully installed along with the operating system. These are prerequisites for an Oracle 11g installation.
binutils-2.17.50.0.6
compat-libstdc++-33-3.2.3
elfutils-libelf-0.125
elfutils-libelf-devel-0.125
elfutils-libelf-devel-static-0.125
gcc-4.1.2
gcc-c++-4.1.2
glibc-2.5-24
glibc-common-2.5
glibc-devel-2.5
glibc-headers-2.5
kernel-headers-2.6.18
ksh-20060214
libaio-0.3.106
libaio-devel-0.3.106
libgcc-4.1.2
libgomp-4.1.2
libstdc++-4.1.2
libstdc++-devel-4.1.2
make-3.81
numactl-devel-0.9.8.i386
sysstat-7.0.2
To use ODBC, you must also install the following additional 32-bit ODBC RPMs, depending on your operating system:
unixODBC-2.2.11 (32-bit) or later unixODBC-devel-2.2.11 (32-bit) or later
Modify or add these entries in /etc/sysctl.conf using the vi editor:
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax = 536870912
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048586
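To load the new kernel parameter settings without a reboot, the usual step is to run sysctl as root (shown here as a quick reminder):
# /sbin/sysctl -p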
If necessary, update the resource limits in the /etc/security/limits.conf configuration file for the installation owner. For example, add the following lines to the /etc/security/limits.conf file:
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft stack 10240
Modify or add the following entry in /etc/pam.d/login using the vi editor:
session required pam_limits.so
Next create the group and user for oracle database
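A minimal sketch of that step; the group and user names below (dba and oracle) match the ownership used in the following commands, but adjust them to your own standards:
# groupadd dba
# useradd -g dba oracle
# passwd oracle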
Create the installation directory for oracle installation
mkdir -p /u01/oracle
chown -R oracle:dba /u01/oracle
chmod -R 775 /u01/oracle
Create staging directory for oracle installation
mkdir -p /u01/stage
chown -R oracle:dba /u01/stage
After unzipping the installation files, change to the directory containing the installer. You must be the oracle user (not root) and you must verify your shell environment is set correctly. As the oracle user, start the Oracle installer.
$ cd /u01/stage/database
$ ./runInstaller
This opens the installer window.
On that screen you can skip the email settings.
Then you can skip the software updates.
From the installation options, select "Create and configure a database".
For the system class, choose Server Class.
From the Grid Installation options, select "Single Instance database installation".
Choose the install type Advanced Install.
Database Edition should be Enterprise Edition.
As created earlier, choose /u01/oracle as the Oracle Base; the software location will be similar to /u01/oracle/product/11.1.1/dbhome_1.
This database is for OBIEE 11g, so please choose Data Warehousing.
Then edit the Global database name.
Since this database is for demo purposes, I keep the minimum configuration.
Choose File System. But if you are planning on ASM, refer to the configuration details in the installation document.
It is better to choose the second option this time. Use the same password for all accounts.
Accept the defaults.
This is a typical result. Based on your OS configuration, please read the Oracle installation documents for more info.
The next information is good for future reference.
This will take some time to complete. Be ready for that…
Then the database creation proceeds.
Finally the database configuration succeeds. If you want to change the passwords, you can use the Password Management button.
The next screen will prompt you to run two scripts. As root, execute these commands.
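For a typical single-instance install these are the orainstRoot.sh and root.sh scripts; the exact paths depend on your inventory and Oracle home locations, for example:
# /u01/oraInventory/orainstRoot.sh
# /u01/oracle/product/11.1.1/dbhome_1/root.sh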
1. Log in to E-Business Suite and select the System Administrator responsibility.
2. Select the function AutoConfig (under Oracle Applications Manager). For each web tier server perform the following: click on the pencil icon under Edit Parameters, select the System tab, and expand the jtff_server section.
3. Change the value of the entry s_jsp_main_mode from justrun to recompile. Confirm the change by clicking the Save button.
4. Run AutoConfig to propagate the changes to the configuration files. Verify that $INST_TOP/ora/10.1.3/j2ee/oacore/application-deployments/oacore/html/orion-web.xml has the following:
Check that the param-name "main_mode" under the init-param variables has been changed to "recompile".
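A quick way to verify this from the shell (the grep options are just one way to do it):
grep -A 1 'main_mode' $INST_TOP/ora/10.1.3/j2ee/oacore/application-deployments/oacore/html/orion-web.xml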
In this post I am giving a quick tip on how to compile invalid objects (or the APPS schema) in Oracle Applications 11i and R12.
You can compile invalid objects (or Apps Schema) using the following methods:
I. Using the database tier - log in as the database tier user. 11i:
Set environment variables (under $ORACLE_HOME/[SID]_[Hostname].env)
cd $ORACLE_HOME/rdbms/admin
sqlplus /nolog
SQL>conn /as sysdba
SQL> @utlrp.sql
Release 12
Set environment variables (under $INSTALL_DIR/db/tech_st/RDBMS_Home/[SID]_[Hostname].env)
cd $ORACLE_HOME/rdbms/admin
sqlplus /nolog
SQL>conn /as sysdba
SQL> @utlrp.sql
II. Using the application tier (adadmin) - log in as the application tier user. 11i:
Set environment variables (from $APPL_TOP/APPSORA.env)
adadmin
Option 3: "Compile/Reload Applications Database Entities" menu
Option 1: "Compile APPS Schema"
Release 12
Set environment variable (under $INSTALL_DIR/apps/apps_st/appl/APPS[sid]_[hostname].env)
adadmin
Option 3: "Compile/Reload Applications Database Entities" menu
Option 1: "Compile APPS Schema"
III. From SQL*Plus (individual objects only) - figure out the invalid objects in the database using:
SQL> select object_name, owner, object_type from all_objects where status ='INVALID';
SQL> alter [object] [object_name] compile;
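For instance, a couple of illustrative statements (the object names here are made up; substitute the owner, type and name returned by the query above):
SQL> alter package APPS.XX_CUSTOM_PKG compile body;
SQL> alter view APPS.XX_CUSTOM_V compile;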
IV. ADCOMPSC.pls
The order in which to compile invalid objects in schemas is SYS, SYSTEM, APPS and then all others. APPS_DDL and APPS_ARRAY_DDL should exist in all schemas. In case of an ORA-1555 error while running adcompsc.pls, restart the script.
The script can be run as follows:
cd $AD_TOP/sql
sqlplus @adcompsc.pls SCHEMA_NAME SCHEMA_PASSWORD %
SQL> @adcompsc.pls apps apps %
After the script completes, check for invalid objects again. If the number has decreased but invalid objects still exist, run adcompsc.pls again. Keep running adcompsc.pls until the number of invalid objects stops decreasing.
If there are any objects still left INVALID, verify them by using the script 'aderrchk.sql' to record the remaining INVALID objects. 'aderrchk.sql' uses the same syntax as 'adcompsc.pls'. This script is also supplied with the Applications. Send the aderrchk.sql output to a file using the spool command in sqlplus.
e.g. sqlplus apps/password @aderrchk.sql SCHEMA_NAME SCHEMA_PASSWORD %
For objects which will not compile, try the following:
select text from user_source where name = 'OBJECTNAME' and text like '%Header%';
This query will return the header identifying the script that creates/recreates the package, which you can then re-run:
SQL> @packageheader
SQL> @packagebody
If recreating the package does not make the package valid, analyze the user_errors table to determine the cause of the invalid package :
select text from user_errors where name = '<PACKAGE NAME>';
There are three methods that can be used to enable or disable the Forms Listener Servlet.
OAM Configuration Wizards. Requires OAM 2.2 or higher (OAM G)
-Navigation: OAM Site Map -> AutoConfig -> Configuration Wizards -> Forms Listener Servlet
--Choose the Enable or Disable button
OAM Context Editor
-Navigation: OAM Site Map -> AutoConfig -> Edit Parameters (of the required Applications Tier context file)
--Go to the System tab
--Expand the oa_web_server node
--Modify the following two variables:
--Forms Servlet URL (s_forms_servlet_serverurl)
---to enable, set to /forms/formservlet
---to disable, set to blank
--Forms Servlet Comment (s_forms_servlet_comment)
---to enable, set to blank
---to disable, set to #
Edit the context file ($APPL_TOP/admin/< contextname >.xml)
-Locate the following two variables:
-< server_url oa_var="s_forms_servlet_serverurl" >
--to enable, set to /forms/formservlet
--eg: < server_url oa_var="s_forms_servlet_serverurl" > /forms/formservlet
--to disable, set to blank
--eg: < server_url oa_var="s_forms_servlet_serverurl"/ >
-< servlet_comment oa_var="s_forms_servlet_comment" >
--to enable, set to blank
--eg: < servlet_comment oa_var="s_forms_servlet_comment"/ >
--to disable, set to #
--eg: < servlet_comment oa_var="s_forms_servlet_comment" > #
If you are migrating from the Forms Listener and using Forms Metric Server load balancing, the context variable Metrics Server Load Balancing Host (s_leastloadedhost) will contain the value %LeastLoadedHost%. This must be changed to the Forms Server Host (s_formshost) value. This change is required even if using the Configuration Wizard.
OAM Context Editor navigation paths to these variables under the System Tab:
-oa_met_server -> Metrics Server Load Balancing Host (s_leastloadedhost)
-oa_forms_server -> Forms Server Host (s_formshost)
In this post I am sharing ways of finding the correct version of Oracle Applications files for the different component types. This should be helpful while patching the applications.
Use the following information for the appropriate file type.
FORM
adident Header form.fmx
Ex. cd $AR_TOP/forms/US; adident Header ARXTWLIN.fmx
strings -a form.fmb | grep Revision
Ex. cd $AU_TOP/forms/US; strings -a POXPOVCT.fmb | grep Revision
REPORT
strings -a report.rdf | grep Header
Ex. strings -a ARBARL.rdf | grep Header
SQL
more sqlscript.sql
Ex. more ARTACELO.sql
The version will be in a line that starts with 'REM $Header', and should be one of the first lines in the .sql file.
grep '$Head' sqlscript.sql
Ex. grep '$Head' ARTACELO.sql
BIN or EXECUTABLE: An executable in the bin directory will contain numerous C code modules, each with its own version. All of the following examples use adident or strings; the difference is what you grep for.
1. Get ALL file versions contained in the executable. adident Header executable (Ex. adident Header RACUST) strings -a executable | grep Header (Ex. strings -a RACUST | grep Header)
2. Get ALL of the product-specific file versions: strings -a executable | grep 'Header: product_short_name' (see the WFLOAD example further down)
3. Get only the version of a specified module. strings -a executable | grep module (Ex. strings -a RAXTRX | grep raaurt)
4. Get class file versions.
From the directory where the class file exists, run the following at a command prompt: strings -a Classname.class | grep Header
Get ALL of the product specific file versions.
strings -a executable | grep 'Header: product_short_name' cd $FND_TOP/bin strings -a WFLOAD | grep 'Header: afspc'
Get only the version of a specified module.
strings -a executable | grep module
ORACLE REPORTS From the form, select Help, About Oracle Reports.
RDBMS: 1. Use Help, Version, or 2. Help, About Oracle Applications, or 3. get into SQL*Plus using any userid/password. The banner will give you a string that tells you the PL/SQL and database versions.
After a fresh installation of Oracle Applications, the database contains many default, open schemas with default passwords. These accounts and corresponding passwords are well known, and as a best practice they should be changed, especially for a database to be used in a production environment. Default schemas come from different sources and can be classified as below:
1. Default database administration schemas
2. Schemas belonging to optional database features neither used nor patched by E-Business Suite
3. Schemas belonging to optional database features used but not patched by E-Business Suite
4. Schemas belonging to optional database features used and patched by E-Business Suite
5. Schemas common to all E-Business Suite products
6. Schemas associated with specific E-Business Suite products
For the schemas in categories 1, 2 and 3, use standard database commands to change a password: SQL> alter user [SCHEMA] identified by [NEW_PASSWORD];
For the schemas in categories 4, 5 and 6, use the application password change tool: $ FNDCPASS APPS/apps_pwd 0 Y SYSTEM/system_pwd ORACLE [SCHEMA] [NEW_PWD]
To save time, category six (6) schema passwords may be changed en masse using FNDCPASS. FNDCPASS accepts a keyword ALLORACLE forcing a change of all managed schemas to the new password. If your version of FNDCPASS does not already support the ALLORACLE keyword, apply patch 5080487.
$ FNDCPASS APPS/apps_pwd 0 Y SYSTEM/system_pwd ALLORACLE [NEW_PWD]
To determine which schemas are managed by E-Business Suite (categories 4, 5 and 6), run the AD adutconf.sql script.
groupadd : This is the command used to create a new group. At the OS level, groups are used to grant and revoke privileges.
Syntax : groupadd <group name>
# groupadd group1
View :
# cat /etc/group
This command is used to view which users belong to which group.
Output: group1:x:607:
useradd : This is the command used to create a new user in a group.
Syntax :useradd -g <group name> <user name>
[root@rac5 ~]# useradd -g group1 user1
passwd : This is the command used to set the password for a newly created user or to update an existing password.
Syntax :passwd <user name>
Ex:[root@rac5 ~]# passwd user1
Output :
# Changing password for user sukhi.
New UNIX password:
BAD PASSWORD: it is based on a dictionary word
Retype new UNIX password:
passwd: all authentication tokens updated successfully.
date : This is the command used to view the current system date.
# date
Output :Wed Oct 27 21:55:36 IST 2010
In order to update the date we can use:
Syntax :
# date -s "2 OCT 2010 14:00:00"
OR
# date --set="27 OCT 2010 21:56:00"
Output :Sat Oct 2 14:00:00 IST 2010
cal : This command shows the calendar for the current month/year or any other.
# cal
Output :[root@rac5 ~]# October 2010
Su Mo Tu We Th Fr Sa
1 2
3 4 5 6 7 8 9
10 11 12 13 14 15 16
17 18 19 20 21 22 23
24 25 26 27 28 29 30
31
pwd : This command is to view the present working directory.
# pwd
Output :[root@rac5 ~]# /root.
ls : This command is used to list all contents of directories
$ ls
ls -lt : This command is used to list detailed information about the contents of directories.
$ ls -lt
The permissions are the first 10 characters of the line (e.g. -rwxr--r--) and the whole listing can be broken down as follows.

-          rwx    r--    r--    1      root    root    765    Apr 23     file.txt
File type  Owner  Group  All    Links  Owner   Group   Size   Mod date   Filename
cd : This is the command used to change a directory
$ ls
authorized_keys file file2 oraInventory stand.ora
authorized-keys file1 file3 sukhi
$ cd sukhi
[oracle@rac5 sukhi]$
This is used to go back to the parent directory:
$ cd ..
mkdir : This command is used to make a new directory.
$mkdir dir1
rmdir : This command is used to remove a directory.
$rmdir dir1
rm -rf : This command is used to forcefully remove a directory.
$ rm -fr dir1
man : This command is used to show the online manual pages of related commands
$ man ls
touch : This command is used to create an empty file. $ touch file1
find : This command is used to find a file.
For a case-sensitive search, use the -name option:
$ find . -name "file*"
For a case-insensitive search, use the -iname option:
$ find . -iname "file*"
You can limit your search to a specific type of files only. For instance, the above command will get the files of all types: regular files, directories, symbolic links, and so on. To search for only regular files, you can use the -type f parameter.
$ find . -name "orapw*" -type f
./orapw+ASM
./orapwDBA102
./orapwRMANTEST
./orapwRMANDUP
./orapwTESTAUX
The -type can take the modifiers f (for regular files), l (for symbolic links), d (directories), b (block devices), p (named pipes), c (character devices), s (sockets).
To find files with the extension "trc" and remove them if they are more than three days old, a simple command does the trick:
find . -name "*.trc" -ctime +3 -exec rm {} \;
To remove them forcibly (without prompting), use the -f option to rm:
find . -name "*.trc" -ctime +3 -exec rm -f {} \;
If you just want to list the files:
find . -name "*.trc" -ctime +3 -exec ls -l {} \;
cp : This command is used to copy a file from one name or location to another.
$ cp file1 filenew
mv : This command is used to move a file or rename it.
$ mv file1 filenew
su : This command is used to switch to another user (for example root), but it does not change the PATH or the current working directory, so you could not, for example, execute files in the /usr/sbin directory directly.
$ su sukhi
su - : This command switches the user and also loads that user's environment, so the PATH changes and the target user's home becomes your current working directory.
$ su - sukhi
How to use chown and chgrp commands to change ownership and group of the files.
# ls -l
total 8
-rw-r--r-- 1 user1 users 70 Aug 4 04:02 file1
-rwxr-xr-x 1 oracle dba 132 Aug 4 04:02 file2
-rwxr-xr-x 1 oracle dba 132 Aug 4 04:02 file3
-rwxr-xr-x 1 oracle dba 132 Aug 4 04:02 file4
-rwxr-xr-x 1 oracle dba 132 Aug 4 04:02 file5
-rwxr-xr-x 1 oracle dba 132 Aug 4 04:02 file6
and you need to change the permissions of all the files to match those of file1. Sure, you could issue chmod 644 * to make that change, but what if you are writing a script to do that and you don't know the permissions beforehand? Or perhaps you are making several permission changes based on many different files and you find it infeasible to go through the permissions of each of them and modify accordingly.
A better approach is to make the permissions similar to those of another file. This command makes the permissions of file2 the same as file1:
chmod --reference file1 file2
Now if you check:
# ls -l file[12]
total 8
-rw-r--r-- 1 user1 users 70 Aug 4 04:02 file1
-rw-r--r-- 1 oracle dba 132 Aug 4 04:02 file2
The file2 permissions were changed exactly as in file1. You didn’t need to get the permissions of file1 first.
You can also use the same trick in group membership in files. To make the group of file2 the same as file1, you would issue:
# chgrp --reference file1 file2
# ls -l file[12]
-rw-r--r-- 1 user1 users 70 Aug 4 04:02 file1
-rw-r--r-- 1 oracle users 132 Aug 4 04:02 file2
Of course, what works for changing groups will work for owner as well. Here is how you can use the same trick for an ownership change. If permissions are like this:
# ls -l file[12]
-rw-r--r-- 1 user1 users 70 Aug 4 04:02 file1
-rw-r--r-- 1 oracle dba 132 Aug 4 04:02 file2
You can change the ownership like this:
# chown --reference file1 file2
# ls -l file[12]
-rw-r--r-- 1 user1 users 70 Aug 4 04:02 file1
-rw-r--r-- 1 user1 users 132 Aug 4 04:02 file2
Note that the group as well as the owner have changed.
This is a trick you can use to change ownership and permissions of Oracle executables in a directory based on some reference executable. This proves
especially useful in migrations where you can (and probably should) install as a different user and later move them to your regular Oracle software owner.
cmp : The command cmp is similar to diff:
# cmp file1 file2
file1 file2 differ: byte 10, line 1
The output comes back as the first sign of difference. You can use this to identify where the files might be different. Like diff, cmp has a lot of options, the
most important being the -s option, that merely returns a code:
0, if the files are identical
1, if they differ
Some other non-zero number, if the comparison couldn’t be made
Here is an example:
# cmp -s file3 file4
# echo $?
0
The special variable $? indicates the return code from the last executed command. In this case it's 0, meaning the files file3 and file4 are identical.
# cmp -s file1 file2
# echo $?
1
means file1 and file2 are not the same.
Recall from a previous tip that when you relink Oracle executables, the older version is kept prior to being overwritten. So, when you relink, the executable sqlplus is renamed to “sqlplusO” and the newly compiled sqlplus is placed in the $ORACLE_HOME/bin. So how do you ensure that the sqlplus that was just created is any different? Just use:
# cmp sqlplus sqlplusO
sqlplus sqlplusO differ: byte 657, line 7
If you check the size:
# ls -l sqlplus*
-rwxr-x--x 1 oracle dba 8851 Aug 4 05:15 sqlplus
-rwxr-x--x 1 oracle dba 8851 Nov 2 2005 sqlplusO
Even though the size is the same in both cases, cmp proved that the two programs differ.
md5sum
This command generates a 128-bit MD5 hash value (displayed as 32 hexadecimal characters) for the files:
# md5sum file1
ef929460b3731851259137194fe5ac47 file1
Two files with the same checksum can be considered identical. However, the usefulness of this command goes beyond just comparing files. It can also provide a mechanism to guarantee the integrity of the files.
Suppose you have two important files, file1 and file2, that you need to protect. You can use the --check option to confirm the files haven't changed. First, create a checksum file for both these important files and keep it safe:
# md5sum file1 file2 > f1f2
Later, when you want to verify that the files are still untouched:
# md5sum --check f1f2
file1: OK
file2: OK
This shows clearly that the files have not been modified. Now change one file and check the MD5:
# cp file2 file1
# md5sum --check f1f2
file1: FAILED
file2: OK
md5sum: WARNING: 1 of 2 computed checksums did NOT match
The output clearly shows that file1 has been modified.
md5sum is an extremely powerful command for security implementations. Some of the configuration files you manage, such as listener.ora, tnsnames.ora, and init.ora, are extremely critical in a successful Oracle infrastructure and any modification may result in downtime. These are typically a part of your change control process. Instead of just relying on someone’s word that these files have not changed, enforce it using MD5 checksum. Create a checksum file and whenever you make a planned change, recreate this file. As a part of your compliance, check this file using the md5sum command. If someone inadvertently updated one of these key files, you would immediately catch the change.
In the same line, you can also create MD5 checksums for all executables in $ORACLE_HOME/bin and compare them from time to time for unauthorized modifications.
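A small sketch of both ideas; the baseline file names and locations below are just examples:
$ md5sum $TNS_ADMIN/listener.ora $TNS_ADMIN/tnsnames.ora > /u01/secure/net_files.md5
$ md5sum $ORACLE_HOME/bin/* > /u01/secure/oh_bin.md5
$ md5sum --check /u01/secure/net_files.md5   # run periodically as part of compliance checks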
alias and unalias
Suppose you want to check the ORACLE_SID environment variable set in your shell. You will have to type:
echo $ORACLE_SID
As a DBA or a developer, you frequently use this command and will quickly become tired of typing the entire 16 characters. Is there is a simpler way?
There is: the alias command. With this approach you can create a short alias, such as "os", to represent the entire command:
alias os='echo $ORACLE_SID'
Now whenever you want to check the ORACLE_SID, you just type "os" (without the quotes) and Linux executes the aliased command.
However, if you log out and log back in, the alias is gone and you have to enter the alias command again. To eliminate this step, all you have to do is to put the command in your shell's profile file. For bash, the file is .bash_profile (note the period before the file name, that's part of the file's name) in your home
directory. For bourne and korn shells, it's .profile, and for c-shell, .cshrc.
You can create an alias with any name. For instance, I always create an alias for the command sqlplus "/as sysdba":
alias sql='sqlplus "/as sysdba"'
Here is a list of some very useful aliases I like to define:
alias bdump='cd $ORACLE_BASE/admin/$ORACLE_SID/bdump'
alias l='ls -d .* --color=tty'
alias ll='ls -l --color=tty'
alias mv='mv -i'
alias oh='cd $ORACLE_HOME'
alias os='echo $ORACLE_SID'
alias tns='cd $ORACLE_HOME/network/admin'
To see what aliases have been defined in your shell, use alias without any parameters
$alias
To remove an alias previously defined, just use the unalias command:
$ unalias rm
xargs
Most Linux commands are about getting an output: a list of files, a list of strings, and so on. But what if you want to use some other command with the output of the previous one as a parameter? For example, the file command shows the type of the file (executable, ascii text, and so on); you can manipulate the output to show only the filenames and now you want to pass these names to the ls -l command to see the timestamp. The command xargs
does exactly that. It allows you to execute some other commands on the output.
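Putting together the pieces described below, the full pipeline being dissected is along these lines (reconstructed from the description):
$ file -Lz * | grep ASCII | cut -d":" -f1 | xargs ls -ltr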
Let's dissect this command string. The first, file -Lz *, finds files that are symbolic links or compressed. It passes the output to the next command, grep
ASCII, which searches for the string "ASCII" in them and produces the output similar to this:
alert_DBA102.log: ASCII English text
alert_DBA102.log.Z: ASCII text (compress'd data 16 bits)
dba102_asmb_12307.trc.Z: ASCII English text (compress'd data 16 bits)
dba102_asmb_20653.trc.Z: ASCII English text (compress'd data 16 bits)
Since we are interested in the file names only, we applied the next command, cut -d":" -f1, to show the first field only:
alert_DBA102.log
alert_DBA102.log.Z
dba102_asmb_12307.trc.Z
dba102_asmb_20653.trc.Z
Now, we want to use the ls -l command and pass the above list as parameters, one at a time. The xargs command allows you to do that. The last part,
xargs ls -ltr, takes the output and executes the command ls -ltr against them, as if executing:
ls -ltr alert_DBA102.log
ls -ltr alert_DBA102.log.Z
ls -ltr dba102_asmb_12307.trc.Z
ls -ltr dba102_asmb_20653.trc.Z
Thus xargs is not useful by itself, but is quite powerful when combined with other commands.
Here is another example, where we want to count the number of lines in those files: the same pipeline simply feeds the file names to wc -l through xargs. A related option is -p, which makes xargs prompt before each command; for example, passing the files to vi produces a prompt like this:
vi alert_DBA102.log dba102_cjq0_14493.trc dba102_mmnl_14497.trc dba102_reco_14491.trc dba102_rvwr_14518.trc ?...
Here xargs asks you to confirm before running each command. If you press "y", it executes the command. You will find it immensely useful when you take some potentially damaging and irreversible operation on a file, such as removing or overwriting it.
The -t option uses a verbose mode; it displays the command it is about to run, which is a very helpful option during debugging.
What if the output passed to the xargs is blank? Consider:
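A sketch of the kind of pipeline being discussed (the command run by xargs, wc -l, is only illustrative):
$ file * | grep SSSSSS | cut -d":" -f1 | xargs -t wc -l
wc -l
0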
Here, searching for "SSSSSS" produces no match, so the input to xargs is all blanks, as shown in the second line (produced because we used -t, the verbose option). Although this may be useful, in some cases you may want to stop xargs if there is nothing to process; if so, you can use the -r option:
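With -r added, xargs exits without running anything when its input is empty (same illustrative pipeline):
$ file * | grep SSSSSS | cut -d":" -f1 | xargs -t -r wc -l
$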
Suppose you want to remove the files using the rm command, which should be the argument to the xargs command. However, rm can accept a limited
number of arguments. What if your argument list exceeds that limit? The -n option to xargs limits the number of arguments in a single command line.
Here is how you can limit it to only two arguments per command line (see the sketch below): even if five files are passed to xargs, only two files are passed to ls -ltr at a time.
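A sketch of the -n option in action (the file names are illustrative):
$ ls file1 file2 file3 file4 file5 | xargs -n 2 ls -ltr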
cat : This command is used to create and view files.
$ cat file1
$ cat file1 > newfile // overwrite newfile with the contents of file1
$ cat file1 >> newfile // append the contents of file1 to newfile
$ cat /proc/meminfo
free : To display the amount of free and used memory (including the total in the system), enter:
$ free -m
$ free -g
$ free -k
System copying command in Linux
scp : This command is used for copying files from one system to another.
$ scp /home/oracle/sukhi.txt oracle@rac4:/home/oracle/sukhi.txt
Here oracle@rac4 is the target machine, /home/oracle is the target location and sukhi.txt is the file name.
Linux Compression Utilities

Compression Tool   File Extension   Decompression Tool
bzip2              .bz2             bunzip2
gzip               .gz              gunzip
zip                .zip             unzip
bzip2 : This command is used to compress files. $ bzip2 mydb2
The file is compressed and saved as mydb2.bz2. To decompress it: $ bunzip2 mydb2.bz2
gzip : This command is used to compress files. $ gzip mydb2
The file is compressed and saved as mydb2.gz. To decompress it: $ gunzip mydb2.gz
zip : This command is used to compress a directory. $ zip -r mydb2.zip filesdir // directory
The directory is compressed and saved as mydb2.zip. To decompress it:
$ unzip mydb2.zip
Connect to another system
ssh : This is the command used to connect from one system to another. $ ssh oracle@rac4
Last login: Sun Nov 28 13:41:50 2010 from 10.17.57.57
Find the space utilization
du -k : This command is used for checking the disk space used under a directory. $ du -k /home/oracle
df -k : This command is used for getting information about a filesystem (/dev/sda1): mount point, used space, available space, use %, etc. Sizes are displayed in KB.
$ df -k /home/oracle
Filesystem   1K-blocks   Used       Available   Use%   Mounted on
/dev/sda1    28898080    10812328   16617816    40%    /
df -h : This command gives the same filesystem information in human-readable format, i.e. sizes are shown in GB, MB, etc. [oracle@rac5 ~]$ df -h /home/oracle
Filesystem   Size   Used   Avail   Use%   Mounted on
/dev/sda1    28G    11G    16G     40%    /
# du -ch | grep total -- total size of a folder
Commands for reading and printing in shell scripts
read : This command is used to read input from the user. The input is read and stored in a variable: read variable
echo : This command is used to print something to the screen. We can also display the values of variables: echo "sowfeer" OR echo $variable
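A minimal sketch combining the two (the prompt text and variable name are illustrative):
echo "Enter the ORACLE_SID:"
read sid
echo "You entered $sid"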
How to list the contents of a directory to a text file
ls : By using the ls command we can do it: ls /home/oracle/* > /tmp/sowfeer.txt
Change ownership Command
chown : This command is used to change the ownership of a file. Syntax : chown [-R] newowner filenames
Give ownership to user hope for the file file.txt:
chown hope file.txt
Give chown permissions to hope for all files in the work directory.
chown -R hope work
Changing file permissions
chmod : This command is used for changing file permissions.
# chmod o+r remove3.txt // for others
# chmod u+r remove3.txt // for the owner (user)
# chmod g+r remove3.txt // for the group
The permissions can also be encoded as an octal number, as shown below:
chmod 755 file # Owner=rwx Group=r-x Other=r-x
chmod 500 file2 # Owner=r-x Group=--- Other=---
chmod 644 file3 # Owner=rw- Group=r-- Other=r--
chmod +x file # Add execute permission to file for all
chmod o-r file # Remove read permission for others
chmod a+w file # Add write permission for everyone
usermod : command is used to modify the user settings after a user has been created.
root> usermod -s /bin/csh my_user
userdel : command is used to delete existing users.
root> userdel -r my_user
The "-r" flag removes the default directory.
passwd : command is used to set, or reset, the users login password.
root> passwd my_user
who : command can be used to list all users who have OS connections.
root> who
root> who | head -5
root> who | tail -5
root> who | grep -i ora
root> who | wc -l
The "head -5" command restricts the output to the first 5 lines of the who command.
The "tail -5" command restricts the output to the last 5 lines of the who command.
The "grep -i ora" command restricts the output to lines containing "ora".
The "wc -l" command returns the number of lines from "who", and hence the number of connected users.
Process Management
Ps : command lists current process information.
root> ps
root> ps -ef | grep -i ora
Specific processes can be killed by specifying the process id in the kill command.
root> kill -9 12345
uname and hostname : commands can be used to get information about the host.
root> uname -a
OSF1 oradb01.lynx.co.uk V5.1 2650 alpha
root> uname -a | awk '{ print $2 }'
oradb01.lynx.co.uk
root> hostname
oradb01.lynx.co.uk
Error Lines in Files
You can return the error lines in a file using.
root> cat alert_LIN1.log | grep -i ORA-
The "grep -i ORA-" command limits the output to lines containing "ORA-". The "-i" flag makes the comparison case insensitive. A count of the error lines can be returned using the "wc" command. This normally give a word count, but the "-l" flag alteres it to give a line count.
root> cat alert_LIN1.log | grep -i ORA- | wc -l
File Exists Check
The Korn shell allows you to check for the presence of a file using the "test -s" command. In the following script a backup log is renamed and moved if it is present.
This is often necessary where CRON jobs are run from the root user rather than the oracle user.
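A minimal Korn shell sketch of the check described above; the log file name and destination are hypothetical:
if test -s /backup/logs/backup.log ; then
  mv /backup/logs/backup.log /backup/logs/backup_$(date +%Y%m%d).log
fi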
Compress Files
In order to save space on the filesystem you may wish to compress files such as archived redo logs. This can be done using either the gzip or the compress command. The gzip command produces a compressed copy of the original file with a ".gz" extension.
The gunzip command reverses this process.
gzip myfile
gunzip myfile.gz
The compress command results in a compressed copy of the original file with a ".Z" extension. The uncompress command reverses this process.
compress myfile
uncompress myfile
General Performance, System Activity, Hardware and System Information
Vmstat
# vmstat 3
Display Memory Utilization Slabinfo
# vmstat -m
Get Information About Active / Inactive Memory Pages
# vmstat -a
$ vmstat 5 3
Displays system statistics (5 seconds apart; 3 times).
procs     memory           page                             disk           faults            cpu
r b w     swap     free    re  mf  pi  po  fr   de   sr     s0 s1 s2 s3    in   sy   cs      us sy id
0 0 0     28872    8792    8   5   172 142 210  0    24     3  11 17 2     289  1081 201     14 6  80
0 0 0     102920   1936    1   95  193 6   302  1264 235    12 1  0  3     240  459  211     0  2  97
0 0 0     102800   1960    0   0   0   0   0    464  0      0  0  0  0     107  146  29      0  0  100
Having any processes in the b or w columns is a sign of a problem system. Having an id of 0 is a sign that the CPU is over-burdened. High values in pi and po show excessive paging.
procs (Reports the number of processes in each of the following states)
r : in run queue
b : blocked for resources (I/O, paging etc.)
w : runnable but swapped
memory (Reports on usage of virtual and real memory)
swap : swap space currently available (Kbytes)
free : size of free list (Kbytes)
page (Reports information about page faults and paging activity, units per second)
re : page reclaims
mf : minor faults
pi : Kbytes paged in
po : Kbytes paged out
fr : Kbytes freed
de : anticipated short-term memory shortfall (Kbytes)
sr : pages scanned by the clock algorithm
disk (Reports the number of disk operations per second for up to 4 disks)
faults (Reports the trap/interrupt rates, per second)
in : (non clock) device interrupts
sy : system calls
cs : CPU context switches
cpu (Reports the breakdown of percentage usage of CPU time, averaged across all CPUs)
us : user time
sy : system time
id : idle time
Find Out Who Is Logged on And What They Are Doing
w : This command displays information about the users currently on the machine and their processes. # w username   e.g. # w sukhi
Tell How Long The System Has Been Running
The uptime command can be used to see how long the server has been running. The current time, how long the system has been running, how many users are currently logged on, and the system load averages for the past 1, 5, and 15 minutes. # uptime
Top command to find out Linux cpu usage
$ top
CPU Usage
sar
$ sar -u 10 8
Reports CPU Utilization (10 seconds apart; 8 times).
Time      %usr   %sys   %wio   %idle
11:57:31  72     28     0      0
11:57:41  70     30     0      0
11:57:51  70     30     0      0
11:58:01  68     32     0      0
11:58:11  67     33     0      0
11:58:21  65     28     0      7
11:58:31  73     27     0      0
11:58:41  69     31     0      0
Average   69     30     0      1
%usr: Percent of CPU in user mode %sys: Percent of CPU in system mode %wio: Percent of CPU running idle with a process waiting for block I/O %idle: Percent of CPU that is idle
Memory Usage
The command free displays the total amount of free and used physical and swap memory in the system, as well as the buffers used by the kernel. # free
Average CPU Load, Disk Activity
The command iostat report Central Processing Unit (CPU) statistics and input/output statistics for devices, partitions and network filesystems (NFS). # iostat
Linux Track NFS Directory / Disk I/O Stats
# iostat -x -n
# iostat -n
Linux Find Out Virtual Memory PAGESIZE
To display size of a page in bytes, enter: $ getconf PAGESIZE OR $ getconf PAGE_SIZE
Collect and Report System Activity
The sar command is used to collect, report, and save system activity information. To see network counter, enter: # sar -n DEV | more
To display the network counters from the 24th: # sar -n DEV -f /var/log/sa/sa24 | more
You can also display real time usage using sar: # sar 4 5
Howto collect Linux system utilization data into a file
The sa1 command is designed to be started automatically by the cron command. Type the following command to list files: # ls /var/log/sa
How do I copy log files?
You can copy all these log files using ssh/scp or ftp to another computer. You can use the sar command to read the binary raw data files, e.g. # sar -f sa13
Comparison of CPU utilization
To display a comparison of CPU utilization, 2 seconds apart, 5 times, use:
# sar -u 2 5
Output (for each 2 seconds. 5 lines are displayed):
Linux 2.6.9-42.0.3.ELsmp (www1lab2.xyz.ac.in) 01/13/2007
05:33:24 AM CPU %user %nice %system %iowait %idle
05:33:26 AM all 9.50 0.00 49.00 0.00 41.50
05:33:28 AM all 16.79 0.00 74.69 0.00 8.52
05:33:30 AM all 17.21 0.00 80.30 0.00 2.49
05:33:32 AM all 16.75 0.00 81.00 0.00 2.25
05:33:34 AM all 14.29 0.00 72.43 0.00 13.28
Average: all 14.91 0.00 71.49 0.00 13.61
Where,
-u 12 5 : Report CPU utilization. The following values are displayed:
%user: Percentage of CPU utilization that occurred while executing at the user level (application).
%nice: Percentage of CPU utilization that occurred while executing at the user level with nice priority.
%system: Percentage of CPU utilization that occurred while executing at the system level (kernel).
%iowait: Percentage of time that the CPU or CPUs were idle during which the system had an outstanding disk I/O request.
%idle: Percentage of time that the CPU or CPUs were idle and the system did not have an outstanding disk I/O request.
To get multiple samples and multiple reports, set an output file for the sar command and run it as a background process: # sar -o output.file 12 8 >/dev/null 2>&1 &
Better, use the nohup command so that you can log out and check back on the report later: # nohup sar -o output.file 12 8 >/dev/null 2>&1 &
All data is captured in binary form and saved to a file (data.file). The data can then be selectively displayed with the sar command using the -f option. # sar -f data.file
Multiprocessor Usage
mpstat : The mpstat command displays activities for each available processor, processor 0 being the first one. Use mpstat -P ALL to display average CPU utilization per processor: # mpstat -P ALL
Display the utilization of each CPU individually using mpstat
# mpstat
Display five reports of global statistics among all processors at two second intervals, enter:
# mpstat 2 5
Display five reports of statistics for all processors at two second intervals, enter:
# mpstat -P ALL 2 5
$ mpstat 10 2
Reports per-processor statistics on Sun Solaris (10 seconds apart; 2 times).
CPU  minf  mjf  xcal  intr  ithr  csw  icsw  migr  smtx  srw  syscl   usr  sys  wt  idl
0    6     8    0     438   237   246  85    0     0     21   8542    23   9    9   59
0    0     29   0     744   544   494  206   0     0     95   110911  65   29   6   0
Process Memory Usage
The command pmap reports the memory map of a process. Use this command to find causes of memory bottlenecks. # pmap -d PID
To display process memory information for pid # 47394, enter: # pmap -d 47394
To display process mappings, type $ pmap pid $ pmap 3724
The -x option can be used to provide information about the memory allocation and mapping types per mapping. The amount of resident, non-shared anonymous, and locked memory is shown for each mapping:
pmap -x 3526
Displays The Processes
The ps command reports a snapshot of the current processes. ps is similar to top but provides a more detailed, one-time listing.
To select all processes use the -A or -e option: # ps -A
Show Long Format Output
# ps -Al To turn on extra full mode (it will show command line arguments passed to process): # ps -AlF
The PID column can then be matched with the SPID column on the V$PROCESS view to provide more information on the process.
SELECT a.username,
a.osuser,
a.program,
spid,
sid,
a.serial#
FROM v$session a,
v$process b
WHERE a.paddr = b.addr
AND spid = '&pid';
Find out who is monopolizing or eating the CPUs
Finally, you need to determine which process is monopolizing or eating the CPUs. The following command displays the top 10 CPU users on the Linux system. # ps -eo pcpu,pid,user,args | sort -k1 -nr | head -10 OR # ps -eo pcpu,pid,user,args | sort -k1 -nr | less
In this example output the vmware-vmx process is eating up lots of CPU power. The ps command displays every process (-e) with a user-defined format (-o pcpu,pid,user,args). The first field is pcpu (CPU utilization), and the output is sorted on it in reverse order to show the top 10 CPU-consuming processes.
iostat: You can also use the iostat command, which reports Central Processing Unit (CPU) statistics and input/output statistics for devices and partitions. It can be used to find out your system's average CPU utilization since the last reboot. # iostat
You may want to use the following command, which gives you three reports at 5 second intervals (whereas the previous command gives information since the last reboot): $ iostat -xtc 5 3
How to count a word, line, character
wc: This command is used for word, line, and character counts.
cat sukhi.txt | wc -l   # line count
cat sukhi.txt | wc -m   # character count
cat sukhi.txt | wc -w   # word count
How to find the count of files which starts with 'r' in a directory
ls -d /home/oracle/r* | wc -l
This command finds the count of files that start with the character 'r' in a directory. Here r* lists the names starting with 'r' (-d keeps ls from descending into matching directories), and wc -l counts the listed entries.
How to search a pattern and print the contents
cat description.txt | grep 'india'
This command searches for a pattern and prints the matching lines. Here the grep command does the pattern searching, the cat command prints the file contents, and the | (pipe) symbol connects the output of one command to the input of the next. (You could also run grep 'india' description.txt directly.)
grep - globally search for regular expression and print
grep: This command stands for 'globally search for regular expression and print'. It searches for a particular pattern of characters and displays all lines that contain that pattern. grep reads standard input when no file is given; if we give a line as input, it searches for the pattern in that line.
How do I forcefully unmount a Linux disk partition?
If your device name is /dev/sda1, enter the following command as the root user: # lsof | grep '/dev/sda1'
Output:
vi 4453 vivek 3u BLK 8,1 8167 /dev/sda1
The above output tells you that user vivek has a vi process running that is using /dev/sda1. All you have to do is stop the vi process and run umount again. As soon as that program terminates its task, the device will no longer be busy and you can unmount it with the following command: # umount /dev/sda1
Linux fuser command to forcefully unmount a disk partition
Suppose you have /dev/sda1 mounted on /mnt directory then you can use fuser command as follows:
Type the command to unmount /mnt forcefully: # fuser -km /mnt
Where,
-k : Kill processes accessing the file.
-m : Specifies a file on a mounted file system or a block device that is mounted. In the above example you are using /mnt.
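Before killing anything, it is often safer to first list the processes that are using the mount point; a quick check, assuming /mnt as above: # fuser -vm /mnt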
Linux umount command to unmount a disk partition
You can also try the umount command with the -l option: # umount -l /mnt
Where,
-l : Also known as a lazy unmount. Detach the filesystem from the filesystem hierarchy now, and clean up all references to the filesystem as soon as it is not busy anymore. This option works with kernel version 2.4.11 and above only.
If you would like to unmount a NFS mount point then try following command: # umount -f /mnt
Where,
-f: Force unmount in case of an unreachable NFS system
Caution: Using these commands or option can cause data loss for open files; programs which access files after the file system has been unmounted will get an error.
GUI tools for your laptops/desktops
The above tools/commands are quite useful on a remote server. For a local system with the X GUI installed, you can try out gnome-system-monitor. It allows you to view and control the processes running on your system. You can access detailed memory maps, send signals, and terminate the processes. $ gnome-system-monitor
Various Kernel Statistics
/proc file system provides detailed information about various hardware devices and other Linux kernel information. Common /proc examples: # cat /proc/cpuinfo # cat /proc/meminfo # cat /proc/zoneinfo # cat /proc/mounts
Automatic Startup Scripts on Linux
Create a file in the /etc/init.d/ directory (in this case the file is called "myservice") containing the commands you wish to run at startup and/or shutdown.
Use the chmod command to set the privileges to 750.
chmod 750 /etc/init.d/myservice
Link the file into the appropriate run-level script directories.
Associate the "myservice" service with the appropriate run levels.
chkconfig --level 345 myservice on
The script should now be automatically run at startup and shutdown (with "start" or "stop" as a commandline parameter) like other service initialization scripts.
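As an illustration only, a minimal /etc/init.d/myservice skeleton might look like the following; the chkconfig header values and the start/stop commands are placeholders you would replace with your own:
#!/bin/sh
# chkconfig: 345 99 10
# description: Example start/stop wrapper for myservice (placeholder commands).
case "$1" in
  start)
    echo "Starting myservice"
    # /path/to/your/start_command   (placeholder)
    ;;
  stop)
    echo "Stopping myservice"
    # /path/to/your/stop_command    (placeholder)
    ;;
  *)
    echo "Usage: $0 {start|stop}"
    exit 1
    ;;
esac
exit 0
The "# chkconfig: 345 99 10" header comment is what lets chkconfig register the script at the chosen run levels with the given start/stop priorities.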
NFS Mount (Sun)
The following daemons must be running for the share to be seen by a PC.
/usr/lib/nfs/nfsd -a
/usr/lib/nfs/mountd
/opt/SUNWpcnfs/sbin/rpc.pcnfsd
To see a list of the NFS shares already exported, type:
exportfs
First the mount point must be shared so it can be seen by remote machines.
share -F nfs -o ro /cdrom
Next the share can be mounted on a remote machine by root using.
mkdir /cdrom#1
mount -o ro myhost:/cdrom /cdrom#1
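To make the share persistent across reboots on Solaris, the same share command is conventionally added to /etc/dfs/dfstab (shown here as a sketch; verify the file location on your system):
# /etc/dfs/dfstab entry that shares /cdrom read-only at boot
share -F nfs -o ro /cdrom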
Useful Files
Here are some files that may be of use.
Path               Contents
/etc/passwd        User settings.
/etc/group         Group settings for users.
/etc/hosts         Hostname lookup information.
/etc/system        Kernel parameters for Solaris.
/etc/sysconfigtab  Kernel parameters for Tru64.
Network Statistics
ss
The ss command is used to dump socket statistics
Display Sockets Summary
List currently established, closed, orphaned and waiting TCP sockets, enter: # ss -s
Display All Open Network Ports
# ss -l
Type the following to see the process name using each open socket: # ss -pl
Find out who is responsible for opening socket / port # 4949: # ss -lp | grep 4949
Display All TCP Sockets
# ss -t -a
Display All UDP Sockets
# ss -u -a
Display All Established SMTP Connections
# ss -o state established '( dport = :smtp or sport = :smtp )'
Display All Established HTTP Connections
# ss -o state established '( dport = :http or sport = :http )'
Find All Local Processes Connected To X Server
# ss -x src /tmp/.X11-unix/*
List All The Tcp Sockets in State FIN-WAIT-1
List all the TCP sockets in state FIN-WAIT-1 for our httpd to network 202.54.1/24 and look at their timers: # ss -o state fin-wait-1 '( sport = :http or sport = :https )' dst 202.54.1/24
Get Detailed Information about Particular IP address Connections Using netstat Command
You can also list abusive IP addresses using this method. # netstat -nat | awk '{print $6}' | sort | uniq -c | sort -n
Dig out more information about a specific IP address: # netstat -nat | grep {IP-address} | awk '{print $6}' | sort | uniq -c | sort -n
A busy server can give out more information: # netstat -nat | grep 202.54.1.10 | awk '{print $6}' | sort | uniq -c | sort -n
Get List Of All Unique IP Address
To print a list of all unique IP addresses connected to the server, enter: # netstat -nat | awk '{ print $5}' | cut -d: -f1 | sed -e '/^$/d' | sort | uniq
To print the total number of unique IP addresses, enter: # netstat -nat | awk '{ print $5}' | cut -d: -f1 | sed -e '/^$/d' | sort | uniq | wc -l
Find Out If Box is Under DoS Attack or Not
If you think your Linux box is under attack, print out a list of open connections sorted by the number of connections per IP address, enter: # netstat -atun | awk '{print $5}' | cut -d: -f1 | sed -e '/^$/d' | sort | uniq -c | sort -n
Display Summary Statistics for Each Protocol
Simply use netstat -s: # netstat -s | less # netstat -t -s | less # netstat -u -s | less # netstat -w -s | less # netstat -s
netstat command to display established connections
Type the command as follows: $ netstat -nat
To display client / server ESTABLISHED connections only: $ netstat -nat | grep 'ESTABLISHED'
How do I use tcptrack to monitor and track TCP connections?
tcptrack requires only one parameter to run, i.e. the name of an interface such as eth0, eth1, etc. Use the -i flag followed by the name of the interface that you want tcptrack to monitor. # tcptrack -i eth0 # tcptrack -i eth1
You can just monitor TCP port 25 (SMTP) # tcptrack -i eth0 port 25
The next example will only show web traffic monitoring on port 80: # tcptrack -i eth1 port 80
tcptrack can also take a pcap filter expression as an argument. The format of this filter expression is the same as that of tcpdump and other libpcap-based sniffers. The following example will only show connections from host 76.11.22.12: # tcptrack -i eth0 src or dst 76.11.22.12
Display Interface Table
You can easily display dropped and total transmitted packets with netstat for eth0: # netstat --interfaces eth0
For more details on the commands used above, see the man pages:
$ man netstat $ man cut $ man awk $ man sed $ man grep
Get Information about All Running Services Remotely
All you have to do is enable the netstat service in /etc/inetd.conf under UNIX / Linux: # vi /etc/inetd.conf
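On BSD/Sun style systems the netstat service line in inetd.conf typically looks something like the following; treat the exact user, path, and arguments as assumptions and check your own file before uncommenting:
netstat  stream  tcp  nowait  root  /usr/bin/netstat  netstat -f inet
After editing the file, remember to make inetd reread its configuration (for example by sending it a HUP signal).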
Next, use telnet to connect to the netstat service (port 15) and get network connection information: $ telnet server-name netstat $ telnet 192.168.1.5 15
Linux / UNIX Find Out What Program / Service is Listening on a Specific TCP Port
Under Linux and UNIX you can use any one of the following commands to get the listing on a specific TCP port: => lsof : list open files including ports.
=> netstat : The netstat command symbolically displays the contents of various network-related data and information.
lsof
Type the following command to see IPv4 port(s), enter: # lsof -Pnl +M -i4
Type the following command to see IPv6 listing port(s), enter: # lsof -Pnl +M -i6
The first column, COMMAND, gives information about the program name. Please see the output header for details. For example, the gweather* command gets weather report information from the U.S. National Weather Service (NWS) servers (140.90.128.70), including the Interactive Weather Information Network (IWIN) and other weather services.
Where,
-P : This option inhibits the conversion of port numbers to port names for network files. Inhibiting the conversion may make lsof run a little faster. It is also useful when port name lookup is not working properly.
-n : This option inhibits the conversion of network numbers to host names for network files. Inhibiting conversion may make lsof run faster. It is also useful when host name lookup is not working properly.
-l : This option inhibits the conversion of user ID numbers to login names. It is also useful when login name lookup is working improperly or slowly.
+M : Enables the reporting of portmapper registrations for local TCP and UDP ports.
-i4 : IPv4 listing only
-i6 : IPv6 listing only
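To narrow the listing down to a single TCP port, the same options can be combined with a port filter (port 80 here is just an example): # lsof -Pnl +M -i TCP:80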
netstat
Type the command as follows: # netstat -tulpn OR # netstat -npl
Last column PID/Program name gives out information regarding program name and port. Where,
-t : TCP port
-u : UDP port
-l : Show only listening sockets.
-p : Show the PID and name of the program to which each socket / port belongs
-n : No DNS lookup (speed up operation)
/etc/services file
/etc/services is a plain ASCII file providing a mapping between friendly textual names for internet services, and their underlying assigned port numbers and protocol types. Every networking program should look into this file to get the port number (and protocol) for its service. You can view this file with the help of cat or less command: $ cat /etc/services $ grep 110 /etc/services $ less /etc/services
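Note that grep 110 will also match unrelated entries such as port 2110; to look up an exact port/protocol pair you can tighten the pattern, for example: $ grep -w '110/tcp' /etc/services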
Detailed Network Traffic Analysis
tcpdump is a simple command that dumps traffic on a network. However, you need a good understanding of the TCP/IP protocol to utilize this tool. For example, to display traffic information about DNS, enter: # tcpdump -i eth1 'udp port 53'
To display all IPv4 HTTP packets to and from port 80, i.e. print only packets that contain data, not, for example, SYN and FIN packets and ACK-only packets, enter: # tcpdump 'tcp port 80 and (((ip[2:2] - ((ip[0]&0xf)<<2)) - ((tcp[12]&0xf0)>>2)) != 0)'
To display all FTP sessions to 202.54.1.5, enter: # tcpdump -i eth1 'dst 202.54.1.5 and (port 21 or port 20)'
To display all HTTP sessions to 192.168.1.5: # tcpdump -ni eth0 'dst 192.168.1.5 and tcp and port http'
To capture packets to a file that you can later open in Wireshark for detailed analysis, enter: # tcpdump -n -i eth1 -s 0 -w output.txt src or dst port 80
Monitor HTTP Packets ( packet sniffing )
Login as a root and type the following command at console: # tcpdump -n -i {INTERFACE} -s 0 -w {OUTPUT.FILE.NAME} src or dst port 80 # tcpdump -n -i eth1 -s 0 -w output.txt src or dst port 80
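The capture file written with -w is in pcap format; you can read it back later with tcpdump itself (or open it in Wireshark): # tcpdump -n -r output.txt | head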
System Calls
Run strace against /bin/foo and capture its output to a text file in output.txt: $ strace -o output.txt /bin/foo
You can strace the webserver process and see what it is doing. For example, to strace a php5 fastcgi process, enter: $ strace -p 22254 -s 80 -o /tmp/debug.lighttpd.txt
To see only a trace of the open and read system calls, enter: $ strace -e trace=open,read -p 22254 -s 80 -o debug.webserver.txt
Where,
-o filename : Write the trace output to the file filename rather than to screen (stderr).
-p PID : Attach to the process with the process ID pid and begin tracing. The trace may be terminated at any time by a keyboard interrupt signal (hit CTRL-C). strace will respond by detaching itself from the traced process(es) leaving it (them) to continue running. Multiple -p options can be used to attach to up to 32 processes in addition to command (which is optional if at least one -p option is given).
-s SIZE : Specify the maximum string size to print (the default is 32).
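Another useful variant is the summary mode, which counts and times each system call instead of printing every line (same example PID as above; stop it with CTRL-C): $ strace -c -p 22254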
Refer to strace man page for more information: $ man strace
Linux / UNIX: Scanning network for open ports with nmap command
nmap port scanning
TCP Connect scanning for localhost and network 192.168.0.0/24 # nmap -v -sT localhost # nmap -v -sT 192.168.0.0/24
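To limit the scan to a single port (SSH port 22 is used here purely as an example): # nmap -v -sT -p 22 192.168.0.0/24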