Monday, June 27, 2011

R12 – Various Log File Locations:

Startup/shutdown log files for the Application Tier in R12 are under $LOG_HOME/appl/admin/log/:

– TNS Listener start/stop log – adalnctl.txt
– Fulfillment Server start/stop log – jtffmctl.txt
– Oracle HTTP Server start/stop log – adapcctl.txt
– Concurrent Managers and ICM start/stop log – adcmctl.txt
– Forms OC4J start/stop log – adformsctl.txt
– OACore OC4J start/stop log – adoacorectl.txt
– OAFM OC4J start/stop log – adoafmctl.txt
– OPMN start/stop log – adopmnctl.txt
Tech Stack 10.1.3 (Web/HTTP Server) logs:

– AD script log files (e.g. from adapcctl.sh) – $INST_TOP/logs/appl/admin/log
– CM log files ($APPLCSF/$APPLLOG) – $INST_TOP/logs/appl/conc/log
– AD tools log files (e.g. ADPATCH) – $APPL_CONFIG_HOME/admin/$TWO_TASK/log
– OPMN log files (text and ODL) – $ORA_CONFIG_HOME/10.1.3/opmn/logs (may move to $INST_TOP/logs/10.1.3/opmn)
– Apache log files (text and ODL) – $INST_TOP/logs/10.1.3/Apache/
– OC4J log files (text) – $INST_TOP/logs/10.1.3/j2ee/oacore/
– OC4J log files (ODL) – $INST_TOP/logs/10.1.3/j2ee/oacore/log/oacore_default_group_1/oc4j
Log files related to cloning in R12:
Preclone log files in source instance
– Database Tier – $ORACLE_HOME/appsutil/log/$CONTEXT_NAME/ (StageDBTier_MMDDHHMM.log)
– Application Tier – $INST_TOP/apps/$CONTEXT_NAME/admin/log/ (StageAppsTier_MMDDHHMM.log)
Clone log files in target instance
– Database Tier – $ORACLE_HOME/appsutil/log/$CONTEXT_NAME/ApplyDBTier_.log
– Apps Tier – $INST_TOP/apps/$CONTEXT_NAME/admin/log/ApplyAppsTier_.log
Patching related log files in R12
– Application Tier adpatch log – $APPL_TOP/admin/$SID/log/
– Developer (Developer/Forms & Reports 10.1.2) Patch – $ORACLE_HOME/.patch_storage
– Web Server (Apache) patch – $IAS_ORACLE_HOME/.patch_storage
– Database Tier opatch log – $ORACLE_HOME/.patch_storage
Autoconfig related log files in R12
Database Tier Autoconfig log :
– $ORACLE_HOME/appsutil/log/$CONTEXT_NAME/MMDDHHMM/adconfig.log
– $ORACLE_HOME/appsutil/log/$CONTEXT_NAME/MMDDHHMM/NetServiceHandler.log
Application Tier Autoconfig log :
– $INST_TOP/apps/$CONTEXT_NAME/admin/log/$MMDDHHMM/adconfig.log
Autoconfig context file location in R12
– $INST_TOP/apps/$CONTEXT_NAME/appl/admin/$CONTEXT_NAME.xml
Other log files in R12
Database Tier
Relink Log files :
– $ORACLE_HOME/appsutil/log/$CONTEXT_NAME/MMDDHHMM/make_$MMDDHHMM.log
Alert Log Files:
– $ORACLE_HOME/admin/$CONTEXT_NAME/bdump/alert_$SID.log
Network Logs:
– $ORACLE_HOME/network/admin/$SID.log
OUI Logs
OUI Inventory Logs :
– $ORACLE_HOME/admin/oui/$CONTEXT_NAME/oraInventory/logs
Application Tier
– $ORACLE_HOME/j2ee/DevSuite/log
– $ORACLE_HOME/opmn/logs
– $ORACLE_HOME/network/logs


Apps DBA Free Lance
Sukhwinder Singh

Friday, June 24, 2011

Disaster Recovery - LOSS OF SYSTEM AND CONTROLFILE

SQL> startup
ORACLE instance started.
Total System Global Area  612368384 bytes
Fixed Size                  1292036 bytes
Variable Size             415238396 bytes
Database Buffers          188743680 bytes
Redo Buffers                7094272 bytes
ORA-00205: error in identifying control file, check alert log for more info
 
Step 1: Using RMAN
C:\Users> RMAN TARGET /
RMAN> STARTUP;
connected to target database (not started)
RMAN> SET DBID=1276854982
executing command: SET DBID
RMAN> RESTORE CONTROLFILE FROM AUTOBACKUP;
(this requires CONFIGURE CONTROLFILE AUTOBACKUP ON to have been set beforehand)
Finished restore
RMAN> ALTER DATABASE MOUNT;
database mounted
released channel: ORA_DISK_1
RMAN> RESTORE DATABASE;
RMAN> RECOVER DATABASE;
RMAN> ALTER DATABASE OPEN RESETLOGS;



Enabling Archive Logs in a RAC Environment

Whether a single instance or clustered database, Oracle tracks (logs) all changes to database blocks in online redolog files. In an Oracle RAC environment, each instance will have its own set of online redolog files known as a thread. Each Oracle instance will use its set (group) of online redologs in a circular manner. Once an online redolog fills, Oracle moves to the next one. If the database is in "Archive Log Mode", Oracle will make a copy of the online redo log before it gets reused. A thread must contain at least two online redologs (or online redolog groups). The same holds true for a single instance configuration. The single instance must contain at least two online redologs (or online redolog groups).

The size of an online redolog file is completely independent of another instance's redolog size. Although in most configurations the size is the same, it may differ depending on the workload and backup/recovery considerations for each node. It is also worth mentioning that each instance has exclusive write access to its own online redolog files. In a correctly configured RAC environment, however, each instance can read another instance's current online redolog file to perform instance recovery if that instance was terminated abnormally. It is therefore a requirement that online redo logs be located on a shared storage device (just like the database files).

As already mentioned, Oracle writes to its online redolog files in a circular manner. When the current online redolog fills, Oracle will switch to the next one. To facilitate media recovery, Oracle allows the DBA to put the database into "Archive Log Mode" which makes a copy of the online redolog after it fills (and before it gets reused). This is a process known as archiving.

The Database Configuration Assistant (DBCA) allows users to configure a new database to be in archive log mode, however most DBAs opt to bypass this option. In cases like this, where the database is in noarchivelog mode, it is a simple task to put the database into archive log mode. Note however that this will require a short database outage. From one of the nodes in the Oracle RAC 11g configuration, use the following tasks to put a RAC-enabled database into archive log mode.
For the purpose of this article, I will use the node racnode1 which runs the racdb1 instance:
Login to one of the nodes (i.e. racnode1) and disable the cluster instance parameter by setting cluster_database to FALSE from the current instance:
    $ sqlplus "/ as sysdba"
    SQL> alter system set cluster_database=false scope=spfile sid='racdb1';

Shutdown all instances accessing the clustered database:  
  $ srvctl stop database -d racdb

Using the local instance, MOUNT the database:
    $ sqlplus "/ as sysdba"
    SQL> startup mount
 
Enable archiving:
    SQL> alter database archivelog;

Re-enable support for clustering by modifying the instance parameter cluster_database to TRUE from the current instance:
    SQL> alter system set cluster_database=true scope=spfile sid='racdb1';

Shutdown the local instance:
    SQL> shutdown immediate

Bring all instances back up using srvctl:
    $ srvctl start database -d racdb

(Optional) Bring any services (i.e. TAF) back up using srvctl:
    $ srvctl start service -d racdb

Login to the local instance and verify Archive Log Mode is enabled:
    $ sqlplus "/ as sysdba"
    SQL> archive log list
    Database log mode              Archive Mode
    Automatic archival             Enabled
    Archive destination            USE_DB_RECOVERY_FILE_DEST
    Oldest online log sequence     83
    Next log sequence to archive   84
    Current log sequence           84

After enabling Archive Log Mode, each instance in the RAC configuration can automatically archive redologs!



Tuesday, June 21, 2011

How to Display Time/Date During RMAN Operations

TIMESTAMP FOR UNIX
--------------------------
To get the time/date displayed during an RMAN operation, the OS command date should be used.

To display a timestamp in the RMAN log, run something similar to the following -

RMAN> run {
allocate channel a1 type disk;
.
.
.
HOST 'date';
release channel a1;
}
Output looks like -

RMAN-03022: compiling command: host

Mon Jun 20 16:31:06 IST 2011
RMAN-06134: host command complete

--------------------------------------------------------------------------------
If you want the time or date to be recorded in a file, you can use the following:

RMAN> run {
host 'echo Backup stored in logfile (logrman.txt) taken on - > /tmp/rmandate.log';
host 'date >> /tmp/rmandate.log';
}

This will produce a file that has -

Backup stored in logfile (logrman.txt) taken on -

Tue Jun 21 16:49:51 IST 2011



Block Concurrent Requests From Running

You can block concurrent requests from running via the manager work shift, by setting Processes to 0 for that work shift at a specific date and time.

Some things to remember when creating manager work shifts:
A. Date-specific work shifts override work shifts that do not specify a specific date.
B. Overlapping Work Shifts - Priority Levels:

1. The work shift with the more specific time period takes effect for the overlapping time period.
2. Overlapping work shifts with the same priority:
The work shift with the largest target processes takes effect.


Steps to remove unneeded context files after cloning

1. Look at $APPL_TOP/admin

Make sure only the target machine's *.xml file is present under $APPL_TOP/admin/.

2. Remove the source *.xml file if it is there

3. Then run the following from SQL*Plus:

SQL> EXEC FND_CONC_CLONE.SETUP_CLEAN;
SQL> COMMIT;
SQL> EXIT;

4. Re-run AutoConfig on every tier (DB tier(s) first, then Apps tier(s)) to repopulate the required system tables.

5. Bounce the Applications Services



How To Recreate the /appsutil/scripts/ directory

1. On the Application Tier (as the APPLMGR user):

2. Log in to the APPL_TOP environment (source the environment file).

3. Create the appsutil.zip file:

4. perl $AD_TOP/bin/admkappsutil.pl

5. This will create appsutil.zip in $APPL_TOP/admin/out.

6. On the Database Tier (as the ORACLE user):

7. Copy or FTP the appsutil.zip file to the RDBMS $ORACLE_HOME.

8. cd $ORACLE_HOME

9. unzip -o appsutil.zip

10. Generate your database context file:
On UNIX:
cd $ORACLE_HOME
. <CONTEXT_NAME>.env
cd $ORACLE_HOME/appsutil/bin
perl adbldxml.pl tier=db appsuser=<apps_user> appspasswd=<apps_password>

11. On UNIX, cd $ORACLE_HOME/appsutil/bin and run:
adconfig.sh contextfile=<context_file> appspass=<apps_password>

12. After running adconfig.sh, the scripts directory $ORACLE_HOME/appsutil/scripts/<CONTEXT_NAME>/ is created in the appsutil directory.

 

Steps to Run Autoconfig On Database Tier (DB-Tier)


1. Source the APPS environment:
i. go to the $APPL_TOP folder where APPS is installed
ii. run the file: . APPSORA.env  (or APPS<SID>_<hostname>.env)
iii. check that the APPS environment was sourced:
echo $ORACLE_HOME

2. Stop the Middle (APPS) Tier:
i. go to the $INST_TOP/admin/scripts folder
ii. run adstpall.sh

3. Source the RDBMS environment:
i. go to your RDBMS $ORACLE_HOME
ii. source the <SID>_<hostname>.env file
iii. check that the RDBMS environment was sourced:
echo $ORACLE_HOME


4. Run AutoConfig on the DB tier:
i. go to the $ORACLE_HOME/appsutil/scripts/<CONTEXT_NAME> folder
ii. run adautocfg.sh

5. Check the AutoConfig log file.
i. AutoConfig log files are stored under:

Application Tier
$INST_TOP/admin/log/<MMDDHHMM>/

Database Tier
$ORACLE_HOME/appsutil/log/<CONTEXT_NAME>/<MMDDHHMM>/
ii. if you have errors (check for the words ERROR or FAIL in the log files), solve them.

NOTE: it is very important to solve ALL errors from the AutoConfig log files before going further in using the other AD tools (i.e. adpatch, adadmin, etc.) or executing any administrative tasks.

6. Close the current TELNET/SSH/command window and open a new one in which to source the environment you need (this must be done so that old values from the previously sourced environment are not preserved).

7. Source the APPS environment again:
i. go to the $APPL_TOP folder where APPS is installed
ii. run the file: . APPSORA.env  (or APPS<SID>_<hostname>.env)
iii. check that the APPS environment was sourced:
echo $ORACLE_HOME

8. Restart the Middle (APPS) Tier:
i. go to the $INST_TOP/admin/scripts folder
ii. run adstrtal.sh



Saturday, June 18, 2011

ASM backup using RMAN, with the BACKUP AS COPY command:

TEST USING TRANSPORTABLE TABLESPACES, STEP BY STEP
1. Create tablespace accounts
2. Create table accounts on tablespace accounts, insert some records on it
3. Create a Directory for Data Pump and grant privileges on it to Public
4. Check that the tablespace can be part of a transportable tablespace set
5. Flush online logs to disk
6. Set tablespace accounts read only
7. Export tablespace accounts using data pump
8. Set tablespace accounts offline
9. Backup the tablespace using RMAN BACKUP AS COPY, with KEEP UNTIL SYSDATE + 2 years
10. Drop tablespace accounts including datafiles
11. Confirm that you cannot access the table on the dropped tablespace
Restore tablespace steps
12. Restore the datafile backup to ASM
13. Import tablespace accounts back into the database using data pump
14. Set tablespace accounts online and check the control table

1. Create tablespace accounts
SQL> create tablespace accounts;
Tablespace created.
SQL> select file_name from dba_data_files where tablespace_name='ACCOUNTS';
FILE_NAME
-----------------------------------------------

2. Create table accounts on tablespace accounts, insert some records on it
SQL> conn avargas/oracle
Connected.
SQL> create table accounts tablespace accounts as select * from dba_users;
Table created.
SQL> insert into accounts select * from accounts;
15 rows created.
SQL> /
30 rows created.
SQL> commit;
Commit complete.
SQL> select count(*) from accounts;
COUNT(*)
----------
60


3. Create a Directory for Data Pump and grant privileges on it to Public
SQL> create or replace directory XTTS as '/oradisk/app01/oracle/scripts/av';
Directory created.  
SQL> grant read, write on directory XTTS to public;
Grant succeeded.

4. Check that the tablespace can be part of a transportable tablespace set
SQL> EXECUTE sys.DBMS_TTS.TRANSPORT_SET_CHECK('accounts',true);
PL/SQL procedure successfully completed.
SQL> select * from sys.transport_set_violations;
no rows selected

5. Flush online logs to disk
SQL> alter system archive log current;
System altered.

6. Set tablespace accounts read only
SQL> alter tablespace accounts read only;
Tablespace altered.

7. Export tablespace accounts using data pump
{oracle} /oradisk/app01/oracle/scripts/av [pollux.com] > expdp system/oracle
dumpfile=xtts.dmp directory=XTTS transport_tablespaces=accounts
Export: Release 10.2.0.3.0 - Production on Thursday, 03 January, 2008 21:34:20
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Production
With the Partitioning, OLAP and Data Mining options

8. Set tablespace accounts offline
SQL> alter tablespace accounts offline;
Tablespace altered.

9. Backup the tablespace using Rman backup as copy
Note that in order to set a KEEP time longer than the retention configured for the Flash Recovery Area,
we must make this backup outside the Flash Recovery Area.
RMAN> backup as copy datafile '+DB1DATADG/redx/datafile/accounts.287.642998077'
2> tag accounts_DF_CP
3> format '/oradisk/app01/oracle/scripts/av/%U'
4> keep until time='sysdate + 730' nologs;
Starting backup at 03/01/2008 21:44:33
using channel ORA_DISK_1
backup will be obsolete on

10. Drop tablespace accounts
SQL> drop tablespace accounts including contents and datafiles;
Tablespace dropped.

11. Confirm that you cannot access the table on the dropped tablespace
SQL> select count(*) from accounts;
select count(*) from accounts
*
ERROR at line 1:
ORA-00942: table or view does not exist
SQL> select tablespace_name from dba_tablespaces where tablespace_name='ACCOUNTS';

12. Restore the datafile backup to ASM
On this step we need to use the RMAN COPY command instead of RESTORE. RESTORE requires that the tablespace
exist, and we dropped it as part of the TTS procedure.
Take note of the output filename given by RMAN; you will need to use this name in the next step.
RMAN> copy datafilecopy '/oradisk/app01/oracle/scripts/av/data_D-REDX_I-2305343033_TS-ACCOUNTS_FNO-6_0pj58r62' to '+DB1DATADG';
Starting backup at 03/01/2008 22:15:35
using channel ORA_DISK_1
channel ORA_DISK_1: starting datafile copy
input is copy of datafile 00006: /oradisk/app01/oracle/scripts/av/data_D-REDX_I-2305343033_TS-ACCOUNTS_FNO-6_0pj58r62
output filename=+DB1DATADG/redx/datafile/accounts.287.643068939
tag=ACCOUNTS_DF_CP recid=21 stamp=643068957
channel ORA_DISK_1: datafile copy complete,

13. Import tablespace accounts back into the database using data pump
[pollux.com] > impdp system/oracle dumpfile=xtts.dmp directory=XTTS
transport_datafiles='+DB1DATADG/redx/datafile/accounts.287.643068939'
Import: Release 10.2.0.3.0 - Production on Thursday, 03 January, 2008 22:18:14
Copyright (c) 2003, 2005, Oracle. All rights reserved.

14. Set tablespace accounts online and check the control table
On the previous check our current log sequence was 27; now we are at sequence 29. This check is made to be
sure that, although the database has advanced its SCN, we will not need to recover the offline tablespace when
setting it online again.
SQL> alter tablespace accounts online;
Tablespace altered.
SQL> select count(*) from avargas.accounts;


Friday, June 17, 2011

How to Enable and Disable the archive log

To Disable the Archive Log:

change the following value to false in the init.ora file (note: log_archive_start applies to 9i and earlier; it is deprecated from 10g onward, where the archiver processes start automatically):
log_archive_start=false

shutdown the database:
shutdown immediate

disable the archivelog:
startup mount
alter database noarchivelog;
alter database open;
archive log list;

To Enable the Archive Log:

change the following value to true in the init.ora file
log_archive_start=true

shutdown the database:
shutdown immediate

enable the archivelog:
startup mount
alter database archivelog;
alter database open;
archive log list;
archive log all;


How to license a new product through OAM

Here's how you can license a new product through OAM:

1. Login to OAM
2. Click on Site Map (The link near Applications Dashboard link)
3. Click on License Manager
4. Under License heading, click Products
5. Click on blue plus icon of More options
6. Click on the radio button: License Applications Product
7. Click on continue button
8. You'll get a list of all the individual products
9. Check the box on the left of the product you want to license
10. Click on Next button
11. Click on Submit button
12. You'll get a message: Successfully licensed the selected product(s). Click OK

That's it, you are done. You may want to regenerate all the forms of the newly licensed product just to make sure.



How to resize redo logs

Modifying the redo logs:
------------------------
SQL> select group#, bytes, status from v$log;

GROUP# BYTES STATUS
---------- ---------- ----------------
1 52428800 INACTIVE
2 52428800 CURRENT
3 52428800 INACTIVE

SQL> select group#, member,status from v$logfile;

GROUP# MEMBER
--------------------------------------------------------------------------------
3 /TESTDATA/TEST/testdata/redo03.log
2 /TESTDATA/TEST/testdata/redo02.log
1 /TESTDATA/TEST/testdata/redo01.log


SQL> alter database add logfile group 4 '/TESTDATA/TEST/testdata/redo04.log' size 125M;

alter database add logfile group 5 '/TESTDATA/TEST/testdata/redo05.log' size 125M;

alter database add logfile group 6 '/TESTDATA/TEST/testdata/redo06.log' size 125M;


select group#, bytes, status from v$log;

Switch logfiles (SQL> alter system switch logfile;) until the CURRENT group is group 4 and groups 1-3 are INACTIVE

alter database drop logfile group 1;
alter database drop logfile group 2;
alter database drop logfile group 3;

remove the files at OS level

 

Find out database growth in a year

select to_char(creation_time, 'RRRR Month') "Month",
sum(bytes)/1024/1024
"Growth in Meg"
from sys.v_$datafile
where creation_time > SYSDATE-365
group by to_char(creation_time, 'RRRR Month');




Duplicate data files

In order to find duplicate data files in your environment, use the following SQL (the substr offsets skip the first 23 characters of the path, i.e. the directory prefix; adjust them for your environment):
select tablespace_name, file_name from dba_data_files a
where 1 < (select count(*) from dba_data_files b
           where substr(a.file_name,24,60) = substr(b.file_name,24,60));

To fix the duplicate datafiles follow the below procedure:
1. If the database is up, shut it down.
> SHUTDOWN IMMEDIATE
2. Copy the datafile to the new name/location at operating system level.
> cp <old_location>/<datafile_name> <new_location>/<datafile_name>
3. Mount the database.
> STARTUP MOUNT
This command will read the control file but will not open the datafiles.
4. Rename the file inside Oracle.
> ALTER DATABASE RENAME FILE
'/FULL_PATH_OF_OLD_LOCATION/AND_DATAFILE_NAME.DBF'
TO
'/FULL_PATH_OF_NEW_LOCATION/AND_DATAFILE_NAME.DBF';
Do this for all the datafiles that were renamed or moved at the operating system level.
5. Open the database.
> ALTER DATABASE OPEN;
6. Query v$dbfile to confirm that the changes made were correct.
> SELECT * FROM V$DBFILE;
7. Remove the datafile(s) from the old location at the operating system level.





Thursday, June 16, 2011

Force All Application users to Change their password

This is available starting in RUP4.
Requires Patch 4676589 ATG RUP 4.
The script to expire all passwords in the fnd_user table is $FND_TOP/patch/115/sql/AFCPEXPIRE.sql.
It can be executed from SQL*Plus, or as a concurrent program: sqlplus -s APPS/<apps_password> @AFCPEXPIRE.sql
or Submit concurrent request: CP SQL*Plus Expire FND_USER Passwords
This script sets fnd_user.password_date to null for all users, which causes all user passwords to expire. It can also be run as a SQL*Plus concurrent program.
The user will need to create a new password upon the next login.



Validating Guest user password

We generally come across a situation where we have to validate our Guest user password. I am mentioning some steps via backend to validate your Guest user password.
1. Check Value in DBC File
grep -i GUEST_USER_PWD $FND_SECURE/hostname_SID.dbc
GUEST_USER_PWD=GUEST/ORACLE

2. Check profile option value
sqlplus apps/passwd
SQL> select fnd_profile.value('GUEST_USER_PWD') from dual;
FND_PROFILE.VALUE('GUEST_USER_PWD')
--------------------------------------------------------------------------------
GUEST/ORACLE

The values from steps 1 and 2 must be in sync.

3. Guest user connectivity check
sqlplus apps/passwd
SQL> select FND_WEB_SEC.VALIDATE_LOGIN('GUEST','ORACLE') from dual;
FND_WEB_SEC.VALIDATE_LOGIN('GUEST','ORACLE')
--------------------------------------------------------------------------------
Y

If the value returned is Y, everything is working correctly.



R12 New features

While using R12 I came across a few things which might help you guys too:

1. You no longer need to change the apps password in any config files, so there is no need to run AutoConfig for that if you used to do it.
2. All startup files are in $INST_TOP/admin/scripts
3. When creating a shared APPL_TOP, we can keep all the other tops read only except $INST_TOP.
4. When running $INST_TOP/admin/scripts/adautocfg.sh it requires the apps password only; no need to provide the context file location any more.
5. CONTEXT File location is $INST_TOP/appl/admin
6. If you want to install R12 on a virtual host, start rapidwiz as "rapidwiz -servername <virtual_hostname>"
7. Just change s_cphost to the physical hostname and run AutoConfig; otherwise the ICM and WF Mailer container components will have issues starting.

I will keep posting stuff as and when I come across it.




Getting Tech Stack Information on 11i and R12

We have to collect this information whenever the boss asks :-). In Oracle Apps 11i and R12 instances we have txkInventory.pl and TXKScript.pl, which can help us get this information.
As applmgr user execute below command:
perl $FND_TOP/patch/115/bin/TXKScript.pl -script=$FND_TOP/patch/115/bin/txkInventory.pl -txktop=$APPL_TOP -contextfile=$CONTEXT_FILE -appspass=apps -outfile=$OA_HTML/apps_techinfo.html

As Oracle user execute below command:
perl $ORACLE_HOME/appsutil/bin/TXKScript.pl -script=$ORACLE_HOME/appsutil/bin/txkInventory.pl -txktop=$ORACLE_HOME/appsutil/temp -contextfile=$CONTEXT_FILE -appspass=apps -outfile=$ORACLE_HOME/appsutil/temp/oracle_techdb.html






Unable to start OACORE OC4J in R12

We were doing a JRE upgrade today and had an issue with one of the instances not coming up. Basically, the OACORE OC4J service was unable to start. The adstrtal.sh script was failing to start this service and erroring out:
==================================================
Executing service control script:/Path/admin/scripts/adoacorectl.sh start
Timeout specified in context file: 100 second(s)
script returned:
****************************************************
ERROR : Timed out( 100000 ): Interrupted Exception
You are running adoacorectl.sh version 120.13
Starting OPMN managed OACORE OC4J instance ...
****************************************************
===================================================

After looking at log file :
===================================================
06/18/10-01:32:04 :: adoacorectl.sh: Starting OPMN managed OACORE OC4J instance
opmnctl: starting opmn managed processes...
================================================================================opmn id=tsgsd2107:6204
0 of 1 processes started.
ias-instance id=XXXXXXXXXXXXX.com
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++--------------------------------------------------------------------------------
ias-component/process-type/process-set:
default_group/oacore/default_group/
Error
--> Process (index=1,uid=237255567,pid=24554)
failed to start a managed process after the maximum retry limit
Log:
/Path/logs/ora/10.1.3/opmn/default_group~oacore~default_group~1.log
06/18/10-01:36:43 :: adoacorectl.sh version 120.13
06/18/10-01:36:43 :: adoacorectl.sh: Stopping OPMN managed OACORE OC4J instance
==============================================
OPMN log says this:

==============================================
10/06/18 01:35:32 [libopmnoc4j] Host and Port information for port type jms not sent by the OC4J process
10/06/18 01:35:32 [libopmnoc4j] WARNING: OC4J did not send the protocol value for port id: jms
10/06/18 01:35:32 [libopmnoc4j] Port information in the ONS notification is incorrect for proc: 237255567.
Some of the reasons for the failure are:
1. ajp, rmi and jms ports could not be bound by the process.
2. If a port value for a certain port ID is handed by OPMN to the OC4J process and this value is not returned back in the ONS notifications.
For example, this can happen for certain bad JGroups related configurations.
10/06/18 01:35:32 [libopmnoc4j] Process Start Error: default_group~oacore~default_group~1 (237255567:24554)
10/06/18 01:35:34 [pm-process] Stopping Process: default_group~oacore~default_group~1 (237255567:24554)
10/06/18 01:35:34 [libopmnoc4j] Process Stop Error: default_group~oacore~default_group~1 (237255567:24554)
10/06/18 01:35:34 [libopmnoc4j] Failed to construct stop command for proc: 237255567
10/06/18 01:35:34 [libopmnoc4j] Forcefully Terminating Process: default_group~oacore~default_group~1 (237255567:24554)
10/06/18 01:35:36 [pm-requests] Request 7 Completed. Command: /start?process-type=oacore
=======================================

Now for the big thing: the solution. Here you go:
Go to $INST_TOP/ora/10.1.3/j2ee/oacore/persistence/oacore_default_group_1 on the application tier and look for jms.state and *.lock files. If you find them in this location once all your services are stopped, delete or rename them, then try again. The OACORE OC4J service will now come up.
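The fix above can be sketched in a few shell commands. OACORE_PERSIST here is just a convenience variable (not an EBS-defined one) pointing at the persistence directory named in the text; run the removal only after all services are stopped.

```shell
# Point this at the oacore persistence directory from the text above
# (convenience variable, not an EBS-defined one).
OACORE_PERSIST=${OACORE_PERSIST:-$INST_TOP/ora/10.1.3/j2ee/oacore/persistence/oacore_default_group_1}

# List any stale state/lock files left behind.
ls -l "$OACORE_PERSIST"/jms.state "$OACORE_PERSIST"/*.lock 2>/dev/null

# With all services stopped, remove (or rename) them.
rm -f "$OACORE_PERSIST"/jms.state "$OACORE_PERSIST"/*.lock
```

Renaming instead of deleting (mv to *.bak) is the cautious variant if you want to keep the files for later inspection.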


How to change Oracle user ORACLE_HOME

We had to change our oracle user's $ORACLE_HOME from /u02/oracle to /u02/oracle/11.1.0. Here are the steps I followed:

1. Run adpreclone.pl on the dbTier as the oracle user:
perl adpreclone.pl dbTier
2. Shut down the database and listener.
3. Move all files from /u02/oracle to /u02/oracle/11.1.0.
4. Run adcfgclone.pl for the binaries only, to avoid database creation:
perl adcfgclone.pl dbTechStack
It will ask you for certain information; please provide it accordingly.
5. Update libraries from $ORACLE_HOME/appsutil/install/:
sqlplus "/ as sysdba" @adupdlib.sql so
6. Change directory to $ORACLE_HOME/appsutil/clone/bin and run:
perl adcfgclone.pl dbconfig $ORACLE_HOME/appsutil/<context_file>.xml
7. Run AutoConfig on the apps tiers too.
8. Start all services and check.
 
 

Finding XML Publisher Version in R12

1) The following SQL statement will show which patch, and thus which version, of Oracle XML Publisher is installed on an Oracle E-Business Suite Release 12.0 or 12.1 environment:

SELECT DECODE (bug_number, '4440000', '5.6.3') PATCH, bug_number
FROM ad_bugs WHERE bug_number IN('4440000');
PATCH BUG_NUMBER
----- ------------------------------
5.6.3 4440000

2) The version of a specific Java class has a one-to-one relationship with the currently installed version of Oracle XML Publisher. This method is suitable for both Oracle E-Business Suite 11i and Release 12.

$ adident Header $OA_JAVA/oracle/apps/xdo/common/MetaInfo.class:
$Header MetaInfo.java 120.6 2006/08/17 01:25:30 bgkim noship $


Metainfo.class version 120.6 corresponds to XML Publisher 5.6.3, patch 4440000 (reference: Note 422508.1).




Having issues with JVM

Since Apache uses a JVM internally, we have to look at troubleshooting the JVM too before jumping into Apache itself. Check the JVM log files; if you see "Full GC" occurring quite frequently, I would suggest making the following changes (and if you already have the values mentioned, increase them further):


1. jserv.conf
ApJServVMTimeout 180

2. jserv.properties
wrapper.bin.parameters=-verbose:gc -Xmx1024M -Xms256M -XX:MaxPermSize=256M -XX:NewRatio=2 -XX:+PrintGCTimeStamps -XX:+UseTLAB


3. httpd.conf
Timeout 1800


4. ICX:Session Timeout 30 (profile value)

5. zone.properties
session.timeout=1800000

Note: the values in 3, 4 and 5 must represent the same session duration.

6. JDBC parameter changes in dbc file
FND_JDBC_BUFFER_DECAY_SIZE=20

FND_JDBC_BUFFER_MIN=20
FND_JDBC_BUFFER_MAX=50%

7. Bounce Apache after above changes
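The consistency requirement in the note above is just unit conversion: ICX:Session Timeout is in minutes, httpd.conf Timeout in seconds, and zone.properties session.timeout in milliseconds. A quick sketch:

```shell
# 30 minutes expressed in each config file's unit.
icx_timeout_min=30
httpd_timeout_sec=$((icx_timeout_min * 60))          # httpd.conf Timeout
session_timeout_ms=$((icx_timeout_min * 60 * 1000))  # zone.properties session.timeout
echo "ICX:Session Timeout = $icx_timeout_min minutes"
echo "httpd.conf Timeout  = $httpd_timeout_sec seconds"
echo "session.timeout     = $session_timeout_ms milliseconds"
```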



Finding Huge files

Yes, it's a big task for DBAs: when you do df -k on Solaris and see your mount points reaching 100%, it's difficult to decide what to delete or what to compress. I usually try some of the commands mentioned below:

This command will list top 10 directory from parent directory
du -ks * | sort -rn | head -10

This command will list all files greater than the mentioned size (the size is in 512-byte blocks, so +10240 means larger than about 5 MB).
find . -size +10240
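The two commands above can also be combined; this sketch reports the ten largest regular files (not directories) under the current tree, biggest first, with sizes in KB:

```shell
# For every regular file, print its size in KB, then sort descending
# and keep the top ten.
find . -type f -exec du -k {} + | sort -rn | head -10
```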



Where the Concurrent Manager kept himself Busy

We come across this question many times: what was the Concurrent Manager doing? Here is one script which will help you find out:

rem ------------------------------------------------------------------
rem Filename: cmup.sql
rem Purpose: Concurrent Manager Usage by Program /day
rem - Can only run from sqlplus
rem -----------------------------------------------------------------------

cl scr
set feedback off
set verify off
set line 1000
set pagesize 1111
set head off
spool d:\cmup.csv
prompt Concurent Program, Program, Total Duration(Min), Avg Duration(Min), Min Duration(Min), Max Duration(Min), Times_Run
select /*+ ALL_ROWS */
fcqtl.user_concurrent_queue_name ||','||
fcptl.program ||','||
round(sum(greatest(actual_completion_date - actual_start_date,0))*60*24, 2) ||','||
round(avg(greatest(actual_completion_date - actual_start_date,0))*60*24, 2) ||','||
round(min(greatest(actual_completion_date - actual_start_date,0))*60*24, 2) ||','||
round(max(greatest(actual_completion_date - actual_start_date,0))*60*24, 2) ||','||
count(*)
from
fnd_concurrent_programs fcp,
FND_CONC_REQ_SUMMARY_V fcptl,
fnd_concurrent_processes fcproc,
fnd_concurrent_queues_tl fcqtl,
fnd_concurrent_queues fcq
where
fcptl.phase_code = 'C'
and actual_completion_date is not null
and actual_start_date is not null
and fcq.concurrent_queue_id = fcproc.concurrent_queue_id
and fcq.application_id =fcproc.queue_application_id
and fcq.manager_type = 1
and fcptl.controlling_manager = fcproc.concurrent_process_id
and fcptl.program_application_id =fcp.application_id
and fcptl.concurrent_program_id = fcp.concurrent_program_id
and fcp.concurrent_program_name not in ('ACTIVATE','ABORT','DEACTIVATE','VERIFY')
and fcproc.queue_application_id = fcqtl.application_id
and fcproc.concurrent_queue_id = fcqtl.concurrent_queue_id
and fcqtl.language=userenv('LANG')
and actual_start_date >= trunc(sysdate - 1) and actual_start_date <= trunc(sysdate)
group by
fcqtl.user_concurrent_queue_name,
fcptl.program
order by
round(sum(greatest(actual_completion_date - actual_start_date,0))*60*24, 2) desc,
fcqtl.user_concurrent_queue_name asc
/
spool off


Killing All processes at once

Yesterday we came across a situation where we had to kill all similar processes at once. This is what we used to get rid of them:
kill -9 `ps -ef | grep -i applmgr | grep -i 889 | grep -v grep | awk '{print $2}'`
ps -efw | grep -i applmgr | grep -v grep | awk '{print "kill -9 "$2}'


 You can also try this variation:

ps -ef |grep applmgr|grep -v grep |awk '{print $2}'|xargs kill -9
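kill -9 on a pattern match is risky, so it helps to separate the PID selection from the kill. Here is a sketch (the helper name is mine) you can first test on captured ps output before piping it into xargs kill -9:

```shell
# pids_matching: read `ps -ef` style output on stdin and print the PIDs
# (second column) of lines matching the pattern, excluding grep itself
pids_matching() {
    grep -i "$1" | grep -v grep | awk '{print $2}'
}

# Typical use - review the list first, then kill:
#   ps -ef | pids_matching applmgr
#   ps -ef | pids_matching applmgr | xargs kill -9
```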




Password expiring Alert

The following script will alert you when users' passwords are older than a certain number of days. Say passwords are set to expire after 180 days and you want an alert once a password is 150 days old:

. .env

cd /tmp
sqlplus -S /nolog << EOF
connect apps/$APPSPASS
set feedback off
set pages 100
set lines 100
col "username" format a15
col "account_status" format a15
col "profile" format a15
col "LAST_PASSWD_DATE" format a16
col "CHANGE BY" format a16
spool /tmp/userpassword.log

select /*+ RULE */ dd.username,dd.account_status,dd.profile,
dd.created "CREATION_DATE", u.ptime "LAST_PASSWD_DATE",u.ptime+180 "CHANGE BY"
from dba_users dd, sys.user$ u
where dd.USERNAME = u.NAME
and u.ptime < (sysdate - 150) -- password last changed more than 150 days ago
and dd.account_status = 'OPEN'
and dd.USERNAME != 'APPLSYSPUB' -- Exception as this account should have the default password
order by dd.created, dd.username
/
spool off
EOF
if [ -s /tmp/userpassword.log ] ; then
cat /tmp/userpassword.log | mailx -s "Users with Password older than 150 days" appsdbafreelance@gmail.com
fi



Apps Password changed from sqlplus

Many of you might have made this mistake, or had no option left (at my client it happens every six months): being forced to change the APPS password via SQL*Plus. The application login then stops working, doesn't it?
Yes !!

Do this :
1. Change back to the old APPS password via SQL*Plus:

SQL> Alter user apps identified by oldappspasswd;

2. Run autoconfig with this password in all tiers.

3. Restore customizations if any.

4. Start all services and you are all set to log in to your Apps front end.
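For the record, the supported way to change the APPS password is the FNDCPASS utility, which keeps the database user and the encrypted FND copies in sync; a typical invocation (the passwords are placeholders, and you should verify the exact syntax for your release) looks like:

```
FNDCPASS apps/<current_apps_pwd> 0 Y system/<system_pwd> SYSTEM APPLSYS <new_apps_pwd>
```

After FNDCPASS you still bounce the services, and run autoconfig if your configuration stores the password anywhere on the file system.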



Script for Checking Java Mailer Status

-----------------------------Script Start-----------------------------------
#!/bin/ksh
#Script to monitor WF Mailer Status
. .env
cd /tmp
sqlplus /nolog << EOF
connect apps/password
set heading off
set pages 0
set feedback off
spool monitor_mailer.lst
select COMPONENT_STATUS from FND_SVC_COMPONENTS
where COMPONENT_ID='10006';
spool off
EOF
MONITOR=`grep -v "SQL>" monitor_mailer.lst | sed 's/ //g'`

if [ "$MONITOR" != "RUNNING" ] ; then
echo "$MONITOR" | mailx -s "WFMAILER: Check Workflow Mailer STATUS" appsdbafreelance@gmail.com
fi
/bin/rm -f /tmp/monitor_mailer.lst
-----------------------Script End -------------------------------------------------

Put this in crontab for every 10 min or so.
It will send an alert whenever the mailer is not running. No more OAM checking !!!!
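A crontab entry for a 10-minute interval could look like this (the script path is just an example; note that some older crons, including Solaris, do not understand the */10 step syntax, so list the minutes explicitly there):

```
*/10 * * * * /home/applmgr/scripts/check_wf_mailer.sh >/tmp/check_wf_mailer.out 2>&1
# Older crons: 0,10,20,30,40,50 * * * * /home/applmgr/scripts/check_wf_mailer.sh
```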



Apache Debug

You may already know this, but I would like to summarize the steps for capturing Apache debug output; believe me, set this up and you can resolve errors easily:

1. Shut the HTTP Server (Apache/iAS) down. - You can grep for the 'httpd' process to verify it is down
2. Rename (or delete) the following files so we get a fresh copy of them:
/Apache/Jserv/logs/jserv.log
/Apache/Jserv/logs/mod_jserv.log
/Apache/Jserv/logs/jvm (the whole directory, if it exists)
/Apache/Apache/logs/access_log*
/Apache/Apache/logs/error_log
/Apache/Apache/logs/error_log_pls (if it exists)

Now we will turn on debugging in the log files:

3. Modify the $IAS_ORACLE_HOME/Apache/Jserv/etc/jserv.conf file. Search for the parameter:
ApJServLogLevel notice
Change the 'notice' to 'debug'

4. Modify the $IAS_ORACLE_HOME/Apache/Jserv/etc/jserv.properties file. Search for the following section:

log=false
Change this to be log=true
and change log.channel=false
to be log.channel=true

5. Modify the $IAS_ORACLE_HOME/Apache/Apache/conf/httpd.conf file. Search for:
LogLevel
Set the LogLevel to 'debug' from its current value.

6. Modify the $IAS_ORACLE_HOME/Apache/Apache/conf/httpd_pls.conf file (if it exists). Search for the following parameter
LogLevel
Set the LogLevel to 'debug' from its current value.

7. In the zone.properties file, set: servlet.framework.initArgs=debuglevel=1

8. Start the HTTP Server
 


JAVA Mailer not Starting

Today was a bad day for us. We had an outage last weekend, and after we finished our work a lot of things started failing due to DBA errors; the Java mailer was one of them. Here is what the log says:
------------------------------------------------------------------------------------
SVC-GSM-WFMLRSVC-67585-10006 : oracle.apps.fnd.cp.gsc.Logger.Logger(String, int) : Logging to System.out until necessary parameters are retrieved for Logger to be properly started.oracle.apps.fnd.cp.gsc.SvcComponentContainerException: Could not start component; performing rollback -> oracle.apps.fnd.cp.gsc.SvcComponentException: Validation failed for the following parameters -> {FRAMEWORK_USER=Unable to make a connection with the user and responsibility}. Parameters were -> {INBOUND_MAX_IGNORE_SIZE=1000, INBOUND_MAX_LOOKUP_CACHE_SIZE=100, ATTACH_IMAGES=N, ALLOW_FORWARDED_RESPONSE=Y, INBOUND_UNSOLICITED_THRESHOLD=2, NODENAME=PRODMAIL, SEND_UNSOLICITED_WARNING=N, AUTOCLOSE_FYI=N, PROCESSOR_READ_TIMEOUT_CLOSE=Y, INBOUND_PROTOCOL=IMAP, ATTACHED_URLS=WFMAIL:ATTACHED_URLS, TEST_ADDRESS=, EXPUNGE_ON_CLOSE=Y, PROCESSOR_OUT_THREAD_COUNT=1, ATTACH_STYLESHEET=N, OUTBOUND_CONNECTION_TIMEOUT=120, INBOUND_MAX_RET_EMAIL_SIZE=100, MAX_INVALID_ADDR_LIST_SIZE=100, PROCESSOR_MAX_LOOP_SLEEP=60, SEND_ACCESS_KEY=N, PROCESSOR_IN_THREAD_COUNT=1, FRAMEWORK_APP=1, CLOSED=WFMAIL:CLOSED, INBOUND_FETCH_SIZE=100, SUMMARY=WFMAIL:SUMMARY, ENABLE_STYLESHEET=N, PROCESSOR_ERROR_LOOP_SLEEP=60, OPEN_MAIL_FYI=WFMAIL:OPEN_MAIL_FYI, FRAMEWORK_RESP=20420, ALTERNATE_EMAIL_PARSER=oracle.apps.fnd.wf.mailer.DirectEmailParser, INBOUND_PASSWORD=_@8@!864^4$9#64#^@#8@@686##!#44@#!0!99@@`9+^+*#B$$@99*$, HTTP_USER_AGENT=Mozilla/4.76, RESET_NLS=N, PROCESS=Process, OPEN_MAIL_DIRECT=WFMAIL:OPEN_MAIL_DIRECT, PROCESSOR_LOOP_SLEEP=5, MESSAGE_FORMATTER=oracle.apps.fnd.wf.mailer.NotificationFormatter, REPLYTO=emailuser@edomain.com, FRAMEWORK_USER=0, CANCELED=WFMAIL:CANCELED, OUTBOUND_PROTOCOL=SMTP, ACCOUNT=emailuser, DISCARD=Discard, FROM=Expedite, PROCESSOR_DEFER_EVTDATA_READ=Y, SEND_CANCELED_EMAIL=N, WARNING=WFMAIL:WARNING, PROCESSOR_MAX_ERROR_COUNT=50, INBOUND_CONNECTION_TIMEOUT=120, HTMLAGENT=http://hostname.domainname.com:8000/pls/SID, INBOX=INBOX, OPEN_INVALID_MORE_INFO=WFMAIL:OPEN_INVALID_MORE_INFO, 
OUTBOUND_SSL_ENABLED=N, INBOUND_SERVER=mail.domain.com, OPEN_MORE_INFO=WFMAIL:OPEN_MORE_INFO, MAILER_SSL_TRUSTSTORE=NONE, INLINE_ATTACHMENT=N, OPEN_INVALID=WFMAIL:OPEN_INVALID, EMAIL_PARSER=oracle.apps.fnd.wf.mailer.TemplatedEmailParser, DIRECT_RESPONSE=N, OPEN_MAIL=WFMAIL:OPEN_MAIL, FRAMEWORK_URL_TIMEOUT=30, COMPONENT_LOG_LEVEL=1, SUMHTML=WFMAIL:SUMHTML, PROCESSOR_READ_TIMEOUT=10, DEBUG_MAIL_SESSION=N, INBOUND_SSL_ENABLED=N, OUTBOUND_SERVER=mailhost.domain.com}
at oracle.apps.fnd.cp.gsc.SvcComponentContainer.handleComponentEvent(SvcComponentContainer.java:2211)
at oracle.apps.fnd.cp.gsc.SvcComponentContainer.onBusinessEvent(SvcComponentContainer.java:300)
at oracle.apps.fnd.wf.bes.DispatchThread.run(DispatchThread.java:57)
----------------------------------------------------
And I think I know the issue: it was autoconfig. You might ask why. Our DBA ran autoconfig without clearing the FND_NODES table, and since we run on a virtual host (clustering at the DB tier), that created a mess: the virtual host and the active physical host ended up with the same server_id. That duplicate server_id is why the Java mailer could not start and failed with the error above. Now what — another downtime? I didn't think the customer would agree, yet the issue was critical: how to resolve it without a long outage for rerunning autoconfig? We raised a Sev 1 SR to see if Oracle Support could help, but no luck; they told the same old story of running autoconfig and so on. After spending a few minutes on it, I found a solution that needed only about 10 minutes of downtime. Here is what I did:

1. Shutdown Concurrent Manager
2. Take a backup of FND_Nodes.
3. Delete Physical host from OAM (OAM > Administrator > Hosts)
4. Register same host again from OAM.
5. Start Concurrent Manager
6. Mailer was working fine after this.

In your case you may need to rerun autoconfig, but before that run this command as the APPS user:

EXEC FND_CONC_CLONE.SETUP_CLEAN;
Commit;
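Before (or instead of) the cleanup, you can confirm the duplicate-SERVER_ID condition yourself with a quick query as APPS (column names as I recall them; verify against your release):

```sql
select node_name, server_id, status
  from fnd_nodes
 order by node_name;
```

Two rows showing the same SERVER_ID confirm the situation described above.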



Wednesday, June 15, 2011

Detecting a new hard disk, creating a partition, formatting the partition, and creating and using a mount point for storage on the Linux platform:

To become a good DBA we should have sound knowledge of the operating system. The experiment below is very useful for creating a partition on Linux, formatting it, and mounting the disk for storage purposes.

I got a 50 GB hard disk from my system admin. Now I want to use this hard disk for
storing backups of database files. Before doing so I need to perform the steps below:

Prerequisite step:
--------------------
For the Linux server to detect the new hard disk, you need to reboot. We can do so as follows:
[root@testNODE ~]# shutdown -r now
Broadcast message from root (pts/0) (Wed Apr 20 01:23:45 2009).

The system is going down for reboot NOW!

Note: 'reboot' command is also used.
----
Once the server is up you can detect your hard disk as follows:

[root@testNODE ~]# fdisk -l =>This is for checking disk in linux.

Disk /dev/sda: 64.4 GB, 64424509440 bytes
255 heads, 63 sectors/track, 7832 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sda1 * 1 64 514048+ 83 Linux
/dev/sda2 65 4633 36700492+ 83 Linux
/dev/sda3 4634 5482 6819592+ 83 Linux
/dev/sda4 5483 7832 18876375 5 Extended
/dev/sda5 5483 6265 6289416 82 Linux swap / Solaris
/dev/sda6 6266 7030 6144831 83 Linux
/dev/sda7 7031 7683 5245191 83 Linux

Disk /dev/sdb: 42.9 GB, 42949672960 bytes
255 heads, 63 sectors/track, 5221 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdb1 1 4981 40009851 83 Linux
/dev/sdb2 4982 5221 1927800 83 Linux

Disk /dev/sdc: 53.6 GB, 53687091200 bytes
255 heads, 63 sectors/track, 6527 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdc doesn't contain a valid partition table

So here is my new hard disk, '/dev/sdc'. I will create a valid partition on this disk. To create a valid partition we need to follow the steps below:

Step 1: Use the command 'fdisk' as follows:
-------
[root@testNODE ~]# fdisk /dev/sdc
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

The number of cylinders for this disk is set to 6527.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Step 2: To get help for various partition options use 'm' option
-------
Command (m for help): m
Command action
a toggle a bootable flag
b edit bsd disklabel
c toggle the dos compatibility flag
d delete a partition
l list known partition types
m print this menu
n add a new partition
o create a new empty DOS partition table
p print the partition table
q quit without saving changes
s create a new empty Sun disklabel
t change a partition's system id
u change display/entry units
v verify the partition table
w write table to disk and exit
x extra functionality (experts only)

Step 3: As seen above, 'n' is used to create a new partition, 'e' selects an
---------
extended partition, 'p' (partition 1) a primary partition, and 'w' writes the table to disk.

Command (m for help): n =>For creating new partition
Command action
e extended
p primary partition (1-4)
p =>For primary partition
Partition number (1-4): 1
First cylinder (1-6527, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-6527, default 6527):
=>Above you can specify a size such as +5G or +10G if you want to create two or more partitions. I want a single partition of 50 GB (the default last cylinder),
so I have not given any value.

Command (m for help): p =>for print of partitions which are created.

Disk /dev/sdc: 53.6 GB, 53687091200 bytes
255 heads, 63 sectors/track, 6527 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdc1 1 6080 48837568+ 83 Linux

Command (m for help): w =>for writing and saving the partition
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

So our partition is created with 50GB of space.

Step 4: Check the filesystems types available
-------
[root@testNODE ~]# cat /etc/filesystems =>Gives types of filesystem available in OS
ext3
ext2
nodev proc
nodev devpts
iso9660
vfat
hfs
hfsplus

Step 5: Our attached hard disk will be formatted with an extended-family filesystem, i.e. ext3
------
[root@testNODE ~]# fdisk -l /dev/sdc =>For checking the partition

Disk /dev/sdc: 53.6 GB, 53687091200 bytes
255 heads, 63 sectors/track, 6527 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdc1 1 6080 48837568+ 83 Linux

So,as you see above we created one partition with size 50GB.

Step 6:Format your partition before mounting
------
a)Check your filesystem with 'e2fsck':
-------------------------------------------
[root@testNODE /]# e2fsck -b 8193 /dev/sdc
e2fsck 1.39 (29-May-2006)
e2fsck: Bad magic number in super-block while trying to open /dev/sdc

The superblock could not be read or does not describe a correct ext2
filesystem. If the device is valid and it really contains an ext2
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
e2fsck -b 8193

From the above error I came to know the partition is not formatted. To format an extended
filesystem, use 'mke2fs'.

[root@testNODE /]# e2fsck -b 8193 /dev/sdc1 =>For filesystem check i.e extended here
e2fsck 1.39 (29-May-2006)
e2fsck: Bad magic number in super-block while trying to open /dev/sdc1

The superblock could not be read or does not describe a correct ext2
filesystem. If the device is valid and it really contains an ext2
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
e2fsck -b 8193

Note 1:'e2fsck' is used to check a Linux ext2/ext3 file system.
Note 2: 'fsck' is used to check and repair a Linux file system

b)Format your partition with 'mke2fs'
------------------------------------------------
[root@testNODE /]# sudo file -s /dev/sdc1
/dev/sdc1: x86 boot sector
[root@testNODE /]# sudo mke2fs -S /dev/sdc1
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
6553600 inodes, 13107024 blocks
655351 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
400 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424

Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 27 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.

[root@testNODE /]# sudo mke2fs -j /dev/sdc1
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
6553600 inodes, 13107024 blocks
655351 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
400 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 22 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.


Below are very important notes based on above experiment:
-------------------------------------------------------------

Note 1:'mke2fs' is used to format the extended filesystem.
-------

Note 2: In 'mke2fs', the -S option is very useful as it writes the superblock and group descriptors only.
-------
This is useful if all of the superblock and backup superblocks are corrupted, and a last-ditch recovery method is desired. It causes mke2fs to
reinitialize the superblock and group descriptors, while not touching the inode table and the
block and inode bitmaps.

Note 3: 'mkfs' is used to build a Linux file system on a device, usually a hard disk partition; 'filesystem' is the device name (e.g. /dev/hda1, /dev/sdb2) and 'blocks' is the number of blocks to be used for the file system.
The exit code returned by mkfs is 0 on success and 1 on failure.

Note 4: 'mke2fs -j' creates the filesystem with an ext3 journal (sized according to the filesystem).

=> So our extended filesystem is formatted.

Step 7: Make an entry in /etc/fstab so the mount point survives a reboot.
--------
vi /etc/fstab
=>Add the below line.

/dev/sdc1 /u05 ext3 defaults 1 2


Step 8: Mount the filesystem.
-------
Create a directory for mounting as follows:
[root@testNODE /]# mkdir /u05

Once the directory is created we can use this for mounting purpose.
We have to use the 'mount' command to mount a disk with the option as given below.

[root@testNODE /]# mount -t ext3 /dev/sdc1 /u05
=>The above command is for mounting the partition
Here: -t=>is used to indicate the file system type
ext3=>is the filesystem type i.e extended as per entry in the /etc/fstab
/dev/sdc1=>raw disk partition used for mounting
/u05=>Directory used for mounting
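With the Step 7 fstab entry in place and the /u05 directory created, you can also let fstab do the mounting; this verifies the entry without waiting for a reboot:

```
[root@testNODE /]# mount -a          =>mounts every fstab entry not already mounted
[root@testNODE /]# mount | grep /u05 =>confirms the mount and its options
```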

[root@testNODE /]# df -h =>For checking the mount point you have created and diskspace utilization.
Filesystem Size Used Avail Use% Mounted on
/dev/sda6 5.7G 4.0G 1.5G 73% /
/dev/sda7 4.9G 4.1G 515M 90% /u01
/dev/sda3 6.3G 1.7G 4.3G 29% /u03
/dev/sda2 34G 32G 719M 98% /u02
/dev/sda1 487M 17M 445M 4% /boot
tmpfs 2.0G 1.4G 641M 68% /dev/shm
/dev/sdb1 38G 34G 2.1G 95% /u04
/dev/sdc1 50G 180M 47G 1% /u05

Let us do filesystem check now after mounting:

[root@testNODE /]# e2fsck -b 8193 /dev/sdc
e2fsck 1.39 (29-May-2006)
e2fsck: Device or resource busy while trying to open /dev/sdc
Filesystem mounted or opened exclusively by another program?

As you can see above, our disk is mounted and ready to use.

Step 9:Granting ownership and permission to 'Oracle' user
-------
Once the directory is mounted now let us grant the ownership to 'Oracle' user so that this disk can be used
for various purposes by 'Oracle' owner.

[root@testNODE u05]# chown oracle:oinstall /u05
[root@testNODE u05]# chmod -R 775 /u05

Here:
chown: changes ownership to the operating-system user 'oracle'.
chmod: grants permissions so the 'oracle' user and group can use the directory.


Step 10: Login to 'Oracle' user to verify:
--------
[oracle@testNODE u05]$ mkdir TEST
[oracle@testNODE u05]$ cd TEST
[oracle@testNODE TEST]$ touch test1
[oracle@testNODE TEST]$ ls -altr

total 8
drwxrwxr-x 4 oracle oinstall 4096 May 11 17:21 ..
-rw-r--r-- 1 oracle dba 0 May 11 17:22 test1
drwxr-xr-x 2 oracle dba 4096 May 11 17:22 .

Note: the 'touch' command is used to create an empty (0-byte) file on Linux/Unix.
----

So our directory '/u05' is ready for use. My friends keep telling me they need more on Unix from a DBA's point of view; this is just a basic experiment.

Hope it helps.

Regards,
Sukhwinder Singh

Scheduling RMAN cold backup and copy controlfile

An RMAN cold backup has its own advantages compared to a traditional cold backup:

1) RMAN, as we all know, backs up only used blocks, so it is fast. Here I am backing up the entire database.
 
2) In my script below I also copy the control file, so I always have the latest copy in case of recovery.
This is because of the factors below:
-------------------------------------------------------------------------


"copy current controlfile"
Personally I like and use this,Because of below reasons:

*restore after total loss.

I run this command as the *final* command of my database,
run {allocate channel c1 type disk; copy current controlfile to '/some OS location'.

Effectively,I'm backing up my "Backup Meta Data" at the end of each RMAN backup.
My RMAN shell scripts look like this:
--------------------------------------------


1) Back up the database in the MOUNT state, because RMAN expects the database to be mounted for a cold backup.

2) Copy the current control file to some location.
I am not saying do not use the other options; just be sure you know what you are getting and that you can recover the database. Test your backups on test servers and be sure you can restore and recover in case of failure.

3) In my RMAN scripts I have used:
%U = a unique name for each backup piece
%T = the date of the backup

4) Snapshot controlfile:
The snapshot control file is used by RMAN to get a consistent view of the control file during a backup. Personally speaking, it is not strictly required if we keep a copy of the current control file; here I have placed it in the current RMAN backup location.

5) Retention policy:
The period for which you want to keep backups in the RMAN repository. Decide this policy well in advance and inform the team. In my case, I can obtain backups up to 7 days old for restore and recovery purposes.

Below are my scripts:
------------------------
1)run_TESTDB.txt:
---------------------
run_TESTDB.txt holds the settings and commands I am going to run through RMAN. Create this file first, then the shell script that invokes it for taking the RMAN backup.

-- RMAN Configured
run{
shutdown immediate;
startup mount;
allocate channel ch1 device type disk format '/u05/DB_BACKUP_TESTDB/RMAN_COLDBACKUP_WEEKLY/TESTDB_BK_SET1_%U_%T' maxpiecesize 5G;
allocate channel ch2 device type disk format '/u05/DB_BACKUP_TESTDB/RMAN_COLDBACKUP_WEEKLY/TESTDB_BK_SET2_%U_%T';
backup database TAG='TESTDB_BACKUP_WEEKLY';
allocate channel c1 type disk;
copy current controlfile to '/u05/DB_BACKUP_TESTDB/RMAN_COLDBACKUP_WEEKLY/TESTDB_ctrl_%U_%T';
Release channel c1;
configure retention policy to recovery window of 7 days;
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/u05/DB_BACKUP_TESTDB/RMAN_COLDBACKUP_WEEKLY/snapcf_TESTDB.f';
CROSSCHECK BACKUP;
release channel ch1;
release channel ch2;
alter database open;
}

2)rman_TESTDB.sh:
--------------------
rman_TESTDB.sh sets the Oracle environment variables and calls run_TESTDB.txt to take the RMAN cold backup.

-- TESTDB RMAN COLD BACKUP
export ORACLE_SID=TESTDB
export ORACLE_HOME=/u01/oracle11g/product/11.2.0/dbhome_1
export ORACLE_BASE=/u01/oracle11g
PATH=$ORACLE_HOME/bin:$PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib
rman target sys/TESTDBdba @/u05/DB_BACKUP_TESTDB/Backup_Scipts/run_TESTDB.txt log=/u05/DB_BACKUP_TESTDB/logs/rman_cold_weekly_backup_logs/rman_TESTDB.log

Scheduling RMAN Backup job:
-------------------------------
If I want to schedule this backup every Sunday at 9 pm, the cron script will be:
$vi crontab.oracle

#Script for RMAN COLD BACKUP WEEKLY TESTDB DATABASE
################################################################
00 21 * * 0 /u05/DB_BACKUP_TESTDB/SCRIPTS/rman_TESTDB.sh >/u05/DB_BACKUP_TESTDB/logs/CRONJOBS_LOGS/testdb_rman_cron.log 2>&1

$crontab crontab.oracle =>installs the schedule using the 'crontab' command (9 pm every Sunday).

Verifying cronjob:
-----------------------

$crontab -l =>lists the jobs scheduled in cron; verify the RMAN job appears as scheduled above.
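Once the job has run, it is worth confirming that RMAN can actually see and read the backup; these are standard RMAN commands that read the backup pieces without restoring anything:

```
RMAN> list backup of database summary;
RMAN> restore database validate;
```

Pair this with an occasional real restore on a test server, as advised above.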


Regards,

Sukhwinder Singh

Oracle 10g RAC on Linux Using NFS

There are various ways to implement RAC. In this article I focus on implementing it using NFS (Network File System), which I have done in practice on OEL 4.6 on HP ProLiant servers. You can also try this on VMware using ASM (Automatic Storage Management), which is most commonly used nowadays because of its various benefits; the concepts remain the same, only the shared-storage setup and steps change.

This article describes the installation of Oracle 10g release 2 (10.2.0.1) RAC on Linux (Oracle Enterprise Linux 4.6) using NFS to provide the shared storage.
• Introduction
• Download Software
• Operating System Installation
• Oracle Installation Prerequisites
• Create Shared Disks
• Install the Clusterware Software
• Install the Database Software
• Create a Database using the DBCA
• TNS Configuration
• Check the Status of the RAC
• Direct and Asynchronous I/O

1)Introduction:
NFS is an abbreviation of Network File System, a platform-independent technology created by Sun Microsystems (now Oracle) that allows shared access to files stored on computers via an interface called the Virtual File System (VFS) that runs on top of TCP/IP. Computers that share files are considered NFS servers, while those that access shared files are considered NFS clients. An individual computer can be an NFS server, an NFS client, or both.

We can use NFS to provide shared storage for a RAC installation. In a production environment we would expect the NFS server to be a NAS, but for testing it can just as easily be another server or even one of the RAC nodes itself.

To cut costs, this article uses one of the RAC nodes as the source of the shared storage. Obviously, this means that if that node goes down the whole database is lost, so it is not a sensible approach if you are testing high availability. If you have access to a NAS or a third server, you can easily use it for the shared storage, making the whole solution much more resilient. Whichever route you take, the fundamentals of the installation are the same.

2)Download Software:
Download the following software.
• Oracle Enterprise Linux (4.6)
• Oracle 10g (10.2.0.1) CRS and DB software


3)Operating System Installation:
This article uses Oracle Enterprise Linux 4.6, but it will work equally well on CentOS 4 or Red Hat Enterprise Linux (RHEL) 4. More specifically, it should be a server installation with a minimum of 2.5 GB swap, the firewall and Secure Linux disabled, and the following package groups installed:
• X Window System
• GNOME Desktop Environment
• Editors
• Graphical Internet
• Server Configuration Tools
• FTP Server
• Development Tools
• Legacy Software Development
• Administration Tools
• System Tools
To be consistent with the rest of the article, the following information should be set during the installation:

Step 1:Operating System installation (Oracle Enterprise Linux 4.6)
(On both Cluster Machines atpl131 and atpl136)


Remember:
1. Install Oracle Enterprise Linux 4.6 on both machines (at least 10-15 GB root
space, 2.5 GB swap space, 1 GB /tmp space, 128 MB /boot space), then install the
patches as explained above.
2. During installation, give the machine name as "atpl131.server"; that is, the
domain should be "server" only, not "server.com".
3. Ensure that both machines have two Ethernet cards (eth0 and eth1).
4. Connect the first Ethernet card (eth0) of both machines with a cross cable;
this makes the private network between the atpl131 and atpl136 machines.
5. Connect the other Ethernet card (eth1) of both machines via a router; this
puts both machines on the public network.
6. Ensure that on both machines both network cards are activated and the
network cables are connected properly: eth0 is the private and eth1 the public
Ethernet card.
7. Both machines, atpl131 and atpl136, must have the same time (a difference of up
to 1 second can work, but a larger time difference may cause errors).
8. In the files below, a leading # marks a comment, not the root shell prompt;
elsewhere # is the root prompt.
9. The public and virtual IP addresses must be in the same subnet (having the same
subnet mask).
atpl131:
hostname: atpl131.server
IP Address eth1: 192.168.1.131 (public address)
Default Gateway eth1: 192.168.1.1 (public address)
IP Address eth0: 192.168.0.131 (private address)
Default Gateway eth0: none
Virtual IP Address: 192.168.1.132

atpl136:
hostname: atpl136.server
IP Address eth1: 192.168.1.136 (public address)
Default Gateway eth1: 192.168.1.1 (public address)
IP Address eth0: 192.168.0.136 (private address)
Virtual IP Address: 192.168.1.137
Default Gateway eth0: none

You can choose IP addresses as per your network requirement, but this must be same throughout the process.


Step 2:Install Packages
(On both Cluster Machines atpl131 and atpl136)


Once the basic installation is complete, install the following packages as the root user. Install these four RPMs:
compat-gcc-7.3-2.96.128.i386.rpm, compat-libstdc++-devel-7.3-2.96.128.i386.rpm
compat-libstdc++-7.3-2.96.128.i386.rpm, compat-gcc-c++-7.3-2.96.128.i386.rpm

# From Oracle Enterprise Linux Patch CD (or from Linux S/W CDs)
# cd /media/cdrecorder/patch
# rpm -Uvh compat-gcc-7.3-2.96.128.i386.rpm
# rpm -Uvh compat-gcc-c++-7.3-2.96.128.i386.rpm
# rpm -Uvh compat-libstdc++-7.3-2.96.128.i386.rpm
# rpm -Uvh compat-libstdc++-devel-7.3-2.96.128.i386.rpm
# cd /
# eject

Oracle Installation Prerequisites
Perform the following steps as the root user on the atpl131 machine.
The /etc/hosts file must contain the following information.

Step 3:CONFIGURING /etc/hosts file
(On both Cluster Machines atpl131 and atpl136)


#vi /etc/hosts

127.0.0.1 localhost.server localhost
# Public
192.168.1.131 atpl131.server atpl131
192.168.1.136 atpl136.server atpl136
#Private
192.168.0.131 atpl131-priv.server atpl131-priv
192.168.0.136 atpl136-priv.server atpl136-priv
#Virtual
192.168.1.132 atpl131-vip.server atpl131-vip
192.168.1.137 atpl136-vip.server atpl136-vip
#NAS
192.168.1.131 nas1.server nas1

Notice that the NAS1 entry is actually pointing to the atpl131 node. If you are using a real NAS or a third server to provide your shared storage put the correct IP address into the file.

Step 3b:Configuring kernel parameters in the /etc/sysctl.conf file.
(On both Cluster Machines atpl131 and atpl136)

# vi /etc/sysctl.conf

kernel.shmall = 2097152
kernel.shmmax = 2147483648
kernel.shmmni = 4096
# semaphores: semmsl, semmns, semopm, semmni
kernel.sem = 250 32000 100 128
#fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000
#net.core.rmem_default=262144
#net.core.rmem_max=262144
#net.core.wmem_default=262144
#net.core.wmem_max=262144

# Additional and amended parameters suggested by Kevin Closson
net.core.rmem_default = 524288
net.core.wmem_default = 524288
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.ipfrag_high_thresh=524288
net.ipv4.ipfrag_low_thresh=393216
net.ipv4.tcp_rmem=4096 524288 16777216
net.ipv4.tcp_wmem=4096 524288 16777216
net.ipv4.tcp_timestamps=0
net.ipv4.tcp_sack=0
net.ipv4.tcp_window_scaling=1
net.core.optmem_max=524287
net.core.netdev_max_backlog=2500
sunrpc.tcp_slot_table_entries=128
sunrpc.udp_slot_table_entries=128
net.ipv4.tcp_mem=16384 16384 16384
# These two are optional; I set them because I entered the wrong hostname
# and domain name during OS installation
kernel.domainname=server
kernel.hostname=atpl131
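A malformed line in /etc/sysctl.conf makes "sysctl -p" fail in the next step, so a quick syntax check can help. This sketch validates a small sample fragment (not the real file); the regular expression is an assumption that covers the numeric settings used above:

```shell
# Sketch: check that sysctl settings are well-formed "key = value(s)" lines.
# Validates a sample fragment, not the live /etc/sysctl.conf.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
kernel.shmall = 2097152
kernel.shmmax = 2147483648
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 1024 65000
EOF
# Every non-comment, non-blank line must look like "token.token = numbers".
bad=$(grep -Ev '^[[:space:]]*(#|$)' "$CONF" |
      grep -Evc '^[a-z0-9_.]+[[:space:]]*=[[:space:]]*[0-9 ]+$' || true)
echo "malformed lines: $bad"
rm -f "$CONF"
```

A result other than 0 means some line will trip sysctl.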

Step 4. Apply the changed kernel parameters without restarting the system.
(On both Cluster Machines atpl131 and atpl136)


# /sbin/sysctl -p
If this does not work, restart both machines.

Step 5. Add the following lines to the /etc/security/limits.conf file (edit the same file on atpl136 as well).
(On both Cluster Machines atpl131 and atpl136)


# vi /etc/security/limits.conf

* soft nproc 2047
* hard nproc 16384
* soft nofile 1024
* hard nofile 65536

Step 6:Add the following line to the /etc/pam.d/login file if it does not already exist.
(On both Cluster Machines atpl131 and atpl136)

# vi /etc/pam.d/login

session required /lib/security/pam_limits.so

Step 7:Disable secure linux by editing the /etc/selinux/config file, making sure the SELINUX flag is set as follows.
(On both Cluster Machines atpl131 and atpl136)


# vi /etc/selinux/config

SELINUX=disabled

Step 8:Set the hangcheck kernel module parameters by adding the following line to the /etc/modprobe.conf file.
(On both Cluster Machines atpl131 and atpl136)


# vi /etc/modprobe.conf

options hangcheck-timer hangcheck_tick=30 hangcheck_margin=180

Note for Step 7: SELinux can also be disabled with the GUI tool (Applications > System Settings > Security Level). Click the SELinux tab and disable the feature.
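For the hangcheck-timer values above, the module checks every hangcheck_tick seconds and resets the node when a hang exceeds hangcheck_margin, so the worst-case reset delay is roughly tick + margin. A tiny sketch of that arithmetic:

```shell
# Worst-case hangcheck reset delay with the settings from Step 8.
tick=30; margin=180
threshold=$((tick + margin))
echo "hangcheck reset threshold: ${threshold}s"   # 210s with these values
```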

Step 9:To load the module immediately, execute the command "modprobe -v hangcheck-timer".
(On both Cluster Machines atpl131 and atpl136)


# modprobe -v hangcheck-timer

Step 10:Create the new groups and users (with the same user names, user IDs and OS groups on both machines).
(On both Cluster Machines atpl131 and atpl136)


a) For atpl131:
Group create:
# groupadd oinstall
# groupadd dba
# groupadd oper
User Create:
# useradd -d /home/oracle -g oinstall -G dba -s /bin/bash oracle
# passwd oracle
# new password: oracle
# reenter password: oracle

To see the group names, corresponding IDs, user name and user ID, run:

# id oracle

b) For atpl136:
Run the same groupadd and useradd commands as on atpl131, then verify that the IDs match:
# id oracle


Step 11:During the installation, both RSH and RSH-Server were installed.
Enable remote shell and rlogin by doing the following.
(On both Cluster Machines atpl131 and atpl136)


# chkconfig rsh on
# chkconfig rlogin on
# service xinetd reload

Step 12:Create the /etc/hosts.equiv file as the root user.
(On both Cluster Machines atpl131 and atpl136)


# touch /etc/hosts.equiv
# chmod 600 /etc/hosts.equiv
# chown root:root /etc/hosts.equiv

Step 13:Edit the /etc/hosts.equiv file to include all the RAC nodes(atpl131 and atpl136):
(On both Cluster Machines atpl131 and atpl136)


# vi /etc/hosts.equiv

+atpl131 oracle
+atpl136 oracle
+atpl131-priv oracle
+atpl136-priv oracle
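Duplicate or missing entries in hosts.equiv are easy to overlook (for example, listing atpl131-priv twice instead of atpl136-priv). This sketch checks a sample copy of the file for duplicates; it does not touch the real /etc/hosts.equiv:

```shell
# Sketch: detect duplicated hosts.equiv entries in a sample copy.
EQUIV=$(mktemp)
cat > "$EQUIV" <<'EOF'
+atpl131 oracle
+atpl136 oracle
+atpl131-priv oracle
+atpl136-priv oracle
EOF
dups=$(sort "$EQUIV" | uniq -d | wc -l)
echo "duplicate entries: $dups"
rm -f "$EQUIV"
```

Anything other than 0 means the same host is listed twice and another is probably missing.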

Step 14:Now restart both machines.

Step 15:Login as the oracle user and add the following lines at the end of the .bash_profile file.
(On both Cluster Machines atpl131 and atpl136)


Note: Edit this .bash_profile file on atpl136 also, but give a different ORACLE_SID there: ORACLE_SID=RAC2; export ORACLE_SID

$ vi .bash_profile

# Oracle Settings
TMP=/tmp; export TMP
TMPDIR=$TMP; export TMPDIR

ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/product/10.2.0/db_1; export ORACLE_HOME
ORACLE_SID=RAC1; export ORACLE_SID   # change to RAC2 on the 2nd machine
ORACLE_TERM=xterm; export ORACLE_TERM
PATH=/usr/sbin:$PATH; export PATH
PATH=$ORACLE_HOME/bin:$PATH; export PATH

LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export CLASSPATH

if [ $USER = "oracle" ]; then
if [ $SHELL = "/bin/ksh" ]; then
ulimit -p 16384
ulimit -n 65536
else
ulimit -u 16384 -n 65536
fi
fi

Note: Remember to set ORACLE_SID to RAC2 on the second node; the remaining values are the same on the second machine.
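Instead of maintaining two slightly different .bash_profile files, the SID can be derived from the hostname. This is an optional sketch, not part of the original guide; the node names and SIDs are the ones used here:

```shell
# Sketch: pick ORACLE_SID per node from the hostname.
node=atpl131   # in the real profile use: node=$(hostname -s)
case "$node" in
  atpl131) ORACLE_SID=RAC1 ;;
  atpl136) ORACLE_SID=RAC2 ;;
  *)       ORACLE_SID=RAC1 ;;   # fallback; adjust as needed
esac
export ORACLE_SID
echo "$node -> $ORACLE_SID"
```

With this in place the same .bash_profile can be copied to both nodes unchanged.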


Step 16:Create Shared Disks (On atpl131 Machine only)


First we need to set up some NFS shares. In this case we will do this on the atpl131 node, but you can do this on a NAS or a third server if you have one available. On the atpl131 node:

Step 16.a:Create the following directories.

# mkdir /share1
# mkdir /share2


Step 16.b:Create two partitions, /dev/cciss/c0d1p11 and
/dev/cciss/c0d1p12, using fdisk


To see the attached disks

# fdisk -l

Partition the disk into two slices:
# fdisk /dev/cciss/c0d1
Command (m for help): n
   l   logical (5 or over)
   p   primary partition (1-4)
l
First cylinder (7303-35698, default 7303): 7303
Using default value 7303
Last cylinder or +size or +sizeM or +sizeK (1-29801, default 29801): +10g
Command (m for help): p
(shows the partition you created)
Command (m for help): w
(writes the partition table)
Now /dev/cciss/c0d1p11 is created.

In the same way create the second partition (/dev/cciss/c0d1p12).

Step 16.c:Format both partitions with ext3 and mount them on /share1 and /share2 (the mount will fail if no filesystem has been created yet):
# mkfs.ext3 /dev/cciss/c0d1p11
# mkfs.ext3 /dev/cciss/c0d1p12
# mount /dev/cciss/c0d1p11 /share1
# mount /dev/cciss/c0d1p12 /share2

Step 16.d:Also add entries to the /etc/fstab file for these two partitions so they are mounted permanently at boot time.
# vi /etc/fstab

/dev/cciss/c0d1p11 /share1 ext3 defaults 0 0
/dev/cciss/c0d1p12 /share2 ext3 defaults 0 0

Step 16.e:Add the following lines to the /etc/exports file.

# vi /etc/exports

/share1 *(rw,sync,no_wdelay,insecure_locks,no_root_squash)
/share2 *(rw,sync,no_wdelay,insecure_locks,no_root_squash)
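The no_root_squash option matters here: without it, root on the client nodes is mapped to an unprivileged user and the root.sh scripts run later cannot write to the shared directories. A small sketch to confirm every export line carries it, run against a sample copy rather than the live /etc/exports:

```shell
# Sketch: confirm both exports allow root access from the RAC nodes.
EXPORTS=$(mktemp)
cat > "$EXPORTS" <<'EOF'
/share1 *(rw,sync,no_wdelay,insecure_locks,no_root_squash)
/share2 *(rw,sync,no_wdelay,insecure_locks,no_root_squash)
EOF
ok=$(grep -c 'no_root_squash' "$EXPORTS")
echo "exports with no_root_squash: $ok"
rm -f "$EXPORTS"
```

Here both of the two export lines should carry the option.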

NOTE 1: Do not repeat Step 16 on the atpl136 machine.

NOTE 2: We have used NFS sharing by adding the entries to the /etc/exports file.


Step 17:Run the following commands to export the NFS shares.
(On both Cluster Machines atpl131 and atpl136)


# chkconfig nfs on
# service nfs restart

Step 18:Create two mount points to mount the NFS shares share1 and share2 on both machines.
(On both Cluster Machines atpl131 and atpl136)

# mkdir /u01
# mkdir /u02

Step 19:Add the following lines to the "/etc/fstab" file for NFS mounting.
(On both Cluster Machines atpl131 and atpl136)


# vi /etc/fstab

atpl131:/share1 /u01 nfs rw,bg,hard,nointr,tcp,vers=3,timeo=300,rsize=32768,wsize=32768,actimeo=0 0 0
atpl131:/share2 /u02 nfs rw,bg,hard,nointr,tcp,vers=3,timeo=300,rsize=32768,wsize=32768,actimeo=0 0 0
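Oracle's NFS guidance for shared files generally calls for hard mounts and actimeo=0 (no attribute caching), so a dropped option in these long fstab lines is worth catching. This sketch checks a sample fstab line for the important options; the option list checked is an assumption based on the entries above:

```shell
# Sketch: verify required NFS mount options are present in a sample fstab line.
LINE='atpl131:/share1 /u01 nfs rw,bg,hard,nointr,tcp,vers=3,timeo=300,rsize=32768,wsize=32768,actimeo=0 0 0'
opts=",$(echo "$LINE" | awk '{print $4}'),"   # field 4 is the option string
missing=0
for opt in hard nointr actimeo=0 rsize=32768 wsize=32768; do
  case "$opts" in
    *",$opt,"*) : ;;                          # option present
    *) echo "missing option: $opt"; missing=$((missing+1)) ;;
  esac
done
echo "missing mount options: $missing"
```

A result of 0 means the line carries all the checked options.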

NOTE :
1. Here we can also use nas1 in place of atpl131, because nas1 is a logical name for atpl131 that we set in the /etc/hosts file.
2. /share1 and /share2 are two directories created on atpl131 and used on both atpl131 and atpl136 for the NFS mounts. We use NFS sharing for the cluster installation because we do not have a NAS device.

Step 20:Mount the NFS shares on both servers.
(On both Cluster Machines atpl131 and atpl136)


# mount /u01
# mount /u02

Step 21:Create the shared CRS configuration file. (Since /u01 is shared storage, this only needs to be done once, from either machine.)

# touch /u01/crs_configuration

NOTE: Do not create the voting_disk file now with "# touch /u01/voting_disk"; otherwise the runInstaller step reports that the file already exists. When you run runInstaller it asks for the voting disk location and, on Next, creates the file in the /u01/ directory. If clicking Next again raises an error, don't worry: go to the /u01 directory, change the file's permissions to 700, and click Next again; it then proceeds without error.

Step 22:Create the directories in which the Oracle software will be installed.
(On both Cluster Machines atpl131 and atpl136)

# mkdir -p /u01/crs/oracle/product/10.2.0/crs
# mkdir -p /u01/app/oracle/product/10.2.0/db_1
# mkdir -p /u01/oradata
# chown -R oracle:oinstall /u01 /u02
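The three mkdir -p commands above can be expressed as a loop. This sketch uses a temporary prefix instead of the real /u01 so it can be tried unprivileged; the directory names are the ones from this step:

```shell
# Sketch: create the Oracle software directory layout under a temp prefix.
BASE=$(mktemp -d)
for d in crs/oracle/product/10.2.0/crs app/oracle/product/10.2.0/db_1 oradata; do
  mkdir -p "$BASE/u01/$d"
done
# On the real system a "chown -R oracle:oinstall /u01 /u02" follows (needs root).
ok=0
[ -d "$BASE/u01/app/oracle/product/10.2.0/db_1" ] && ok=1
echo "layout created: $ok"
rm -rf "$BASE"
```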

Step 23:RESTART BOTH MACHINES atpl131 and atpl136

Step 24:Login to both atpl131 and atpl136 machines as the oracle user and run commands to check whether they are clustered properly or not.
(On both Cluster Machines atpl131 and atpl136)


$ cd /u02/clusterware/cluvfy
$ sh runcluvfy.sh comp nodereach -n atpl131,atpl136 -verbose
$ sh runcluvfy.sh stage -pre crsinst -n atpl131,atpl136

Step 25:Install the Clusterware Software (start on the atpl131 machine only; the installer copies the software to atpl136).
Place the clusterware and database software in the /u02 directory and unzip it.

# cd /u02
# unzip 10201_clusterware_linux32.zip
# unzip 10201_database_linux32.zip

Start the Oracle installer.

$ cd /u02/clusterware
$ ./runInstaller

Step R-1. On the "Welcome" screen, click the "Next" button.

Step R-2. Accept the default inventory location by clicking the "Next" button.

Step R-3. Enter the appropriate name and Oracle Home path and click the "Next" button. (Only the path and name are shown here because we already set them in ".bash_profile".)

Step R-4. Wait while the prerequisite checks are done. If there are any failures, correct them and retry the tests before clicking the "Next" button.



Step R-5. The "Specify Cluster Configuration" screen shows only the atpl131 node in the cluster. Click the "Add" button to continue.

Step R-6. The "Specify Network Interface Usage" screen defines how each network interface will be used. Highlight the "eth0" interface and click the "Edit" button.

Note 1: The installer now asks you to run two scripts as the root user.

step 1. (i) First run the orainstRoot.sh script on atpl131:
# /u01/app/oracle/oraInventory/orainstRoot.sh
step 1. (ii) Then run the root.sh script on atpl131:
# /u01/crs/oracle/product/10.2.0/crs/root.sh
While running these scripts, errors about directory ownership or services not being on can be ignored.

After step 1, do step 2 on atpl136.
step 2. (i) Run orainstRoot.sh on atpl136:
# /u01/app/oracle/oraInventory/orainstRoot.sh
step 2. (ii) Then run root.sh on atpl136:
# /u01/crs/oracle/product/10.2.0/crs/root.sh
While running these scripts, errors about directory ownership or services not being on can be ignored.

step 3. After steps 1 and 2, before clicking OK on atpl131, run vipca (Virtual IP Configuration Assistant):
# cd /u01/crs/oracle/product/10.2.0/crs/bin
# ./vipca
It asks you to choose an Ethernet card; choose the public network card (i.e. eth1),
then give these values:
for atpl131:
IP alias name: atpl131-vip.server & IP address: 192.168.1.132 & subnet mask: 255.255.255.0
for atpl136:
IP alias name: atpl136-vip.server & IP address: 192.168.1.137 & subnet mask: 255.255.255.0

step 4. Then return to atpl131, click OK, and click Next through the remaining screens until the installation completes.

step 5. Launch dbca.


Regards,
Sukhwinder Singh