Tuesday, November 29, 2011

How To recreate the OraInventory if it gets corrupted or removed?

In cases where the OraInventory is missing or otherwise corrupt, recreate the oraInventory directory on UNIX systems, using the following steps. In a normal installation, there is a Global Inventory (OraInventory) and a Local Inventory ($ORACLE_HOME/inventory).
  1. Locate the oraInst.loc file, which may be in different locations, depending on your system:

          /var/opt/oracle/oraInst.loc file
          or
          /etc/oraInst.loc
  2. Modify the oraInst.loc file:

          cp /var/opt/oracle/oraInst.loc /var/opt/oracle/oraInst.loc.bak
          mkdir /u01/oracle/oraInventory

          ---file contents---
          inventory_loc=/u01/oracle/oraInventory
          inst_group=oinstall
          ---file contents---
    Important:
    This example uses a typical directory (treated as the $ORACLE_BASE) and a typical UNIX group that installed the Oracle products. Ensure that the correct values are used for your system.

    The oraInventory directory is usually a directory under the $ORACLE_BASE, not the $ORACLE_HOME. For example, if the $ORACLE_HOME is "/u01/oracle/product/10g", then the oraInventory could be "/u01/oracle/oraInventory".
  3. Change the permissions to be appropriate, (using your directory location):

          chmod 644 /var/opt/oracle/oraInst.loc
  4. For consistency, copy the file to Oracle home directory, (using your directory location):

    cp $ORACLE_HOME/oraInst.loc $ORACLE_HOME/oraInst.loc.bak
    cp /var/opt/oracle/oraInst.loc $ORACLE_HOME/oraInst.loc
  5. Run Oracle Universal Installer from your Oracle home as below, (using your site specific directory location and Oracle home name):

    cd $ORACLE_HOME/oui/bin
    ./runInstaller -silent -attachHome ORACLE_HOME="/u01/oracle/product/10.2" ORACLE_HOME_NAME="Ora10gHome"
  6. Check the inventory output is correct for your Oracle home:

          $ORACLE_HOME/OPatch/opatch lsinventory -detail
  7. If the table at the beginning of the output shows the proper directories, and the Oracle home components are properly reflected in the details, then the Global Inventory has been successfully created from the Local Inventory. At this time, you may patch and maintain your Oracle home as normal.
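
For reference, here is the whole procedure (steps 2 to 5) as one shell sketch. It assumes the /var/opt/oracle location of oraInst.loc, an inventory under /u01/oracle, the oinstall group and a 10.2 Oracle home registered as "Ora10gHome"; substitute your own paths, group and home name:

    # Back up and rewrite the inventory pointer file
    cp /var/opt/oracle/oraInst.loc /var/opt/oracle/oraInst.loc.bak
    mkdir -p /u01/oracle/oraInventory
    cat > /var/opt/oracle/oraInst.loc <<EOF
    inventory_loc=/u01/oracle/oraInventory
    inst_group=oinstall
    EOF
    chmod 644 /var/opt/oracle/oraInst.loc

    # Copy the pointer file into the Oracle home and re-attach the home
    cp /var/opt/oracle/oraInst.loc $ORACLE_HOME/oraInst.loc
    cd $ORACLE_HOME/oui/bin
    ./runInstaller -silent -attachHome ORACLE_HOME="/u01/oracle/product/10.2" ORACLE_HOME_NAME="Ora10gHome"

    # Verify the result
    $ORACLE_HOME/OPatch/opatch lsinventory -detail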

Sukhwinder Singh
Apps DBA

Alter Commands:

alter database: Alter a Data File

ALTER DATABASE DATAFILE 4 OFFLINE;
ALTER DATABASE DATAFILE '/opt/oracle/datafile/users01.dbf' OFFLINE;
ALTER DATABASE DATAFILE '/opt/oracle/datafile/users01.dbf' RESIZE 100m;
ALTER DATABASE DATAFILE '/opt/oracle/datafile/users01.dbf'
AUTOEXTEND ON NEXT 100M MAXSIZE 1000M;
ALTER DATABASE DATAFILE 4 END BACKUP;

alter database: Alter a Tempfile

ALTER DATABASE TEMPFILE 4 RESIZE 100M;
ALTER DATABASE TEMPFILE 4 AUTOEXTEND ON NEXT 100M MAXSIZE 1000M;
ALTER DATABASE TEMPFILE 4 DROP INCLUDING DATAFILES;
ALTER DATABASE TEMPFILE 4 OFFLINE;

alter database: ARCHIVELOG Mode Commands


ALTER DATABASE ARCHIVELOG;
ALTER DATABASE NOARCHIVELOG;

ALTER DATABASE FORCE LOGGING;
ALTER DATABASE CLEAR LOGFILE '/opt/oracle/logfiles/redo01.rdo';
ALTER DATABASE CLEAR UNARCHIVED LOGFILE '/opt/oracle/logfiles/redo01.rdo';
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY, UNIQUE);
ALTER DATABASE DROP SUPPLEMENTAL LOG DATA;

alter database: Control File Operations


ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
ALTER DATABASE BACKUP CONTROLFILE TO TRACE AS '/opt/oracle/logfile_backup/backup_logfile.trc' REUSE RESETLOGS;
ALTER DATABASE BACKUP CONTROLFILE TO '/opt/oracle/logfile_backup/backup_logfile.ctl';

alter database: Create a Data File

ALTER DATABASE CREATE DATAFILE '/opt/oracle/datafile/users01.dbf' AS '/opt/oracle/datafile/users01.dbf';
ALTER DATABASE CREATE DATAFILE 4 AS '/opt/oracle/datafile/users01.dbf';
ALTER DATABASE CREATE DATAFILE '/opt/oracle/datafile/users01.dbf' AS NEW;

alter database: Datafile Offline/Online

See alter database: Alter a Data File

alter database: Logfile Commands

ALTER DATABASE ADD LOGFILE GROUP 2
('/opt/oracle/logfiles/redo02a.rdo', '/opt/oracle/logfiles/redo02b.rdo') SIZE 300M REUSE;
ALTER DATABASE ADD LOGFILE MEMBER '/opt/oracle/logfiles/redo02c.rdo' TO GROUP 2;
ALTER DATABASE ADD LOGFILE THREAD 3 GROUP 2
('/opt/oracle/logfiles/redo02a.rdo', '/opt/oracle/logfiles/redo02b.rdo') SIZE 300M REUSE;
ALTER DATABASE DROP LOGFILE GROUP 3;
ALTER DATABASE DROP LOGFILE MEMBER '/opt/oracle/logfiles/redo02b.rdo';

alter database: Mount and Open the Database

ALTER DATABASE MOUNT;
ALTER DATABASE OPEN;

alter database: Move or Rename a Database File or Online Redo Log

NOTE
The database must be mounted to rename or move online redo logs.
The database must be mounted or the data files taken offline to move database data files.

ALTER DATABASE RENAME FILE '/ora/datafile/oldfile.dbf' TO '/ora/datafile/newfile.dbf';
alter database: Open the Database Read-Only

ALTER DATABASE OPEN READ ONLY;

alter database: Open the Database with resetlogs

ALTER DATABASE OPEN RESETLOGS;



Sukhwinder Singh
Apps DBA

How to apply Database Patches

Database patches come in various kinds, but broadly there are two types:
I) Patchset: To upgrade to a higher version we use a database patchset, i.e. for upgrading from 10.2.0.1.0 to 10.2.0.4.0 we have to use a patchset.

II) Patchset Updates:
Patchset Updates are mainly divided into two types:

1) CPU (Critical Patch Update):
Delivered quarterly by Oracle to fix security issues.

2) PSU (Patch Set Update):
Also delivered quarterly by Oracle; it includes the CPU plus a bunch of other one-off patches.

PSUs were introduced in October 2009. Once we start using them, we can stop applying CPUs. PSUs, like CPUs, are cumulative, so there is no need to apply all of them; applying the latest one should be enough. A one-off patch is a single fix for a particular issue.

We need a Metalink (My Oracle Support) account to download these patches, and the instructions for applying them are in the README file of each patch.

1)How to apply Patchsets:
-------------------------------------------------


Patchsets are applied via the OUI (Oracle Universal Installer) and are usually applied to upgrade the Oracle version.
Eg: When we want to upgrade from base release 10.2.0.1.0 to 10.2.0.4.0 for more useful features and to avoid bugs, we have to use patchset 10.2.0.4.0. So patchsets change the version number.
Details of this process are in the link mentioned below:

http://rafioracledba.blogspot.com/search/label/Database%20Upgrade
i.e Database Upgrade Label

2)How to apply Patch set updates or Critical Patch Updates:
--------------------------------------------------------------------------


PSUs (Patch Set Updates) and CPUs (Critical Patch Updates) are applied via the opatch utility.
opatch is the utility for applying database interim patches, which fix bugs in a release. To make use of the opatch utility you should download the most recent opatch version and apply it as follows:


Downloading and installing the latest OPatch version:
------------------------------------------------------

Below are the steps for downloading and installing the latest OPatch version. OPatch is used for applying database patches that fix various bugs, and it is important to have the latest version.

1) Please download the latest OPatch version from My Oracle Support (MOS)

a) Click on the "Patches & Updates" tab

b) In the "Patch Name or Number" field type 6880880

c) In the "Platform" field select the relevant platform

d) Click the Search button.

e) Select the patch that corresponds to the Oracle release installed:

6880880 Universal Installer: Patch OPatch 11.2

f) Click the Download button
Once the above task is done, copy the patch to the $ORACLE_HOME directory and move the previous OPatch directory to a separate directory in the OS. We can use WinSCP or FTP to copy the patch from MOS to Windows and then from Windows to the Linux box.

Now, unzip the patch zip file. The files will be extracted as shown below:

Eg:
-----
$ cd $ORACLE_HOME
(If an OPatch directory already exists then move it)
$ mv OPatch /TEST/OPatch_100057
$ cd $ORACLE_HOME
$ unzip p6880880_101000_LINUX64.zip
Archive: p6880880_101000_LINUX64.zip
creating: OPatch/
creating: OPatch/docs/
inflating: OPatch/docs/FAQ
inflating: OPatch/docs/README.txt
inflating: OPatch/docs/Users_Guide.txt
inflating: OPatch/docs/bt1.txt
inflating: OPatch/docs/bt2.txt
inflating: OPatch/docs/tmp
inflating: OPatch/emdpatch.pl
creating: OPatch/jlib/
inflating: OPatch/jlib/opatch.jar
inflating: OPatch/opatch
inflating: OPatch/opatch.bat
inflating: OPatch/opatch.pl

[oracle@kor300949lx1 OPatch]$ opatch version
Invoking OPatch 11.2.0.1.5

OPatch Version: 11.2.0.1.5

OPatch succeeded.

[oracle@kor300949lx1 OPatch]$ opatch apply /u01/app/8264365
Invoking OPatch 11.2.0.1.5

Oracle Interim Patch Installer version 11.2.0.1.5
Copyright (c) 2010, Oracle Corporation. All rights reserved.


Oracle Home : /u01/app/oracle/product/11.1.0/db_1
Central Inventory : /u01/app/oraInventory
from : /etc/oraInst.loc
OPatch version : 11.2.0.1.5
OUI version : 11.2.0.1.0
OUI location : /u01/app/oracle/product/11.1.0/db_1/oui
Log file location : /u01/app/oracle/product/11.1.0/db_1/cfgtoollogs/opatch/opatch2011-05-16_17-18-01PM.log

Patch history file: /u01/app/oracle/product/11.1.0/db_1/cfgtoollogs/opatch/opatch_history.txt

Here your latest OPatch version is ready. Let us see what it consists of:

Go to the below mentioned path for all opatch commands description.

$ cd $ORACLE_HOME/OPatch/docs/
$ vi Users_Guide.txt

In general, for applying any patch we have to use the below commands:
$ cd $ORACLE_HOME/OPatch
$ opatch apply /u01/Patch_dir/

where /u01/Patch_dir is the path where you have kept (unzipped) the patch in the OS.
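
To give a feel for the overall flow, here is a hedged sketch of a typical PSU/CPU application with opatch. The patch number 13343438, the zip file name and the catbundle post-install step are placeholders taken from a typical 11.2 PSU; always follow the README shipped with your specific patch:

# Stop the listener and the database before patching the Oracle home
$ lsnrctl stop
$ sqlplus / as sysdba <<EOF
shutdown immediate
exit
EOF

# Unzip the patch and check it for conflicts against the Oracle home
$ cd /u01/stage
$ unzip p13343438_112030_Linux-x86-64.zip
$ cd 13343438
$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -ph ./

# Apply the patch, restart, then run the post-install steps from the README
$ $ORACLE_HOME/OPatch/opatch apply
$ lsnrctl start
$ sqlplus / as sysdba <<EOF
startup
@?/rdbms/admin/catbundle.sql psu apply
exit
EOF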




Sukhwinder Singh
Apps DBA

Friday, November 18, 2011

Shell script to check if the Workflow mailer is down in APPS R12

We can use the following shell script and schedule it with crontab. It is very handy: it will check the workflow mailer and, if it is down, send an email to the Apps DBA.

Following is the script.
#!/bin/sh
## ORACLE_HOME=`cat /etc/var/opt/oratab | grep ^TESTDB: | cut -f2 -d':'`
ORACLE_HOME=/erptest/TST/db/tech_st/11.1.0
export ORACLE_HOME
ORACLE_SID=ERPTEST
export ORACLE_SID
PATH=$PATH:$ORACLE_HOME/bin:.
export PATH
SCRIPT_DIR=/erptestdb/home/scripts
export SCRIPT_DIR
# check_ts.log
$ORACLE_HOME/bin/sqlplus -s test_user/test_user <<!
spool /erptestdb/home/scripts/check_wf_mailer.log

set lines 1000
set head off
-- column tablespace_name format a50
select COMPONENT_STATUS
from apps.FND_SVC_COMPONENTS
where component_type='WF_MAILER'
/

spool off
exit
!
#SCRIPT_DIR=/test/appl/homes/oratest/scripts

if [ -s $SCRIPT_DIR/check_wf_mailer.log ]; then
#COUNT=`cat check_ts.log | wc -l `
COUNT=0
STATUS=`cat $SCRIPT_DIR/check_wf_mailer.log | awk '{print $1}'`
VSTATUS='RUNNING'
echo "v status is: " $VSTATUS
echo "file is present"
echo "Script Dir: " $SCRIPT_DIR
echo "status is " $STATUS
if [ "$STATUS" != "$VSTATUS" ] ; then
echo "status is in the if stmt " $STATUS
mailx -s "Oracle Workflow Mailer is down in APPS R12, please check " appsdba@test.com < /erptestdb/home/scripts/check_wf_mailer.log

fi
fi
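
A sample crontab entry to run this check every 15 minutes, assuming the script above is saved as /erptestdb/home/scripts/check_wf_mailer.sh and made executable (adjust the path and schedule to suit your site):

0,15,30,45 * * * * /erptestdb/home/scripts/check_wf_mailer.sh > /erptestdb/home/scripts/check_wf_mailer_cron.log 2>&1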

Apps DBA
Sukhwinder Singh

Thursday, October 20, 2011

The Seven Deadly Habits of a DBA…and how to cure them

Calling widespread bad habits in database administration "deadly" may seem extreme. However, when you consider the critical nature of most data, and just how damaging data loss or corruption can be to a corporation, "deadly" seems pretty dead-on.
Although these habits are distressingly common among DBAs, they are curable with some shrewd management intervention. What follows is a list of the seven habits we consider the deadliest, along with some ideas on how to eliminate them.
Habit #1. THE LEAP OF FAITH: "We have faith in our backup."
Blind faith can be endearing, but not when it comes to backing up a database. Backups should be trusted only as far as they have been tested and verified.
Cures:
  • Have your DBAs verify that the backup is succeeding regularly, preferably using a script that notifies them if there's an issue.
  • Maintain a backup to your backup. DBAs should always use at least two backup methods. A common technique is to use those old-fashioned exports as a backup to the online backups.
  • Resource test recoveries as often as is practical. An early sign that your DBA team is either overworked or not prioritizing correctly is having a quarter go by without a test recovery. Test recoveries confirm that your backup strategy is on track, while allowing your team to practice recovery activities so they can handle them effectively when the time comes.
Habit #2. GREAT EXPECTATIONS: "It will work the way we expect it to. Let's go ahead."
Although not user friendly in the traditional sense, Oracle is very power-user friendly— once you've been working with it for a while, you develop an instinct for the way things "should" work. Although that instinct is often right, one of the most dangerous habits any DBA can possess is an assumption that Oracle will "just work" the way it should.
Cures:
  • Inculcate a "practice, practice, practice" mentality throughout the organization. DBAs need to rehearse activities in the safe sandbox of a test environment that's designed to closely mimic the behaviour of the production system. The organization needs to allow the time and money for them to do so.
  • Pair inexperienced DBAs with senior ones whenever possible—or take them under your own wing. New DBAs tend to be fearless, but learning from someone else's experience can help instill some much needed paranoia.
  • Review the plans for everything. It's amazing how often DBAs say, "I've done that a hundred times, I don't need a plan." If they're heading into execution mode, they absolutely need a plan.
Habit #3. LAISSEZ-FAIRE ADMINISTRATION: "We don't need to monitor the system. The users always let us know when something's wrong."
If you depend on the users to inform the DBA team that there's a problem, it may already be too late.
Cures:
  • Install availability and performance monitoring systems so that issues are identified and resolved before they cause service-affecting failures.
  • Avoid post-release software issues by working with developers and testers to ensure that all production-ready software is stable and high-performance.
Habit #4. THE MEMORY TEST: "We'll remember how this happened, and what we did to get things going again."
It may seem impossible that a DBA team would forget a massive procedure that took them weeks to get right, and yet it happens all the time. In order to prevent recurring mistakes and take advantage of gained experience, documentation is essential.
Cures:
  • Require that your DBAs maintain a comprehensive documentation library and activity diary, including a significant level of rationale, syntax, and workflow detail.
  • Provide your team with groupware on your intranet so that these documents become searchable in an emergency.
  • Enforce the discipline of documentation and check it periodically. Ask your DBAs: When was this tablespace created, by whom, and with what SQL? What tasks were performed on a particular day? If they can't answer quickly, you'll know they've gone back to relying on memory.
Habit #5. THE BLAME GAME: "Don't look at me, it's the developer's fault that SQL is in production"
Some DBAs have a real "us versus them" mentality when it comes to developers in their organization. They see themselves not as facilitators helping the developers develop quality code from a database standpoint, but rather as guardians who prevent poor-quality code from making it into production. This might seem like semantics, but a confrontational relationship between developers and DBAs results in a lack of developer initiative and significant slowdowns in release cycles.
Cures:
  • Select DBAs who understand it's their responsibility to work as an integrated team with the developers they support.
  • Cultivate a team attitude by structuring continuous DBA involvement in every project rather than at review milestones.
  • Consider assigning an individual DBA in a developer support role. If it's clearly in the job description, there's more motivation to do it well.
Habit #6. THE SOLO ACT: "I know what I'm doing and don't need any help."
Database administration is increasingly complex and even the most senior DBAs can't possibly know every last detail. DBAs have different specialties, which need to be culled and utilized. When DBAs feel like they know, or should know, everything, they don't ask questions and miss out on valuable knowledge they could be gaining from others.
Cures:
  • Foster a teamwork culture where it's acceptable for DBAs to admit they don't know the answer and to ask for help.
  • Encourage your DBAs to seek out an outside peer group as a forum for brainstorming and testing their assumptions. No single person can match the expertise and experience of even a relatively small group.
  • Provide a safety net of tech resources such as reference materials, courses, and outside experts or consultants on call.
Habit #7. TECHNO-LUST: "Things would work so much better if only we had..."
DBAs are often on top of the latest technology, which can help them do a superlative job. But when the desire for new technology causes DBAs to recommend unnecessary hardware purchases or software add-ons, costs tend to skyrocket quickly—as do problems.
Cures:
  • Never upgrade your hardware infrastructure without first exhausting all tuning opportunities. Remember, ten years ago enormous enterprises were run on servers with one-tenth the capacity, all thanks to necessity and skill.
  • Never consent to using advanced or new features until you're well aware of the ongoing maintenance commitment and resulting costs.
  • Watch out for DBA support software that presents friendly GUI interfaces for difficult tasks. This type of interface allows a beginner DBA to act as an intermediate DBA under certain circumstances, but simultaneously prevents that beginner from learning the actual skills behind the tasks. Moreover, these tools tend to hide real risks from the DBA, making potentially damaging activities as easy as point-and-click.
Whether it takes a twelve-step program or one tiny adjustment, all of these deadly DBA habits can be kicked. Of course, the first step is recognizing the problem. By starting with this list and doing a careful inventory of the successes and failures in your team's database administration, you'll be well on your way to finding a cure.


Apps DBA
Sukhwinder Singh

Wednesday, September 28, 2011

Resolving Oracle networking problems..

This simple two part procedure will help to diagnose and fix the most common sqlnet and tnsnames configuration problems.

  • 1. Test communication between the client and the listener

    We will use tnsping to complete this step. It's a common misconception that tnsping tests connectivity to the instance. In actual fact, it only tests connectivity to the listener.

    Here, we will use it to prove that a) the tnsnames.ora has the correct hostname and port, and b) that there is a listener listening on the specified host and port. Run tnsping:

    tnsping <your_tns_entry_name>
    If it is successful you will see something like this:

    [oradev@testerp] $ tnsping dev

    Used TNSNAMES adapter to resolve the alias
    Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS =
    (PROTOCOL = TCP) (HOST = dev)(PORT = 1521))) (CONNECT_DATA =
    (SERVER = DEDICATED) (SERVICE_NAME = scr9)))
    OK (40 msec)

    If not, here are some common errors, and some suggestions for fixing them:

    TNS-03505: Failed to resolve name
    The specified database name was not found in the tnsnames.ora, onames or ldap. This means that tnsping hasn't even got as far as trying to make contact with a server - it simply can't find any record of the database that you are trying to tnsping. Make sure that you've spelled the database name correctly, and that it has an entry in the tnsnames.ora.

    If you have a sqlnet.ora, look for the setting NAMES.DEFAULT_DOMAIN. If it is set, then all entries in your tnsnames.ora must have a matching domain suffix.

    TNS-12545: Connect failed because target host or object does not exist
    The host specified in the tnsnames is not contactable. Verify that you have spelled the host name correctly. If you have, try pinging the host with 'ping <hostname>'. If ping returns 'unknown host', speak to your network admin. It might be that you have a DNS issue (you could try using the IP address if you have it to hand). If you get 'host unreachable', again speak to your network person, the problem could be down to a routing or firewall issue.

    TNS-12541: TNS:no listener
    The hostname was valid but the listener was not contactable. Things to check are that the tnsnames has the correct port (and hostname) specified, and that the listener is running on the server and using the correct port.

    tnsping hangs for a long time
    I've seen this happen in situations where there is something listening on the host/port - but it isn't an oracle listener. Make sure you have specified the correct port, and that your listener is running. If all looks ok, try doing a 'netstat -ap | grep 1521' (or whatever port you are using) to find out what program is listening on that port.

  • 2. Attempt a connection to the instance

    Once you have proven that the tnsnames is talking to the listener properly, the next step is to attempt a full connection to the instance. To do this we'll use sqlplus:

    sqlplus [username]/[password]@<your_tns_entry_name>
    If it works you will successfully log into the instance. If not, here are some common errors:

    ORA-01017: invalid username/password; logon denied
    This is actually a good error in these circumstances! Even though you didn't use the correct username or password, you must have successfully made contact with the instance.

    ORA-12505: TNS:listener does not currently know of SID given in connect
    Either the SID is misspelled in the tnsnames, or the listener isn't listening for it. Check the tnsnames.ora first. If it looks ok, do a 'lsnrctl status' on your server, to see what databases the listener is listening for.

    ORA-12514: TNS:listener could not resolve SERVICE_NAME given in connect
    This is quite a common error and it means that, while the listener was contactable, the database (or rather the service) specified in the tnsnames wasn't one of the things that it was listening out for.

    Begin by looking at your tnsnames.ora. In it, you will see a setting like SERVICE_NAME=<name>.

    If you are running a single instance database (ie. not RAC), and you are sure that you are not using services, it might be easier to change SERVICE_NAME= to SID= in your tnsnames. Using service names is the more modern way of doing things, and it does have benefits, but SID still works perfectly well (for now anyway).

    If you would prefer to continue using service names, you must first check that you have not misspelled the service name in your tnsnames. If it looks alright, next check that the listener is listening for the service. Do this by running 'lsnrctl services' on your server. If there isn't an entry for your service, you need to make sure that the service_names parameter is set correctly on the database.
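
For reference, here is what the two styles of tnsnames.ora entry look like. The alias, host, port and service/SID below are illustrative placeholders only (they mirror the tnsping output shown earlier); use one form or the other for a given alias:

# Entry using a service name
DEV =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dev)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = scr9))
  )

# Equivalent entry using the SID instead
DEV =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dev)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SID = scr9))
  )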

Sukhwinder Singh
Apps DBA

Converting from Rollback to Undo

This procedure will drop the existing rollback segments and tablespace, and replace them with an auto-managed undo tablespace.

  • 1. Identify the rollback segments and tablespaces to be removed:
    select segment_name
    , tablespace_name
    , status
    from dba_rollback_segs
    where tablespace_name != 'SYSTEM'
    /
  • 2. Produce offline and drop statements for each rollback segment:
    select 'alter rollback segment ' || segment_name || ' offline;'
    from dba_rollback_segs
    where tablespace_name = '&TS'
    and tablespace_name != 'SYSTEM'
    /

    select 'drop rollback segment ' || segment_name || ';'
    from dba_rollback_segs
    where tablespace_name = '&TS'
    and tablespace_name != 'SYSTEM'
    /
    Review the output and, if you are happy with it, cut and paste it back into sqlplus.

  • 3. Make a note of the old tablespace's size and location, then drop it.
    drop tablespace <tablespace_name> including contents and datafiles
    /
  • 4. Create a new 'undo' tablespace in place of the old one.
    create undo tablespace undo
    datafile '<path/file>' size <size>
    extent management local
    /
  • 5. Update the initialisation parameters
    If you are using a spfile (and you should be!) run the following commands:
    alter system reset rollback_segments scope=spfile sid='*';
    alter system set undo_management=auto scope=spfile;
    alter system set undo_tablespace=undo scope=spfile;
     If you are still using a pfile (init.ora) then do the following:
    Remove the following settings:
    rollback_segments=...
    Add/alter these two lines:
    undo_management=auto
    undo_tablespace=undo
  • 6. Restart the instance
    Note the location of the alert log. If there is a problem during start-up, you will probably receive nothing more than a message saying 'instance aborted'. The alert log will contain a slightly more detailed explanation.

    Restart the instance:
    shutdown immediate
    startup
    Rerun the query from step 1. There will be a whole load of rollback segments with names like '_SYSSMU1$'. This is an indication that the instance is now using auto undo.
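
As an extra check after the restart, the following quick queries (a sketch, assuming the undo tablespace was named UNDO as in step 4) confirm the new configuration:

show parameter undo_management
show parameter undo_tablespace

select tablespace_name, contents, status
from dba_tablespaces
where contents = 'UNDO'
/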

Sukhwinder Singh
Apps DBA

Backup - RMAN..

    * Create a catalog
    * Register a database
    * Un-register a database
    * Reset the catalog after a resetlogs on the target
    * Resync the catalog with the target controlfile
    * Delete a backup
    * Backup a database
    * Restore/recover a database
    * Show the controlfile backup record
    * Misc commands

Create a catalog
create a tablespace
create a user
grant connect, resource, recovery_catalog_owner to user

rman catalog user/pass@db
create catalog tablespace "<tablespace_name>";
 
Note. <tablespace_name> is case sensitive (i.e. it must be uppercase)
Note. If you get the error 'rman: can't open catalog', make sure that oracle's rman is being run (which rman). X11 also has a command called rman. Rename it if necessary.
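
As a rough sketch of those first three steps, assuming a catalog owner called rman and a catalog tablespace called RMAN_TS (adjust names, password and datafile path to suit):

sqlplus / as sysdba

create tablespace rman_ts datafile '/u01/oradata/rmandb/rman_ts01.dbf' size 200m;

create user rman identified by rman
default tablespace rman_ts
temporary tablespace temp
quota unlimited on rman_ts;

grant connect, resource, recovery_catalog_owner to rman;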
Register a database
Note. ensure the target db has a password file
rman catalog user/pass@rmandb target user/pass@db
register database;
Un-register a database
sqlplus user/pass@rmandb
select * from rc_database;
select db_key, db_id from db;

execute dbms_rcvcat.unregisterdatabase(<db_key>, <db_id>);
Reset the catalog after a resetlogs on the target
reset database;
Resync the catalog with the target controlfile
resync catalog;
Delete a backup
allocate channel... delete backuppiece <number>; 
release channel; 
Backup a database
backup database plus archivelog format '/u01/ora_backup/rman/%d_%u_%s';

run {
allocate channel t1 type disk;
backup current controlfile format '/u01/ora_backup/rman/%d_%u_%s';
backup database format '/u01/ora_backup/rman/%d_%u_%s';
backup archivelog all delete input format '/u01/ora_backup/rman/arch_%d_%u_%s';
release channel t1;
}

run {
allocate channel t1 type disk;
backup archivelog all delete input format '/u01/ora_backup/rman/arch_%d_%u_%s';
release channel t1;
}
 
Cold backup (archivelog or noarchivelog mode)
run {
allocate channel t1 type disk;
shutdown immediate;
startup mount;
backup database include current controlfile format '/u01/ora_backup/rman/%d_%u_%s';
alter database open;
}

run {
allocate channel t1 type disk;
backup archivelog all delete input;
}
Restore/recover a database
Full restore and recovery
startup nomount; 
run { allocate channel t1 type disk; 
allocate channel t2 type disk; 
allocate channel t3 type disk; 
allocate channel t4 type disk; 
restore controlfile; 
restore archivelog all; 
alter database mount; 
restore database; 
recover database; 

sql 'alter database open resetlogs';
}

Restore and roll forward to a point in time
startup nomount; 
run { set until time ="to_date('30/08/2006 12:00','dd/mm/yyyy hh24:mi')"; 
allocate channel t1 type disk; 
allocate channel t2 type disk; 
allocate channel t3 type disk; 
allocate channel t4 type disk; 
restore controlfile; 
restore archivelog all; 
alter database mount; 
restore database; 
recover database; 
sql 'alter database open resetlogs';
}

If the archive logs are already in place:
startup mount; 
run { set until time ="to_date('08/02/2007 14:00','dd/mm/yyyy hh24:mi')"; 
allocate channel t1 type disk; 
allocate channel t2 type disk; 
allocate channel t3 type disk; 
allocate channel t4 type disk; 
restore database; 
recover database; 
sql 'alter database open resetlogs';
}

startup mount; 
run { allocate channel t1 type disk; 
recover database; 
}
Show the control file backup record
set pages 999 lines 100 
col name format a60 
break on set_stamp skip 1 

select set_stamp 
, to_char(ba.completion_time, 'HH24:MI DD/MM/YY') finish_time 
, df.name 
from v$datafile df 
, v$backup_datafile ba 
where df.file# = ba.file# 
and ba.file# != 0 
order by set_stamp, ba.file# 
/
Misc commands
list backupset; 
list backup of database; 
list backup of archivelog all; 
report obsolete; 
report obsolete redundancy = 2; 
delete obsolete; - remove unneeded backups 
restore database validate; - check the backup 
report unrecoverable; 
report schema; - show current db files 
crosscheck backup; - make sure the backups in the catalog still physically exist 
delete expired backup; - delete expired backups found by crosscheck 

rman target sys/*****@scr10 catalog rman/rman@dbarep 
LIST BACKUPSET OF DATABASE; 
ALLOCATE CHANNEL FOR MAINTENANCE DEVICE TYPE DISK; 
DELETE OBSOLETE REDUNDANCY = 4 device type disk; 
delete obsolete REDUNDANCY = 2 device type disk;

Delete archive log older than...
DELETE NOPROMPT ARCHIVELOG UNTIL TIME "SYSDATE-5" 

Crosscheck the available archivelogs (fixes RMAN-06059)
change archivelog all crosscheck;


Sukhwinder Singh
Apps DBA

Clone an Oracle database using RMAN duplicate (same server)..

This procedure will clone a database onto the same server using RMAN duplicate.
  • 1. Backup the source database.
    To use RMAN duplicate an RMAN backup of the source database is required. If there is already one available, skip to step 2. If not, here is a quick example of how to produce an RMAN backup. This example assumes that there is no recovery catalog available:
    rman target sys@<source database> nocatalog
    backup database plus archivelog format '/u01/ora_backup/rman/%d_%u_%s';
     This will backup the database and archive logs. The format string defines the location of the backup files. Alter it to a suitable location.

  • 2. Produce a pfile for the new database
    This step assumes that the source database is using a spfile. If that is not the case, simply make a copy the existing pfile.

    Connect to the source database as sysdba and run the following:
    create pfile='init<new database sid>.ora' from spfile;
    This will create a new pfile in the $ORACLE_HOME/dbs directory.

    The new pfile will need to be edited immediately. If the cloned database is to have a different name to the source, this will need to be changed, as will any paths. Review the contents of the file and make alterations as necessary.

    Because in this example the cloned database will reside on the same machine as the source, Oracle must be told how to convert the filenames during the RMAN duplicate operation. This is achieved by adding the following lines to the newly created pfile:
    db_file_name_convert=(<source_db_path>,<target_db_path>)
    log_file_name_convert=(<source_db_path>,<target_db_path>)
    Here is an example where the source database scr9 is being cloned to dg9a. Note the trailing slashes and lack of quotes:
    db_file_name_convert=(/u01/oradata/scr9/,/u03/oradata/dg9a/)
    log_file_name_convert=(/u01/oradata/scr9/,/u03/oradata/dg9a/)

  • 3. Create bdump, udump & cdump directories
    Create bdump, udump & cdump directories as specified in the pfile from the previous step.

  • 4. Add a new entry to oratab, and source the environment
    Edit the /etc/oratab (or /opt/oracle/oratab) and add an entry for the new database.

    Source the new environment with '. oraenv' and verify that it has worked by issuing the following command:
    echo $ORACLE_SID
    If this doesn't output the new database sid go back and investigate why not.

  • 5. Create a password file
    Use the following command to create a password file (add an appropriate password to the end of it):
    orapwd file=${ORACLE_HOME}/dbs/orapw${ORACLE_SID} password=<your password>
  • 6. Duplicate the database
    From sqlplus, start the instance up in nomount mode:
    startup nomount
    Exit sqlplus, start RMAN and duplicate the database. As in step 1, it is assumed that no recovery catalog is available. If one is available, simply amend the RMAN command to include it.
    rman target sys@<source_database> nocatalog auxiliary /

    duplicate target database to <clone database name>;
    This will restore the database and apply some archive logs. It can appear to hang at the end sometimes. Just give it time - I think it is because RMAN does a 'shutdown normal'.

    If you see the following error, it is probably due to the file_name_convert settings being wrong. Return to step 2 and double check the settings.
    RMAN-05001: auxiliary filename '%s' conflicts with a file used by the target database
    Once the duplicate has finished RMAN will display a message similar to this:
    database opened
    Finished Duplicate Db at 26-FEB-11

    Exit RMAN:
    RMAN> exit

  • 7. Create an spfile
    From sqlplus:
    create spfile from pfile;

    shutdown immediate

    startup

    Now that the clone is built, we no longer need the file_name_convert settings:
    alter system reset db_file_name_convert scope=spfile sid='*'
    /

    alter system reset log_file_name_convert scope=spfile sid='*'
    /
  • 8. Optionally take the clone database out of archive log mode
    RMAN will leave the cloned database in archive log mode. If archive log mode isn't required, run the following commands from sqlplus:
    shutdown immediate
    startup mount
    alter database noarchivelog;
    alter database open;
  • 9. Configure TNS
    Add entries for new database in the listener.ora and tnsnames.ora as necessary.


Sukhwinder Singh
Apps DBA

Clone an Oracle database using a cold backup..

This procedure will clone a database using a cold copy of the source database files. If a cold backup of the database is available, restore it to the new location and jump to step 2.

  • 1. Identify and copy the database files
    With the source database started, identify all of the database's files. The following query will display all datafiles, tempfiles and redo logs:
    set lines 100 pages 999
    col name format a50
    select name, bytes
    from (select name, bytes
    from v$datafile
    union all
    select name, bytes
    from v$tempfile
    union all
    select lf.member "name", l.bytes
    from v$logfile lf
    , v$log l
    where lf.group# = l.group#) used
    , (select sum(bytes) as poo
    from dba_free_space) free
    /
    Make sure that the clone database's file-system is large enough and has all necessary directories. If the source database has a complex file structure, you might want to consider modifying the above SQL to produce a file copy script (a sketch of this is given after this procedure).

    Stop the source database with:
    shutdown immediate
    Copy, scp or ftp the files from the source database/machine to the target. Do not copy the control files across. Make sure that the files have the correct permissions and ownership.

    Start the source database up again
    startup
  • 2. Produce a pfile for the new database
    This step assumes that you are using a spfile. If you are not, just copy the existing pfile.

    From sqlplus:
    create pfile='init<new database sid>.ora' from spfile;
    This will create a new pfile in the $ORACLE_HOME/dbs directory.

    Once created, the new pfile will need to be edited. If the cloned database is to have a new name, this will need to be changed, as will any paths. Review the contents of the file and make alterations as necessary. Also think about adjusting memory parameters. If you are cloning a production database onto a slower development machine you might want to consider reducing some values.

    Note. Pay particular attention to the control locations.

  • 3. Create the clone controlfile
    Create a control file for the new database. To do this, connect to the source database and request a dump of the current control file. From sqlplus:
    alter database backup controlfile to trace as '/home/oracle/cr_<new sid>.sql'
    /
    The file will require extensive editing before it can be used. Using your favourite editor make the following alterations:

    • Remove all lines from the top of the file up to but not including the second 'STARTUP NOMOUNT' line (it's roughly halfway down the file).
    • Remove any lines that start with --
    • Remove any lines that start with a #
    • Remove any blank lines in the 'CREATE CONTROLFILE' section.
    • Remove the line 'RECOVER DATABASE USING BACKUP CONTROLFILE'
    • Move to the top of the file to the 'CREATE CONTROLFILE' line. The word 'REUSE' needs to be changed to 'SET'. The database name needs setting to the new database name (if it is being changed). Decide whether the database will be put into archivelog mode or not.
    • If the file paths are being changed, alter the file to reflect the changes.

    Here is an example of how the file would look for a small database called dg9a which isn't in archivelog mode:
    STARTUP NOMOUNT
    CREATE CONTROLFILE SET DATABASE "DG9A" RESETLOGS FORCE LOGGING NOARCHIVELOG
    MAXLOGFILES 50
    MAXLOGMEMBERS 5
    MAXDATAFILES 100
    MAXINSTANCES 1
    MAXLOGHISTORY 453
    LOGFILE
    GROUP 1 '/u03/oradata/dg9a/redo01.log' SIZE 100M,
    GROUP 2 '/u03/oradata/dg9a/redo02.log' SIZE 100M,
    GROUP 3 '/u03/oradata/dg9a/redo03.log' SIZE 100M
    DATAFILE
    '/u03/oradata/dg9a/system01.dbf',
    '/u03/oradata/dg9a/undotbs01.dbf',
    '/u03/oradata/dg9a/cwmlite01.dbf',
    '/u03/oradata/dg9a/drsys01.dbf',
    '/u03/oradata/dg9a/example01.dbf',
    '/u03/oradata/dg9a/indx01.dbf',
    '/u03/oradata/dg9a/odm01.dbf',
    '/u03/oradata/dg9a/tools01.dbf',
    '/u03/oradata/dg9a/users01.dbf',
    '/u03/oradata/dg9a/xdb01.dbf',
    '/u03/oradata/dg9a/andy01.dbf',
    '/u03/oradata/dg9a/psstats01.dbf',
    '/u03/oradata/dg9a/planner01.dbf'
    CHARACTER SET WE8ISO8859P1
    ;

    ALTER DATABASE OPEN RESETLOGS;

    ALTER TABLESPACE TEMP ADD TEMPFILE '/u03/oradata/dg9a/temp01.dbf'
    SIZE 104857600 REUSE AUTOEXTEND OFF;
  • 4. Add a new entry to oratab and source the environment
    Edit the /etc/oratab (or /opt/oracle/oratab) and add an entry for the new database.
    Source the new environment with '. oraenv' and verify that it has worked by issuing the following command:
    echo $ORACLE_SID
    If this doesn't output the new database sid go back and investigate.

  • 5. Create the password file
    Use the following command to create a password file (add an appropriate password to the end of it):
    orapwd file=${ORACLE_HOME}/dbs/orapw${ORACLE_SID} password=<your password>
  • 6. Create the new control file(s)
    Ok, now for the exciting bit! It is time to create the new controlfiles and open the database:
    sqlplus "/ as sysdba"
    @/home/oracle/cr_<new database sid>
    It is quite common to run into problems at this stage. Here are a couple of common errors and solutions:
    ORA-01113: file 1 needs media recovery
    You probably forgot to stop the source database before copying the files. Go back to step 1 and recopy the files.
    ORA-01503: CREATE CONTROLFILE failed
    ORA-00200: controlfile could not be created
    ORA-00202: controlfile: '/u03/oradata/dg9a/control01.ctl'
    ORA-27038: skgfrcre: file exists
    Double check the pfile created in step 2. Make sure the control_files setting is pointing at the correct location. If the control_file setting is ok, make sure that the control files were not copied with the rest of the database files. If they were, delete or rename them.

  • 7. Perform a few checks
    If the last step went smoothly, the database should be open. It is advisable to perform a few checks at this point:

    • Check that the database has opened with:
      select status from v$instance;
      The status should be 'OPEN'
    • Make sure that the datafiles are all ok:
      select distinct status from v$datafile;
      It should return only ONLINE and SYSTEM.
    • Take a quick look at the alert log too.

  • 8. Set the database's global name
    The new database will still have the source database's global name. Run the following to reset it:
    alter database rename global_name to <new database sid>
    /
  • 9. Create a spfile
    From sqlplus:
    create spfile from pfile;
  • 10. Change the database ID
    If RMAN is going to be used to back-up the database, the database ID must be changed. If RMAN isn't going to be used, there is no harm in changing the ID anyway - and it's a good practice to do so.
    From sqlplus:
    shutdown immediate
    startup mount
    exit
    From unix:
    nid target=/
    NID will ask if you want to change the ID. Respond with 'Y'. Once it has finished, start the database up again in sqlplus:
    shutdown immediate
    startup mount
    alter database open resetlogs
    /
  • 11. Configure TNS
    Add entries for new database in the listener.ora and tnsnames.ora as necessary.

  • 12. Finished
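
As mentioned in step 1, on databases with many files it can be easier to generate the copy commands than to type them. Here is a minimal sketch, run against the source database, which assumes the source files live under /u01/oradata/scr9 and the clone will live under /u03/oradata/dg9a (adjust the replace() call for your own layout):

set pages 999 lines 200
set head off feed off
select 'cp ' || name || ' ' || replace(name, '/u01/oradata/scr9', '/u03/oradata/dg9a')
from (select name from v$datafile
      union all
      select name from v$tempfile
      union all
      select member from v$logfile)
/

Spool the output to a file, review it, and run it from the shell while the source database is shut down.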

Sukhwinder Singh
Apps DBA

Correcting invalid spfile parameters

Consider the following situation. An alteration is made to the spfile which results in the instance being unable to start. Because the instance will not start, the mistake can not be corrected:

SQL> show parameter sga_max_size
sga_max_size big integer 537989896

SQL> alter system set sga_max_size=99999999999 scope=spfile;
System altered.

SQL> shut immediate
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup
ORA-27102: out of memory

SQL> startup nomount
ORA-27102: out of memory

SQL> alter system set sga_max_size=537989896 scope=spfile;
alter system set sga_max_size=537989896 scope=spfile
*
ERROR at line 1:
ORA-01034: ORACLE not available

The usual way to fix this problem (apart from being more careful in the first place) is to:
  • create pfile from spfile
  • edit the pfile
  • startup nomount
  • create spfile from pfile
  • shutdown
  • startup
  • remove the pfile
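
For completeness, here is a minimal sketch of that usual route, assuming the mis-set parameter is sga_max_size as above and that a temporary pfile under /tmp is acceptable (if your spfile lives in a non-default location, name it explicitly in the create spfile command):

sqlplus / as sysdba

SQL> create pfile='/tmp/init_fix.ora' from spfile;
(edit /tmp/init_fix.ora and correct sga_max_size, then:)
SQL> startup nomount pfile='/tmp/init_fix.ora';
SQL> create spfile from pfile='/tmp/init_fix.ora';
SQL> shutdown immediate
SQL> startup
SQL> !rm /tmp/init_fix.ora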
There is another way however - and one that I prefer. It relies on the fact that a database can have a spfile and a pfile at the same time, and furthermore parameters specified in the pfile override those in the spfile! The spfile location must be specified in the pfile for this to work. Check out the following trace:

SQL> !
[oradev@testerp] $ vi $ORACLE_HOME/dbs/init${ORACLE_SID}.ora

spfile=/u02/oradata/scr9/spfilescr9.ora
sga_max_size=537989896


:wq

oracle@bloo$ exit

SQL> startup
ORACLE instance started.
Total System Global Area 554767132 bytes
Fixed Size 451356 bytes
Variable Size 402653184 bytes
Database Buffers 150994944 bytes
Redo Buffers 667648 bytes
Database mounted.
Database opened.

SQL> alter system set sga_max_size=537989896 scope=spfile;
System altered.

SQL> !rm $ORACLE_HOME/dbs/init${ORACLE_SID}.ora

SQL>
Sukhwinder Singh
Apps DBA

Clone an Oracle database using an Online / Hot backup..

This procedure will clone a database using a online copy of the source database files. Before beginning though, there are a few things that are worth noting about online/hot backups:

  • When a tablespace is put into backup mode, Oracle will write entire blocks to redo rather than the usual change vectors. For this reason, do not perform a hot backup during periods of heavy database activity - it could lead to a lot of archive logs being created.
  • This procedure will put all tablespaces into backup mode at the same time. If the source database is quite large and you think that it might take a long time to copy, consider copying the tablespaces one at a time, or in groups.
  • While the backup is in progress, it will not be possible to take the tablespaces offline normally or shut down the instance.
Ok, lets get started...
  • 1. Make a note of the current archive log change number
    Because the restored files will require recovery, some archive logs will be needed. This applies even if you are not intending to put the cloned database into archive log mode. Work out which will be the first required log by running the following query on the source database. Make a note of the change number that is returned:
    select max(first_change#) chng
    from v$archived_log
    /
  • 2. Prepare the begin/end backup scripts
    Two scripts are needed: begin_backup.sql and end_backup.sql. When executed, these scripts will either put the tablespaces into backup mode or take them out of it. (A sketch of how to generate them is included after this procedure.)
  • 3. Put the source database into backup mode
    Run the begin backup script created in the last step:
    From sqlplus: @begin_backup
    This will put all of the database's tablespaces into backup mode.

  • 4. Copy the files to the new location
    Copy, scp or ftp the files from the source database/machine to the target. Do not copy the control files across. Make sure that the files have the correct permissions and ownership.

  • 5. Take the source database out of backup mode
    Once the file copy has been completed, take the source database out of backup mode. Run the end backup script created in step 2. From sqlplus: @end_backup
  • 6. Copy archive logs
    It is only necessary to copy archive logs created during the time the source database was in backup mode. Begin by archiving the current redo:
    alter system archive log current;
    Then, identify which archive log files are required. When run, the following query will ask for a change number. This is the number noted in step 1.
    select name from v$archived_log
    where first_change# >= &change_no
    order by name
    /
    Create an archive directory in the clone database's file system and copy all of the identified logs into it.

  • 7. Produce a pfile for the new database
    This step assumes that you are using a spfile. If you are not, just copy the existing pfile.
    From sqlplus:
    create pfile='init<new database sid>.ora' from spfile;
    This will create a new pfile in the $ORACLE_HOME/dbs directory.

    Once created, the new pfile will need to be edited. If the cloned database is to have a new name, this will need to be changed, as will any paths. Review the contents of the file and make alterations as necessary. Also think about adjusting memory parameters. If you are cloning a production database onto a slower development machine you might want to consider reducing some values.

    Ensure that the archive log destination is pointing to the directory created in step 6.

  • 8. Create the clone controlfile
    Create a control file for the new database. To do this, connect to the source database and request a dump of the current control file. From sqlplus:
    alter database backup controlfile to trace as '/home/oracle/cr_<new sid>.sql'
    /
    The file will require extensive editing before it can be used. Using your favourite editor make the following alterations:

    • Remove all lines from the top of the file up to but not including the second 'STARTUP NOMOUNT' line (it's roughly halfway down the file).
    • Remove any lines that start with --
    • Remove any lines that start with a #
    • Remove any blank lines in the 'CREATE CONTROLFILE' section.
    • Remove the line 'RECOVER DATABASE USING BACKUP CONTROLFILE'
    • Remove the line 'ALTER DATABASE OPEN RESETLOGS;'
    • Make a copy of the 'ALTER TABLESPACE TEMP...' lines, and then remove them from the file. Make sure that you hang onto the command, it will be used later.
    • Move to the top of the file to the 'CREATE CONTROLFILE' line. The word 'REUSE' needs to be changed to 'SET'. The database name needs setting to the new database name (if it is being changed). Decide whether the database will be put into archivelog mode or not.
    • If the file paths are being changed, alter the file to reflect the changes.

    Here is an example of how the file would look for a small database called dg9a which isn't in archivelog mode:
    STARTUP NOMOUNT
    CREATE CONTROLFILE SET DATABASE "DG9A" RESETLOGS FORCE LOGGING NOARCHIVELOG
    MAXLOGFILES 50
    MAXLOGMEMBERS 5
    MAXDATAFILES 100
    MAXINSTANCES 1
    MAXLOGHISTORY 453
    LOGFILE
    GROUP 1 '/u03/oradata/dg9a/redo01.log' SIZE 100M,
    GROUP 2 '/u03/oradata/dg9a/redo02.log' SIZE 100M,
    GROUP 3 '/u03/oradata/dg9a/redo03.log' SIZE 100M
    DATAFILE
    '/u03/oradata/dg9a/system01.dbf',
    '/u03/oradata/dg9a/undotbs01.dbf',
    '/u03/oradata/dg9a/cwmlite01.dbf',
    '/u03/oradata/dg9a/drsys01.dbf',
    '/u03/oradata/dg9a/example01.dbf',
    '/u03/oradata/dg9a/indx01.dbf',
    '/u03/oradata/dg9a/odm01.dbf',
    '/u03/oradata/dg9a/tools01.dbf',
    '/u03/oradata/dg9a/users01.dbf',
    '/u03/oradata/dg9a/xdb01.dbf',
    '/u03/oradata/dg9a/andy01.dbf',
    '/u03/oradata/dg9a/psstats01.dbf',
    '/u03/oradata/dg9a/planner01.dbf'
    CHARACTER SET WE8ISO8859P1
    ;
  • 9. Add a new entry to oratab and source the environment
    Edit the /etc/oratab (or /opt/oracle/oratab) and add an entry for the new database.
    Source the new environment with '. oraenv' and verify that it has worked by issuing the following command:
    echo $ORACLE_SID
    If this doesn't output the new database sid go back and investigate.

  • 10. Create the password file
    Use the following command to create a password file (add an appropriate password to the end of it):
    orapwd file=${ORACLE_HOME}/dbs/orapw${ORACLE_SID} password=<your password>
  • 11. Create the new control file(s)
    Ok, now for the exciting bit! It is time to create the new controlfiles and open the database:
     
    sqlplus "/ as sysdba"
    @/home/oracle/cr_<new database sid>
    If all goes to plan you will see the instance start and then the message 'Control file created'.

  • 12. Recover and open the database
    The archive logs that were identified and copied in step 6 must now be applied to the database. Issue the following command from sqlplus:
    recover database using backup controlfile until cancel
    When prompted to 'Specify log' enter 'auto'. Oracle will then apply all the available logs, and then error with ORA-00308. This is normal, it simply means that all available logs have been applied. Open the database with reset logs:
    alter database open resetlogs;

  • 13. Create temp files
    Using the 'ALTER TABLESPACE TEMP...' command from step 8, create the temp files. Make sure the paths to the file(s) are correct, then run it from sqlplus.

  • 14. Perform a few checks
    If the last couple of steps went smoothly, the database should be open. It is advisable to perform a few checks at this point:

    • Check that the database has opened with:
      select status from v$instance;
      The status should be 'OPEN'

    • Make sure that the datafiles are all ok:
      select distinct status from v$datafile;
      It should return only ONLINE and SYSTEM.

    • Take a quick look at the alert log too.

  • 15. Set the database's global name
    The new database will still have the source database's global name. Run the following to reset it:
    alter database rename global_name to <new database sid>
    /
    Note. no quotes!

  • 16. Create a spfile
    From sqlplus:
    create spfile from pfile;
  • 17. Change the database ID
    If RMAN is going to be used to back-up the database, the database ID must be changed. If RMAN isn't going to be used, there is no harm in changing the ID anyway - and it's a good practice to do so.

    From sqlplus:
    shutdown immediate
    startup mount
    exit
    From unix:
    nid target=/
    NID will ask if you want to change the ID. Respond with 'Y'. Once it has finished, start the database up again in sqlplus:
    shutdown immediate
    startup mount
    alter database open resetlogs
    /
  • 18. Configure TNS
    Add entries for new database in the listener.ora and tnsnames.ora as necessary.
  • 19. Finished 
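
As referenced in step 2, the begin_backup.sql and end_backup.sql scripts can be generated straight from the data dictionary. A minimal sketch, assuming there are no read-only tablespaces (spool the output of each query into the respective script):

set pages 999 lines 100
set head off feed off

select 'alter tablespace ' || tablespace_name || ' begin backup;'
from dba_tablespaces
where contents != 'TEMPORARY'
/

select 'alter tablespace ' || tablespace_name || ' end backup;'
from dba_tablespaces
where contents != 'TEMPORARY'
/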

Sukhwinder Singh
Apps DBA

Cannot add data file - when attempting to re-add a missing tempfile to a temporary tablespace

SQL> select name from v$tempfile;

NAME
-----------------------------------
/u02/oradata/scr9/temp01.dbf

SQL> !ls //u02/oradata/scr9/temp01.dbf
/u02/oradata/scr9/temp01.dbf not found

SQL> alter tablespace TEMP
add tempfile '/u02/oradata/scr9/temp01.dbf' reuse;
2 alter tablespace TEMP
*
ERROR at line 1:
ORA-01537: cannot add data file '/u02/oradata/scr9/temp01.dbf' - file already
part of database.
Why is this happening?

This can happen if a step has been missed during a database cloning exercise.

How to fix it?

With a temporary tablespace it is possible to drop the missing tempfile and then add a new one. You can only drop a tempfile if it is not in use - but in this case the temp file doesn't actually exist, so it can't be in use. Use the following commands:
SQL> alter tablespace <TEMP_TS_NAME>
2 drop tempfile '<FILE_PATH_AND_NAME>';

SQL> alter tablespace <TEMP_TS_NAME>
2 add tempfile '<FILE_PATH_AND_NAME>' size <FILE_SIZE>;
For example:

SQL> alter tablespace temp
2 drop tempfile '/u02/oradata/scr9/temp01.dbf';

Tablespace altered.

SQL> alter tablespace TEMP
2 add tempfile '/u02/oradata/scr9/temp01.dbf' size 8192m;

Tablespace altered.

Sukhwinder Singh
Apps DBA

Maximum number of processes exceeded..

Basically, this means that you have too many database connections. Unfortunately, since the demise of svrmgrl and 'connect internal', it also means that you as a DBA cannot simply connect internal to do something about it!

You need to either ask a couple of users to log out, or kill some sessions at the OS level. Once you manage to connect, see if you can spot anyone making an abnormal number of connections. If you can't find anything, you will need to increase the init parameter 'processes'. Unfortunately, this will require a restart of the database to take effect:
 
**********------------------------------------ **********
**********------------------------------------ **********
If you are using spfile: -

SQL> show parameter spfile;
SQL> show parameter processes

NAME                                 TYPE        VALUE
------------------------------------ ----------- -------
processes                            integer     150

SQL> select count(*) from v$process;

COUNT(*)
----------
149

SQL> alter system set processes=300 scope=spfile;

System altered.

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup
ORACLE instance started.

SQL> show parameter processes

NAME                                 TYPE        VALUE
------------------------------------ ----------- ---------
processes                            integer     300 
 
 
**********------------------------------------ **********
**********------------------------------------ ********** 
If you use pfile then  create spfile from pfile;

Shutdown db
then startup

then run 
SQL> alter system set PROCESSES= xx scope = spfile;


**********------------------------------------ **********
**********------------------------------------ **********
If you want to use a pfile instead of an spfile, then go to $ORACLE_HOME/dbs/


cp initSID.ora initSID.ora_org
Now edit the initSID.ora file:
 
vi initSID.ora
Enter "i" (without quotes) to start editing
PROCESSES=XX
SESSIONS=2*XX
Press ESC
Then type ":w" (without quotes, to save the value in the initSID.ora file)
Then ":q!" (without quotes, to exit from initSID.ora)
You will need to restart the db after changing this parameter:
SHUTDOWN DB
Start DB 
Note. The maximum allowed processes is one less than the init parameter 'processes'

Note. When connecting via TNS, you may get this error
when the database is out of processes:

ORA-12516: TNS:listener could not find available handler with matching protocol stack
 
 
 =========
 
First decide whether your system was opened with an spfile or an init (pfile) file. You can find out by using the below query:

show parameter spfile;

If you use spfile then issue:

alter system set PROCESSES= xx scope = spfile;

If you use a pfile then

create spfile from pfile;

Shutdown db

then startup
then run 

alter system set PROCESSES= xx scope = spfile;

If you want to use a pfile instead of an spfile, then go to $ORACLE_HOME/dbs/

cp initSID.ora initSID.ora_org

vi initSID.ora
PROCESSES=xxx
You need to restart the database after changing this parameter.

=============================
 
I assume you are using pfile.

To check whether you are using a pfile or an spfile, connect as sysdba:

sqlplus / as sysdba

show parameter spfile;

If the value shown is empty, it means you are using a pfile.


Before making any changes to the init.ora file, always make a backup copy of it, for example init.ora.backup.

Once that is done, open your init.ora file; it will be named initSID.ora.

In that file there will be a parameter such as processes=150 (or some other value).

Change it there. Since you are using a pfile, you need to restart the database to bring the change into effect:

sqlplus / as sysdba

shutdown immediate <enter>

startup <enter>

The new processes value will then be in effect in the database.

If you are using an spfile, then use the following command:


alter system set processes=value scope=both;

To check the value of the processes parameter:
sqlplus "/ as sysdba"

show parameter processes;


========

To increase the number of processes:
Step 1
Create a pfile from the spfile: create pfile='/oracle/....' from spfile;
Step 2
Shut down the database.
Step 3
Amend that pfile and increase the number of processes.
Step 4
Start the database with the new pfile:
startup pfile='/oracle/....'
Step 5
Create the spfile from the pfile: create spfile from pfile='/oracle/....';
That way, the next time you start the database it will read the new spfile. A worked SQL*Plus sketch of these steps follows below.
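
Put together as a SQL*Plus session, the pfile route might look like the sketch below. The path and the value 300 are examples only; substitute your own:

SQL> create pfile='/u01/app/oracle/dbs/initORCL_new.ora' from spfile;
SQL> shutdown immediate
-- edit the new pfile and set processes=300 (and sessions accordingly)
SQL> startup pfile='/u01/app/oracle/dbs/initORCL_new.ora'
SQL> create spfile from pfile='/u01/app/oracle/dbs/initORCL_new.ora';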

 
Sukhwinder Singh
Apps DBA

Thursday, August 25, 2011

Find which media tapes have the latest backup

You can find which media tapes have the latest RMAN backup with the following query: 
SELECT RECID, STAMP, SET_STAMP, SET_COUNT, PIECE#, DEVICE_TYPE, HANDLE, MEDIA,
STATUS, DELETED, START_TIME, COMPLETION_TIME, BYTES
FROM GV$BACKUP_PIECE
WHERE HANDLE LIKE '%ORACLE%SID%'
AND TRUNC(START_TIME) > TRUNC(SYSDATE-6)
ORDER BY START_TIME DESC;
where SID is the database SID, for example A7.

To find the distinct tapes use: 


SELECT DISTINCT(MEDIA) FROM GV$BACKUP_PIECE
WHERE HANDLE LIKE 'ORACLE%A7%'
AND TRUNC(START_TIME) > TRUNC(SYSDATE-7);
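
The same information can also be pulled from RMAN itself. A minimal sketch (the seven-day window mirrors the query above); the detailed listing shows the piece handle, from which the tape can usually be identified:

RMAN> LIST BACKUP OF DATABASE COMPLETED AFTER 'SYSDATE-7';
RMAN> LIST BACKUP SUMMARY;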
Apps DBA Free Lance,
Sukhwinder Singh

Kill sessions in Unix - Linux while shutting down the database

When you shut down the database with the immediate option, you have to wait for all processes to terminate. Shutting down a database with 5000 sessions may take some time!

If the database SID is, for example, DWH01, then you can execute this command in the Unix shell to kill all the Oracle client sessions and speed up the procedure.

 
$ ps -ef|grep 'oracleDWH01 (LOCAL=NO)'|grep -v grep|awk '{print$2}'|xargs -i kill {}



OR



for i in `ps -ef|grep oracleDWH01|grep -v grep|awk '{print$2}'`

do

kill -9 $i

done
These commands do not touch the background processes such as SMON and PMON; they only kill the Oracle server processes of client connections (the first variant matches only LOCAL=NO connections).
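
If you would rather mark the sessions for kill from inside the database instead of killing OS processes, here is a sketch that only generates the kill commands for all user sessions; review the output before running it:

SELECT 'alter system kill session '''||sid||','||serial#||''' immediate;' AS cmd
FROM v$session
WHERE type = 'USER';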


Apps DBA Free Lance
Sukhwinder Singh

Backup time

You can find your backup times (start, end, and duration per day, for full and incremental backups) with the following query:
SELECT * FROM (SELECT END.dt, 'Incremental' BACKUP, STAR.TIME started, END.TIME END,
DECODE(TRUNC(END.TIME - star.TIME), 0, NULL, TRUNC(END.TIME - star.TIME) || ' Days' || ' + ') ||
TO_CHAR(TO_DATE(TRUNC(MOD(END.TIME - star.TIME,1) * 86400), 'SSSSS'), 'HH24:MI:SS') TIME
FROM
(SELECT TRUNC(start_time) dt, MAX(COMPLETION_TIME) TIME FROM V$BACKUP_SET WHERE BACKUP_TYPE='I'
GROUP BY TRUNC(start_time)) END,
(SELECT TRUNC(start_time) dt, MIN(START_TIME) TIME FROM V$BACKUP_SET WHERE BACKUP_TYPE='I'
GROUP BY TRUNC(start_time)) STAR
WHERE END.dt = STAR.dt
UNION ALL
SELECT END.dt, 'Full Database' BACKUP, STAR.TIME started, END.TIME END,
DECODE(TRUNC(END.TIME - star.TIME), 0, NULL, TRUNC(END.TIME - star.TIME) || ' Days' || ' + ') ||
TO_CHAR(TO_DATE(TRUNC(MOD(END.TIME - star.TIME,1) * 86400), 'SSSSS'), 'HH24:MI:SS') TIME
FROM
(SELECT TRUNC(start_time) dt, MAX(COMPLETION_TIME) TIME FROM V$BACKUP_SET WHERE BACKUP_TYPE='D'
GROUP BY TRUNC(start_time)) END,
(SELECT TRUNC(start_time) dt, MIN(START_TIME) TIME FROM V$BACKUP_SET WHERE BACKUP_TYPE='D'
GROUP BY TRUNC(start_time)) STAR
WHERE END.dt = STAR.dt)
ORDER BY dt ASC;
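
On 10g and later, V$RMAN_BACKUP_JOB_DETAILS gives much the same picture with far less SQL. A minimal alternative sketch:

SELECT start_time, end_time, input_type, status, elapsed_seconds,
ROUND(input_bytes/1024/1024/1024, 2) AS input_gb
FROM v$rman_backup_job_details
ORDER BY start_time DESC;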
Apps DBA Free Lance
Sukhwinder Singh

Backup calendar

If you have set up online RMAN backups for the archive logs and database files, you can see how the backups are going with the following script. For each day it shows, hour by hour, the number of archivelog backup sets (L), full datafile backup sets (D), and incremental backup sets (I) completed:
 
SELECT * FROM
(SELECT dates.dt, TO_CHAR(dates.dt, 'DAY') DAY,
DECODE(archives."00", NULL, NULL, archives."00" || 'L') || ' ' || DECODE(data_files."00", NULL, NULL, data_files."00" || 'D')
|| ' ' || DECODE(INCREMENTAL."00", NULL, NULL, INCREMENTAL."00" || 'I') "00",
DECODE(archives."01", NULL, NULL, archives."01" || 'L') || ' ' || DECODE(data_files."01", NULL, NULL, data_files."01" || 'D')
|| ' ' || DECODE(INCREMENTAL."01", NULL, NULL, INCREMENTAL."01" || 'I') "01",
DECODE(archives."02", NULL, NULL, archives."02" || 'L') || ' ' || DECODE(data_files."02", NULL, NULL, data_files."02" || 'D')
|| ' ' || DECODE(INCREMENTAL."02", NULL, NULL, INCREMENTAL."02" || 'I') "02",
DECODE(archives."03", NULL, NULL, archives."03" || 'L') || ' ' || DECODE(data_files."03", NULL, NULL, data_files."03" || 'D')
|| ' ' || DECODE(INCREMENTAL."03", NULL, NULL, INCREMENTAL."03" || 'I') "03",
DECODE(archives."04", NULL, NULL, archives."04" || 'L') || ' ' || DECODE(data_files."04", NULL, NULL, data_files."04" || 'D')
|| ' ' || DECODE(INCREMENTAL."04", NULL, NULL, INCREMENTAL."04" || 'I') "04",
DECODE(archives."05", NULL, NULL, archives."05" || 'L') || ' ' || DECODE(data_files."05", NULL, NULL, data_files."05" || 'D')
|| ' ' || DECODE(INCREMENTAL."05", NULL, NULL, INCREMENTAL."05" || 'I') "05",
DECODE(archives."06", NULL, NULL, archives."06" || 'L') || ' ' || DECODE(data_files."06", NULL, NULL, data_files."06" || 'D')
|| ' ' || DECODE(INCREMENTAL."06", NULL, NULL, INCREMENTAL."06" || 'I') "06",
DECODE(archives."07", NULL, NULL, archives."07" || 'L') || ' ' || DECODE(data_files."07", NULL, NULL, data_files."07" || 'D')
|| ' ' || DECODE(INCREMENTAL."07", NULL, NULL, INCREMENTAL."07" || 'I') "07",
DECODE(archives."08", NULL, NULL, archives."08" || 'L') || ' ' || DECODE(data_files."08", NULL, NULL, data_files."08" || 'D')
|| ' ' || DECODE(INCREMENTAL."08", NULL, NULL, INCREMENTAL."08" || 'I') "08",
DECODE(archives."09", NULL, NULL, archives."09" || 'L') || ' ' || DECODE(data_files."09", NULL, NULL, data_files."09" || 'D')
|| ' ' || DECODE(INCREMENTAL."09", NULL, NULL, INCREMENTAL."09" || 'I') "09",
DECODE(archives."10", NULL, NULL, archives."10" || 'L') || ' ' || DECODE(data_files."10", NULL, NULL, data_files."10" || 'D')
|| ' ' || DECODE(INCREMENTAL."10", NULL, NULL, INCREMENTAL."10" || 'I') "10",
DECODE(archives."11", NULL, NULL, archives."11" || 'L') || ' ' || DECODE(data_files."11", NULL, NULL, data_files."11" || 'D')
|| ' ' || DECODE(INCREMENTAL."11", NULL, NULL, INCREMENTAL."11" || 'I') "11",
DECODE(archives."12", NULL, NULL, archives."12" || 'L') || ' ' || DECODE(data_files."12", NULL, NULL, data_files."12" || 'D')
|| ' ' || DECODE(INCREMENTAL."12", NULL, NULL, INCREMENTAL."12" || 'I') "12",
DECODE(archives."13", NULL, NULL, archives."13" || 'L') || ' ' || DECODE(data_files."13", NULL, NULL, data_files."13" || 'D')
|| ' ' || DECODE(INCREMENTAL."13", NULL, NULL, INCREMENTAL."13" || 'I') "13",
DECODE(archives."14", NULL, NULL, archives."14" || 'L') || ' ' || DECODE(data_files."14", NULL, NULL, data_files."14" || 'D')
|| ' ' || DECODE(INCREMENTAL."14", NULL, NULL, INCREMENTAL."14" || 'I') "14",
DECODE(archives."15", NULL, NULL, archives."15" || 'L') || ' ' || DECODE(data_files."15", NULL, NULL, data_files."15" || 'D')
|| ' ' || DECODE(INCREMENTAL."15", NULL, NULL, INCREMENTAL."15" || 'I') "15",
DECODE(archives."16", NULL, NULL, archives."16" || 'L') || ' ' || DECODE(data_files."16", NULL, NULL, data_files."16" || 'D')
|| ' ' || DECODE(INCREMENTAL."16", NULL, NULL, INCREMENTAL."16" || 'I') "16",
DECODE(archives."17", NULL, NULL, archives."17" || 'L') || ' ' || DECODE(data_files."17", NULL, NULL, data_files."17" || 'D')
|| ' ' || DECODE(INCREMENTAL."17", NULL, NULL, INCREMENTAL."17" || 'I') "17",
DECODE(archives."18", NULL, NULL, archives."18" || 'L') || ' ' || DECODE(data_files."18", NULL, NULL, data_files."18" || 'D')
|| ' ' || DECODE(INCREMENTAL."18", NULL, NULL, INCREMENTAL."18" || 'I') "18",
DECODE(archives."19", NULL, NULL, archives."19" || 'L') || ' ' || DECODE(data_files."19", NULL, NULL, data_files."19" || 'D')
|| ' ' || DECODE(INCREMENTAL."19", NULL, NULL, INCREMENTAL."19" || 'I') "19",
DECODE(archives."20", NULL, NULL, archives."20" || 'L') || ' ' || DECODE(data_files."20", NULL, NULL, data_files."20" || 'D')
|| ' ' || DECODE(INCREMENTAL."20", NULL, NULL, INCREMENTAL."20" || 'I') "20",
DECODE(archives."21", NULL, NULL, archives."21" || 'L') || ' ' || DECODE(data_files."21", NULL, NULL, data_files."21" || 'D')
|| ' ' || DECODE(INCREMENTAL."21", NULL, NULL, INCREMENTAL."21" || 'I') "21",
DECODE(archives."22", NULL, NULL, archives."22" || 'L') || ' ' || DECODE(data_files."22", NULL, NULL, data_files."22" || 'D')
|| ' ' || DECODE(INCREMENTAL."22", NULL, NULL, INCREMENTAL."22" || 'I') "22",
DECODE(archives."23", NULL, NULL, archives."23" || 'L') || ' ' || DECODE(data_files."23", NULL, NULL, data_files."23" || 'D')
|| ' ' || DECODE(INCREMENTAL."23", NULL, NULL, INCREMENTAL."23" || 'I') "23"
FROM
(SELECT DISTINCT(TRUNC(COMPLETION_TIME)) dt FROM V$BACKUP_SET WHERE BACKUP_TYPE='L'
UNION
SELECT DISTINCT(TRUNC(COMPLETION_TIME)) dt FROM V$BACKUP_SET WHERE BACKUP_TYPE='D'
UNION
SELECT DISTINCT(TRUNC(COMPLETION_TIME)) dt FROM V$BACKUP_SET WHERE BACKUP_TYPE='I'
ORDER BY dt DESC) dates,
(SELECT * FROM (SELECT TO_DATE(date_time, 'DD/MM/YYYY') DT, TO_CHAR(TO_DATE(date_time, 'DD/MM/YYYY'), 'DAY') DAY,
"00", "01", "02", "03", "04", "05", "06", "07", "08", "09",
"10", "11", "12", "13", "14", "15", "16", "17", "18", "19", "20", "21", "22", "23"
FROM
(
SELECT DATE_TIME,
SUM(DECODE(HOUR,'00',1,NULL)) "00", SUM(DECODE(HOUR,'01',1,NULL)) "01", SUM(DECODE(HOUR,'02',1,NULL)) "02",
SUM(DECODE(HOUR,'03',1,NULL)) "03", SUM(DECODE(HOUR,'04',1,NULL)) "04", SUM(DECODE(HOUR,'05',1,NULL)) "05",
SUM(DECODE(HOUR,'06',1,NULL)) "06", SUM(DECODE(HOUR,'07',1,NULL)) "07", SUM(DECODE(HOUR,'08',1,NULL)) "08",
SUM(DECODE(HOUR,'09',1,NULL)) "09", SUM(DECODE(HOUR,'10',1,NULL)) "10", SUM(DECODE(HOUR,'11',1,NULL)) "11",
SUM(DECODE(HOUR,'12',1,NULL)) "12", SUM(DECODE(HOUR,'13',1,NULL)) "13", SUM(DECODE(HOUR,'14',1,NULL)) "14",
SUM(DECODE(HOUR,'15',1,NULL)) "15", SUM(DECODE(HOUR,'16',1,NULL)) "16", SUM(DECODE(HOUR,'17',1,NULL)) "17",
SUM(DECODE(HOUR,'18',1,NULL)) "18", SUM(DECODE(HOUR,'19',1,NULL)) "19", SUM(DECODE(HOUR,'20',1,NULL)) "20",
SUM(DECODE(HOUR,'21',1,NULL)) "21", SUM(DECODE(HOUR,'22',1,NULL)) "22", SUM(DECODE(HOUR,'23',1,NULL)) "23", COUNT(*) TOTAL
FROM (SELECT TO_CHAR(COMPLETION_TIME, 'DD/MM/YYYY') DATE_TIME,
SUBSTR(TO_CHAR(COMPLETION_TIME, 'DD/MM/YYYY HH24:MI:SS'),12,2) HOUR FROM V$BACKUP_SET WHERE BACKUP_TYPE='D')
GROUP BY date_time
) ORDER BY dt DESC)) data_files,
(SELECT TO_DATE(date_time, 'DD/MM/YYYY') DT, TO_CHAR(TO_DATE(date_time, 'DD/MM/YYYY'), 'DAY') DAY,
"00", "01", "02", "03", "04", "05", "06", "07", "08", "09",
"10", "11", "12", "13", "14", "15", "16", "17", "18", "19", "20", "21", "22", "23"
FROM
(
SELECT DATE_TIME,
SUM(DECODE(HOUR,'00',1,NULL)) "00", SUM(DECODE(HOUR,'01',1,NULL)) "01", SUM(DECODE(HOUR,'02',1,NULL)) "02",
SUM(DECODE(HOUR,'03',1,NULL)) "03", SUM(DECODE(HOUR,'04',1,NULL)) "04", SUM(DECODE(HOUR,'05',1,NULL)) "05",
SUM(DECODE(HOUR,'06',1,NULL)) "06", SUM(DECODE(HOUR,'07',1,NULL)) "07", SUM(DECODE(HOUR,'08',1,NULL)) "08",
SUM(DECODE(HOUR,'09',1,NULL)) "09", SUM(DECODE(HOUR,'10',1,NULL)) "10", SUM(DECODE(HOUR,'11',1,NULL)) "11",
SUM(DECODE(HOUR,'12',1,NULL)) "12", SUM(DECODE(HOUR,'13',1,NULL)) "13", SUM(DECODE(HOUR,'14',1,NULL)) "14",
SUM(DECODE(HOUR,'15',1,NULL)) "15", SUM(DECODE(HOUR,'16',1,NULL)) "16", SUM(DECODE(HOUR,'17',1,NULL)) "17",
SUM(DECODE(HOUR,'18',1,NULL)) "18", SUM(DECODE(HOUR,'19',1,NULL)) "19", SUM(DECODE(HOUR,'20',1,NULL)) "20",
SUM(DECODE(HOUR,'21',1,NULL)) "21", SUM(DECODE(HOUR,'22',1,NULL)) "22", SUM(DECODE(HOUR,'23',1,NULL)) "23", COUNT(*) TOTAL
FROM (SELECT TO_CHAR(COMPLETION_TIME, 'DD/MM/YYYY') DATE_TIME,
SUBSTR(TO_CHAR(COMPLETION_TIME, 'DD/MM/YYYY HH24:MI:SS'),12,2) HOUR FROM V$BACKUP_SET WHERE BACKUP_TYPE='L')
GROUP BY date_time
) ORDER BY dt DESC) archives,
(SELECT TO_DATE(date_time, 'DD/MM/YYYY') DT, TO_CHAR(TO_DATE(date_time, 'DD/MM/YYYY'), 'DAY') DAY,
"00", "01", "02", "03", "04", "05", "06", "07", "08", "09",
"10", "11", "12", "13", "14", "15", "16", "17", "18", "19", "20", "21", "22", "23"
FROM
(
SELECT DATE_TIME,
SUM(DECODE(HOUR,'00',1,NULL)) "00", SUM(DECODE(HOUR,'01',1,NULL)) "01", SUM(DECODE(HOUR,'02',1,NULL)) "02",
SUM(DECODE(HOUR,'03',1,NULL)) "03", SUM(DECODE(HOUR,'04',1,NULL)) "04", SUM(DECODE(HOUR,'05',1,NULL)) "05",
SUM(DECODE(HOUR,'06',1,NULL)) "06", SUM(DECODE(HOUR,'07',1,NULL)) "07", SUM(DECODE(HOUR,'08',1,NULL)) "08",
SUM(DECODE(HOUR,'09',1,NULL)) "09", SUM(DECODE(HOUR,'10',1,NULL)) "10", SUM(DECODE(HOUR,'11',1,NULL)) "11",
SUM(DECODE(HOUR,'12',1,NULL)) "12", SUM(DECODE(HOUR,'13',1,NULL)) "13", SUM(DECODE(HOUR,'14',1,NULL)) "14",
SUM(DECODE(HOUR,'15',1,NULL)) "15", SUM(DECODE(HOUR,'16',1,NULL)) "16", SUM(DECODE(HOUR,'17',1,NULL)) "17",
SUM(DECODE(HOUR,'18',1,NULL)) "18", SUM(DECODE(HOUR,'19',1,NULL)) "19", SUM(DECODE(HOUR,'20',1,NULL)) "20",
SUM(DECODE(HOUR,'21',1,NULL)) "21", SUM(DECODE(HOUR,'22',1,NULL)) "22", SUM(DECODE(HOUR,'23',1,NULL)) "23", COUNT(*) TOTAL
FROM (SELECT TO_CHAR(COMPLETION_TIME, 'DD/MM/YYYY') DATE_TIME,
SUBSTR(TO_CHAR(COMPLETION_TIME, 'DD/MM/YYYY HH24:MI:SS'),12,2) HOUR FROM V$BACKUP_SET WHERE BACKUP_TYPE='I')
GROUP BY date_time
) ORDER BY dt DESC) INCREMENTAL
WHERE dates.dt = data_files.dt(+)
AND dates.dt = archives.dt(+)
AND dates.dt = INCREMENTAL.dt(+)
)
ORDER BY dt DESC;
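
The output is very wide (one column per hour), so in SQL*Plus it helps to widen the line before running it. A minimal sketch of the formatting commands:

SET LINESIZE 400
SET PAGESIZE 100
COLUMN DAY FORMAT A10
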
Apps DBA Free Lance
Sukhwinder Singh