Oracle DBA, How To, Error, Cause and Action

Oracle Database Health Check

1. Oracle Database Parameters

Document the Oracle database parameters (pfile).
If your database uses an spfile, create a pfile with the following command.

   CREATE PFILE='/tmp/database_parameter.pfile' FROM SPFILE;

Make sure the parameters have not changed from the initial baseline.
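
As a quick way to compare against that baseline, the following query (a minimal sketch; check its output against your documented pfile) lists every parameter that is not at its default value:

SQL> select name, value
  2  from v$parameter
  3  where isdefault = 'FALSE'
  4  order by name;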

2. Multiplex Control File

Control files must be multiplexed (a minimum of two files) and stored in different locations:

SQL> select status, name from v$controlfile;


STATUS  NAME
------- ---------------------------------
        /u01/oradata/oracle/control01.ctl
        /u02/oradata/oracle/control02.ctl
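
If the copies all end up in the same location, the control_files parameter can be adjusted (a hedged sketch; the paths are illustrative and the change only takes effect after a restart):

SQL> alter system set control_files=
  2  '/u01/oradata/oracle/control01.ctl',
  3  '/u02/oradata/oracle/control02.ctl'
  4  scope=spfile;

Shut the database down, copy the control file to the new location at the OS level, and start the instance again.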

3. Redo log files with multiple members

Redo log files should have at least two members per group, located on different mount points.
The redo log files can be given the extension .rdo, so that someone without database knowledge does not mistake them for ordinary log files and remove them.

SQL> select * from v$logfile;


    GROUP# STATUS  TYPE    MEMBER
---------- ------- ------- -----------------------------------
         1         ONLINE  /u01/oradata/oracle/redo01_A.rdo
         1         ONLINE  /u02/oradata/oracle/redo01_B.rdo
         2         ONLINE  /u01/oradata/oracle/redo02_A.rdo
         2         ONLINE  /u02/oradata/oracle/redo02_B.rdo
         3         ONLINE  /u01/oradata/oracle/redo03_A.rdo
         3         ONLINE  /u02/oradata/oracle/redo03_B.rdo
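
If a group has only one member, an extra member can be added while the database is open (a minimal sketch; the file name and group number are illustrative):

SQL> alter database add logfile member '/u02/oradata/oracle/redo01_B.rdo' to group 1;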


4. Archive Log mode

Production databases must run in archive log mode.

SQL> archive log list


Database log mode              No Archive Mode --OR-- Archive Mode
Automatic archival             Disabled --OR-- Enabled
Archive destination            <archive path> --OR-- USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     <seq. no>
Current log sequence           <seq. no>
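
If the database turns out to be in no archive log mode, it can be switched as follows (a minimal sketch; the database must be cleanly shut down and mounted first):

SQL> shutdown immediate
SQL> startup mount
SQL> alter database archivelog;
SQL> alter database open;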


5a. Autoextensible datafiles may cause errors

Monitor the growth of datafiles to ensure they do not grow beyond the OS limits.
A standard Oracle datafile can contain at most 4194303 Oracle data blocks.
This implies that the maximum size depends on the Oracle block size used; with an 8 KB block size, for example, the maximum is 4194303 * 8192 bytes, roughly 32 GB.
Starting from Oracle 10g there is a new functionality called BIGFILE, which allows bigger files to be created. Also consider that every operating system has its limits, so make sure that a datafile cannot autoextend past the operating system limit.

SQL> select tablespace_name, file_name
  2  from dba_data_files
  3  where autoextensible = 'YES';
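
To see how much room an autoextensible datafile still has, compare its current size with its configured maximum (a minimal sketch based on dba_data_files):

SQL> select file_name, bytes/1024/1024 current_mb, maxbytes/1024/1024 max_mb
  2  from dba_data_files
  3  where autoextensible = 'YES';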


5b. Datafile location

Make sure the datafiles are in the correct storage locations.

SQL> select * from v$dbfile;

     FILE# NAME
---------- --------------------------------------------------
         1 D:\DATABASE\SYS1D806.DBF
         2 D:\DATABASE\D806\RBS1D806.DBF
         3 D:\DATABASE\D806\TMP1D806.DBF
         5 D:\DATABASE\D806\USR1D806.DBF
         6 D:\USR2D806.DBF
         7 F:\ORACLE\USR3D806.DBF
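
A datafile that sits in the wrong location can be relocated while its tablespace is offline (a hedged sketch; the tablespace and file names are illustrative, and on 12c and later ALTER DATABASE MOVE DATAFILE can do this online):

SQL> alter tablespace users offline;

Copy the file to its new location at the OS level, then:

SQL> alter tablespace users rename datafile 'D:\USR2D806.DBF' to 'D:\DATABASE\D806\USR2D806.DBF';
SQL> alter tablespace users online;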



6.1 SYSTEM Tablespace Storing Non-System Objects

User objects should not be created in the SYSTEM tablespace. Doing so can lead to unnecessary fragmentation and can prevent system tables from growing. The following query returns a list of objects that are created in the SYSTEM tablespace but not owned by SYS or SYSTEM.

SQL> select owner, segment_name, segment_type
from dba_segments
where tablespace_name = 'SYSTEM'
and owner not in ('SYS','SYSTEM');
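
Objects returned by this query can be moved to an application tablespace (a hedged sketch; the owner, object, and tablespace names are illustrative). Note that moving a table marks its indexes UNUSABLE, so they have to be rebuilt afterwards:

SQL> alter table scott.emp move tablespace users;
SQL> alter index scott.pk_emp rebuild;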



6.2 SYSAUX Tablespace (10g Release and above)

The SYSAUX tablespace was automatically installed as an auxiliary tablespace to the SYSTEM tablespace when you created or upgraded the database. Some database components that formerly created and used separate tablespaces now occupy the SYSAUX tablespace.

If the SYSAUX tablespace becomes unavailable, core database functionality remains operational, but the database features that use the SYSAUX tablespace could fail or function with limited capability.

The amount of data stored in this tablespace can be significant and may grow to unmanageable sizes over time if it is not configured properly. A few components need special attention.

To check which components are occupying space:



SQL> select space_usage_kbytes usage, occupant_name, occupant_desc from v$sysaux_occupants;


     USAGE OCCUPANT_NAME             OCCUPANT_DESC
---------- ------------------------- -------------------------------------------------------
      7744 LOGMNR                    LogMiner
      1024 LOGSTDBY                  Logical Standby
      3328 SMON_SCN_TIME             Transaction Layer - SCN to TIME mapping
       384 PL/SCOPE                  PL/SQL Identifier Collection
      1024 STREAMS                   Oracle Streams
    213312 XDB                       XDB
     44288 AO                        Analytical Workspace Object Table
     44288 XSOQHIST                  OLAP API History Tables
     15936 XSAMD                     OLAP Catalog
     76800 SM/AWR                    Server Manageability - Automatic Workload Repository
     11776 SM/ADVISOR                Server Manageability - Advisor Framework


     USAGE OCCUPANT_NAME             OCCUPANT_DESC
---------- ------------------------- -------------------------------------------------------
     16640 SM/OPTSTAT                Server Manageability - Optimizer Statistics History
      5952 SM/OTHER                  Server Manageability - Other Components
         0 STATSPACK                 Statspack Repository
     47488 SDO                       Oracle Spatial
      7296 WM                        Workspace Manager
     11200 ORDIM                     Oracle interMedia ORDSYS Components
         0 ORDIM/PLUGINS             Oracle interMedia ORDPLUGINS Components
         0 ORDIM/SQLMM               Oracle interMedia SI_INFORMTN_SCHEMA Components
    118272 EM                        Enterprise Manager Repository
      5568 TEXT                      Oracle Text
      7616 ULTRASEARCH               Oracle Ultra Search


     USAGE OCCUPANT_NAME             OCCUPANT_DESC
---------- ------------------------- -------------------------------------------------------
     12288 ULTRASEARCH_DEMO_USER     Oracle Ultra Search Demo User
      3968 EXPRESSION_FILTER         Expression Filter System
      1600 EM_MONITORING_USER        Enterprise Manager Monitoring User
       256 TSM                       Oracle Transparent Session Migration User
      1728 SQL_MANAGEMENT_BASE       SQL Management Base Schema
       320 AUTO_TASK                 Automated Maintenance Tasks
      1920 JOB_SCHEDULER             Unified Job Scheduler


29 rows selected.
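
Two occupants that commonly grow large are SM/AWR and SM/OPTSTAT. Their footprint can be limited by lowering the retention settings (a hedged sketch; the retention values are examples, not recommendations). The AWR retention is specified in minutes and the statistics history retention in days, so the calls below keep roughly 15 days of each:

SQL> exec dbms_workload_repository.modify_snapshot_settings(retention => 15*24*60);
SQL> exec dbms_stats.alter_stats_history_retention(15);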



6.3 Locally vs Dictionary Managed Tablespaces

Locally Managed Tablespaces have been available since Oracle 8i; however, they became the default starting from Oracle 9i. Locally Managed Tablespaces, also referred to as LMT, have several advantages over dictionary-managed tablespaces.

To verify which tablespaces are locally managed and which are dictionary managed, you can run the following query:

SQL> select tablespace_name, extent_management
from dba_tablespaces;
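
A dictionary-managed tablespace can be converted to local extent management in place (a hedged sketch; USERS is an illustrative tablespace name, and additional restrictions apply to the SYSTEM tablespace):

SQL> exec dbms_space_admin.tablespace_migrate_to_local('USERS');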



6.4 Temporary Tablespace

Locally managed temporary tablespaces use tempfiles, whereas dictionary-managed setups use a tablespace of type TEMPORARY. If you are running an older version (pre Oracle 9i), it is important to check the type of tablespace used to store the temporary segments. By default, all tablespaces are created as PERMANENT, so make sure that the tablespace dedicated to temporary segments is of type TEMPORARY.

SQL> select tablespace_name, contents
  2  from dba_tablespaces;


TABLESPACE_NAME                CONTENTS
------------------------------ ---------
SYSTEM                         PERMANENT
SYSAUX                         PERMANENT
UNDOTBS1                       UNDO
TEMP                           TEMPORARY
USERS                          PERMANENT
DBSOURCE_1242130650845718      PERMANENT
RMAN_CATALOG                   PERMANENT


7 rows selected.


Make sure that the users on the database are assigned a tablespace of the type temporary. The following query lists all the users that have a permanent tablespace specified as their default temporary tablespace.


SQL> select u.username, t.tablespace_name
  2  from dba_users u, dba_tablespaces t
  3  where u.temporary_tablespace = t.tablespace_name
  4  and t.contents <> 'TEMPORARY';


no rows selected


Note: Users SYS and SYSTEM will show the SYSTEM tablespace as their default temporary tablespace. This value can be altered as well, to prevent fragmentation in the SYSTEM tablespace.


SQL> alter user SYSTEM temporary tablespace TEMP;


User altered.
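
On Oracle 9i and later you can also define a database-wide default temporary tablespace, so that newly created users no longer fall back to SYSTEM (a minimal sketch; TEMP is the temporary tablespace shown above):

SQL> alter database default temporary tablespace TEMP;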

The space allocated in the temporary tablespace is reused. This is done for performance reasons, to avoid the bottleneck of constantly allocating and de-allocating extents and segments. Therefore, when looking at the free space in the temporary tablespace, it may appear full all the time. The following queries can be used to list more meaningful information about temporary segment usage:

This will give the size of the temporary tablespace:


SQL> select tablespace_name, sum(bytes)/1024/1024 mb
  2  from dba_temp_files
  3  group by tablespace_name;


TABLESPACE_NAME                        MB
------------------------------ ----------
TEMP                                   39


This will give the "high water mark" of that temporary tablespace (= max used at one time):


SQL> select tablespace_name, sum(bytes_cached)/1024/1024 mb
  2  from v$temp_extent_pool
  3  group by tablespace_name;


TABLESPACE_NAME                        MB
------------------------------ ----------
TEMP                                   38


This will give current usage:


SQL> select ss.tablespace_name,
  2  sum((ss.used_blocks*ts.blocksize))/1024/1024 mb
  3  from gv$sort_segment ss, sys.ts$ ts
  4  where ss.tablespace_name = ts.name
  5  group by ss.tablespace_name;


TABLESPACE_NAME                         MB
------------------------------- ----------
TEMP                                     0


6.5 Tablespace Fragmentation

Heavily fragmented tablespaces can have an impact on performance, especially when a lot of full table scans are occurring on the system. Another disadvantage of fragmentation is that you can get out-of-space errors while the total sum of all free space is much more than was requested.

The only way to resolve fragmentation is to recreate the object. As of Oracle 8i you can use the 'alter table .. move' command; prior to Oracle 8i you could use export/import.

If you need to defragment your SYSTEM tablespace, you must rebuild the whole database, since it is NOT possible to drop the SYSTEM tablespace.
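
A single fragmented object can be recreated in place with the move command (a hedged sketch; the table and index names are illustrative, and the move marks the table's indexes UNUSABLE, so they must be rebuilt):

SQL> alter table scott.mytable move;
SQL> alter index scott.mytable_idx rebuild;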


7. Objects

7.1 Number of Extents

While the performance hit on a single over-extended object is not significant, the aggregate effect of many over-extended objects does impact performance. The following query lists all objects that have allocated more extents than a specified minimum. Replace the <--minext--> value with an actual number; in general, objects allocating more than 100 to 200 extents can be recreated with larger extent sizes:



SQL> select owner, segment_type, segment_name, tablespace_name,
  2  count(blocks), SUM(bytes/1024) "BYTES K", SUM(blocks)
  3  from dba_extents
  4  where owner NOT IN ('SYS','SYSTEM')
  5  group by owner, segment_type, segment_name, tablespace_name
  6  having count(*) > 100 -- minext to configure here
  7  order by segment_type, segment_name;


no rows selected


7.2 Next extent

It is important that segments can grow and therefore allocate their next extent when needed. If there is not enough free space in the tablespace, the next extent cannot be allocated and the object will fail to grow. The following query returns all the segments that are unable to allocate their next extent:


SQL> select s.owner, s.segment_name, s.segment_type,
  2  s.tablespace_name, s.next_extent
  3  from dba_segments s
  4  where s.next_extent > (select MAX(f.bytes)
  5  from dba_free_space f
  6  where f.tablespace_name = s.tablespace_name);


no rows selected

Note that if there is a lot of fragmentation in the tablespace, this query may return objects that are still able to grow. The query is based on the largest free chunk available in the tablespace; if there are many 'small' free chunks next to each other, Oracle will coalesce them to serve the extent allocation.

Therefore it can be useful to adapt the tablespace fragmentation detection script to compare the next extent of each object with the 'contiguous' bytes (table space_temp) in the tablespace.


7.3 Indexes

The need to rebuild an index is very rare; often coalescing the index is a better option.

There have been many discussions about whether rebuilding indexes is useful or not. Generally speaking, the need to rebuild b-tree indexes is very rare, basically because a b-tree index is largely self-managed or self-balanced.

The most common justifications given for rebuilding an index are:
- index becomes fragmented
- index grows and grows - deleted space is not re-used
- index clustering factor becomes out of sync

In fact, most indexes remain both balanced and fragmentation-free, because free leaf entries are reused. Inserts, updates, and deletes result in free slots being scattered around the index blocks, but these will typically be refilled.
The clustering factor reflects how well the table data is sorted with respect to the given index key. Rebuilding an index has no influence on the clustering factor; improving it requires a table reorganization instead.

Secondly, the impact of rebuilding an index can be quite significant; please read the following comments thoroughly:

1. Most scripts around depend on the index_stats dynamic table. This is populated by the command:

SQL> analyze index ... validate structure;

While this is a valid method to inspect the index, it takes an exclusive table lock while analyzing the index. Especially for large indexes this can be very dramatic, as DML operations on the table are not permitted during that time. It can be run online, without the locking considerations, but it may then consume additional time.


2. Redo activity and general performance may increase as a direct result of rebuilding an index.

Inserts, updates, and deletes cause the index to evolve over time as the index splits and grows. When the index is rebuilt it becomes more tightly packed; however, as DML operations continue on the table, the index splits have to be redone until the index reaches its equilibrium. As a result, redo activity increases and the index splits are now more likely to impact performance directly, as more I/O and CPU are consumed to serve the index restructuring. After a certain period of time the index may again show 'issues' and be flagged for a rebuild, causing the vicious cycle to continue. Therefore it is often better to leave the index in its natural equilibrium, or at least avoid rebuilding indexes on a regular basis.


3. An index coalesce is often preferred over an index rebuild. It has the following advantages:
- it does not require approximately two times the disk storage
- it is always performed online
- it does not restructure the index, but combines index leaf blocks as much as possible, avoiding the system overhead explained in point 2.

Note: To relocate an index to another tablespace, for example, a rebuild is required.
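
Both operations are single statements (a minimal sketch; the index name and target tablespace are illustrative):

SQL> alter index scott.emp_idx coalesce;
SQL> alter index scott.emp_idx rebuild tablespace indexes2;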

Due to the reasons listed above, it is strongly advised not to rebuild indexes on a regular basis but instead use proper diagnostics.

8. AUTO vs MANUAL undo

Starting from Oracle 9i we introduced a new way of managing the before-images. Previously this was achieved through rollback segments, also referred to as manual undo. Automatic undo is used when the UNDO_MANAGEMENT parameter is set to AUTO; when it is not set, or set to MANUAL, the 'old' rollback segment mechanism is used. Although both mechanisms are still available in current releases, automatic undo is preferred.

8.1 AUTO UNDO

There is little to no configuration involved in AUM (Automatic Undo Management). You basically define the amount of time the before-image needs to be kept available. This is controlled through the UNDO_RETENTION parameter, defined in seconds, so a value of 900 indicates 15 minutes.

It is important to realize that this value is not honored when the undo tablespace is under space pressure.

Sizing an UNDO tablespace requires three pieces of data:

(UR)  UNDO_RETENTION in seconds
(UPS) number of undo data blocks generated per second
(DBS) database block size in bytes (db_block_size)

The undo space needed is calculated as:

UndoSpace = UR * (UPS * DBS)
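
As a quick sanity check of the formula (illustrative numbers, not measurements): with UNDO_RETENTION = 900 seconds, a peak of 100 undo blocks per second, and an 8192-byte block size, the undo space needed is 900 * 100 * 8192 bytes, or roughly 700 MB.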

Two of the pieces of information can be obtained from the instance configuration: UNDO_RETENTION and DB_BLOCK_SIZE. The third piece of the formula requires a query being run against the database. The maximum number of undo blocks generated per second can be acquired from V$UNDOSTAT.

The following query calculates the peak undo blocks generated per second:


SQL> SELECT undoblks/((end_time-begin_time)*86400) "Peak Undo Block Generation"
  2  FROM v$undostat WHERE undoblks=(SELECT MAX(undoblks) FROM v$undostat);


Peak Undo Block Generation
--------------------------
                       .11
                       .11


Columns END_TIME and BEGIN_TIME are DATE data types. When DATE values are subtracted, the result is the number of days between them. To convert days to seconds, multiply by 86400, the number of seconds in a day (24 hours * 60 minutes * 60 seconds).

The following query calculates the number of bytes needed to handle a peak undo activity:


SQL> SELECT (UR * (UPS * DBS)) AS "Bytes"
  2       FROM (SELECT value AS UR FROM v$parameter WHERE name = 'undo_retention'),
  3            (SELECT undoblks/((end_time-begin_time)*86400) AS UPS
  4             FROM v$undostat
  5             WHERE undoblks = (SELECT MAX(undoblks) FROM v$undostat)),
  6            (SELECT block_size AS DBS
  7             FROM dba_tablespaces
  8             WHERE tablespace_name = (SELECT UPPER(value) FROM v$parameter WHERE name = 'undo_tablespace'));


     Bytes
----------
    811008
    811008

Starting from Oracle 10g, you may choose to use the RETENTION GUARANTEE option to make sure the undo information does not get overwritten before the defined undo_retention time has passed.
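
The guarantee is set on the undo tablespace itself (a minimal sketch; UNDOTBS1 is an illustrative tablespace name):

SQL> alter tablespace undotbs1 retention guarantee;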


8.2 MANUAL UNDO

Damaged rollback segments can prevent the instance from opening the database, and corrective action can only be taken if the names of the rollback segments are known. Therefore, specify all rollback segments in the 'rollback_segments' parameter in the init.ora.

Rollback segments that are too small, or too few of them, can have a serious impact on the behavior of your database, so several issues must be taken into account. The following query shows whether there are not enough rollback segments online or whether the rollback segments are too small.


SQL> select d.segment_name, d.tablespace_name, s.waits, s.shrinks,
  2  s.wraps, s.status
  3  from v$rollstat s, dba_rollback_segs d
  4  where s.usn = d.segment_id
  5  order by 1;


SEGMENT_NAME            TABLESPACE_NAME      WAITS    SHRINKS      WRAPS STATUS
----------------------- --------------- ---------- ---------- ---------- --------
SYSTEM                  SYSTEM                   0          0          0 ONLINE
_SYSSMU10_1285749357$   UNDOTBS1                 0          0          0 ONLINE
_SYSSMU1_1285749357$    UNDOTBS1                 0          0          0 ONLINE
_SYSSMU2_1285749357$    UNDOTBS1                 0          0          0 ONLINE
_SYSSMU3_1285749357$    UNDOTBS1                 0          0          0 ONLINE
_SYSSMU4_1285749357$    UNDOTBS1                 0          0          3 ONLINE
_SYSSMU5_1285749357$    UNDOTBS1                 0          0          0 ONLINE
_SYSSMU6_1285749357$    UNDOTBS1                 0          0          0 ONLINE
_SYSSMU7_1285749357$    UNDOTBS1                 0          0          0 ONLINE
_SYSSMU8_1285749357$    UNDOTBS1                 0          0          1 ONLINE
_SYSSMU9_1285749357$    UNDOTBS1                 0          0          0 ONLINE


11 rows selected.


The WAITS column indicates which rollback segment headers had waits on them. Typically you would want to reduce such contention by adding rollback segments.

If SHRINKS is non-zero, then the OPTIMAL parameter is set for that particular rollback segment, or a DBA explicitly issued a shrink on the rollback segment.
The number of shrinks indicates how many times a rollback segment shrank because a transaction extended it beyond the OPTIMAL size. If this value is too high, the OPTIMAL size should be increased, as well as the overall size of the rollback segment (either the value of minextents or the extent size itself can be increased; this depends mostly on the indications of the WRAPS column).

The WRAPS column indicates the number of times the rollback segment wrapped to another extent to serve a transaction. If this number is significant, you need to increase the extent size of the rollback segment.


9. Memory Management

This chapter is very version driven: depending on which version you are running, the options available will differ. Over time, Oracle has invested a great deal of time and effort in managing memory more efficiently and more transparently for the end user. Therefore it is advisable to use the automation features as much as possible.

9.1 Pre Oracle 9i

The different memory components (SGA & PGA) needed to be defined at the startup of the database. These values were static, so if one of the memory components was too low, the database needed to be restarted to make the change effective.
How to determine the optimal value for the different memory components is not covered in this note, since that would lead us too far. However, a parameter that was often misused in these versions is sort_area_size.

The 'sort_area_size' parameter in the init.ora defines the amount of memory that can be used for sorting. This value should be chosen carefully, since it is part of the User Global Area (UGA) and is therefore allocated for each user individually.
If there are a lot of concurrent users performing large sort operations on the database, the system can run out of memory.

E.g.: you have a sort_area_size of 1 MB with 200 concurrent users on the database. Although this memory is allocated dynamically, it can add up to 200 MB and therefore cause extensive swapping on the system.

9.2 Oracle 9i

Starting from Oracle 9i we introduced the parameters:

workarea_size_policy = [AUTO | MANUAL]
pga_aggregate_target = <target size>

This allows you to define one pool of PGA memory, which is shared across sessions.
If you often receive ORA-4030 errors, this can be an indication that this value is set too low.
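
A minimal sketch of enabling automatic PGA memory management (the target value is illustrative and must be sized for your system):

SQL> alter system set workarea_size_policy = AUTO;
SQL> alter system set pga_aggregate_target = 500M;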

9.3 Oracle 10g

Automatic Shared Memory Management (ASMM) was introduced in 10g. The automatic shared memory
management feature is enabled by setting the SGA_TARGET parameter to a non-zero value.

This feature has the advantage that you can share memory resources among the different components.
Resources will be allocated and deallocated as needed by Oracle automatically.

Automatic PGA Memory management is still available through the 'workarea_size_policy' and
'pga_aggregate_target' parameters.

9.4 Oracle 11g

Automatic Memory Management (AMM) was introduced in 11g. It enables automatic tuning of both the PGA and the SGA with the use of two new parameters named MEMORY_MAX_TARGET and MEMORY_TARGET.
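
A minimal sketch of enabling AMM (the values are illustrative; MEMORY_MAX_TARGET is a static parameter, so the change only takes effect after an instance restart):

SQL> alter system set memory_max_target = 2G scope=spfile;
SQL> alter system set memory_target = 1500M scope=spfile;
SQL> shutdown immediate
SQL> startup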


10. Logging & Tracing

10.1 Alert File

The alert log file of the database is written chronologically; data is always appended, so this file can grow to an enormous size. It should be cleared or truncated on a regular basis, as a large alert file occupies unnecessary disk space and can slow down OS writes to the file.


Pre-11g:

SQL> show parameter background_dump_dest

NAME                           TYPE    VALUE
------------------------------ ------- ----------------------------------
background_dump_dest           string  D:\Oradata\Admin\PROD\Trace\BDump


11g and above:

SQL> show parameter diagnostic_dest

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
diagnostic_dest                      string      C:\ORACLE
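
On 11g and later, the exact alert log and trace file locations can also be queried from the database (a minimal sketch using v$diag_info):

SQL> select name, value
  2  from v$diag_info
  3  where name in ('Diag Alert','Diag Trace');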


10.2 Max_dump_file_size

Oracle server processes generate trace files for certain errors or conflicts. These trace files are useful for further analysis of the problem. The init.ora parameter 'max_dump_file_size' limits the size of these trace files; its value is specified in operating system blocks.
Make sure the disk can handle the maximum size specified; if not, this value should be changed.

SQL> show parameter max_dump_file_size

NAME                               TYPE    VALUE
---------------------------------- ------- ---------------------
max_dump_file_size                 integer 10240


10.3 User and core dump size parameters

The directories pointed to by the parameters 'user_dump_dest' and 'core_dump_dest' can contain a lot of trace information.
It is important to clean these directories at regular intervals, as they can take up a significant amount of space.

Note: starting from Oracle 11g, this location is controlled by the 'diagnostic_dest' parameter.


10.4 Audit files

By default, every connection as SYS or SYSDBA is logged in an operating system file.
The location is controlled through the parameter 'audit_file_dest'; if this parameter is not set, the location defaults to $ORACLE_HOME/rdbms/audit.
Over time this directory may contain a lot of auditing information and can take up a significant amount of space.