Oracle DBA, How To, Error, Cause and Action

Shrink The Datafile of Undo Tablespace

When the undo tablespace's datafiles are set to AUTOEXTEND ON MAXSIZE UNLIMITED, this helps avoid:
ORA-01651: unable to extend save undo segment by <num> in tablespace <name>
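To check whether the undo datafiles are already set this way, and to enable it, something like the following can be used (the datafile path and tablespace name undo01 are illustrative):

```sql
-- Check the autoextend settings of the undo tablespace's datafiles
SELECT file_name, autoextensible, maxbytes
FROM   dba_data_files
WHERE  tablespace_name = 'UNDO01';

-- Enable unlimited autoextend on one datafile (illustrative path)
ALTER DATABASE DATAFILE 'undo01_01.dbf'
  AUTOEXTEND ON MAXSIZE UNLIMITED;
```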

However, this can allow the datafiles to grow very large, and you may run out of storage space.

When you try to shrink the files with ALTER DATABASE ... RESIZE, they may be too fragmented to be resized, and you may get the following error:
ORA-03297: file contains <num> blocks of data beyond requested RESIZE value
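For example, a resize attempt like this fails with ORA-03297 whenever used undo blocks sit beyond the requested size (the datafile name and target size are illustrative):

```sql
-- Fails with ORA-03297 if any data blocks lie beyond the 100M mark
ALTER DATABASE DATAFILE 'undo01_01.dbf' RESIZE 100M;
```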

The easiest way to shrink the undo tablespace is to create a new one and drop the old one.

The procedure is as follows.

Create a new undo tablespace with a smaller size, say 100M:

SQL> CREATE UNDO TABLESPACE undo02 DATAFILE 'undo02_01.dbf' SIZE 100M;

Set the new tablespace as the undo tablespace to be used:

SQL> ALTER SYSTEM SET undo_tablespace=undo02;
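If the instance uses an spfile, this change is persistent by default (SCOPE=BOTH). You can verify that the switch took effect:

```sql
-- Confirm the active undo tablespace
SHOW PARAMETER undo_tablespace

-- Or query it directly
SELECT value FROM v$parameter WHERE name = 'undo_tablespace';
```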

Drop the old undo tablespace. In this example, undo01 is the oversized tablespace you want to drop:

SQL> DROP TABLESPACE undo01 INCLUDING CONTENTS;


Note:
  • Dropping the old tablespace may give ORA-30013: undo tablespace '%s' is currently in use. This error indicates that the tablespace is still serving active transactions; you must wait for those existing transactions to commit or roll back before the tablespace can be dropped.
  • Also be aware that on some platforms, disk space is not freed back to the OS until the database is restarted; until then the space remains "allocated" from the OS perspective.
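If ORA-30013 persists, queries along these lines can show which undo segments of the old tablespace are still holding transactions (a sketch; exact column availability may vary by Oracle version):

```sql
-- Undo segments still waiting to go offline
SELECT a.name, b.status
FROM   v$rollname a, v$rollstat b
WHERE  a.usn = b.usn
AND    b.status = 'PENDING OFFLINE';

-- Active transactions and the undo segments they use
SELECT s.sid, s.serial#, t.used_ublk, r.segment_name, r.tablespace_name
FROM   v$transaction t
JOIN   v$session s ON s.taddr = t.addr
JOIN   dba_rollback_segs r ON r.segment_id = t.xidusn
WHERE  r.tablespace_name = 'UNDO01';
```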


Points to Consider:
  • The value of UNDO_RETENTION also plays a role in the growth of the undo tablespace. When there is no other way to obtain undo space for a new transaction, expired (retention) undo space is reused. But if the datafiles of the undo tablespace are set to autoextend, the database prefers to extend the datafile rather than reuse that space. In such scenarios, new transactions allocate fresh space and the undo tablespace keeps growing.
  • Is big really bad? Overhead on larger files/tablespaces can theoretically impact the database and the OS. With a small file, the OS has to do minimal I/O, Oracle can cache the whole file, and there are fewer segments to manage. With AUM (Automatic Undo Management) you get bitmapped files and their space-management performance benefits -- the number of undo segments is managed automatically and is not related to the size of the tablespace. A bigger file/tablespace does carry other overhead -- e.g. backups take longer -- but as far as undo management goes, there should be no performance impact just because the file/tablespace is bigger. That said, it is important to monitor your systems (e.g. with StatsPack) and watch for environment-specific issues.
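For monitoring undo usage and retention tuning over time, v$undostat is a common starting point (a sketch; the TUNED_UNDORETENTION column requires 10g or later):

```sql
-- Undo blocks consumed, longest query, and tuned retention
-- per recent 10-minute interval
SELECT begin_time, end_time, undoblks, maxquerylen, tuned_undoretention
FROM   v$undostat
ORDER BY begin_time DESC;
```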