About Me

The best place to start learning Oracle, Linux/Unix scripting, and Oracle administration skills. As an Oracle DBA with 11g/10g/9i OCP certifications, I want to help you with my background and knowledge. Hopefully this information will serve you as a job aid.

Monday, November 28, 2011

Oracle Fundamental I - Chapter 8: Maintaining Tablespaces and Data Files

In this chapter, we are going to discuss the physical database components of the database: tablespaces and data files.
Oracle stores data logically in tablespaces and physically in data files.
  • Tablespaces:
    • Can belong to only one database at a time
    • Consist of one or more data files
    • Are further divided into logical units of storage
  • Data files:
    • Can belong to only one tablespace and one database
    • Are a repository for schema object data
Types of tablespaces: tablespaces can basically be categorized into two types:
  • System Tablespace
    • Created with the database
    • Contains the data dictionary
    • Contains the system undo segment
  • Non-system tablespace
    • Separate segments
    • Eases space administration
    • Controls amount of space allocated to a user
To create a tablespace, you issue a command with the syntax below:
CREATE TABLESPACE tablespace
    [ DATAFILE clause]
    [ MINIMUM EXTENT integer[K|M]]
    [ BLOCKSIZE integer [K]]
    [ LOGGING|NOLOGGING]
    [ DEFAULT storage_clause]
    [ ONLINE|OFFLINE]
    [ PERMANENT|TEMPORARY]
    [extent_management_clause]
    [segment_management_clause]
Ex:
CREATE TABLESPACE data01 DATAFILE '/u01/oradata/data01.dbf' SIZE 5M;

Space management in Tablespaces:
The space in a data file is managed either locally or by the data dictionary.
  • Locally managed tablespace
    • Free extents are managed in the tablespace
    • Bitmap is used to record free extents
    • Each bit corresponds to a block or group of blocks
    • Bit value indicates free or used
    • Since Oracle 9i, locally managed is the default
  • Dictionary-managed tablespace
    • Free extents are managed by the data dictionary
    • Appropriate tables are updated when extents are allocated or deallocated
Locally Managed Tablespaces
  • Reduced contention on data dictionary tables
  • No undo generated when space allocation or deallocation occurs
  • No coalescing required
The tablespace is locally managed by default since 9i, and this is set up in the extent management clause:
    extent_management_clause:
      [ EXTENT MANAGEMENT { DICTIONARY | LOCAL
        [ AUTOALLOCATE | UNIFORM [ SIZE integer [K|M] ] ] } ]
Ex:
CREATE TABLESPACE data01
DATAFILE '/u01/oradata/data01.dbf' size 500M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 128K;

Dictionary-Managed Tablespaces
  • Extents are managed in the data dictionary
  • Each segment stored in the tablespace can have a different storage clause
  • Coalescing is required
Ex:
CREATE TABLESPACE data01
DATAFILE '/u01/oradata/data01.dbf'
size 500M EXTENT MANAGEMENT DICTIONARY
DEFAULT STORAGE
(initial 1m next 1m pctincrease 0);

To migrate a dictionary-managed tablespace (including SYSTEM) to local management, use the Oracle-supplied package:
Ex:
EXEC DBMS_SPACE_ADMIN.TABLESPACE_MIGRATE_TO_LOCAL('SYSTEM');

  
Undo tablespace:
  • Used to store undo segments (known as rollback segments in releases prior to 9i)
  • Cannot contain any other objects
  • Extents are locally managed
  • Can only use the DATAFILE and EXTENT MANAGEMENT clauses
Ex:
CREATE UNDO TABLESPACE undo01
DATAFILE '/u01/oradata/undo01.dbf' size 500M;

Temporary Tablespaces
  • Used for sort operations
  • Can be shared by multiple users
  • Cannot contain any permanent objects
  • Locally managed extents recommended
CREATE TEMPORARY TABLESPACE temp01
TEMPFILE '/u01/oradata/temp01.dbf' size 200M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 4M;

Sort (temporary) segments are used when a segment is shared by multiple sort operations. Temporary tablespaces provide performance improvements when you have multiple sorts that are too large to fit into memory.
Note: a nonstandard block size cannot be specified when creating temporary tablespaces.

Default Temporary Tablespaces
  • Specifies a database-wide default temporary tablespace
  • Eliminates using SYSTEM tablespace for storing temporary data
  • Can be created by using:
    • CREATE DATABASE
    • ALTER DATABASE
Ex:
ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp01;

You can also specify and create the default temporary tablespace at database creation time.
Ex:
CREATE DATABASE db01
LOGFILE
GROUP 1 ('/$HOME/ORADATA/u01/redo01.rdo') SIZE 100m,
GROUP 2 ('/$HOME/ORADATA/u02/redo02.rdo') SIZE 100m,
MAXLOGFILES 5
MAXLOGMEMBERS 5
MAXLOGHISTORY 1
MAXDATAFILES 100
MAXINSTANCES 1
DATAFILE '/$HOME/ORADATA/u01/system01.dbf' SIZE 325M
UNDO TABLESPACE undo1
DATAFILE '/$HOME/ORADATA/u02/undo01.dbf' SIZE 200M
DEFAULT TEMPORARY TABLESPACE temp01
TEMPFILE '/$HOME/ORADATA/u03/temp01.dbf' SIZE 4M
CHARACTER SET US7ASCII;

Specifies the Default Temporary Tablespace after database creation:
Ex:
ALTER DATABASE
DEFAULT TEMPORARY TABLESPACE default_temp2;

To find the default temporary tablespace, you can query the database properties:
Ex:
SELECT * FROM DATABASE_PROPERTIES;
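If you only want the default temporary tablespace, you can filter on the property name; a quick sketch using the PROPERTY_NAME and PROPERTY_VALUE columns of DATABASE_PROPERTIES:

SELECT property_name, property_value
FROM database_properties
WHERE property_name = 'DEFAULT_TEMP_TABLESPACE';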

Note: Default temporary tablespaces cannot be:
  • Dropped until after a new default is made available
  • Taken offline
  • Altered to a permanent tablespace

A regular tablespace can be placed in read-only mode:
ALTER TABLESPACE userdata01 READ ONLY;
This command:
  • Causes a checkpoint
  • Makes the data available only for read operations
  • Still allows objects to be dropped from the tablespace
It is possible to drop objects, such as tables and indexes, located in a read-only tablespace, because the DROP command only updates the data dictionary; it does not change the data files of the tablespace where those objects are stored.
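To make the tablespace writable again, switch it back to read/write mode; for example, using the same tablespace name as above:

ALTER TABLESPACE userdata01 READ WRITE;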

Taking a Tablespace Offline:
Whenever the database is open, a database administrator can take any tablespace offline, except the SYSTEM tablespace or any tablespace with active undo segments or temporary segments, using the syntax below:
     ALTER TABLESPACE tablespace_name
         {ONLINE | OFFLINE [NORMAL | TEMPORARY | IMMEDIATE | FOR RECOVER]}
  • Not available for data access
  • Tablespaces that cannot be taken offline:
    • SYSTEM tablespace
    • Tablespaces with active undo segments
    • Default temporary tablespace
  • To take a tablespace offline:
ALTER TABLESPACE data01 OFFLINE;
  • To bring a tablespace online:
ALTER TABLESPACE data01 ONLINE;


Resizing tablespaces:
A tablespace can be resized by:
  • Changing the size of a data file:
    • Automatically using AUTOEXTEND
    • Manually using ALTER DATABASE
  • Adding a data file using ALTER TABLESPACE
You can also have tablespaces resized automatically by enabling automatic extension of data files:
  • Data files can be set to extend automatically with the following commands:
    • CREATE DATABASE
    • CREATE TABLESPACE
    • ALTER TABLESPACE ... ADD DATAFILE
  • Query the DBA_DATA_FILES view to determine whether AUTOEXTEND is enabled (see the query after the example below).
Ex:

CREATE TABLESPACE data01
DATAFILE '/u01/oradata/data01.dbf' size 200M
AUTOEXTEND ON NEXT 10M MAXSIZE 500M;
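To check which data files have automatic extension enabled, you can query DBA_DATA_FILES; a minimal sketch using its AUTOEXTENSIBLE and MAXBYTES columns:

SELECT file_name, tablespace_name, autoextensible, maxbytes
FROM dba_data_files;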


You may also manually resize a data file:
  • Manually increase or decrease a data file size using ALTER DATABASE.
  • Resizing a data file adds more space without adding more data files.
  • Manual resizing of a data file can also reclaim unused space in the database.
                ALTER DATABASE [database]
                    DATAFILE 'filename' [, 'filename']....
                    RESIZE integer [K|M]
Ex:
ALTER DATABASE
DATAFILE '/u01/oradata/data01.dbf'
RESIZE 400m;

Adding Data Files to a Tablespace
  • Increases the space allocated to a tablespace by adding additional data files
  • ADD DATAFILE clause is used to add a data file
Ex:
ALTER TABLESPACE data01
ADD DATAFILE '/u01/oradata/data02.dbf'
SIZE 400M;

Moving Data Files by using rename:
Two ways to accomplish this,
  • ALTER TABLESPACE
    • Tablespace must be offline
    • Target data files must exist
Ex:
ALTER TABLESPACE data01 RENAME
DATAFILE '/u01/oradata/data01.dbf'
TO '/u02/oradata/data01.dbf';

  • ALTER DATABASE
    • Database must be mounted
    • Target data file must exist
ALTER DATABASE RENAME
FILE '/u01/oradata/data01.dbf'
TO '/u02/oradata/data01.dbf';


Last, you may also drop tablespaces:
  • You cannot drop a tablespace if it:
    • Is the SYSTEM tablespace
    • has active segments
  • INCLUDING CONTENTS drops the segments.
  • INCLUDING CONTENTS AND DATAFILES deletes the physical data files
  • CASCADE CONSTRAINTS drops all referential integrity constraints

DROP TABLESPACE data01
INCLUDING CONTENTS AND DATAFILES;

Tablespace and data file information can be obtained by querying the following views:
  • Tablespace information:
    • DBA_TABLESPACES
    • V$TABLESPACE
  • Data file information:
    • DBA_DATA_FILES
    • V$DATAFILE
  • Temp file information:
    • DBA_TEMP_FILES
    • V$TEMPFILE
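For example, a quick sketch of queries against two of these views (column names as found in DBA_TABLESPACES and DBA_DATA_FILES):

SELECT tablespace_name, status, contents, extent_management
FROM dba_tablespaces;

SELECT tablespace_name, file_name, bytes
FROM dba_data_files;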

Friday, November 18, 2011

DB console does not find the listener error in 10g Cluster

The DB Console does not work and returns the error: ORA-12541: TNS:no listener (DBD ERROR: OCIServerAttach). On the home page of the DB Console, neither the instance nor the database information is shown. The snapshot is below:

The first thing we check is the listener status on all nodes.
(Since this is a three-node RAC cluster, we check the lsnrctl status on all nodes in the cluster.)
The output is shown below (the example is from only one node, since the output is the same on the other nodes apart from the node names):

racstress6:/opt/oracle> lsnrctl status listener_racstr23
LSNRCTL for Linux: Version 10.2.0.4.0 - Production on 18-NOV-2011 14:28:05
Copyright (c) 1991, 2007, Oracle.  All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=racstress6-vip.dm.cdm.com)(PORT=1526)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER_RACSTR23
Version                   TNSLSNR for Linux: Version 10.2.0.4.0 - Production
Start Date                06-JUL-2011 13:08:30
Uptime                    135 days 2 hr. 19 min. 34 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /opt/oracle/product/db/10.2.0.4/network/admin/listener.ora
Listener Log File         /opt/oracle/product/db/10.2.0.4/network/log/listener_racstr23.log
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=146.18.118.206)(PORT=1526)))
Services Summary...
Service "RACSTR2" has 1 instance(s).
  Instance "RACSTR23", status READY, has 26 handler(s) for this service...
Service "RACSTR23" has 1 instance(s).
  Instance "RACSTR23", status UNKNOWN, has 1 handler(s) for this service...
Service "RACSTR2_XPT" has 1 instance(s).
  Instance "RACSTR23", status READY, has 26 handler(s) for this service...
The command completed successfully
racstress6:/opt/oracle>

We also check the connection strings using the hostname and the hostname-vip. Ex:
$ sqlplus dbsnmp@'(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=racstress4-vip)(PORT=1526))(ADDRESS=(PROTOCOL=TCP)(HOST=racstress5-vip)(PORT=1526))(ADDRESS=(PROTOCOL=TCP)(HOST=racstress6-vip)(PORT=1526))(LOAD_BALANCE=yes))(CONNECT_DATA=(SERVICE_NAME=RACSTR2)))'

We then try again using the hostname without -vip.
It turns out that the connection through the -vip hostname works correctly, while the one without -vip does not.

The next thing we try is to modify the hostname information, clean out the old log files generated by the EM start, and then restart EM.

Stop the dbconsole on all nodes.

cd $ORACLE_HOME/<host_sid>/sysman/config
cp emoms.properties emoms.properties.ORIG
vi emoms.properties
Locate the connect descriptor at the bottom and change the physical hostname to the VIP hostname in the connect descriptor.
cd $ORACLE_HOME/<Host_SID>/sysman/emd
rm lastupld.xml
rm agntstmp.txt
rm protocol.ini
rm -fr state/*    # remove only the data inside the state directory, not the directory itself
rm -fr upload/*   # remove only the data inside the upload directory, not the directory itself
cd $ORACLE_HOME/<Host_SID>/sysman/log
rm *
cd $ORACLE_HOME/bin
./emctl clearstate agent
./emctl start dbconsole
./emctl upload agent
./emctl status agent
./emctl status dbconsole
 
But we still get the same error, and the DB Console web page still does not show up. So we suspect it could be an agent problem on one of the nodes.
And the next action item is:
Step 1:
Modify the hostname in the descriptor string in emoms.properties file.
cd to the $ORACLE_HOME/hostname_sid/sysman/config
vi emoms.properties
 
In emoms.properties, find the following line and modify the connection string,
changing the hostname from hostname.domain to hostname-vip.domain:
Ex:
oracle.sysman.eml.mntr.emdRepConnectDescriptor=(DESCRIPTION\=(ADDRESS_LIST\=(ADDRESS\=(PROTOCOL\=TCP)(HOST\=hostname-vip.dm.cdm.com)(PORT\=port#)))(CONNECT_DATA\=(SERVICE_NAME\=SID)))
 
Step2:
Modify the emd.properties file as follows:
Change the parameter
AgentListenOnAllNICs=TRUE
to
AgentListenOnAllNICs=FALSE

Step 3:
cd $ORACLE_HOME/<Host_SID>/sysman/emd
rm lastupld.xml
rm agntstmp.txt
rm protocol.ini
rm -fr state/*    # remove only the data inside the state directory, not the directory itself
rm -fr upload/*   # remove only the data inside the upload directory, not the directory itself
cd $ORACLE_HOME/<Host_SID>/sysman/log
rm *
Step 4:
Kill the agent process.
ps -ef | grep emagent    # find the PID of the emagent process
$ kill <pid>
Then try to start the DB Console again; this time it starts successfully.

Summary:
This error can be attributed to two causes.
1. The hostname specified is not correct; in a cluster environment the VIP hostname must be used instead of the physical hostname.
2. The agent process has not been cleaned up.

The solution to the first problem is to change the hostname specified in the emoms.properties file to the virtual IP (VIP) hostname. This situation is quite common in cluster environments. Also change the parameter 'AgentListenOnAllNICs' from TRUE to FALSE.
Clean up all the old .log files to ensure a clean start.

The solution to the second problem is to find the agent process. Because the DB Console did not start successfully, the emagent process was started but left in a bad state, and it did not clean itself up. So you need to manually find the PID of that process and kill it.

After implementing the above two steps, restart the DB Console and the emagent.
In my case, the DB Console started successfully afterward.

Thursday, November 10, 2011

Oracle Fundamental I - Chapter 7: Maintaining Online Redo Log Files

In this chapter, we are talking about online redo log files.
Online redo log files play an important role in recording data transactions and provide the recovery mechanism that ensures data integrity.

Online redo log files have the those characteristics:
  • Record all changes made to data
  • Provide a recovery mechanism
  • Can be organized into groups
  • At least two groups required
The online redo log files provide the means to redo transactions in the event of a database failure. When a transaction occurs, it is first written into the redo log buffer (please refer to Chapter 1 for the architecture) and then flushed to the online redo log files. When a media failure occurs, the redo log files provide the recovery mechanism to recover the transaction data. This includes not-yet-committed transactions, undo segment information, and schema and object management statements.
A transaction is not logged in the redo log files when the NOLOGGING clause is issued in the statement or when a direct-load insert is used.
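For example, this is how the NOLOGGING clause and a direct-load insert (the APPEND hint) are written; the table names below are only illustrative:

CREATE TABLE sales_copy NOLOGGING AS SELECT * FROM sales;
INSERT /*+ APPEND */ INTO sales_copy SELECT * FROM sales;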

Redo log files are organized into groups, and at least two online redo log groups are required.


Online Redo Log File Groups:
  • A set of identical copies of online redo log files is called an online redo log file group.
  • The LGWR background process concurrently writes the same information to all online redo log files in a group.
  • The Oracle server needs a minimum of two online redo log file groups for the normal operation of a database.
Online Redo Log File Members
  • Each online redo log file in a group is called a member.
  • All members in a group have identical log sequence numbers and are of the same size. The log sequence number is assigned each time the Oracle server writes to a log group, to uniquely identify each online redo log file. The current log sequence number is stored in the control file and in the header of all data files.
The initial set of online redo log file groups and members is created during database creation.
The following parameters limit the number of online redo log files:
  • MAXLOGFILES: it's the parameter in CREATE DATABASE command specifies the absolute maximum of online redo log file groups.
  • The maximum and default value for MAXLOGFILES depends on operating system.
  • MAXLOGMEMBERS: the parameter in CREATE DATABASE command determines the maximum number of members per redo log file group.
You need to understand how online redo log files work in order to use them to ensure data availability.
The Oracle server sequentially records all changes made to the database in the redo log buffer. The redo entries are written from the redo log buffer to the current online redo log file group by the LGWR process. LGWR writes in the following situations:
  • When a transaction commits
  • When the redo log buffer becomes one-third full
  • When there is more than a megabyte of changed records in the redo log buffer
  • Before DBWn writes modified blocks in the database buffer cache to the data files
Log Switches
LGWR writes to the online redo log files sequentially. When the current online redo log file group is filled, LGWR begins writing to the next group. This is called a log switch.

Checkpoints
During a checkpoint:
  • DBWn writes a number of dirty database buffers, which are covered by the log that is being checkpointed, to the data files.
  • The checkpoint background process CKPT updates the control file to reflect that it has completed a checkpoint successfully. If the checkpoint is caused by a log switch, CKPT also updates the headers of the data files.
A checkpoint occurs in the following situations:
  • At every log switch
  • When an instance has been shut down with the normal, transactional, or immediate option
  • When forced by setting the FAST_START_MTTR_TARGET initialization parameter
  • When manually requested by the database administrator
  • When the ALTER TABLESPACE [OFFLINE NORMAL|READ ONLY|BEGIN BACKUP] command causes checkpointing on specific data files
You can force log switches and checkpoints as below:
  • Forcing a log switch:

SQL> ALTER SYSTEM SWITCH LOGFILE;


  • Checkpoints can be forced by setting the FAST_START_MTTR_TARGET parameter. This parameter forces the Oracle instance to meet the goal that instance recovery should take no more than a certain number of seconds, in this case 600 seconds:

FAST_START_MTTR_TARGET = 600


  • ALTER SYSTEM CHECKPOINT command

ALTER SYSTEM CHECKPOINT;


Adding Online Redo Log File Groups:
In some cases you might need to create additional log file groups. To create a new group of online redo log files, use the following SQL command:
         ALTER DATABASE [database]
            ADD LOGFILE [GROUP integer] filespec
               [, [GROUP integer] filespec]...]
Ex:

ALTER DATABASE ADD LOGFILE GROUP 3
('$HOME/ORADATA/u01/log3a.rdo',
  '$HOME/ORADATA/u02/log3b.rdo')
SIZE 1M;

Adding a new member to an existing online redo log group:
You can also add new members to existing online redo log file groups using the following command:
       
                 ALTER DATABASE [database]
                      ADD LOGFILE MEMBER
                         [ 'filename' [REUSE] [, 'filename' [REUSE]]...
                       TO {GROUP integer
                       | ('filename' [, 'filename']...)
                      }  ]...
Ex:
 

ALTER DATABASE ADD LOGFILE MEMBER
'$HOME/ORADATA/u04/log1c.rdo' TO GROUP 1,
'$HOME/ORADATA/u04/log2c.rdo' TO GROUP 2,
'$HOME/ORADATA/u04/log3c.rdo' TO GROUP 3;


Dropping Online Redo Log File Groups:
You can also drop the Online Redo Log File Groups with following command:
                   ALTER DATABASE [database]
                           DROP LOGFILE {GROUP integer| ('filename' [,'filename']...)}
                             [,{GROUP integer| ('filename'[,'filename']...)}]...
Ex:
 

ALTER DATABASE DROP LOGFILE GROUP 3;
Dropping Online Redo Log File Members:
You may drop an online redo log file member by following command:
            ALTER DATABASE [database]
                    DROP LOGFILE MEMBER 'filename' [, 'filename']...
Ex:
 

ALTER DATABASE DROP LOGFILE MEMBER '$HOME/ORADATA/u04/log3c.rdo';
Relocating or Renaming Online Redo Log Files:
Sometimes you need to rename or relocate an online redo log file. To do this, follow these steps:
1. Shut down the database:
           SQL> SHUTDOWN;
2. Copy the online redo log files to the new location.
3. Start up the database and mount it, but do not open it:
           SQL> CONNECT / AS SYSDBA
           SQL> STARTUP MOUNT;
4. Rename the online redo log members using ALTER DATABASE RENAME FILE:
Ex:


ALTER DATABASE RENAME FILE
   '$HOME/ORADATA/u01/log2a.rdo'
     TO '$HOME/ORADATA/u02/log1c.rdo';
5. Open the database for normal operation:
      SQL> ALTER DATABASE OPEN;

Clearing Online Redo Log Files
The ALTER DATABASE CLEAR LOGFILE command can be used to reinitialize an online redo log file.
Ex:
 

ALTER DATABASE CLEAR LOGFILE GROUP 2;
Use the UNARCHIVED keyword to avoid archiving the corrupted online redo log file.
Ex:
 

ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 2;


There are a couple of dynamic views from which you can retrieve information about online redo log files, for example:
  • V$LOG
  • V$LOGFILE
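For example, a minimal sketch of querying these views for group, sequence, and member information:

SELECT group#, sequence#, members, bytes, status FROM v$log;
SELECT group#, member, status FROM v$logfile;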

Archived Redo Log Files
  • Filled online redo log files can be archived.
  • There are two advantages to running the database in ARCHIVELOG mode and archiving online redo log files:
    • Recovery: A database backup together with online and archived redo log files can guarantee recovery of all committed transactions.
    • Backup: This can be performed while the database is open.
  • By default, the database is created in NOARCHIVELOG mode.
  • Accomplished manually through SQL statements
  • When successfully archived:
    • An entry in the control file is made
    • Records: archive log name, log sequence number, and high and low system change number (SCN).
  • Filled online redo log files cannot be reused until:
    • A checkpoint has taken place
    • File has been archived by ARCn
  • Can be multiplexed
  • Maintained by the DBA
Note: Archived log files are an important part of backup and recovery, which will be discussed in more detail in Fundamentals II.
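To check whether the database is running in ARCHIVELOG mode, you can query V$DATABASE or use the SQL*Plus ARCHIVE LOG LIST command; for example:

SQL> SELECT log_mode FROM v$database;
SQL> ARCHIVE LOG LIST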


    

Wednesday, November 9, 2011

Oracle Fundamental I - Chapter 6: Maintaining the Control Files

In this chapter, we are going to talk about how to manage the control files.
The control file is a small binary file necessary for the database to start and operate successfully. Each control file is associated with only one Oracle database. Before a database is opened, the control file is read to determine whether the database is in a valid state to use.
A control file is updated continuously by the Oracle server during database use, so it must be available for writing whenever the database is open. The information in the control file can be modified only by the Oracle server; no database administrator or end user can edit the control file.
If for some reason the control file is not accessible, the database does not function properly. If all copies of a database's control files are lost, the database must be recovered before it can be opened.

Control File contents:
  • Database name and identifier
  • Time stamp of database creation
  • Tablespace names
  • Names and locations of data files and online redo log files
  • Current online redo log file sequence number
  • Checkpoint information
  • Begin and end of undo segments
  • Redo log archive information
  • Backup Information
Multiplexing the Control File
To safeguard against a single point of failure of the control file, it is strongly recommended that the control file be multiplexed, storing each copy on a different physical disk.
Control files can be multiplexed up to eight times by:
  • Creating multiple control files when the database is created by including the control file names and full path in the initialization parameter file:
    • CONTROL_FILES=$HOME/ORADATA/data01/ctr101.ctl,$HOME/ORADATA/data01/ctr102.ctl
  • Adding a control file after the database is created
In Oracle Fundamentals II (Backup and Recovery), we will talk in more detail about how to back up the control file.

You follow different steps to multiplex control files depending on whether the database uses an SPFILE or a PFILE.
Using SPFILE,
1. Alter the SPFILE:
SQL> ALTER SYSTEM SET control_files =
          '$HOME/ORADATA/data01/ctr101.ctl',
          '$HOME/ORADATA/data02/ctr102.ctl' SCOPE=SPFILE;

2. Shutdown the database:

SQL> shutdown immediate;         


3. Create the additional control file at the OS level by using an OS copy command:

$ cp $HOME/ORADATA/data01/ctr101.ctl $HOME/ORADATA/data02/ctr102.ctl

4. Start the database:

SQL> startup

Using PFILE,
1. Shutdown the database:

SQL> shutdown immediate;


2. Create the additional control file at the OS level by using an OS copy command:

$ cp $HOME/ORADATA/data01/ctr101.ctl $HOME/ORADATA/data01/ctr102.ctl         


3. Add control file names to the PFILE:

CONTROL_FILES = ($HOME/ORADATA/data01/ctr101.ctl, $HOME/ORADATA/data01/ctr102.ctl)

4. Start the database:

SQL> startup

Note: Oracle Recovery Manager provides a feature called replicate controlfile, which can accomplish the same task of multiplexing the control files. We will discuss this feature in Fundamentals II.
To obtain control file information, there are several dynamic views that you can query. A few examples are below:
  • V$CONTROLFILE: lists the name and status of all control files associated with the instance.
  • V$PARAMETER: lists the status and location of all parameters, among which you can find the control file parameter
  • V$CONTROLFILE_RECORD_SECTION: provides information about the control file record sections
  • SHOW PARAMETER CONTROL_FILES: lists the name, status, and location of the control files
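For example, a minimal query against V$CONTROLFILE to list the control file copies:

SQL> SELECT name, status FROM v$controlfile;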
 

Oracle Fundamental I - Chapter 5: Data Dictionary and Dynamic Performance Views

In this chapter, we will discuss other system-built objects that are central to the Oracle database and provide important information for managing the Oracle database and administration work.

Data Dictionary
One of the most important parts of an Oracle database is its data dictionary, which is a read-only set of tables and views that provides information about its associated database. The data dictionary is updated by the Oracle server whenever a data definition language (DDL) command is executed. In addition, data manipulation language (DML) commands, such as one that causes a table to extend, can update the data dictionary.
Not only is the data dictionary central to every Oracle database, it is an important source of information for all users, from end users to application designers and database administrators.
SQL statements are  used to access the data dictionary.

The data dictionary contains two parts:
  • Base tables
    • stores description of the database
    • Created with CREATE DATABASE
  • Data dictionary views
    • Used to simplify the base table information
    • Accessed through public synonyms
    • Created with the catalog.sql script
The data dictionary provides information about:
  • Logical and physical database structures
  • Definitions and space allocations of objects
  • Integrity constraints
  • Users
  • Roles
  • Privileges
  • Auditing
There are three categories of Data Dictionary static views:
Distinguished by their scope:
DBA: What is in all the schemas
ALL: What the user can access
USER: What is in the user's schema

To query on all objects in the database, the DBA can issue the following statement:
SQL> SELECT owner, object_name, object_type FROM dba_objects;
The following example returns information about all the objects to which a user has access:
SQL> SELECT owner, object_name, object_type FROM all_objects;
The following query returns all the objects contained in the user's schema:
SQL> SELECT object_name, object_type FROM user_objects;

Some Data Dictionary Examples below:
  • General overview: DICTIONARY, DICT_COLUMNS
  • Schema objects: DBA_TABLES, DBA_INDEXES, DBA_CONSTRAINTS
  • Space allocation: DBA_SEGMENTS, DBA_EXTENTS
  • Database structure: DBA_TABLESPACES, DBA_DATA_FILES
Dynamic Performance Tables:
Throughout its operation, the Oracle server records current database activity in a set of virtual tables called dynamic performance views. These virtual tables exist in memory only when the database is running, to reflect real-time conditions of the database operation. They point to actual sources of information in memory and the control file.
These tables are not true tables and are not meant to be accessed by most users; however, the DBA can query them, grant the SELECT privilege on them, and create views on them. These views are sometimes called fixed views because they cannot be altered or removed by the DBA.
The dynamic performance tables are owned by SYS, and their names all begin with V_$.
Views are created on these tables, and then public synonyms are created for the views. The synonym names begin with V$. For example, the V$DATAFILE view contains information about the database's data files, and the V$FIXED_TABLE view contains information about all of the dynamic performance tables and views in the database.
Some examples: V$CONTROLFILE, V$DATABASE, V$DATAFILE, V$INSTANCE, V$PARAMETER, V$SESSION, V$SGA.
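For example, a quick sketch of querying one of these views to see the instance state:

SQL> SELECT instance_name, status FROM v$instance;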

Summary:
There are hundreds of data dictionary views and dynamic performance views in an Oracle database. It is almost impossible for anyone to remember them all. One general way to find a data dictionary view is to issue a query like:
"SELECT table_name FROM dictionary WHERE table_name LIKE '%INDEX%';"
It will return the data dictionary views that have INDEX in their names.

Tuesday, November 8, 2011

Oracle Fundamental I - Chapter 4: Creating a Database

In the last chapter, we talked about how to manage an Oracle instance. As we discussed, the Oracle DBMS is composed of two major parts: the Oracle instance and the Oracle database. In this chapter, we will discuss how to create and manage the Oracle database.

Creation Prerequisites
To create a new database, you must have the following:
  • A privileged account authenticated by one of the following:
    • Operating system
    • Password file
  • Sufficient memory to start the instance
  • Sufficient disk space for the planned database
Using Password File Authentication
  • Create the password file using the password utility:

$ orapwd file=$ORACLE_HOME/dbs/orapwU15 password=admin entries=5
  • Set REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE in initialization parameter file
  • Add users to the password file
  • Assign appropriate privileges to each user

GRANT SYSDBA TO HR;
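To verify which users have been added to the password file, you can query V$PWFILE_USERS; for example:

SQL> SELECT * FROM v$pwfile_users;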

An Oracle database can be created by:
  • Oracle Universal Installer
  • Oracle Database Configuration Assistant
    • Graphical user interface
    • Java-based
    • Launched by the Oracle Universal Installer
    • Can be used as a stand-alone application
  • The CREATE DATABASE command
In this chapter, we will discuss the procedure based on CREATE DATABASE command.

Before creating a database, some operating system environment variables must be properly configured, for example:
ORACLE_BASE: /opt/oracle
ORACLE_HOME: /opt/oracle/product/11.2
ORACLE_SID: DB1
ORA_NLS33: /$ORACLE_HOME/ocommon/nls/admin/data
PATH: /$ORACLE_HOME/bin
LD_LIBRARY_PATH: $ORACLE_HOME/lib
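For example, on Linux/Unix with a bash-type shell these variables could be set as follows (the paths are only illustrative and should match your own installation):

$ export ORACLE_BASE=/opt/oracle
$ export ORACLE_HOME=$ORACLE_BASE/product/11.2
$ export ORACLE_SID=DB1
$ export PATH=$ORACLE_HOME/bin:$PATH
$ export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH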

Creating a database manually:
  • choose a unique instance and database name
  • choose a database character set
  • Set operating system variables
  • Create the initialization parameter set
  • Start the instance in NOMOUNT stage
  • Create and execute the CREATE DATABASE command
  • Run scripts to generate the data dictionary and accomplish post-creation steps
    • catalog.sql: creates the views on the base tables and on the dynamic performance views, and their synonyms.
    • catproc.sql: creates the packages and procedures required to use PL/SQL.
    • pupbld.sql: creates the product user profile table and related procedures; this script must be run as SYSTEM.
  • Create additional tablespaces as needed
Oracle has a feature called Oracle Managed Files (OMF) to help you easily manage the location of data files.
  • OMF are established by setting two parameters:
    • DB_CREATE_FILE_DEST: Sets the default location for data files
    • DB_CREATE_ONLINE_LOG_DEST_n: Sets the default locations for online redo log files and control files
  • Maximum of five locations
  • Once those two parameters are specified, the CREATE DATABASE command can be as simple as:

SQL> CREATE DATABASE dba01;
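For example, the two OMF parameters could be set in the initialization parameter file before issuing the command above (the directories are only illustrative):

DB_CREATE_FILE_DEST = '/u01/oradata'
DB_CREATE_ONLINE_LOG_DEST_1 = '/u02/oradata'
DB_CREATE_ONLINE_LOG_DEST_2 = '/u03/oradata'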





    Saturday, November 5, 2011

    Oracle Fundamental I - Chapter 3: Managing an Oracle Instance

    In this chapter, you will learn the fundamental skills of managing an Oracle instance.
    An Oracle instance is the set of memory structures and server processes that performs the core work of managing the Oracle database. It is composed of memory segments and server processes.

    The first thing you need to do is create the initialization parameter file.
    Two types of parameters:
    • Explicit: Having an entry in the file
    • Implicit: No entry within the file, but assuming the Oracle default values;
    Multiple parameter files can exist.

    Parameter file (pfile):
    • pfile (parameter file): The text based parameter file to configure the Oracle instance.
      • modified with an operating system editor
      • Modifications made manually
      • changes take effect on the next start up
      • Only opened during instance start up
      • Default location is $ORACLE_HOME/dbs
    • Creating a PFILE
      • You create a pfile from the sample pfile:
      • Sample is installed by the Oracle Universal Installer
      • Copy sample using operating system copy command
      • Uniquely identified by database SID. 
    cp init.ora $ORACLE_HOME/dbs/initdba01.ora

    • Modify the initSID.ora file
      • Edit the parameters
      • Specific to database needs
    init.ora file example:

    db_name = dba01
    instance_name = dba01
    control_files = ( /home/dba01/ORADATA/u01/control01dba01.ctl,
    /home/dba01/ORADATA/u02/control01dba02.ctl)
    db_block_size = 4096
    db_cache_size = 4M
    shared_pool_size = 50000000
    java_pool_size = 50000000
    max_dump_file_size = 10240
    background_dump_dest = /home/dba01/ADMIN/BDUMP
    user_dump_dest = /home/dba01/ADMIN/UDUMP
    core_dump_dest = /home/dba01/ADMIN/CDUMP
    undo_management = AUTO
    undo_tablespace = UNDOTBS
    . . .


    Server Parameter File (spfile):
    • spfile (server parameter file): A binary parameter file; since 9i it is the default parameter file that an Oracle instance looks for first
      • Binary based file
      • Maintained by the Oracle server
      • Always resides on the server side
      • Ability to make changes persistent across shut down and start up
      • Can self-tune parameter values
      • Recovery Manager can support backing up the server parameter file
    • Creating an spfile
      • Created from a pfile

    CREATE SPFILE = '$ORACLE_HOME/dbs/spfileDBA01.ora' FROM PFILE = '$ORACLE_HOME/dbs/initDBA01.ora';
    • where
      • SPFILE-NAME: the SPFILE to be created
      • PFILE-NAME: the PFILE used to create the SPFILE
      • This command can be executed before or after instance startup.
    spfile example:

    *.background_dump_dest='/home/dba01/ADMIN/BDUMP'
    *.compatible='9.2.0'
    *.control_files='/home/dba01/ORADATA/u01/ctrl01.ctl'
    *.core_dump_dest='/home/dba01/ADMIN/CDUMP'
    *.db_block_size=4096
    *.db_name='dba01'
    *.db_domain='world'
    *.global_names=TRUE
    *.instance_name='dba01'
    *.remote_login_passwordfile='exclusive'
    *.java_pool_size=50000000
    *.shared_pool_size=50000000
    *.undo_management='AUTO'
    *.undo_tablespace='UNDOTBS'

      
    If the SPFILE-NAME and PFILE-NAME are not included in the syntax, Oracle uses the default PFILE (initSID.ora) to generate an SPFILE with the default name (spfileSID.ora).


    SQL> CREATE SPFILE FROM PFILE;

    To modify an SPFILE, you issue the ALTER SYSTEM SET command to change the value of a parameter from the SQL*Plus command prompt.

    ALTER SYSTEM SET parameter_name = parameter_value
      [ COMMENT 'text' ]  [ SCOPE = MEMORY|SPFILE|BOTH ]
      [ SID = 'sid'|'*']

    'sid': the specific SID for which the parameter value applies; useful in a RAC environment.
    '*': applies the change to all instances (the default).

    example:
    • Changing parameter values

    ALTER SYSTEM SET undo_tablespace = UNDO2;
    • Specifying temporary or persistent changes

    ALTER SYSTEM SET undo_tablespace = UNDO2
    SCOPE=BOTH;

    • Deleting or resetting values

    ALTER SYSTEM RESET undo_tablespace
    SCOPE=SPFILE SID='*';

    Managing an Oracle instance,
    STARTUP command behavior

    • Order of precedence:
      • spfileSID.ora
      • Default SPFILE
      • initSID.ora
      • Default PFILE
    • A specified PFILE can override the precedence:

    STARTUP PFILE = $ORACLE_HOME/dbs/initDBA1.ora

    There are four stages of an Oracle instance: SHUTDOWN, NOMOUNT, MOUNT, and OPEN.

    Starting the Instance (NOMOUNT):
    An instance would be started in the NOMOUNT stage only during database creation or the re-creation of control files.

    Starting an instance includes the following tasks:
    • Reading the initialization file from $ORACLE_HOME/dbs in the following order:
      • First spfileSID.ora
      • If not found, then spfile.ora
      • If not found, then initSID.ora
    • Allocating the SGA
    • Starting the background processes
    • Opening the alertSID.log file and the trace files.
    Mounting the database (MOUNT):
    To perform specific maintenance operations, you start an instance and mount a database but do not open the database.
    For example, the database must be mounted but not open during the following tasks:
    • Renaming data files
    • Enabling and disabling online redo log file archiving options
    • Performing full database recovery
    Mounting a database includes the following tasks:
    • Associating a database with a previously started instance.
    • Locating and opening the control files specified in the parameter file
    • Reading the control files to obtain the names and status of the data files and online redo log files. However, no checks are performed to verify the existence of the data files and online redo log files at this time.
    Opening the database (OPEN):
    Normal database operation means that an instance is started and the database is mounted and open. Typical database access and normal database operations are possible at this time.
    Opening the database includes the following tasks:
    • Opening the online data files
    • Opening the online redo log files
    Open mode is the default mode when starting the instance with the STARTUP command:

    STARTUP


    STARTUP PFILE = $ORACLE_HOME/dbs/initDBA1.ora

    Startup command syntax:
    STARTUP [FORCE] [RESTRICT] [PFILE=filename]
         [OPEN  [RECOVER] [database]
         |MOUNT
         |NOMOUNT ]

    • FORCE: Aborts the running instance before performing a normal startup.
    • RESTRICT: Enables only users with the RESTRICTED SESSION privilege to access the database.
    • RECOVER: Begins media recovery when the database starts.
    Shutting Down the Database:

    Shutdown Mode                          A     I     T     N
    Allow new connections                  No    No    No    No
    Wait until current sessions end        No    No    No    Yes
    Wait until current transactions end    No    No    Yes   Yes
    Force a checkpoint and close files     No    Yes   Yes   Yes

    Shutdown modes:
    • A = Abort
    • I  = Immediate
    • T = Transactional
    • N = Normal
    Shutdown Normal:
    Normal is the default shutdown mode. A normal database shutdown proceeds with the following conditions:
    • No new connections can be made
    • The Oracle server waits for all users to disconnect before completing the shutdown.
    Shutdown Transactional:
    A transactional shutdown prevents clients from losing work. A transactional database shutdown proceeds with the following conditions:
    • No client can start a new transaction on this particular instance.
    • A client is disconnected when it ends the transaction that is in progress.
    • When all transactions have finished, a shutdown occurs immediately.
    Shutdown Immediate:
    Immediate database shut down proceeds with the following conditions:
    • Current SQL statements being processed by Oracle are not completed.
    • The Oracle server does not wait for the users, who are currently connected to the database, to disconnect.
    • Oracle rolls back active transactions and disconnects all connected users.
    • Oracle closes and dismounts the database before shutting down the instance.
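    For example, the corresponding SQL*Plus commands for the four shutdown modes are:

    SQL> SHUTDOWN NORMAL
    SQL> SHUTDOWN TRANSACTIONAL
    SQL> SHUTDOWN IMMEDIATE
    SQL> SHUTDOWN ABORT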
    Monitoring an Instance Using Diagnostic Files:
    • alertSID.log file
    • Background trace files
    • User trace files
    Alert Log File:
    • Location defined by BACKGROUND_DUMP_DEST
    • Must be managed by the DBA
    • Each entry has a time stamp associated with it
    • alertSID.ora file:
      • Records the commands
      • Records results of major events
      • Used for day-to-day operational information
      • Used for diagnosing database errors
    Background Trace Files:
    • Background trace files
      • Log errors detected by any background process
      • Are used to diagnose and troubleshoot errors
    • Created when a background process encounters an error
    • Location defined by BACKGROUND_DUMP_DEST
    User Trace Files:
    • User trace files
      • Produced by the user process
      • Can be generated by a server process
      • Contain statistics for traced SQL statements
      • Contain user error messages
    • Created when a user encounters user session errors
    • Location is defined by USER_DUMP_DEST
    • Size defined by MAX_DUMP_FILE_SIZE
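    For example, the locations and the size limit can be checked from SQL*Plus:

    SQL> SHOW PARAMETER background_dump_dest
    SQL> SHOW PARAMETER user_dump_dest
    SQL> SHOW PARAMETER max_dump_file_size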