About Me

The best place to start learning Oracle, Linux/Unix scripting, and Oracle administration skills. As an Oracle DBA with 11g/10g/9i OCP certifications, I want to help you with my background and knowledge. Hopefully, this information will serve you as a job aid.

Thursday, December 22, 2011

srvctl command job aid - I

This article records some 'srvctl' commands used to manipulate instances/databases in a Grid Infrastructure environment. The example provided is based on an Oracle 11g virtual database environment. On this machine, we installed the Oracle binary 11.2.0.3 and the CRS binary 11.2.0.3.
(Note: Grid Infrastructure will be discussed in more detail in the Grid Infrastructure lessons for Oracle 11g.)

In this case, we are going to use srvctl to shut down and restart the instance. In our VDB environment, there is only one instance for this database, so it is pretty similar to a normal standalone database. But because we treat it as a cluster database environment, we use srvctl to perform the task instead of sqlplus.

Ex: Shut down the database instance and reopen it in archivelog mode.
1. Check whether the dbconsole is up. If it is, shut down the dbconsole first.

> emctl status dbconsole
Oracle Enterprise Manager 11g Database Control Release 11.2.0.3.0
Copyright (c) 1996, 2011 Oracle Corporation.  All rights reserved.
https://Hostname:1158/em/console/aboutApplication
Oracle Enterprise Manager 11g is running.
------------------------------------------------------------------
Logs are generated in directory /opt/oracle/product/11.2.0.3/hostname_dbname/sysman/log
> emctl stop dbconsole
Oracle Enterprise Manager 11g Database Control Release 11.2.0.3.0
Copyright (c) 1996, 2011 Oracle Corporation.  All rights reserved.
https://hostname:1158/em/console/aboutApplication
Stopping Oracle Enterprise Manager 11g Database Control ...
 ...  Stopped.

2. Now you can either shut down the listener before shutting down the database, or you can simply shut down the database via srvctl; srvctl will shut down the listener, if it is up, before stopping the database. In this example, we choose to shut down the listener separately.

 > srvctl status listener   --> Check the listener status
Listener LISTENER_NAME is enabled
Listener LISTENER_NAME is running on node(s): nodename
> srvctl stop listener   --> shut down the listener
> srvctl status listener
Listener LISTENER_NAME is enabled
Listener LISTENER_NAME is not running
>

3. Now we stop the instance/database.
> srvctl stop database -d database_name  
--> we need to specify the option -d with the database name to shutdown
> srvctl status database -d database_name
Database is not running.
So, we are sure the database has been shut down successfully.

4. Bring back up the instance/database in archivelog mode.
    (note: in our example the database was in 'noarchivelog' mode before we shut it down)
> srvctl start database -d database_name -o mount 
--> We start up the database in 'mount' mode. Notice that the '-o' option is where you specify which mode you want to bring your instance to.
5. Open another session and log in to the instance through sqlplus. Then issue the 'alter database archivelog' command to turn archivelog mode on.
> sqlplus
SQL*Plus: Release 11.2.0.3.0 Production on Thu Dec 22 21:26:25 2011
Copyright (c) 1982, 2011, Oracle.  All rights reserved.
Enter user-name: / as sysdba
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning and Automatic Storage Management options
SQL> alter database archivelog;
Database altered.
SQL> alter database open;
Database altered.
SQL> exit

(5b. Start the database through srvctl, if there is no need to alter the database to archivelog mode)
> srvctl start database -d database_name -o open
--> The '-o' option specifies the start option. In this case, we want the database to be open, which is the default; other options include 'mount' and 'nomount'.
One thing to notice: once you use srvctl to start the database in 'nomount' or 'mount' mode, you cannot alter that database directly to 'open' mode through srvctl. You need to shut the database down first and then use 'srvctl start database -d database_name -o open' to bring it back up.

6. Check the database status and make sure the listener is running.
> srvctl status database -d database_name
Database is running.
> srvctl status listener
Listener LISTENER_NAME is enabled
Listener LISTENER_NAME is running on node(s): node_name
7. Turn on the dbconsole
> emctl start dbconsole
Oracle Enterprise Manager 11g Database Control Release 11.2.0.3.0
Copyright (c) 1996, 2011 Oracle Corporation.  All rights reserved.
https://hostname:1158/em/console/aboutApplication
Starting Oracle Enterprise Manager 11g Database Control .... started.
------------------------------------------------------------------
Logs are generated in directory /opt/oracle/product/11.2.0.3/hostname_dbname/sysman/log
srvctl also has the -h option to help you find the options available to use, which is a pretty handy way to get the most out of srvctl.

ex:
> srvctl status listener -h
Displays the current state of the listener.
Usage: srvctl status listener [-l <lsnr_name>] [-v]
    -l <lsnr_name>           Listener name
    -v                       Verbose output
    -h                       Print usage

Friday, December 16, 2011

Sar command in Linux, retrieving cpu, memory information

'sar' in Linux is a command that returns CPU usage and memory usage information.
You can pass an interval and a count to keep it running, returning the stats at the specified times.
Ex: > sar 5 150
This returns a report every 5 seconds, 150 times.
Some example output is below:
21:00:56    CPU    %user   %nice   %system   %iowait   %steal    %idle
21:01:11    all     2.21    0.00      0.85      6.13     0.00    90.81
21:01:26    all     3.71    0.00      1.35      3.03     0.00    91.91
21:01:41    all     0.93    0.00      0.68      4.33     0.00    94.05
21:01:56    all     0.88    0.00      0.77      4.41     0.00    93.94
21:02:11    all     0.83    0.00      0.63      4.15     0.00    94.39
Average:    all    15.29    0.00      4.77     32.46     0.00    47.48
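As a quick sanity check on the sampling window, interval × count gives the total duration covered; a minimal shell sketch using the numbers from the example above:

```shell
# sar <interval> <count>: total window covered = interval * count seconds
interval=5
count=150
total=$((interval * count))
echo "window: ${total}s (~$((total / 60)) minutes)"   # 750s, ~12 minutes
```

(For memory statistics specifically, sysstat's sar also accepts -r, and -u for CPU.)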
Here is what you can do to extract a specific time interval from the sar data files:
sar puts its data in daily logs in /var/log/sa. So, for instance, if you want to extract from the /var/log/sa/sa16 file, use the -f option, then -s for the start and -e for the end time in hh:mm:ss format:

> sar -f sa16 -s 19:00:01  -e 20:27:01

It will display to standard out, and the last line is always the Average. If you need to span two files, you can either append one to the other to make a new file (watch for any extraneous lines for averages), or get the data from each file and average them together. Hope this helps.
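The "watch for extraneous lines for averages" step can be sketched in shell. This is a hedged illustration using synthetic stand-in files rather than real /var/log/sa data, and merge_sar is a hypothetical helper name:

```shell
# Merge sar reports from two days, dropping each file's trailing "Average:" line
# so the per-file averages don't appear mid-stream in the combined output.
merge_sar() {
    cat "$@" | grep -v '^Average:'
}

# Synthetic stand-ins for the output of `sar -f sa16` and `sar -f sa17`:
printf '23:50:01 all 2.21 0.00 0.85\nAverage: all 15.29 0.00 4.77\n' > day1.txt
printf '00:10:01 all 0.93 0.00 0.68\nAverage: all 4.77 0.00 1.02\n'  > day2.txt

merge_sar day1.txt day2.txt   # prints the two data lines, averages removed
```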

Thursday, December 1, 2011

Oracle Fundamental I - Chapter 9: Storage Structure and Relationships

In this chapter, we discuss the logical structure of the database and the segment types and their uses. Storage management is an important part of a DBA's job: managing the database and administering space usage efficiently.


The logical storage units of an Oracle database are as follows:
Tablespaces --> Segments --> Extents --> Blocks

Segments:
Table segment: A table is the most common means of storing data within a database. A table segment stores the data for a table that is neither clustered nor partitioned. Data within a table segment is stored in no particular order, and the DBA has very little control over the location of rows within the blocks in a table. All the data in a table segment must be stored in one tablespace.

Table Partition segment: Scalability and availability are major concerns when there is a table in a database with high concurrent usage. In such cases, data within a table may be stored in several partitions, each of which resides in a different tablespace. The Oracle server currently supports partitioning by a range of key values, by a hashing algorithm, and by a list of values. If a table is partitioned, each partition is a segment, and the storage parameters can be specified to control them independently. Using this type of segment requires the partitioning option within the Oracle Enterprise Edition.

Cluster segment: A cluster, like a table, is a type of data segment. Rows in a cluster are stored based on key column values. A cluster may contain one or more tables. Tables in a cluster belong to the same segment and share the same storage characteristics. The rows in a clustered table can be accessed with an index or hashing algorithm.

Index segment: All the entries for a particular index are stored within one index segment. If a table has three indexes, three index segments are used. The purpose of this segment is to look up the location of rows in a table based on a specified key.

Index-Organized Table segment: In an index-organized table, data is stored within the index based on the key value. An IOT does not need a table lookup, because all the data can be retrieved directly from the index tree.

Index-Partition segment: An index can be partitioned and spread across several tablespaces. In this case, each partition in the index corresponds to a segment and cannot span multiple tablespaces. The primary use of a partitioned index is to minimize contention by spreading index input/output (I/O). Use of this type of segment requires the partitioning option within the Oracle9i Enterprise Edition.

Undo segment: An undo segment is used by a transaction that is making changes to a database. Before changing the data or index blocks, the old value is stored in the undo segment. This allows a user to undo changes made.

Temporary segment:
When a user executes commands such as CREATE INDEX, SELECT DISTINCT, and SELECT GROUP BY, the Oracle server tries to perform sorts in memory. When a sort needs more space than the space available in memory, intermediate results are written to disk. Temporary segments are used to store these intermediate results.

LOB segment:
One or more columns in a table can be used to store large objects (LOBs) such as text documents, images, or videos. If the column is large, the Oracle server stores these values in separate segments known as LOB segments. The table contains only a locator or a pointer to the location of the corresponding LOB data.

Nested Table segment:
A column in a table may be made up of a user-defined table as in the case of items within an order. In such cases, the inner table, which is known as a nested table, is stored as a separate segment.

Bootstrap segment:
A bootstrap segment, also known as a cache segment, is created by the sql.bsq script when a database is created. This segment helps to initialize the data dictionary cache when the database is opened by an instance. The bootstrap segment cannot be queried or updated and does not require any maintenance by the database administrator.

A storage clause can be specified at the segment level to control how extents are allocated to a segment.
  • Any storage parameter specified at the segment level overrides the corresponding option set at the tablespace level, except for the MINIMUM EXTENT or UNIFORM SIZE tablespace parameter.
  • When storage parameters are not set explicitly at the segment level, they default to those at the tablespace level.
  • When storage parameters are not set explicitly at the tablespace level, the Oracle server system defaults are used.
  • If storage parameters are altered, the new options apply only to the extents not yet allocated.
Extent Allocation and Deallocation
  • An extent is a chunk of space used by a segment within a tablespace
  • An extent is allocated when the segment is:
    • Created
    • Extended
    • Altered
  • An extent is deallocated when the segment is:
    • Dropped
    • Altered
    • Truncated
Used and Free Extents
When a tablespace is created, the data files in the tablespace contain a header, which occupies the first block or blocks of the file.
As segments are created, they are allocated space from the free extents in a tablespace. Continuous space used by a segment is referred to as a used extent. When segments release space, the extents that are released are added to the pool of free extents available in the tablespace.


Database Block
  • Minimum unit of I/O
  • Consists of one or more operating system blocks. 
  • Set at tablespace creation
  • DB_BLOCK_SIZE is the default block size 
Since Oracle 9i, Oracle has supported multiple block sizes.
  • A database can be created with a standard block size and up to four nonstandard block sizes.
  • Block sizes can have any power-of-two value between 2KB and 32KB.
  • Currently supported block sizes are 2, 4, 8, 16, and 32KB.
  • The block size of the SYSTEM tablespace is termed the standard block size.
Standard Block Size
  • Set at database creation using the DB_BLOCK_SIZE parameter; cannot be changed without re-creating the database.
  • Used for the SYSTEM and temporary tablespaces
  • DB_CACHE_SIZE specifies the size of the DEFAULT buffer cache for the standard block size:
    • Minimum size = one granule (4MB or 16 MB)
    • Default value = 48MB
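To make the granule rule concrete, here is a small hedged sketch (granule_mb is a hypothetical helper; the 128MB cutoff reflects the commonly documented 9i rule that an estimated SGA under 128MB uses 4MB granules, and 16MB granules otherwise):

```shell
# Granule size (MB) chosen from the estimated SGA size (MB):
# SGA < 128MB -> 4MB granules; otherwise -> 16MB granules.
granule_mb() {
    if [ "$1" -lt 128 ]; then echo 4; else echo 16; fi
}

granule_mb 100   # -> 4
granule_mb 512   # -> 16
```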
Nonstandard Block Size
  • Configure additional caches with the following dynamic parameters:
    • DB_2K_CACHE_SIZE for 2KB blocks
    • DB_4K_CACHE_SIZE for 4KB blocks
    • DB_8K_CACHE_SIZE for 8KB blocks
    • DB_16K_CACHE_SIZE for 16KB blocks
    • DB_32K_CACHE_SIZE for 32KB blocks
  • DB_nK_CACHE_SIZE is not allowed if nK is the standard block size
  • Minimum size for each cache is one granule
Creating a non-standard block size tablespace:
Ex:
CREATE TABLESPACE data01
DATAFILE '/oradata/u01/data01.dbf'
SIZE 10M BLOCKSIZE 4K;


Some rules for multiple block sizes:
  • All partitions of a partitioned object must reside in tablespaces of the same block size.
  • All temporary tablespaces, including the permanent ones that are being used as default temporary tablespaces, must be of standard block size.
  • Index-organized table overflow and out-of-line LOB segments can be stored in a tablespace with a block size different from the base table.

  • Block header: The header contains the data block address, table directory, row directory, and transaction slots that are used when transactions make changes to rows in the block. Block headers grow from the top down.
  • Data space: Row data is inserted into the block from the bottom up.
  • Free space: The free space in a block is in the middle of the block, so both the header and the row data space can grow when necessary. The free space in a block is contiguous initially. However, deletions and updates may fragment the free space in the block. The free space in the block is coalesced by the Oracle server when necessary.
Block space utilization parameters:
The space utilization parameters are INITRANS, MAXTRANS, PCTFREE, and PCTUSED.

INITRANS and MAXTRANS: Specify the initial and the maximum number of transaction slots that are created in an index or a data block.

PCTFREE: For a data segment, this specifies the percentage of space in each data block that is reserved for growth resulting from updates to rows in the block. The default for PCTFREE is 10%.

PCTUSED: For a data segment, this parameter represents the minimum percentage of used space that the Oracle server tries to maintain for each data block of the table. The default is 40%.


Data Block management:
There are two ways to manage data blocks:
Automatic segment-space management
  • It is a method of managing free space inside database segments
  • Tracking in-segment free and used space is done using bitmaps as opposed to using free lists
  • This method provides:
    • Ease of management
    • Better space utilization
    • Better performance for concurrent INSERT operations
  • Bitmap segments contain a bitmap that describes the status of each block in the segment with respect to its available space.
  • The map is contained in a separate set of blocks referred to as bitmapped blocks (BMBs).
  • When inserting a new row, the server searches the map for a block with sufficient space.
  • As the amount of space available in a block changes, its new state is reflected in the bitmap.
  • Automatic segment-space management can be enabled at the tablespace level only, for locally managed tablespaces
EX:
CREATE TABLESPACE data01
DATAFILE '/oradata/u01/data01.dbf' SIZE 10M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 64K
SEGMENT SPACE MANAGEMENT AUTO;



Manual Data Block management:
  • Allows you to configure data blocks manually using parameters such as:
    • PCTFREE
    • PCTUSED
    • FREELISTS
The following example makes it easier to understand how PCTFREE and PCTUSED work:
PCTFREE=20%, PCTUSED=40%
1. Rows are inserted into the block until the free space in the block is equal to or less than 20%. The block is no longer available for inserts when rows occupy 80% (100 - PCTFREE) or more of the available data space in the block.
2. The remaining 20% can be used when the size of a row increases. For example, a column that was originally NULL is updated and assigned a value. Thus block utilization may exceed 80% as a result of updates.
3. If rows are deleted from the block, or if rows decrease in size as a result of updates, block utilization may fall below 80%. However, the block is not used for inserts until utilization falls below PCTUSED, which in this example is 40%.
4. When utilization falls below PCTUSED, the block becomes available for inserts. As rows are inserted into the block, its utilization increases and the cycle repeats, starting with step 1.
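The two thresholds in this cycle can be computed directly; a minimal sketch assuming an 8KB block and ignoring block header overhead for simplicity:

```shell
block=8192   # assumed bytes of data space per block
pctfree=20
pctused=40

insert_ceiling=$((block * (100 - pctfree) / 100))   # inserts stop once usage reaches this
reuse_floor=$((block * pctused / 100))              # inserts resume once usage falls below this

echo "inserts stop at:   ${insert_ceiling} bytes"   # 6553
echo "inserts resume at: ${reuse_floor} bytes"      # 3276
```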

Some views are very useful for obtaining storage information:
  • DBA_EXTENTS
  • DBA_SEGMENTS
  • DBA_TABLESPACES
  • DBA_DATA_FILES
  • DBA_FREE_SPACE
Ex: DBA_SEGMENTS
SQL> select segment_name, tablespace_name, extents, blocks from dba_segments;
SEGMENT_NAME    TABLESPACE_NAME    EXTENTS    BLOCKS
--------------- ------------------ ---------- ----------
REGIONS         SAMPLE                     79       8192
LOCATIONS       SAMPLE                     79       8192
JOBS            SAMPLE                     79       8192

Monday, November 28, 2011

Oracle Fundamental I - Chapter 8: Maintaining Tablespaces and Data Files

In this chapter, we are going to discuss the physical components of the database: the tablespaces and data files.
Oracle stores data logically in tablespaces and physically in data files.
  • Tablespaces:
    • Can belong to only one database at a time
    • Consist of one or more data files
    • Are further divided into logical units of storage
  • Data files:
    • Can belong to only one tablespace and one database
    • Are a repository for schema object data
Tablespaces can basically be categorized into two types:
  • System Tablespace
    • Created with the database
    • Contains the data dictionary
    • Contains the system undo segment
  • Non-system tablespace
    • Separate segments
    • Eases space administration
    • Controls amount of space allocated to a user
To create a tablespace, you issue the command with the syntax below:
CREATE TABLESPACE tablespace
    [ DATAFILE clause]
    [ MINIMUM EXTENT integer[K|M]]
    [ BLOCKSIZE integer [K]]
    [ LOGGING|NOLOGGING]
    [ DEFAULT storage_clause]
    [ ONLINE|OFFLINE]
    [ PERMANENT|TEMPORARY]
    [extent_management_clause]
    [segment_management_clause]
Ex:
CREATE TABLESPACE data01 DATAFILE '/u01/oradata/data01.dbf' SIZE 5M;

Space management in Tablespaces:
Space in data files is managed either locally or by the data dictionary.
  • Locally managed tablespace
    • Free extents are managed in the tablespace
    • Bitmap is used to record free extents
    • Each bit corresponds to a block or group of blocks
    • Bit value indicates free or used
    • Since Oracle 9i, locally managed is the default
  • Dictionary-managed tablespace
    • Free extents are managed by the data dictionary
    • Appropriate tables are updated when extents are allocated or deallocated
Locally Managed Tablespaces
  • Reduced contention on data dictionary tables
  • No undo generated when space allocation or deallocation occurs
  • No coalescing required
The tablespace is locally managed by default since 9i, and it is set up in the extent management clause:
    extent_management_clause:
      [ EXTENT MANAGEMENT { DICTIONARY | LOCAL
        [ AUTOALLOCATE | UNIFORM [ SIZE integer [K|M] ] ] } ]
Ex:
CREATE TABLESPACE data01
DATAFILE '/u01/oradata/data01.dbf' size 500M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 128K;
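One convenience of UNIFORM extents is predictable capacity; a quick hedged check for the example above (ignoring the small amount of space consumed by the file header and extent bitmap):

```shell
# 500MB data file with 128KB uniform extents:
file_kb=$((500 * 1024))
extent_kb=128
echo "extents available: $((file_kb / extent_kb))"   # 4000
```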

Dictionary-Managed Tablespaces
  • Extents are managed in the data dictionary
  • Each segment stored in the tablespace can have a different storage clause
  • Coalescing is required
Ex:
CREATE TABLESPACE data01
DATAFILE '/u01/oradata/data01.dbf'
size 500M EXTENT MANAGEMENT DICTIONARY
DEFAULT STORAGE
(initial 1m next 1m pctincrease 0);

To migrate a dictionary-managed SYSTEM tablespace to locally managed, you need to use an Oracle-supplied package:
Ex:
DBMS_SPACE_ADMIN.TABLESPACE_MIGRATE_TO_LOCAL('SYSTEM');

  
Undo tablespace:
  • Used to store undo segments (known as rollback segments in releases through 8i).
  • Cannot contain any other objects
  • Extents are locally managed
  • Can only use the DATAFILE and EXTENT MANAGEMENT clauses
Ex:
CREATE UNDO TABLESPACE undo01
DATAFILE '/u01/oradata/undo01.dbf' size 500M;

Temporary Tablespaces
  • Used for sort operations
  • Can be shared by multiple users
  • Cannot contain any permanent objects
  • Locally managed extents recommended
CREATE TEMPORARY TABLESPACE temp01
TEMPFILE '/u01/oradata/temp01.dbf' size 200M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 4M;

Sort (temporary) segments are used when a segment is shared by multiple sort operations. Temporary tablespaces provide performance improvements when you have multiple sorts that are too large to fit into memory.
Note: a non-standard block size cannot be specified when creating temporary tablespaces.

Default Temporary Tablespaces
  • Specifies a database-wide default temporary tablespace
  • Eliminates using SYSTEM tablespace for storing temporary data
  • Can be created by using:
    • CREATE DATABASE
    • ALTER DATABASE
Ex:
ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp01;

You can also specify and create the default temporary tablespace at database creation.
Ex:
CREATE DATABASE db01
LOGFILE
GROUP 1 ('/$HOME/ORADATA/u01/redo01.rdo') SIZE 100m,
GROUP 2 ('/$HOME/ORADATA/u02/redo02.rdo') SIZE 100m
MAXLOGFILES 5
MAXLOGMEMBERS 5
MAXLOGHISTORY 1
MAXDATAFILES 100
MAXINSTANCES 1
DATAFILE '/$HOME/ORADATA/u01/system01.dbf' SIZE 325M
UNDO TABLESPACE undo1
DATAFILE '/$HOME/ORADATA/u02/undo01.dbf' SIZE 200M
DEFAULT TEMPORARY TABLESPACE temp01
TEMPFILE '/$HOME/ORADATA/u03/temp01.dbf' SIZE 4M
CHARACTER SET US7ASCII;

Specify the default temporary tablespace after database creation:
Ex:
ALTER DATABASE
DEFAULT TEMPORARY TABLESPACE default_temp2;

To find the default temporary tablespace, you can query the database properties:
Ex:
SELECT * FROM DATABASE_PROPERTIES;

Note: Default temporary tablespaces cannot be:
  • Dropped until after a new default is made available
  • Taken offline
  • Altered to a permanent tablespace

A regular tablespace can be placed in read-only mode:
ALTER TABLESPACE userdata01 READ ONLY;
This command:
  • Causes a checkpoint
  • Makes data available only for read operations
  • Still allows objects to be dropped from the tablespace
It is possible to drop objects, such as tables and indexes, located in a read-only tablespace, because the DROP command only changes data in the data dictionary, not the data files of the tablespace where those objects are located.

Taking a Tablespace Offline:
Whenever the database is open, a database administrator can take any tablespace offline, except the SYSTEM tablespace or any tablespace with active undo segments or temporary segments, using the syntax below:
     ALTER TABLESPACE tablespace_name
         {ONLINE | OFFLINE [NORMAL | TEMPORARY | IMMEDIATE | FOR RECOVER]}
  • Not available for data access
  • Tablespaces that cannot be taken offline:
    • SYSTEM tablespace
    • Tablespaces with active undo segments
    • Default temporary tablespace
  • To take a tablespace offline:
ALTER TABLESPACE data01 OFFLINE;
  • To bring a tablespace online:
ALTER TABLESPACE data01 ONLINE;


Resizing tablespaces:
A tablespace can be resized by:
  • Changing the size of a data file:
    • Automatically using AUTOEXTEND
    • Manually using ALTER DATABASE
  • Adding a data file using ALTER TABLESPACE
You can also have tablespaces resized automatically by enabling automatic extension of data files:
  • Can be resized automatically with the following commands:
  • CREATE DATABASE
  • CREATE TABLESPACE
  • ALTER TABLESPACE .... ADD DATAFILE
  • Query the DBA_DATA_FILES view to determine whether AUTOEXTEND is enabled.
Ex:

CREATE TABLESPACE data01
DATAFILE '/u01/oradata/data01.dbf' size 200M
AUTOEXTEND ON NEXT 10M MAXSIZE 500M;
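With the settings above, the file grows in fixed 10M steps from 200M up to the 500M cap; a small sketch of the growth schedule (sizes in MB):

```shell
size=200; next=10; maxsize=500
steps=0
while [ "$size" -lt "$maxsize" ]; do
    size=$((size + next))
    steps=$((steps + 1))
done
echo "file can extend ${steps} times, reaching ${size}MB"   # 30 times, 500MB
```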


You may also manually resize a data file:
  • Manually increase or decrease a data file size using ALTER DATABASE.
  • Resizing a data file adds more space without adding more data files.
  • Manually resizing a data file can also reclaim unused space in the database.
                ALTER DATABASE [database]
                    DATAFILE 'filename' [, 'filename']...
                    RESIZE integer [K|M]
Ex:
ALTER DATABASE
DATAFILE '/u01/oradata/data01.dbf'
RESIZE 400m;

Adding Data Files to a Tablespace
  • Increases the space allocated to a tablespace by adding additional data files
  • ADD DATAFILE clause is used to add a data file
Ex:
ALTER TABLESPACE data01
ADD DATAFILE '/u01/oradata/data02.dbf'
SIZE 400M;

Moving Data Files by using rename:
There are two ways to accomplish this:
  • ALTER TABLESPACE
    • Tablespace must be offline
    • Target data files must exist
Ex:
ALTER TABLESPACE data01 RENAME
DATAFILE '/u01/oradata/data01.dbf'
TO '/u02/oradata/data01.dbf';

  • ALTER DATABASE
    • Database must be mounted
    • Target data file must exist
ALTER DATABASE RENAME
FILE '/u01/oradata/data01.dbf'
TO '/u02/oradata/data01.dbf';


Last, you may also drop tablespaces:
  • You cannot drop a tablespace if it:
    • Is the SYSTEM tablespace
    • Has active segments
  • INCLUDING CONTENTS drops the segments.
  • INCLUDING CONTENTS AND DATAFILES deletes the physical data files
  • CASCADE CONSTRAINTS drops all referential integrity constraints

DROP TABLESPACE data01
INCLUDING CONTENTS AND DATAFILES;

Tablespace and data file information can be obtained by querying the following:
  • Tablespace information:
    • DBA_TABLESPACES
    • V$TABLESPACE
  • Data file information:
    • DBA_DATA_FILES
    • V$DATAFILE
  • Temp file information:
    • DBA_TEMP_FILES
    • V$TEMPFILE

Friday, November 18, 2011

DB console does not find the listener error in 10g Cluster

The DBconsole does not work, failing with the error: ORA-12541: TNS:no listener (DBD ERROR: OCIServerAttach). On the home page of the DBconsole, neither instance nor database information is shown. The snapshot is below:

The first thing we check is the listener status on all nodes.
(Since this is a three node RAC cluster, we check the lsnrctl status on all nodes in the cluster)
The output is like below (only one node is shown, since the output is the same on the other nodes apart from the node names):

racstress6:/opt/oracle> lsnrctl status listener_racstr23
LSNRCTL for Linux: Version 10.2.0.4.0 - Production on 18-NOV-2011 14:28:05
Copyright (c) 1991, 2007, Oracle.  All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=racstress6-vip.dm.cdm.com)(PORT=1526)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER_RACSTR23
Version                   TNSLSNR for Linux: Version 10.2.0.4.0 - Production
Start Date                06-JUL-2011 13:08:30
Uptime                    135 days 2 hr. 19 min. 34 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /opt/oracle/product/db/10.2.0.4/network/admin/listener.ora
Listener Log File         /opt/oracle/product/db/10.2.0.4/network/log/listener_racstr23.log
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=146.18.118.206)(PORT=1526)))
Services Summary...
Service "RACSTR2" has 1 instance(s).
  Instance "RACSTR23", status READY, has 26 handler(s) for this service...
Service "RACSTR23" has 1 instance(s).
  Instance "RACSTR23", status UNKNOWN, has 1 handler(s) for this service...
Service "RACSTR2_XPT" has 1 instance(s).
  Instance "RACSTR23", status READY, has 26 handler(s) for this service...
The command completed successfully
racstress6:/opt/oracle>

And we also check the connection strings using the hostname and hostname-vip. Ex:
$ sqlplus dbsnmp@'(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=racstress4-vip)(PORT=1526))(ADDRESS=(PROTOCOL=TCP)(HOST=racstress5-vip)(PORT=1526))(ADDRESS=(PROTOCOL=TCP)(HOST=racstress6-vip)(PORT=1526))(LOAD_BALANCE=yes))(CONNECT_DATA=(SERVICE_NAME=RACSTR2)))'

And again with hostname without -vip.
It turns out the hostname with -vip works correctly, and the one without -vip does not.

The next thing we try is to modify the hostname information, clean out the old logfiles generated by the EM start, and then restart EM.

Stop the dbconsole on all nodes.

cd $O_H/<host_sid>/sysman/config
cp emoms.properties emoms.properties.ORIG
vi emoms.properties
Locate the connect descriptor at the bottom and change the physical hostname to the vip hostname in the connect descriptor.
cd $ORACLE_HOME/<Host_SID>/sysman/emd
rm lastupld.xml
rm agntstmp.txt
rm protocol.ini
rm -fr state/*    # don't remove the directory, only the data inside the state directory
rm -fr upload/*   # don't remove the directory, only the data inside the upload directory
cd $ORACLE_HOME/<Host_SID>/sysman/log
rm *
cd $ORACLE_HOME/bin
./emctl clearstate agent
./emctl start dbconsole
./emctl upload agent
./emctl status agent
./emctl status dbconsole
 
But it still gets the same error; the dbconsole webpage still does not show up. So, we suspect it may be an agent problem on one of the nodes.
And the next action item is:
Step 1:
Modify the hostname in the descriptor string in emoms.properties file.
cd to the $ORACLE_HOME/hostname_sid/sysman/config
vi emoms.properties
 
In the emoms.properties, find the following line and modify the connection string,
and change the hostname from hostname.domain to hostname-vip.domain:
Ex:
oracle.sysman.eml.mntr.emdRepConnectDescriptor=(DESCRIPTION\=(ADDRESS_LIST\=(ADDRESS\=(PROTOCOL\=TCP)(HOST\=hostname-vip.dm.cdm.com)(PORT\=port#)))(CONNECT_DATA\=(SERVICE_NAME\=SID)))
 
Step2:
Modify the emd.properties file as follows:
Change the parameter
AgentListenOnAllNICs=TRUE
to
AgentListenOnAllNICs=FALSE

Step 3:
cd $ORACLE_HOME/<Host_SID>/sysman/emd
rm lastupld.xml
rm agntstmp.txt
rm protocol.ini
rm -fr state/*    # don't remove the directory, only the data inside the state directory
rm -fr upload/*   # don't remove the directory, only the data inside the upload directory
cd $ORACLE_HOME/<Host_SID>/sysman/log
rm *
Step 4:
Kill the agent process.
ps -ef | grep emagent   --> find the PID of the emagent process
$ kill pid
Then try to start the dbconsole again; this time the db console starts successfully.

Summary:
This error can be traced to two causes.
1. The hostname specified is not correct: the cluster environment uses the vip hostname instead of the physical hostname.
2. The agent process has not been cleaned up.

The solution to the first problem is to change the hostname specified in the emoms.properties file to the virtual IP hostname. This is fairly common in cluster environments. Also change the parameter 'AgentListenOnAllNICs' from TRUE to FALSE.
Clean up all the log files to ensure a clean start.

The fix for the second problem is to find the agent process. Because the dbconsole did not start successfully, the emagent process was started but left in a corrupted state, and it did not clean itself up. So you need to manually find the PID of that process and kill it.

After implementing the above two steps, restart the dbconsole and emagent.
In my case, the dbconsole started successfully afterward.

Thursday, November 10, 2011

Oracle Fundamental I - Chapter 7: Maintaining Online Redo Log Files

In this chapter, we talk about online redo log files.
Online redo log files play an important role in recording data transactions and provide the recovery mechanism that ensures data integrity.

Online redo log files have these characteristics:
  • Record all changes made to data
  • Provide a recovery mechanism
  • Can be organized into groups
  • At least two groups required
The online redo log files provide the means to redo transactions in the event of a database failure. When a transaction comes in, it is written into the redo log buffer (please refer to Chapter 1 for the architecture) and then flushed to the online redo log files. When a media failure occurs, the redo log files provide the recovery mechanism to recover the transaction data. This includes not-yet-committed transactions, undo segment information, and schema and object management statements.
A transaction is not logged to the redo log files when the statement includes the NOLOGGING clause or when a direct-load insert is used.

Redo log files are group based, and at least two online redo log groups are required.
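The group and member layout of a database can be checked from the data dictionary. A minimal sketch (run from SQL*Plus as a privileged user; the output naturally varies by database):

```sql
-- List the redo log groups with member counts, sizes, and status.
SELECT group#, members, bytes/1024/1024 AS size_mb, status, archived
FROM   v$log
ORDER  BY group#;
```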

Online Redo Log File Groups:
  • A set of identical copies of online redo log files is called an online redo log file group.
  • The LGWR background process concurrently writes the same information to all online redo log files in a group.
  • The Oracle server needs a minimum of two online redo log file groups for the normal operation of a database.
Online Redo Log File Members
  • Each online redo log file in a group is called a member.
  • Each member in a group has the same log sequence number and is the same size. The log sequence number is assigned each time the Oracle server writes to a log group, to uniquely identify each online redo log file. The current log sequence number is stored in the control file and in the headers of all data files.
The initial set of online redo log file groups and members is created during database creation.
The parameters below limit the number of online redo log files:
  • MAXLOGFILES: this parameter of the CREATE DATABASE command specifies the absolute maximum number of online redo log file groups.
  • The maximum and default values for MAXLOGFILES depend on the operating system.
  • MAXLOGMEMBERS: this parameter of the CREATE DATABASE command determines the maximum number of members per redo log file group.
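As a sketch of how these limits might appear in a CREATE DATABASE statement (the database name, paths, and sizes here are illustrative, not from the article):

```sql
-- Illustrative only: set redo log limits at database creation time.
CREATE DATABASE sampledb
   MAXLOGFILES   5
   MAXLOGMEMBERS 3
   LOGFILE GROUP 1 ('$HOME/ORADATA/u01/log1a.rdo',
                    '$HOME/ORADATA/u02/log1b.rdo') SIZE 1M,
           GROUP 2 ('$HOME/ORADATA/u01/log2a.rdo',
                    '$HOME/ORADATA/u02/log2b.rdo') SIZE 1M;
```

The $HOME notation follows the path convention used elsewhere in this chapter; on a real system you would spell out the full path.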
You need to understand how online redo log files work in order to use them to ensure data availability.
The Oracle server sequentially records all changes made to the database in the Redo Log Buffer. The redo entries are written from the Redo Log Buffer to the current online redo log file group by the LGWR process. LGWR writes under the following situations:
  • When a transaction commits
  • When the Redo Log Buffer becomes one-third full
  • When there is more than a megabyte of changed records in the Redo Log Buffer
  • Before the DBWn writes modified blocks in the Database Buffer Cache to the data files
Log Switches
LGWR writes to the online redo log files sequentially. When the current online redo log file group is filled, LGWR begins writing to the next group. This is called a log switch.

Checkpoints
During a checkpoint:
  • DBWn writes a number of dirty database buffers, which are covered by the log that is being checkpointed, to the data files.
  • The checkpoint background process CKPT updates the control file to reflect that it has completed a checkpoint successfully. If the checkpoint is caused by a log switch, CKPT also updates the headers of the data files.
A checkpoint occurs in the following situations:
  • At every log switch
  • When an instance has been shut down with the normal, transactional, or immediate option
  • When forced by setting the FAST_START_MTTR_TARGET initialization parameter
  • When manually requested by the database administrator
  • When the ALTER TABLESPACE [OFFLINE NORMAL|READ ONLY|BEGIN BACKUP] command causes checkpointing on specific data files
You can force log switches and checkpoints as below:
  • Forcing a log switch:

SQL> ALTER SYSTEM SWITCH LOGFILE;


  • Checkpoints can be forced by setting the FAST_START_MTTR_TARGET parameter, which makes the Oracle instance target an instance recovery time of no more than the specified number of seconds, in this case 600 seconds:

FAST_START_MTTR_TARGET = 600


  • ALTER SYSTEM CHECKPOINT command

ALTER SYSTEM CHECKPOINT;


Adding Online Redo Log File Groups:
In some cases you might need to create additional log file groups. To create a new group of online redo log files, use the following SQL command:
         ALTER DATABASE [database]
            ADD LOGFILE [GROUP integer] filespec
               [, [GROUP integer] filespec]...
Ex:

ALTER DATABASE ADD LOGFILE GROUP 3
('$HOME/ORADATA/u01/log3a.rdo',
 '$HOME/ORADATA/u02/log3b.rdo')
SIZE 1M;

Adding new members to an existing online redo log group:
You can also add new members to existing online redo log file groups using the following command:
       
                 ALTER DATABASE [database]
                      ADD LOGFILE MEMBER
                         ['filename' [REUSE] [, 'filename' [REUSE]]...
                       TO {GROUP integer
                       | ('filename' [, 'filename']...)
                      } ]...
Ex:
 

ALTER DATABASE ADD LOGFILE MEMBER
'$HOME/ORADATA/u04/log1c.rdo' TO GROUP 1,
'$HOME/ORADATA/u04/log2c.rdo' TO GROUP 2,
'$HOME/ORADATA/u04/log3c.rdo' TO GROUP 3;


Dropping Online Redo Log File Groups:
You can also drop online redo log file groups with the following command:
                   ALTER DATABASE [database]
                           DROP LOGFILE {GROUP integer| ('filename' [,'filename']...)}
                             [,{GROUP integer| ('filename'[,'filename']...)}]...
Ex:
 

ALTER DATABASE DROP LOGFILE GROUP 3;
Dropping Online Redo Log File Members:
You may drop an online redo log file member with the following command:
            ALTER DATABASE [database]
                    DROP LOGFILE MEMBER 'filename' [, 'filename']...
Ex:
 

ALTER DATABASE DROP LOGFILE MEMBER '$HOME/ORADATA/u04/log3c.rdo';
Relocating or Renaming Online Redo Log Files:
Sometimes you need to rename an online redo log file. Follow these steps:
1. Shut down the database:
           SQL> SHUTDOWN;
2. Copy the online redo log files to the new location.
3. Start up the database and mount it, but do not open it:
           SQL> CONNECT / AS SYSDBA
           SQL> STARTUP MOUNT;
4. Rename the online redo log members using the ALTER DATABASE RENAME FILE command:
Ex:


ALTER DATABASE RENAME FILE
   '$HOME/ORADATA/u01/log2a.rdo'
     TO '$HOME/ORADATA/u02/log1c.rdo';
5. Open the database for normal operation:
      SQL> ALTER DATABASE OPEN;

Clearing Online Redo Log Files
The ALTER DATABASE CLEAR LOGFILE command can be used to reinitialize an online redo log file.
Ex:
 

ALTER DATABASE CLEAR LOGFILE GROUP 2;
Use the UNARCHIVED keyword to avoid archiving the corrupted online redo log file.
Ex:
 

ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 2;


There are a couple of dynamic views from which you can retrieve information about online redo log files, for example:
  • V$LOG
  • V$LOGFILE
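As a sketch, the two views can be joined to pair each group's status with its member file names (the column selection here is illustrative):

```sql
-- Show each redo log group's sequence and status next to its member files.
SELECT l.group#, l.sequence#, l.status, f.member
FROM   v$log l
JOIN   v$logfile f ON l.group# = f.group#
ORDER  BY l.group#, f.member;
```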

Archived Redo Log Files
  • Filled online redo log files can be archived.
  • There are two advantages to running the database in ARCHIVELOG mode and archiving online redo log files:
    • Recovery: A database backup together with online and archived redo log files can guarantee recovery of all committed transactions.
    • Backup: This can be performed while the database is open.
  • By default, the database is created in NOARCHIVELOG mode.
  • Accomplished manually through SQL statements
  • When successfully archived:
    • An entry is made in the control file
    • It records the archive log name, log sequence number, and high and low system change numbers (SCN).
  • Filled online redo log files cannot be reused until:
    • A checkpoint has taken place
    • The file has been archived by ARCn
  • Can be multiplexed
  • Maintained by the DBA
Note: Archived log files are an important element of backup and recovery, which will be discussed in more detail in Fundamental II.
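Tying this back to the task at the top of this article, a minimal sketch of switching a single-instance database to ARCHIVELOG mode looks like this (run as SYSDBA; in a clustered environment you would stop and start the database with srvctl around these steps):

```sql
-- Sketch: enable ARCHIVELOG mode on a single-instance database.
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
ARCHIVE LOG LIST;   -- confirm the log mode is now Archive Mode
```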


Wednesday, November 9, 2011

Oracle Fundamental I - Chapter 6: Maintaining the Control Files

In this chapter, we are going to talk about how to manage the control files.
The control file is a small binary file necessary for the database to start and operate successfully. Each control file is associated with only one Oracle database. Before a database is opened, the control file is read to determine whether the database is in a valid state to use.
A control file is updated continuously by the Oracle server during database use, so it must be available for writing whenever the database is open. The information in the control file can be modified only by the Oracle server; no database administrator or end user can edit the control file.
If for some reason the control file is not accessible, the database does not function properly. If all copies of a database's control files are lost, the database must be recovered before it can be opened.

Control File contents:
  • Database name and identifier
  • Time stamp of database creation
  • Tablespace names
  • Names and locations of data files and online redo log files
  • Current online redo log file sequence number
  • Checkpoint information
  • Begin and end of undo segments
  • Redo log archive information
  • Backup Information
Multiplexing the Control File
To safeguard against a single point of failure of the control file, it is strongly recommended that the control file be multiplexed, storing each copy on a different physical disk.
Control files can be multiplexed up to eight times by:
  • Creating multiple control files when the database is created, by including the control file names and full paths in the initialization parameter file:
    • CONTROL_FILES=$HOME/ORADATA/data01/ctr101.ctl,$HOME/ORADATA/data02/ctr102.ctl
  • Adding a control file after the database is created
In Oracle Fundamental II Backup and Recovery, we will talk in more detail about how to back up the control file.

You follow different steps to multiplex control files depending on whether the database uses an SPFILE or a PFILE.
Using an SPFILE:
1. Alter the SPFILE:
SQL> ALTER SYSTEM SET control_files =
          '$HOME/ORADATA/data01/ctr101.ctl',
          '$HOME/ORADATA/data02/ctr102.ctl' SCOPE=SPFILE;

2. Shut down the database:

SQL> shutdown immediate;         


3. Create the additional control file at the OS level using an OS copy command (the target must match the path set in the SPFILE):

$ cp $HOME/ORADATA/data01/ctr101.ctl $HOME/ORADATA/data02/ctr102.ctl

4. Start the database:

SQL> startup

Using a PFILE:
1. Shut down the database:

SQL> shutdown immediate;


2. Create the additional control file at the OS level using an OS copy command:

$ cp $HOME/ORADATA/data01/ctr101.ctl $HOME/ORADATA/data01/ctr102.ctl         


3. Add control file names to the PFILE:

CONTROL_FILES = $HOME/ORADATA/data01/ctr101.ctl,$HOME/ORADATA/data01/ctr102.ctl

4. Start the database:

SQL> startup

Note: Oracle Recovery Manager provides a feature to replicate the control file, which can accomplish the same task of multiplexing the control files. We will discuss this feature in Fundamental II.
To obtain control file information, there are several dynamic views that you can query. The views below are examples:
  • V$CONTROLFILE: lists the name and status of all control files associated with the instance.
  • V$PARAMETER: lists the status and location of all parameters, among which you will find the control file parameter
  • V$CONTROLFILE_RECORD_SECTION: provides information about the control file record sections
  • SHOW PARAMETER CONTROL_FILES: lists the name, status, and location of the control files.
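As a sketch, the views above can be queried like this (the record-section type picked here is just one example):

```sql
-- Names and status of all control files known to the instance.
SELECT name, status FROM v$controlfile;

-- The CONTROL_FILES parameter value.
SELECT value FROM v$parameter WHERE name = 'control_files';

-- One record section as an example: datafile entries in the control file.
SELECT type, records_total, records_used
FROM   v$controlfile_record_section
WHERE  type = 'DATAFILE';
```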