Views expressed here are solely my own. Please make sure that you test the code/queries that appear on my blog before applying them to a production environment.

Wednesday, October 3, 2007

Oracle database keeps the old dropped tablespace info in the data dictionary for a partitioned table

Hi,

We have just upgraded our Oracle production database from 9i (9.2.0.4) to 10g (10.2.0.3);
I will prepare and publish a blog post about that upgrade in the following days.
While struggling with the CBO behaviour changes in 10g, I realised something
I was already aware of, but I was quite surprised to see that it is still not corrected in 10g.

The case concerns a partitioned table that was originally created, together with its partitions, in
"tablespace_1". Some time later, as part of a reorganization, its partitions were moved online to
"tablespace_2" and the old "tablespace_1" was dropped from the database.

I know from earlier experience that in a case like this, Oracle 9i keeps the old dropped
tablespace "tablespace_1" in the data dictionary as part of the partitioned table's description.

Let's give an example...

conn dbuser01@prod10g

select username, default_tablespace from dba_users where username = 'DBUSER01';
/*
username|default_tablespace
DBUSER01|USER_DATA
*/

CREATE TABLESPACE USER_DATA_TEST DATAFILE 
'/data2/oracle3/oradata/oradb3/user_data_test.dbf' SIZE 10M AUTOEXTEND ON NEXT 10M MAXSIZE 100M
LOGGING
ONLINE
PERMANENT
EXTENT MANAGEMENT LOCAL AUTOALLOCATE
BLOCKSIZE 8K
SEGMENT SPACE MANAGEMENT AUTO
FLASHBACK ON;

ALTER USER DBUSER01 DEFAULT TABLESPACE "USER_DATA_TEST";

select username, default_tablespace from dba_users where username = 'DBUSER01';
/*
username|default_tablespace
DBUSER01|USER_DATA_TEST
*/

--DROP TABLE DBUSER01.EGTEST1;

CREATE TABLE DBUSER01.EGTEST1
(
COL1           CHAR(21 BYTE),
COL2           CHAR(1 BYTE),
COL3           DATE
)
PARTITION BY RANGE (COL3) 
(  
PARTITION P1 VALUES LESS THAN (TO_DATE(' 2003-12-31 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
TABLESPACE TB_DATA
,  
PARTITION P2 VALUES LESS THAN (MAXVALUE)
TABLESPACE TB_DATA
);

ALTER USER DBUSER01 DEFAULT TABLESPACE "USER_DATA";

select username, default_tablespace from dba_users where username = 'DBUSER01';
/*
username|default_tablespace
DBUSER01|USER_DATA
*/

drop tablespace USER_DATA_TEST INCLUDING CONTENTS AND DATAFILES; 

Try to export this table:
user01@test3:/data1/oracle3>exp dbuser01 file=egtest1 log=egtest1 tables=egtest1 rows=n

Export: Release 10.2.0.3.0 - Production on Thu Sep 27 11:21:45 2007

Copyright (c) 1982, 2005, Oracle.  All rights reserved.

Password:

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
Export done in WE8ISO8859P9 character set and AL16UTF16 NCHAR character set
Note: table data (rows) will not be exported

About to export specified tables via Conventional Path ...
. . exporting table                 EGTEST1
Export terminated successfully without warnings.

DROP TABLE DBUSER01.EGTEST1;

Try to import this table:
user01@test3:/data1/oracle3>imp dbuser01 file=egtest1 log=egtest1_imp fromuser=dbuser01 touser=dbuser01

Import: Release 10.2.0.3.0 - Production on Thu Sep 27 11:24:17 2007

Copyright (c) 1982, 2005, Oracle.  All rights reserved.

Password:

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options

Export file created by EXPORT:V10.02.01 via conventional path
import done in WE8ISO8859P9 character set and AL16UTF16 NCHAR character set
. importing DBUSER01's objects into DBUSER01
IMP-00017: following statement failed with ORACLE error 959:
"CREATE TABLE "EGTEST1" ("COL1" CHAR(21), "COL2" CHAR(1), "COL3" DATE)"  
PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 TABLESPACE "USER_DATA_TEST" LOGGING P"
"ARTITION BY RANGE ("COL3" )  (PARTITION "P1" VALUES LESS THAN (TO_DATE('"
" 2003-12-31 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))"
"  PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 STORAGE(INITIAL 65536 FREEL"
"ISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT) TABLESPACE "TB_DATA" LOGGING "
"NOCOMPRESS, PARTITION "P2" VALUES LESS THAN (MAXVALUE)  PCTFREE 10 PCTUSED "
"40 INITRANS 1 MAXTRANS 255 STORAGE(INITIAL 65536 FREELISTS 1 FREELIST GROUP"
"S 1 BUFFER_POOL DEFAULT) TABLESPACE "TB_DATA" LOGGING NOCOMPRESS )"
IMP-00003: ORACLE error 959 encountered
ORA-00959: tablespace 'USER_DATA_TEST' does not exist
Import terminated successfully with warnings.

Import cannot find the already-dropped tablespace, so it cannot create the table.
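
You can actually see where this stale reference lives: the dropped tablespace is still recorded as the partitioned table's default tablespace attribute in the data dictionary. A minimal sketch (run it before dropping the table; the user and table names are the ones from the example above):

select owner, table_name, def_tablespace_name
from dba_part_tables
where owner = 'DBUSER01'
and table_name = 'EGTEST1';
--DEF_TABLESPACE_NAME still shows USER_DATA_TEST here, even though the tablespace has been dropped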

---------------------------
Testing the normal behaviour with a non-partitioned table:

CREATE TABLESPACE USER_DATA_TEST DATAFILE 
'/data2/oracle3/oradata/oradb3/user_data_test.dbf' SIZE 10M AUTOEXTEND ON NEXT 10M MAXSIZE 100M
LOGGING
ONLINE
PERMANENT
EXTENT MANAGEMENT LOCAL AUTOALLOCATE
BLOCKSIZE 8K
SEGMENT SPACE MANAGEMENT AUTO
FLASHBACK ON;

create table test_tbl (col1 number) tablespace USER_DATA_TEST;
insert into test_tbl values (1);
commit;

select table_name,tablespace_name from user_tables where table_name = 'TEST_TBL';
/*
table_name|tablespace_name
TEST_TBL|USER_DATA_TEST
*/
drop tablespace USER_DATA_TEST INCLUDING CONTENTS AND DATAFILES; 
select table_name,tablespace_name from user_tables where table_name = 'TEST_TBL';
--No rows returned.
---------------------------

In normal database behaviour when we drop the tablespace, the tables, that resides in the dropped tablespace,
dropped also. Why the database behaves differently in case of parititioned tables ?

You can see this oddity when you try to extract the DDL script of such a partitioned table with any database tool,
because Oracle keeps the already-dropped tablespace in the data dictionary.
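
For example, a minimal sketch with DBMS_METADATA (assuming the example table above still exists) shows the stale tablespace in the generated DDL:

set long 100000
select dbms_metadata.get_ddl('TABLE', 'EGTEST1', 'DBUSER01') from dual;
--the generated CREATE TABLE still contains TABLESPACE "USER_DATA_TEST" as the table-level default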

Oracle's Advanced Resolution Team in Oracle Worldwide Support did not consider the issue a bug;
they accepted it only as an enhancement request, which was implemented in expdp/impdp (Oracle Data Pump Export/Import)
in 10g. When you import (impdp) a partitioned table like the one above, exported with expdp,
you can use the "remap_tablespace=user_data_test:an_existing_tblspace" parameter to map the old dropped
tablespace name to a tablespace that exists in the database.
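
A minimal sketch of that Data Pump workaround (the directory object DP_DIR and the target tablespace USER_DATA are assumptions, adjust them for your environment):

expdp dbuser01 directory=DP_DIR dumpfile=egtest1.dmp logfile=egtest1_exp.log tables=egtest1 content=metadata_only
impdp dbuser01 directory=DP_DIR dumpfile=egtest1.dmp logfile=egtest1_imp.log remap_tablespace=user_data_test:user_data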

In our case, since the related partitioned tables are very big and the database runs in archivelog mode,
we do not want to export them with expdp and import them back with impdp just to use the
"remap_tablespace" parameter. Our case aside, the strange thing is that Oracle still has this
oddity even in a 10g database. If you do not export and re-import the tables, you have to live with it:
every tool (including expdp) extracts the DDL of such a partitioned table with the wrong, old tablespace,
and you have to manually edit the generated script (impdp being the only exception) to replace this
non-existing tablespace in the DDL script.

One more point: since impdp can import only export dump files taken with expdp, if you
have just upgraded your database from 9i to 10g, I think you should convert your export scripts to use expdp,
if you have any. Otherwise, if from time to time you import some tables into one of your test databases,
there will be some extra manual work waiting for you: for partitioned tables carrying old dropped
tablespace info, you will have to extract the DDL from the export file, change the tablespace clause to point to an
existing tablespace, pre-create the tables and then import them.
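
A minimal sketch of that manual route with the classic exp/imp tools (the file names are placeholders): list the DDL from the dump with show=y, fix the TABLESPACE clause and pre-create the table manually, then import with ignore=y so the failing CREATE TABLE is skipped.

imp dbuser01 file=egtest1 show=y log=egtest1_ddl.log
--edit the CREATE TABLE statement from egtest1_ddl.log, point it to an existing tablespace and run it
imp dbuser01 file=egtest1 log=egtest1_imp2.log fromuser=dbuser01 touser=dbuser01 ignore=y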


Regards
Ural Ural

Thursday, July 12, 2007

Oracle CBO chooses the wrong index although the statistics are collected on the related table

Hi,

Our system analysis and support team called us, the Oracle DBA team, about a problem: a report program that used to run
fine was now running very slowly, although they said nothing had changed in the program except the data it selects.
So we examined the situation; these are the things we did.

We traced the session and found the guilty SQL. It looks like a simple one. The table TABLE01 has about 65 million rows.

SELECT SUM (DECODE (b1, 'A', tut1, 0)), SUM (DECODE (b1, 'B', tut1, 0))
FROM table01 t1
WHERE tar1 = to_date('29.06.2007','dd.mm.yyyy')
AND skod1 = NVL(NULL, skod1)
AND htno1 = '22001901'
AND 9000 + dkod1 = 9001
AND NVL (dtkod1, '*') = NVL (NULL, '*')
AND eno1 < 9000;
We looked at the explain plan of that SQL statement,
Plan
SELECT STATEMENT  CHOOSECost: 350  Bytes: 36  Cardinality: 1     
3 SORT AGGREGATE  Bytes: 36  Cardinality: 1    
2 TABLE ACCESS BY INDEX ROWID DBUSER01.TABLE01 Cost: 350  Bytes: 36  Cardinality: 1   
1 INDEX RANGE SCAN NON-UNIQUE DBUSER01.TABLE01_2X Cost: 347  Cardinality: 9  
When we ran the query as it was, it did not finish even after an hour, although it should not return any rows at all, so we killed the session; this is far too long for a report and there was clearly a problem. Then we looked at the indexes of this table, found another candidate index, and tried the SQL with a hint for that second index.
SELECT /*+ index(t1 TABLE01_4X)*/
SUM (DECODE (b1, 'A', tut1, 0)), SUM (DECODE (b1, 'B', tut1, 0))
FROM table01 t1
WHERE tar1 = to_date('29.06.2007','dd.mm.yyyy')
AND skod1 = NVL(NULL, skod1)
AND htno1 = '22001901'
AND 9000 + dkod1 = 9001
AND NVL (dtkod1, '*') = NVL (NULL, '*')
AND eno1 < 9000;
OOOPPPS!!! It finished in just 15 milliseconds. So we looked at the explain plan of this SQL statement.
Plan
SELECT STATEMENT  CHOOSECost: 542  Bytes: 36  Cardinality: 1     
3 SORT AGGREGATE  Bytes: 36  Cardinality: 1    
2 TABLE ACCESS BY INDEX ROWID DBUSER01.TABLE01 Cost: 542  Bytes: 36  Cardinality: 1   
1 INDEX RANGE SCAN NON-UNIQUE DBUSER01.TABLE01_4X Cost: 501  Cardinality: 1,452  
Although the cost of the second (hinted) query is higher than that of the first one, the second SQL runs much faster. How can this be? Is the Oracle CBO missing something when calculating the cost? We will see... Let's look at the structures of the related indexes.
CREATE INDEX TABLE01_2X ON TABLE01
(htno1, dkod1, skod1);

CREATE INDEX TABLE01_4X ON TABLE01
(TAR1, skod1);
Although we collect statistics for the tables in our Oracle database, the Oracle CBO (Cost Based Optimizer) miscalculates the cost of this SQL statement and chooses the wrong, less efficient index. As a result the SQL does not finish in a reasonable time, causing loss of business. This is how we collect statistics for the table:
exec dbms_stats.gather_table_stats(ownname => 'DBUSER01', tabname => 'TABLE01', estimate_percent => 33, cascade => TRUE);
After collecting the statistics we will examine how the CBO calculates the cost. According to Jonathan Lewis, the CBO calculates the cost of an index range scan like this:

cost = blevel + ceiling(leaf_blocks * effective index selectivity) + ceiling(clustering_factor * effective table selectivity)

Let's calculate the cost of the following SQL:
SELECT SUM (DECODE (b1, 'A', tut1, 0)), SUM (DECODE (b1, 'B', tut1, 0))
FROM table01 t1
WHERE tar1 = to_date('29.06.2007','dd.mm.yyyy')
AND skod1 = NVL(NULL, skod1)
AND htno1 = '22001901'
AND 9000 + dkod1 = 9001
AND NVL (dtkod1, '*') = NVL (NULL, '*')
AND eno1 < 9000;
---------------------------------------------------------
Plan
SELECT STATEMENT  CHOOSECost: 350  Bytes: 36  Cardinality: 1     
3 SORT AGGREGATE  Bytes: 36  Cardinality: 1    
2 TABLE ACCESS BY INDEX ROWID DBUSER01.TABLE01 Cost: 350  Bytes: 36  Cardinality: 1   
1 INDEX RANGE SCAN NON-UNIQUE DBUSER01.TABLE01_2X Cost: 347  Cardinality: 9   
---------------------------------------------------------

select owner, index_name, blevel, leaf_blocks, clustering_factor from dba_indexes where index_name = 'TABLE01_2X';
---------------------------------------------------------
OWNER|INDEX_NAME|BLEVEL|LEAF_BLOCKS|CLUSTERING_FACTOR
DBUSER01|TABLE01_2X|3|331491|16303842
effective index selectivity = density of "HTNO1" column
effective table selectivity = (density of "DKOD1" column) * (density of "HTNO1" column) * (density of "SKOD1" column)
select column_name, num_distinct, density, num_buckets from dba_tab_columns t1 where t1.table_name = 'TABLE01' 
and t1.column_name in ('DKOD1', 'HTNO1', 'SKOD1')
order by t1.column_name;
---------------------------------------------------------
COLUMN_NAME|NUM_DISTINCT|DENSITY|NUM_BUCKETS
DKOD1|12|0.0833333333333333|1
HTNO1|965|0.00103626943005181|1
SKOD1|73|0.0136986301369863|1
Since the " AND 9000 + dkod1 = 9001" condition equals "function(col) = constant" is has the static selectivity of %1 (0.001) instead of 0.0833333333333333. cost = blevel + ceiling(leaf_blocks * effective index selectivity) + ceiling(clustering_factor * effective table selectivity) cost = 3 + ceiling(331491 * 0.00103626943005181) + ceiling(16303842 * (0.01 * 0.00103626943005181 * 0.0136986301369863)) = 350 ----------------------- Lets calculate the cost of the following SQL
SELECT /*+ index(t1 TABLE01_4X)*/
SUM (DECODE (b1, 'A', tut1, 0)), SUM (DECODE (b1, 'B', tut1, 0))
FROM table01 t1
WHERE tar1 = to_date('29.06.2007','dd.mm.yyyy')
AND skod1 = NVL(NULL, skod1)
AND htno1 = '22001901'
AND 9000 + dkod1 = 9001
AND NVL (dtkod1, '*') = NVL (NULL, '*')
AND eno1 < 9000;
---------------------------------------------------------
Plan
SELECT STATEMENT  CHOOSECost: 542  Bytes: 36  Cardinality: 1     
3 SORT AGGREGATE  Bytes: 36  Cardinality: 1    
2 TABLE ACCESS BY INDEX ROWID DBUSER01.TABLE01 Cost: 542  Bytes: 36  Cardinality: 1   
1 INDEX RANGE SCAN NON-UNIQUE DBUSER01.TABLE01_4X Cost: 501  Cardinality: 1,452  
---------------------------------------------------------

select owner, index_name, blevel, leaf_blocks, clustering_factor from dba_indexes where index_name = 'TABLE01_4X';
---------------------------------------------------------
OWNER|INDEX_NAME|BLEVEL|LEAF_BLOCKS|CLUSTERING_FACTOR
DBUSER01|TABLE01_4X|3|302597|1791039
effective index selectivity = density of "TAR1" column
effective table selectivity = (density of "TAR1" column) * (density of "SKOD1" column)
select column_name, num_distinct, density, num_buckets from dba_tab_columns t1 where t1.table_name = 'TABLE01' 
and t1.column_name in ('TAR1', 'SKOD1')
order by t1.column_name;
---------------------------------------------------------
COLUMN_NAME|NUM_DISTINCT|DENSITY|NUM_BUCKETS
TAR1|608|0.00164473684210526|1
SKOD1|73|0.0136986301369863|1
cost = blevel + ceiling(leaf_blocks * effective index selectivity) + ceiling(clustering_factor * effective table selectivity)
cost = 3 + ceiling(302597 * 0.00164473684210526) + ceiling(1791039 * (0.00164473684210526 * 0.0136986301369863)) = 542

------------------------------

It is obvious that the CBO does not choose the optimal index for this SQL, which is "TABLE01_4X". Why? The answer is that we did not give the CBO enough information about the structure of the data in the table and the indexes. So the solution is to collect more data about the distribution of values in that table. What should we do? We should collect histograms for this table; let's see the result of doing so. Let's collect histograms for this table.
exec dbms_stats.gather_table_stats('DBUSER01','TABLE01', estimate_percent=>33, cascade => TRUE, method_opt => 'FOR ALL INDEXED COLUMNS SIZE 10');
Let's look at the explain plans. Now the first (unhinted) select finishes in just 15 milliseconds, and it is the query hinted to the old index that does not finish, so the problem is solved: our SQL now completes in 15 ms.
SELECT SUM (DECODE (b1, 'A', tut1, 0)), SUM (DECODE (b1, 'B', tut1, 0))
FROM table01 t1
WHERE tar1 = to_date('29.06.2007','dd.mm.yyyy')
AND skod1 = NVL(NULL, skod1)
AND htno1 = '22001901'
AND 9000 + dkod1 = 9001
AND NVL (dtkod1, '*') = NVL (NULL, '*')
AND eno1 < 9000;

Plan
SELECT STATEMENT  CHOOSECost: 621  Bytes: 36  Cardinality: 1     
3 SORT AGGREGATE  Bytes: 36  Cardinality: 1    
2 TABLE ACCESS BY INDEX ROWID DBUSER01.TABLE01 Cost: 621  Bytes: 36  Cardinality: 1   
1 INDEX RANGE SCAN NON-UNIQUE DBUSER01.TABLE01_4X Cost: 519  Cardinality: 3,648  

SELECT /*+ index(t1 TABLE01_2X)*/
SUM (DECODE (b1, 'A', tut1, 0)), SUM (DECODE (b1, 'B', tut1, 0))
FROM table01 t1
WHERE tar1 = to_date('29.06.2007','dd.mm.yyyy')
AND skod1 = NVL(NULL, skod1)
AND htno1 = '22001901'
AND 9000 + dkod1 = 9001
AND NVL (dtkod1, '*') = NVL (NULL, '*')
AND eno1 < 9000;

Plan
SELECT STATEMENT  CHOOSECost: 11,248  Bytes: 36  Cardinality: 1     
3 SORT AGGREGATE  Bytes: 36  Cardinality: 1    
2 TABLE ACCESS BY INDEX ROWID DBUSER01.TABLE01 Cost: 11,248  Bytes: 36  Cardinality: 1   
1 INDEX RANGE SCAN NON-UNIQUE DBUSER01.TABLE01_2X Cost: 11,066  Cardinality: 716  
------------------------------

Let's examine the costs of the following SQLs.
SELECT SUM (DECODE (b1, 'A', tut1, 0)), SUM (DECODE (b1, 'B', tut1, 0))
FROM table01 t1
WHERE tar1 = to_date('29.06.2007','dd.mm.yyyy')
AND skod1 = NVL(NULL, skod1)
AND htno1 = '22001901'
AND 9000 + dkod1 = 9001
AND NVL (dtkod1, '*') = NVL (NULL, '*')
AND eno1 < 9000;
---------------------------------------------------------
Plan
SELECT STATEMENT  CHOOSECost: 621  Bytes: 36  Cardinality: 1     
3 SORT AGGREGATE  Bytes: 36  Cardinality: 1    
2 TABLE ACCESS BY INDEX ROWID DBUSER01.TABLE01 Cost: 621  Bytes: 36  Cardinality: 1   
1 INDEX RANGE SCAN NON-UNIQUE DBUSER01.TABLE01_4X Cost: 519  Cardinality: 3,648  
---------------------------------------------------------

select owner, index_name, blevel, leaf_blocks, clustering_factor from dba_indexes where index_name = 'TABLE01_4X';
---------------------------------------------------------
OWNER|INDEX_NAME|BLEVEL|LEAF_BLOCKS|CLUSTERING_FACTOR
DBUSER01|TABLE01_4X|3|303888|1796569
effective index selectivity = density of "TAR1" column
effective table selectivity = (density of "TAR1" column) * (density of "SKOD1" column)
select column_name, num_distinct, density, num_buckets from dba_tab_columns t1 where t1.table_name = 'TABLE01' 
and t1.column_name in ('TAR1', 'SKOD1')
order by t1.column_name;
---------------------------------------------------------
COLUMN_NAME|NUM_DISTINCT|DENSITY|NUM_BUCKETS
TAR1|608|0.00169779286926995|10
SKOD1|73|0.0333333333333333|8
cost = blevel + ceiling(leaf_blocks * effective index selectivity) + ceiling(clustering_factor * effective table selectivity)
cost = 3 + ceiling(303888 * 0.00169779286926995) + ceiling(1796569 * (0.00169779286926995 * 0.0333333333333333)) = 621

----------------------
SELECT /*+ index(t1 TABLE01_2X)*/
SUM (DECODE (b1, 'A', tut1, 0)), SUM (DECODE (b1, 'B', tut1, 0))
FROM table01 t1
WHERE tar1 = to_date('29.06.2007','dd.mm.yyyy')
AND skod1 = NVL(NULL, skod1)
AND htno1 = '22001901'
AND 9000 + dkod1 = 9001
AND NVL (dtkod1, '*') = NVL (NULL, '*')
AND eno1 < 9000;
---------------------------------------------------------
Plan
SELECT STATEMENT  CHOOSECost: 11,248  Bytes: 36  Cardinality: 1     
3 SORT AGGREGATE  Bytes: 36  Cardinality: 1    
2 TABLE ACCESS BY INDEX ROWID DBUSER01.TABLE01 Cost: 11,248  Bytes: 36  Cardinality: 1   
1 INDEX RANGE SCAN NON-UNIQUE DBUSER01.TABLE01_2X Cost: 11,066  Cardinality: 716  
---------------------------------------------------------

select owner, index_name, blevel, leaf_blocks, clustering_factor from dba_indexes where index_name = 'TABLE01_2X';
---------------------------------------------------------
OWNER|INDEX_NAME|BLEVEL|LEAF_BLOCKS|CLUSTERING_FACTOR
DBUSER01|TABLE01_2X|3|331864|16304224
effective index selectivity = density of "HTNO1" column
effective table selectivity = (density of "DKOD1" column) * (density of "HTNO1" column) * (density of "SKOD1" column)
select column_name, num_distinct, density, num_buckets from dba_tab_columns t1 where t1.table_name = 'TABLE01' 
and t1.column_name in ('DKOD1', 'HTNO1', 'SKOD1')
order by t1.column_name;
---------------------------------------------------------
COLUMN_NAME|NUM_DISTINCT|DENSITY|NUM_BUCKETS
DKOD1|12|0.0384615384615385|2
HTNO1|968|0.0333333333333333|9
SKOD1|73|0.0333333333333333|8
Since the " AND 9000 + dkod1 = 9001" condition equals "function(col) = constant" is has the static selectivity of %1 (0.001) instead of 0.0833333333333333. cost = blevel + ceiling(leaf_blocks * effective index selectivity) + ceiling(clustering_factor * effective table selectivity) cost = 3 + ceiling(331864 * 0.0333333333333333) + ceiling(16304224 * (0.01 * 0.0333333333333333 * 0.0333333333333333)) = 11248 -------------------------- How the histograms caused the CBO to choose the righr index ? Lets look at the histogram data.
select * from dba_tab_histograms t1 where t1.table_name = 'TABLE01' 
and t1.column_name in ('TAR1','HTNO1', 'DKOD1', 'SKOD1')
order by t1.column_name, t1.endpoint_number ;
---------------------------------------------
OWNER|TABLE_NAME|COLUMN_NAME|ENDPOINT_NUMBER|ENDPOINT_VALUE|ENDPOINT_ACTUAL_VALUE
DBUSER01|TABLE01|DKOD1|8|1|
DBUSER01|TABLE01|DKOD1|9|2|
DBUSER01|TABLE01|DKOD1|10|21|
DBUSER01|TABLE01|HTNO1|0|2.50227984373754E35|
DBUSER01|TABLE01|HTNO1|1|2.60632781273513E35|
DBUSER01|TABLE01|HTNO1|2|2.60632860505268E35|
DBUSER01|TABLE01|HTNO1|3|2.60775139366763E35|
DBUSER01|TABLE01|HTNO1|5|2.65906207772846E35|
DBUSER01|TABLE01|HTNO1|6|2.76351807179888E35|
DBUSER01|TABLE01|HTNO1|7|2.76351807179912E35|
DBUSER01|TABLE01|HTNO1|8|2.81462815943683E35|
DBUSER01|TABLE01|HTNO1|9|2.81462815943707E35|
DBUSER01|TABLE01|HTNO1|10|2.97121551998981E35|
DBUSER01|TABLE01|TAR1|0|2453374|
DBUSER01|TABLE01|TAR1|1|2453502|
DBUSER01|TABLE01|TAR1|2|2453599|
DBUSER01|TABLE01|TAR1|3|2453689|
DBUSER01|TABLE01|TAR1|4|2453790|
DBUSER01|TABLE01|TAR1|5|2453889|
DBUSER01|TABLE01|TAR1|6|2453973|
DBUSER01|TABLE01|TAR1|7|2454060|
DBUSER01|TABLE01|TAR1|8|2454141|
DBUSER01|TABLE01|TAR1|9|2454218|
DBUSER01|TABLE01|TAR1|10|2454292|
DBUSER01|TABLE01|SKOD1|0|8345|
DBUSER01|TABLE01|SKOD1|3|9000|
DBUSER01|TABLE01|SKOD1|4|9050|
DBUSER01|TABLE01|SKOD1|5|9159|
DBUSER01|TABLE01|SKOD1|6|9170|
DBUSER01|TABLE01|SKOD1|7|9300|
DBUSER01|TABLE01|SKOD1|8|9412|
DBUSER01|TABLE01|SKOD1|9|9631|
DBUSER01|TABLE01|SKOD1|10|9892|
Let's look at the densities before and after the histograms. Before:
select owner, index_name, blevel, leaf_blocks, clustering_factor from dba_indexes where index_name = 'TABLE01_2X';
---------------------------------------------------------
OWNER|INDEX_NAME|BLEVEL|LEAF_BLOCKS|CLUSTERING_FACTOR
DBUSER01|TABLE01_2X|3|331491|16303842
DBUSER01|TABLE01_4X|3|302597|1791039

select column_name, num_distinct, density, num_buckets from dba_tab_columns t1 where t1.table_name = 'TABLE01' 
and t1.column_name in ('DKOD1', 'HTNO1', 'SKOD1', 'TAR1')
order by t1.column_name;
---------------------------------------------------------
COLUMN_NAME|NUM_DISTINCT|DENSITY|NUM_BUCKETS
DKOD1|12|0.0833333333333333|1
HTNO1|965|0.00103626943005181|1
TAR1|608|0.00164473684210526|1
SKOD1|73|0.0136986301369863|1
After:
select owner, index_name, blevel, leaf_blocks, clustering_factor from dba_indexes where index_name = 'TABLE01_2X';
---------------------------------------------------------
OWNER|INDEX_NAME|BLEVEL|LEAF_BLOCKS|CLUSTERING_FACTOR
DBUSER01|TABLE01_2X|3|331864|16304224
DBUSER01|TABLE01_4X|3|303888|1796569

select column_name, num_distinct, density, num_buckets from dba_tab_columns t1 where t1.table_name = 'TABLE01' 
and t1.column_name in ('DKOD1', 'HTNO1', 'SKOD1', 'TAR1')
order by t1.column_name;
---------------------------------------------------------
COLUMN_NAME|NUM_DISTINCT|DENSITY|NUM_BUCKETS
DKOD1|12|0.0384615384615385|2
HTNO1|968|0.0333333333333333|9
TAR1|608|0.00169779286926995|10
SKOD1|73|0.0333333333333333|8
As we can see, the leaf_blocks and clustering_factor values of the indexes changed a little, but that has only a small effect on the cost calculation; the densities of the indexed columns are much more influential. Let's examine them more closely. Before the histograms, the density of each column is equal to 1/num_distinct of that column.
select column_name, num_distinct, density, 1/num_distinct density_calculated from dba_tab_columns t1 where t1.table_name = 'TABLE01' 
and t1.column_name in ('DKOD1', 'HTNO1', 'SKOD1', 'TAR1')
order by t1.column_name;
---------------------------------------------------------
COLUMN_NAME|NUM_DISTINCT|DENSITY|DENSITY_CALCULATED
DKOD1|12|0.0833333333333333|0.0833333333333333
HTNO1|965|0.00103626943005181|0.00103626943005181
TAR1|608|0.00164473684210526|0.00164473684210526
SKOD1|73|0.0136986301369863|0.0136986301369863
After collecting the histograms, the densities of the columns are NOT equal to 1/num_distinct any more.
select column_name, num_distinct, density, 1/num_distinct density_calculated from dba_tab_columns t1 where t1.table_name = 'TABLE01' 
and t1.column_name in ('DKOD1', 'HTNO1', 'SKOD1', 'TAR1')
order by t1.column_name;
---------------------------------------------------------
COLUMN_NAME|NUM_DISTINCT|DENSITY|DENSITY_CALCULATED
DKOD1|12|0.0384615384615385|0.0833333333333333
HTNO1|968|0.0333333333333333|0.00103305785123967
TAR1|608|0.00169779286926995|0.00164473684210526
SKOD1|73|0.0333333333333333|0.0136986301369863

Since the "DKOD1" column has the static selectivity of 0.01, the selectivities of the other columns
are effective role in the calculation of cost. As we see after the histograms, the density of "HTNO1"
and "SKOD1" columns are increased, the density of "TAR1" is decreased a little bit. Since the "SKOD1"
column exists in both indexes we should examine the densities of the "HTNO1" and "TAR1" columns.

In the normal case, that is statistics without histograms, the CBO assumes a uniform distribution of a column's
values between its minimum and maximum. In reality the distribution is skewed for some columns: some values
appear rarely while others appear very frequently. The CBO cannot detect this without histogram data on those
columns. How the density is calculated when histograms exist is a fairly complex algorithm, so we will not go
into it here.
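
If you want to see that kind of skew yourself, a simple group-by sketch like the one below (using the table and column from this case) shows how unevenly the values are distributed:

select htno1, count(*) cnt
from table01
group by htno1
order by cnt desc;
--if the data is skewed, a handful of values will account for most of the rows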

This case shows the importance of histograms when there is skewed data in indexed columns.
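
As a side note, if you already know which columns are skewed, you could restrict the histogram collection to just those columns instead of all indexed columns; this is only a sketch of an alternative, not what we ran:

exec dbms_stats.gather_table_stats(ownname => 'DBUSER01', tabname => 'TABLE01', estimate_percent => 33, cascade => TRUE, method_opt => 'FOR COLUMNS TAR1 SIZE 10');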

Regards
Ural Ural

Friday, April 20, 2007

PGA memory hungry plsql codes and inevitable "ORA-04030" error

Hi,
Today we found some memory-eating, carnivorous unix processes on our
production Oracle database server. When we traced them, we realized
that they were Oracle client processes coming from Internet banking
connections. We analyzed the case: these sessions kept growing in PGA
memory usage, and when they hit the operating system memory limit they got
the "ORA-04030" error, but they did not stop; they continued to eat up
PGA memory. So we killed some of them, but new sessions kept coming
from the same system and behaving the same way. We examined the situation
with the development team and they found out that, one day earlier, they had
added a new dynamic parameter to one of the database packages that generates
HTML output. The package builds the HTML by combining data from different
tables, and one of the cursors selecting from those tables had gone into an
infinite loop, which caused the problem. They corrected the data that caused
the infinite loop and added extra checks to the PL/SQL code and to the user
interface programs to prevent the error from happening again.

After that, as the Oracle DBA team, we decided to develop a database
procedure and a database job that repeatedly checks for sessions consuming
too much PGA memory and e-mails the session info to the DBA group, so we
can take corrective action early.

The following is the code that we developed and use for this purpose.

-----------------
select * from v$statname where statistic# = 20
/*
STATISTIC#|NAME|CLASS
20|session pga memory|1
*/

CREATE OR REPLACE PROCEDURE DBUSER01.Dba_Sess_Pga_Mem_Control IS
CURSOR c1 IS
SELECT t1.username, t1.SID, t1.serial#,
TRUNC (a_sess_pga_mem / 1024 / 1024) sess_pga_mem_mb
FROM v$session t1,
(SELECT *
FROM (SELECT   SID, VALUE a_sess_pga_mem
FROM v$sesstat
WHERE statistic# = 20 --session pga memory
AND TRUNC(VALUE / 1024 / 1024) >= 50
ORDER BY VALUE DESC)
WHERE ROWNUM <= 1) t2
WHERE t1.SID = t2.SID;

v_message   VARCHAR2(2000);
v_delimiter VARCHAR2(2);
r1          c1%ROWTYPE;
BEGIN
IF m_tatil_bul2(TRUNC(SYSDATE), 'B') = 0 THEN
-- m_tatil_bul2 is a site-specific helper function; it returns 0 when the given date is a regular working day (not a weekend/holiday)

v_delimiter := '';
OPEN c1;
LOOP
FETCH c1
INTO r1;
EXIT WHEN c1%NOTFOUND;

v_message := v_message || v_delimiter;
v_message := v_message || 'Username '        || CHR(9) || CHR(9) || ' : ' || r1.username                 || CHR(10)
|| 'Sid,Serial# '     || CHR(9) || CHR(9) || ' : ' || r1.sid || ',' || r1.serial# || CHR(10)
|| 'PGA Memory (MB) '           || CHR(9) || ' : ' || r1.sess_pga_mem_mb          || CHR(10)
|| '------------------------------------------------------';
v_delimiter := CHR(10);

END LOOP;
CLOSE c1;

IF v_delimiter = CHR(10) THEN
DBUSER01.smtp.mail('dba@company01.com.tr',
'dba@company01.com.tr',
'There are sessions consuming too much PGA MEMORY (PGA Memory >= 50 MB) Please check !...',
v_message);
END IF;
END IF;
END;
/

Create the job that will call the above procedure in the specified intervals (3 mins in our case)

DECLARE
X NUMBER;
BEGIN
SYS.DBMS_JOB.SUBMIT
( job       => X 
,what      => 'DBUSER01.Dba_Sess_Pga_Mem_Control;'
,next_date => SYSDATE
,interval  => 'SYSDATE + 3/(24*60)'
,no_parse  => TRUE
);
SYS.DBMS_OUTPUT.PUT_LINE('Job Number is: ' || to_char(x));
END;
/

commit;
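
For an ad-hoc check outside the job, a quick sketch like the one below lists the current sessions together with their PGA usage taken from v$process (the 50 MB threshold matches the procedure above):

SELECT s.username, s.SID, s.serial#,
TRUNC(p.pga_used_mem / 1024 / 1024) pga_used_mb,
TRUNC(p.pga_max_mem / 1024 / 1024) pga_max_mb
FROM v$session s, v$process p
WHERE s.paddr = p.addr
AND p.pga_used_mem >= 50 * 1024 * 1024
ORDER BY p.pga_used_mem DESC;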

I hope you do not have any PGA memory hungry sessions.

Thursday, April 19, 2007

Getting "ORA-01034" and "ORA-27123" errors when connecting to Oracle 9i database with another unix user other than "oracle" user (database owner)

The problem: when you try to connect to Oracle locally with a unix user other than oracle (the database owner), you get "ORA-01034" and "ORA-27123" errors. When you use the "TWO_TASK" environment variable it works, since the connection then goes over the network, i.e. through the listener.
But locally we are not able to connect to the database without using "TWO_TASK".

There may be others who have seen the same error, so I am writing down the solution that worked in my case.

user1@host1:/data3/oracle/user1 > echo $TWO_TASK
user1@host1:/data3/oracle/user1 > sqlplus db_user1

SQL*Plus: Release 9.2.0.4.0 - Production on Wed Apr 18 23:37:00 2007

Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.

Enter password:
ERROR:
ORA-01034: ORACLE not available
ORA-27123: unable to attach to shared memory segment
IBM AIX RISC System/6000 Error: 13: Permission denied


Enter user-name: ^C

ORACLE_SID and ORACLE_HOME were checked and they are what they should be for this database, so what is the problem then?

Solution :

After searching on the Internet for some time,
I finally found the following solution, which solved my problem.

oracle@host1:/data3/oracle > cd $ORACLE_HOME/bin
oracle@host1:/data3/oracle/bin > ls -l oracle
-rwxr-xrwx   1 oracle   dba        74053740 Sep 04 2003  oracle

When we look at the file permissions of the "oracle" executable, they look strange,
because they should normally be "-rwsr-s--x" (setuid/setgid bits set), so we are going to fix them.

First of all we shut down the database.
It is very important that you shut down the database before changing the permissions.

oracle@host1:/data3/oracle > sqlplus "/as sysdba"

SQL*Plus: Release 9.2.0.4.0 - Production on Thu Apr 19 00:17:08 2007

Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.


Connected to:
Oracle9i Enterprise Edition Release 9.2.0.4.0 - 64bit Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.4.0 - Production

SQL> alter system checkpoint;

System altered.

SQL> shutdown abort
ORACLE instance shut down.
SQL> exit
Disconnected from Oracle9i Enterprise Edition Release 9.2.0.4.0 -
64bit Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.4.0 - Production

Then we change the file permissions of the "oracle" executable

oracle@host1:/data3/oracle/bin > chmod 6751 oracle
oracle@host1:/data3/oracle/bin > ls -l oracle
-rwsr-s--x   1 oracle   dba        74053740 Sep 04 2003  oracle

Then we open the database

oracle@host1:/data3/oracle/bin > sqlplus "/as sysdba"

SQL*Plus: Release 9.2.0.4.0 - Production on Thu Apr 19 00:18:10 2007

Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.

Connected to an idle instance.

SQL> startup
ORACLE instance started.

Total System Global Area  622298928 bytes
Fixed Size                   743216 bytes
Variable Size             486539264 bytes
Database Buffers          134217728 bytes
Redo Buffers                 798720 bytes
Database mounted.
Database opened.
SQL> exit
Disconnected from Oracle9i Enterprise Edition Release 9.2.0.4.0 -
64bit Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.4.0 - Production

We switch to the user where the connection problem occurs

oracle@host1:/data3/oracle/bin > su - user1
user1's Password:
user1@host1:/data3/oracle/user1 > sqlplus db_user1

SQL*Plus: Release 9.2.0.4.0 - Production on Thu Apr 19 00:18:46 2007

Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.

Enter password:

Connected to:
Oracle9i Enterprise Edition Release 9.2.0.4.0 - 64bit Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.4.0 - Production

SQL>

Finally the problem is solved.

Monday, April 16, 2007

Resizing Undo Tablespaces

How do you resize undo tablespaces? This is how I do it in an Oracle 9i database.

The problem: the undo tablespace has grown up to its maxsize limit
and we need to compact it, or we simply want to make it smaller because of a space problem.

Find undo info
SELECT NAME, VALUE
FROM v$parameter
WHERE LOWER (NAME) LIKE '%undo%';
/*
NAME|VALUE
undo_management|AUTO
undo_tablespace|UNDOTBS1
*/

Find undo tablespace datafile info
SELECT   file_name, BYTES / 1024 / 1024 mb
FROM dba_data_files
WHERE tablespace_name = 'UNDOTBS1'
ORDER BY file_name;
/*
FILE_NAME|MB
/datac3/oradata/andlp2/undotbs1_01.dbf|1793
/datac6/oradata/andlp2/undotbs1_02.dbf|235
/datac7/oradata/andlp2/undotbs1_03.dbf|1679
*/

Execute the following query to determine who is using a particular UNDO or Rollback Segment:
SELECT TO_CHAR (s.SID) || ',' || TO_CHAR (s.serial#) sid_serial,
NVL (s.username, 'None') orauser, s.program, r.NAME undoseg,
t.used_ublk * TO_NUMBER (x.VALUE) / 1024 || 'K' "Undo"
FROM SYS.v_$rollname r,
SYS.v_$session s,
SYS.v_$transaction t,
SYS.v_$parameter x
WHERE s.taddr = t.addr 
AND r.usn = t.xidusn(+) 
AND x.NAME = 'db_block_size'
--No rows returned

Create a second, interim undo tablespace
CREATE UNDO TABLESPACE UNDOTBS2 DATAFILE 
'/datac7/oradata/andlp2/undotbs2_01.dbf' SIZE 500M AUTOEXTEND ON NEXT 16M MAXSIZE 5000M
ONLINE
BLOCKSIZE 8K;

Set this new undo tablespace as undo tablespace of the database
ALTER SYSTEM SET undo_tablespace = UNDOTBS2;

You can monitor the undo extents of all the undo tablespaces by using the following query
SELECT tablespace_name, segment_name, SUM (blocks), SUM (BYTES) / 1024
FROM dba_undo_extents
GROUP BY tablespace_name, segment_name;

Since we are going to drop the old undo tablespace we need to know if any session is
using any undo extent of this undo tablespace
SELECT TO_CHAR (s.SID) || ',' || TO_CHAR (s.serial#) sid_serial,
NVL (s.username, 'None') orauser, s.program, r.NAME undoseg,
t.used_ublk * TO_NUMBER (x.VALUE) / 1024 || 'K' "Undo",
t1.tablespace_name
FROM SYS.v_$rollname r,
SYS.v_$session s,
SYS.v_$transaction t,
SYS.v_$parameter x,
dba_rollback_segs t1
WHERE s.taddr = t.addr
AND r.usn = t.xidusn(+)
AND x.NAME = 'db_block_size'
AND t1.segment_id = r.usn
AND t1.tablespace_name = 'UNDOTBS1'  
/*
SID_SERIAL|ORAUSER|PROGRAM|UNDOSEG|Undo|TABLESPACE_NAME
615,998|USER1|runform30x@host1 (TNS interface)|_SYSSMU102$|8K|UNDOTBS1
*/

You can get more info about the undo extent that is online (used)
SELECT *
FROM dba_rollback_segs
WHERE status = 'ONLINE' 
AND tablespace_name = 'UNDOTBS1'
ORDER BY tablespace_name, segment_name
/*
SEGMENT_NAME|OWNER|TABLESPACE_NAME|SEGMENT_ID|FILE_ID|BLOCK_ID|INITIAL_EXTENT|NEXT_EXTENT|MIN_EXTENTS|MAX_EXTENTS|PCT_INCREASE|STATUS|INSTANCE_NUM|RELATIVE_FNO
_SYSSMU102$|PUBLIC|UNDOTBS1|102|33|4473|131072||2|32765||ONLINE||33
*/

Monitor the status of online undo extents
SELECT segment_name, xacts, v.status
FROM v$rollstat v, dba_rollback_segs
WHERE tablespace_name = 'UNDOTBS1' 
AND segment_id = usn;
/*
SEGMENT_NAME|XACTS|STATUS
_SYSSMU102$|1|PENDING OFFLINE
*/

See it is in "PENDING OFFLINE" status until the transaction that is using it ends.

See all the rollback info
SELECT n.NAME, s.status, s.extents, s.rssize, s.hwmsize, s.xacts
FROM v$rollname n, v$rollstat s
WHERE n.usn = s.usn;
/*
NAME|STATUS|EXTENTS|RSSIZE|HWMSIZE|XACTS
SYSTEM|ONLINE|6|385024|385024|0
_SYSSMU102$|PENDING OFFLINE|42|42065920|42065920|1
_SYSSMU262$|ONLINE|4|253952|253952|0
_SYSSMU263$|ONLINE|2|122880|122880|0
_SYSSMU264$|ONLINE|3|188416|188416|0
_SYSSMU265$|ONLINE|3|188416|188416|0
_SYSSMU266$|ONLINE|4|253952|253952|0
_SYSSMU267$|ONLINE|2|122880|122880|0
_SYSSMU268$|ONLINE|2|122880|122880|0
_SYSSMU269$|ONLINE|2|122880|122880|0
_SYSSMU270$|ONLINE|2|122880|122880|0
_SYSSMU271$|ONLINE|2|122880|122880|0
*/

Find the active transactions using undo extents
SELECT s.username, t.xidusn, t.ubafil, t.ubablk, t.used_ublk
FROM v$session s, v$transaction t
WHERE s.saddr = t.ses_addr;
/*
USERNAME|XIDUSN|UBAFIL|UBABLK|USED_UBLK
MARDATA|102|30|153176|1
*/

After all active transactions are committed, the status of the undo segment
_SYSSMU102$ automatically switches from PENDING OFFLINE to OFFLINE.
A tablespace cannot be dropped while it contains PENDING OFFLINE segments;
it can be dropped once they are OFFLINE.

If no rows are returned by the SQL statement below,
the segment has already switched to OFFLINE mode.

SQL> SELECT SEGMENT_NAME, XACTS, V.STATUS
FROM V$ROLLSTAT V, DBA_ROLLBACK_SEGS
WHERE TABLESPACE_NAME = 'UNDOTBS1'
AND SEGMENT_ID = USN;

If a segment is returned but its XACTS value is zero (0), the rollback segment
does not contain any pending transactions and it will switch to
OFFLINE mode immediately.

Wait for the active transaction to finish then monitor again
SELECT n.NAME, s.status, s.extents, s.rssize, s.hwmsize, s.xacts
FROM v$rollname n, v$rollstat s
WHERE n.usn = s.usn;
/*
NAME|STATUS|EXTENTS|RSSIZE|HWMSIZE|XACTS
SYSTEM|ONLINE|6|385024|385024|0
_SYSSMU262$|ONLINE|5|319488|319488|0
_SYSSMU263$|ONLINE|2|122880|122880|0
_SYSSMU264$|ONLINE|3|188416|188416|0
_SYSSMU265$|ONLINE|3|188416|188416|0
_SYSSMU266$|ONLINE|4|253952|253952|0
_SYSSMU267$|ONLINE|2|122880|122880|0
_SYSSMU268$|ONLINE|2|122880|122880|0
_SYSSMU269$|ONLINE|2|122880|122880|0
_SYSSMU270$|ONLINE|3|188416|188416|0
_SYSSMU271$|ONLINE|2|122880|122880|0
*/

As you can see, all undo segments of the old undo tablespace UNDOTBS1 are gone already, so now we can drop the tablespace
DROP TABLESPACE undotbs1 INCLUDING CONTENTS AND DATAFILES ;

Recreate the original configuration:
Now we can restore the original configuration, so recreate the old undo tablespace
and switch back to it

CREATE UNDO TABLESPACE UNDOTBS1 DATAFILE 
'/datac3/oradata/andlp2/undotbs1_01.dbf' SIZE 500M AUTOEXTEND ON NEXT 16M MAXSIZE 5000M,
'/datac6/oradata/andlp2/undotbs1_02.dbf' SIZE 500M AUTOEXTEND ON NEXT 16M MAXSIZE 5000M,
'/datac7/oradata/andlp2/undotbs1_03.dbf' SIZE 100M AUTOEXTEND OFF
ONLINE
BLOCKSIZE 8K;

ALTER SYSTEM SET undo_tablespace = UNDOTBS1;

See if there are any active undo extents of UNDOTBS2
SELECT n.NAME, s.status, s.extents, s.rssize, s.hwmsize, s.xacts
FROM v$rollname n, v$rollstat s
WHERE n.usn = s.usn;
/*
NAME|STATUS|EXTENTS|RSSIZE|HWMSIZE|XACTS
SYSTEM|ONLINE|6|385024|385024|0
_SYSSMU1$|ONLINE|2|122880|122880|0
_SYSSMU2$|ONLINE|2|122880|122880|0
_SYSSMU3$|ONLINE|2|122880|122880|0
_SYSSMU4$|ONLINE|2|122880|122880|0
_SYSSMU5$|ONLINE|2|122880|122880|0
_SYSSMU6$|ONLINE|2|122880|122880|0
_SYSSMU7$|ONLINE|2|122880|122880|0
_SYSSMU8$|ONLINE|2|122880|122880|0
_SYSSMU9$|ONLINE|2|122880|122880|0
_SYSSMU10$|ONLINE|2|122880|122880|0
*/

See if there are any rollback segments of UNDOTBS2
SELECT segment_name, xacts, v.status
FROM v$rollstat v, dba_rollback_segs
WHERE tablespace_name = 'UNDOTBS2' 
AND segment_id = usn;
--No rows returned

DROP TABLESPACE undotbs2 INCLUDING CONTENTS AND DATAFILES ;     
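
As a final check (just a sketch mirroring the datafile query at the beginning of this post), verify the new datafile sizes of UNDOTBS1:

SELECT file_name, BYTES / 1024 / 1024 mb, maxbytes / 1024 / 1024 max_mb
FROM dba_data_files
WHERE tablespace_name = 'UNDOTBS1'
ORDER BY file_name;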

Gathering Table Statistics By Using Database Jobs

In an Oracle9i database you sometimes need to gather statistics for a big table. Since this takes a long time,
you may not want to wait for it to finish, or you may not want your company's firewall to disconnect you
in the middle of the work. In such a situation you can create a database job for the task
and let it run in the background. You can use the same idea for other long-running processes too:
convert them to database jobs, ideally calling packaged procedures or functions that write log records, so that
you can monitor the work from the logs. I use this approach in most of my work.
Connect to the database as SYS and run the example code below, substituting your own values for the literal parts.

DECLARE
x NUMBER;
BEGIN
DBMS_JOB.submit
(job => x,
what => 'DBMS_STATS.gather_table_stats(ownname => ''OWNER1'', tabname => ''BIGTABLE1'', estimate_percent => 10, cascade => TRUE);',
next_date => SYSDATE,
interval => NULL,
no_parse => TRUE
);
DBMS_OUTPUT.put_line ('Job Number is: ' || TO_CHAR (x));
COMMIT;
END;
/
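
Once the job is submitted, you can follow it from the job views; a minimal sketch:

SELECT job, what, last_date, next_date, failures, broken
FROM dba_jobs
WHERE what LIKE '%gather_table_stats%';

SELECT * FROM dba_jobs_running;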

Sunday, April 15, 2007

Resizing Temporary Tablespaces

How do you resize temporary tablespaces? This is how I do it in a 9.2.0.4 Oracle database.

The problem: the default temporary tablespace consists of only one tempfile,
that tempfile has grown up to its maxsize limit,
and we need to compact it.

The following is our default temporary tablespace
SELECT property_value
FROM database_properties
WHERE property_name = 'DEFAULT_TEMP_TABLESPACE';
/*
PROPERTY_VALUE
TEMP
*/

The creation script of the temporary tablespace TEMP
CREATE TEMPORARY TABLESPACE TEMP TEMPFILE
'/datac6/oradata/andlp2/temp_1.dbf' SIZE 500M AUTOEXTEND ON NEXT 128M MAXSIZE 8000M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;

This script is just for your information, to know the structure of the temporary tablespace

Find the tempfile of this temporary tablespace
SELECT tablespace_name, file_name, maxbytes, BYTES
FROM dba_temp_files
WHERE tablespace_name = 'TEMP';
/*
TABLESPACE_NAME|FILE_NAME|MAXBYTES|BYTES
TEMP|/datac6/oradata/andlp2/temp_1.dbf|8388608000|8388608000
*/

As you can see from the maxbytes and bytes columns, this tempfile has grown up to its maxsize of 8000M.

First of all, we create another temporary tablespace
CREATE TEMPORARY TABLESPACE TEMP1 TEMPFILE
'/datac7/oradata/andlp2/temp1_1.dbf' SIZE 500M AUTOEXTEND ON NEXT 128M MAXSIZE 8000M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;

Make the new temporary tablespace TEMP1 the default temporary tablespace of the database
ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp1;

Change the default temporary tablespace of the users that were using TEMP so that they now use the new TEMP1 temporary tablespace
SELECT 'ALTER USER ' || username || ' TEMPORARY TABLESPACE temp1;'
FROM dba_users
WHERE temporary_tablespace = 'TEMP'
ORDER BY username;

Run the output of the above query as a script like below
ALTER USER USER1 TEMPORARY TABLESPACE temp1;
ALTER USER USER2 TEMPORARY TABLESPACE temp1;
ALTER USER USER3 TEMPORARY TABLESPACE temp1;
...

Check whether any active session still uses the TEMP temporary tablespace;
no session should be using it before we drop it.
You can examine those sessions and decide either to kill them or to wait for them to finish their work and disconnect
select *
from v$session
where saddr in (select session_addr
from v$tempseg_usage v
where v.tablespace = 'TEMP')

You can kill these sessions at the operating system level with the "kill -9" command (the OS I am working on is IBM AIX)
select 'kill -9 ' || spid
from v$process
where addr in (select paddr
from v$session
where saddr in (select session_addr
from v$tempseg_usage v
where v.tablespace = 'TEMP'))
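
Alternatively (just a sketch of another option), you can kill these sessions from inside the database by generating ALTER SYSTEM KILL SESSION commands:

select 'ALTER SYSTEM KILL SESSION ''' || sid || ',' || serial# || ''' IMMEDIATE;'
from v$session
where saddr in (select session_addr
from v$tempseg_usage v
where v.tablespace = 'TEMP');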

After killing the sessions you can drop the old TEMP temporary tablespace
DROP TABLESPACE temp INCLUDING CONTENTS AND DATAFILES;

Then to make the structure the same as before, we need to recreate the TEMP temporary tablespace and drop the TEMP1 temporary tablespace
CREATE TEMPORARY TABLESPACE TEMP TEMPFILE
'/datac6/oradata/andlp2/temp_1.dbf' SIZE 500M AUTOEXTEND ON NEXT 128M MAXSIZE 8000M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;

ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp;

select 'ALTER USER ' || username || ' TEMPORARY TABLESPACE TEMP;'
from dba_users
where temporary_tablespace = 'TEMP1'
order by username;
--run the output as script

DROP TABLESPACE temp1 INCLUDING CONTENTS AND DATAFILES;

Thursday, April 12, 2007

Sqlplus - pagesize

Every day we learn something new about Oracle databases, something we missed before or maybe never needed. Below is one of those things for me; I used it recently in a Unix script to generate a SQL*Plus report. I publish it here because there may be others who missed the same point.

The maximum value you can set for the SQL*Plus pagesize parameter is 50000.

set pagesize 50000
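
For example, a minimal SQL*Plus report script using it might look like the sketch below (the spool file name and the query itself are just placeholders):

set pagesize 50000
set linesize 200
set feedback off
spool report.lst
select username, default_tablespace from dba_users order by username;
spool off
exit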

Wednesday, April 11, 2007

Hi to the blogging community !

My name is Ural Ural and I have been working with Oracle databases for about 12 years. I am a senior Oracle DBA. I am going to share the things I have learned about Oracle databases and database administration here. I think the more we share, the more we learn about information technology. So let's share our knowledge with the blogging community...

Regards
Ural