Friday, January 29, 2010

Generating rows using the LEVEL keyword

A nice way of generating rows using the LEVEL keyword.

with t as
  (select 'Hello hi' stri from dual)
select substr(t.stri, r.i, 1)
from t,
     (select level i
        from t
      connect by level <= length(t.stri)) r;

or

with strr as
  (select 'Hello Hi' str from dual)
select level, substr(str, level, 1) from strr
connect by level <= length(str)
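The same CONNECT BY LEVEL trick works as a general-purpose row generator. A minimal sketch that produces the numbers 1 to 10 straight from DUAL:

```sql
-- generate ten rows without any real table
select level as n
from dual
connect by level <= 10;
```

This is handy whenever a query needs a sequence of integers, for example to pivot data or to fill gaps in a date range.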

Thursday, January 28, 2010

How DDL works

DDL is executed logically like this:
begin
   COMMIT;
   -- do the DDL
   COMMIT;
exception
   when others then
      ROLLBACK;
      RAISE;
end;
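Because of that implicit COMMIT on either side of the DDL, any pending DML in the session gets committed as a side effect. A small sketch of the consequence (the table names are purely illustrative):

```sql
-- pending, uncommitted DML
insert into t_demo values (1);

-- the DDL issues an implicit COMMIT, which also commits the INSERT above
create table t_side_effect (x number);

-- too late: the INSERT was already committed by the DDL
rollback;
```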

Wednesday, January 27, 2010

Calling a procedure from SQL

Can we call a procedure from a SQL statement?

The answer is no, we can't; we need to use a function for that.
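The usual workaround is to wrap the procedure in a function and call the function from SQL instead. A sketch, assuming a hypothetical procedure log_message that records a message (both names below are illustrative):

```sql
create or replace function f_log_message(p_msg in varchar2)
   return number
as
   pragma autonomous_transaction;  -- required because the wrapped procedure does DML
begin
   log_message(p_msg);
   commit;
   return 1;
end;
/

-- now callable from plain SQL
select f_log_message('hello from SQL') from dual;
```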

Nulls at the top or bottom

order by ... asc --> by default places the NULL values at the bottom
order by ... desc --> by default places the NULL values at the top
To override this, i.e. to place NULLs at the top with ASC or at the bottom with DESC, use the keywords NULLS FIRST or NULLS LAST in the ORDER BY clause.
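For instance, using the nullable COMM column of the classic SCOTT.EMP table (used here purely for illustration):

```sql
-- ascending sort, but NULL commissions come first instead of last
select ename, comm
from emp
order by comm asc nulls first;
```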

Tuesday, January 12, 2010

Purity level of functions

CREATE OR REPLACE FUNCTION MY_FUNC RETURN INTEGER AS
BEGIN
   MY_PACKAGE.MY_INTEGER := NVL(MY_PACKAGE.MY_INTEGER, 0) + 1;
   RETURN MY_PACKAGE.MY_INTEGER;
END;

Then we use it in the following query:

SELECT a.*, my_func()
FROM my_table a;

It is pretty easy to see that each time we run this query in a single session, we get a different result. This means the function is NON-DETERMINISTIC. The thing that makes it non-deterministic is that it UPDATES the PACKAGE STATE.

This is usually fine though, because SYSDATE is also non-deterministic, and we can use that in most places.

However, there are restrictions with non-deterministic functions. For example, you cannot use them in a Materialised View.

The Purity Level of a function tells us what it does and does not do, such as read/write database state, or read/write package state. Generally, a function that writes neither database nor package state is usually deterministic (unless it uses SYSDATE or some other non-deterministic feature).

Functions can be used in Materialised Views, Index Types, Function-based indexes, user-defined aggregation functions, and probably lots of other stuff.

Depending on the purity-level of the function, it may be banned from use in one or more of these areas, or it may be permitted in a limited fashion (eg. may not parallelise).

Deterministic and non-deterministic functions

Deterministic functions always return the same result any time they are called with a specific set of input values.

Nondeterministic functions may return different results each time they are called with a specific set of input values.
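Oracle also lets you assert determinism explicitly with the DETERMINISTIC keyword; that declaration is what permits a function to be used in, for example, a function-based index. A sketch (the function name and the EMP/SAL columns are illustrative):

```sql
create or replace function double_it(p in number)
   return number
   deterministic   -- promise: same input always gives the same output
as
begin
   return p * 2;
end;
/

-- allowed only because double_it is declared DETERMINISTIC
create index emp_double_sal_idx on emp (double_it(sal));
```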

Saturday, January 2, 2010

Performance Tuning tools

There are some mandatory tools that everyone should know, at least to think about performance tuning in a production environment.
These tools, I would say, are like a stethoscope for a doctor, or a bat for a cricket player for that matter: without them you can't do anything.
1. SQL*Plus
2. EXPLAIN PLAN
3. AUTOTRACE
4. SQL trace with TKPROF
5. Statspack

SQL*Plus
1. SQL*Plus is a good scripting tool.

4. SQL trace with TKPROF
sql_trace and TKPROF are the tools used to tune queries.
sql_trace provides raw data about the SQL statements your application executed and their performance statistics.
TKPROF is a tool which formats the raw data produced by sql_trace into a readable format.

In general, TKPROF output provides the statistics of the query, wait events, the query plan and bind variable values.
There is a difference between the query plan shown by TKPROF and the one from EXPLAIN PLAN. TKPROF reports the actual plan that was used, whereas the EXPLAIN PLAN output changes with session settings: it shows the plan as if the query were executed right now. So the TKPROF query plan is the actual one used, and is the reliable one.

There are some mandatory steps that need to be performed to enable SQL_TRACE.

1. alter session set timed_statistics=true;
2. alter session set events '10046 trace name context forever, level 12';


timed_statistics is a parameter which tells the server to collect timing information (CPU time, elapsed time, etc.) for various activities in the database.

timed_statistics can be set at the session level:
Alter session set timed_statistics=true;
or at the system level:
Alter system set timed_statistics=true;
or globally. For the global level, the init.ora file should have the following entry:
timed_statistics=true
For this to take effect, the database needs to be restarted.

sql_trace can also be enabled at the session, system or global level.
sql_trace produces a lot of data. It accepts either true or false values. The global-level setting can be done in the init.ora file.
USER_DUMP_DEST in the init.ora file gives the path where the raw trace data is written.
MAX_DUMP_FILE_SIZE is another relevant parameter. It controls the maximum size of the file generated by sql_trace.
If a plain numeric value is assigned, it is interpreted as a number of operating system blocks.
It can also be assigned in K or M bytes.
We can also give UNLIMITED for this parameter, but in general that is not advised, as the trace file can grow very large. A maximum size of 50 or 100 MB is advisable.

The ways to enable sql_trace:
1. Alter session set sql_trace=true|false; this is equivalent to
alter session set events '10046 trace name context forever, level 1', so we can use either of them.
There are certain advantages if we go for levels 4, 8 and 12:
level 4: enables sql_trace and also captures bind variable values in the trace file
level 8: enables sql_trace and also captures wait events in the trace file
level 12: enables sql_trace and also captures both wait events and bind variable values

The overall syntax is as follows:
alter session set events '10046 trace name context forever, level <n>';
alter session set events '10046 trace name context off'; -- to stop tracing

2. sys.dbms_system.set_sql_trace_in_session: This allows us to set sql_trace on or off for an existing session. All we need are the session's SID and SERIAL#, which can be obtained from v$session.
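A sketch of tracing another session this way (the SID and SERIAL# values below are illustrative):

```sql
-- find the target session
select sid, serial# from v$session where username = 'SCOTT';

-- enable tracing for SID 42, serial# 1234 (illustrative values)
exec sys.dbms_system.set_sql_trace_in_session(42, 1234, true);

-- ... let the session run its workload ...

-- disable tracing again
exec sys.dbms_system.set_sql_trace_in_session(42, 1234, false);
```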


The generated trace file contains all of the session's activity, not just the query of interest.
So we must close the session or stop tracing once the needed query has been processed; but before that, we need to identify where the trace file is located.

The following query gives the server process id; the trace file name contains this id:

SELECT p.spid
FROM v$process p, v$session s
WHERE p.addr = s.paddr
AND s.audsid = USERENV('sessionid');

The following block gives the full path of the folder where the trace files sit.

declare
   l_intval number;
   l_strval varchar2(2000);
   l_type   number;
begin
   l_type := dbms_utility.get_parameter_value
                ('user_dump_dest', l_intval, l_strval);
   dbms_output.put_line(l_strval);
end;
/

By making use of the above two queries we can find out the trace file.

Now we are done with generating the trace file and finding its path.
But how do we read the trace file? TKPROF is the tool used to rearrange the trace file contents into a readable format.

To read trace files we need the access granted by the (hidden) parameter
_trace_files_public = true

At the command line we give:
tkprof inputfilename(.trc) outputfilename(.txt)
That's all; now we have a readable trace file at outputfilename(.txt).

Here are the steps that I performed to generate a trace file:

SQL> alter session set timed_statistics=true
2 ;
Session altered.
SQL> alter session set sql_trace=true
2 ;
Session altered.
SQL> select owner,count(*) from dba_objects group by owner;
SQL> select p.spid from v$process p, v$session s where p.addr=s.paddr and s.audsid=userenv('sessionid')
/
SPID
------------
3916

Friday, January 1, 2010

Cluster Table

A cluster is a schema object that contains one or more tables that all have one or more columns in common. Rows of one or more tables that share the same value in these common columns are physically stored together within the database.

Generally, you should only cluster tables that are frequently joined on the cluster key columns in SQL statements. Clustering multiple tables improves the performance of joins, but it is likely to reduce the performance of full table scans, INSERT statements, and UPDATE statements that modify cluster key values. Before clustering, consider its benefits and drawbacks for your situation.

Cluster Keys
The columns defined by the CREATE CLUSTER command make up the cluster key. These cluster columns must correspond in both datatype and size to columns in each of the clustered tables, although they need not correspond in name.

Types of Clusters
A cluster can be either an indexed cluster or a hash cluster.

Indexed Clusters
In an indexed cluster, Oracle stores together rows having the same cluster key value. Each distinct cluster key value is stored only once in each data block, regardless of the number of tables and rows in which it occurs. This saves disk space and improves performance for many operations.

You may want to use indexed clusters in the following cases:

·Your queries retrieve rows over a range of cluster key values.

·Your clustered tables may grow unpredictably.

After you create an indexed cluster, you must create an index on the cluster key before you can issue any data manipulation language (DML) statements against a table in the cluster. This index is called the cluster index.

A cluster index provides quick access to rows within a cluster based on the cluster key. If you issue a SQL statement that searches for a row in the cluster based on its cluster key value, Oracle searches the cluster index for the cluster key value and then locates the row in the cluster based on its ROWID.

Hash Clusters
In a hash cluster, Oracle stores together rows that have the same hash key value. The hash value for a row is the value returned by the cluster's hash function. When you create a hash cluster, you can either specify a hash function or use the Oracle internal hash function. Hash values are not actually stored in the cluster, although cluster key values are stored for every row in the cluster.

You may want to use hash clusters in the following cases:

·Your queries retrieve rows based on equality conditions involving all cluster key columns.

·Your clustered tables are static or you can determine the maximum number of rows and the maximum amount of space required by the cluster when you create the cluster.

The hash function provides access to rows in the table based on the cluster key value. If you issue a SQL statement that locates a row in the cluster based on its cluster key value, Oracle applies the hash function to the given cluster key value and uses the resulting hash value to locate the matching rows. Because multiple cluster key values can map to the same hash value, Oracle must also check the row's cluster key value. This process often results in less I/O than the process for the indexed cluster, because the index search is not required.

Oracle's internal hash function returns values ranging from 0 to the value of HASHKEYS - 1. If you specify a column with the HASH IS clause, the column values need not fall into this range. Oracle divides the column value by the HASHKEYS value and uses the remainder as the hash value. The hash value for null is HASHKEYS - 1. Oracle also rounds the HASHKEYS value up to the nearest prime number to obtain the actual number of hash values. This rounding reduces the likelihood of hash collisions, or multiple cluster key values having the same hash value.

You cannot create a cluster index for a hash cluster, and you need not create an index on a hash cluster key.

If you cannot fit all rows for one hash value into a data block, do not use hash clusters. Performance is very poor in this circumstance because an insert or update of a row in a hash cluster with a size exceeding the data block size fills the block and performs row chaining to contain the rest of the row.

Cluster Size
Oracle uses the value of the SIZE parameter to determine the space reserved for rows corresponding to one cluster key value or one hash value. This space then determines the maximum number of cluster or hash values stored in a data block. If the SIZE value is not a divisor of the data block size, Oracle uses the next largest divisor. If the SIZE value is larger than the data block size, Oracle uses the operating system block size, reserving at least one data block per cluster or hash value.

Oracle also considers the length of the cluster key when determining how much space to reserve for the rows having a cluster key value. Larger cluster keys require larger sizes. To see the actual size, query the KEY_SIZE column of the USER_CLUSTERS data dictionary view. This does not apply to hash clusters because hash values are not actually stored in the cluster.

Although the maximum number of cluster and hash key values per data block is fixed on a per-cluster basis, Oracle does not reserve an equal amount of space for each cluster or hash key value. Varying this space stores data more efficiently, because the data stored per cluster or hash key value is rarely fixed.

A SIZE value smaller than the space needed by the average cluster or hash key value may require the data for one cluster key or hash key value to occupy multiple data blocks. A SIZE value much larger results in wasted space.

When you create a hash cluster, Oracle immediately allocates space for the cluster based on the values of the SIZE and HASHKEYS parameters.



Adding Tables to a Cluster
You can add tables to an existing cluster by issuing a CREATE TABLE statement with the CLUSTER clause. A cluster can contain as many as 32 tables, although the performance gains of clustering are often lost in clusters of more than four or five tables.

All tables in the cluster have the cluster's storage characteristics as specified by the PCTUSED, PCTFREE, INITRANS, MAXTRANS, TABLESPACE, and STORAGE parameters.

Example I
The following statement creates an indexed cluster named PERSONNEL with the cluster key column DEPARTMENT_NUMBER, a cluster size of 512 bytes, and storage parameter values:

CREATE CLUSTER personnel
   ( department_number NUMBER(2) )
   SIZE 512
   STORAGE (INITIAL 100K NEXT 50K PCTINCREASE 10);

The following statements add the EMP and DEPT tables to the cluster:

CREATE TABLE emp
   (empno    NUMBER PRIMARY KEY,
    ename    VARCHAR2(10) NOT NULL
             CHECK (ename = UPPER(ename)),
    job      VARCHAR2(9),
    mgr      NUMBER REFERENCES scott.emp(empno),
    hiredate DATE CHECK (hiredate >= SYSDATE),
    sal      NUMBER(10,2) CHECK (sal > 500),
    comm     NUMBER(9,0) DEFAULT NULL,
    deptno   NUMBER(2) NOT NULL)
   CLUSTER personnel (deptno);

CREATE TABLE dept
   (deptno NUMBER(2),
    dname  VARCHAR2(9),
    loc    VARCHAR2(9))
   CLUSTER personnel (deptno);

The following statement creates the cluster index on the cluster key of PERSONNEL:

CREATE INDEX idx_personnel ON CLUSTER personnel;

After creating the cluster index, you can insert rows into either the EMP or DEPT tables.

Example II
The following statement creates a hash cluster named PERSONNEL with the cluster key column DEPARTMENT_NUMBER, a maximum of 503 hash key values, each of size 512 bytes, and storage parameter values:

CREATE CLUSTER personnel
   ( department_number NUMBER )
   SIZE 512 HASHKEYS 500
   STORAGE (INITIAL 100K NEXT 50K PCTINCREASE 10);


Because the above statement omits the HASH IS clause, Oracle uses the internal hash function for the cluster.

Example III
The following statement creates a hash cluster named PERSONNEL with the cluster key made up of the columns HOME_AREA_CODE and HOME_PREFIX, and uses a SQL expression containing these columns for the hash function:

CREATE CLUSTER personnel
   ( home_area_code NUMBER,
     home_prefix    NUMBER )
   HASHKEYS 20
   HASH IS MOD(home_area_code + home_prefix, 101);

Quoting string literals in 10g

This short article introduces Oracle's new quoting mechanism in PL/SQL. This is a new feature of 10g that enables us to embed single-quotes in literal strings without having to resort to double, triple or sometimes quadruple quote characters. This is particularly useful for building dynamic SQL statements that contain quoted literals.

The mechanism is invoked with a simple "q" prefix to the literal (it works in both SQL and PL/SQL). The syntax is q'[...]', where the "[" and "]" characters can be any of the following, as long as they do not already appear in the string.

!
[ ]
{ }
( )
< >
Note that at the time of writing, the quoting mechanism only appears to work with 10g clients/OCI. If used with any software that uses an older Oracle client it fails with ORA-01756: quoted string not properly terminated (confirmed with sqlplus, TOAD and PL/SQL Developer).
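Any of the delimiters listed above works the same way. A quick sketch mixing two of them (the column aliases are illustrative):

```sql
select q'{It's quoted with braces}'       str1,
       q'!It's quoted with exclamations!' str2
from dual;
```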

a simple example

The following example demonstrates how simple it is to use this mechanism when building a string that contains single quotes.

SQL> DECLARE
2 v VARCHAR2(1024);
3 BEGIN
4 v := q'[It's a string with embedded quotes...]';
5 DBMS_OUTPUT.PUT_LINE(v);
6 END;
7 /
It's a string with embedded quotes...

PL/SQL procedure successfully completed.
using the quoting mechanism in dynamic sql

Many developers will be familiar with dynamic SQL. While providing a flexible solution to many programming problems, dynamic SQL can also be difficult to build, debug and support. The quoting mechanism doesn't make these problems go away, but it at least makes dynamic SQL a little bit easier to transfer between SQL editors and packages during development.

The following is a contrived example of a dynamic SQL statement that includes literal quotes in the string. The sharp-witted amongst us will notice that this statement doesn't in fact need to be dynamic (and even if it did, it should be using bind variables). However, in the interest of demonstrating the quoting mechanism in as simple and short an example as possible, we'll stick with it.

SQL> DECLARE
2 v_sql VARCHAR2(1024);
3 v_cnt PLS_INTEGER;
4 BEGIN
5 v_sql := q'[SELECT COUNT(*) FROM user_objects WHERE object_type = 'TABLE']';
6 EXECUTE IMMEDIATE v_sql INTO v_cnt;
7 DBMS_OUTPUT.PUT_LINE(
8 TO_CHAR(v_cnt) || ' tables in USER_OBJECTS.'
9 );
10 END;
11 /
4 tables in USER_OBJECTS.

PL/SQL procedure successfully completed.