Dion Cho – Oracle Performance Storyteller

We are natural born scientists

Logging in PL/SQL


What is the best debugging tool ever invented in the history of programming languages? We have many powerful and cutting-edge tools to choose from these days, but the most powerful and handy debugging tool is still logging written by the developers themselves.

But unfortunately, PL/SQL has weak built-in logging functionality, and this is why some people have tried to implement a fully functioning logging framework in PL/SQL.

Anyway, the most widely used way of logging is definitely the DBMS_OUTPUT package. But it has a critical limitation: you can’t see the logs until the PL/SQL execution completes.

set serveroutput on

begin
	for idx in 1 .. 10 loop
		dbms_output.put_line(sysdate || ', ' || idx || 'th insertion');
		dbms_lock.sleep(1);   -- pause so the timestamps advance, as in the output below
	end loop;
end;
/

2009/11/10 13:33:18, 1th insertion
2009/11/10 13:33:19, 2th insertion
2009/11/10 13:33:20, 3th insertion
2009/11/10 13:33:21, 4th insertion
2009/11/10 13:33:22, 5th insertion
2009/11/10 13:33:23, 6th insertion
2009/11/10 13:33:24, 7th insertion
2009/11/10 13:33:25, 8th insertion
2009/11/10 13:33:26, 9th insertion
2009/11/10 13:33:27, 10th insertion

PL/SQL procedure successfully completed.

Elapsed: 00:00:10.00

Another trick is to use the KSDWRT procedure of the DBMS_SYSTEM package. This procedure enables us to write text to the trace file (1), the alert log (2), or both (3). With the UTL_FILE package, you can write a log to any text file. But you need network access (telnet and/or ftp) to the database server to read those files, which is also a big limitation.

begin
	for idx in 1 .. 10 loop
		sys.dbms_system.ksdwrt(1, sysdate || ', ' || idx || 'th insertion');
		dbms_lock.sleep(1);
	end loop;
end;
/

-- C:\ORACLE\ADMIN\UKJA1021\UDUMP\ukja1021_ora_1448.trc
*** 2009-11-10 13:37:15.015
*** ACTION NAME:() 2009-11-10 13:37:15.000
*** MODULE NAME:(SQL*Plus) 2009-11-10 13:37:15.000
*** SERVICE NAME:(UKJA1021) 2009-11-10 13:37:15.000
*** SESSION ID:(140.1169) 2009-11-10 13:37:15.000
2009/11/10 13:37:15, 1th insertion
2009/11/10 13:37:16, 2th insertion
2009/11/10 13:37:17, 3th insertion
2009/11/10 13:37:18, 4th insertion
2009/11/10 13:37:19, 5th insertion
2009/11/10 13:37:20, 6th insertion
2009/11/10 13:37:21, 7th insertion
2009/11/10 13:37:22, 8th insertion
2009/11/10 13:37:23, 9th insertion
2009/11/10 13:37:24, 10th insertion
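For reference, a minimal sketch of the UTL_FILE alternative mentioned above. It assumes a directory object named LOG_DIR has already been created and granted to the logging user; the file name is arbitrary.

```sql
-- Assumption: create directory log_dir as '/some/path';
--             grant read, write on directory log_dir to <user>;
declare
	v_file utl_file.file_type;
begin
	v_file := utl_file.fopen('LOG_DIR', 'my_log.txt', 'a');  -- open in append mode
	for idx in 1 .. 10 loop
		utl_file.put_line(v_file, sysdate || ', ' || idx || 'th insertion');
		utl_file.fflush(v_file);  -- push each line to disk immediately
	end loop;
	utl_file.fclose(v_file);
end;
/
```

Note the FFLUSH call: without it, lines may sit in the buffer until the file is closed, which would reintroduce the very delay we are trying to avoid.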

A little programming skill can be a big help in a situation like this. For instance, the following example relies on a pipelined table function and the DBMS_PIPE package to overcome the limitations mentioned above. While session A writes logs, another session B can read them simultaneously.

create or replace package pkg_log as
	type log_array is table of varchar2(4000);
	procedure log(message in varchar2);
	procedure flush;
	function get_log return log_array pipelined;
end pkg_log;
/

show err

create or replace package body pkg_log as

	procedure log(message in varchar2) is
		v_status number;
	begin
		dbms_pipe.pack_message(sysdate || ', ' || message);
		v_status := dbms_pipe.send_message('log');
	end log;

	procedure flush is
		v_status number;
	begin
		dbms_pipe.pack_message('$$END$$');
		v_status := dbms_pipe.send_message('log');
	end flush;

	function get_log return log_array pipelined is
		v_status  number;
		v_message varchar2(4000);
	begin
		while true loop
			v_status := dbms_pipe.receive_message('log');
			if v_status = 0 then
				dbms_pipe.unpack_message(v_message);
				if v_message = '$$END$$' then
					exit;
				end if;
				pipe row(v_message);
				pipe row('');   -- padding row so "set arraysize 2" flushes each message immediately
			end if;
		end loop;
		return;
	end get_log;

end pkg_log;
/

show err

It works like this.

-- session #1
begin
	for idx in 1 .. 10 loop
		pkg_log.log(idx || 'th insertion');
		dbms_lock.sleep(1);
	end loop;
	pkg_log.flush;    -- signal the reader to stop
end;
/

-- session #2
set array 2
set pages 100
select * from table(pkg_log.get_log);


2009/11/10 13:30:12, 1th insertion

2009/11/10 13:30:13, 2th insertion

2009/11/10 13:30:14, 3th insertion

2009/11/10 13:30:15, 4th insertion

2009/11/10 13:30:16, 5th insertion

2009/11/10 13:30:17, 6th insertion

2009/11/10 13:30:18, 7th insertion

2009/11/10 13:30:19, 8th insertion

2009/11/10 13:30:20, 9th insertion

2009/11/10 13:30:21, 10th insertion

This is a situation where a little knowledge can play an important role in solving the problem.


Written by Dion Cho

November 10, 2009 at 8:40 am

Posted in Programming


3 Responses


  1. Dion,

    Thanks for the link. I’m considering a number of those options for future versions of logger so in addition to logging to a table, you can also log to a file or pipe. Personally, I think a table is still the most useful place. I also have to weigh the privs needed for dbms_system and / or dbms_pipe. Currently, logger only needs “create view, create job, create table, create sequence, create trigger, create procedure, create any context”.

    Any thoughts on features you would like to see?


    Tyler Muth

    November 10, 2009 at 12:47 pm

    • Tyler, thanks for visiting.

      As of now, I don’t have many suggestions. But if I have a chance to use it in my next project, I will make a bunch of suggestions! :)

      Dion Cho

      November 12, 2009 at 1:39 am

  2. Logging to a table has a problem..

    The log transaction needs to be committed before we can read the log. So sometimes we may need to use autonomous transactions to do the table logging.
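    (A minimal sketch of such an autonomous-transaction logger; the table name app_log and the procedure name write_log are hypothetical:)

    ```sql
    -- Assumption: create table app_log (log_time date, message varchar2(4000));
    create or replace procedure write_log(p_message in varchar2) is
    	pragma autonomous_transaction;  -- commits independently of the caller's transaction
    begin
    	insert into app_log(log_time, message) values (sysdate, p_message);
    	commit;                         -- visible to other sessions immediately
    end write_log;
    /
    ```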


    December 2, 2009 at 7:02 am
