Friday, August 31, 2012

Surprises

I’ve given examples in the past of how you can be surprised by Oracle when a mechanism that has “always worked” suddenly just doesn’t work because some unexpected internal coding mechanism hits a boundary condition. One example I gave of this was rebuilding indexes online – where a limitation on the key size of index organized tables made it impossible to do an online rebuild of an index on a heap table, because an ORA-01450 (maximum key length) error was raised against the (index-organized) journal table that Oracle creates internally to support the rebuild.
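For anyone who wants a flavour of that older example, here is a minimal sketch; the table and column sizes are hypothetical, and whether the online rebuild actually hits ORA-01450 will depend on your block size and version:

create table t1 (v1 varchar2(3000), v2 varchar2(3000));
create index t1_i1 on t1(v1, v2);

alter index t1_i1 rebuild;
alter index t1_i1 rebuild online;       -- may raise ORA-01450 against the (index-organized) journal table
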
Slightly more recently, the topic of the sys_op_map_nonnull() function came up in some comments about the ora_hash() function – I made a passing comment there about the function appearing in materialized view refreshes and then never followed it up. Fortunately I got an email from Jared Still and Don Seiler a few days ago which contained a nice short demonstration of the problem so, with their permission, I’ve published it below.
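For anyone who hasn’t met sys_op_map_nonnull() before, here’s a quick look at what it returns (it’s undocumented, so treat this as illustration rather than specification); dump() shows that the result carries one extra byte on the end, and that NULL maps to a real, comparable value:

select
        dump('A')                       plain_a,
        dump(sys_op_map_nonnull('A'))   mapped_a,
        dump(sys_op_map_nonnull(null))  mapped_null
from
        dual
;
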
We start by creating a table with a varchar2(4000) column – and in my case I can do this because my database character set is a single-byte character set – then build a materialized view log on the table:

create table frbug
as
select
        t.owner,
        t.table_name,
        cast('x' as varchar2(4000))     data
from
        all_tables t
where
        rownum <= 2000
;

alter table frbug modify data not null;

create index frbug_pk_idx on frbug(owner, table_name);
alter table frbug add constraint frbug_pk primary key (owner, table_name);

create materialized view log on frbug
with primary key, sequence, rowid
(data)
including new values
;



You’ll notice that the code creates a non-unique index to support the primary key constraint. I haven’t checked whether this makes any difference to the outcome of the demonstration, but anyone who wants to repeat the test can check for themselves.
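If you do want to check, the data dictionary will tell you which index Oracle has picked to enforce the constraint and whether it is unique:

select
        c.constraint_name, c.index_name, i.uniqueness
from
        user_constraints c, user_indexes i
where
        c.table_name      = 'FRBUG'
and     c.constraint_type = 'P'
and     i.index_name      = c.index_name
;
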
After creating the log, the next piece of code creates an aggregate materialized view on a pre-built table.

create table frbug_mv
as
select
        owner,
        table_name,
        data,
        1       mvcount
from
        frbug
;

create index frbug_mv_pk_idx on frbug_mv(owner, table_name);
alter table frbug_mv add constraint frbug_mv_pk primary key (owner, table_name);

create materialized view frbug_mv
on prebuilt table
refresh fast
enable query rewrite
as
select
        owner, table_name, data, count(*) mvcount
from
        frbug
group by
        owner, table_name, data
;



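As a quick sanity check before testing, dbms_mview.explain_mview will report whether Oracle considers the view fast-refreshable; this assumes you have already created mv_capabilities_table by running $ORACLE_HOME/rdbms/admin/utlxmv.sql:

exec dbms_mview.explain_mview('FRBUG_MV')

select
        capability_name, possible, msgtxt
from
        mv_capabilities_table
where
        mvname = 'FRBUG_MV'
and     capability_name like 'REFRESH_FAST%'
;
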
The primary key constraint on the materialized view is also protected by a non-unique index, but in this case that’s a requirement of Oracle’s implementation – if you support the primary key (or unique) constraint with a unique index then a fast refresh may, in general, end up generating a “duplicate key in index” error.
Finally we insert a row into the base table and call for a fast refresh:

insert into frbug(owner, table_name, data)
values('JKSTILL','BIG_TABLE',rpad('is this too big?', &m_length,'X'))
;

commit;

exec dbms_mview.refresh('FRBUG_MV', method => 'F')

*
ERROR at line 1:
ORA-12008: error in materialized view refresh path
ORA-01706: user function result value was too large
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 2254
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 2460
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 2429
ORA-06512: at line 1



You’ll note that in my insert statement I’ve used a substitution variable ( &m_length ) to dictate the length of the string I insert for the data column. If I set this to 4,000 the refresh fails; any smaller value and the refresh succeeds. (Note: the line numbers reported in the ORA-06512 errors will vary with the version of Oracle, but the ORA-01706 error is always the same.)
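In SQL*Plus you can drive the test with define; the table_name values below are hypothetical, they just keep the primary key happy across the two inserts:

define m_length = 3999

insert into frbug(owner, table_name, data)
values('JKSTILL','SMALL_ENOUGH',rpad('is this too big?', &m_length,'X'));
commit;

-- this refresh succeeds
exec dbms_mview.refresh('FRBUG_MV', method => 'F')

define m_length = 4000

insert into frbug(owner, table_name, data)
values('JKSTILL','TOO_BIG',rpad('is this too big?', &m_length,'X'));
commit;

-- this refresh fails with ORA-01706
exec dbms_mview.refresh('FRBUG_MV', method => 'F')
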
You can find out what’s gone wrong when you re-run the refresh with sql_trace enabled – in the trace file you’ll find a statement like the following (which I’ve tidied up considerably):

MERGE INTO TEST_USER.FRBUG_MV SNA$
USING (
        SELECT  /*+ OPT_ESTIMATE(QUERY_BLOCK MAX=1000) */
                DLT$0.OWNER GB0, DLT$0.TABLE_NAME GB1, DLT$0.DATA GB2, SUM(1) D0
        FROM    (
                SELECT
                        CHARTOROWID(MAS$.M_ROW$) RID$, MAS$.DATA, MAS$.TABLE_NAME, MAS$.OWNER
                FROM
                        TEST_USER.MLOG$_FRBUG MAS$
                WHERE
                        MAS$.SNAPTIME$ > :1
                ) AS OF SNAPSHOT (:2) DLT$0
        GROUP BY
                DLT$0.OWNER, DLT$0.TABLE_NAME, DLT$0.DATA
        ) AV$
ON (
        SYS_OP_MAP_NONNULL(SNA$.OWNER)      = SYS_OP_MAP_NONNULL(AV$.GB0)
AND     SYS_OP_MAP_NONNULL(SNA$.TABLE_NAME) = SYS_OP_MAP_NONNULL(AV$.GB1)
AND     SYS_OP_MAP_NONNULL(SNA$.DATA)       = SYS_OP_MAP_NONNULL(AV$.GB2)
)
WHEN MATCHED THEN
        UPDATE SET SNA$.MVCOUNT = SNA$.MVCOUNT + AV$.D0
WHEN NOT MATCHED THEN
        INSERT (SNA$.OWNER, SNA$.TABLE_NAME, SNA$.DATA, SNA$.MVCOUNT)
        VALUES (AV$.GB0, AV$.GB1, AV$.GB2, AV$.D0)
;


Spot the problem – it’s the use of the sys_op_map_nonnull() function. As it says in the comments I mentioned above, this function adds one byte to the existing value – so if the input is already 4,000 bytes long the output exceeds the legal limit and results in an ORA-01706. (Try to select sys_op_map_nonnull(rpad('x',4000,'x')) from dual if you want a simpler demonstration.)
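Spelling that simpler demonstration out (the first statement just confirms that 4,000 bytes on its own is perfectly legal):

select lengthb(rpad('x',4000,'x')) from dual;

-- raises ORA-01706: user function result value was too large
select sys_op_map_nonnull(rpad('x',4000,'x')) from dual;
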
The worrying thing about this particular example is that you could run a production system with these definitions and everything could work perfectly for ages, until the day that someone inserts data that uses the full 4,000 byte length of the column declaration – and then your materialized view can’t be refreshed.
And here’s an annoying detail that applies in this case – I altered the data column to declare it as not null, which means the sys_op_map_nonnull() call is redundant anyway. That’s the drawback, I guess, of reusable code – sometimes you re-use it when perhaps you ought to “special case” it.
Footnote: it’s possible that Oracle has some code that decides whether or not to use the merge command to do the refresh – I haven’t checked it, but Oracle might choose between a merge and a “delete/insert” cycle; if you find that Oracle doesn’t crash when you try this test with the 4,000 byte character string then check the trace file to see whether Oracle has used a different strategy for the refresh.
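For reference, one simple way to capture and locate that trace file (assuming you have the alter session privilege; the tracefile_identifier string is just an arbitrary tag):

alter session set tracefile_identifier = 'frbug_refresh';
alter session set sql_trace = true;

exec dbms_mview.refresh('FRBUG_MV', method => 'F')

alter session set sql_trace = false;

-- 11g onwards: report the trace file name directly
select value from v$diag_info where name = 'Default Trace File';
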
Teaser: for an even stranger behaviour relating to fast refresh materialized views, see next week’s exciting episode.


