Thursday, July 26, 2012

Questions – 1

A recent question on the OTN Database forum:

If the block size of the database is 8K and average row length is 2K and if we  select all column of the table the I/O show that it had read more blocks then compare to specific column of the same table. Why is this?


Secondly if Oracle brings a complete block is the db buffer cache, while reading block from disk, they why there is a difference of block count in two queries. This difference reveals to me when I check the EXPLAIN PLAN for two different queries against the same table but one select all columns and the other one select specific column.


Kindly help me in clearing this confusion.

I can’t see anything subtle and complex in the problem as stated, so why doesn’t the OP give a clear explanation of what’s puzzling them?
When I say that, it’s not a demand for a better use of English (I am aware that the majority of Oracle DBAs have English as a second (or Nth) language, so problems of grammar, syntax, punctuation and spelling don’t worry me); it’s a statement of bafflement that someone can’t simply say: “This is what I see – how can I explain this difference?”
It’s difficult to describe the difference between a good and a bad question, and if you were asked to give marks out of 10 for the quality and clarity of a question you’d probably be hard-pushed to justify any sort of precision in your marking (why 7/10 rather than 6 or 8?). But some basic guidelines would be:

  • what’s the background

  • describe the problem

  • check and resolve any ambiguities in your description

  • present the evidence

  • check that the presentation is clear and easy to read


Let’s apply the guidelines to the question above – and I’ll be inventing examples of the sort of things the OP should have said:

I’m running Oracle 9.2.0.8 on a 32-bit Windows system, and I’ve been using SQL*Plus to run some tests.

Okay – so 9.2.0.8 is a pretty old version (although there are still plenty of production systems running it) and may behave differently from newer versions; plus there may be some “volume” oddities of 32-bit Oracle that don’t reproduce on a 64-bit system. It’s nice to know that the testing tool is SQL*Plus, because it’s easy to repeat tests in such a simple environment; it’s not nice when someone tells you what they think they’re seeing and you subsequently discover they’re using some graphical tool that confuses the issue by opening 3 background sessions while your attempt to reproduce the problem in SQL*Plus uses a single session.



I’ve created a table with the following definition


create table t1(
v001 varchar2(10),
v002 varchar2(10),
...
v199 varchar2(10),
v200 varchar2(10)
);

insert into t1
select lpad(rownum, 10), lpad(rownum,10), .... lpad(rownum,10)
from all_objects
where rownum <= 20000
;

commit;
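Incidentally, the elided column lists above don’t have to be typed out by hand if you ever need the full script – a few lines of code can generate the repetitive DDL and INSERT. A sketch (in Python, purely as an illustration; the table and column names match the example above):

```python
# Generate the full DDL and INSERT statements that the "..." in the
# listing above abbreviates: 200 varchar2(10) columns, v001 .. v200.
cols = ["v%03d" % i for i in range(1, 201)]

ddl = "create table t1(\n" + ",\n".join(
    "\t%s varchar2(10)" % c for c in cols
) + "\n);"

ins = ("insert into t1\nselect\n" + ",\n".join(
    "\tlpad(rownum, 10)" for _ in cols
) + "\nfrom all_objects\nwhere rownum <= 20000;")

print(ddl)
print(ins)
```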



The table is in a locally managed tablespace that uses an 8KB block size, uniform extents of 1MB, and ASSM.

Note how I’ve picked up a couple of points about the storage of the object that might be relevant – ASSM (segment space management) and extent management can produce interesting variations in behaviour. I’ve also shown that I’ve got some very long rows – but although I’ve said the rows are 2K, the rows I’ve actually defined aren’t quite what I said (there are 2,000 bytes for data – or maybe more if it’s a multibyte character set that I’ve forgotten to mention) plus 200 bytes for column lengths, plus 7 bytes for basic row overhead.
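For what it’s worth, the back-of-envelope arithmetic is easy to check (assuming a single-byte character set and one length byte per column, which holds for column values shorter than 128 bytes):

```python
# Rough row size for the 200-column table above (single-byte charset).
n_cols     = 200
data_bytes = n_cols * 10   # lpad(rownum, 10) -> 10 bytes of data per column
len_bytes  = n_cols * 1    # one length byte per column (values < 128 bytes)
row_head   = 7             # basic row overhead

row_size = data_bytes + len_bytes + row_head
print(row_size)            # 2207 -- a little over the quoted "2K"

# Ignoring the block header and pctfree, an upper bound on rows per 8KB block:
print(8192 // row_size)    # 3
```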
In this example I’ve decided to show you that I have 200 columns, but I’ve avoided filling a page with garbage by indicating the repetitious nature of the column definitions and data. Obviously this isn’t always possible – but where you can indicate such things without packing the page it’s rather helpful – a thoughtful attempt to create a readable description encourages people to read the thing. It’s important, of course, to worry about the table definition – in this particular case it might be highly significant that the “2KB” rows don’t include a LOB or LONG column; if we noticed that the number of columns was more than the critical 254 – leading to intra-block row-piece chaining – that could be important.



I run two queries, “select * from t1” and “select v001, v010, v100 from t1”. When I run these two queries with autotrace enabled (set autotrace traceonly) I see that the execution plan is the same for both queries, but the number of block visits is dramatically different:


SQL> select member_id from member;

20000 rows selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 3441279308

----------------------------------------------------------------------------
| Id  | Operation         | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |        | 20000 | 40000 |   341   (0)| 00:00:05 |
|   1 |  TABLE ACCESS FULL| MEMBER | 20000 | 40000 |   341   (0)| 00:00:05 |
----------------------------------------------------------------------------

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
       2004  consistent gets
         15  physical reads
          0  redo size
     201918  bytes sent via SQL*Net to client
       9204  bytes received via SQL*Net from client
        801  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
      20000  rows processed

SQL> select member_id , block_age_max_nbr from member;

20000 rows selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 3441279308

----------------------------------------------------------------------------
| Id  | Operation         | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |        | 20000 | 80000 |   345   (2)| 00:00:05 |
|   1 |  TABLE ACCESS FULL| MEMBER | 20000 | 80000 |   345   (2)| 00:00:05 |
----------------------------------------------------------------------------

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
      22004  consistent gets
          6  physical reads
          0  redo size
     201988  bytes sent via SQL*Net to client
       9204  bytes received via SQL*Net from client
        801  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
      20000  rows processed




Why do we see so many more consistent gets in the second test?


You’ll notice, by the way, that this SQL*Plus extract ISN’T for the table I’ve described above – it’s actually reflecting the behaviour you get in 11.1 from a tablescan where you’re selecting the first and last columns of a table with 260 columns (i.e. over the “magic” row-piece limit).
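The statistics are at least consistent with that explanation – a quick sanity check on the numbers (the interpretation, one extra consistent get per row to reach the second row piece, is my reading rather than anything the output states directly):

```python
# Compare the two "consistent gets" figures from the autotrace output above.
rows      = 20000
first_col = 2004    # consistent gets when selecting only the first column
both_cols = 22004   # consistent gets when also selecting the last column

extra = both_cols - first_col
print(extra)          # 20000
print(extra / rows)   # 1.0 -> exactly one extra get per row fetched
```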
I’ve taken care to ensure that the output is tidy – there are no line wraps in the middle of the execution plans (set linesize), and there are no page breaks halfway through plans or tables of results (set pagesize 60, plus a little editing). I’ve deleted a few blank lines to reduce reading length, and in many cases I would go further and delete some of the other redundant lines (e.g. the little group of “Execution Plan” lines) and some of the lines which I was confident carried no useful information (e.g. the statistics lines with zero – although in this case I would comment that I had deleted the zero stats). I might even go so far as deleting the stats which were identical across the two sets of data.
And when I’d done all this – I still wouldn’t be surprised if someone asked me for further details about my demonstration.


Summary


If you have observed something you don’t understand and want to ask a question about it, you need to make it easy for other people to “see” what you are seeing, and “know” what you already know about the working environment. If you can’t manage to present your problem clearly there’s a fair chance you won’t get an answer – let alone one that is good, relevant, or even correct.


Addendum:


Here’s another question that came up a couple of days ago:

i have one table with INITRANS as 1… this table is updated by many users. on a peak time locks are getting increase on this table this cause increase in database lock…. can u please advice me the best practise to resolve this issue.


This table has 16 records and which gets checked in the loop if found then update and insert in almost all the cases its update… looking forward for your help

The OP is clearly not a native English speaker – so why not follow my guidelines, which conveniently let the database help with the explanation?



At peak time I see the following locking:


SQL> select * from v$lock where type in ('TX','TM') and (block = 1 or request != 0) order by sid, type desc;

ADDR     KADDR           SID TY        ID1        ID2      LMODE    REQUEST      CTIME      BLOCK
-------- -------- ---------- -- ---------- ---------- ---------- ---------- ---------- ----------
2020D0AC 2020D0EC         34 TX     262158      12134          6          0        363          1
... etc ...




Table XYZ is the table with ID1 = xxxx, and it has 16 rows in it etc…





