Question Fixed extents - why bother (vs. Variable only)

Chris Hughes

ProgressTalk.com Sponsor
Hi

A shocking title, I know, but now that I have your attention please let me explain...

I live (for the absolute most part) in 10.2B+, soon 11, on various OSes, with large files enabled.

I know that prior to enablelargefiles (and various other OS limitations) 2GB was as big as a single file could go. But now we don't have that limitation...

So to keep this simple I'm inventing a scenario. Let's say the raw DB data is 5GB, stored on its own 50GB drive (no chance of running out of space). It's Type II with 9 areas, rows-per-block sizes 1, 2, 4, 8, 16, 32, 64, 128, 256, all with a cluster size of 512.

If I create a single fixed extent of 1GB in each area, plus one variable extent, I now use 9GB to hold 5GB of data...
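As an illustration only, a structure (.st) file for that kind of layout might look like the sketch below: one fixed 1GB extent plus a trailing variable extent per area. The area names, area numbers, and paths are all invented for this example; the `:number,rpb;cluster` notation and `f 1048576` (fixed size in KB) follow the Progress structure-file syntax.

```
# mydb.st -- invented example: one fixed 1GB extent + one variable per area
b /db/bi/mydb.b1
d "Schema Area":6,64 /db/mydb.d1
#
d "Data_RPB64":10,64;512 /db/d64/mydb_10.d1 f 1048576
d "Data_RPB64":10,64;512 /db/d64/mydb_10.d2
d "Data_RPB128":11,128;512 /db/d128/mydb_11.d1 f 1048576
d "Data_RPB128":11,128;512 /db/d128/mydb_11.d2
# ...and similarly for the other seven areas (RPB 1,2,4,8,16,32,256)
```

A file like this would then typically be fed to prostrct create (with the block size chosen at create time) before loading.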

So why don't I / we / all of us just use only variable extents now?

Thanks for reading.

Chris.
 

TomBascom

Curmudgeon
If your db size is modest there is no point in using fixed extents.

"Modest" is in the eye of the beholder. For this beholder, on this date, it's modest so long as it is less than 100GB.

During normal operations almost nobody will ever notice a difference.

There are a few times when you can tell the difference -- basically any high growth scenario (GB per minute). Like the load portion of dump & load. But, again, if your db is modest in size there is no point in getting excited about it.
 

Chris Hughes

Cheers Tom - you are confirming what I have been thinking for a while.

Out of interest (as some customer DBs can be 200GB+), what would you do these days: use single 20GB fixed extents and then let the variable extents grow, or not?

I'm not aware that the products I work with do any large multi-GB data take-ons on a regular basis.

Thanks
 

TomBascom

You are at 10.2B+ so I assume that you have defined at least a few type 2 areas.

Usually there are a handful of big tables that account for the bulk of the db -- I usually have these in dedicated storage areas. Those areas I would create with a small number of fixed extents ("small" as in 2 to 4). I like powers of two, so I'd probably create them using some nearby power-of-2 sort of number ;)

And if I'm going to go to the bother of creating fixed extents for those areas, I'll likely create some initial extents for the others too. It isn't much extra effort, and if I'm going to run prostrct create I may as well go "whole hog"...

But then I'll stop worrying about it.
 

Rob Fitzpatrick

So what follows is not really your structure (I hope).


Chris Hughes

No, not at all; it's very much invented. I did once do a DB with an 8, 32, 64, 128 rows-per-block, 512-cluster structure though. I see little justification for smaller cluster sizes when storage is so cheap!

Out of interest, if the recs per block were calculated correctly and the DB was bigger, why would this be so bad?

Thanks
 

TomBascom

Its type 2 with 9 area's block sizes 1,2,4,8,16,32,64,128,256 all clustered to 512.

You probably meant rows per block, not block size.

Assuming an 8k block -- a minimum of 512 blocks is 4MB per table/index/LOB. Most databases have a lot of tables and even more indexes that are much smaller than that. In fact a lot of databases have a lot of tables that are empty and unused. Yes, disk space is cheap -- but 1,000 tables and 2,000 indexes would require 12GB just for the *empty* db. That seems a bit excessive. But maybe in today's terabyte disk world it's not that big a deal....

OTOH -- if you're going to go to the trouble of calculating the "right" rows-per-block size you may as well take a pass at the table & index sizes and put the little guys in a cluster size of 8.

Or, if you want a general "one size fits most" default -- go with 8 for small databases and increase to 512 when it makes sense.
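Tom's 12GB figure is easy to sanity-check. A minimal back-of-envelope sketch (the object counts are the hypothetical ones from the post, and an 8k database block is assumed):

```python
# Minimum footprint of an *empty* Type II database: every table, index,
# and LOB claims at least one cluster in its area.
BLOCK_SIZE = 8 * 1024  # assumed 8k database block


def min_footprint(objects, cluster_blocks, block_size=BLOCK_SIZE):
    # Bytes consumed before a single row is loaded.
    return objects * cluster_blocks * block_size


objects = 1_000 + 2_000  # hypothetical: tables + indexes
print(min_footprint(objects, 512) / 1024**3)  # ~11.7 GB -> Tom's "12GB"
print(min_footprint(objects, 8) / 1024**3)    # ~0.18 GB with 8-block clusters
```

The same 3,000 objects at a cluster size of 8 would claim well under 200MB, which is the whole argument for small clusters on small objects.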
 

Chris Hughes

Assuming an 8k block -- a minimum of 512 blocks is 4MB per table/index/LOB. Most databases have a lot of tables and even more indexes that are much smaller than that. In fact a lot of databases have a lot of tables that are empty and unused. Yes, disk space is cheap -- but 1,000 tables and 2,000 indexes would require 12GB just for the *empty* db. That seems a bit excessive. But maybe in today's terabyte disk world it's not that big a deal....

OTOH -- if you're going to go to the trouble of calculating the "right" rows-per-block size you may as well take a pass at the table & index sizes and put the little guys in a cluster size of 8.

Or, if you want a general "one size fits most" default -- go with 8 for small databases and increase to 512 when it makes sense.

Yes, granted, once you reach thousands of objects it gets worse. The DBs I'm working with have object counts in the hundreds for the most part though.

I didn't size the indexes to 512, if I recall, because of this.

I'm going to build an 8 and a 512 DB to check out the size difference with some dummy data.
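Before building the two databases, the minimum-allocation side of that difference can be estimated on paper. A rough sketch, assuming 8k blocks and 300 objects as a stand-in for a few-hundred-object schema:

```python
# Minimum space claimed by empty objects at two cluster sizes.
BLOCK = 8 * 1024  # assumed 8k database block


def min_mb(objects, cluster_blocks, block=BLOCK):
    # One cluster per object, converted to MB.
    return objects * cluster_blocks * block / 1024**2


for cluster in (8, 512):
    print(f"cluster {cluster:>3}: {min_mb(300, cluster):,.0f} MB minimum")
```

Real dummy-data tests will of course also show fill and growth effects that this floor calculation ignores.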

Cheers
 

Rob Fitzpatrick

Tom's description matches my typical DBs: a few really large tables that dominate in size and/or I/O profile, and many that are tiny or empty. I put the small stuff in multi-object areas (separating tables from indexes, of course), with RPB 128 and cluster size 8. For the fast-growing tables I tailor RPB to the data and use cluster size 512.
 