It would be helpful to know your current version and any imminent upgrade plans. Alternate Buffer Pool (ABP) is a pretty solid feature but there are one or two version-specific gotchas.
Also, it matters what your structure looks like. If you're considering ABP then you're on 10.2B or later, so you should also have all application tables, indexes, and LOB columns in Type II storage areas. If you do, you have more flexibility with ABP assignments: you can assign entire areas to ABP, but you can also assign at the object level for objects in Type II areas. For Type I areas, object-level assignment is not possible; you can only assign the entire area to ABP.
Yes, the schema area is a good candidate for ABP: it's small, pretty static, and often heavily read.
Once an area is "enabled", does that attribute stick? I mean, does it persist through shutting down and starting up the DB broker?
Area-level assignments are stored in _Area, which is located in the control area (dbname.db). Unlike some other area attributes, however, ABP assignments are not reflected in the structure file. Object-level assignments are stored in _StorageObject and are reflected in a schema dump (.df). Such .df files have a "bufpool=yes" entry in the trailer and 'BUFFER POOL "Alternate"' in their UPDATE TABLE statements.
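For illustration, an object-level assignment in a dumped .df looks roughly like this (the table and index names here are hypothetical, and the exact layout can vary by version):

```
UPDATE TABLE "country"
  BUFFER POOL "Alternate"

UPDATE INDEX "country-code" OF "country"
  BUFFER POOL "Alternate"
```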
I think it's important to know these plumbing details so you can reason about the caveats and why they exist.
- ABP assignments, both objects and areas, stick through a DB restart.
- In a dump & load you create a new target from scratch with a .st, .df, and data files. For the reasons stated above, object-level assignments stick in this scenario (assuming, of course, that they're in the .df) and area-level assignments do not. Make sure you update your D&L checklist to add "compare ABP assignments on source & target", e.g. by diffing proutil viewB2 reports from each. I've seen people allocate memory to -B2 and fail to realize they had no assignments in their newly-created DB.
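For example (database and file names hypothetical), you could capture and compare the reports like this:

```
proutil proddb -C viewB2 > prod_b2.txt
proutil newdb  -C viewB2 > new_b2.txt
diff prod_b2.txt new_b2.txt
```

An empty diff means the new DB's assignments match production.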
- In a prorest (of a backup with ABP assignments) on top of an existing DB with area-level assignments, the control area is retained, so area-level assignments stick. And since _StorageObject is one of the tables that was backed up, object-level assignments stick too.
- In a prorest (of a backup with ABP assignments) that is not on top of an existing database, object-level assignments stick and area-level assignments do not.
- If you are upgrading your database schema, e.g. from one version of an application schema to another, be careful about how you create the incremental .df. For example, if you do it by dumping the prod .df and diffing it against a version-master schema holder (which may not have ABP assignments in it), the resulting incremental .df will contain UPDATE TABLE/INDEX statements that remove the object-level ABP assignments. Fortunately these are fairly easy to find, as they come at the end of the .df. Just as you should watch for potential data-loss statements like DROP TABLE and DROP FIELD in incremental .dfs, you should also watch for BUFFER POOL "Primary" in UPDATE TABLE/INDEX statements.
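For example, before loading an incremental .df you might scan it for both kinds of dangerous statements (file name hypothetical):

```
grep -n 'BUFFER POOL "Primary"' delta.df
grep -n -E 'DROP (TABLE|FIELD)'  delta.df
```

Any hits deserve a manual review before the load.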
- ABP currently isn't available, and -B2 has no effect, in an OE Replication target.
Area-level assignments are done, as you know, with proutil enableb2; this must be done offline. Object-level assignments can be done in two ways: with the (tedious, confusing, fairly awful) Data Dictionary interface, or by creating a .df with the desired UPDATE TABLE/INDEX statements and applying it to the DB. Either approach can be done online or offline.
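As a sketch, an area-level assignment looks like this (database and area names hypothetical; the DB must be shut down first):

```
proutil proddb -C enableB2 "Reference_Data"
```

For the object-level route, the .df you create and load just contains UPDATE TABLE/INDEX statements with BUFFER POOL "Alternate", as described above.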
There are two performance benefits to using ABP: reducing LRU latch contention in the primary buffer pool and (hopefully) ensuring that the objects or areas you assign remain memory-resident in the ABP, even if you run queries against very large tables that would otherwise flush that valuable data out of memory.
Since 10.2B06/11.1, there is an easier and more comprehensive way to reduce LRU latch contention: use the -lruskips parameter. You can start your DB with -lruskips 100 and reduce your LRU latch locks to a fraction of what they used to be, given the same logical I/O. Nice and easy. So I would say that reducing latch contention shouldn't be your primary motivation for using ABP. But if configured appropriately it will still provide some marginal benefit there, so that's nice.
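For example, a broker startup using both -lruskips and ABP might look like this (the values here are purely illustrative, not recommendations; size -B and -B2 for your own workload):

```
proserve proddb -B 500000 -B2 25000 -lruskips 100
```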
The primary reason for using ABP is to keep "hot" data memory-resident. I said "hopefully" above because it is up to you, the DBA, to ensure this is the case and remains so. This means you need to do some planning, specifically about what data to assign to ABP and therefore, worst-case, how large -B2 needs to be. It's not as easy as it should be or could be, and you should revisit your performance metrics in promon from time to time to ensure -B2 is still large enough.
What to assign to ABP:
- the schema area
- small, static, frequently-read tables; many applications have small reference tables (countries, tax codes, companies, etc.) that are very heavily-read
- indexes of those small hot tables
- TDE-encrypted tables and indexes, if they are heavily read; keeping the data in memory minimizes the overhead of encryption on disk write and decryption on disk read
- (perhaps) the relatively small and slow-growing indexes of large and heavily-read, fast-growing tables that are themselves too large to assign to ABP
How to find heavily-read objects:
- Set the statistics startup parameters (-tablerangesize/-indexrangesize) high enough, i.e. above the highest-numbered application table and application index numbers respectively.
- Read the data in _TableStat and _IndexStat. Analyze it in descending order of reads to find the most-read objects. Cross-reference this with record counts from a dbanalys report so you have an understanding of table sizes. Also look at the create/update/delete activity to get a sense of which tables may be growing over time, and at what rate. For this purpose it's valuable to have several recent reports, perhaps monthly, so you can look at growth trends.
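A minimal ABL sketch of that query (it assumes -tablerangesize is set high enough, as above; note that sorting a VST with BY happens client-side):

```
/* top tables by read activity, with CUD counts for growth trends */
for each _TableStat no-lock
    by _TableStat._TableStat-read descending:
  find _File no-lock
       where _File._File-Number = _TableStat._TableStat-id no-error.
  if available _File then
    display _File._File-Name format "x(24)"
            _TableStat._TableStat-read
            _TableStat._TableStat-create
            _TableStat._TableStat-update
            _TableStat._TableStat-delete.
end.
```

The same pattern works for _IndexStat joined to _Index for the index side.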
Conduct a sizing exercise:
- Schema area: easy; get its size on disk and divide by DB block size. Add that to -B2.
- Any other area: get its area HWM (in blocks) from _AreaStatus._AreaStatus-hiwater. Add that to -B2.
- Indexes: easy (if a bit tedious): get their size on disk, in DB blocks, directly from a dbanalys/idxanalys report. Add all those numbers to -B2.
- Tables: very tedious! There's no block count in dbanalys, unless the table is alone in its own area, which is unlikely for ABP-candidate tables. My approach in v11 is:
- Create a copy of the production DB via probkup.
- Once the tables to be assigned have been chosen, write a program like this:
for each table1 no-lock table-scan: end.
for each table2 no-lock table-scan: end.
/* ...one line per candidate table */
- Start the test DB with a fairly large -B.
- Connect to it in a procedure editor, load the program.
- Note the value of "used buffers" in promon R&D 1 7.
- Run the program.
- Note used buffers again, subtracting the first value from the second.
- Add this to -B2.
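The area-HWM lookup in the sizing steps above can also be sketched in ABL (area name hypothetical):

```
/* area high-water mark, in DB blocks, to add to -B2 */
for each _AreaStatus no-lock
    where _AreaStatus._AreaStatus-Areaname = "Reference_Data":
  display _AreaStatus._AreaStatus-Areaname
          _AreaStatus._AreaStatus-Hiwater.
end.
```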
The sum of these numbers is your minimum -B2. Increase that number by some factor to allow for growth of those objects over time, say 20%. You can pick a number you're comfortable with based on your CRUD stats and how often you'll be checking in on this DB. Remember that if you let the total number of blocks read in these objects exceed -B2 then you will have (a) blocks evicted from ABP and (b) the LRU replacement mechanism enabled in ABP (and contention for the LRU2 latch) until you increase -B2 and manually disable the mechanism in promon.
There's more to say, but that's the thrust of it. I did a presentation on ABP last year at my local PUG meeting. It would be worth reviewing to get up to speed and to pick up a few more tips and gotchas.
This is a link to the slide decks and audio recordings from that day in one zip file. It contains my presentation and the others (on different topics) by Tom Bascom, Mike Furgal, and Rich Banville.
Let me know if you have any other questions.