Search results

  1. catch.saravana

    Ai Roll Forward Error

    Any specific reason, @TomBascom ? In most cases I will use the probkup approach, but in scenarios where we have a time constraint (be it prod, preprod, or dev) I am sure we would love to go with the copy-extents approach, which is a lot faster. If you feel it's fragile please let me know what...
  2. catch.saravana

    Ai Roll Forward Error

    @Rob Fitzpatrick I used RETRY option and Roll Forward ran successfully. [sbalasub@dev-xyz ~]$ /opt/dlc/bin/_rfutil /opt/dbba/hotspare1pp -C roll forward retry -a /netappxyz_preprod/dbadmin/aiprocess/aiclone/forhotspare1/preprodxyz.20170322170414 After-image dates for this after-image file...
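
    The retry sequence quoted above can be sketched as follows; the DLC, database, and AI extent paths are placeholders for illustration, not the poster's actual values:

```shell
# Hypothetical paths; substitute your own install dir, db, and AI extent.
DLC=/opt/dlc
DB=/path/to/hotspare/dbname            # target db (no .db extension)
AIFILE=/path/to/archived/ai/extent     # archived after-image extent

# Normal roll forward of an archived AI extent:
$DLC/bin/rfutil $DB -C roll forward -a $AIFILE

# If a previous roll forward was interrupted partway through this extent,
# the RETRY qualifier resumes from where the earlier attempt stopped:
$DLC/bin/rfutil $DB -C roll forward retry -a $AIFILE
```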
  3. catch.saravana

    Ai Roll Forward Error

    @Rob Fitzpatrick No specific reason for using prostrct builddb over prostrct repair. I recently created the db multiple times from a ZFS snapshot; once we get the snapshot, I delete/rename the .db and .lg files and run prostrct builddb to get the control area (.db) recreated. I guess both prostrct...
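
    The snapshot-recovery flow described here can be sketched as below; the paths are hypothetical:

```shell
# After mounting a filesystem (e.g. ZFS) snapshot of the database volume,
# recreate the control area from the structure description file.
DLC=/opt/dlc
DB=/path/to/dbname

# Move the stale control area and log aside so builddb can recreate them:
mv $DB.db $DB.db.old
mv $DB.lg $DB.lg.old

# builddb recreates the control area (.db) from the .st file; the data
# extents listed in the .st must already exist on disk:
$DLC/bin/prostrct builddb $DB $DB.st

# prostrct repair, by contrast, updates the extent paths in an existing
# control area in place:
# $DLC/bin/prostrct repair $DB $DB.st
```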
  4. catch.saravana

    Ai Roll Forward Error

    In our case: probkup - 7 hrs; prorest - 1. net new location (fresh db): 13 hrs, 2. pregrown db: 8 hrs. Even if we consider the db pregrown and do a restore, recreating a hotspare db will take around 15 hrs overall (7 for backup + 8 for restore). We have 2 hotspare db's and...
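
    The backup/restore pair being timed above looks like this in outline; the paths are hypothetical:

```shell
DLC=/opt/dlc
SRC=/path/to/source/dbname
TGT=/path/to/hotspare/dbname

# Full backup to a single backup file (add 'online' before the db name
# to back up while the db is running):
$DLC/bin/probkup $SRC /path/to/backups/dbname.bak

# Restore into the hotspare target; the timings quoted suggest that
# restoring over pregrown extents avoids re-formatting them:
$DLC/bin/prorest $TGT /path/to/backups/dbname.bak
```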
  5. catch.saravana

    Ai Roll Forward Error

    I used this approach (prostrct builddb) on a test db with the same .st file but no data in the tables, and it worked fine for me. That's when I became confident about doing the same in preprod, but it failed. The only difference is that while running the test on the test db I didn't open the hotspare db. Switch and Roll...
  6. catch.saravana

    Ai Roll Forward Error

    Version: 11.6 OS: Linux CentOS7 Below are the steps I did; I am facing an error during AI Roll Forward: 1. Create a new database, let's say 'devdb' 2. Binary D&L complete 3. Index Rebuild complete 4. Cross-verified DB Analysis Report 5. All good and was able to start the db and query the...
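
    Steps 1-3 above can be sketched as follows, with hypothetical names; the dump side of the D&L is omitted:

```shell
DLC=/opt/dlc
DB=/path/to/devdb

# 1. Create a new database from a structure file and an empty template db:
$DLC/bin/prostrct create $DB $DB.st
$DLC/bin/procopy $DLC/empty8 $DB

# 2. Binary load of previously dumped tables (one .bd file per table):
for f in /path/to/dumps/*.bd; do
  $DLC/bin/proutil $DB -C load "$f"
done

# 3. Rebuild all indexes after the binary load:
$DLC/bin/proutil $DB -C idxbuild all
```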
  7. catch.saravana

    Index Build

    Thanks @Rob Fitzpatrick. I agree.
  8. catch.saravana

    Index Build

    Thanks @TomBascom
  9. catch.saravana

    Index Build

    In this case, yes, I can do a restore, but I don't want to do that until I understand this issue. The reason, basically, is as I pointed out before: Disk Read says 10 MB/sec and Disk Write says 2 MB/sec (I don't know the reason yet), whereas the other location on the same disk has 95 MB/sec read and 32...
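
    One quick, rough way to reproduce per-location throughput numbers like these is a dd test against each path. The directory is hypothetical, and the direct I/O flags are Linux-specific (they bypass the page cache so the numbers reflect the disk, not RAM):

```shell
TESTDIR=/localdisk/tmp            # location under test
FILE=$TESTDIR/ddtest.bin

# Write 256 MB, bypassing the page cache, and report throughput:
dd if=/dev/zero of=$FILE bs=1M count=256 oflag=direct 2>&1 | tail -n 1

# Read it back the same way:
dd if=$FILE of=/dev/null bs=1M iflag=direct 2>&1 | tail -n 1

rm -f $FILE
```

    Running the same pair against the slow and the fast location should make the 10 vs 95 MB/sec gap directly comparable.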
  10. catch.saravana

    Backup Of A Pregrown Empty Db

    Thanks @Rob Fitzpatrick - I just started creating a db with fixed extents; I will do the binary load/index build and come up with a comparison chart. Thanks @TomBascom - I agree, a snapshot is a good option. We were using ZFS snapshots on Sun Solaris for an application that was running on 9.1E...
  11. catch.saravana

    Backup Of A Pregrown Empty Db

    I am sure there won't be any proper explanation, but unless I can show them concrete evidence of a gain in time or performance, they won't want to change something that has worked this way for 15 years - I have gotten used to hearing this... :) Very true. I will grow the .dn files with fixed extents and see how long...
  12. catch.saravana

    Index Build

    No probs, Rob - I am quite happy to wait. :) In this case I feel I can do a restore from the hotspare database, which will take hardly 6 hrs, rather than rebuilding the indexes for 32 hrs. I would like to know what your approach would be if you were in my situation. Please advise. Also I would like...
  13. catch.saravana

    Backup Of A Pregrown Empty Db

    I haven't tried growing .dn files using fixed extents. I have made a note of how large each area can grow and have it handy. If I have understood correctly, your suggestion is to have 1 fixed extent and 1 variable extent per area, with the fixed extent sized to what I have noted...
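
    In .st syntax, the suggested layout would look roughly like this per area; the area name, number, rows-per-block/cluster settings, path, and size are all hypothetical:

```
# One fixed extent sized to the area's noted maximum (size in KB),
# plus one variable overflow extent:
d "Order_Data":15,64;8 /netappxyz/dev/datafiles f 1048576
d "Order_Data":15,64;8 /netappxyz/dev/datafiles
```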
  14. catch.saravana

    Backup Of A Pregrown Empty Db

    We use all variable extents, and loading the DF takes < 10 mins on netapp and ~1 min on local disk. Prod will run on a Pure array, which will be faster; I can expect it to be somewhere between 1 and 10 mins. So in my case taking a backup after step 2 is not that worthwhile. That's how I...
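
    Loading a .df in batch is typically done through the data dictionary's load procedure; the names and paths below are illustrative:

```shell
DLC=/opt/dlc
DB=/path/to/dbname

# Batch-load the schema (.df) via the dictionary load procedure:
$DLC/bin/pro $DB -b -p prodict/load_df.p -param /path/to/schema.df \
  > df_load.log 2>&1
```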
  15. catch.saravana

    Backup Of A Pregrown Empty Db

    At step 2 the db may be hardly 25 MB. The exercise I am trying to do is not for repeating on the same db. Let's say, for example, I perform steps 1 to 5 on /netappxyz/dev/db/xyz.db (data files will be under /netappxyz/dev/datafiles/*.dn) and take a backup after step 5 to restore and build a...
  16. catch.saravana

    Backup Of A Pregrown Empty Db

    Version: 11.6 OS: Linux CentOS7 Can we take a backup of a pregrown empty db? Below are the steps I tried, but I couldn't achieve what I want. 1. Created an empty DB (Type II storage areas, ~70 areas) 2. Applied DF 3. Binary load of 963 tables 4. Ran Index Build (now DB size is 1.5 TB -...
  17. catch.saravana

    Index Build

    A little more of an update: over the weekend I had to build a new environment on the same local disk. A fresh load of the 1.5 TB DB took 6 hrs and the Index Build completed in 5.7 hrs - the same parameters for load and index build, and almost the same set of data. As per previous logs you can see...
  18. catch.saravana

    Index Build

    Sorry for the confusion, Rob. As I mentioned before, this is the day before yesterday's log: [sudo /opt/dlc/bin/proutil /localdisk/xyzdb/xyz.db -C idxbuild all -i -TB 64 -TM 32 -TMB 512 -SG 64 -thread 1 -TF 80 -datascanthreads 12 -mergethreads 8 -T /localdisk/tmpForidx -B 5000000] new ones as per...
  19. catch.saravana

    Index Build

    As per the current design, it's Type II storage, and I can say for sure that data, index and word indexes are all in different areas. If I am right, I can see this for the first area only after 8 hours in the current run (it's still running). I will keep you posted on this. If I have to segregate...
  20. catch.saravana

    Index Build

    Thanks Rob, as usual an excellent explanation of the datascanthreads/mergethreads parameters and how to set optimal values for them. On question 3: sudo /opt/dlc/bin/proutil /localdisk/xyzdb/xyz.db -C idxbuild all -i -TB 64 -TM 32 -TMB 512 -SG 64 -thread 1 -TF 80 -datascanthreads 12 -mergethreads 8...
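
    For reference, here is the quoted idxbuild invocation with its options glossed. The glosses are my reading of the proutil idxbuild documentation and should be checked against your OpenEdge version; the command values themselves are taken from the post:

```shell
# Option glosses (verify against your version's proutil docs):
#   -i                    no-integrity mode (faster; db unusable after a crash)
#   -TB 64                temporary sort block size, KB
#   -TM 32                merge number (blocks merged per pass)
#   -TMB 512              temporary merge block size, KB
#   -SG 64                number of sort groups
#   -thread 1             enable the multi-threaded index build
#   -TF 80                percent of free memory usable for sorting
#   -datascanthreads 12   threads for the data scan phase
#   -mergethreads 8       threads for the merge phase
#   -T <dir>              location for temporary sort files
#   -B 5000000            database buffer pool size, in blocks
sudo /opt/dlc/bin/proutil /localdisk/xyzdb/xyz.db -C idxbuild all \
  -i -TB 64 -TM 32 -TMB 512 -SG 64 -thread 1 -TF 80 \
  -datascanthreads 12 -mergethreads 8 -T /localdisk/tmpForidx -B 5000000
```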