Search results

  1. catch.saravana

    Index Build

    Version: 11.6. OS: Linux CentOS 7. CPU: 4 CPUs with 2 cores each. I have 3 questions here; sorry if I should have posted them as separate threads. Question 1: How do I come up with optimal values for -datascanthreads and -mergescanthreads? Question 2: Does -B have any effect on index build? - the...
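
    A hedged sketch of a multi-threaded index rebuild on 11.x, assuming the merge-phase option is -mergethreads; the database name, temp path, and all numeric values are placeholders showing where the parameters go, not tuning advice:

        # Offline index rebuild of all indexes (illustrative values only).
        # -datascanthreads drives the data-scan phase, -mergethreads the sort/merge phase;
        # whether -B helps idxbuild is exactly what question 2 in the thread asks.
        proutil mydb -C idxbuild all -B 1024 -TB 31 -TM 32 \
            -datascanthreads 8 -mergethreads 4 -T /fast/tmp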
  2. catch.saravana

    _mprshut - Why?

    Thanks @Rob Fitzpatrick, got it! Any thoughts on question #1?
  3. catch.saravana

    _mprshut - Why?

    Does this mean all 4 of those processes are sharing only 16 GB, but depending on the OS all 16 GB might be assigned to one process at a given point in time? If yes, then before running these 4 processes the RAM had 56 GB free, whereas it came down to less than 1 GB in about 10 to 15 minutes. I was...
  4. catch.saravana

    _mprshut - Why?

    Hello everyone, I ran a round of dump testing yesterday and have a couple of questions I'd like to get clarified. Progress Version: 9.1E. OS: Sun Solaris (Unix). I ran 4 sessions of proutil dump and was monitoring them. Below are the statistics from the top command (pardon me for changing the...
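
    As a point of reference, a minimal sketch of what "4 sessions of proutil dump" typically looks like from the shell; the database, table, and directory names here are hypothetical:

        # Four single-table binary dumps started in parallel, one table per session.
        proutil proddb -C dump customer /dumpdir &
        proutil proddb -C dump order    /dumpdir &
        proutil proddb -C dump invoice  /dumpdir &
        proutil proddb -C dump history  /dumpdir &
        wait   # block until all four dumps finish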
  5. catch.saravana

    Large Table Dump & Load

    Thanks for the heads-up!
  6. catch.saravana

    Large Table Dump & Load

    Thanks everyone! The load is multi-threaded and completes in 4 to 5 hrs, which is on par with the rest of the sessions I have kicked off in parallel. The issue is mainly on the dump side: the dump of this table takes 15 to 16 hrs, which is hampering the downtime window and doesn't fit. As...
  7. catch.saravana

    Publish Web Service

    I understand we need a license for consuming web services. Do we need an AppServer or WebSpeed license to publish web services from the Progress side?
  8. catch.saravana

    Large Table Dump & Load

    - Progress 9.1E [Unix - Sun Solaris] to OE 11.6 [Linux]. - Server locations are different (pardon me for not giving more details on this, as I am not supposed to). - This is more like an audit table (only create and read). - We purged the tables that we could; on this table the client has confirmed they...
  9. catch.saravana

    Large Table Dump & Load

    Hello everyone, I have a large table that is about 300 GB in size - like an audit table. We want to do a D&L for this table separately and not as part of our migration. I assume we will not be able to do a D&L in parallel with users writing data to this table. What we are trying to get at here is: 1...
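
    A minimal sketch of the binary dump/load path under discussion, assuming a hypothetical table name and directory; the dump runs against the 9.1E source and the load against the new target:

        # On the source: binary dump of the audit table (writes audit_detail.bd).
        proutil olddb -C dump audit_detail /dump/audit
        # On the target: binary load of the dump file, then rebuild that table's indexes.
        proutil newdb -C load /dump/audit/audit_detail.bd
        proutil newdb -C idxbuild table audit_detail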
  10. catch.saravana

    Question Probkup Vs Snapshot

    Hello everyone, is using a snapshot as a replacement for PROBKUP ONLINE common? We were thinking of using snapshots to take db backups. Taking a snapshot of our db takes hardly 2 to 3 seconds (worst case 5 seconds). What are the pros and cons of using a snapshot as a replacement for probkup? Before...
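
    A hedged sketch of how a storage snapshot is usually made consistent, by quiescing the database around it; the database name and the snapshot step itself are hypothetical:

        # Put the database at a quiet point so the snapshot captures a known-consistent state.
        proquiet proddb enable
        # ...trigger the storage/filesystem snapshot here (vendor-specific command)...
        proquiet proddb disable

    A snapshot taken this way is consistent as of the quiet point; whether it can fully replace probkup also depends on how AI files, validation, and restores are handled, which is part of the pros-and-cons question being asked.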
  11. catch.saravana

    D&l Limitations

    We are done with 1 round of migration on a test machine, have integrated the downstream applications, and are in the testing phase. Testers haven't reported anything on it - maybe they didn't hit a scenario that uses this table. To be honest, I never thought about it until I came across that...
  12. catch.saravana

    D&l Limitations

    Hi all, I see a set of D&L limitations in the Progress documentation, one of which says ROWID/RECID will be dumped as '?' (the unknown value). Exact statement below: If you define a database field with a data type of ROWID or RECID, then the ROWID values in that field are dumped and reloaded as...
  13. catch.saravana

    Truncate Bi File

    My bad - in this application all areas are defined with variable extents (no fixed extents at all). I meant area and not extent. In this case we are adding 2 new areas, each of them with a variable extent (no fixed extent). Sorry for the confusion @cj_brandt @TomBascom.
  14. catch.saravana

    Truncate Bi File

    @cj_brandt - I haven't done this before; let me try it on our dev machine. But I heard from a couple of senior DBAs that we will have to recreate the hotspare db or we will get an error during AI roll forward saying 'Area # mismatch'; if we are using OE Replication then we don't have to...
  15. catch.saravana

    Truncate Bi File

    @cj_brandt - True, we are not truncating the BI file of the hotspare DB (standby DB). We want to truncate the BI of only the live DB. I understand the BI of the hotspare db will also have grown, but it hasn't reached the critical stage. As the program has been identified and fixed, I hope it will not grow any...
  16. catch.saravana

    Truncate Bi File

    We use AI-based replication.
  17. catch.saravana

    Truncate Bi File

    I hear from senior DBAs that they will need to recreate the hotspare DB in this case. Do we really need to recreate the hotspare DB in this case? If not, I need to check with them whether they see it as a best practice or why they consider it necessary.
  18. catch.saravana

    Truncate Bi File

    Thanks Rob. Yes, the objective is to truncate the BI. We have identified the program that caused this issue and have fixed it. Agreed, Rob - I got confused with the other discussion we were having this morning.
  19. catch.saravana

    Truncate Bi File

    Version 9.1E: Let's say I have a large BI file that has grown to 15 GB and reached the critical limit. In our project they take a backup (which runs for 7 hrs) before truncating the BI. Is a backup necessary? Can't I bring down the DB, do a roll forward, and truncate the BI? Maybe a dumb question...
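
    For reference, a minimal sketch of the BI truncation step itself, with a hypothetical database name; whether a backup must come first is the question being asked:

        # With the database shut down, truncate the before-image file.
        proutil proddb -C truncate bi
        # Optionally set a new BI cluster size (in KB) in the same step:
        # proutil proddb -C truncate bi -bi 16384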
  20. catch.saravana

    Include Files - Named Parameters

    Thanks Tamhas!