Root as dba? (Progress Enterprise DB 8.2C)

matt_

New Member
Greetings all,

I'm using Progress 8.2C on a DEC Alpha.

What is the default admin account for this version? I log into the system and su to root to use the pro data dictionary, but I don't have access to the Admin functionality. Should I be running the data dictionary under another user? Why wouldn't root have admin privs?

Any comments or information are welcome.

Thanks!
 
Whether or not you have full access to the data dictionary also depends on your license. Use the showcfg command from the bin directory of your DLC install to show what licenses you have. If you don't have a development license, then no admin menus.
 

I ran showcfg:

Configuration File: /progress/dlc/progress.cfg
Company Name: xxxx
Product Name: Enterprise DB
Installation Date: Tue Nov 2 18:02:02 1999
User Limit: 80
Expiration Date: None
Serial Number: xxxxxxx
Control Numbers: xxxxx - xxxxx - xxxxx
Version Number: 8.2C
Machine Class: KB
Port Number: xx


I do not see anything about different licenses. I assume this means I don't have a developer license.

Ultimately I would like to export the data from Progress into another RDBMS. I know that I can do a binary dump of the databases, but is it possible to use the binary file in another RDBMS? Would the schema be preserved? I am assuming no. With my limited experience with Progress, and since I don't have admin menu options, I'm thinking my best bet is using ODBC... unless perhaps I am missing something...
 
The full owner of the database is the user who created the database.

I tested this on the test database "sports" included in progress, and I still don't have access to the admin menu, even though root:system owns those files.
 
SQL on 8.2B is going to be very painful. It is still SQL89. SQL92 support was introduced in 2000, but 8.2B is from 1997. I've long since forgotten even how to do the ODBC setup for SQL89.

How would you do a binary dump if you can't get to the admin menus?

Your best chance might be to find a consultant with an 8.2 developer license for the same platform. How big is the DB? What about the person or company who developed the software?

I would hunt around on the PSDN site for info on the ODBC setup. But it is asking a lot to find documentation for 13-year-old software.
 

Do you mean that 8.2C also uses SQL89?

For a binary dump, I was considering: proutil <dbname> -C dump. Would this not preserve the data?

All the databases total about 18 GB. I have sought contractors for this, including the original software developer, but I am attempting to minimize costs, since the size of their quotes is proportional to the age of the software and I'm working with a limited budget.

I'll look at the PSDN site and see what I can find. Thanks for your help!
 
Yes, SQL92 was introduced in 9.1B ( http://www.oehive.org/versionhistory ). SQL89 goes all the way back to version 5.0, and everything between the two is SQL89.
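For anyone who does get ODBC working against it, the practical difference shows up immediately in join syntax: SQL89 has no explicit JOIN ... ON clause, so every join is a comma join filtered in WHERE. A rough illustration, using SQLite as a stand-in engine with made-up tables (this is not the Progress sports schema, just the join style):

```python
import sqlite3

# Stand-in engine (SQLite) and invented tables -- the point is the
# difference in join syntax, not anything Progress-specific.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE customer (custnum INTEGER, name TEXT)")
cur.execute("CREATE TABLE ord (ordernum INTEGER, custnum INTEGER)")
cur.execute("INSERT INTO customer VALUES (1, 'Lift Tours')")
cur.execute("INSERT INTO ord VALUES (100, 1)")

# SQL89 style: implicit comma join, the condition lives in WHERE.
# This is the only form an SQL89-era engine will accept.
rows89 = cur.execute(
    "SELECT customer.name, ord.ordernum "
    "FROM customer, ord "
    "WHERE customer.custnum = ord.custnum"
).fetchall()

# SQL92 style: explicit JOIN ... ON -- this syntax simply does not
# exist before SQL92, which is why 8.x tooling chokes on it.
rows92 = cur.execute(
    "SELECT customer.name, ord.ordernum "
    "FROM customer JOIN ord ON customer.custnum = ord.custnum"
).fetchall()

print(rows89)
```

Both queries return the same rows; the old style still works on modern engines, which is handy when you need one set of SQL that runs against both ends of a migration.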

Yes, I suppose the binary dump would allow you to dump the data, but not in a readable form.

That's a reasonably large amount of data. Presumably it matters to someone. I might suggest that merely exporting the data is only part of the problem ... you also need to understand the schema in order to know what that data means and how it connects.

Presumably, this is happening in relation to a move to new software. If so, you not only need to deal with getting the data out of the old system, but with getting it into the new one ... and this means constructing a semantic map between the two systems. There is almost certainly going to be data in the old system that has no corresponding place in the new system, and data in the new system for which there is no real source in the old. In both cases, you are going to have to decide what to do to make up a consistent set of new data. My experience ... and I have done this sort of thing rather more than once ... is that you want to do as much of this semantic mapping as possible in the context of the old system, because there you have rapid access to its schema structure. Then your load programs can finish the job. For example, suppose there is a field in the old system which might have as much as 40 characters in it, but it needs to go into a field which holds only 30 characters; it is a lot better to report on what is not going to fit and fix it in the source system.
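To make that 40-into-30 example concrete, here is a minimal sketch of the kind of pre-load report I mean. The field name, width, and sample values are all invented; in practice the input would be your dumped data rather than an inline list.

```python
# Hypothetical pre-load check: flag source values that will not fit
# the target field. The 30-char limit and the data are invented.
TARGET_WIDTH = 30

source_rows = [
    {"custnum": 1, "name": "Lift Tours"},
    {"custnum": 2, "name": "An unusually long customer name that will not fit"},
]

# Collect (key, actual length) for every value that overflows the target.
too_long = [
    (row["custnum"], len(row["name"]))
    for row in source_rows
    if len(row["name"]) > TARGET_WIDTH
]

for custnum, length in too_long:
    print(f"custnum {custnum}: name is {length} chars, "
          f"target field holds {TARGET_WIDTH}")
```

Running a report like this against the source system before the load means the cleanup happens where you can still see the surrounding record, rather than as a pile of load-time truncation errors.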

Bottom line, this is not a job to hack at ... it is a job for someone with experience and tools.
 