It would be helpful to know your current OpenEdge release, your database OS platform, and something about your storage hardware. It would also help to know your database size, the sizes of your largest tables, and the downtime window available to complete your upgrade and migration.
I'm glad that you are making this change and looking for help, rather than continuing to use an outdated database structure. Part of learning is using the correct terminology. There is no "Level I" or "Level II" storage. From context, you are talking about Type 1 and Type 2 data storage areas.
By "data storage areas", I mean areas that contain or could contain tables, indexes, or LOB columns. In other words, all areas that are not before image, after image, or transaction log areas.
Type 1 data areas are the original data storage area architecture. They were the only data area type available in Progress v9 and earlier. Type 2 data areas were introduced in an early form in OpenEdge v10.0A and fully implemented by v10.1B. All application data, in any v10.x or later OpenEdge database, should be stored in Type 2 areas. The Schema Area, area #6, is required to be Type 1. It is also severely constrained in maximum size. Therefore it should never be used to store application data.
If you want to migrate your data to Type 2 areas, especially from a database with data in the Schema Area, you should plan a full database dump and load. Dump and load of tables can be done as ASCII dump and load, via Data Dictionary routines, or binary dump and load, via proutil commands. Binary dump and load is preferable as it is more efficient. It can and should be scripted and logged to ensure that all tables are migrated in their entirety and that any exceptions are caught.
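As an illustration, a scripted binary dump might look something like the sketch below. The database name, paths, and table list file are hypothetical; adapt them, and the error handling, to your environment.

```bash
#!/bin/sh
# Sketch of a scripted, logged binary dump (hypothetical names and paths).
# Assumes the source database is /db/src/mydb and tables.txt lists one table per line.
DB=/db/src/mydb
DUMPDIR=/db/dump
LOG=$DUMPDIR/dump.log

mkdir -p "$DUMPDIR"

while read -r TABLE
do
    echo "$(date) dumping $TABLE" >> "$LOG"
    # proutil -C dump writes a <table>.bd file into the target directory
    proutil "$DB" -C dump "$TABLE" "$DUMPDIR" >> "$LOG" 2>&1 \
        || echo "DUMP FAILED: $TABLE" >> "$LOG"
done < tables.txt

# Later, against the new database:
#   for BD in "$DUMPDIR"/*.bd; do proutil /db/new/mydb -C load "$BD" >> "$LOG" 2>&1; done
#   proutil /db/new/mydb -C idxbuild all
```

Reviewing the log afterwards, and comparing loaded record counts against a pre-dump table analysis, is how you catch the exceptions mentioned above.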
Automation, practice, and testing are essential for success. Note also that a database dump and load typically involves dumping and loading more than just application schema and data. Depending on the database features, options, and add-on products you use, there could be many other types of metadata in your source database that should be preserved and migrated to the target database. Planning and executing a database dump and load is an entire topic unto itself and I can't do it justice here.
Note that some changes can only be made via dump and load, such as changing database platform (e.g. Windows to Linux), and changing the database block size (e.g. 4 KB to 8 KB). I recommend using 8 KB for the database block size.
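For example, the new target database can be created from your structure file with an 8 KB block size and then seeded with the empty 8 KB metaschema shipped with OpenEdge. The names and paths below are hypothetical; verify the options against the documentation for your release.

```bash
# Create a void database from the structure file with an 8 KB block size
# (hypothetical names and paths).
prostrct create /db/new/mydb /db/new/mydb.st -blocksize 8192

# Copy in the empty 8 KB metaschema before loading your .df and your data.
procopy $DLC/empty8 /db/new/mydb
```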
I am thinking about specifying every table in the structure file
Strictly speaking, tables are not specified in the structure file. Storage areas and their extents (files) are. The assignment of storage objects (tables, indexes, LOB columns) to storage areas is done in the schema.
Do not create a separate data storage area for each table. That is extreme overkill, and makes for a database that is more difficult to manage.
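To make the structure-file side concrete, here is a simplified, hypothetical .st excerpt with a handful of Type 2 data areas. Area names, numbers, sizes, and paths are made up, and the records-per-block and cluster-size choices are examples, not prescriptions.

```
# Hypothetical .st excerpt.
# Format: d "AreaName":area-number,records-per-block;blocks-per-cluster path [f size-in-KB]
b /db/bi/mydb.b1
d "Schema Area":6,64;1 /db/data/mydb.d1

# Type 2 data areas (cluster size 8, 64, or 512):
d "Data":7,256;64 /db/data/mydb_7.d1 f 1024000
d "Data":7,256;64 /db/data/mydb_7.d2
d "Index":8,256;64 /db/data/mydb_8.d1 f 512000
d "Index":8,256;64 /db/data/mydb_8.d2
d "BigTable":9,128;512 /db/data/mydb_9.d1 f 4096000
d "BigTable":9,128;512 /db/data/mydb_9.d2
d "BigTableIdx":10,256;64 /db/data/mydb_10.d1 f 1024000
d "BigTableIdx":10,256;64 /db/data/mydb_10.d2
```

Each area ends with a variable extent so it can grow past its fixed extents.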
is it possible to just specify the tables that are higher in transaction?
You can create as many or as few data storage areas as you like (within reason). You don't necessarily need separate areas for tables with high activity, though there are some specific use cases where that can be helpful. But generally, you should have a separate area for each table that is fast-growing, i.e. one with a high rate of record creates. You can get this information by looking at your CRUD (create/read/update/delete) activity.
ProTop is a very good tool for doing this.

It is a free download, at the link in my signature. Let me know if you need help with that.
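If you want a rough, scriptable approximation in the meantime, you can snapshot table analysis reports periodically and compare them; the tables whose record counts climb fastest are your fast growers. The paths below are hypothetical, and tabanalys is I/O-intensive, so run it off-peak.

```bash
# Snapshot table record counts/sizes; run periodically (e.g. weekly) and
# compare the snapshots to see which tables are growing fastest.
# Hypothetical database and report paths.
DB=/db/src/mydb
proutil "$DB" -C tabanalys > /db/reports/tabanalys_$(date +%Y%m%d).txt

# e.g. diff /db/reports/tabanalys_20240101.txt /db/reports/tabanalys_20240201.txt
```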
For a high transaction table with 5 indexes, do I specify each index individually?
This is probably not necessary. For a large table, I typically create an area for the table and a separate area for all of its indexes. I might break the indexes down further if one of them were exceptionally large or fast-growing.
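For reference, this object-to-area assignment lives in the data definition (.df) file you load into the new database, not in the .st file. A hypothetical excerpt (table, index, and area names are made up, and the field definitions are omitted) might look like this:

```
ADD TABLE "Order"
  AREA "OrderData"
  DUMP-NAME "order"

ADD INDEX "OrderNum" ON "Order"
  AREA "OrderIndex"
  UNIQUE
  PRIMARY
  INDEX-FIELD "OrderNum" ASCENDING
```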
I saw in the Progress documentation that I could define something like named groups, but it would have less performance.
I can't figure out what this means. Can you provide a relevant link?
With Level II, will I reduce the fragment count?
Given the mean record sizes we see for the subset of tables you have shown, yes, a dump and load will eliminate fragmentation in the short term. Some fragmentation is unavoidable, e.g. records that are larger than the database block size, but there is no evidence of that yet in your database. Fragmentation can also build up over time as record updates increase record sizes; how quickly depends on application and user behaviour.
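You can quantify the fragmentation before and after the migration. A database analysis reports per-table record and fragment counts; a fragment count noticeably higher than the record count indicates fragmented records. The paths below are hypothetical.

```bash
# Capture a database analysis before and after the dump and load, then
# compare the record vs. fragment counts per table in the two reports.
proutil /db/src/mydb -C dbanalys > /db/reports/dbanalys_before.txt
proutil /db/new/mydb -C dbanalys > /db/reports/dbanalys_after.txt
```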
There are different schools of thought on how to structure your database, in terms of the assignment of storage objects to storage areas. I have discussed this in the past. Here are some other threads to read:
Hi,
My client uses Progress 10.2B on Linux. The database currently uses 45+ GB on the hard disc. It hasn't been dumped and reloaded in years, so fixed extents were just added every time the database needed to grow.
When I do a database analysis it says the actual database size is 17.4 GB.
Tables 14.4 GB and indices 3 GB.
Looking at the .st file, the database has many extents that are larger than 512000.
I am thinking of doing a dump and reload.
I was also thinking of:
a) Reducing the fixed size of the database to 22 GB and allowing a fixed size of: Tables 17.5 and Index...
Hello Everyone,
We have a DB with 900+ tables and approximately 1.1 TB of data. I have a couple of quick questions:
1. How do we segregate tables under different areas? I am sure it shouldn't be based on functionality. Can we have 1 table per area and a group of that table's indexes in another area, which would lead to 1800 areas?
2. Or do I have to group all the small tables in one area and keep each fast-growing table in a separate area?
What are the preferred guidelines for segregating tables and indexes? Do we have any specific article/documentation? Or based on someone's experience...
Hello.
I am hoping to learn different ways I can structure PROGRESS databases, and their advantages and disadvantages. If anyone knows a good resource that I can look up, I would appreciate it if you could share it with me.
Thank you in advance.
Liza
Note that these threads are old. Some of my opinions have changed over time. For example, I used to recommend RPB 128 for multi-table areas and for index areas. I don't have a good reason to limit it to 128, rather than 256, so I would now recommend RPB 256 for those areas.
I purposely don't have a precise definition of "very large tables", i.e. tables that warrant their own areas. This is a trade-off between manageability and complexity, and different DBAs will decide it differently based on their needs. A table that seems very large to a DBA with a 50 GB database and minimal disk space might be considered small by a DBA with a 20 TB database and petabytes of fast storage.
It would help to know about your entire database. If you only have 170 tables, some of which are empty, you could post information about the subset that holds a meaningful amount of data, in descending size order.
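If it helps, that information can come straight from a table analysis report. The sketch below assumes table names appear with a "PUB." prefix and that the second column is the record count; both vary by release, so verify your report's layout first. Record count is used here as a rough proxy for size.

```bash
# Rough sketch: list tables by record count (a proxy for size), largest first.
# The grep pattern and awk field positions are assumptions; check them against
# the RECORD BLOCK SUMMARY layout in your release's report.
proutil /db/src/mydb -C tabanalys > /tmp/mydb_tabanalys.txt
awk '/^PUB\./ {print $2, $1}' /tmp/mydb_tabanalys.txt | sort -rn | head -40
```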