The simple test is to ask for numbers.
If your DBA can produce metrics related to database performance during problem periods, then there is a chance that he is right.
If, on the other hand, he can only offer assertions that there are no database problems, then he is almost certainly wrong.
If he does have numbers then you need to consider what those numbers actually are and whether or not they are relevant. To be relevant they need to be consistently measured and they need to be fine-grained enough to cover your trouble period. For instance, it is not sufficient to say "the hit ratio is 98%". You need to know over what period that statement is true -- is it "since the database was started" (which is what PROMON will show by default), "over the last 24 hours" (not a very useful period), or "yesterday between 10:50 and 10:55 when users were complaining about poor performance"? Only that last statement is useful and relevant.
If metrics are available and relevant they still might not be the right metrics, and it is still possible that they are being misinterpreted. For instance, the example above refers to "hit ratio". This is a very popular metric to watch but it isn't very helpful, and there is a lot of even more unhelpful folklore surrounding it. You can, for instance, find references in the Progress documentation and in the knowledge center that would seem to state that so long as the value is higher than 95% it is "good". (Under most circumstances 95% is actually really quite bad. 98% is, IMHO, "barely acceptable".) You have to understand what a metric is measuring and what it is telling you and then decide if it is useful -- "hit ratio", for example, tells you how often a database reference is satisfied from RAM versus having to read from disk. Disk IO is thousands of times slower than memory access, so even a small number of disk IO operations will have a very perceptible impact on performance. Thus the desire for hit ratios with lots of 9s in them.
But if your workload is very light (maybe you're only doing a few hundred db references per second), does it really matter? No, it doesn't...
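To make that concrete, here is a rough ABL sketch of measuring the hit ratio over a specific window by sampling the _ActBuffer VST twice and working with the difference, rather than trusting the "since startup" numbers. The field names (_Buffer-LogicRds, _Buffer-OSRds) are the usual ones, but treat this as a sketch and verify them against your release:

/* hit ratio for a 5 minute window: sample _ActBuffer twice and diff.
 * the counters accumulate from database startup, so the difference
 * between two samples is the activity for just that interval.
 */

define variable logic1 as int64   no-undo.
define variable os1    as int64   no-undo.
define variable logic  as int64   no-undo.
define variable os     as int64   no-undo.
define variable hr     as decimal no-undo.

find first _ActBuffer no-lock.
assign
  logic1 = _ActBuffer._Buffer-LogicRds
  os1    = _ActBuffer._Buffer-OSRds.

pause 300 no-message.                        /* the 5 minute sample window  */

find first _ActBuffer no-lock.               /* re-read to get fresh values */
assign
  logic = _ActBuffer._Buffer-LogicRds - logic1
  os    = _ActBuffer._Buffer-OSRds    - os1.

if logic > 0 then hr = 100.0 * ( logic - os ) / logic.

message
  "logical reads:" logic skip
  "os reads:" os skip
  "hit ratio:" hr "%".

The arithmetic is also why the number of 9s matters at high volume: at 10,000 logical reads per second a 95% hit ratio is roughly 500 disk reads per second, 98% is about 200, and 99.9% is about 10 -- whereas at a few hundred logical reads per second even a mediocre ratio only generates a handful of disk reads.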
In your particular situation you probably want to start with a breakdown of what happens when your problem query is run: what tables are being accessed, which indexes are being used, and how much IO does this generate? To do that you should start with table and index statistics. If you are on 10.1C or higher you can get those on a per-user basis; otherwise you have to settle for aggregate data across all users (to get data for a specific user and a specific instance of a query you could arrange to be the only user on a test database...).

In order to actually capture that data its collection has to have been enabled -- this means that the DBA needs to have the -tablerangesize and -indexrangesize startup parameters set large enough to include all of your tables and indexes. (If these parameters are not set they default to covering only the first 50 tables and indexes. And leaving them unset is also a hint that your DBA isn't performance savvy.)

You could then write queries against the VSTs to collect the data, as sketched below, or you could use a tool like ProTop to help you analyze the situation.
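If you do roll your own, the queries are not complicated. Here is a rough sketch, assuming the usual _TableStat/_IndexStat joins to _File and _Index and the default -basetable and -baseindex of 1 -- verify the field names and ranges against your own release:

/* aggregate table and index activity since database startup (all users).
 * only objects that fall inside the -tablerangesize / -indexrangesize
 * ranges are counted, e.g.:
 *   proserve mydb -tablerangesize 500 -indexrangesize 1500
 * per-user numbers (10.1C+) live in _UserTableStat and _UserIndexStat.
 */

for each _TableStat no-lock,
    first _File no-lock
    where _File._File-Number = _TableStat._TableStat-id:

  display
    _File._File-Name             format "x(24)"        label "Table"
    _TableStat._TableStat-read   format "->>>,>>>,>>9" label "Reads"
    _TableStat._TableStat-create format "->>,>>>,>>9"  label "Creates"
    _TableStat._TableStat-update format "->>,>>>,>>9"  label "Updates"
    _TableStat._TableStat-delete format "->>,>>>,>>9"  label "Deletes".

end.

for each _IndexStat no-lock,
    first _Index no-lock
    where _Index._Idx-Num = _IndexStat._IndexStat-id,
    first _File no-lock
    where recid(_File) = _Index._File-recid:

  display
    _File._File-Name           format "x(20)"        label "Table"
    _Index._Index-Name         format "x(20)"        label "Index"
    _IndexStat._IndexStat-read format "->>>,>>>,>>9" label "Reads".

end.

Take a snapshot of those numbers, run the troublesome query, take another snapshot, and the difference tells you which tables and indexes that query is actually hitting and how hard.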
Or you could save a lot of time and agony and hire a consultant to help you out. See signature line for details.