TomBascom
Curmudgeon
Dynamic queries may not work so well with larger result sets.
Unless you specify "forward-only" (or use the -noautoreslist startup parameter) dynamic queries will build sort files on the client -- which will be painfully slow once they become non-trivial in size.
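To illustrate, here is a minimal sketch of a dynamic query opened FORWARD-ONLY so that no client-side result list (and its sort files) gets built. The table, field, and WHERE clause are illustrative (sports2000-style names), not taken from your test:

```abl
/* Sketch: dynamic query with FORWARD-ONLY set *before* QUERY-OPEN,
   so the client does not build a result list / sort files.
   Table and field names are illustrative. */
define variable hQuery  as handle no-undo.
define variable hBuffer as handle no-undo.

create buffer hBuffer for table "Customer".
create query hQuery.
hQuery:set-buffers( hBuffer ).
hQuery:forward-only = true.   /* must be set before the query is opened */
hQuery:query-prepare( "for each Customer no-lock where Customer.State = 'MA'" ).
hQuery:query-open().

do while hQuery:get-next():
  display hBuffer:buffer-field( "Name" ):buffer-value format "x(30)".
end.

hQuery:query-close().
delete object hQuery.
delete object hBuffer.
```

The trade-off is that a FORWARD-ONLY query cannot be repositioned or scrolled backwards, which is usually fine for reporting-style reads.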
In any event, 2,500 records is an awfully small data set to be drawing conclusions from, but it looks to me like your static FOR EACH with a WHERE clause was the winner. Which is pretty much what I would have expected...
You might also try using field lists.
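A field list tells the server to send only the fields you name instead of whole records, which can cut network and read overhead noticeably. A minimal sketch (again with illustrative sports2000-style names):

```abl
/* Sketch: FIELDS phrase on a static FOR EACH -- only the listed
   fields are fetched.  Names are illustrative. */
for each Customer fields ( CustNum Name State ) no-lock
    where Customer.State = "MA":
  display Customer.CustNum Customer.Name.
end.
```

Just remember that referencing a field that isn't in the FIELDS list raises a runtime error, so list everything the block touches.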
And another thing... don't rely on etime to draw conclusions that you're going to try to scale. A better approach is to compare logical read operations and "useful records". The closer you come to a "perfect" ratio (slightly more than 2:1) that doesn't change as the result set size grows, the better the query will scale, and the more reliable the relative performance will be as you change things (like -B) in your environment.
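One way to get that ratio is to snapshot the _TableStat VST before and after the query and divide the delta of logical reads by the number of useful records returned. A rough sketch, assuming the table's entry falls within the -tablerangesize range and using illustrative names:

```abl
/* Sketch: logical reads per useful record via the _TableStat VST.
   Assumes Customer's table number is covered by -tablerangesize.
   Table and WHERE clause are illustrative. */
define variable iBefore  as int64 no-undo.
define variable iRecords as int64 no-undo.

find _File no-lock where _File._File-Name = "Customer".
find _TableStat no-lock where _TableStat._TableStat-Id = _File._File-Number.
iBefore = _TableStat._TableStat-read.

for each Customer no-lock where Customer.State = "MA":
  iRecords = iRecords + 1.
end.

find _TableStat no-lock where _TableStat._TableStat-Id = _File._File-Number.
display ( _TableStat._TableStat-read - iBefore ) / iRecords
        label "reads per useful record".
```

A ratio near 2:1 that stays flat as the result set grows is the sign of a query that will scale; a ratio that climbs with the data size means you're scanning rather than bracketing on the index.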