I have a DB2 table where the first two fields form the primary key (not counting the third field, which is a date/time stamp).
The table was designed by another team with the intent to make it generic.
I was brought into the project after the logic that builds the second field's key value for inserts had already been coded.
This leads me to my situation: we now have to do a cursor SELECT with a WHERE clause that includes the first primary key, and for the second primary key the condition must match only when a specific value appears at position 21 for 8 bytes (and we will always know what that value will be). The second field is a generic 70-byte alphanumeric field.
My question is: should we use a LIKE wildcard in the WHERE clause for the second key field's condition, or a SUBSTR, since we know the position of the value?
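For concreteness, here is a sketch of the two predicate forms I am comparing. The table and column names are placeholders, and `'ABCD1234'` stands in for the known 8-byte value; `KEY2` is the generic 70-byte field, with the value at positions 21 through 28:

```sql
-- Option 1: LIKE, with 20 single-character wildcards (_) skipping
-- positions 1-20, the known 8-byte value at position 21, then %
SELECT *
FROM MY_TABLE                               -- placeholder table name
WHERE KEY1 = :key1                          -- first primary key field
  AND KEY2 LIKE '____________________ABCD1234%';

-- Option 2: SUBSTR on the known position (start 21, length 8)
SELECT *
FROM MY_TABLE
WHERE KEY1 = :key1
  AND SUBSTR(KEY2, 21, 8) = 'ABCD1234';
```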
I ask because I have run an EXPLAIN, yet I do not see a difference between the two (and neither does my database analyst).
And this is for a few million records in a table with 1300-byte rows.
However, my concern is that the volume of data in the table will grow on various systems, so performance may become an issue. Right now it is hard to measure the difference between LIKE and SUBSTR, but I would like to do my due diligence and code this for long-term performance.
And if there is a third option, please let me know.