Sorry if this is a bit hand-wavy or veering into "how long is a piece of string" territory, but I'm looking for some opinions/advice on avoiding database file damage.
I don't get many phone calls these days about status 2 errors in the legacy software I support (unlike the many phantom status 2s back in the Pervasive.SQL 2000 era), but when one does come in it tends to be dramatic. This week, for example, 9000+ records were lost across 2 files when they were rebuilt, and we ended up restoring from a 3-week-old backup once it became clear the damage went back that far and had only started producing status 2 errors this week.
It has to be said that after 10+ years of clients using the software, the damaged files tend to be the same ones: the tables that take the most inserts each day, so I imagine they are also the ones being cached most heavily. In this week's case the lost records accounted for about 1% and 4% of the total rows in the two files. As far as I can remember, the damage has never been traced back to a bug in the applications or (phantom 2s excepted) the database engine - where a culprit has been identified it has been a hardware failure, a server crash or a network issue.
What concerns me is that despite having backup plans and modern hardware/operating systems, I still find myself having to explain to clients that they are going to have to re-enter data because records have been lost. I was wondering if people on here have any views on the following:
1) File size
Is there a linear relationship in a Pervasive.SQL database file between the number of records lost and the total number of records in the file, i.e. is a 10-year-old file with a million records likely to lose more records in a corruption event than a 10-year-old file with 100K?
2) File version
The files in question this week were still in version 6 format, even though the database engine involved was 11. Are later file versions more damage-resistant?
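For reference, this is roughly how I'd expect to check a file's format version and bring it up to the engine's current default using the standard Maintenance utility (butil) - the file name is just an example, and I'm happy to be corrected on the exact steps:

    butil -stat ORDERS.DAT                     (reports file version, page size, record count etc.)
    butil -clone ORDERS_NEW.DAT ORDERS.DAT     (empty clone, created - as far as I know - in the engine's current default file version)
    butil -copy ORDERS.DAT ORDERS_NEW.DAT      (copy the records into the new-format file)

then rename ORDERS_NEW.DAT over the original once it checks out. As I understand it, the Rebuild utility does much the same job in one step.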
3) Caching
With caches getting larger and more tables sitting entirely in RAM for performance, is this asking for trouble, and could it be a reason for the heavier file damage I am seeing? Is the system/OS cache inherently riskier for the Pervasive.SQL engine to use, and should it be avoided where possible? Do people see more issues when the system cache and the Pervasive cache are used together?
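For context, the settings I have in mind (names from memory, so apologies if they are slightly off) are the ones under the engine's performance tuning options in Pervasive Control Center: Cache Allocation Size (the dedicated L1 cache), Max MicroKernel Memory Usage (the ceiling for L1 plus the dynamic L2 cache) and System Cache (whether the engine leans on the Windows system cache at all).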