I have always wondered about the file integrity monitoring requirement in the PCI DSS standard. What is the purpose of this requirement? Is it a primary control or a compensating control? Isn't it similar to the much-debated "code review versus web application firewall" subject?
To understand this control better, I looked into it in detail. The file integrity requirement is referenced in two places in the standard.
The first reference is under secure audit trails (10.5), which deals with the security of the audit trails. Requirement 10.5.5 states: "Use file-integrity monitoring or change-detection software on logs to ensure that existing log data cannot be changed without generating alerts (although new data being added should not cause an alert)." On a closer look, the intention of this requirement is that any modification to already-logged data must generate an alert, while appending new entries must not.
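One common way to satisfy "existing data unchanged, appends allowed" is to record the log's current size plus a hash of everything up to that point, and on each check re-hash only that prefix. The sketch below is a minimal illustration of that idea, not any particular FIM product; the function names and the JSON state file are my own invention for this example.

```python
import hashlib
import json
import os

def baseline(log_path, state_file):
    """Record the log's current size and a SHA-256 of its contents so far."""
    size = os.path.getsize(log_path)
    digest = hashlib.sha256()
    with open(log_path, "rb") as f:
        digest.update(f.read(size))
    with open(state_file, "w") as f:
        json.dump({"size": size, "sha256": digest.hexdigest()}, f)

def check(log_path, state_file):
    """Alert only if previously recorded data changed; appended data is fine."""
    with open(state_file) as f:
        state = json.load(f)
    with open(log_path, "rb") as f:
        prefix = f.read(state["size"])  # hash only the baselined prefix
    if hashlib.sha256(prefix).hexdigest() != state["sha256"]:
        return "ALERT: existing log data was modified"
    return "OK: only new data appended (or no change)"
```

After a clean check, the baseline can be re-recorded so the newly appended entries are themselves protected on the next run.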
The challenge I face is that some of the logs are written into databases while others go into flat files. The solutions we evaluated were unable to provide the desired result, especially in the case of flat files.
This is when we examined the requirement more closely and approached the implementation slightly differently. We concluded that the need is to protect the log/audit trail data: access to this data should be limited and set to read-only for all users except the application that writes the trails, and any change to the access control should trigger an alert. This mechanism gave us fairly good control over the audit trail files.
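The "alert on access-control change" approach above can be sketched as a small script that baselines the permission bits and ownership of each audit-trail file and flags any drift. This is only an illustration under my own assumptions (the function names and the JSON state file are hypothetical); a real deployment would run it from a scheduler and wire the alerts into monitoring.

```python
import json
import os

def record_permissions(paths, state_file):
    """Baseline mode (permission bits), owner, and group for each file."""
    state = {}
    for p in paths:
        st = os.stat(p)
        state[p] = {"mode": st.st_mode, "uid": st.st_uid, "gid": st.st_gid}
    with open(state_file, "w") as f:
        json.dump(state, f)

def check_permissions(state_file):
    """Return an alert for every file whose access control has changed."""
    with open(state_file) as f:
        state = json.load(f)
    alerts = []
    for path, saved in state.items():
        st = os.stat(path)
        if (st.st_mode, st.st_uid, st.st_gid) != (saved["mode"], saved["uid"], saved["gid"]):
            alerts.append(f"ALERT: access control changed on {path}")
    return alerts
```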
The second reference to file integrity is at 11.5, where the standard requires monitoring of critical files related to the operating system, the application, and the system configuration. The examples given are system executables, application executables, configuration and parameter files, and stored log files.
The challenge we faced here was the non-availability of FIM products covering the operating systems we run, for example Tru64 and Tandem. What we have done is identify the critical files listed above and develop a cron process that runs on a weekly basis and emails the information about the baseline files. A manual verification is then performed to check whether the file size or date of modification has changed. If everything matches, the file integrity is assumed to be intact.
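The weekly baseline described above can be sketched as follows: one function records each critical file's size and modification time into a plain CSV (which could be emailed out for the manual review), and another compares the current state against that record. The names and the CSV layout are assumptions of mine for illustration, not the actual script we run.

```python
import csv
import os

def write_baseline(paths, baseline_file):
    """Record size and modification time for each critical file."""
    with open(baseline_file, "w", newline="") as f:
        writer = csv.writer(f)
        for p in paths:
            st = os.stat(p)
            writer.writerow([p, st.st_size, int(st.st_mtime)])

def compare_baseline(baseline_file):
    """Return the files whose size or mtime differ from the baseline."""
    changed = []
    with open(baseline_file, newline="") as f:
        for path, size, mtime in csv.reader(f):
            st = os.stat(path)
            if st.st_size != int(size) or int(st.st_mtime) != int(mtime):
                changed.append(path)
    return changed
```

Note that size and mtime are weak indicators on their own (both can be forged); adding a cryptographic checksum per file, as in the log example earlier, would make the comparison much harder to defeat.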
Though this may not be the best way to do it, we consider it a cost-effective one. How does your organization do this?