
Risk-based Data Management, nay targeted data review

There is no doubt that our clinical trial designs are getting more complex by the day. However challenging, this is a result of progress in the industry and is unavoidable. From a data management (DM) perspective, the result is more complex CRF designs, more (and more complex) edit checks and, of course, a proliferation of data sources. The end result: unwanted higher costs. Since clinical trials will only continue to grow in complexity, the only solution is to look for ways to meet the regulatory expectations of a trial more effectively, and to ensure we are not performing any redundant tasks.

The mere insinuation that any DM tasks are redundant will undoubtedly spark a passionate reaction from any data manager. For years we as data managers have seen ourselves as the data police, and we know all too well what can go wrong if we let our guard down. The industry has therefore adopted a blanket cleaning approach: cleaning as much data as possible, as thoroughly as possible. Although this undoubtedly contributes value, the question at hand is: is this in-depth, thorough level of cleaning actually necessary?

What if some of the data cleaning we are doing is in vain? The key fields for statistical analysis are all linked to the primary and secondary endpoints of the study, with very few others included. Is all the effort we invest in additional data cleaning actually delivering value for cost? Or are we just “stuck” in our own habits?

The argument for this blanket review of data is that such an intense level of review is regulatory driven and provides assurance of data integrity; so although it seemingly does not contribute to the actual analysis, it does provide peace of mind. It may then be surprising to hear that the regulatory authorities do not actually expect all data to be clean, but rather simply “the absence of errors that matter”, which supports the notion that not all data should be cleaned equally. Two of the more recent examples are:

  • ICH GCP E6 (R2) Guidance (March 2018): “The methods used to assure and control the quality of the trial should be proportionate to the risks inherent in the trial and the importance of the information collected.” 
  • MHRA’s ‘GXP’ Data Integrity Guidance and Definitions (March 2018): “Organisations are expected to implement, design and operate a documented system that provides an acceptable state of control based on the data integrity risk with supporting rationale.”

Additionally, ICH E8 (R1), which has already been drafted and should be released in Q1 2021, goes into more detail on topics such as quality by design (risk assessments, quality attributes, critical-to-quality factors, etc.) and quality culture.

This should provide sufficient comfort that regulators are not only supportive of a different approach to data cleaning, but are actually encouraging the industry to reconsider the conservative blanket cleaning approach.

In summary, it is time for the industry to rethink its approach to data cleaning; in fact, since the conservative approach is still hugely favoured, one could even go as far as to say that most companies are not currently aligned with the regulators. It’s time to partner with data masters/experts to realign processes and get the most out of your data, without breaking the bank, by taking the most appropriate approach to data cleaning and collection.

Learn more about our services
