Overview of FDA Draft Guidance: Computer Software Assurance for Production and Quality System Software

Below is a condensed version of the FDA draft guidance “Computer Software Assurance for Production and Quality System Software” with an emphasis on the information relevant to Grand Avenue Software (GAS) validation. These notes were gathered from the draft guidance document and the additional sources listed at the end of this article.

Introduction to the FDA Draft Guidance

The Computer Software Assurance (CSA) guidance was drafted by a group of FDA staff and representatives from several companies in the life sciences industry. The group was organized after the Center for Devices and Radiological Health (CDRH) determined that the burden of software validation for non-product software was a barrier to medical device manufacturers' adoption of new technologies.

Figure 1. Cultural Barriers Paralyzing the Industry by FDA discusses the drivers behind the shift from Computer System Validation to Computer Software Assurance (CSA) [1, 12:48]

General Information

Purpose

The draft guidance provides recommendations on computer software assurance for computers and data processing systems used as part of medical device production or the quality system. The draft guidance describes the following:

  • a risk-based approach for establishing and maintaining confidence that software is fit for its intended use

  • methods of establishing computer software assurance and providing objective evidence to fulfill regulatory requirements

Scope

  • Applicable to computers and data processing systems used as part of production or the quality system

  • Applicable to non-product software (NPS): any software that is not directly used in a medical device, medical device as a service, or end-product

Computer Software Assurance Overview

Computer software assurance is a risk-based approach for establishing and maintaining confidence that software is fit for its intended use. The paradigm shift from Computer System Validation (CSV) to Computer Software Assurance (CSA) is not a change in 21 CFR 820.70(i), and no new regulations have been introduced. The CSA guidance is a clarification that non-product software is lower risk and does not require the same validation rigor as software in a medical device (SiMD) or software as a medical device (SaMD).

Figure 2. From CSV to CSA by FDA illustrates the shift in emphasis on documentation and testing when comparing CSV to CSA [1, 8:38]

The FDA Quality System Regulation (QSR) says the following about software validation:

21 CFR 820.70(i) When computers or automated data processing systems are used as part of ... the quality system, the manufacturer shall validate computer software for its intended use according to an established protocol

This has not changed. The difference is that CSA does not consider an “established protocol” to be a one-size-fits-all document for all types of software. CSA considers the risk of compromised safety and/or quality of the device (should the software fail to perform as intended) to determine the level of assurance effort and activities appropriate to establish confidence in the software.

Computer Software Assurance Risk Framework

Key Steps in Computer Software Assurance (CSA):

  1. Identifying the Intended Use(s)

  2. Assessing Risk

  3. Determining the Appropriate Assurance Activities

  4. Establishing the Appropriate Record

Identifying the Intended Use(s)

The CSA process starts with defining the intended use(s) of the system:

  • What does the system do?

  • Does it have a direct impact on device safety and quality?

  • Does it result in patient/user safety risk?

  • What benefit do you get from the system?

The software addressed by this draft guidance falls into one of two categories:

  • software that is used directly as part of production or the quality system

  • software that supports production or the quality system

    • Supporting software often carries lower risk, such that under a risk-based computer software assurance approach, the effort of validation may be reduced accordingly without compromising safety.

Examples of Intended Uses

(Red text indicates example intended use applicable to Grand Avenue Software)

Direct Intended Uses

  • Software intended for automating production processes, inspection, testing, or the collection and processing of production data

  • Software intended for automating quality system processes, collection and processing of quality system data, or maintaining a quality record established under the Quality System regulation

Supporting Intended Uses

  • Software intended for automating general record-keeping that is not part of the quality record

  • Software intended for use as development tools that test or monitor software systems or that automate testing activities for the software used as part of production or the quality system, such as those used for developing and running scripts

FDA recognizes that software used in production or the quality system is often complex, comprising several features, functions, and operations; software may have one or more intended uses depending on those individual features, functions, and operations. Where the individual features, functions, and operations have different roles within production or the quality system, they may present different risks warranting different levels of validation effort. FDA recommends that manufacturers examine the intended uses of the individual features, functions, and operations to facilitate development of a risk-based assurance strategy, and manufacturers may decide to conduct different assurance activities for individual features, functions, or operations.
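
As a rough illustration of this per-feature approach, the sketch below models a single system whose features carry different intended uses. It is a minimal sketch in Python; the feature names and the IntendedUse/Feature types are hypothetical, not terminology from the draft guidance.

```python
# Illustrative sketch only; all names here are hypothetical.
from dataclasses import dataclass
from enum import Enum


class IntendedUse(Enum):
    DIRECT = "used directly as part of production or the quality system"
    SUPPORTING = "supports production or the quality system"


@dataclass
class Feature:
    name: str
    intended_use: IntendedUse


# One system, several features, potentially different intended uses
# (and therefore potentially different assurance activities):
features = [
    Feature("complaint_tracking", IntendedUse.DIRECT),      # maintains a quality record
    Feature("report_export", IntendedUse.SUPPORTING),       # general record-keeping
    Feature("test_script_runner", IntendedUse.SUPPORTING),  # development/testing tool
]

for f in features:
    print(f"{f.name}: {f.intended_use.value}")
```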

Assessing Risk

Risk-based approach: systematically identifying reasonably foreseeable software failures [to perform as intended], determining whether each such failure poses a high process risk, and performing assurance activities commensurate with the medical device or process risk, as applicable.

A risk-based analysis for quality system software should consider which failures are reasonably foreseeable (as opposed to likely) and the risks resulting from each such failure.

  • Process risk: the potential to compromise the quality system

  • Medical device risk: the potential to harm a patient or user resulting from a quality problem that compromises safety

For purposes of the draft guidance, process risk is binary (see the sketch after this list):

  • High process risk: failure of the software to perform as intended may result in a quality problem that foreseeably compromises safety, meaning an increased medical device risk

  • Not high process risk: failure of the software to perform as intended would not result in a quality problem that foreseeably compromises safety

    • Failure to perform as intended would not result in a quality problem

    • Failure to perform as intended may result in a quality problem that does not foreseeably lead to compromised safety
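
The binary determination reduces to two questions about each reasonably foreseeable failure. Below is a minimal sketch of that decision in Python; the function and parameter names are hypothetical, not FDA terminology.

```python
# Illustrative sketch of the binary process-risk determination above.
# Function and parameter names are hypothetical.
def process_risk(causes_quality_problem: bool,
                 foreseeably_compromises_safety: bool) -> str:
    """Classify a reasonably foreseeable software failure as 'high' or 'not high' process risk."""
    if causes_quality_problem and foreseeably_compromises_safety:
        return "high"      # quality problem that foreseeably compromises safety
    return "not high"      # no quality problem, or a quality problem that does
                           # not foreseeably lead to compromised safety


assert process_risk(True, True) == "high"
assert process_risk(True, False) == "not high"   # quality problem, but safety not compromised
assert process_risk(False, False) == "not high"  # no quality problem at all
```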

Examples of Process Risks

(Red text indicates example process risks applicable to Grand Avenue Software)

High Process Risk Features/Functions/Operations

  • Maintain process parameters that affect the physical properties of product or manufacturing processes that are identified as essential to device safety or quality

  • Measure, inspect, analyze, and/or determine acceptability of product or process with limited or no additional human awareness or review

  • Perform process corrections or adjustments of process parameters based on data monitoring or automated feedback from other process steps without additional human awareness or review

  • Produce directions for use or other labeling provided to patients and users that are necessary for safe operation of the medical device

  • Automate surveillance, trending, or tracking of data that the manufacturer identifies as essential to device safety and quality

Not High Process Risk Features/Functions/Operations

  • Collect and record data from the process for monitoring and review purposes that do not have a direct impact on production or process performance

  • Used as part of the quality system for Corrective and Preventive Action (CAPA) routing, automated logging/tracking of complaints, automated change control management, or automated procedure management

  • Intended to manage data, automate an existing calculation, increase process monitoring, or provide alerts when an exception occurs in an established process

  • Used to support production or the quality system (see Supporting Intended Uses above)

Determining the Appropriate Assurance Activities

A slide from an FDA presentation summarizes the Agency's position on appropriate methods and activities for Computer Software Assurance (CSA); its key points follow.

Definition of appropriate assurance activities: the activities commensurate with the medical device risk or the process risk.

  • In cases where a quality problem may foreseeably compromise safety (high process risk), the level of assurance should be commensurate with the medical device risk.

  • In cases where a quality problem may not foreseeably compromise safety (not high process risk), the level of assurance rigor should be commensurate with the process risk.

Examples of computer software assurance activities include, but are not limited to, the following (see the code illustration after this list):

  • Unscripted testing – Dynamic testing in which the tester’s actions are not prescribed by written instructions in a test case. It includes:

    • Ad-hoc testing – A concept derived from unscripted practice that focuses primarily on performing testing that does not rely on large amounts of documentation (e.g., test procedures) to execute.

    • Error-guessing – A test design technique in which test cases are derived on the basis of the tester’s knowledge of past failures or general knowledge of failure modes.

    • Exploratory testing – Experience-based testing in which the tester spontaneously designs and executes tests based on the tester’s existing relevant knowledge, prior exploration of the test item (including results from previous tests), and heuristic “rules of thumb” regarding common software behaviors and types of failure. Exploratory testing looks for hidden properties, including hidden, unanticipated user behaviors, or accidental use situations that could interfere with other software properties being tested and could pose a risk of software failure.

  • Scripted testing – Dynamic testing in which the tester’s actions are prescribed by written instructions in a test case.

    • Robust scripted testing – Scripted testing that includes evidence of repeatability, traceability to requirements, and auditability, commensurate with the risk of the computer system or automation.

    • Limited scripted testing – A hybrid approach of scripted and unscripted testing that is appropriately scaled according to the risk of the computer system or automation. This approach may apply scripted testing for high-risk features or operations and unscripted testing for low- to medium-risk items as part of the same assurance effort.
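
To make the distinction concrete, here is a small illustration in pytest. It is a sketch only: the function under test (parse_lot_number) and its requirement are hypothetical, not drawn from the guidance.

```python
# Illustrative sketch; parse_lot_number and its requirement are hypothetical.
import pytest


def parse_lot_number(raw: str) -> str:
    """Hypothetical production function: normalize a lot number to upper case."""
    value = raw.strip().upper()
    if not value:
        raise ValueError("empty lot number")
    return value


# Scripted: the steps and expected result are prescribed in advance and
# traceable to a written requirement ("lot numbers are stored in upper case").
def test_lot_number_is_normalized_to_upper_case():
    assert parse_lot_number(" abc123 ") == "ABC123"


# Error-guessing: inputs chosen from knowledge of common failure modes
# (empty or whitespace-only input) rather than from a written test case.
@pytest.mark.parametrize("bad_input", ["", "   ", "\t\n"])
def test_suspicious_inputs_are_rejected(bad_input):
    with pytest.raises(ValueError):
        parse_lot_number(bad_input)
```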

High-risk software functions require more rigor, i.e., scripted testing approaches. Functions that are not high risk require less rigor, i.e., unscripted testing approaches, which can be a combination of methods suited to the risk of the intended use.
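
A minimal sketch of this risk-to-rigor mapping follows. The tiers and names are assumptions for illustration; the draft guidance does not prescribe an exact lookup.

```python
# Illustrative sketch only; the tiers and names are assumptions, not FDA-prescribed.
def select_assurance_activities(process_risk: str, medical_device_risk: str = "low") -> str:
    """Choose a testing approach commensurate with risk, per the CSA framing above."""
    if process_risk == "high":
        # High process risk: rigor commensurate with the medical device risk.
        return ("robust scripted testing" if medical_device_risk == "high"
                else "limited scripted testing")
    # Not high process risk: unscripted approaches (possibly combined) may suffice.
    return "unscripted testing (ad-hoc, error-guessing, and/or exploratory)"


print(select_assurance_activities("high", "high"))  # robust scripted testing
print(select_assurance_activities("not high"))      # unscripted testing (...)
```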

Risk Mitigation

Consider the controls or mechanisms in place throughout the quality system that may decrease the impact of compromised safety and/or quality if software fails to perform as intended.

Examples: (italics denote examples added by GAS)

  • Purchasing controls for selection of software vendors (e.g., GAS should be an approved supplier)

  • Data collected by the software for the purposes of monitoring or detecting issues and anomalies in the software (e.g., Windows event log on the application server, GAS security log, GAS audit trail)

  • The use of Computer System Validation tools (e.g., GAS automated test framework) wherever possible

  • The use of testing done in iterative cycles and continuously throughout the lifecycle of the software (e.g., test-driven development, continuous integration)

GAS provides the following additional mitigations:

All customers:

  • objective evidence of vendor testing provided in the validation package

  • the software does not permit customization, which would otherwise require revalidation by customers

Hosted customers:

  • IQ checklists for all new installations and upgrades

  • data center certifications and assessments

  • backup and restore strategy and testing

  • security through database encryption (releases 14.7 and later)

  • high availability and scalability of the hosting environment

  • multiple non-production environments for different kinds of testing

  • upgrades are tested in an environment identical to production; production software is never upgraded automatically, and upgrades must be requested by the customer

Software vendors are responsible for testing all functionality of an application. If vendors provide evidence of their validation, customers should not repeat the same testing to show that the software’s features and functions work as designed and documented.

Establishing the Appropriate Record

Capture sufficient objective evidence to demonstrate that the software was assessed and performs as intended. The record should include the following (see the sketch after this list):

  • the intended use of the software feature, function, or operation

  • the determination of risk of the software feature, function, or operation

  • documentation of the assurance activities conducted, including:

    • description of the testing conducted based on the assurance activity

    • issues found (e.g., deviations, failures) and the disposition

    • conclusion statement declaring acceptability of the results

    • the date of testing/assessment and the name of the person who conducted the testing/assessment

    • established review and approval when appropriate (e.g., when necessary, a signature and date of an individual with signatory authority)
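
As a minimal sketch, these record elements can be captured as a structured type so the record can be generated and stored by tooling. The field names below are hypothetical; the draft guidance specifies the content of the record, not its format.

```python
# Illustrative sketch only; field names are hypothetical.
from dataclasses import dataclass, field
from datetime import date
from typing import Optional


@dataclass
class AssuranceRecord:
    feature: str                        # software feature, function, or operation
    intended_use: str
    risk_determination: str             # e.g., "high" or "not high" process risk
    testing_description: str            # what was done, per the chosen assurance activity
    issues_and_dispositions: list[str] = field(default_factory=list)  # deviations/failures
    conclusion: str = ""                # statement declaring acceptability of the results
    tested_by: str = ""                 # who conducted the testing/assessment
    test_date: Optional[date] = None
    approved_by: Optional[str] = None   # signatory, when review/approval is appropriate
    approval_date: Optional[date] = None
```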

An embedded file with examples of assurance activities and acceptable records, reproduced verbatim from the draft guidance, is available for download: Examples of Assurance Activities and Records

Global Harmonization

  • FDA has buy-in and engagement with the International Medical Device Regulators Forum (IMDRF)

  • FDA is working to ensure that CSA and the Medical Device Single Audit Program (MDSAP) are in alignment

  • Companies should engage with standards groups and global regulatory agencies to promote alignment with CSA and encourage adoption of policies around CSA

Summary

CSA Planning

  • Leverage software testing done by the vendor

  • Leverage your supplier approval process

  • Use risk-based evaluation

  • Streamline testing. Traditional IQ/OQ/PQ is not necessary.

CSA Objective Evidence

  • Record what was done and how it was done

  • Evidence can be logged in the system (e.g., use automated logging, audit trail)

  • Minimize the use of screenshots

  • Validation documentation must have organizational value (i.e., it is not generated simply to show during an audit)

CSA Reporting Requirements

  • Briefly document your risk justification (convey the thought process that went into the risk determination)

  • Include summary descriptions of features and functions tested

  • Record who performed the testing and the date

  • Record identified issues and their dispositions

  • Provide a conclusion statement about the software validation

References

1. USDM. Update from the FDA on CSV Changes [Webinar and Transcript]. Available from https://www.usdm.com/Insights/Webinars/Computer-Software-Assurance-Update-from-the-FDA

2. FDA CSV Team (2019, April 23). Panel Discussion, FDA and Industry Collaboration on Computer Software Assurance, 20th Annual Computer and IT Systems Validation, Medical Device Innovation Consortium. Retrieved from https://mdic.org/wp-content/uploads/2020/10/2019.04.23-IVT-FDA-Industry-Team-CSA-Presentation-Final.pdf

Additional Sources of Information

USDM. Q&A with the FDA on CSV Changes [Webinar]. Available from https://www.usdm.com/Insights/Webinars/Q-A-with-the-FDA-on-CSV-Changes

USDM. CSA: What You Need to Know About the FDA’s Upcoming Guidance [White Paper]. Available from https://www.usdm.com/Insights/White-Papers/Computer-Software-Assurance-What-You-Need-to-Know

FDA CSV Team (2020, August 20). CSA Revolution Series: Computer Software Assurance - Episode 4. Retrieved from https://mdic.org/wp-content/uploads/2020/10/Episode-4-CSA-Automation-with-Medtronics.pdf

For Further Information Contact

FDA Case for Quality program CaseforQuality@fda.hhs.gov

Francisco Vicenty, Center for Devices and Radiological Health, Food and Drug Administration, 10903 New Hampshire Ave., Bldg. 66, Rm. 1534, Silver Spring, MD 20993-0002, 301-796-5577; or Stephen Ripley, Center for Biologics Evaluation and Research, Food and Drug Administration, 10903 New Hampshire Ave., Bldg. 71, Rm. 7301, Silver Spring, MD 20993, 240-402-7911.

Copyright © 2022, Grand Avenue Software, Inc. All rights reserved.