By Joe Fornadel and Wes Moran | August 26, 2020
“Software as a medical device” is becoming more prevalent in health care. Federal regulatory authorities have recognized its benefits, and although many such devices have received regulatory approval, the product field continues to expand at a rapid pace, making it difficult for courts to provide up-to-date guidance on the attendant liability risks.
U.S. Food and Drug Administration-approved artificial intelligence-based algorithms are rapidly growing in number. These algorithms are, for the most part, designed to help healthcare providers implement existing workflows more efficiently in a variety of fields, the two most popular being cardiology and radiology. This article provides a high-level overview of recent developments in the world of artificial intelligence and attempts to identify some of the associated liabilities, particularly in the context of product liability.
Before charting the history of the approach to artificial intelligence taken by the U.S. Food and Drug Administration (FDA), it is first helpful to define some of the basic terminology associated with this field. (Note for the reader: the FDA’s website devoted to this topic is informative.) Artificial intelligence (AI) is a broad term that captures the entirety of the field devoted to making intelligent machines. Machine learning is a particular AI technique in which a computer program generates an algorithm from a particular data set and then applies that algorithm to new data. These algorithms can be “locked” or “adaptive”: the former remains unchanged regardless of any results associated with new data (making it seemingly easier to understand and regulate); the latter allows the program to change the algorithm over time to better reflect the ever-expanding set of data (creating a moving target that is more complex and difficult to regulate). When such programs are “intended for use in the diagnosis of disease or other conditions, or in the cure, mitigation, treatment, or prevention of disease, in man,” they fall within the jurisdiction of the FDA and are referred to as “software as a medical device” (SaMD).
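To make the locked/adaptive distinction concrete, the following is a minimal sketch in Python using scikit-learn and synthetic data; the model, features, and labels are purely illustrative and are not drawn from any actual SaMD.

```python
# A minimal sketch of the difference between a locked and an adaptive
# algorithm, using synthetic data. Purely illustrative.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
X_initial = rng.normal(size=(500, 4))          # training features
y_initial = (X_initial[:, 0] > 0).astype(int)  # training labels

# Locked algorithm: fitted once, then frozen. The version a regulator
# reviews is the version that scores every future case.
locked = SGDClassifier(random_state=0)
locked.fit(X_initial, y_initial)

# Adaptive algorithm: starts from the same training run but keeps
# updating its parameters as post-deployment data arrives, so its
# behavior can drift away from the version originally reviewed.
adaptive = SGDClassifier(random_state=0)
adaptive.fit(X_initial, y_initial)
for _ in range(10):                      # batches of new, labeled cases
    X_new = rng.normal(size=(50, 4))
    y_new = (X_new[:, 0] > 0).astype(int)
    adaptive.partial_fit(X_new, y_new)   # the "moving target"
```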
As medical devices, SaMD incorporating machine learning must proceed through the traditional FDA review and approval pathways: premarket clearance (510(k)), de novo classification, or premarket approval. Given the infancy of the SaMD field, and because 510(k) clearance requires a predicate device to which the new device is substantially equivalent, the 510(k) process is likely irrelevant to most devices in the near future and is set aside for the purposes of this article. Further, because most of the products currently in development appear to be in the field of radiology, where devices are generally of moderate risk, the premarket approval pathway (applicable to devices that support or sustain human life, are of substantial importance in preventing impairment of human health, or present a potential, unreasonable risk of illness or injury) is, for the time being, not worthy of further discussion. That leaves the de novo process as the primary pathway for the consideration of SaMD.
The Challenge of “Software as a Medical Device”
Even with a defined pathway, SaMD presented the FDA with a novel challenge. Unlike a stent or an MRI machine (or the software inside them), which are instruments used to treat a condition or to further a clinician’s ability to render a clinical judgment, SaMD can be designed to generate a clinical recommendation or opinion of its own. Thus, the attendant regulatory and liability considerations that accompany such devices are somewhat broader than those associated with previously developed product types.
However, the FDA did not have to tackle the challenge of analyzing and regulating SaMD in a vacuum. Its representatives were leading members of the International Medical Device Regulators Forum (IMDRF), and the agency eventually built on the IMDRF’s work, including a framework for defining and considering risk in the SaMD context published in 2014. This 2014 framework incorporated the definitions and standards captured in existing engineering standards promulgated by the International Electrotechnical Commission (and others) and identified a number of factors to consider when regulating SaMD. The IMDRF then proposed classifying SaMD into one of four categories based on two factors: the significance of the information the SaMD provides to the healthcare decision (treating or diagnosing being the most significant, followed by driving clinical management, then informing clinical management) and the state of the healthcare situation or condition (scaled from critical to serious to non-serious). This classification system is aptly captured in Table 1 of the IMDRF framework, which assigns each combination of factors a category from I (lowest risk) to IV (highest risk).
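For readers who prefer the classification spelled out, the following short Python sketch encodes Table 1 of the IMDRF’s 2014 framework as we read it; the function and variable names are ours, not the IMDRF’s.

```python
# The IMDRF's four-tier risk categorization, encoded as a lookup table.
# Each key pairs the state of the healthcare situation or condition with
# the significance of the information the SaMD provides to the decision.
CATEGORY = {
    ("critical",    "treat or diagnose"):          "IV",
    ("critical",    "drive clinical management"):  "III",
    ("critical",    "inform clinical management"): "II",
    ("serious",     "treat or diagnose"):          "III",
    ("serious",     "drive clinical management"):  "II",
    ("serious",     "inform clinical management"): "I",
    ("non-serious", "treat or diagnose"):          "II",
    ("non-serious", "drive clinical management"):  "I",
    ("non-serious", "inform clinical management"): "I",
}

def imdrf_category(state: str, significance: str) -> str:
    """Return the IMDRF risk category (I lowest to IV highest) for a SaMD."""
    return CATEGORY[(state, significance)]

# Example: software that diagnoses a condition in a critical situation
# sits in the highest-risk tier.
assert imdrf_category("critical", "treat or diagnose") == "IV"
```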

Indeed, in addition to its work with the IMDRF, the FDA publicly encouraged advancements in digital healthcare and in SaMD in particular. On July 27, 2017, then-FDA Commissioner Dr. Scott Gottlieb issued a public statement announcing “new steps to empower consumers and advance digital healthcare.” Commissioner Gottlieb recognized that the medical community had been hesitant to adopt new technologies in the past and indicated that the FDA was focused on empowering and facilitating innovation. Among other things, he announced the “Digital Health Software Precertification Program,” which streamlined review of SaMD from precertified firms. Although targeted at lower-risk devices (and not some of the higher-risk devices contemplated above), this precertification program reflected a larger FDA decision to embrace the digital healthcare revolution.
One of the most notable developments in this field occurred just six months later when the FDA authorized IDx’s IDx-DR retinal diagnostic software, which is indicated for use with the Topcon TRC-NW400 retinal camera. On its own, the TRC-NW400 is designed to provide images of the retina and anterior segment of the human eye. The IDx-DR software, once integrated with the camera, is designed to detect more than mild diabetic retinopathy in patients diagnosed with diabetes mellitus. It does so by transmitting specific patient images selected by the user to IDx’s servers; the images are then input into AI software, and the diagnostic results are transmitted back to the user and communicated to the patient. The magnitude of this authorization cannot be overstated because the anticipated user will not need prior imaging experience, save for a minor, product-specific training session. Given its novelty, this SaMD initially faced some skepticism from the medical community. Nevertheless, the American Diabetes Association now recognizes it as an alternative to traditional screening approaches. Indeed, for a device that likely falls into one of the higher-risk tiers set out above, the FDA’s authorization decision is a groundbreaking one.
The FDA’s approach to regulating SaMD has continued to evolve, particularly for devices that incorporate AI. On April 2, 2019, the agency published a discussion paper titled “Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD).” Organized into seven sections, the paper marked where the field of AI-based SaMD stood and how the FDA currently regulated such devices, and it proposed a framework for evaluating modifications to existing devices. Notably, this framework expressly mentioned the IMDRF’s four-tier risk-classification system (discussed above). The FDA also recognized that AI-based SaMD present another layer of complexity to the extent the underlying algorithm is adaptive rather than locked. The paper then explored in detail how different types of modifications may affect the necessary or desired regulatory oversight. While the finer details of this proposal are beyond the scope of this article, it is important to note that the FDA emphasized SaMD manufacturer transparency as a means both to improve safety and to achieve regulatory compliance.
Moreover, the FDA’s April 2019 discussion paper was not its most recent action in this field. The FDA hosted a two-day public workshop, “Evolving Role of Artificial Intelligence in Radiological Imaging,” on February 25 and 26, 2020. Interestingly, just days before this workshop, the FDA authorized the Caption Guidance software, an AI-based SaMD that provides real-time guidance to users to allow them to capture diagnostic-quality echocardiographic images. It is not surprising that the FDA’s first public workshop discussing AI-based SaMD focused on radiology; the number of such devices authorized by the FDA in this specialty is far greater than in any other field, likely because the data-rich nature of radiological imaging lends itself to the application of AI algorithms.
In any event, the workshop provided some insight into the future use of AI and its incorporation into medical devices and practice. Based on the presentations and surrounding discussions, it does not appear that autonomous devices with incorporated AI systems, such as the IDx-DR, will overtake the radiological-imaging field anytime soon. This is not to say that the “revolution” that Commissioner Gottlieb described in July 2017 is not underway. The efficiencies and improved accuracy inherent in AI-based SaMD are significant. In some cases, they may obviate the need for human review of the underlying images.
To date, courts have provided very little guidance on how AI-based “products” should be analyzed under the existing legal framework. However, it makes sense to start with the threshold question of whether AI is a “product” at all. This issue was recently examined by the U.S. Court of Appeals for the Third Circuit in Rodgers v. Christie, 795 F. App’x 878 (3d Cir. 2020). That case involved a multifactor risk-estimation model known as the Public Safety Assessment (PSA) that aided the New Jersey state court system in determining whether an inmate should be granted pretrial release. The plaintiff brought product liability claims against the foundation that administered the PSA after her son was murdered by a man who, just days before, had been granted pretrial release by a New Jersey state court.
The case was filed in the United States District Court for the District of New Jersey. The plaintiff’s claims were brought under the New Jersey Product Liability Act (NJPLA), which, similar to many states’ product liability statutes, imposes strict liability on manufacturers and sellers of certain defective products. However, the NJPLA does not define the term “product.” Therefore, the district court looked to the Restatement (Third) of Torts: Products Liability, which defines products as “tangible personal property distributed commercially for use or consumption” or any “[o]ther item[]” whose “context… of distribution and use is sufficiently analogous to that of tangible personal property.” The district court dismissed the plaintiff’s complaint on the grounds that the PSA was not a product under the NJPLA, and the Third Circuit upheld the dismissal on appeal.
The Third Circuit cited two reasons for upholding the dismissal. First, the PSA was not distributed commercially. Rather, it was designed as an objective, standardized, and empirical risk-assessment instrument to be used by pretrial services programs such as New Jersey’s. Second, the Third Circuit held that the PSA was neither tangible personal property nor remotely analogous to it. The Third Circuit agreed with the district court in concluding that information, guidance, ideas, and recommendations are not products under the Third Restatement, both as a definitional matter and because extending strict liability to the distribution of ideas would raise serious First Amendment concerns.
The Rodgers holding could have far-reaching implications for AI-based SaMD. Although Rodgers did not involve a medical device, the Third Circuit made clear that information, guidance, ideas, and recommendations do not qualify as products under the NJPLA. Assuming that other courts find the Third Circuit’s reasoning persuasive, it is unlikely that AI-based SaMD that provide information to aid healthcare providers in diagnosing an underlying condition or managing clinical care will be deemed products in the short term. However, the law is not static. As AI-based SaMD become more commonplace, courts may come to define the term “product” more broadly to encompass these devices. If so, SaMD may be subject to product liability claims sounding in strict liability, negligence, and breach of warranty.
Strict Liability
The scope of strict liability as defined in statutes and case law varies from state to state. However, as a general rule, to recover under a claim of strict liability, the plaintiff must prove that he or she was harmed by a product manufactured or sold by a defendant that contained a manufacturing or design defect or that failed to warn of a potential safety hazard, and that the product was being used in a reasonably foreseeable manner when the harm occurred. See Restatement (Third) of Torts: Products Liability § 1. Given the nature of SaMD, manufacturing defects are less likely to occur, and in any event, further discussion of the technical minutiae of such claims is beyond the scope of this article. Strict liability claims involving SaMD are instead more likely to be brought under theories of design defect or failure to warn.
Design Defect
Design defects are generally established through either the consumer-expectations test or the risk–utility test. Under the consumer-expectations test, a product is defective in design if it fails to perform as safely as an ordinary consumer would expect when used in an intended or reasonably foreseeable manner. However, the consumer-expectations test applies only where the everyday experience of a product’s user permits the conclusion that the product’s design violated minimum safety assumptions. The test is therefore unlikely to apply in a case involving SaMD, which are specialized medical devices incorporating AI-based algorithms. Instead, the risk–utility test is more likely to apply.
Under the risk–utility test, a product is defective in design when the risks inherent in the design outweigh the benefits. Typically, design defects under the risk–utility test must be established through expert testimony, especially in cases involving highly technical products such as AI-based algorithms. Therefore, in a design defect case involving SaMD, the plaintiff’s expert would need to establish that the risks of using the AI-based software (i.e., the algorithm itself) to diagnose or treat a patient outweighed the benefits. Key considerations would likely include the gravity of danger or magnitude of harm posed by the design (i.e., how a patient could be harmed if a diagnosis is missed or an incorrect treatment plan is recommended); the likelihood that such danger would occur (i.e., the algorithm’s error rate); the feasibility of a safer alternative design; and the financial cost of the safer alternative design.
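Although the weighing is ultimately a qualitative question for the fact finder, a toy calculation shows how these factors interact. The numbers below are invented for illustration only and are not a statement of how any court would actually run the analysis.

```python
# A toy, invented illustration of the risk-utility factors: the expected
# harm avoided by a safer alternative design is compared against what the
# alternative costs. None of these numbers reflect a real case.
def expected_harm(error_rate: float, harm_magnitude: float) -> float:
    """Likelihood that the danger occurs times the gravity of the harm."""
    return error_rate * harm_magnitude

current_design     = expected_harm(error_rate=0.020, harm_magnitude=1_000_000)  # 20,000
alternative_design = expected_harm(error_rate=0.005, harm_magnitude=1_000_000)  #  5,000
alternative_cost   = 40_000  # hypothetical cost of adopting the safer design

# On these numbers, the redesign avoids 15,000 in expected harm but costs
# 40,000, so the balance would not favor a finding of design defect.
risk_outweighs_benefit = (current_design - alternative_design) > alternative_cost
print(risk_outweighs_benefit)  # False
```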
Failure to Warn
As previously mentioned, manufacturers and sellers can also be held liable under a failure-to-warn theory if the plaintiff can prove that he or she was harmed because the manufacturer or seller failed to adequately instruct or warn of a risk that was known, or should have been known, to occur when the product was being used in a reasonably foreseeable manner. However, most states recognize the learned-intermediary defense for manufacturers and sellers of medical devices. In states where this defense applies, the manufacturer or seller of the medical device may discharge its duty to warn the end consumer by furnishing adequate warnings and instructions to the treating physician who uses the device to diagnose or treat the patient. Such a defense would likely be available in cases involving SaMD, as it is in typical medical device cases.
Negligence
Depending on the law of the state, a plaintiff may also bring a product liability claim under a theory of negligence. Unlike strict liability, negligence claims focus on the conduct of the defendant and are based on what was known, or should have been known, at the time of manufacture. Typically, to establish negligence, the plaintiff must prove that the defendant seller or manufacturer failed to exercise due care in some manner and that, as a result, the plaintiff was harmed. Negligence claims can be based on negligent design or failure to warn, both of which would likely apply in cases involving SaMD.
Breach of Warranty
The third theory of recovery that is potentially applicable in SaMD product liability cases is breach of warranty. Claims sounding in breach of warranty are often governed by statute. Generally, three types of warranties can apply: (1) express warranties, (2) the implied warranty of merchantability, and (3) the implied warranty of fitness for a particular purpose. As with other product liability claims, a claim for breach of warranty only applies to products, not services.
Express warranties are created in one of three ways: (1) by an affirmation of fact made by the seller that becomes part of the bargain; (2) through a description of the product that becomes part of the bargain; or (3) by the seller providing a sample or model of the product that becomes part of the bargain. Implied warranties are those created by law. The implied warranty of merchantability requires that the product meet certain minimum standards of quality, specifically, that the product be fit for the ordinary purposes for which it is sold. This requirement includes a standard of reasonable safety. Lastly, the implied warranty of fitness for a particular purpose arises in cases where the seller knows, or has reason to know, of a particular purpose for which the product is required and the purchaser relies on the seller to select a suitable product to meet that purpose.
To recover under a breach-of-warranty theory, the plaintiff must generally prove that (1) the plaintiff purchased the product from the defendant; (2) the seller issued an express warranty, or one was implied through the operation of law (as in the case of the implied warranties of merchantability and fitness for a particular purpose); (3) the seller breached the warranty because the product failed to perform as warranted; and (4) the plaintiff was harmed as a result.
It is rare for patients to recover for breach of warranty in the medical device context because it is typically the healthcare provider, not the patient, that purchases the medical device from the manufacturer or supplier. However, as SaMD becomes more commonplace in our society, it is likely that we will see medical devices that are marketed directly to consumers and used in the home setting. So, manufacturers and sellers of SaMD should consider the possibility of product liability claims premised on a breach-of-warranty theory when deciding whether to market such devices directly to individual consumers.
Appendix A to the FDA’s April 2019 discussion paper presented a series of hypothetical scenarios that bear further examination and may prove to be fruitful ground for considering potential product liability issues.
Intensive Care Unit SaMD (ICU SaMD)
Consider the following scenario:
An AI, machine-learning application intended for ICU patients receives electrocardiogram, blood pressure, and pulse-oximetry signals from a primary patient monitor. The physiologic signals are processed and analyzed to detect patterns that occur at the onset of physiologic instability. When physiologic instability is detected, an audible alarm signal is generated to indicate that prompt clinical action is needed to prevent potential harm to the patient. This SaMD AI, machine-learning application will drive clinical management in a critical healthcare situation or condition.
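To ground the liability discussion that follows, here is a deliberately simplified Python sketch of the kind of monitoring loop the scenario describes. The thresholds, scoring rule, and alarm logic are invented stand-ins for a validated machine-learning model.

```python
# A deliberately simplified sketch of the ICU scenario's monitoring loop.
# The thresholds and scoring rule are invented stand-ins for a validated
# machine-learning pattern detector.
from dataclasses import dataclass

@dataclass
class Vitals:
    heart_rate: float   # derived from the electrocardiogram signal (bpm)
    systolic_bp: float  # from the blood-pressure signal (mmHg)
    spo2: float         # from the pulse-oximetry signal (percent)

def instability_score(v: Vitals) -> float:
    """Toy stand-in for the learned detector: higher scores mean the
    signals look more like the onset of physiologic instability."""
    score = 0.0
    if v.heart_rate > 120 or v.heart_rate < 45:
        score += 0.4
    if v.systolic_bp < 90:
        score += 0.4
    if v.spo2 < 92:
        score += 0.3
    return score

# The alarm threshold is where sensitivity trades off against false alarms.
ALARM_THRESHOLD = 0.5

def monitor(stream):
    """Score each reading from the primary patient monitor and sound an
    audible alarm when prompt clinical action may be needed."""
    for vitals in stream:
        if instability_score(vitals) >= ALARM_THRESHOLD:
            print("ALARM: possible onset of physiologic instability")

monitor([Vitals(heart_rate=130, systolic_bp=85, spo2=95)])  # triggers the alarm
```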
After the initial authorization and commercialization of the ICU SaMD, the manufacturer proposes two potential modifications: (1) modify the algorithm to ensure consistent performance across subpopulations, especially in situations where real-world monitoring suggests that the algorithm underperforms; and (2) reduce false-alarm rates while maintaining or increasing sensitivity to the onset of physiologic instability. Assuming that the ICU SaMD is in fact a product and not a medical service, several potential product liability considerations come into play.
First, is the ICU SaMD defective because there are feasible alternative designs that could improve the product’s performance? Fortunately for the device manufacturer, the analysis does not end there. Under the risk–utility test, the fact finder would also have to consider the danger posed by the current design, the likelihood that the harm would occur, and the financial cost of the “safer” alternative design. In this case, the harm that could occur from the algorithm failing to identify potential physiologic instability could be great: the patient could fall into an extremely unstable condition if treatment is untimely. Harm could also result from false-positive results, in the form of unnecessary actions taken by medical staff to head off a physiologic event that was never going to occur. However, even though the potential harm is severe, the other factors of the risk–utility test may weigh against a finding that the device is defective. For example, if the SaMD’s error rate is already very low, and the update to the algorithm only slightly improves the device’s accuracy, the SaMD may not be deemed defective. It would also be difficult to find that the product was defective in design if the proposed alternative design were to increase the cost such that it became too expensive to use in a clinical setting.
Another consideration is whether the device manufacturer should be held liable under a failure-to-warn theory if it failed to warn the physician about the potential for underperformance in certain subpopulations, or failed to educate the physician about potential error rates. Such a determination would depend on when the manufacturer became aware that the ICU SaMD was underperforming in certain subpopulations, or had a certain error rate, and whether it had a continuing duty to warn the user even after the product was sold.
Further, could the manufacturer be held liable for negligent design because certain demographics were underrepresented in the data set that helped form the basis of the algorithm that predicts when a physiologic event is likely to occur? Assuming that any such claim is not preempted (an assumption necessary for the entirety of this discussion), this question is likely one that a jury would be left to answer. All of the above scenarios present questions that manufacturers of SaMD may have to grapple with in the event that such devices are determined to be products subject to a product liability legal framework.
Skin Lesion Mobile Medical App
Consider yet another scenario arising with a different type of AI-based medical device:
An AI, machine-learning mobile medical app (MMA) uses images taken by a smartphone camera to provide detailed information to a dermatologist on the physical characteristics of a skin lesion so that the dermatologist can label the skin lesion as benign or malignant. The MMA will drive clinical management in a serious healthcare situation or condition.
Then, consider the manufacturer proposing the following modifications for the skin lesion MMA: (1) improving sensitivity and specificity in analyzing physical characteristics of benign or malignant skin lesions using real-world data; and (2) extending the MMA for use with similar smartphone image-acquisition systems, with prespecified acceptance criteria for the image-acquisition characteristics and a real-world performance plan to monitor performance across image-acquisition systems.
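As a rough illustration of the analysis step this scenario contemplates, consider the following Python sketch. The features, values, and image handling are hypothetical placeholders, not any actual MMA’s pipeline.

```python
# A rough sketch of the MMA's analysis step. The features and numbers are
# invented for illustration; a real app would run a validated classifier.
import numpy as np

rng = np.random.default_rng(0)
photo = rng.integers(0, 256, size=(224, 224, 3), dtype=np.uint8)  # stand-in for a smartphone image

def characterize(image: np.ndarray) -> dict:
    """Toy stand-in for the learned model: report the physical
    characteristics of the lesion to the dermatologist."""
    pixels = image.astype(np.float32) / 255.0
    return {
        "asymmetry": round(float(pixels.std()), 3),  # invented feature
        "border_irregularity": 0.41,                 # placeholder values
        "malignancy_probability": 0.18,
    }

# The software reports characteristics; the dermatologist makes the
# benign/malignant call, so the app drives, rather than renders, the
# clinical judgment.
report = characterize(photo)
print(report)
```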
This manufacturer would face product liability concerns similar to those presented by the ICU SaMD. However, because the manufacturer of the skin lesion MMA is likely selling the product directly to consumers, it also faces potential liability under a theory of breach of warranty. Given the risks, when weighing potential liability concerns, manufacturers and sellers should consider not only the purpose for which the SaMD is being used, but also who is using the SaMD and to whom the SaMD is being marketed.
It is also important to consider how the MMA manufacturer’s first proposed modification affects the feasibility of an alternative design. At what point is the proposed modification (i.e., the alternative design) feasible? In other words, can the proposed modification be deemed a feasible alternative design if it has not yet received the FDA approval required to market it to healthcare providers or consumers? Moreover, assuming that the proposed modification materially improves the MMA’s ability to differentiate between benign and malignant skin lesions, would the manufacturer face liability if it leaves the already-approved version of the MMA on the market once the improved iteration has been approved by the FDA? In that case, the feasibility of a safer alternative design may favor a finding of defect. However, given that it is the algorithm, and not the physical aspects of the device, that is being modified, the MMA manufacturer may not have to keep older versions of the product in use. In the case of the MMA, it is entirely possible that users will be able to update the algorithm automatically through a routine software update once the modification becomes available. And as implantable medical devices increasingly incorporate AI, surgical intervention may not be required to update the product as improvements in the design (i.e., the algorithm) are made. Therefore, as SaMD becomes more common in the medical device arena, manufacturers may face fewer limitations on their ability to provide users with the most up-to-date design, thereby reducing the product liability exposure that comes from having older, less safe products in use on the market.
As illustrated above, FDA-approved SaMD is becoming more commonplace in the healthcare setting. The FDA has recognized the benefit of these technologies and established a streamlined review process for SaMD from precertified firms under the Digital Health Software Precertification Program. However, as SaMD becomes more widely available, courts will grapple with how to deal with lawsuits arising out of the use of these devices in a healthcare setting.
Given the infancy of the technology, courts have provided little guidance on how SaMD and other AI-based “products” should be analyzed under the existing legal framework. Indeed, the Third Circuit Court of Appeals has held that an algorithm is not a product subject to strict product liability claims under New Jersey’s Product Liability Act. However, as SaMD and other AI-based devices become more common, courts may expand the scope of the term “product” to encompass these devices. If so, lawsuits involving SaMD will likely fall under the existing product liability framework, with SaMD-related claims brought under theories of strict liability, negligence, and breach of warranty.

Joe Fornadel is an associate with Nelson Mullins Riley & Scarborough LLP. Mr. Fornadel practices primarily in Columbia, South Carolina. He specializes in product liability litigation, in particular, cases involving pharmaceuticals and medical devices. He also counsels clients in the risk-prevention and regulatory compliance space. Mr. Fornadel is a member of the DRI Drug and Medical Device Committee and the Complex Medicine/Experts Special Litigation Group. Wes Moran focuses his practice in the areas of product liability and complex commercial litigation. Since joining Nelson Mullins in 2016, he has been engaged in the defense of pharmaceutical and medical device companies in both state and federal court.
©DRI.