Hofstra Law Review
Abstract
The American healthcare system faces two trends that not only threaten the quality of health care, but which, combined, exacerbate another threat: the threat to patient privacy. The first of the two trends is the rapid expansion of private-equity acquisitions in the healthcare sector in the last decade, continuing the trend of medicine's corporatization. Decisions increasingly are being made by investors motivated by short-term profit, rather than by doctors motivated by clinical care, compromising healthcare delivery. The second trend is the simultaneous incursion in the healthcare industry of A.I.-supported technology, upon which private equity firms have incentives to rely. Under private equity, which prioritizes financial returns, A.I. use intensifies harm to the quality of care, with little industry accountability. The same forces that threaten the quality of care combine to exacerbate the threat to patient privacy. Already, healthcare data privacy lags under the current regulatory system that relies on the Health Insurance Portability and Accountability Act's ("HIPAA") quaint custodians-of-record framework. The private-equity-A.I. era amplifies this vulnerability, and while current law enforcement efforts have focused on private equity's anticompetitive effects, attention has not yet turned to the privacy harms from the combination of private equity and A.I.
in health care. Part I describes how the surge in both the private equity model and A.I.-supported technology threatens the quality of health care. Part II describes the other threat: the combined effect of the dual surge that exacerbates the threat to patient privacy. This Part examines how A.I.'s deployment under the private-equity model of healthcare increases patient privacy vulnerabilities. Part III examines law enforcement's response to the current lack of privacy protections for healthcare data, which is already lagging under HIPAA's outdated architecture. Efforts so far have focused on challenging private equity's acquisitions of healthcare systems as violations of antitrust laws, with attention beginning to turn to transparency and harm to patient quality of care. However, enforcement efforts have not yet turned to the harms to patient privacy from the private-equity model of healthcare. Part IV of this Article proposes riding the brewing momentum to regulate private equity to include protections for health data privacy when A.I. systems are used. Rather than waiting for an overhaul of sectoral healthcare data regulation or the creation of a comprehensive privacy statute, some protections for patient privacy can be carved out now by requiring private equity to provide pre-merger reporting of its data governance plan for acquisitions of healthcare systems in which A.I. systems are to be deployed, expanding upon the already-existing reporting requirement under the Hart-Scott-Rodino Act to the Federal Trade Commission ("FTC"). A pre-merger obligation serves two purposes: (1) it prevents the private equity firm from escaping liability by transferring it to the healthcare entity it creates; and (2) it removes the patient's burden under traditional privacy frameworks that require the consumer's notice and consent.
The pre-merger report should be followed up with compliance reviews and investigations by the FTC to ensure that even after a merger has been effected, the private equity firm will continue to be held to the terms of the data governance report. This Article proposes that the private-equity firm should report its data governance plans to maintain confidentiality of data, transparency and accountability, and quality management. The data-governance reporting requirement is not necessarily exclusive to private equity or to A.I., so carving out an area of regulatory protection in a high-profile technology category within the healthcare industry lays the groundwork for expanding data privacy protections to eventually replace the current outdated scheme and can translate into part of a comprehensive privacy statute. Requiring enhanced transparency in private-equity use of A.I. provides a starting point to mitigate not only the anticompetitive harms law enforcement already has on its radar, but also the deleterious effects of imperiled patient privacy.
Recommended Citation
Park, Eunice (2025) "Private Equity and A.I. in Healthcare: A Perilous Pairing for Patient Privacy," Hofstra Law Review: Vol. 53: Iss. 2, Article 3.
Available at: https://scholarlycommons.law.hofstra.edu/hlr/vol53/iss2/3
