Texas AG arrives at first-of-its-kind settlement regarding AI use in healthcare
Recently, the Texas Attorney General (Texas AG) reached a settlement with Pieces Technologies, Inc. (Pieces), a healthcare technology company marketing generative AI products used in at least four Texas hospitals to create clinical summaries and patient documentation records. Notably, though the settlement did not impose any financial penalties, it sent a clear message to AI companies: AI is not governed merely by AI-specific legislation.
The Texas AG took aim at the company’s use of internally developed accuracy metrics and its reliance on those metrics in advertising claims, stating that the metrics were “likely inaccurate and may have deceived hospitals about the accuracy and safety of the Company’s products.” On these grounds, the AG’s office brought suit under the Texas Deceptive Trade Practices-Consumer Protection Act, clarifying that AI products fall within the scope of existing consumer protection statutes. The Texas AG also highlighted the lack of transparency in this case and emphasized the need for transparency from AI companies targeting healthcare, stating:
AI companies offering products used in high-risk settings owe it to the public and to their clients to be transparent about their risks, limitations, and appropriate use. Anything short of that is irresponsible and unnecessarily puts Texans’ safety at risk. Hospitals and other healthcare entities must consider whether AI products are appropriate and train their employees accordingly.
While this may be the first clear instance of an attorney general’s office using consumer protection laws to push back against an AI company and reaching a highly publicized settlement, it certainly won’t be the last. Rather, this case embodies an ongoing shift in the outlook of state attorneys general and the growing focus of enforcement bodies at both the state and federal levels on regulating the largely uncharted realm of artificial intelligence with the tools already available.
Alongside Texas, state attorneys general in Massachusetts, Colorado, New Hampshire, Virginia, and California have all demonstrated a focus on ensuring that privacy protections are upheld and that AI companies are not exempt from those privacy-protective laws as written. At the federal level, the Federal Trade Commission (FTC) has likewise indicated that AI companies’ practices and statements surrounding their products fall within the Commission’s unfair and deceptive practices enforcement authority, warning companies to ensure that their statements to the public are truthful and accurate.
And misrepresentation by AI companies, as illustrated by this settlement and the FTC’s words of caution, is not the only misstep that enforcement bodies are watching closely. State attorneys general have also focused on fraudulent and abusive uses of AI technology, including the misuse of personally identifiable information, the creation of deepfakes, and bias and discrimination in automated decision-making processes.
With AI technology becoming an ever-growing facet of everyday Americans’ lives, AI companies face immense pressure to comply with existing and emerging laws and standards. And compliance, as Pieces’ settlement decree makes clear, is not optional.
If you have questions about your company’s compliance with cyber regulations, concerns about vulnerability to attacks or other breaches, or want to learn more about proactive cybersecurity defense, contact a member of McDonald Hopkins’ national data privacy and cybersecurity team.