
AI Whistleblowers: Reporting Fraud in New Technologies

Artificial Intelligence (“AI”) pervades current discussions in government, business, and academia.  While the potential for this rapidly developing technology is both exciting and unnerving, some familiar enforcement issues will arise as the new technology is deployed.  Whistleblowers will have an important role to play in ensuring that AI companies, and individuals and entities using AI tools, comply with requirements intended to protect the public from a variety of potential harms.

AI Whistleblowers

Whistleblowers will have an important role in ensuring that AI does not become a new tool for familiar types of misconduct. However, AI whistleblowers will likely face some of the same obstacles other whistleblowers face, including retaliation and efforts to inhibit reporting. Recently, former employees of OpenAI called on the SEC to investigate severance agreements that allegedly inhibited whistleblowers from reporting misconduct to the government by, among other things, requiring employees to relinquish any reward and obtain consent before reporting confidential information. The SEC has long taken a strong stance against language in non-disclosure agreements that inhibits whistleblowers from reporting to the government. (See the SEC Office of the Whistleblower Annual Report to Congress for Fiscal Year 2023.) The CFTC is also cracking down on such efforts, announcing earlier this year that it was investigating whether banks were using NDA language that would discourage whistleblowers. Congress has likewise discouraged such conduct by requiring companies to include language advising employees of their right to report misconduct to the government, subject to certain confidentiality requirements. See Defend Trade Secrets Act of 2016, Pub. L. No. 114-153, § 7, 130 Stat. 376, 384 (May 11, 2016).

Are AI Whistleblowers Protected?

AI may be new, but many of the ways it can be used to harm the public are not. The False Claims Act incentivizes individuals to report fraud on the federal Treasury by providing financial rewards and protection from retaliation. The SEC and CFTC whistleblower programs incentivize individuals to report violations of federal securities and commodities laws by offering whistleblowers substantial financial rewards, protection from job retaliation, and confidentiality.

AI and Taxpayer Funds

As Deputy Attorney General Lisa Monaco stated in public remarks earlier this year, “Fraud using AI is still fraud.” As companies in a wide range of sectors that depend on government funding begin to incorporate AI, those tools do not provide a free pass to violate material requirements for obtaining government funds. That reality underscores the importance of AI whistleblowers.

AI and Healthcare

Healthcare entities have been early adopters of AI tools, which hold promise given the large data sets involved in the provision of healthcare services. For example, AI could be useful in predicting which patients need certain services and when. But just like human actors, AI can push medically unnecessary procedures, and potentially on a larger scale. There have already been enforcement actions involving such misuse of technology. For example, in 2020, DOJ settled a case against an electronic health records (EHR) vendor, Practice Fusion, which allegedly received kickbacks from a pharmaceutical company in exchange for modifying its EHR software to increase the number of alerts physicians received, leading to increased opioid prescriptions that were not based on medical necessity. The government has also intervened in several qui tam complaints alleging fraud based on the use of algorithms to submit inaccurate diagnosis codes to the Medicare Advantage program. As AI becomes commonplace, these types of fraud are likely to become more common as well.

AI and Cybersecurity

The federal government has made cybersecurity a top priority; among other steps, the White House issued an Executive Order directing increased cybersecurity requirements for entities contracting with the federal government. E.O. 14028, 86 Fed. Reg. 26633 (May 17, 2021). AI has the potential to help companies comply with cybersecurity requirements through, for example, detection and prevention of risks and vulnerabilities. But security is only as good as the tools being used, and failure to provide adequate human oversight of AI may undermine compliance in this increasingly critical area.

AI and Procurement

AI opens new possibilities for efficiently managing manufacturing, testing, and supply chains. But companies remain responsible for ensuring that the tools they employ to perform these functions are doing the job accurately.

AI and Other Contractual Obligations

The government enters contracts and funds grants that often contain requirements designed to further social goals, such as helping small businesses or underserved communities. Allowing AI to override those requirements or apply them inappropriately could also give rise to problems. AI systems, for example, can incorporate biases that result in decision-making that runs afoul of material contract requirements.

AI and the Investing Public

Like any new technology, AI can be the subject of a variety of efforts to mislead investors and commit other financial frauds. Companies that tout the benefits of nonexistent AI products and procedures, and companies that misuse AI to violate trading rules, are engaged in the same types of misconduct long familiar to law enforcement. Earlier this year, the SEC issued an investor alert calling attention to potentially misleading claims about the use of AI in an entity’s investment platform or in a company’s development of AI technology. The CFTC has also issued an alert about scams involving AI. The SEC recently settled charges against two investment advisers for making false and misleading statements about their use of AI and violating the Marketing Rule, which prohibits investment advisers from disseminating advertisements containing untrue statements of material fact.

AI and Criminal Activity

AI may also become a tool for criminals engaging in a variety of scams, or a means of facilitating or covering up crimes, including through the use of fake documentation and identities. The Department of the Treasury’s Financial Crimes Enforcement Network (FinCEN) administers an anti-money laundering whistleblower program. In addition, the Department of Justice is launching a pilot program to reward individuals who report misconduct not covered by other whistleblower programs.

Phillips & Cohen Supports AI Whistleblowers

If you know of fraud, waste, or abuse of government funds, or fraud on the investing public, contact Phillips & Cohen for a free, confidential review of your potential case by experienced whistleblower lawyers.
