A recent announcement by multiple federal agencies has highlighted their intention to enforce their respective regulations against developers, deployers and users of AI systems. Federal Trade Commission (FTC) Chair Lina Khan and officials from the US Department of Justice (DOJ), the Consumer Financial Protection Bureau (CFPB) and the US Equal Employment Opportunity Commission (EEOC) each voiced concerns about automated systems, citing civil rights, fair competition, consumer protection and equal opportunity. Their pointed language, joint public commitment and prior enforcement actions in this area make clear that the statement is more than posturing.
All companies, big and small, are collecting a tsunami of data. The DOJ has now challenged corporate America to harness and analyze that data to improve corporate compliance programs: to go beyond the risk profile of what has happened and better understand the risk profile of what is happening. But where to begin? Artificial intelligence, already used to assist in the review and production of documents and other materials in response to government subpoenas and in corporate litigation, is invaluable for proactively reviewing data to identify and address compliance risks.
- DOJ expects compliance programs to be well resourced and to continually evolve.
- DOJ wants companies to assess whether their compliance program is presently working or whether it is time to pivot.
- DOJ uses data in its own investigations, and it expects the private sector to rise to the occasion and analyze its own data to identify and address compliance risks.
- The data is there—mountains of it—and the key is to find an efficient way to analyze that data to improve the compliance program.
- Artificial intelligence is an important tool for meeting the challenge of big data, enabling compliance risks to be identified and remediated effectively, quickly and regularly, alongside periodic program review.