
Cutting-edge technologies inevitably raise cutting-edge legal issues. The rapid growth and deployment of generative AI and related technologies are quickly changing the way 21st century companies do business and the products and services they offer. Just as quickly, however, generative AI and related emerging technologies have raised a wide range of novel legal issues related to data protection, privacy, cybersecurity, and intellectual property, as well as ethical and reputational issues.

For many years, Willkie attorneys have advised clients on the benefits and risks of using big data, machine learning, and other AI technologies to help their businesses grow and diversify. Today, Willkie’s AI & Emerging Technologies team provides sophisticated, thoughtful and innovative advice to help our clients navigate the complex, fast-moving legal and regulatory challenges presented by these cutting-edge technologies, including the new issues presented by the proliferation of generative AI.

A Global, Interdisciplinary Practice
Willkie’s AI & Emerging Technologies team includes lawyers from our corporate and financial services, litigation, privacy, cybersecurity and data strategy, intellectual property, investigations and enforcement, crisis management, antitrust and competition, insurance transactional and regulatory, and communications and media practices, across our U.S., UK and EU offices. Our lawyers have deep technological experience as well as experience navigating the many legal and regulatory regimes in which our clients operate. We work seamlessly together to help our clients make global business decisions.

Broad Scope of Experience 

AI Use Policies
We have extensive experience advising on AI use policies for clients across a range of industries. For example, we are assisting a number of clients in drafting internal, employee-facing generative AI use policies specific to their business needs. These policies are intended to clearly identify legal, regulatory and reputational risks and to prohibit certain uses of chatbots and similar AI technologies, while also encouraging employees to identify and seek approval to test potentially beneficial and efficiency-enhancing use cases.

Antitrust and Competition
The development of AI presents antitrust issues that companies must be mindful of to the extent they integrate AI into their businesses. If companies plan to use these tools, they should consider the ways in which these technologies collect and integrate information from competitors into their knowledge base.

Notably, both the FTC and DOJ have indicated the potential for anticompetitive conduct arising from the use of AI in connection with pricing algorithms and the high barriers to entry for new AI providers. The FTC in particular has suggested that it will focus on regulating providers of AI tools and on competitive abuses in the technology industry. Meanwhile, the EU’s AI Act is in the legislative process and is expected to support the European Commission in identifying and investigating AI-related competition law infringements. Under the proposal, AI ‘supervising authorities’ will be established in each Member State and will inform the European Commission of any competition law issues they encounter. While there is no AI-specific regulation on the horizon in the UK, a recent market study of algorithms by the Competition & Markets Authority suggests that regulators are acutely aware of potential competitive threats posed by AI, in particular algorithmic collusion, digital cartels and self-preferencing by dominant companies.

Willkie’s antitrust professionals have deep experience in and knowledge of the relevant industries and are well-positioned to advise clients about developments in AI and other emerging technologies.

Insurance 
AI and big data analytics are transforming important insurance industry processes, including underwriting, customer service, claims adjusting, rating and renewal activity, marketing and fraud detection. AI presents enormous opportunities for the insurance industry and its customers. State insurance regulators, state legislators, the National Association of Insurance Commissioners (NAIC) and the National Conference of Insurance Legislators are studying these emerging technologies and industry practices to develop AI and big data-related rules and guidelines. The NAIC, through its AI Principles, has expressed its expectation that such practices remain ethical, transparent, not unfairly discriminatory and protective of consumer privacy. Meanwhile, certain states have begun issuing rules and guidance governing AI and the use of external data, including New York Circular Letter No. 1 (2019), Colorado’s Statute Concerning Protecting Consumers from Unfair Discrimination in Insurance and Connecticut’s Notice on the Usage of Big Data and Avoidance of Discriminatory Practices.

Such rules and guidance, along with generally applicable insurance laws, create a dense regulatory structure around the deployment of new AI applications. The Willkie insurance team has helped clients navigate this complex regulatory environment in relation to the full range of insurer functions leveraging AI. We have also worked with clients and data analytics consultants to develop testing protocols to ensure that AI and big data are used in ways that are not unfairly discriminatory. The Willkie insurance team stands ready to advise clients on how the opportunities presented by the latest AI use cases and other emerging technologies can be realized within a challenging and evolving regulatory environment.

Intellectual Property
The development of generative AI presents both obstacles and opportunities for IP rights holders and AI creators. Today, there are almost daily developments in the courts, the U.S. Patent & Trademark Office, and the U.S. Copyright Office concerning the role of AI as an author of creative works or an inventor, including AI-assisted books, images, music, and scientific inventions. There are also disputes over the training of AI models on copyrighted materials and the output of portions of those materials without a license.

Users of AI must consider how the AI is trained and whether they may be exposed to legal risk, ranging from infringement claims to viral open source licensing issues. Private equity and other investors need to monitor the changing landscape to make informed business decisions. Congress is considering legislation to protect IP rights holders and to address impacts on privacy, job disruption, copyrights, licensing, and Section 230 of the Communications Act.

Willkie’s IP team, with its long history of cutting-edge litigation and transactional work across technologies, is well suited to help clients understand these IP issues and their intersection with other areas of the law.

Privacy, Cybersecurity and Data Strategy
Generative AI technologies present significant challenges related to data privacy and security. These applications often ingest and are trained on vast amounts of personal information, but, in some cases, their developers have failed to take into account either privacy-by-design principles or existing privacy laws. Moreover, many uses of these technologies may present high risks to individuals’ privacy rights. Before companies use these technologies, they need to understand how these tools have been developed and what risks they may present to the company’s privacy compliance. Willkie’s sophisticated team helps clients navigate the legal and regulatory issues raised by AI, including obtaining informed consent, providing the ability to opt out, limiting data collection, specifically describing the purposes of the processing, and offering rights of deletion.

Generative AI technologies may also present heightened cyber threats because of their ability to generate believable messaging to advance phishing schemes and deepfakes, and to generate and disguise malicious code. Willkie’s team works with clients to anticipate, identify and respond to cyber threats and incidents arising out of these evolving tactics.

 