On financial safeguards for the news industry and local journalism (Analyses, August 2024)
Legislators in California are considering two bills that aim to safeguard the financial viability of the news industry and support local journalism by tapping revenues from online platforms. The Institute of Technology, Law and Policy analyzed each bill's legal risks, potential mitigations, and the likelihood of each risk.
On the California Consumer Privacy Act and Personal Information Used to Train Generative AI (July 2024)
The California Consumer Privacy Act should grant Californians the right to know whether their personal information has been used to train generative AI systems. By taking decisive and timely action, the California Privacy Protection Agency can uphold the integrity of the CCPA and fulfill its mission of empowering consumers to exercise their data privacy rights, no matter the technology involved.
Towards a framework of institutional trust for AI regulatory enforcement (Policy Brief, February 2024)
As a growing number of government agencies use algorithms in regulatory enforcement, the incorporation of AI-powered tools raises difficult questions. This brief explores those questions and offers considerations stakeholders should bear in mind.
On Provisions of the Digital Millennium Copyright Act (Amicus Curiae, December 2023)
"Fair use is not just a privilege granted by the courts and blessed by Congress; it is a constitutional requirement. [T]he Court should invalidate the Digital Millennium Copyright Act’s anti-circumvention and anti-trafficking provisions as unconstitutional under the First Amendment." ITLP is among those in this amicus curiae brief for Green v. DOJ.
On Artificial Intelligence and Copyright (Comments, August 2023)
"Revisions to the Copyright Act should be made to clarify that AI-generated works are unprotectable by copyright, a subject that, clearly, was not contemplated by Congress when it passed the Copyright Act of 1976." Comments in response to the Copyright Office’s Notice of Inquiry (“NOI”) in a study on artificial intelligence (“AI”) and copyright.
On the FDA's Regulatory Framework for AI/ML-Based Software as a Medical Device
As artificial intelligence and machine learning technologies improve, their use in healthcare will continue to expand. The safe adoption and integration of these technologies into the practice of medicine will depend largely on the regulatory structures in place to nurture responsible innovation.
Liability and Preemption in the New Regulatory Framework of Data-Driven Healthcare
The regulation of software as a medical device (SaMD), and in particular AI-based SaMD, requires a shift from the traditional paradigm of medical device regulation to account for the continuous updates an AI system may receive over its lifetime. The FDA's rapidly evolving regulatory framework, which aims to keep pace with the technology, has consequences for the liability of AI/ML device manufacturers.
On Privacy and Civil Liberties Impacts Related to Efforts to Counter Domestic Terrorism and Modern Information and Communications Technology
There is a pressing need to carefully consider the adverse impacts of technology-based initiatives aimed at countering terrorism, particularly those involving machine learning and facial recognition, because the biases within these systems tend to have an outsized effect on marginalized and minority groups.