A partnership between UCLA School of Law and UCLA Samueli School of Engineering, the Institute for Technology, Law & Policy examines the benefits and risks presented by technologies such as artificial intelligence and machine learning, robotics, cybersecurity and digital media and communications.
These and other rapidly evolving technologies raise questions about ethics and public policy, as well as about the applicability and utility of the current laws and regulations that govern their use.
Upcoming events (see below for event descriptions and registration information)
- October 20, 2021: Ethics in Tech with Dunstan Allison-Hope and Michael Karanicolas
- October 22, 2021 - Workshop: Calibrating Data Surveillance
- November 1-5, 2021 - Conference: Power and Accountability in Tech
- November 8, 2021 - Journal of Free Speech Law Series
- November 17, 2021 - Panel: Transparency and Corporate Social Responsibility
- November 19, 2021 - Journal of Free Speech Law Series
- December 1, 2021 - Journal of Free Speech Law Series
Past events (see below for event descriptions and videos)
- September 18, 2020: Is Big Tech Too Big?
- October 16, 2020: Addressing the Challenges of Content Moderation
- January 28, 2021: The Future of Internet Speech: How Online Content Shapes Offline Events
- February 11, 2021: Does the Government Have the Right to Control Content Moderation Decisions?
- June 3, 2021: A Space for Everyone? Debating Online Platforms and Common Carriage Rules
- June 17, 2021: Censorship and State Repression of Online Speech
- July 1, 2021: When American Companies Moderate Global Content
- July 15, 2021: Life Interrupted: the Impacts of Internet Shutdowns
- July 29, 2021: Misinformation and Synthetic Media
- August 12, 2021: Facial Recognition and Entrenching Racial Discrimination
- August 25, 2021: Ethics in Tech with Cory Doctorow and Sarah Roberts
- September 8, 2021 - Panel: iSpy - War Crimes and Digital Documentation
- September 13, 2021 - Panel: AI Inventors and Patent Law
- September 22, 2021: Ethics in Tech with Eva Galperin and Alan Rozenshtein
- October 6, 2021: Ethics in Tech with Dr. Achuta Kadambi and Dr. Safiya Noble
- October 12, 14, 15, 2021 - Workshop: The Future of Open Source
ITLP produces podcasts featuring a series of conversations with thought leaders on important topics at the intersection of technology, law, and policy. Watch or listen to the podcasts.
Who We Are
- Executive Director
- Faculty Director
Alexandra Mata, Program Coordinator
Leeza Arbatman is a student at UCLA Law. At ITLP she is conducting research on the scope of First Amendment protection for anonymous online expression. In law school, she has taken part in UCLA's First Amendment "Pop Up" Clinic and California Environmental Legislation and Policy Clinic, and served as a judicial extern for the Honorable Susan Illston of the U.S. District Court for the Northern District of California. Before coming to UCLA, she worked for a criminal justice organization and interned at NPR member station KQED. She earned her undergraduate degree in sociology at UC Santa Cruz.
Ally Boutelle is a student at the USC Gould School of Law. At ITLP, she is conducting research on emerging international copyright law frameworks and the associated implications for U.S. companies. She holds a B.A. in history and journalism from the University of Wisconsin, Madison, where she was also an editor at The Badger Herald, the nation’s largest independent student newspaper. Her previous experience includes five years at Pinterest, where she held a variety of positions in intellectual property, including Head of Intellectual Property Operations from 2018 to 2020.
Leeza Arbatman & John Villasenor, "Anonymous Expression and 'Unmasking' in Civil and Criminal Proceedings," forthcoming in the Minnesota Journal of Law, Science, and Technology (2022).
Mark Verstraete & Tal Zarsky, "Optimizing Breach Notification," forthcoming in the University of Illinois Law Review (2021).
Michael Karanicolas, "A FOIA For Facebook," forthcoming in 66 Saint Louis University Law Journal.
Mark Verstraete, "Inseparable Uses," 99 North Carolina Law Review 427 (2021).
Michael Karanicolas, "Too Long; Didn't Read: Finding Meaning in Platforms’ Terms of Service Agreements," 51 University of Toledo Law Review 1 (2021).
Michael Karanicolas, "Even in a Pandemic, Sunlight Is the Best Disinfectant: COVID-19 and Global Freedom of Expression," 22 Oregon Review of International Law 101 (2021).
Virginia Foggo and John Villasenor, “Algorithms, Housing Discrimination, and the New Disparate Impact Rule,” 22 Columbia Science and Technology Law Review 1 (2021).
Virginia Foggo, John Villasenor, and Pratyush Garg, “Algorithms and Fairness,” 17 Ohio State Technology Law Journal 123 (2020).
John Villasenor and Virginia Foggo, "Artificial Intelligence, Due Process, and Criminal Sentencing," 2020 Michigan State Law Review 295 (2020).
John Villasenor, "Soft Law as a Complement to Regulation," The Brookings Institution, July 31, 2020
Rebecca Wexler and John Villasenor, "How well-intentioned privacy laws can contribute to wrongful convictions," The Brookings Institution, February 11, 2020
John Villasenor, "Artificial Intelligence, Geopolitics, and Information Integrity," The Brookings Institution and ISPI, January 2020
John Villasenor, "Products liability law as a way to address AI harms," The Brookings Institution, October 2019
Short articles, op-eds, and blogs
Ally Boutelle and John Villasenor, “The European Copyright Directive: Potential impacts on free expression and privacy,” The Brookings Institution, February 2, 2021
John Villasenor, “Zoom is Now Critical Infrastructure. That’s a Concern,” The Brookings Institution, August 27, 2020
John Villasenor, "Why creating an internet 'fairness doctrine' would backfire," The Brookings Institution, June 24, 2020
John Villasenor, "Why Colleges Should Pool Teaching Resources," The Chronicle of Higher Education, June 4, 2020
John Villasenor, "Online college classes are here to stay. What does that mean for higher education?," The Brookings Institution, June 1, 2020
John Villasenor and Virginia Foggo, "Why a proposed HUD rule could worsen algorithm-driven housing discrimination," The Brookings Institution, April 16, 2020
John Villasenor, "Six Steps to Prepare for an Online Fall Semester," The Chronicle of Higher Education, April 8, 2020
John Villasenor, "Why I Won't Let My Classes Be Recorded," The Chronicle of Higher Education, January 10, 2020
John Villasenor, "Preparing Today's Students for an AI Future," The Chronicle of Higher Education, October 13, 2019
John Villasenor, "Deepfakes, social media, and the 2020 election," The Brookings Institution, June 3, 2019
John Villasenor and Virginia Foggo, "Algorithms and sentencing: What does due process require?," The Brookings Institution, March 21, 2019
John Villasenor, "Artificial intelligence, deepfakes, and the uncertain future of truth," The Brookings Institution, February 14, 2019
John Villasenor, "Artificial intelligence and bias: Four key challenges," The Brookings Institution, January 3, 2019
In addition to the individual videos listed below, you can also view the ITLP YouTube playlist.
Is Big Tech Too Big?
September 18, 2020
Transcript - Please note that the accuracy of the transcript is not guaranteed.
Companies such as Facebook, Amazon, and Google have been extraordinarily successful in building a large base of users and in acquiring market share. But are they too big?
To explore this question, the UCLA Institute for Technology, Law, and Policy (ITLP) is hosting an online panel discussion with Ashkhen Kazaryan of TechFreedom and Alex Petros of Public Knowledge, moderated by ITLP director John Villasenor. The event will explore issues such as the proper role of government in relation to large technology companies and the extent to which existing regulatory frameworks are—or are not—sufficient in light of the current dynamics of the technology sector.
The event will last one hour, and will include approximately 40 minutes of moderated discussion followed by 20 minutes of audience Q&A.
Ashkhen Kazaryan is the Director of Civil Liberties at TechFreedom. She manages and develops projects on free speech, content moderation, surveillance reform, and the intersection of constitutional rights and technology. Ashkhen is regularly featured as an expert commentator in news outlets across television, radio, podcasts, and print and digital publications, including CNBC, BBC, Fox DC, Politico, and Axios. She is a board member of the Fourth Amendment Advisory Committee and an expert at the Federalist Society's Emerging Technology Working Group. Ashkhen received her Specialist in Law degree summa cum laude from Lomonosov MSU and her Master of Laws degree from Yale Law School, and is completing her PhD in Law at the Law School of Lomonosov Moscow State University.
Alex Petros currently works as a Policy Counsel at Public Knowledge, where he focuses on antitrust and broader platform accountability issues. Prior to Public Knowledge, he worked for Senators Amy Klobuchar, Richard Blumenthal, Joe Donnelly, and the House Committee on Oversight and Reform. He received his J.D., cum laude, from Georgetown University Law Center and his B.A. from Yale College in Economics and Political Science with distinction.
Addressing the Challenges of Content Moderation
October 16, 2020
Transcript - Please note that the accuracy of the transcript is not guaranteed.
Under the simplest framing, the content moderation challenges facing companies such as Facebook, Twitter, and YouTube boil down to drawing a line between acceptable and unacceptable content. But that framing masks a more complex set of questions, including 1) what the goals of content moderation should be, 2) how and by whom content moderation decisions should be made, and 3) how companies that operate globally should navigate the varying cultural and legal frameworks relating to the limits of acceptable online content in different jurisdictions.
To explore these issues, the UCLA Institute for Technology, Law, and Policy (ITLP) is hosting an online panel discussion with Kate Klonick of St. John's University and John Samples of the Cato Institute and Facebook's content moderation Oversight Board, moderated by ITLP director John Villasenor.
Kate Klonick is a professor at the St. John's University School of Law, where her research centers on law and technology, using cognitive and social psychology as a framework. Most recently she has been studying and writing about private Internet platforms and how they govern online speech. Professor Klonick has published in the Harvard Law Review, the Georgetown Law Journal, the Southern California Law Review, and the Yale Law Journal, as well as in the New York Times, the New Yorker, The Atlantic, the Guardian, Lawfare, Slate, Vox, and numerous other publications. She is the author of a forthcoming New Yorker article on content moderation. Klonick holds an A.B. from Brown University, a J.D. from Georgetown, and a Ph.D. in Law from Yale Law School.
John Samples is a vice president at the Cato Institute. He founded and directs Cato's Center for Representative Government, which studies the First Amendment, government institutional failure, and public opinion. Dr. Samples also serves on the Facebook Oversight Board, which hears appeals from content moderation decisions by Facebook and Instagram. Prior to joining Cato, Samples served eight years as director of Georgetown University Press. He received his PhD in political science from Rutgers University. Samples' views expressed during this event are his own and do not represent those of the Oversight Board or of Facebook.
The Future of Internet Speech: How Online Content Shapes Offline Events
January 28, 2021
- Rebecca MacKinnon, Ranking Digital Rights
- Mohammad Tajsar, ACLU of Southern California
Moderator: Alex Alben, UCLA School of Law
Recent events have raised profoundly important questions regarding the role of online content in shaping offline events. This panel will consider questions including: What responsibilities do social media companies have to identify and filter out disinformation? Are the current legal frameworks, such as Section 230, working? What role should infrastructure companies (such as Amazon, through AWS) play in relation to content moderation?
Panelist and Moderator Bios
Rebecca MacKinnon is the founding Director of Ranking Digital Rights. In the academic year 2019-2020, she was also a University of California Freedom of Speech and Civic Engagement Fellow and a UC San Diego Pacific Leadership Fellow. Author of Consent of the Networked: The Worldwide Struggle for Internet Freedom, MacKinnon co-founded the citizen media network Global Voices. She has held fellowships at Harvard’s Shorenstein Center on the Press and Public Policy, the Berkman Center for Internet and Society, the Open Society Foundation, and Princeton’s Center for Information Technology Policy. She received her AB magna cum laude from Harvard University and was a Fulbright scholar in Taiwan.
Mohammad Tajsar is a Staff Attorney at the ACLU of Southern California, which he joined in 2017. His work there has spanned a wide range of areas, including digital rights and government surveillance. Prior to joining the ACLU, he worked at a law firm where he focused on civil rights and workers’ rights, and prior to that he was a law clerk in United States District Court for the District of Nevada and a legal fellow at the ACLU of Southern California. He has a law degree from UC Berkeley and an undergraduate degree from UCLA.
Alex Alben teaches Internet Law, Media & Society at UCLA School of Law. The Los Angeles Times recently published his opinion piece on “The President’s Bizarre Fixation on Dismantling an Internet Rule.” He has also been a senior executive for several pioneering Internet companies, and served as Washington State's first Chief Privacy Officer. Alben is engaged with research and policy development relating to Artificial Intelligence technologies, with a focus on AI Ethics. Alben received his A.B. with distinction from Stanford University, and his J.D. from Stanford Law School.
Does the Government Have the Right to Control Content Moderation Decisions?
February 11, 2021
- Eric Goldman, Santa Clara University School of Law
- Eugene Volokh, UCLA School of Law
Moderator: Leeza Arbatman, UCLA School of Law
As private entities, social media platforms are not bound by the First Amendment, and are free to permit—or block—content and users as they see fit; and 47 U.S.C. § 230 preempts any state statutes that would impose greater limits on such companies. That, at least, is the traditional view.
But some state legislatures are considering statutes that would ban viewpoint-based blocking by platforms; and some scholars are arguing that those laws might prevail, notwithstanding § 230. What are these theories? And what are their strengths and weaknesses? The event will last one hour, and will include approximately 40 minutes of moderated discussion followed by 20 minutes of audience Q&A.
Panelist and Moderator Bios
Eric Goldman is a Professor of Law at Santa Clara University School of Law. He also co-directs the High Tech Law Institute and supervises the Privacy Law Certificate. His research and teaching focus on Internet, IP, and advertising law topics, and he blogs on these topics at the Technology & Marketing Law Blog [http://blog.ericgoldman.org]. Before joining the Santa Clara Law faculty, he was an assistant professor at Marquette University Law School in Milwaukee, Wisconsin. Before that, he practiced law for eight years in Silicon Valley as General Counsel of Epinions.com and as an Internet and technology transactions attorney at Cooley Godward LLP.
Eugene Volokh teaches First Amendment law and a First Amendment amicus brief clinic at UCLA School of Law, where he has also often taught copyright law, criminal law, tort law, and a seminar on firearms regulation policy. Before coming to UCLA, he clerked for Justice Sandra Day O'Connor on the U.S. Supreme Court. Volokh is the author of over 90 law review articles, a member of the American Law Institute, and the founder and coauthor of The Volokh Conspiracy, a leading legal blog. His law review articles have been cited by opinions in eight Supreme Court cases and in several hundred court opinions.
Moderator: Leeza Arbatman is a student at UCLA Law. Through the UCLA Institute for Technology, Law, and Policy, she is conducting research on the scope of First Amendment protection for anonymous online expression. She has taken part in UCLA's First Amendment "Pop Up" Clinic and California Environmental Legislation and Policy Clinic, and served as a judicial extern for the Honorable Susan Illston of the U.S. District Court for the Northern District of California. Before coming to UCLA, she worked for a criminal justice organization and interned at NPR member station KQED.
A Space for Everyone? Debating Online Platforms and Common Carriage Rules
June 3, 2021
Online platforms play a central role in modern public discourse. But while their moderation decisions can have a huge impact, there is often no recourse available to people or organizations who have their content or their accounts deleted. Concern about this power being exercised by immensely wealthy private organizations—and in particular with the unexplained way the power is often exercised—has led to many proposed regulatory reforms, including a suggestion that “common carriage” rules should apply to these companies, which would effectively require them to provide a voice and a platform to everyone. Could such a rule be reconciled with the platforms’ own First Amendment rights, and would it represent a meaningful improvement over the status quo? This panel will address these questions and more.
Algorithmic Criminal Justice?
A Symposium Hosted by the UCLA School of Law, January 24, 2020
About the Symposium
Algorithms are playing a growing role in both policing and criminal justice. In theory, algorithms can provide information that can help promote analytical rigor, objectivity and consistency. But they can also reflect and amplify biases inadvertently introduced by their human creators and biases present in data.
This event convened a diverse set of national thought leaders to engage with a key set of critically important questions on the proper role of algorithms in policing and in the criminal justice system. Topics addressed included: 1) approaches to identifying and mitigating algorithmic bias, 2) the unique challenges and opportunities associated with the subset of algorithms that use AI, 3) ways to spur technological innovation so that the positive potential of algorithmic approaches in policing and criminal justice can be realized, while also protecting against the downsides, 4) the relative roles of the public and private sectors in developing, deploying, and ensuring the quality of new algorithmic solutions, and 5) approaches that can help ensure that algorithmic approaches enhance, rather than undermine, civil liberties.
Program and Videos
Welcoming remarks and introductions - Video
Panel 1: Creating Algorithms for Justice - Video
- Alex Alben (moderator) – UCLA
- Colleen Chien – Santa Clara University
- Eric Goldman – Santa Clara University
- Rebecca Wexler – UC Berkeley
Panel 2: Algorithmic Policing - Video
- Jeff Brantingham – UCLA
- Beth Colgan (moderator) – UCLA
- Catherine Crump – UC Berkeley
- Andrew Ferguson – American University
- Orin Kerr – UC Berkeley
Panel 3: Algorithmic Adjudication - Video
- Chris Goodman – Pepperdine University
- Sandy Mayson – University of Georgia
- Richard Re (moderator) – UCLA
- Andrew Selbst – UCLA
- Chris Slobogin – Vanderbilt University
Panel 4: Regulation and Oversight - Video
- Jane Bambauer – University of Arizona
- Gary Marchant – Arizona State University
- Ken Meyer – Los Angeles District Attorney's Office
- Mohammad Tajsar – ACLU of Southern California
- John Villasenor (moderator) – UCLA
Keynote: Commissioner Rebecca Kelly Slaughter – Federal Trade Commission - Video