Aleksander's Congressional Q&A
A selection of questions from the Q&A following Aleksander’s testimony before the House Subcommittee on Cybersecurity, Information Technology, and Government (with lightly edited answers)
Q: Widespread use of AI systems raises questions about responsibility and liability. For example, if a bank purchases an algorithm from another entity, and use of that algorithm leads to unequal treatment of people of different races, who is responsible, and potentially liable, for any unfair treatment of applicants? Is it the bank for using the algorithm, the organization that sold the algorithm to the bank, or the developer who created the algorithm?
This is a crucial, challenging, and ultimately under-studied question in the field of AI, made even more difficult by the fact that the dynamics of AI deployment are changing. In the past, companies that deployed AI systems would develop these systems “in-house,” retaining full control over the corresponding data, algorithms, and interface with users. More recent systems, however, are far less centralized—different actors are responsible for sourcing the data, curating it, training the AI system, and deploying it. In effect, there is an emergent “supply chain” when it comes to AI deployment.
As a result of this AI supply chain, responsibility and accountability can become quite diffuse. One's first instinct might be to assign blame to the entity deploying a faulty AI system (i.e., the bank, in the provided example). But these entities are now largely “last-mile” businesses, relying on a complex chain of AI providers that they are not always positioned to understand. Indeed, AI failures can be caused by mistakes at almost any link in this chain, from poor model development to biased data collection.
Solving this problem will be a major challenge, but an important first step is to establish requirements for proper disclosure of such aspects as intended usage, known failure modes, and possible biases. None of this occurs now, but it should—at all points of the “supply chain,” so not only for the final AI system but for all the underlying components. Government requirements establishing (and perhaps standardizing) such disclosure across entities are thus a necessary first step here; efforts such as model cards and datasheets for datasets are good initial ideas in this context but are nowhere near sufficient.
There is, however, a significant challenge here that disclosure alone may not solve: sometimes there may not be a single organization responsible for a downstream failure. That is, everyone along the supply chain might be fulfilling their duty of care, and the result might still be undesirable, simply due to unexpected interactions between different components in the supply chain.
Thus, while we may not yet have all the answers, it's clear that the AI supply chain (and the challenge it poses for responsibility and liability attribution) should be front and center in AI policy discussions moving forward.
Q: How should we think about the consequences of the AI “black box” problem, where the inputs and operations of an AI system are not visible to the user or to any interested parties? For example, if a patient is denied a critical surgery or transplant because of a decision assisted by an AI system, should the patient have a right to understand how that decision was made? What are the technical limitations of establishing such transparency?
Transparency is a value that we hold dear in our society, and also a key mechanism to facilitate accountability for consequential human decisions. I believe that we can and should strive to hold decisions made by AI systems to a similar standard, with the mentioned example of a patient being subjected to an AI-assisted medical decision as a motivating scenario.
The nascent area of AI interpretability (also referred to as AI explainability) provides us with an initial understanding of how to work towards these goals, what might be possible to achieve in this regard, and what the pitfalls might be. In particular, in terms of the pitfalls, we have already learned that “explanations” of AI decision-making (including those provided by the system itself) can often be extremely convincing and yet misleading—that is, provide little insight into the actual way that the AI arrived at its decision. Another lesson is that there are unavoidable trade-offs between the capability of an AI system and the level of interpretability that this system can ever offer.
As a result, policymakers need to make judicious—and thus context-dependent—choices regarding what level of interpretability we view as necessary. For instance, how would we negotiate the tension between the ability of an AI system to provide a life-saving intervention and its ability to properly justify the need for that intervention? We should also accept the fact that AI systems will always, to a certain degree, be “black boxes” to us, and recognize that what constitutes an explanation depends both on the expertise of the person the explanation is for and on the purpose the explanation is supposed to serve. (In fact, very similar considerations apply to human decision-making too. In particular, we do not demand full transparency from humans—humans are fairly inscrutable as well.)
Nonetheless, there are many possible paths forward—and a sustained investment in research in this area will push the boundaries of what's achievable even further. One way to gain greater insight into how an AI system arrived at its decision is data attribution: tying a decision back to the data on which the AI model was trained [1, 2, 3, 4]. Another is auditing: testing whether a system meets a set of criteria under different conditions [5, 6, 7]. Regulators could also mandate certain information disclosures regarding the inner workings of a given AI system, with the exact nature of these disclosures depending on the specific use case and regulatory context.
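To give a concrete (if deliberately simplified) sense of what data attribution means, the sketch below estimates each training example's influence on a single prediction by retraining a small model with that example left out. The dataset, model, and query point are hypothetical stand-ins; practical methods such as influence functions [1] or datamodels [3, 4] approximate this leave-one-out idea at scale rather than retraining thousands of times.

```python
# Minimal data-attribution sketch: leave-one-out retraining on a toy model.
# Everything here (data, model, query point) is an illustrative stand-in.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X_train, y_train = make_classification(n_samples=200, n_features=5, random_state=0)
x_query = X_train[:1]  # the input whose decision we want to explain

def decision_score(train_idx):
    """Probability assigned to class 1 for x_query by a model trained on train_idx."""
    model = LogisticRegression(max_iter=1000)
    model.fit(X_train[train_idx], y_train[train_idx])
    return model.predict_proba(x_query)[0, 1]

full_idx = np.arange(len(X_train))
baseline = decision_score(full_idx)

# Attribution of each training example: how much the decision changes when
# that example is removed (positive means the example pushed the decision up).
attributions = np.array([
    baseline - decision_score(np.delete(full_idx, i)) for i in range(len(X_train))
])

print("Most influential training examples:", np.argsort(-np.abs(attributions))[:5])
```

In a deployed setting, analogous (but far more scalable) tooling would let an affected person, or a regulator, see which training records most shaped the decision in question.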
Q: What are the steps that AI developers, industry, and the civil sector are currently taking, or should be taking, to prevent this technology from being used to create harm or violate a person's rights?
This is a very important (and still largely open) question. After all, with AI we've unleashed a technology that is very powerful and also very different from the types of technologies we have dealt with in the past. Still, even if there aren't definite answers here, there are again very clear first steps to take.
In particular, there are several harms and risks that have already emerged in AI deployment (like AI's struggle to deliver accurate predictions or its tendency to perpetuate undesirable biases). It is critical that these risks be fully identified and documented, and that tools be developed to mitigate them. Academia and the civil sector have been fairly active on this front, but these efforts might not be enough: the most advanced systems are built using data and computational resources that only a small group of companies can currently command. There are several steps that government should take here:
Adapt and enforce existing regulations and identify the emerging regulatory gaps. It is important to realize that there is already a vast body of existing regulation—at the federal, state, and local levels—that might not have been developed with AI in mind but nonetheless applies to AI systems. (The FTC's newly released guidelines for generative AI products are a good illustration of this.) Of course, these rules tend to be sectoral and “human-facing” in nature and, as such, will require proper adaptation to the AI context. Still, this adaptation should proceed immediately, spearheaded by the respective (sectoral) regulatory body with the involvement of AI experts.
Such an adaptation effort would not only quickly bring much-needed regulatory structure to bear on the burgeoning AI services space but would also help us understand which aspects of real-world AI deployment might not yet be properly addressed by the existing rules.
Such efforts would also likely push the developers of this technology to prioritize mitigating the undesirable societal impacts of the systems they build. That would be particularly important given the recent decisions of several major companies to downsize, or outright eliminate, their departments focused on safe and ethical AI deployment. In fact, as an initial, immediate step here, it would help if the government simply laid out clearly the possible undesirable impacts of AI and the possible regulatory approaches to them.
Combine technological solutions with policy action. A number of technical approaches to tackling the safety of AI have already been proposed (particularly in academia), but no policies have been put in place that would get these solutions refined and implemented. In particular, there has been work on detecting and preventing deepfakes, on stopping copyright violations, and on auditing AI systems [8, 9, 10, 11, 12], but, again, without a corresponding policy effort, further development and sufficiently broad adoption of these urgently needed solutions is unlikely.
(Further) empower academia and civil society. Finally, the government can act to bridge the gap between the AI research that academia and the civil sector are able to do and the far better-resourced work happening in industry. In particular, the US government should provide funding for (a) research focused on societal aspects of AI-driven decision making; and (b) building academic/open-source systems of large enough scale to enable meaningful research insights into the workings of the larger, proprietary commercial systems.
Q: What are some common misunderstandings about AI?
There are (at least) two broad areas of misunderstanding when it comes to AI systems, namely their capabilities (what they can do) and their underpinnings (how they work).
Regarding the former area, people tend to vastly overestimate the (current) capabilities of AI systems. Specifically:
Current AI systems are only as good as their data. Existing AI systems are designed to extract (and perpetuate) patterns gleaned from their training data (i.e., the data used to develop them). This means that they perform well when applied to tasks that are accurately captured by this data, but tend to struggle otherwise. Ensuring that the training data is truly relevant for the intended task is therefore absolutely key. Consider, for instance, an AI system designed to assist driving by detecting pedestrians. If the system is trained solely on video collected while driving in suburbs, it will likely underperform when deployed in a city. The key role of training data can also be seen in the impressive “versatility” of sophisticated conversational systems such as OpenAI's ChatGPT. That versatility is driven mainly by the staggeringly large size and diversity of their training data, not by any ability of the system to truly understand human language and reason about the acquired knowledge.
Even within the tasks that a given AI system was designed for, it might be far from infallible. From self-driving cars to sophisticated conversational systems, every AI system makes mistakes. Furthermore, malicious actors can “force” these mistakes by carefully manipulating inputs to these systems. For instance, so-called “adversarial examples” in computer vision can fool an AI system by applying imperceptible changes to an input image (a minimal illustration of this appears below), and increasingly sophisticated “jailbreak prompts” can trick conversational systems into providing answers that their developers do not want them to provide.
AI is not “unbiased,” nor can data be “unbiased.” As important as careful curation and the collection of sufficiently diverse data are, every dataset will overrepresent some populations relative to others (in part because “representativeness” is a relative concept that depends on the context or application). Moreover, even a dataset that is properly representative and well curated for a given application does not guarantee that the resulting AI system will not perpetuate some undesirable biases or harms.
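To illustrate the mechanics behind the adversarial examples mentioned above, here is a minimal sketch of the classic “fast gradient sign” perturbation. The classifier and image below are untrained, random stand-ins (so the printed predictions carry no real meaning); against a trained model and a real photograph, a perturbation of this size is typically invisible to a human yet can flip the model's output.

```python
# Mechanics of an adversarial example (fast gradient sign method).
# The classifier and image are untrained, random stand-ins for illustration only.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # toy "image classifier"
image = torch.rand(1, 3, 32, 32, requires_grad=True)             # stand-in for a photo
label = torch.tensor([3])                                        # its (assumed) true class

# Compute how the classification loss changes with respect to each input pixel.
loss = nn.CrossEntropyLoss()(model(image), label)
loss.backward()

# Nudge every pixel slightly in the direction that increases the loss.
epsilon = 2 / 255  # small enough to be imperceptible in a real image
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

print("original prediction:   ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```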
As to the latter area of misunderstanding, people have a tendency to anthropomorphize AI systems, and to assume their underlying logic mirrors our own. This leads to a number of common misconceptions. It is important to recognize:
Current AI models do not “reason.” Instead, as we already touched on above, they mimic reasoning by gleaning patterns found in the enormous datasets they sift through during their development. They then output content according to these patterns, in a way that often looks very convincing.
In the same vein, asking a conversational system like ChatGPT to “explain” its answer does not necessarily result in a valid explanation. These models are trained to predict the next word in a sequence, and to do so in a way that humans generally approve of, but nothing more. Thus, even if an “explanation” generated by asking ChatGPT to explain an answer sounds plausible (and consistent with the answer being explained), it does not necessarily correspond to how the model actually arrived at that answer. In fact, the same goes for many other existing approaches to AI explainability, and this problem is, to a large degree, inherent. (As discussed in the answer to Question 2, there are viable and helpful technical approaches to gaining insights into the inner workings of AI systems, but these approaches still fall far short of offering “full explainability.”)
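As a deliberately simplistic illustration of what “predicting the next word” means, the toy model below (a hypothetical two-line “language model”) simply counts which word followed which in its training text and outputs the most common continuation. Modern conversational systems are incomparably larger and more capable, but they share the same basic objective: continue the text in a statistically plausible way, with no explicit reasoning step.

```python
# A toy "language model": predict the next word purely from co-occurrence counts.
# The corpus is a made-up stand-in; real systems train neural networks on vastly more text.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the dog sat on the rug".split()

# For each word, count which words follow it in the training text.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Return the continuation seen most often after `word` in the training text."""
    return next_word_counts[word].most_common(1)[0][0]

print(predict_next("sat"))  # "on"  -- the only continuation ever observed
print(predict_next("the"))  # e.g. "cat" -- a pattern from the data, not a "reasoned" choice
```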
Q: How should Congress think about a risk-based approach to AI governance? How can Congress ensure that industry is carefully considering and mitigating the risks of AI tools?
To answer the first question: while risk is an important consideration, it can be properly evaluated only within a specific context. Therefore, I believe that the most effective approach to AI governance should be use case-based, with risk taken into account only once that specific context is clear. In particular, whenever an AI system is deployed for consumers, its “last-mile” developer should specify the contexts and purposes it is designed for. For instance, does this system aim to provide medical or legal advice? Is this system intended to be used by consumers or by trained professionals? What kinds of conditions and modes of operation is it designed for?
Then, if the declared use cases merit it (and this is where risk assessment comes in), the developer should be required to work with the corresponding sectoral regulating body to ensure the safety of the AI system and proper compliance with existing law in the context of these use cases.
Also, importantly, the developer would have to make a reasonable effort to prevent uses beyond those that are declared. That would likely involve taking technical steps to make it difficult to use the system for such undeclared purposes.
Regarding the second question, it is important that regulation not be overly prescriptive in terms of technology, since the technology is evolving very fast. Specifically, Congress should avoid regulating inputs to AI systems (e.g., insisting that the data be “unbiased”—see also my answer to Question 4) and should not regulate the technology per se either. Rather, the focus should be on regulating the outcomes and effects of the technology on end users (see my answer to Question 2).
More broadly, an effective regulatory framework should allow regulators to be in constant conversation with AI developers, updating requirements for premarket release and monitoring as appropriate given technical advances. Such a framework might involve requiring (confidential) disclosures from AI developers to regulators on the inner workings of their solutions. An analogy here might be the regulatory structure that exists for finance, where firms think about regulatory issues when deploying new data-driven decision-making instruments and tools and discuss with regulators how they are seeking to make those instruments and tools safe and compliant.
Finally, as previously noted, existing regulatory structures and rules should be applied to AI as much as possible. Many of them are relevant and can (and, as I say in my answer to Question 4, should) be “translated” to the AI regime. That approach would take advantage of the enormous domain expertise, case law, and human capital of the relevant agencies. Of course, not all the issues that arise in the context of AI will fall within the scope of one of these existing agencies, so I believe it might also be necessary to stand up a new, AI-specific regulatory body. That body could, on the one hand, work closely with the existing regulators (including at the state and local levels), potentially serving as a “clearing house” for them and a resource in terms of AI expertise and talent. On the other hand, it should have the authority to address challenges unique to AI, such as the supply chain issue I discussed in the answer to Question 1 and in my testimony.
Q: There are fears that AI will eliminate jobs. In the past, new technologies have led to the elimination of some jobs and the creation of others. Can you provide an illustration or two of how AI could lead to the creation of additional jobs?
There is no doubt that AI will transform the labor market, although I would expect that this transformation will be more about changing the nature of occupations than completely eliminating them. (The current trajectory of AI development makes it likely that higher-skilled, better-paying jobs will be affected first.)
Still, as the question notes, AI has the potential to follow a similar pattern to previous technological advances, eliminating some jobs while creating others. A prominent example of the latter would be jobs needed to operate and supervise deployed AI systems—for instance, in the context of medical services. Specifically, advances in AI should make it possible for professionals who have not undergone training as sophisticated and demanding as that of doctors to offer fairly advanced (and accurate) medical diagnoses. These professionals could thus help ameliorate the acute healthcare shortages we experience as a country, particularly in rural areas, by at the very least providing diagnosis and referral services to the populations affected. One could expect similar opportunities to arise in providing more specialized and advanced education and training.
It is important to keep in mind, though, that none of these positive, job-creating impacts is preordained, and even if such jobs are created, they may not be available to those being displaced. It is thus critical that we invest in training and workforce education to give people the tools to adapt to a changing labor landscape.
Acknowledgements
I am grateful for invaluable help from Sarah Cen, David Goldston, Andrew Ilyas, and Luis Videgaray.
References
1. Koh, P. W., & Liang, P. (2017). Understanding Black-box Predictions via Influence Functions. International Conference on Machine Learning (ICML).
2. Feldman, V., & Zhang, C. (2020). What Neural Networks Memorize and Why: Discovering the Long Tail via Influence Estimation. Advances in Neural Information Processing Systems (NeurIPS), 33, 2881–2891.
3. Ilyas, A., Park, S. M., Engstrom, L., Leclerc, G., & Mądry, A. (2022). Datamodels: Predicting Predictions from Training Data. International Conference on Machine Learning (ICML).
4. Park, S. M., Georgiev, K., Ilyas, A., Leclerc, G., & Mądry, A. (2023). TRAK: Attributing Model Behavior at Scale. International Conference on Machine Learning (ICML).
5. Cen, S. H., Madry, A., & Shah, D. (2023). A User-Driven Framework for Regulating and Auditing Social Media. arXiv preprint arXiv:2304.10525.
6. Cen, S., & Shah, D. (2021). Regulating Algorithmic Filtering on Social Media. Advances in Neural Information Processing Systems (NeurIPS), 34, 6997–7011.
7. Yan, T., & Zhang, C. (2022). Active Fairness Auditing. International Conference on Machine Learning (ICML), 24929–24962.
8. Nguyen, T. T., Nguyen, C. M., Nguyen, D. T., Nguyen, D. T., & Nahavandi, S. (2019). Deep Learning for Deepfakes Creation and Detection.
9. Ruiz, N., Adel Bargal, S., & Sclaroff, S. (2020). Disrupting Deepfakes: Adversarial Attacks Against Conditional Image Translation Networks and Facial Manipulation Systems.
10. Salman, H., Khaddaj, A., Leclerc, G., Ilyas, A., & Mądry, A. (2023). Raising the Cost of Malicious AI-Powered Image Editing. International Conference on Machine Learning (ICML).
11. Kirchenbauer, J., Geiping, J., Wen, Y., Katz, J., Miers, I., & Goldstein, T. (2023). A Watermark for Large Language Models. arXiv preprint arXiv:2301.10226.
12. Vyas, N., Kakade, S., & Barak, B. (2023). Provable Copyright Protection for Generative Models. arXiv preprint arXiv:2302.10870.