Abstract

This paper examines the philosophical implications of Saul Kripke’s interpretation of Wittgenstein’s rule-following paradox for artificial intelligence (AI). Central to this analysis is the question of whether AI systems, which operate through pre-programmed rules and machine learning models, can genuinely “follow rules” or if their operations are merely sophisticated simulations of rule-following behavior. Drawing on Kripkean skepticism, this work argues that AI lacks the communal grounding necessary for authentic rule-following and understanding. Further, it explores whether AI could ever develop a form of communal practice akin to human rule-following. The discussion concludes by examining ethical implications for the development and deployment of AI systems in society.

1. Introduction

Artificial intelligence has rapidly evolved, with applications in areas ranging from autonomous vehicles to natural language processing. As AI systems become more sophisticated, questions about their understanding and autonomy grow increasingly relevant. Can AI systems genuinely “understand” the rules they appear to follow, or are they simply mechanistic tools executing pre-defined instructions? What does it mean for a machine to “follow a rule”?

To address these questions, this paper draws on Saul Kripke’s interpretation of Wittgenstein’s rule-following paradox. Kripke’s work highlights the communal and normative dimensions of rule-following, suggesting that meaning and understanding are not intrinsic to an individual or system but arise from social practices. Applying this framework to AI, the paper explores the limitations of current systems and the philosophical challenges they pose.

2. The Rule-Following Paradox: A Kripkean Account

In his Wittgenstein on Rules and Private Language (1982), Kripke famously reinterprets Wittgenstein’s rule-following paradox. The paradox asks: what determines the correct application of a rule in new cases? For example, when one is taught to “add,” how does one know how to extend this rule to new cases, such as 1,000 + 2,000?

Kripke argues that no finite mental or physical representation of a rule can fully determine its application: any algorithm or instruction can always be interpreted in multiple ways. For example, if every addition a learner has ever performed involved sums below 1,000, that finite history is equally consistent with the deviant rule “add, unless the sum exceeds 1,000, in which case subtract the smaller number from the larger.” The paradox suggests that the correct application of a rule is not fixed by any intrinsic property of the rule itself; Kripke’s “skeptical solution” locates correctness instead in communal agreement about the rule’s use.
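To make the skeptic’s worry concrete, the following minimal sketch (in Python, offered purely as an illustration rather than anything drawn from Kripke’s text) defines two functions that agree on every sum a learner has actually computed yet diverge on a new case; the 1,000 threshold follows the example above, while the function names and the sample history are hypothetical.

```python
# A minimal sketch of the skeptical problem (illustrative names and history,
# not drawn from Kripke's text). Two interpretations of "add" agree on every
# case the learner has encountered, yet diverge on a new one.

def plus(x, y):
    """The standard interpretation of 'add'."""
    return x + y

def deviant_add(x, y):
    """A deviant interpretation: add, unless the sum exceeds 1,000,
    in which case subtract the smaller number from the larger."""
    return x + y if x + y <= 1000 else abs(x - y)

# Every sum the learner has actually computed stayed below 1,000,
# so both interpretations agree on the learner's entire history of use.
history = [(2, 3), (17, 25), (120, 300)]
assert all(plus(x, y) == deviant_add(x, y) for x, y in history)

# Yet nothing in that history settles which rule governs a new case.
print(plus(1000, 2000))         # 3000
print(deviant_add(1000, 2000))  # 1000
```

The point is not that a learner might secretly be computing the deviant function, but that the finite history of use alone cannot rule it out; on Kripke’s account, only communal standards of correctness do.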

This communal aspect introduces a normative dimension: rule-following involves being held accountable to shared standards of correctness. Without this communal grounding, rule-following becomes unintelligible.

3. Rule-Following and Artificial Intelligence

Modern AI systems, especially those using machine learning, operate by applying rules derived from data. These systems appear to “follow rules” in tasks such as language translation or image recognition. However, Kripke’s insights raise doubts about whether such systems genuinely follow rules or merely simulate rule-following behavior.

3.1 The Problem of Ambiguity in AI

One of Kripke’s key insights is that rules are inherently ambiguous without communal validation. AI systems face similar challenges:

• Unprecedented Inputs: Machine learning models may encounter inputs that differ from their training data. For example, a self-driving car may encounter a situation it was never programmed to handle, such as a kangaroo hopping across a road. Without explicit communal norms to guide interpretation, the system’s response remains contingent on its training rather than genuine understanding.

• Contextual Interpretation: AI often struggles with context. A content moderation algorithm might flag satire as offensive speech because it lacks the broader cultural understanding that humans bring to interpreting language.

These examples highlight that AI’s “rule-following” is fragile, dependent on the limits of its programming and training data.
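The same underdetermination can be sketched for learned models. In the illustrative example below (hypothetical data and models, not a description of any deployed system), two learners reproduce an identical training set perfectly yet disagree far outside it, so nothing in the data alone marks one extrapolation as the “correct” continuation of the rule.

```python
# Illustrative sketch (hypothetical data and models): a finite training set
# underdetermines behaviour on unprecedented inputs, echoing the Kripkean point.

# Training data generated by the intended rule y = 2x, on a narrow range.
train = [(float(x), 2.0 * x) for x in range(10)]

def linear_model(x: float) -> float:
    """Learner A: extrapolates the linear trend that fits the data exactly."""
    return 2.0 * x

def nearest_neighbour(x: float) -> float:
    """Learner B: returns the output of the closest training input."""
    return min(train, key=lambda pair: abs(pair[0] - x))[1]

# Both learners reproduce every training case...
assert all(linear_model(x) == y and nearest_neighbour(x) == y for x, y in train)

# ...but diverge on an input far outside the training range.
print(linear_model(1000.0))       # 2000.0
print(nearest_neighbour(1000.0))  # 18.0 (the output for x = 9, the nearest seen input)
```

Neither learner is “wrong” relative to the training data; only standards external to the data, supplied in practice by human designers and users, determine which extrapolation counts as following the rule.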

3.2 Derived vs. Original Intentionality

AI operates with what John Searle terms “derived intentionality”—its actions and “understanding” are entirely dependent on the intentions of its human designers and the data it has been trained on. In contrast, humans possess “original intentionality,” arising from their ability to create and interpret meaning autonomously within a social context.

Kripkean skepticism reinforces the view that AI systems lack the grounding required for original intentionality. Their rule-following behavior is not normative but merely functional, constrained by the intentions of their designers.

4. Could AI Form a Community of Rule-Followers?

A potential objection to Kripkean critiques of AI is that AI systems might develop their own form of communal practices. For example, decentralized AI networks could interact and establish shared protocols, akin to human linguistic practices. This raises several intriguing questions:

• Can AI systems create norms within their networks, leading to a form of communal rule-following?

• Would such norms be analogous to human practices, or would they remain fundamentally different?

4.1 The Nature of AI Communities

For AI to develop communal rule-following, AI systems would need to engage in normative practices, holding one another accountable to shared standards. However, Kripkean insights suggest that such accountability would require more than functional agreement; it would necessitate a shared “form of life.” AI, being non-conscious and non-social, lacks the lived experience that underpins human communities.

4.2 AI’s Lack of Normativity

Even if AI systems develop shared protocols, these protocols lack normativity in the human sense. For humans, normativity involves the capacity for critique, reinterpretation, and adaptation grounded in shared values and experiences. AI protocols, by contrast, remain fixed within the bounds of their programming and data.

5. Ethical and Practical Implications

Kripke’s analysis has significant implications for the development and deployment of AI systems.

5.1 Transparency in AI Design

Given AI’s inability to genuinely follow rules, developers must prioritize transparency. Systems should clearly indicate their limitations, especially in ambiguous or high-stakes contexts, such as medical diagnosis or legal decision-making.

5.2 The Necessity of Human Oversight

AI’s lack of intrinsic understanding underscores the importance of human oversight. For example, autonomous weapons systems pose ethical risks precisely because they operate without human judgment in life-and-death scenarios.

5.3 Rejecting AI Overreach

Claims that AI possesses “understanding” or “autonomy” should be critically examined. Kripke’s insights remind us that rule-following and understanding are deeply tied to communal practices that machines cannot replicate.

5.4 Designing for Collaboration

Rather than striving for AI autonomy, developers should focus on designing systems that collaborate effectively with humans, leveraging human capacities for interpretation and judgment.

6. Broader Philosophical Implications

Kripke’s rule-following paradox not only critiques AI but also raises broader questions about human cognition and social practices. By examining the differences between human and machine rule-following, we gain a deeper understanding of what it means to be human. AI, in this sense, serves as a mirror, highlighting the unique aspects of human normativity and community.

7. Conclusion

Kripke’s interpretation of Wittgenstein’s rule-following paradox offers a profound critique of AI’s claims to understanding and autonomy. While AI systems can simulate rule-following with remarkable sophistication, they lack the communal grounding and normative accountability that define genuine rule-following. As AI continues to advance, developers and ethicists must remain mindful of these philosophical limits, ensuring that machines are designed not as autonomous agents but as tools that augment human capabilities.

References

1. Kripke, Saul. Wittgenstein on Rules and Private Language. Harvard University Press, 1982.

2. Searle, John. “Minds, Brains, and Programs.” Behavioral and Brain Sciences, vol. 3, no. 3, 1980, pp. 417–424.

3. Wittgenstein, Ludwig. Philosophical Investigations. Blackwell, 1953.

4. Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies. Oxford University Press, 2014.

5. Floridi, Luciano. The Ethics of Information. Oxford University Press, 2013.