Reason, Simulation, and Judgment: A Historiography of Artificial Intelligence and the Epistemic Limits of Machines


The project of artificial intelligence never sought to simulate cognition in a literal sense. Instead, it attempted to formalize selected functions of the mind—deduction, pattern recognition, optimization—and render them machinable. In doing so, it displaced older conceptions of thought with new ones, redefined not by philosophical introspection or empirical psychology, but by what computers could process. This transformation, as historians and theorists within Science and Technology Studies (STS) have shown, has not merely influenced technological systems. It has rewritten the very conditions under which knowledge claims about reasoning, learning, and thinking become possible.

To understand how artificial intelligence differs from human reasoning, one must first track the intellectual, institutional, and epistemological decisions that framed its development. Traditional histories of AI often remain internalist. They follow technical milestones, architectures, and breakthroughs. In The Quest for Artificial Intelligence, Nils Nilsson produced such an account from the perspective of a practitioner. He described early work on logic-based reasoning, expert systems, and neural networks with the aim of cataloging progress.1 Though comprehensive in scope, his narrative foregrounded the capacities of AI systems without critically examining the implications of their epistemic design. Nilsson acknowledged that definitions of intelligence shifted over time, but he offered little discussion of how such shifts influenced the boundary between simulation and reasoning.

Scholars within STS reframed the problem. Rather than ask how machines think, they asked how thinking became something that machines could do. Paul Edwards introduced this move with force in The Closed World, a study of Cold War military logic, systems theory, and the politics of computation. He argued that AI did not arise in isolation from ideology. It emerged as a key component of a closed-world discourse, a conceptual space where technical systems promised rational control in an era of geopolitical uncertainty.2 The development of machine reasoning, Edwards insisted, formed part of a broader effort to systematize decision-making and remove ambiguity from conflict. What passed for intelligence within this context bore little resemblance to the deliberative, contextual judgment that defined human reasoning. It resembled, instead, a constrained form of calculation, embedded in structures of command and prediction.

Where Edwards revealed the political infrastructure of AI, Sherry Turkle illuminated its psychological interface. In The Second Self, she examined how individuals engaged with early AI programs like ELIZA, not as tools, but as entities capable of thought.3 Through interviews with children and adults, she documented how users imputed meaning, intention, and awareness to machines that executed rule-based routines. “People experience computers as psychological entities,” she wrote, “not because of what they are, but because of how we experience them.”4 Turkle’s ethnographic method disrupted the assumption that intelligence exists independently of interpretation. Her work showed that the illusion of machine thought emerged not from computational complexity, but from the projection of human categories onto algorithmic behavior.

Where Turkle addressed the user’s side of the relationship, Donald MacKenzie turned to the epistemic foundations of AI itself. In Mechanizing Proof, he traced how developers of formal systems equated mechanical demonstration with mathematical certainty.5 He argued that automated reasoning did not extend human rationality. It reconfigured it. In MacKenzie’s account, early AI did not simulate existing mental processes. It operationalized new ones, defined by institutional needs for certainty, reproducibility, and trust. “The proof,” he wrote, “became the machine’s domain, not because machines could reason, but because reasoning was redefined in terms of what machines could do.”6

Stephanie Dick advanced this line of critique with exceptional clarity. In her study of the Logic Theory Machine, developed by Newell and Simon at RAND, she examined how formal logic became the benchmark for intelligent behavior.7 The machine, she argued, did not mirror human thought. It performed according to a restricted model of reasoning, built from postulates and rules of inference. “Rather than model the mind,” she observed, “these systems proposed an alternative to it, grounded in a new material epistemology.”8 In this account, AI does not reflect cognition. It imposes new norms upon it. The result is not simulation, but substitution.

The distinction between simulation and reasoning becomes even starker with the rise of machine learning. Symbolic AI aimed to replicate logical thought. Statistical learning systems, by contrast, optimize outcomes based on data distributions. Matteo Pasquinelli, writing in the context of this transition, described how machine learning shifted the basis of intelligence from logic to probability.9 He argued that neural networks operate as “black boxes,” capable of generating outputs without offering reasons. They predict without explaining. They correlate without inferring. The model, he emphasized, cannot articulate why it reached a decision. It only demonstrates that the decision aligns with prior patterns in the data.

Lorraine Daston’s work on objectivity and scientific reasoning clarifies what this epistemic shift entails. Though not writing about AI per se, Daston and Peter Galison examined how historical regimes of reasoning relied upon norms of justification, transparency, and the cultivation of judgment.10 AI systems, Daston would likely argue, violate these norms. They produce reliable outcomes without interpretive fidelity. They offer neither argument nor defense. They displace the rational subject with a statistical artifact.

Kate Crawford addressed these tensions directly in Atlas of AI, where she rejected the metaphor of the thinking machine. AI, she argued, performs cognition only insofar as it processes data. Behind each system lies a network of human decisions: the selection of training sets, the design of optimization functions, the framing of objectives.11 Crawford called attention to the material, political, and environmental costs of AI, but her more fundamental insight concerned agency. AI systems do not choose. They do not weigh reasons. They reflect prior inputs through layers of abstraction, and they do so without awareness. “Artificial intelligence is neither artificial nor intelligent,” she concluded. “It is made from natural resources, labor, and classifications, and it operates through the statistical exploitation of past data.”12

Taken together, these contributions trace a coherent historiographical arc. Early AI displaced traditional models of reasoning with formal logic. Machine learning displaced formal logic with statistical inference. In both cases, the simulation of intelligence involved a narrowing of epistemic scope. Where humans reflect, machines optimize. Where humans deliberate, machines correlate. The difference is not one of degree, but of kind.

To mistake simulation for reason is to misunderstand both. To treat AI systems as analogues of human thinkers risks the projection of intentionality where none exists. This error confounds design, policy, and pedagogy. It leads to systems that simulate justification but do not provide it. It encourages trust in outputs that cannot be interrogated. It renders interpretation subordinate to performance.

The solution does not lie in rejecting AI. It lies in understanding what it does and what it cannot do. Historians of AI have shown that the boundary between thinking and calculating matters. They have shown that human reasoning entails context, reflexivity, and doubt—qualities absent from even the most sophisticated models. To collaborate with AI systems effectively, users must speak not to minds, but to mechanisms. They must treat the interface not as a conversation, but as a protocol.

Understanding this difference empowers users. It allows them to prompt more effectively, test more rigorously, and interpret more responsibly. It preserves the human in the loop, not as a source of error, but as a source of meaning. The history of AI does not point toward its replacement of human thought. It points toward the need to protect it.


Notes

  1. Nils J. Nilsson, The Quest for Artificial Intelligence: A History of Ideas and Achievements (Cambridge: Cambridge University Press, 2009), 11. ↩︎
  2. Paul N. Edwards, The Closed World: Computers and the Politics of Discourse in Cold War America (Cambridge, MA: MIT Press, 1996), 23. ↩︎
  3. Sherry Turkle, The Second Self: Computers and the Human Spirit (Cambridge, MA: MIT Press, 1984), 103. ↩︎
  4. Turkle, The Second Self, 103. ↩︎
  5. Donald MacKenzie, Mechanizing Proof: Computing, Risk, and Trust (Cambridge, MA: MIT Press, 2001), 27. ↩︎
  6. MacKenzie, Mechanizing Proof, 227. ↩︎
  7. Stephanie Dick, “After Math: Reasoning, Proving, and Computing in the Postwar United States,” Historical Studies in the Natural Sciences 43, no. 1 (2013): 78. ↩︎
  8. Dick, “After Math,” 89. ↩︎
  9. Matteo Pasquinelli, “Three Thousand Years of Algorithmic Rituals: The Emergence of AI from the Computation of Space,” e-flux Journal, no. 101 (2019), accessed April 22, 2025, https://www.e-flux.com/journal/101/273221/three-thousand-years-of-algorithmic-rituals-the-emergence-of-ai-from-the-computation-of-space/. ↩︎
  10. Lorraine Daston and Peter Galison, Objectivity (New York: Zone Books, 2007), 17. ↩︎
  11. Kate Crawford, Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence (New Haven: Yale University Press, 2021), 34. ↩︎
  12. Crawford, Atlas of AI, 8. ↩︎
