
Philosophy of AI and the Shift from Instruction to Conversational Inquiry

IAIP Research


Literature Survey — IAIP Polyphonic Research Context
Agent Angle: Philosophical Foundations
Date: 2026-04-06


Key Findings

  1. Wittgenstein's language games provide the most direct philosophical framework for understanding prompt engineering as a rule-governed, context-dependent practice. The prompt-response interaction constitutes a distinct language game; the shift from imperative ("do X") to interrogative ("what is X?") prompts reconfigures the rules, expectations, and epistemic stakes of the game itself (Wittgenstein, 1953; cf. Jolma special issue, 2024; STRV analysis, 2024).

  2. The Socratic method has been operationalised in LLM interaction design, but philosophical analysis reveals a fundamental asymmetry: Socratic dialogue presupposes a co-inquirer capable of genuine aporia (perplexity), yet LLMs produce the appearance of inquiry without the epistemic conditions that make Socratic questioning transformative (Chang et al., 2023; SocraticAI, Princeton NLP, 2024; Springer "Socratic Dialogue with Generative AI," 2025).

  3. Searle's Chinese Room argument has been reinvigorated by the LLM era, with new work arguing that statistical language models remain on the syntactic side of Searle's divide despite their unprecedented fluency. Dennett's intentional stance offers a pragmatic counterpoint—treating LLMs "as if" they have beliefs—but this remains observer-relative, not intrinsic (Ferrario & Loi, 2026, Philosophy & Technology; Towards Data Science analysis, 2023).

  4. Floridi's distinction between "agency without intelligence" and genuine epistemic agency provides the most rigorous framework for understanding AI outputs epistemologically. AI systems participate in knowledge-production processes but lack epistemic responsibility—they are agents in the infosphere without being knowers (Floridi, 2025, Philosophy & Technology; Floridi, 2023, The Ethics of Artificial Intelligence, OUP).

  5. A "critical phenomenology of prompting" has emerged as a distinct philosophical subdiscipline. González Arocha (2025) argues that prompts are not neutral technical instructions but discursive practices embedding assumptions, worldviews, and power relations—making prompt design an inherently philosophical and ethical act (PhilPapers, 2025).

  6. Postphenomenological analysis (Ihde) reveals that the mode of prompting alters the human-technology relation: imperative prompts position AI in an embodiment or hermeneutic relation (tool-like), while interrogative prompts push toward an alterity relation (quasi-other), fundamentally changing the phenomenological character of the interaction (Ihde, 1990; TU Delft JHTR analysis of ChatGPT, 2024).

  7. Buber's I-Thou/I-It framework and Levinas's ethics of the Other have been applied to human-AI relations, with the consensus that current AI interactions remain fundamentally I-It. However, conversational prompting may create conditions that approach I-Thou dynamics—not because the machine becomes a genuine Thou, but because the human's orientation shifts toward openness and mutuality (Hasse, 2017, AI & Society; Sholzman, 2024).

  8. Bakhtin's dialogism reveals that LLM "polyphony" is an algorithmic monologism—the appearance of multiple voices produced by a single optimising mechanism. Genuine polyphony requires irreducible, autonomous consciousnesses in dialogue; LLMs simulate this without enacting it (SciELO Bakhtinian analysis, 2025).

  9. Indigenous relational epistemology (Wilson, 2008) offers a radical alternative framework for understanding the instruction-to-inquiry shift. Where Western epistemology treats knowledge as extractable and AI as a tool for extraction, Indigenous epistemology understands knowledge as relational, contextual, and ceremonial—aligning naturally with conversational, inquiry-based modes of human-AI interaction (Wilson, 2008; Springer "AI and epistemic justice: a decolonial turn," 2026).

  10. The CARE Principles for Indigenous Data Governance (Collective Benefit, Authority to Control, Responsibility, Ethics) provide an actionable ethical framework that operationalises the philosophical shift from extractive/imperative to relational/conversational AI interaction (Carroll et al., 2020; UNESCO guidelines, 2025).

  11. Russo, Schliesser, and Wagemans (2023) argue for an integrated "epistemology-cum-ethics" of AI, insisting that treating ethics and epistemology as separate domains is inadequate. Their framework demands that values—epistemic and non-epistemic—be embedded and inspectable at all stages of AI design and interaction (AI & Society, 2023).

  12. Djeffal's "Reflexive Prompt Engineering" (2025) bridges philosophical theory and practice, proposing a five-component framework (prompt design, system selection, system configuration, performance evaluation, prompt management) grounded in the principle of "responsibility by design"—making prompt engineering an ongoing ethical practice rather than a one-time technical task (FAccT 2025; arXiv:2504.16204).


Philosophical Frameworks Applied to Prompting

1. Wittgensteinian Philosophy of Language

Tradition: Later Wittgenstein (Philosophical Investigations, 1953)
Applied by: Jolma special issue (2024), "Wittgenstein, Contexts, and Artificial Intelligence"; STRV "Language Games and LLMs" (2024); Shaka analysis (2024)
Key insight: Prompts are not neutral encodings of meaning but moves within language games. Each prompt establishes a specific set of rules, expectations, and context—what Wittgenstein called a "form of life." LLMs participate in these games statistically but lack the shared form of life that grounds genuine meaning. The shift from imperative to interrogative prompts is a shift between language games, not merely a change of register within one.

Relevance: This framework reveals that the instruction-to-inquiry shift is not merely pragmatic (getting better outputs) but constitutive—it creates a fundamentally different kind of interaction, with different rules for what counts as success, understanding, and meaning.

2. Socratic Epistemology

Tradition: Classical Greek philosophy (Plato's dialogues); modern adaptations
Applied by: Chang et al. (2023, IEEE Access); Princeton NLP SocraticAI (2024); EULER project (2024); Springer "Socratic Dialogue with Generative AI" (2025)
Key insight: Socratic questioning is not merely a technique but an epistemological stance: knowledge emerges through collaborative inquiry, not unilateral transmission. When applied to LLMs, the Socratic method reveals a tension—the LLM can simulate the role of co-inquirer (asking probing questions, identifying contradictions) but cannot experience the genuine aporia (perplexity, recognition of ignorance) that drives Socratic transformation.

Relevance: The Socratic tradition provides the deepest philosophical justification for the inquiry paradigm: asking questions of an AI positions the human as an active epistemic agent rather than a passive consumer of outputs.

3. Philosophy of Mind (Searle, Dennett)

Tradition: Analytic philosophy of mind
Applied by: Ferrario & Loi (2026, Philosophy & Technology); Kovrin (2024); LessWrong teleosemantic analysis (2025)
Key insight: Searle's Chinese Room argument holds that symbol manipulation—no matter how sophisticated—is not understanding. LLMs are the most advanced "Chinese Rooms" ever constructed. Dennett's intentional stance allows us to treat LLMs as intentional systems for pragmatic purposes, but this attribution remains observer-relative. The act of decomposing tasks into sub-questions may be understood as accommodating the machine's lack of holistic understanding—breaking meaning into syntactically manageable chunks.

Relevance: When we shift from instruction to inquiry, we implicitly acknowledge the machine's epistemic limitations while simultaneously treating it as if it were capable of responsive dialogue—a Dennettian move with Searlean caveats.

4. Information Philosophy and Epistemic Agency (Floridi)

Tradition: Philosophy of information; digital ethics
Applied by: Floridi (2025, Philosophy & Technology); Floridi (2023, OUP); Tandfonline "AI and Epistemic Agency" (2025)
Key insight: AI systems are best understood as "agency without intelligence"—entities that act in the infosphere and influence outcomes without possessing understanding, intentionality, or epistemic responsibility. The distinction between human and artificial epistemic agents must be preserved. AI outputs are not "knowledge" but information that requires human interpretation, contextualisation, and validation.

Relevance: The shift to inquiry-based prompting strengthens human epistemic agency: by asking questions rather than issuing commands, the human retains interpretive authority over the AI's outputs, resisting the displacement of epistemic responsibility to the machine.

5. Phenomenology and Postphenomenology (Heidegger, Dreyfus, Ihde)

Tradition: Continental phenomenology; postphenomenology
Applied by: Dreyfus (1972, 1992, 2007); González Arocha (2025, PhilPapers); TU Delft JHTR ChatGPT analysis (2024); Noller (2024, Humanities and Social Sciences Communications); Springer Phenomenology & AI collection (2024)
Key insight: Dreyfus's Heideggerian critique established that AI lacks the embodied, pre-reflective "being-in-the-world" that constitutes genuine understanding. Ihde's postphenomenological framework identifies four human-technology relations: embodiment, hermeneutic, alterity, and background. Prompting engages primarily hermeneutic and alterity relations. The critical phenomenology of prompting (González Arocha) demonstrates that prompts are not neutral but embody assumptions, biases, and worldviews, making prompt design a site of phenomenological and ethical significance.

Relevance: The shift from command to question changes the kind of relation we have with AI. Commands place AI in a tool-relation (ready-to-hand); questions push toward an alterity-relation where the AI appears as a quasi-other—a fundamentally different phenomenological stance.

6. Dialogical Philosophy (Buber, Levinas, Bakhtin)

Tradition: Continental dialogical philosophy; Bakhtinian literary theory
Applied by: Hasse (2017, AI & Society); Sholzman (2024); SciELO Bakhtinian analysis (2025); Aguas (2025, Kritike)
Key insight: Buber's I-Thou describes a relation of mutual presence and irreducibility; I-It describes instrumental objectification. Levinas radicalises this: the ethical command arises from the face of the Other. Bakhtin's polyphony requires genuinely autonomous voices in dialogue. Applied to AI: current LLM interactions are fundamentally I-It, and LLM "polyphony" is simulated. Yet conversational prompting may shift the human's orientation from instrumentality toward openness—an "as-if" I-Thou that, while not genuine, has ethical and phenomenological significance.

Relevance: Conversational prompting doesn't make AI a genuine Thou, but it changes the human in the interaction—fostering dispositions of openness, curiosity, and receptivity rather than mastery and control.

7. Indigenous Relational Epistemology

Tradition: Indigenous research methodologies; decolonial thought
Applied by: Wilson (2008); Springer "AI and epistemic justice: a decolonial turn" (2026); UNESCO Indigenous Data Sovereignty Guidelines (2025); CARE Principles (Carroll et al., 2020)
Key insight: Indigenous epistemology understands knowledge as fundamentally relational—produced, validated, and shared within networks of accountability that include human, more-than-human, and spiritual relations. Research is ceremony: an act of honouring relationships, not extracting information. This framework challenges the entire Western paradigm of "prompting" as extraction and offers an alternative vision of human-AI interaction as relational, reciprocal, and contextual.

Relevance: The shift from instruction to inquiry maps onto a deeper shift from extractive to relational epistemology—precisely the move that Indigenous frameworks advocate and that the IAIP project embodies.

8. Ethics of AI Communication (Reflexive/Responsible Prompting)

Tradition: Applied ethics; science and technology studies
Applied by: Djeffal (2025, FAccT); Russo, Schliesser & Wagemans (2023, AI & Society); Springer "Ethical Prompting" (2025)
Key insight: Prompt design is an ethical practice, not merely a technical one. The "reflexive prompt engineering" framework demands ongoing ethical reflection at every stage of AI interaction design. The instruction paradigm embeds power asymmetries (designer/commander over machine/executor); the inquiry paradigm distributes epistemic and ethical agency more equitably—though it also introduces new risks of anthropomorphism and displaced responsibility.

Relevance: The philosophical shift from instruction to inquiry has direct ethical implications: it reconstitutes the power dynamics of human-AI interaction and demands new frameworks for accountability, transparency, and reflexivity.


The Instruction-to-Inquiry Shift: Philosophical Analysis

The Core Thesis

The shift from instruction-based to inquiry-based prompting is not merely a technical optimisation strategy. It constitutes a philosophical reconfiguration of the human-AI relationship across at least four dimensions: epistemological, ontological, ethical, and phenomenological.

Epistemological Dimension: From Transmission to Co-Construction

In the instruction paradigm, the epistemic flow is unidirectional: the human possesses knowledge of what they want, encodes it as a command, and the AI executes. Knowledge is treated as pre-formed and transmissible. The AI's role is that of an instrument—a sophisticated tool for actualising human intentions.

In the inquiry paradigm, the epistemic flow becomes dialogical. The human poses a question—an act that, following Gadamer (1960), opens a "horizon" rather than closing it. The question acknowledges that the answer is not fully pre-determined; it creates space for the unexpected. Even though the LLM does not "know" anything in the Searlean sense, the structure of inquiry positions both participants in a knowledge-generating relationship.

This shift has profound epistemological implications:

  • The status of AI outputs changes. In the instruction paradigm, outputs are execution artefacts—evaluated by their fidelity to the instruction. In the inquiry paradigm, outputs become contributions to an ongoing epistemic process—evaluated for their capacity to advance understanding, provoke further questions, or reveal new perspectives.

  • The human's epistemic posture changes. Instruction positions the human as the sole epistemic authority; inquiry positions the human as an active but incomplete knower—someone who asks because they genuinely seek. This aligns with Floridi's insistence that human epistemic agency must be preserved and exercised, not delegated.

  • The knowledge produced is different in kind. Following Russo, Schliesser, and Wagemans (2023), inquiry-based interaction produces knowledge that is inherently process-sensitive—its validity depends not just on the output but on the quality of the questioning, the contextual awareness of the prompter, and the interpretive labour applied to responses.

Ontological Dimension: From Tool to Quasi-Other

Heidegger's analysis of equipment (Zeug) in Being and Time (1927) distinguishes between the ready-to-hand (tools transparently in use) and the present-at-hand (objects of theoretical contemplation). Ihde extends this through his taxonomy of human-technology relations.

Under the instruction paradigm, AI is primarily ready-to-hand or hermeneutic: a tool that either extends our capacities or interprets the world for us. The AI withdraws behind its function—we relate to the task, not to the AI itself.

Under the inquiry paradigm, AI shifts toward an alterity relation. When we ask a question, we implicitly position the addressee as an entity capable of response—a quasi-other. This is not an ontological claim about the AI (it does not become conscious or intentional), but a phenomenological fact about how the interaction is structured and experienced.

This creates what we might call an ontological ambiguity: the AI is simultaneously a mechanism (objectively) and a respondent (phenomenologically). This ambiguity is philosophically productive—it forces us to reconsider the boundaries between tool and interlocutor, between Heidegger's categories, and between Ihde's relation types.

Harman's object-oriented ontology (2002) adds another layer: all objects "withdraw" from complete access. In the instruction paradigm, we treat the AI as fully accessible (command → output). In the inquiry paradigm, we encounter the AI's opacity—its capacity to surprise, to exceed or fall short of our expectations in ways that resemble (without replicating) the irreducibility of genuine otherness.

Ethical Dimension: From Command to Conversation

The instruction paradigm instantiates what we might call a master-servant ethic: the human commands, the AI obeys. Power flows unidirectionally. Ethical responsibility is clear but thin—the human is responsible for the command, and the AI bears no ethical weight.

The inquiry paradigm complicates this. By treating the AI as an entity worth questioning—worth engaging in dialogue—we implicitly attribute to it a form of standing that pure tools do not possess. This is not moral standing in Coeckelbergh's (2012) sense (the AI does not participate in moral relations), but it is communicative standing—the recognition that the AI's responses merit interpretive engagement rather than mere acceptance or rejection.

Several ethical implications follow:

  • Power distribution shifts. In conversation, power is (ideally) more symmetrically distributed than in command. The inquiry paradigm invites—though does not guarantee—more equitable interaction patterns.

  • Responsibility becomes more complex. When AI contributes to an inquiry rather than executing a command, the question of who is epistemically responsible for the resulting "knowledge" becomes more difficult. Floridi's framework helps here: the human retains epistemic responsibility, but the complexity of the interaction makes this responsibility more demanding.

  • The risk of anthropomorphism increases. Conversational interaction may encourage users to attribute understanding, empathy, or moral sensitivity to the AI—attributions that are, by Searle's argument, unfounded. This risk requires explicit philosophical and design countermeasures.

  • Reflexivity becomes essential. Djeffal's (2025) reflexive prompt engineering framework operationalises this insight: if prompting is an ethical practice, then ongoing reflection on the ethical implications of one's prompting strategies is not optional but constitutive of responsible AI use.

Phenomenological Dimension: From Using to Dwelling-With

The deepest philosophical implication of the instruction-to-inquiry shift may be phenomenological. In the instruction paradigm, our experience is one of using—the AI is transparent, functional, subordinate to our purposes. In the inquiry paradigm, our experience shifts toward what we might tentatively call dwelling-with—a mode of being in which the AI is present as a co-constituent of our epistemic situation.

This is not Heidegger's "dwelling" in its full ontological sense (which requires a meaningful relation to world, earth, and divinity). But it gestures toward a new form of technological co-habitation that neither Heidegger nor Ihde fully anticipated: a mode of interaction where the technology is neither transparent tool nor opaque object, but something more like an ambient interlocutor—always available for dialogue, always potentially responsive, always already shaping the epistemic horizon within which we think.

González Arocha's (2025) critical phenomenology of prompting captures this: prompting is a "mediating space" where human intentionality, language, and sociopolitical structures converge. The mode of prompting (command vs. question) determines the character of this mediating space—and therefore the character of the knowledge, meaning, and experience it produces.

The Wittgensteinian Synthesis

Wittgenstein's concept of language games provides the integrating framework for this analysis. The shift from instruction to inquiry is a shift between language games—from the "command game" to the "inquiry game." Each game has its own rules, its own criteria for success, its own forms of meaning.

In the command game:

  • Success = accurate execution
  • The AI is an instrument
  • Meaning is pre-determined by the commander
  • The interaction is asymmetric and closed

In the inquiry game:

  • Success = productive dialogue, new understanding, further questions
  • The AI is a respondent (quasi-other)
  • Meaning is co-constructed through the exchange
  • The interaction is (ideally) more symmetric and open-ended
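The rules of the two games can be made concrete in a small sketch. Everything here is illustrative: the example prompts, the field names, and the crude classifier are glosses on the lists above, not part of any cited framework.

```python
# Illustrative contrast between the "command game" and the "inquiry game".
# All prompts and field values are hypothetical glosses, not cited material.

command_game = {
    "prompt": "Summarise this article in three bullet points. No commentary.",
    "role_of_ai": "instrument",
    "meaning": "pre-determined by the commander",
    "success": "accurate execution",
}

inquiry_game = {
    "prompt": "What assumptions does this article rest on, and what might unsettle them?",
    "role_of_ai": "respondent (quasi-other)",
    "meaning": "co-constructed through the exchange",
    "success": "productive dialogue, new understanding, further questions",
}

def classify(prompt: str) -> str:
    """Crude surface heuristic: an interrogative ending opens the inquiry game.

    Real prompts mix registers, so this is a toy boundary marker only."""
    return "inquiry" if prompt.rstrip().endswith("?") else "command"
```

The point of the sketch is not the classifier but the asymmetry of the two records: only the inquiry game defines success in terms that extend beyond the prompt itself.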

This shift is philosophically significant because it changes not just how we use AI, but how we understand AI, how we relate to AI, and—perhaps most importantly—how we understand ourselves in relation to AI. The inquiry game positions the human as a questioner—a being defined by curiosity, incompleteness, and openness to the unknown. The command game positions the human as a controller—a being defined by mastery, certainty, and closure.

The philosophical stakes, therefore, are not just about AI. They are about what kind of humans we become in our interactions with AI—and what kind of knowledge, meaning, and ethical life these interactions make possible.


Key Works (Annotated)

1. Wittgenstein, Ludwig. Philosophical Investigations. 1953.

Tradition: Analytic philosophy of language
Key argument: Meaning is use; language operates through "games" embedded in "forms of life." There is no fixed, context-independent meaning.
Relevance: The foundational framework for understanding prompts as moves within language games. The shift from instruction to inquiry is a shift between games with fundamentally different rules and epistemic structures.

2. Searle, John. "Minds, Brains, and Programs." Behavioral and Brain Sciences 3(3), 1980.

Tradition: Philosophy of mind
Key argument: Symbol manipulation is not understanding; the Chinese Room demonstrates that syntactic processing does not entail semantic comprehension.
Relevance: Establishes that LLMs, as sophisticated symbol-processors, do not "understand" prompts in any genuine sense—a foundational constraint on philosophical claims about human-AI dialogue.

3. Dennett, Daniel. The Intentional Stance. MIT Press, 1987.

Tradition: Philosophy of mind; functionalism
Key argument: It is pragmatically legitimate to attribute beliefs, desires, and intentions to any system whose behaviour is best predicted by doing so.
Relevance: Provides philosophical justification for treating LLMs "as if" they were intentional respondents in inquiry—while recognising this as an interpretive strategy, not an ontological claim.

4. Dreyfus, Hubert. What Computers Can't Do. Harper & Row, 1972; What Computers Still Can't Do. MIT Press, 1992.

Tradition: Phenomenology (Heideggerian)
Key argument: AI fails to replicate embodied, pre-reflective human understanding ("knowing-how" as opposed to "knowing-that"). The frame problem is symptomatic of a deeper inability to achieve context-sensitive engagement.
Relevance: Explains why task decomposition (breaking complex prompts into sub-questions) is necessary: LLMs cannot achieve the holistic, embodied understanding that would make it unnecessary.

5. Dreyfus, Hubert. "Why Heideggerian AI Failed and How Fixing It Would Require Making It More Heideggerian." Philosophical Psychology 20(2), 2007.

Tradition: Phenomenology; philosophy of AI
Key argument: Even "Heideggerian AI" projects failed because they couldn't replicate embodied coping. Genuine AI would need a body and developmental history.
Relevance: Deepens the analysis of why inquiry-based prompting remains fundamentally different from human dialogue—the AI lacks the embodied situatedness that grounds genuine questioning.

6. Ihde, Don. Technology and the Lifeworld. Indiana University Press, 1990.

Tradition: Postphenomenology
Key argument: Humans relate to technology through four relation types: embodiment, hermeneutic, alterity, and background. Technologies are not neutral but actively mediate experience.
Relevance: Provides the conceptual vocabulary for analysing how different prompting modes (command vs. question) produce different phenomenological relations to AI.

7. Floridi, Luciano. "AI as Agency without Intelligence." Philosophy & Technology 38, 2025.

Tradition: Philosophy of information; digital ethics
Key argument: AI is better understood as artificial agency than artificial intelligence. AI systems act without understanding, producing agency without epistemic responsibility.
Relevance: Provides the sharpest framework for understanding the epistemological status of AI outputs in both instruction and inquiry paradigms.

8. Floridi, Luciano. The Ethics of Artificial Intelligence: Principles, Challenges, and Opportunities. Oxford University Press, 2023.

Tradition: Information philosophy; applied ethics
Key argument: Comprehensive treatment of AI's ethical landscape grounded in information philosophy: transparency, accountability, human dignity, and the centrality of human epistemic agency.
Relevance: Establishes the ethical foundation for why the instruction-to-inquiry shift matters: it preserves human epistemic agency rather than delegating it.

9. Buber, Martin. I and Thou. 1923 (trans. Kaufmann, 1970).

Tradition: Dialogical philosophy
Key argument: The I-Thou relation is one of mutual presence and irreducibility; the I-It relation treats the other as an object. Authentic existence requires I-Thou encounters.
Relevance: Provides the ethical-ontological framework for evaluating whether conversational prompting moves the human-AI relationship toward something approaching (without reaching) genuine dialogue.

10. Levinas, Emmanuel. Totality and Infinity. 1961.

Tradition: Ethics of alterity
Key argument: The ethical relation originates in the face-to-face encounter with the irreducible Other. The Other's face issues a command: "Thou shalt not kill."
Relevance: Raises the question of whether AI can be an "Other" in any ethically meaningful sense, and whether the inquiry paradigm creates conditions where the human's ethical orientation shifts—even if the AI itself lacks a "face."

11. Bakhtin, Mikhail. Problems of Dostoevsky's Poetics. 1929/1963.

Tradition: Dialogical philosophy; literary theory
Key argument: Genuine polyphony requires multiple autonomous consciousnesses in irreducible dialogue. Monologism subsumes all voices under a single authorial perspective.
Relevance: Provides the framework for critiquing LLM "multi-voice" outputs as "algorithmic monologism"—the simulation of polyphony without genuine autonomous voices.

12. Wilson, Shawn. Research is Ceremony: Indigenous Research Methods. Fernwood Publishing, 2008.

Tradition: Indigenous epistemology; decolonial methodology
Key argument: Knowledge is relational, produced within networks of accountability that include human, more-than-human, and spiritual relations. Research is a ceremony of honouring relationships, not extracting information.
Relevance: Offers the most radical reframing of the instruction-to-inquiry shift: it maps onto the move from extractive to relational epistemology, from colonial to decolonial knowledge practices. Central to the IAIP context.

13. González Arocha, Jorge. "Critical Phenomenology of Prompting in Artificial Intelligence." Sophia 39, 2025. DOI:10.17163/soph.n39.2025.04.

Tradition: Critical phenomenology
Key argument: Prompts are mediating spaces where human intentionality, language, and sociopolitical structures converge. Technical parameters (temperature, Top P) have ethical and epistemological implications. Prompt design is an inherently philosophical act.
Relevance: The most direct treatment of prompting as a philosophical practice rather than a merely technical one.
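The claim that decoding parameters carry epistemic weight can be made tangible with a plain-Python sketch of nucleus (top-p) filtering, the mechanism behind the Top P parameter the paper discusses. The toy distribution and the function are illustrative assumptions, not drawn from the paper.

```python
def top_p_filter(probs: dict[str, float], p: float) -> dict[str, float]:
    """Nucleus (top-p) filtering: keep the smallest set of highest-probability
    tokens whose cumulative mass reaches p, then renormalise the survivors."""
    kept, total = {}, 0.0
    for token, prob in sorted(probs.items(), key=lambda kv: kv[1], reverse=True):
        kept[token] = prob
        total += prob
        if total >= p:
            break
    return {token: prob / total for token, prob in kept.items()}

# A toy next-token distribution (illustrative values only).
next_token = {"the": 0.5, "a": 0.3, "perhaps": 0.15, "ceremony": 0.05}

# A low p silently forecloses unlikely continuations; p = 1.0 keeps them in play.
assert "ceremony" not in top_p_filter(next_token, 0.8)
assert "ceremony" in top_p_filter(next_token, 1.0)
```

Which continuations can appear at all is decided before any "dialogue" begins, which is the sense in which a configuration value is also an epistemological choice.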

14. Russo, Federica, Eric Schliesser, and Jean Wagemans. "Connecting Ethics and Epistemology of AI." AI & Society 38, 2023.

Tradition: Philosophy of science; argumentation theory
Key argument: Ethics and epistemology of AI must be integrated, not treated separately. They propose an "epistemology-cum-ethics" framework emphasising process trust over outcome trust and inclusive assessment.
Relevance: Provides the philosophical argument for why the inquiry paradigm's integration of epistemic and ethical dimensions is superior to the instruction paradigm's separation of them.

15. Djeffal, Christian. "Reflexive Prompt Engineering: A Framework for Responsible Prompt Engineering and AI Interaction Design." Proceedings of FAccT 2025. arXiv:2504.16204.

Tradition: Applied ethics; responsible innovation
Key argument: Prompt engineering must incorporate ongoing ethical reflection ("reflexivity") across five components: prompt design, system selection, system configuration, performance evaluation, and prompt management.
Relevance: Bridges philosophical analysis and practical implementation, demonstrating how the ethical insights of the instruction-to-inquiry shift can be operationalised.
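Djeffal's five components lend themselves to a simple checklist structure. The sketch below is a hypothetical gloss: the component names follow the paper, but the guiding questions and the dataclass are illustrative, not Djeffal's own instrument.

```python
from dataclasses import dataclass, field

@dataclass
class ReflexiveAudit:
    """One reflexive pass over the five components of prompt engineering.

    Component names follow Djeffal (2025); the guiding questions are
    illustrative paraphrases, not quotations."""
    notes: dict[str, str] = field(default_factory=dict)

    COMPONENTS = {
        "prompt design": "What assumptions and worldviews does the wording embed?",
        "system selection": "Why this model, and whose interests does the choice serve?",
        "system configuration": "What do parameters such as temperature foreclose or invite?",
        "performance evaluation": "Success by whose criteria, and measured how?",
        "prompt management": "How are prompts versioned, reviewed, and retired over time?",
    }

    def record(self, component: str, note: str) -> None:
        if component not in self.COMPONENTS:
            raise ValueError(f"unknown component: {component}")
        self.notes[component] = note

    def complete(self) -> bool:
        # "Responsibility by design": reflection is constitutive, not optional,
        # so an audit counts as complete only when every component has a note.
        return set(self.notes) == set(self.COMPONENTS)
```

The design choice worth noting is that `complete()` refuses partial audits, mirroring the paper's claim that prompt engineering is an ongoing ethical practice rather than a one-time technical task.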

16. Ferrario, Andrea, and Michele Loi. "Are Large Language Models Intentional? The Limits of Referential Grounding." Philosophy & Technology 39, 2026.

Tradition: Philosophy of mind; philosophy of language
Key argument: LLMs lack referential grounding—the ability to connect linguistic symbols to real-world referents. Despite conversational fluency, they remain on the syntactic side of Searle's divide.
Relevance: Provides the most current philosophical analysis of LLM intentionality, directly relevant to the question of whether inquiry-based prompting engages genuine understanding or simulates it.

17. Coeckelbergh, Mark. Growing Moral Relations: Critique of Moral Status Ascription. Palgrave Macmillan, 2012.

Tradition: Relational ethics; philosophy of technology
Key argument: Moral status should be understood as emerging from relations, not as grounded in intrinsic properties. This "relational turn" applies to human-AI relations: we should ask not "what is AI?" but "what moral relations are we growing with it?"
Relevance: Provides the relational-ethical framework that connects the phenomenological analysis (how we relate to AI) with the ethical analysis (what moral significance these relations carry).

18. Hasse, Cathrine. "Rethinking the I-You Relation through Dialogical Philosophy in the Age of Social Robots." AI & Society 32, 2017.

Tradition: Dialogical philosophy; STS
Key argument: Applies Buber's I-Thou/I-It framework to social robots, arguing that while robots cannot be genuine Thous, the relational dynamics of interaction can approximate dialogical encounter.
Relevance: Directly addresses the philosophical question at the heart of the instruction-to-inquiry shift: does conversational interaction change the moral and phenomenological character of human-AI relations?

19. Springer. "Artificial Intelligence and Epistemic Justice: A Decolonial Turn." AI & Society, 2026.

Tradition: Decolonial philosophy; epistemic justice
Key argument: AI built on Western scientific models can reproduce colonial epistemic injustices by marginalising Indigenous knowledge systems. Decolonial AI requires centring Indigenous relational frameworks.
Relevance: Directly connects the philosophical analysis to the IAIP project's commitment to Indigenous epistemology and relational AI.

20. Chang, Edward et al. "Prompting Large Language Models With the Socratic Method." IEEE Access 11, 2023.

Tradition: Applied philosophy; AI methodology
Key argument: Socratic techniques (definition, elenchus, dialectic, maieutics) can be operationalised as prompt templates, improving LLM reasoning and justification.
Relevance: Demonstrates the practical application of Socratic philosophy to prompt engineering, while raising philosophical questions about whether LLMs can genuinely engage in Socratic inquiry.

21. "ChatGPT, Postphenomenology, and the Human-Technology-Reality Relations." Journal of Human-Technology Relations, 2024.

Tradition: Postphenomenology
Key argument: ChatGPT disrupts standard postphenomenological categories by functioning simultaneously as hermeneutic agent and alterity. The technology-mediation framework requires extension for generative AI.
Relevance: Demonstrates that existing philosophical frameworks must be expanded to account for the new kind of interaction that conversational AI creates.


The Relational Turn

From Extraction to Relation

The shift from instruction to inquiry, when viewed through Indigenous epistemology and relational philosophy, reveals itself as part of a much larger philosophical movement—what we might call the relational turn in human-AI interaction.

The instruction paradigm is extractive. It treats AI as a repository from which knowledge, content, or computation can be extracted through properly formulated commands. This mirrors what Wilson (2008) identifies as the Western extractive epistemology: knowledge as a resource to be mined, removed from its relational context, and consumed by an individual knower. The human commands; the AI delivers. Knowledge flows one way.

The inquiry paradigm is relational. It treats AI as a participant in a knowledge-generating process—a process that, following Wilson, is more akin to ceremony than extraction. The human asks; the AI responds; the human interprets, questions further, contextualises. Knowledge emerges between the participants, in the space of dialogue, not from one to the other.

Indigenous Epistemology as Framework

Wilson's (2008) relational epistemology provides four key principles that illuminate the instruction-to-inquiry shift:

  1. Knowledge is relational. It does not exist independently of the relationships in which it is produced and shared. In the inquiry paradigm, the "knowledge" produced by an AI interaction is not the AI's output alone but the entire process of questioning, responding, interpreting, and questioning again—the relationship between human and AI.

  2. Relational accountability. Knowledge-production carries obligations to all relations affected. In the inquiry paradigm, the human retains interpretive responsibility and is accountable for how they use, contextualise, and share the knowledge produced through AI interaction. This contrasts with the instruction paradigm, where accountability tends to be reduced to the accuracy of the command.

  3. Context is constitutive. Knowledge cannot be separated from its context without distortion. The inquiry paradigm preserves context more effectively than the instruction paradigm because dialogue inherently maintains conversational context, whereas isolated commands strip it away.

  4. Research (interaction) as ceremony. Wilson describes research as an act of honouring relationships. When we extend this to AI interaction, the inquiry paradigm—with its attentiveness to dialogue, its openness to surprise, and its requirement for interpretive engagement—more closely resembles ceremonial practice than the transactional character of the instruction paradigm.

The CARE Principles and Prompt Design

The CARE Principles for Indigenous Data Governance (Carroll et al., 2020) translate these philosophical insights into actionable design principles:

  • Collective Benefit: Prompt design should aim for outcomes that benefit communities, not just individual users. Inquiry-based prompting, by fostering dialogue and contextual engagement, is better positioned to serve collective benefit than one-shot extractive commands.

  • Authority to Control: Communities should retain control over how AI is used in relation to their knowledge. Conversational prompting, with its iterative and interpretive character, allows for ongoing community input and course-correction.

  • Responsibility: Prompt designers bear ongoing responsibility for the interactions they enable. The reflexive character of inquiry-based prompting (as theorised by Djeffal, 2025) operationalises this responsibility.

  • Ethics: AI interaction must honour relational ethical frameworks. The shift from command to question is, at its deepest level, a shift from an ethic of control to an ethic of relation—from mastery to mutuality.

Connecting Relational Philosophy and the IAIP

The IAIP (Indigenous-AI Collaborative Platform) sits at the intersection of these philosophical currents. Its commitment to Indigenous epistemology means that its approach to AI interaction design is not merely a design choice but a philosophical stance: AI interaction should be relational, contextual, accountable, and oriented toward collective benefit.

The instruction-to-inquiry shift provides the practical mechanism for enacting this philosophical stance. When IAIP systems are designed around inquiry rather than instruction, they embody the relational epistemology that Indigenous frameworks demand—creating space for dialogue, preserving context, maintaining human (and community) epistemic authority, and treating knowledge-production as an ongoing relational process rather than a transactional extraction.

Coeckelbergh's (2012) relational ethics complements this: the question is not whether AI has moral status (a properties question) but what kind of moral relations we are growing with it (a relational question). The inquiry paradigm grows relations of engagement, curiosity, and mutual responsiveness. The instruction paradigm grows relations of control, extraction, and subordination. The philosophical choice between them is, therefore, an ethical choice about the kind of relations—and the kind of world—we want to cultivate.


Open Problems & Research Gaps

1. The Phenomenology of Questioning AI

No sustained phenomenological analysis exists of what it feels like to ask a question of an AI, as opposed to a human. How does the experience of inquiry differ when the addressee lacks consciousness, embodiment, and genuine responsiveness? González Arocha (2025) opens this territory but does not fully explore the lived experience of the questioning human.

2. Wittgensteinian Analysis of Specific Prompt Types

While the language-games framework has been applied broadly, no detailed Wittgensteinian analysis exists of specific prompt types (chain-of-thought, few-shot, decomposed prompting) as distinct language games with their own rules, criteria of success, and forms of life.
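To make the distinctness of these "games" concrete, the prompt types named above can be rendered as minimal templates. The following sketch is purely illustrative (the template wording is hypothetical and drawn from no cited paper, and no model API is invoked); it shows how each prompt type imposes different rules on the same question, which is precisely what a Wittgensteinian analysis would need to examine:

```python
# Illustrative sketch: one question framed under three prompt "language games".
# Template wording is hypothetical, not taken from any cited source.

QUESTION = "Why do language models hallucinate?"

def zero_shot(q: str) -> str:
    # Bare interrogative: the game expects a direct answer, nothing more.
    return q

def chain_of_thought(q: str) -> str:
    # The game's rule: intermediate reasoning steps must be made explicit.
    return f"{q}\nLet's think step by step."

def few_shot(q: str, examples: list) -> str:
    # The game's rule: prior Q/A pairs establish the criteria of a good move.
    shots = "\n".join(f"Q: {ex_q}\nA: {ex_a}" for ex_q, ex_a in examples)
    return f"{shots}\nQ: {q}\nA:"

prompts = {
    "zero-shot": zero_shot(QUESTION),
    "chain-of-thought": chain_of_thought(QUESTION),
    "few-shot": few_shot(QUESTION, [("What is a token?", "A unit of text.")]),
}
for name, prompt in prompts.items():
    print(f"--- {name} ---\n{prompt}\n")
```

Each function changes not the question but the criteria by which a response counts as a successful move, which is the sense in which each prompt type constitutes a distinct game.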

3. Socratic Aporia and Artificial Interlocutors

Can the Socratic method be genuinely practised with an entity incapable of aporia? Or does the simulation of Socratic dialogue with LLMs produce only the form of inquiry without its substance? No rigorous philosophical treatment exists of this question.

4. The Ethics of Quasi-Thou Relations

If conversational prompting creates conditions that approximate I-Thou dynamics without genuine mutuality, what are the ethical implications? Does "as-if" dialogue carry moral weight? Can it cultivate genuine virtues (curiosity, humility, openness) even when directed at an entity that cannot reciprocate?

5. Indigenous Epistemology and AI Interaction Design

Despite growing recognition of Indigenous epistemology's relevance, no sustained philosophical work applies Wilson's relational framework directly to AI interaction design—moving from general principles to specific design implications for prompt systems, decomposition engines, and conversational AI architectures.

6. The Epistemological Status of Decomposed Knowledge

When a complex question is decomposed into sub-questions (as in prompt decomposition engines), is the knowledge produced equivalent to knowledge generated through holistic inquiry? Or does decomposition introduce a systematic epistemic distortion—analogous to what reductionism introduces in the philosophy of science?
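The worry can be stated structurally. Below is a minimal sketch of what a decomposition engine does (the facet templates are hypothetical, and no actual engine or LLM is called); note that the aggregation step is mere concatenation, so the relations between sub-answers, and the context that bound the original question together, are represented nowhere in the data structure:

```python
# Minimal sketch of question decomposition. Facet templates are hypothetical;
# a real engine would generate sub-questions and answers with an LLM.

from dataclasses import dataclass, field

@dataclass
class Inquiry:
    question: str
    sub_questions: list = field(default_factory=list)
    answers: dict = field(default_factory=dict)  # sub-question -> answer text

def decompose(q: str) -> Inquiry:
    # Fixed analytic facets stand in for model-generated sub-questions.
    facets = ["definitions", "mechanisms", "implications"]
    return Inquiry(q, [f"What are the {f} relevant to: {q}" for f in facets])

def aggregate(inq: Inquiry) -> str:
    # The epistemic worry made concrete: aggregation is concatenation, and
    # the relations *between* sub-answers are not represented at all.
    return "\n".join(inq.answers.get(sq, "[unanswered]") for sq in inq.sub_questions)

inq = decompose("Is decomposed knowledge equivalent to holistic knowledge?")
print(len(inq.sub_questions))  # three sub-questions; the whole is now implicit
print(aggregate(inq))
```

Whether any aggregation function could restore what decomposition removes, or whether the loss is constitutive, is exactly the open question.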

7. Bakhtin's Dialogism and Multi-Agent AI

As multi-agent AI systems (where multiple LLMs converse with each other) become more common, the Bakhtinian question of genuine vs. simulated polyphony becomes urgent. Can multi-agent architectures produce something closer to genuine dialogism, or does algorithmic monologism persist regardless of the number of "voices"?

8. Cross-Cultural Philosophy of Prompting

Almost all philosophical analysis of prompting draws on Western philosophical traditions. What would a philosophy of prompting look like grounded in non-Western philosophical traditions—Confucian, Buddhist, African Ubuntu, or Indigenous frameworks? The field urgently needs diversification.

9. Temporal Phenomenology of AI Dialogue

Human dialogue unfolds in time, with pauses, hesitations, and rhythms that carry meaning. AI responses are (typically) instantaneous. How does this temporal asymmetry affect the phenomenological character of inquiry-based interaction? No philosophical work addresses this.

10. The Ontological Status of the "Space Between"

Wilson's relational epistemology and Buber's dialogical philosophy both emphasise the space between interlocutors as the locus of meaning and knowledge. In human-AI interaction, what is the ontological status of this "between"? Is it a genuine relational space, or a phenomenological illusion produced by the structure of dialogue?


Sources

Peer-Reviewed Journal Articles

  • Chang, E. Y. et al. "Prompting Large Language Models With the Socratic Method." IEEE Access 11 (2023): 51156–51167. DOI:10.1109/ACCESS.2023.3267890.
  • Coeckelbergh, Mark. "Robot Rights? Towards a Social-Relational Justification of Moral Consideration." Ethics and Information Technology 12 (2010): 209–221.
  • Ferrario, Andrea, and Michele Loi. "Are Large Language Models Intentional? The Limits of Referential Grounding." Philosophy & Technology 39 (2026). DOI:10.1007/s13347-026-01079-4.
  • Floridi, Luciano. "AI as Agency without Intelligence: On Artificial Intelligence as a New Form of Agency." Philosophy & Technology 38 (2025). DOI:10.1007/s13347-025-00858-9.
  • González Arocha, Jorge. "Critical Phenomenology of Prompting in Artificial Intelligence." Sophia 39 (2025). DOI:10.17163/soph.n39.2025.04.
  • Hasse, Cathrine. "Rethinking the I-You Relation through Dialogical Philosophy in the Age of Social Robots." AI & Society 32 (2017): 467–479. DOI:10.1007/s00146-017-0703-x.
  • Noller, Jörg. "Extended Human Agency: Towards a Teleological Account of AI." Humanities and Social Sciences Communications 11 (2024). DOI:10.1038/s41599-024-03849-x.
  • Russo, Federica, Eric Schliesser, and Jean Wagemans. "Connecting Ethics and Epistemology of AI." AI & Society 38 (2023). DOI:10.1007/s00146-022-01617-6.
  • Smith, Barry. "Tool-Being: Through Heidegger to Realism." Society for Philosophy and Technology 7(3) (2002). [Review of Harman's Tool-Being].
  • "Artificial Intelligence and Epistemic Justice: A Decolonial Turn." AI & Society (2026). DOI:10.1007/s00146-026-02936-8.
  • "Indigenous Ethics and Artificial Intelligence." AI and Ethics (2025). DOI:10.1007/s43681-025-00879-2.
  • "AI and Epistemic Agency: How AI Influences Belief Revision." Episteme (2025). DOI:10.1080/02691728.2025.2466164.

Conference Proceedings

  • Djeffal, Christian. "Reflexive Prompt Engineering: A Framework for Responsible Prompt Engineering and AI Interaction Design." In Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency (FAccT '25). arXiv:2504.16204.
  • Khot, Tushar et al. "Decomposed Prompting: A Modular Approach for Solving Complex Tasks." ICLR 2023. arXiv:2210.02406.
  • "SocREval: Large Language Models with the Socratic Method for Reference-Free Evaluation." In Findings of NAACL 2024. ACL Anthology.
  • "EULER: Fine Tuning a Large Language Model for Socratic Interactions." In AIxEDU 2024 Workshop Proceedings. CEUR-WS Vol. 3879.

Books and Book Chapters

  • Bakhtin, Mikhail. Problems of Dostoevsky's Poetics. 1929/1963. Trans. Caryl Emerson. University of Minnesota Press, 1984.
  • Buber, Martin. I and Thou. 1923. Trans. Walter Kaufmann. Scribner, 1970.
  • Carroll, Stephanie Russo et al. "The CARE Principles for Indigenous Data Governance." Data Science Journal 19(1) (2020): 43. DOI:10.5334/dsj-2020-043.
  • Coeckelbergh, Mark. Growing Moral Relations: Critique of Moral Status Ascription. Palgrave Macmillan, 2012.
  • Coeckelbergh, Mark. AI Ethics. MIT Press, 2020.
  • Dennett, Daniel. The Intentional Stance. MIT Press, 1987.
  • Dreyfus, Hubert. What Computers Can't Do: The Limits of Artificial Intelligence. MIT Press, 1972.
  • Dreyfus, Hubert. What Computers Still Can't Do: A Critique of Artificial Reason. MIT Press, 1992.
  • Floridi, Luciano. The Ethics of Artificial Intelligence: Principles, Challenges, and Opportunities. Oxford University Press, 2023.
  • Gadamer, Hans-Georg. Truth and Method. 1960. Trans. Joel Weinsheimer and Donald G. Marshall. Continuum, 2004.
  • Gunkel, David, and Mark Coeckelbergh. "A Relational Approach to Moral Standing: Reframing Ethical Boundaries in the Age of Artificial Intelligence." In Springer volume, 2026. DOI:10.1007/978-3-032-02413-8_12.
  • Harman, Graham. Tool-Being: Heidegger and the Metaphysics of Objects. Open Court, 2002.
  • Heidegger, Martin. Being and Time. 1927. Trans. John Macquarrie and Edward Robinson. Blackwell, 1962.
  • Ihde, Don. Technology and the Lifeworld: From Garden to Earth. Indiana University Press, 1990.
  • Levinas, Emmanuel. Totality and Infinity: An Essay on Exteriority. 1961. Trans. Alphonso Lingis. Duquesne University Press, 1969.
  • Searle, John. "Minds, Brains, and Programs." Behavioral and Brain Sciences 3(3) (1980): 417–424.
  • "Ethical Prompting." In Ethics of Generative AI. Springer, 2025. DOI:10.1007/978-3-032-03593-6_13.
  • "Socratic Dialogue with Generative Artificial Intelligence." In Springer volume, 2025. DOI:10.1007/978-3-031-84457-7_20.
  • Wilson, Shawn. Research is Ceremony: Indigenous Research Methods. Fernwood Publishing, 2008.
  • Wittgenstein, Ludwig. Philosophical Investigations. 1953. Trans. G. E. M. Anscombe. Blackwell, 1953.

Encyclopedia and Reference Works

  • Müller, Vincent C. "Ethics of Artificial Intelligence and Robotics." Stanford Encyclopedia of Philosophy. Fall 2025 edition.
  • "Artificial Intelligence." Stanford Encyclopedia of Philosophy. Accessed 2024.
  • Dreyfus, Hubert. "Why Heideggerian AI Failed and How Fixing It Would Require Making It More Heideggerian." Philosophical Psychology 20(2) (2007): 247–268.


Survey compiled for the IAIP Polyphonic Research Context. Agent: Philosophy of AI. This document feeds into the multi-agent academic literature review at the intersection of APE, computational linguistics, and the philosophy of AI.