

Philosophy of AI
Annotated Research Bibliographies
Last Update: August 28, 2025
This section is very incomplete; it should be more robust by mid-October 2025.
Ontology of AI
AI as artifact vs. quasi-agent; personhood status; anthropomorphism and simulated mindedness.
- Gunkel, David J. 2012. The Machine Question: Critical Perspectives on AI, Robots, and Ethics. MIT Press.
  Frames whether and how machines could be moral patients/agents, mapping the conceptual terrain.
- Gunkel, David J. 2018. Robot Rights. MIT Press.
  Systematic treatment of whether artifacts can be rights-holders and on what grounds.
- Searle, John R. 1980. “Minds, Brains, and Programs.” Behavioral and Brain Sciences 3(3): 417–457. DOI: 10.1017/S0140525X00005756
  The Chinese Room argument against understanding/semantics in purely symbolic machines.
- Floridi, Luciano. 2011. The Philosophy of Information. Oxford University Press.
  Develops an informational ontology that underwrites how artificial agents can be analyzed as infospheric entities.
- Coeckelbergh, Mark. 2022. The Political Philosophy of AI. Polity.
  Recasts AI’s “being” via its institutional embedding and political status rather than intrinsic mental properties.
- Bryson, Joanna J. 2018. “Patiency Is Not a Virtue: The Design of Intelligent Systems and Systems of Ethics.” Ethics and Information Technology 20(1): 15–26. DOI: 10.1007/s10676-018-9448-6
  Argues against granting moral patiency/personhood to AI while demanding human accountability.
- Gunkel, David J., and Mark Coeckelbergh, eds. 2021. Routledge Handbook of Philosophy of Information. Routledge.
  Wide-ranging chapters on informational objects, agency, and artifact ontology.
- Schwitzgebel, Eric, and Mara Garza. 2015. “A Defense of the Rights of Artificial Intelligences.” Midwest Studies in Philosophy 39(1): 98–119.
  Conditional case for future AI rights given certain functional/phenomenal thresholds.
- Gabriel, Iason. 2020. “Artificial Intelligence, Values, and Alignment.” Minds and Machines 30(3): 411–437.
  Ontology bleeds into normativity: what kind of thing AI is determines what aligning it could mean.
- Schaefer, Robert, and Sven Nyholm, eds. 2024. The Ethics and Ontology of AI Systems.
  Recent volume centering the metaphysics of artificial agents, moral status, and system boundaries.
Epistemology and AI
Machine knowledge vs. human knowledge; explainability/interpretability; epistemic risks (bias, hallucination, misinfo).
- Mitchell, Melanie. 2023. Artificial Intelligence: A Guide for Thinking Humans. Rev. ed.
  Accessible synthesis on what current systems actually “know” and where they fail.
- Weidinger, Laura, et al. 2022. “Taxonomy of Risks Posed by Language Models.” arXiv:2112.04359v4.
  Comprehensive map of epistemic and social risks including hallucination and mis/disinformation.
- Jacovi, Alon, and Yoav Goldberg. 2020. “Towards Faithfully Interpretable NLP Systems.” arXiv:2004.03685.
  Distinguishes faithful explanations from merely plausible rationalizations.
- Bommasani, Rishi, et al. 2021. “On the Opportunities and Risks of Foundation Models.” Stanford CRFM. arXiv:2108.07258.
  Field-defining synthesis of capabilities, generalization, and emergent behaviors.
- Chiang, Ted. 2023. “ChatGPT Is a Blurry JPEG of the Web.” The New Yorker.
  Popular but incisive epistemic metaphor for statistical compression and error.
- Linardatos, Pantelis, Vasilis Papastefanopoulos, and Sotiris Kotsiantis. 2021. “A Survey of Explainable AI.” Information 12(3): 96.
  Broad overview of XAI methods and evaluation issues across modalities.
- Elish, Madeleine Clare. 2019. “Moral Crumple Zones.” Engaging Science, Technology, and Society 5: 40–48. DOI: 10.17351/ests2019.277
  Shows how accountability collapses onto human operators when AI systems err, masking epistemic opacity.
AI and Human Flourishing
1. AI in healthcare, education, spiritual formation
2. AI as collaborator vs. competitor
3. The “alignment problem” and value learning
Aesthetics and AI
1. AI-generated art, music, and literature
2. Creativity, authorship, and originality
3. The aesthetic value of machine-generated works
Consciousness and AI
1. Strong AI and the possibility of artificial consciousness
2. The “hard problem” applied to AI
3. Functionalism, panpsychism, and dualist critiques
4. Phenomenal consciousness vs. access consciousness in machines
Philosophy of Mind and AI
1. Mind–body problem and implications for AI
2. Computationalism and connectionism
3. Intentionality and representation in AI systems
4. Mental causation and artificial agents
Phenomenology and AI
1. AI through Merleau-Ponty, Heidegger, Ihde
2. Embodied perception and AI design
3. Human–AI interaction as lived experience
Embodiment and AI
1. Physical instantiation of intelligence
2. The role of the body in cognition (embodied cognition theories)
3. Robotics and sensory-motor grounding
Future Directions
Speculative Futures
AGI and superintelligence scenarios
Human–AI symbiosis
Collapse and resilience planning
Emerging Philosophical Questions
The metaphysics of synthetic minds
AI in moral reasoning
Rethinking human identity in light of machine intelligence

Can AI Think?
Research Bibliography
Updated frequently. Last Update: August 27, 2025
Can Large Language Models Think, Reason, Believe?
- Dziri, N., Lu, X., Sclar, M., Li, X. L., Jiang, L., Lin, B. Y., West, P., Bhagavatula, C., Le Bras, R., Hwang, J. D., Sanyal, S., Welleck, S., Ren, X., Ettinger, A., Harchaoui, Z., & Choi, Y. (2023). Faith and fate: Limits of transformers on compositionality. Advances in Neural Information Processing Systems (NeurIPS 2023). https://arxiv.org/abs/2305.18654
- Hong, P., Majumder, N., Ghosal, D., Aditya, S., Mihalcea, R., & Poria, S. (2024). Evaluating LLMs’ mathematical and coding competency through ontology-guided interventions. arXiv. https://arxiv.org/abs/2401.09395
- Jiang, B., Xie, Y., Hao, Z., Wang, X., Mallick, T., Su, W. J., Taylor, C. J., & Roth, D. (2024). A peek into token bias: Large language models are not yet genuine reasoners. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing (pp. 4722–4756). Association for Computational Linguistics. https://doi.org/10.18653/v1/2024.emnlp-main.272
- Kambhampati, S. (2024). Can large language models reason and plan? arXiv. https://arxiv.org/abs/2403.04121
- Lewis, M., & Mitchell, M. (2024). Using counterfactual tasks to evaluate the generality of analogical reasoning in large language models. arXiv. https://doi.org/10.48550/arXiv.2402.08955
- McCoy, R. T., Yao, S., Friedman, D., Hardy, M. D., & Griffiths, T. L. (2024). Embers of autoregression show how large language models are shaped by the problem they are trained to solve. Proceedings of the National Academy of Sciences, 121(41), e2322420121. https://doi.org/10.1073/pnas.2322420121
- Mirzadeh, S. I., Alizadeh, K., Shahrokhi, H., Tuzel, O., Bengio, S., & Farajtabar, M. (2025). GSM-Symbolic: Understanding the limitations of mathematical reasoning in large language models. In International Conference on Learning Representations (ICLR 2025). https://openreview.net/forum?id=AjXkRZIvjB
- Mondorf, P., & Plank, B. (2024). Beyond accuracy: Evaluating the reasoning behavior of large language models—A survey. arXiv. https://arxiv.org/abs/2404.01869
- Nezhurina, M., Cipolina-Kun, A., Cherti, M., Borovykh, A., Oseledets, I., & Jitsev, A. (2024). Alice in Wonderland: Simple tasks showing complete reasoning breakdown in state-of-the-art large language models. arXiv. https://arxiv.org/abs/2406.02061
- Prabhakar, A., Griffiths, T. L., & McCoy, R. T. (2024). Deciphering the factors influencing the efficacy of chain-of-thought: Probability, memorization, and noisy reasoning. In Findings of the Association for Computational Linguistics: EMNLP 2024 (pp. 3710–3724). Association for Computational Linguistics. https://aclanthology.org/2024.findings-emnlp.212
- Qiu, L., Jiang, L., Lu, X., Sclar, M., Pyatkin, V., Bhagavatula, C., Wang, B., Kim, Y., Choi, Y., Dziri, N., & Ren, X. (2024). Phenomenal yet puzzling: Testing inductive reasoning capabilities of language models with hypothesis refinement. In International Conference on Learning Representations (ICLR 2024).
- Srivastava, S., Annarose, M. B., Anto, P. V., Menon, S., Sukumar, A., Samod, A., Philipose, A., Prince, S., & Thomas, S. (2024). Functional benchmarks for robust evaluation of reasoning performance, and the reasoning gap. arXiv. https://arxiv.org/abs/2402.19450
- Wu, Z., Qiu, L., Ross, A., Akyürek, E., Chen, B., Wang, B., Kim, N., Andreas, J., & Kim, Y. (2024). Reasoning or reciting? Exploring the capabilities and limitations of language models through counterfactual tasks. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics (NAACL): Human Language Technologies (pp. 1819–1862). Association for Computational Linguistics.
- Yan, J., Wang, C., Huang, J., & Zhang, W. (2024). Do large language models understand logic or just mimick context? arXiv.
