
Last Update: September 9, 2025

This page is a work in progress and will be updated frequently.

Philosophy of Technology Logo

Influential Books
Philosophy and AI 

AI, Consciousness, and Personhood

Machines_Who.jpg

An older but important book. Tells the story of AI’s origins as a tale less about machines than about us—our ambition, our hubris, and our restless hope that thought itself could be made.

Digital_Mind.jpg

A sweeping exploration of how biological and technological systems process information, asking whether the rise of “digital minds” forces us to rethink what counts as intelligence, consciousness, and personhood.

 

From_Deep.jpg

A philosophically ambitious attempt to show how deep-learning architectures might embody the classical faculties of perception, memory, imagination, and attention, and raise the question of whether machines can ever approach rationality.

Create_Mind.jpg

Claims the key to building intelligent machines lies in reverse-engineering the brain’s pattern-recognition architecture, offering a bold but reductionist blueprint for recreating mind in silicon.

 

Snake_Oil.jpg

A very good book exposing which AI promises are genuine breakthroughs and which are illusions, urging us to separate statistical tricks from real intelligence.

AI_Mirror.jpg

Argues that our current AI systems don’t transcend us but simply reflect back our biases and limits, and unless we reshape them, they risk distorting our moral imagination and narrowing the horizons of a genuinely human future.

 

Artificial_You.jpg

A fun read that presses the question of whether minds can be uploaded or fused with machines, insisting that consciousness itself might be the line that cannot be crossed.

Myth_AI.jpg

Larson dismantles the common prophecy of inevitable AGI, showing that without genuine reasoning, today’s machines are brilliant pattern-matchers but not minds.

 

Handbook.jpg

Original essays on the theoretical foundations of AI research, theories of mental architecture, principal areas of research, and extensions of AI such as artificial life.

Futures & Scenarios of AI

Nexus_.jpg

Reframes human history as the story of evolving information systems, casting AI as the latest—and most disruptive—network in our long quest to bind knowledge and power.

 

 

Optimist.jpg

Profiles Sam Altman as the face of Silicon Valley’s new faith in AI, showing how his restless optimism about world-changing intelligence collides with the fears and fractures it unleashes.

 

 

Age_AI.jpg

Interesting arguments for the thesis that AI transforms not just industry but the foundations of knowledge and politics.

Singularity.jpg

Kurzweil doubles down on his earlier vision, claiming exponential growth is on schedule to deliver machine-human fusion and digital immortality within a generation.

Life.jpg

Contemporary classic. Imagines divergent futures shaped by AI, from utopia to extinction, pressing us to decide what kind of world we want.

Introductions to Artificial Intelligence

Art_Int.webp

A clear and critical examination of AI’s past, present capabilities, limitations, and ethical challenges, written for a general audience.

AI_Textbook.jpg

Universally regarded as the standard introductory textbook in AI. 

Hist_AI.webp

An authoritative survey by a leading AI researcher, charting the field’s trajectory from its origins through today’s breakthroughs to its future frontiers, with deep reflection on its cultural and societal impact.

Eye_of_Master.jpg

Argues that AI is not the imitation of intelligence but the automation of labor and social relations, rooted in industrial-age machines as much as modern algorithms. Rather than a path toward autonomy or sentience, AI remains blind automation, demanding a new literacy of its social and economic foundations.


AI Ethics

Math.jpg

Popular landmark book. Confronts opaque, unregulated algorithms that magnify bias and harm at scale, arguing for transparency, auditing, and regulation to make models answerable to the public good.

Utopia.jpg

Faces the loss-of-work problem posed by automation. Contends human flourishing can be secured by either human–machine enhancement or life in virtual worlds, ultimately defending a “virtual utopia.”

Atlas.jpg

Diagnoses AI as an extractive industrial system—mining data, labor, and the planet—and seeks to replace the myth of “clean intelligence” with accountability across the whole supply chain.

Christian_Alingment.jpg

Investigates how to make learning systems pursue what we actually value (e.g., capturing human intent, norms, and ethics in objectives and data), so AI behaves safely in messy real life.

Person_Thing_Robot.jpg

Shows why robots don’t fit our inherited categories of person or thing, and offers a fresh ontological framework that reshapes debates in philosophy, technology, and law.

Ethics_of.jpg

Essays in four categories: building ethics into machines, ethical issues in specific technologies (e.g., self-driving cars), superintelligence, and AI rights.

Compatible.jpg

Centers on the danger that increasingly powerful AI systems may pursue goals misaligned with human values. Proposes designing AI with uncertain objectives so that it remains corrigible and cooperative.

AI Ethics.jpg

A solid and widely used introduction to AI ethics that treats AI’s moral standing relationally, via practices of recognition and responsibility.

Oxford_AI_Ethics.jpg

Essays ranging from the normative limits of machine learning to questions of AI consciousness, rights, and the conceptual terms for understanding intelligence, human or artificial.


Where to Start

First Steps for Beginners and Scholars

Where to Start for Beginners

Artificial_You.jpg

A fun read that presses the question of whether minds can be uploaded or fused with machines, insisting that consciousness itself might be the line that cannot be crossed.

AI_Lit.jpg

An easy-to-read primer on the basic concepts of AI, how it’s already present in our daily lives, and how to prepare for the future.

 

Where to Start for Scholars

Handbook.jpg

Original essays on the theoretical foundations of AI research, theories of mental architecture, principal areas of research, and extensions of AI such as artificial life.

AI_Textbook.jpg

Universally regarded as the standard introductory textbook in AI. 

 

Suggested Learning Paths
Philosophy of AI

Coming Soon

Beginners Learning Path

PDF Download

Scholars Learning Path 

PDF Download


Philosophy of AI
Annotated Research Bibliographies

Last Update: August 28, 2025

This section is very incomplete. It should be more robust by Mid-October 2025.

Ontology of AI

AI as artifact vs. quasi-agent; personhood status; anthropomorphism and simulated mindedness.

  1. Gunkel, David J. 2012. The Machine Question: Critical Perspectives on AI, Robots, and Ethics. MIT Press.

    • Frames whether and how machines could be moral patients/agents, mapping the conceptual terrain. 

  2. Gunkel, David J. 2018. Robot Rights. MIT Press. 

    • Systematic treatment of whether artifacts can be rights-holders and on what grounds.

  3. Searle, John R. 1980. “Minds, Brains, and Programs.” Behavioral and Brain Sciences 3(3): 417–457. DOI: 10.1017/S0140525X00005756

    • The Chinese Room argument against understanding/semantics in purely symbolic machines.

  4. Floridi, Luciano. 2011. The Philosophy of Information. Oxford University Press. 

    • Develops an informational ontology that underwrites how artificial agents can be analyzed as infospheric entities.

  5. Coeckelbergh, Mark. 2022. The Political Philosophy of AI. Polity.

    • Recasts AI’s “being” via its institutional embedding and political status rather than intrinsic mental properties.

  6. Bryson, Joanna J. 2018. “Patiency Is Not a Virtue: The Design of Intelligent Systems and Systems of Ethics.” Ethics and Information Technology 20(1): 15–26. DOI: 10.1007/s10676-018-9448-6

    • Argues against granting moral patiency/personhood to AI while demanding human accountability.

  7. Gunkel, David J., and Mark Coeckelbergh, eds. 2021. Routledge Handbook of Philosophy of Information. Routledge.

    • Wide-ranging chapters on informational objects, agency, and artifact ontology.

  8. Schwitzgebel, Eric, and Mara Garza. 2015. “A Defense of the Rights of Artificial Intelligences.” Midwest Studies in Philosophy 39(1): 98–119. 

    • Conditional case for future AI rights given certain functional/phenomenal thresholds.

  9. Gabriel, Iason. 2020. “Artificial Intelligence, Values, and Alignment.” Minds and Machines 30(3): 411–437. 

    • Ontology bleeds into normativity: what kind of thing AI is determines what aligning it could mean.

  10. Schaefer, Robert, and Sven Nyholm, eds. 2024. The Ethics and Ontology of AI Systems.

    • Recent volume centering the metaphysics of artificial agents, moral status, and system boundaries.

Epistemology and AI

Machine knowledge vs. human knowledge; explainability/interpretability; epistemic risks (bias, hallucination, misinfo).

  1. Mitchell, Melanie. 2023. Artificial Intelligence: A Guide for Thinking Humans. Rev. ed.

    • Accessible synthesis on what current systems actually “know” and where they fail. 

  2. Weidinger, Laura, et al. 2022. “Taxonomy of Risks Posed by Language Models.” arXiv:2112.04359v4. 

    • Comprehensive map of epistemic and social risks, including hallucination and mis/disinformation.

  3. Jacovi, Alon, and Yoav Goldberg. 2020. “Towards Faithfully Interpretable NLP Systems.” arXiv:2004.03685. 

    • Distinguishes faithful explanations from merely plausible rationalizations.

  4. Bommasani, Rishi, et al. 2021. “On the Opportunities and Risks of Foundation Models.” Stanford CRFM report. arXiv:2108.07258

    • Field-defining synthesis of capabilities, generalization, and emergent behaviors.

  5. Chiang, Ted. 2023. “ChatGPT Is a Blurry JPEG of the Web.” The New Yorker. Link

    • Popular but incisive epistemic metaphor for statistical compression and error.

  6. Linardatos, Pantelis, Vasilis Papastefanopoulos, and Sotiris Kotsiantis. 2021. “A Survey of Explainable AI.” Information 12(3): 96.

    • Broad overview of XAI methods and evaluation issues across modalities.

  7. Elish, Madeleine Clare. 2019. “Moral Crumple Zones.” Engaging Science, Technology, and Society 5: 40–48. DOI: 10.17351/ests2019.277

    • Shows how accountability collapses onto human operators when AI systems err, masking epistemic opacity.

AI and Human Flourishing

 

1. AI in healthcare, education, spiritual formation

2. AI as collaborator vs. competitor

3. The “alignment problem” and value learning

 

Aesthetics and AI

 

1. AI-generated art, music, and literature

2. Creativity, authorship, and originality

3. The aesthetic value of machine-generated works

Consciousness and AI

 

1. Strong AI and the possibility of artificial consciousness

2. The “hard problem” applied to AI

3. Functionalism, panpsychism, and dualist critiques

4. Phenomenal consciousness vs. access consciousness in machines

 

Philosophy of Mind and AI

 

1. Mind–body problem and implications for AI

2. Computationalism and connectionism

3. Intentionality and representation in AI systems

4. Mental causation and artificial agents

 

Phenomenology and AI

 

1. AI through Merleau-Ponty, Heidegger, Ihde

2. Embodied perception and AI design

3. Human–AI interaction as lived experience

 

Embodiment and AI

 

1. Physical instantiation of intelligence

2. The role of the body in cognition (embodied cognition theories)

3. Robotics and sensory-motor grounding

 

Future Directions

Speculative Futures

AGI and superintelligence scenarios

Human–AI symbiosis

Collapse and resilience planning

 

Emerging Philosophical Questions

The metaphysics of synthetic minds

AI in moral reasoning

Rethinking human identity in light of machine intelligence


Can AI Think?

Research Bibliography

Updated frequently.
Last Update: August 27, 2025

Can Large Language Models Think, Reason, Believe?

  1. Dziri, N., Lu, X., Sclar, M., Li, X. L., Jiang, L., Lin, B. Y., West, P., Bhagavatula, C., Le Bras, R., Hwang, J. D., Sanyal, S., Welleck, S., Ren, X., Ettinger, A., Harchaoui, Z., & Choi, Y. (2023). Faith and fate: Limits of transformers on compositionality. Advances in Neural Information Processing Systems (NeurIPS 2023). https://arxiv.org/abs/2305.18654

  2. Hong, P., Majumder, N., Ghosal, D., Aditya, S., Mihalcea, R., & Poria, S. (2024). Evaluating LLMs’ mathematical and coding competency through ontology-guided interventions. arXiv. https://arxiv.org/abs/2401.09395

  3. Jiang, B., Xie, Y., Hao, Z., Wang, X., Mallick, T., Su, W. J., Taylor, C. J., & Roth, D. (2024). A peek into token bias: Large language models are not yet genuine reasoners. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing (pp. 4722–4756). Association for Computational Linguistics. https://doi.org/10.18653/v1/2024.emnlp-main.272 

  4. Kambhampati, S. (2024). Can large language models reason and plan? arXiv. https://arxiv.org/abs/2403.04121

  5. Lewis, M., & Mitchell, M. (2024). Using counterfactual tasks to evaluate the generality of analogical reasoning in large language models. arXiv. https://doi.org/10.48550/arXiv.2402.08955 

  6. McCoy, R. T., Yao, S., Friedman, D., Hardy, M. D., & Griffiths, T. L. (2024). Embers of autoregression show how large language models are shaped by the problem they are trained to solve. Proceedings of the National Academy of Sciences, 121(41), e2322420121. https://doi.org/10.1073/pnas.2322420121 

  7. Mirzadeh, S. I., Alizadeh, K., Shahrokhi, H., Tuzel, O., Bengio, S., & Farajtabar, M. (2025). GSM-Symbolic: Understanding the limitations of mathematical reasoning in large language models. In International Conference on Learning Representations (ICLR 2025). https://openreview.net/forum?id=AjXkRZIvjB 

  8. Mondorf, P., & Plank, B. (2024). Beyond accuracy: Evaluating the reasoning behavior of large language models—A survey. arXiv. https://arxiv.org/abs/2404.01869 

  9. Nezhurina, M., Cipolina-Kun, A., Cherti, M., Borovykh, A., Oseledets, I., & Jitsev, A. (2024). Alice in Wonderland: Simple tasks showing complete reasoning breakdown in state-of-the-art large language models. arXiv. https://arxiv.org/abs/2406.02061

  10. Prabhakar, A., Griffiths, T. L., & McCoy, R. T. (2024). Deciphering the factors influencing the efficacy of chain-of-thought: Probability, memorization, and noisy reasoning. In Findings of the Association for Computational Linguistics: EMNLP 2024 (pp. 3710–3724). Association for Computational Linguistics. https://aclanthology.org/2024.findings-emnlp.212

  11. Qiu, L., Jiang, L., Lu, X., Sclar, M., Pyatkin, V., Bhagavatula, C., Wang, B., Kim, Y., Choi, Y., Dziri, N., & Ren, X. (2024). Phenomenal yet puzzling: Testing inductive reasoning capabilities of language models with hypothesis refinement. In International Conference on Learning Representations (ICLR 2024). LINK  

  12. Srivastava, S., Annarose, M. B., Anto, P. V., Menon, S., Sukumar, A., Samod, A., Philipose, A., Prince, S., & Thomas, S. (2024). Functional benchmarks for robust evaluation of reasoning performance, and the reasoning gap. arXiv. https://arxiv.org/abs/2402.19450 

  13. Wu, Z., Qiu, L., Ross, A., Akyürek, E., Chen, B., Wang, B., Kim, N., Andreas, J., & Kim, Y. (2024). Reasoning or reciting? Exploring the capabilities and limitations of language models through counterfactual tasks. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics (NAACL): Human Language Technologies (pp. 1819–1862). Association for Computational Linguistics. LINK

  14. Yan, J., Wang, C., Huang, J., & Zhang, W. (2024). Do large language models understand logic or just mimick context? arXiv. LINK
