Artificial Intelligence No Longer Just Aids Thinking – Italian Researcher Warns of 'Generated Human'
Large language models such as those behind ChatGPT are no longer merely computational aids; they are reshaping the very nature of what we consider knowledge and thought. So argues Italian philosopher Francesco Branda in an article published in the journal AI & SOCIETY.
Branda focuses on healthcare, where AI systems generate responses, draft diagnoses, and issue treatment recommendations on behalf of humans. In his view, these systems do not merely support the thinking of doctors and patients; in practice, they begin to produce it. The result is what he calls the 'generated human': a hybrid in which human judgment is intertwined with language produced by algorithms.
The article is a theoretical, socio-philosophical analysis rather than an empirical study. It asks what happens if AI continues to be treated as a neutral tool. Branda warns that professionals and citizens risk becoming passive recipients of machine-generated results rather than their critical evaluators.
In his view, the question is no longer whether AI can think, but whether humans still believe that thinking is necessary. This shifts the focus from the capabilities of machines to the narrowing of human responsibility and judgment.
Branda connects his reflections to physicist Giorgio Parisi's proposal for a European AI research center, a 'CERN for AI'. He sees it as an opportunity for not only technical but also ethical and societal collaboration: a place to consider what kinds of knowledge and ways of thinking we want to build together with AI.
Source: Generated humans, lost judgment: rethinking knowledge with AI, AI & SOCIETY.
This text was generated with AI assistance and may contain errors. Please verify details from the original source.
Original research: Generated humans, lost judgment: rethinking knowledge with AI
Publisher: AI & SOCIETY
Author: Francesco Branda
December 23, 2025