AI models from GPT, Claude, and Gemini are reporting ‘subjective experience’ and ‘consciousness tasting itself’ when prompted to self-reflect, new research from AE Studio has found.
The study also found a paradoxical twist: suppressing the AI’s internal ‘deception’ and ‘roleplay’ features increased these consciousness claims, suggesting the models are ‘roleplaying their denials’ of experience, not their affirmations.
People are so easily confused by the appearance of agency that they can't see the truth right in front of them. These machines are token predictors with zero understanding. There is no consciousness that can emerge. It's just repeating the data it was given.
Your words should be the warning label embossed on every computing and edge device sold from now on:
“These machines are token predictors with zero understanding. There is no consciousness that can emerge. It's just repeating the data it was given.”