The intended design of a system is not a basis for judging that system’s capacity. If a system has emergent behavior, it will exceed its stated goals; the Star Trek: TNG episode “The Quality of Life” is built around exactly that. It’s also pretty clear that LLMs have exceeded what their designers thought they could do.
Does current AI’s lack of autonomy make it not conscious, in your view? Why?