gweinberg 12 hours ago

I don't understand why people have any respect at all for Searle's "argument"; it's just a bare assertion that machines can't think, combined with some cheap misdirection. Can anyone argue that having Chinese characters instead of bits going in and out is anything other than misdirection? Can anyone argue that having a human being acting like a CPU instead of an actual CPU is anything other than cheap misdirection?

  • speak_plainly 9 hours ago

    I think you might be missing out on what the Chinese Room thought experiment is about.

    The argument isn’t about whether machines can think, but about whether computation alone can generate understanding.

    It shows that syntax (in this case, the formal manipulation of symbols) is insufficient for semantics, or genuine meaning. That means whether you're a machine or a human being, I can teach you every grammatical and syntactical rule of a language, but that is not enough for you to understand what is being said or for meaning to arise, just as in the thought experiment. From the outside it looks like you understand, but the agent in the room has no clue what meaning is being imparted (there's a toy sketch of this at the end of this comment). You cannot derive semantics from syntax.

    Searle is highlighting a limitation of computationalism and the idea of 'Strong AI'. No matter how sophisticated you make your machine, it will never be able to achieve genuine understanding, intentionality, or consciousness, because it operates purely through syntactic processes.

    This has implications beyond the thought experiment; for example, the idea has influenced Philosophy of Language, Linguistics, AI and ML, Epistemology, and Cognitive Science. To boil it down, one major implication is that we lack a rock-solid understanding or theory of how semantics arises, whether in machines or humans.
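
    To make the "syntax without semantics" point concrete, here is a toy Python sketch of what the person in the room is doing (the rulebook entries and the code framing are invented purely for illustration; Searle's rulebook is hypothetical and vastly larger):

        # Toy "Chinese Room": match incoming symbols against a rulebook
        # and copy out the prescribed response. Nothing in this process
        # requires knowing what any of the symbols mean.
        RULEBOOK = {
            "你好吗？": "我很好，谢谢。",
            "你会说中文吗？": "会，我说得很流利。",
        }

        def room(symbols: str) -> str:
            # The operator only compares shapes and copies strings.
            return RULEBOOK.get(symbols, "对不起，我不明白。")

        print(room("你好吗？"))  # looks like fluent Chinese from the outside

    From the outside the exchange can look competent; inside, it's pure lookup. Searle's claim is that adding more rules never turns that lookup into understanding.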

    • RaftPeople 7 hours ago

      Slight tangent but you seem well informed so I'll ask you (I skimmed Stanford site and didn't see an obvious answer):

      Is the assumption that there is internal state and the rulebook is flexible enough that it can produce the correct output even for things that require learning and internal state?

      For example, the input describes some rules to a game and then initiates the game with some input and expects the Chinese room to produce the correct output?

      It seems that without learning and state the system would fail to produce the correct output, so it couldn't possibly be said to understand.

      With learning and state, at least it can get the right answer, but that still leaves the question of whether that represents understanding or not.
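
      Concretely, something like this toy sketch is what I mean by state (invented example, not anything from Searle): the room keeps scratch notes between inputs, so the rules can depend on what came before.

          # Hypothetical stateful variant of the room: the operator keeps
          # "scratch paper" between inputs, so rules can depend on earlier
          # symbols -- but it's still pure symbol shuffling.
          class StatefulRoom:
              def __init__(self):
                  self.scratch = {"rules": []}  # internal state the rulebook may consult

              def step(self, symbols: str) -> str:
                  if symbols.startswith("规则:"):   # an input that states a game rule
                      self.scratch["rules"].append(symbols)
                      return "记住了。"
                  if symbols.startswith("走:"):     # an input that makes a move
                      return "好，我走这一步。" if self.scratch["rules"] else "什么游戏？"
                  return "对不起，我不明白。"

          room = StatefulRoom()
          print(room.step("规则: 先手走中间"))  # teach a rule
          print(room.step("走: 角落"))          # play using the stored rule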

      • pixl97 4 hours ago

        We don't have continuous-learning machines yet, so understanding new things, or being able to further link ideas, isn't quite there. I've always taken understanding to mean taking an unrefined idea, or incomplete information, applying experimentation and doing, and coming out with a more complete model of how to do said action.

        Take understanding how to bake a cake. I can have a simplistic model, for example making a boxed cake mix, or a more complex model, using raw ingredients in the right proportions. Both involve some level of understanding of what's necessary to bake a cake.

        And I think AI models have this too. When they have some base knowledge of a topic, and you ask a question that requires a tool without asking for the tool directly, they can suggest a tool to use, which at least to me makes it appear the system as a whole has understanding.

    • gweinberg 9 hours ago

      I understand the assertion perfectly. I understand why people might feel it intuitively makes sense. I don't understand why anyone purports to believe that saying "Chinese characters" rather than bit sequences serves any purpose other than to confuse.

  • kbelder 9 hours ago

    I agree. At its heart, it just relies on mysticism. There's a hidden assertion that humans are supernatural.