09.19.2017
- Turing mentions on pages 56 and 57 that some argue a “thinking machine” will be easy to spot because it will not feel emotions but will simply make decisions. Is this what Turing means when he says that the question “can machines think” is problematic? That our preconceptions about what “thought” is are the issue, and he merely means that the machine can make a decision independently of some operator pushing a prescribed button and getting a prescribed result?
- Turing discusses the idea that a thinking machine will not be a complete reproduction of a human, but its own thing entirely. He talks about not comparing a man in a race to an aeroplane, and thus not judging a thinking machine on its inability to compete in a beauty pageant. He also says that the machine’s best strategy may actually be not to mimic man in the imitation game. Does this mean that thinking machines will never actually get so close to being human-like that it raises other issues, or have we simply surpassed Turing’s imagination of what a machine can accomplish due to the advancements in technology?
- Weaver talks extensively about the entropy of language, and about how this theory applies to machines and to the “freedom of choice” that goes into the information source’s production of the signal. Is this entropy the area where “thinking machines” have some room to generate thought? Where we can start to “teach” a machine probabilities so that it can select the most likely answers?
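- To make the entropy idea concrete for myself: a minimal sketch (my own illustration, not from Weaver) of Shannon entropy over character frequencies, plus a toy “select the most likely next symbol” step of the kind the question above imagines. The function names and the sample text are mine, purely hypothetical.

```python
from collections import Counter
import math

def shannon_entropy(text):
    """Entropy in bits per symbol of the character distribution in `text`."""
    counts = Counter(text)
    total = len(text)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def most_likely_next(text, prefix):
    """Most frequent character that follows `prefix` in `text` (None if unseen).

    A crude stand-in for "teaching a machine probabilities": the machine
    just picks the statistically most likely continuation.
    """
    n = len(prefix)
    followers = Counter(
        text[i + n] for i in range(len(text) - n)
        if text[i:i + n] == prefix
    )
    return followers.most_common(1)[0][0] if followers else None
```

  For example, `shannon_entropy("aabb")` is 1.0 bit (two equally likely symbols), while redundant English text scores well below the maximum for its alphabet — that gap is the “freedom of choice” Weaver describes, and the statistics are what a machine could exploit.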