I have a lot of opinions about recent developments in AI, namely deep learning and LLMs, some of which are half-written up in my drafts. Today I'm going to mention a horse.

There is a famous story about a horse. Not the duplicitous chestnut-colored horse Fauvel, that conniving stallion; don't worry about him. The story is about Clever Hans, a horse so named because if you asked him a math question, eg "what is 2+2?", he would stamp his hoof with the answer (eg, 4 times). Allegedly this horse was real, and allegedly the reason he could perform this feat was that his trainer (who could do basic arithmetic and also understand human speech) would solve the problem and communicate the answer to the horse through body language; the horse would then give the cued answer. Allegedly, this cueing was basically undetectable, and the trainer didn't even know he was doing it.

I'm not sure how much of that story I believe actually happened. The fact that there was a horse named Clever Hans, that one seems safe. (I think we even have a picture of him.) Could the horse actually deliver answers accurately? I never saw a demonstration. What was his hit rate? Was the communication between trainer and horse actually subtle, or if I had seen the actual live trick would I have said, "Well, it's obviously the trainer prompting him to do that"? I don't know.

aside: Not only have I not looked into these facts, these are also the sorts of facts that accounts tend to distort, and it is only with shrewd skepticism that the interested observer can reconstruct what likely actually happened. Humanity's tendency to fabricate and embellish exactly the details that would be most interesting and important to various scientific and philosophical questions, eg about animal cognition, has been deeply disappointing to me. I don't think the people who mislead in this way even realize, when they swear up and down that their misleading description of an event really happened, that they are doing anything wrong, or even describing it inaccurately from their own viewpoint.

Anyway, this Clever Hans effect is often invoked to explain certain results in parapsychology, and in regular bad science too. Now that I've introduced you to the horse, I'll explain what he has to do with LLMs.

Obviously, LLMs are an amazing new leap in what you can do with the computer, etc etc. But people are often wondering: when can we use these machines to reason? I myself had assumed that once a computer could speak coherently, that was basically something that required reason as a prerequisite, and so the thinking machine was nigh. Alas, it seems not to be; perhaps one could call this "the *second* bitter lesson".

aside: It's more like an LLM uses a lossy search engine to search over the entire corpus of human text it was trained on, trying to find related examples of reasoning, and then mushes them back together into text for you to read. Once I realized this is basically how LLMs work, a lot of their strengths and weaknesses became more apparent to me. This is only a rough heuristic for how they work, but I think it's a pretty good rough heuristic. (There's a toy sketch of it at the end of this post.)

Anyway, sometimes people seem to think LLMs will be able to reason soon because once an LLM gives you the wrong answer, you can correct it and it will go "oh yeah, sorry" and then tell you the right answer. They usually call this "prompting" (in the ordinary English sense, not the specialized LLM sense) or, if they feel it took a large amount of effort, "handholding".
Apparently these people have somehow managed to overlook the fact that they are explaining the right answer back to themselves.
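To make the "lossy search engine" aside concrete, here's a deliberately crude toy sketch of that heuristic. To be clear: this is *not* how transformers actually work internally, and every name in it (`lossy_embed`, `respond`, the three-line corpus) is made up for illustration. It just renders the mental model as code: hash the prompt down to something lossy, pull up the nearest-looking passages from the corpus, and mush them together.

```python
# Toy caricature of the "lossy search + mush together" heuristic.
# Not a real LLM; just the mental model from the aside, as code.
import hashlib
import math

def lossy_embed(text: str, dims: int = 32) -> list[float]:
    """Compress text into a small fixed-size vector by hashing its words.
    The compression is extremely lossy: many different texts collide."""
    vec = [0.0] * dims
    for word in text.lower().split():
        h = int(hashlib.md5(word.encode()).hexdigest(), 16)
        vec[h % dims] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity (vectors are already unit-normalized)."""
    return sum(x * y for x, y in zip(a, b))

# A stand-in "training corpus" of text that *contains* reasoning.
corpus = [
    "to add two and two, count up two from two: three, four. the answer is four",
    "a horse that stamps its hoof four times is signalling the number four",
    "arithmetic questions are answered by applying the rules of addition",
]
corpus_vecs = [lossy_embed(doc) for doc in corpus]

def respond(prompt: str, k: int = 2) -> str:
    """Retrieve the k nearest corpus snippets and mush them together."""
    q = lossy_embed(prompt)
    ranked = sorted(range(len(corpus)), key=lambda i: -similarity(q, corpus_vecs[i]))
    # "Mushing back together" is here just naive concatenation; a real
    # model blends at a much finer grain, which is why it looks fluent.
    return " ... ".join(corpus[i] for i in ranked[:k])

print(respond("what is 2+2?"))
```

The point of the caricature is that nothing in `respond` ever reasons about the question; it only finds text that looks like reasoning about similar questions.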