WritingWithLlmsIsLearning
ai, programming

I use LargeLanguageModels to fact-check my writing now. They usually find several issues in what I've written and provide very useful feedback. For example, while I was writing the WhyLlmsCantCountLetters article, the model quickly explained that LLMs don't really forget the input -- they process and represent it differently. I had also accidentally used the word "decode" to describe the tokenization process, and it helpfully reminded me that decoding refers to generating output text from the model's internal representations, while converting input text into tokens is "encoding." I had also suggested that LLMs can't count letters in a word because the model works on vectors, when the real issue is that the model was never trained on the letter-counting task. All of this deepened my understanding considerably, and it highlighted that while I understood the issue in general terms, I hadn't grasped its full depth.
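To make the encode/decode distinction concrete, here is a minimal sketch in Python. It assumes OpenAI's tiktoken library is installed and uses its "cl100k_base" vocabulary; both are my choices for illustration, not anything from the article. Encoding maps input text to token IDs, decoding maps token IDs back to text, and the per-token breakdown shows why individual letters never reach the model.

```python
# Minimal sketch of tokenization, assuming the tiktoken library
# (pip install tiktoken); "cl100k_base" is one common vocabulary.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

# Encoding: input text -> token IDs. This is what happens to a prompt.
ids = enc.encode("strawberry")
print(ids)  # a short list of integer IDs, not ten letters

# Decoding: token IDs -> output text. This is the generation side.
print(enc.decode(ids))  # "strawberry"

# Each ID stands for a multi-character chunk of text, so the model
# never directly observes individual letters.
for token_id in ids:
    print(token_id, repr(enc.decode([token_id])))
```

A ten-letter word like "strawberry" collapses into just a few opaque IDs, and those IDs are all the model ever sees, which is why counting letters has to be learned as a task rather than simply read off the input.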