Considering modern AI (LLMs)
Feb. 2nd, 2025 03:29 pm
Having used LLMs in two of my three latest jobs, I have been trying to embrace them a little more than I was initially inclined to, to see what uses I might have for them. There are some applications that don't need reliable perfection, after all. I can see, for instance, that for image editing one can probably tell at a glance whether the prompting and model achieved the desired effect.
It also occurred to me that, when I ask friends for their opinion or thoughts on something, I don't expect perfection from them either, and that with an LLM I don't have to consider if I might try its patience. Sometimes, I can appreciate a good guess.
As an experiment, I tried asking an OpenAI model about something I still don't understand: how the hidden variables theory was disproved (i.e., why we are sure that God plays dice), and it spun me an interesting explanation of how hidden variables would violate locality. I should have probed that a little more, but instead I wondered whether quantum entanglement doesn't also violate locality, and OpenAI seemed to think it does, so I shrugged and got back to my actual work. I can see the appeal of taking a bit more time to interrogate it, and with that kind of question it has probably been trained on enough material to be fairly trustworthy. It's not as if anything relies on my being correct about such matters anyway.