
this phenomenon, i've been told, is called hallucination. i'm curious how network automation vendors who say they're implementing AI into their tools are going to prevent this.
with quote tweet:
i asked chatGPT to explain a few concepts to me directly out of the 802.3 standard & it 100% lied, completely confidently—only backtracking when i called it out on the lie.
both versions 3 & 4 did this. it has no fact-checking. pls stop using it to just google things