>>1022
I used to work for a big financial firm; around 2019 they started rolling out OCR AI in their lockbox operations (local AI doing data entry for financial documents). As they explained it, it works by asking for the statistically most likely answer given the input and the training set. The AI goes "based on my data, I am 90% sure this is the letter 'a'" and so on. As I understand it, LLMs work similarly, but instead of letters it's a statistical probability over the most likely next sentence chunk (token). The models are collections of statistical probabilities derived from the training set, plus a bit of random noise so you don't get the same result every time.
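To make that concrete, here's a toy sketch of the "most likely token plus noise" idea. The vocabulary, scores, and temperature value are all made up for illustration; a real model produces these scores from billions of parameters, but the final pick-a-token step looks roughly like this:

```python
import math
import random

def softmax(logits):
    # Turn raw scores into probabilities: the "I am X% sure" part.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def next_token(vocab, logits, temperature=1.0):
    # Temperature is the "bit of random noise": higher values flatten
    # the distribution, so less likely tokens get picked more often.
    probs = softmax([score / temperature for score in logits])
    return random.choices(vocab, weights=probs, k=1)[0]

# Hypothetical candidate next tokens and the model's raw scores for them.
vocab = ["cat", "dog", "the"]
logits = [2.0, 1.0, 0.5]

print(softmax(logits))                      # probability of each candidate
print(next_token(vocab, logits, 0.7))       # usually "cat", but not always
```

Run it a few times and you'll see the highest-scoring token win most often but not every time, which is exactly why the same prompt can give different answers.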
You know what happened? In early 2020 the firm laid off a bunch of people. Now in 2026 they're begging for people to come back.
As I like to say: AI is Actually Sophisticated Statistics, or ASS. It is pretty much pretending to think, which is why lefties think it is intelligent: just like them, it just gives the most likely answer based upon what it knows. This is assuming a raw, unmodified AI. Of course, most of the commercial AIs want you to keep using them, so they're automated simps.
True human innovation, however, comes from the statistically unlikely. Accidents and imperfections are how you stumble onto something new.