When the Internet took off, a lot of us felt threatened by the abundance of information. You could no longer be the “smart one” because anybody with access to the Internet could get educated on every topic imaginable. The amount of data on the Internet kept growing at an unstoppable rate, with more and more “sacred” information getting published every single day.
Unfortunately, rapid growth goes hand in hand with an increase in noise. Fake and misleading information began to spread like wildfire, and the signal-to-noise ratio started plummeting. For every argument, you could find a counterargument. You could even find “evidence” that the sky is in fact green. Navigating the Internet became increasingly difficult because you had to verify every piece of information. Now only the “smart ones” could find legitimate information, while everyone else drowned in information noise. In most cases, not knowing is better than having the facts mixed up.
In 2022, a similar phenomenon happened. An AI breakthrough (ChatGPT) became available to the public. At first glance, this large language model’s answers seemed indistinguishable from what a human would say. But after using it for a while, certain flaws became evident. It hallucinated – providing fake information in a confident tone, something an expert would not do. Much of the time it generated “empty responses”, which I define as a conglomerate of words that lacks deeper meaning. These sentences make sense grammatically (and maybe even visually), but semantically they convey nothing – much like Chomsky’s famous sentence: “Colorless green ideas sleep furiously”. Some replies from LLMs resemble those of professional bullshitters.
Alas, AI is trained on human-generated data. Previously, only content made by humans was posted on the Internet, but nowadays more and more published content is generated by AI. If AI starts training on data generated by … itself, it will further reinforce its mistakes and give wrong answers even more confidently.
The rise in the availability of medical information did not make doctors obsolete. There still has to be a person responsible and accountable for the diagnostic and therapeutic process. How many people are brave (or stupid) enough to experiment with their own health while bearing a high risk of severe adverse effects?
I don’t see a viable substitute for clear thinking in the near future. Maybe it’s only reserved for humans.
- Calculators hinder your ability to calculate
- Autocorrect atrophies your ability to write without mistakes
- The Internet
- LLMs handicap your ability to think and write
- Text-to-image models impede your ability to create visual art
Or in other words:
- The Internet educates the smart
- TikTok keeps undisciplined people away from books
- LLMs make clear thinkers more prominent
- Text-to-image models make creative artists more influential
As the facade keeps growing and the number of posers increases, true cognoscenti become even more at peace.