Six Tiny Musings on AI

Here are six tiny musings and stories that offer different perspectives on the current wave of AI.

Black and white photo of Albert Camus in Henri Cartier-Bresson's lens, colored using palette.fm.

1. Bicycle of the Mind

In a 1990 interview, Steve Jobs recalled an article he read when he was 12 that measured the efficiency of locomotion for various species: how many kilocalories did each expend to get from point A to point B? The condor was the most efficient, while humans made an unimpressive showing about a third of the way down the list.

However, someone had the imagination to test the efficiency of a human riding a bicycle. The results were astonishing: a human on a bicycle was more efficient than even the mighty condor, soaring to the top of the list.

This article made a lasting impression on Steve Jobs, as it highlighted how humans create tools to amplify their abilities. Humans are tool builders. We build tools like computers as the "bicycle of our mind" that can take us far beyond our inherent abilities.

Today, artificial intelligence is the new "bicycle of the mind". It's not just a tool, but an extension of our will, allowing us to accomplish feats that were once thought impossible.

Humans are tool builders. We build tools like computers as the "Bicycle of the Mind" that take us beyond our inherent abilities.

2. Colored Lens for Everything

In a recent episode of South Park, the character Stan used an AI chatbot to generate romantic texts for his girlfriend, because she didn't like that he always replied to her messages with just a thumbs-up. Along with his classmates, he later used it to cheat on his school assignments.

Meanwhile, their teacher Mr. Garrison noticed that all of the students' recent essays were unexpectedly long and detailed. After discovering the new AI chatbot, Mr. Garrison decided to use AI to grade the essays as well. This raises an interesting question: what would happen if teachers used AI to generate assignments, students used AI to write them, and teachers used AI to grade them? No one would learn anything, and AI would just shuttle text back and forth that no one will ever read.

The situation also raises other possibilities: What if writers use AI to expand a brief text into longer articles, which AI then summarizes back into brief text for readers? What if everyone uses AI to text back and forth in daily conversations? We might be using AI as a colored lens that shapes and filters everything in our view.

You can find the plot of this episode in the South Park Archives at Fandom. Thanks to Miguel Novelo for sharing this episode with me.

What would happen if teachers used AI to generate assignments, students used AI to write them, and teachers used AI to grade them?

3. Skepticism in the Age of AI

This is from Ben Thompson's interview with Daniel Gross and Nat Friedman, in which they discussed the Balenciaga Pope, the picture of the Pope in a giant puffy jacket. People thought the joke was that a real photo happened to look like an AI image. It turned out the photo itself was AI-generated.

Within a year or two, AI will be so advanced that we won't be able to tell what is real and what is fake on the Internet without detailed forensic analysis. That inability will breed skepticism, and the skepticism could actually be beneficial: people will become skeptical by default about what they see.

People have been doctoring words and images on the Internet forever, and too many people have been credulous. If everyone knows AI can produce convincing fakes, then we must develop critical thinking, which can reduce the spread of misinformation. In Ben Thompson's words: "People are believing too much crap, so we're going to completely and utterly immerse you in crap until you realize it's all crap." Even this risk of AI carries a potential benefit.

AI will soon make it impossible to distinguish real content from fake. The resulting skepticism on the Internet may have a surprisingly positive outcome: it promotes critical thinking.

4. Imperfection in Chess Playing

This was mentioned by Sam Altman in his interview with Lex Fridman. When Deep Blue beat chess world champion Garry Kasparov in 1997, people claimed, "Chess is now over." They started to question the point of playing chess at all.

Many jobs will soon be replaced by AI. What's the point of learning the skills those jobs require if AI can perform them perfectly? And if AI becomes powerful enough to give us perfect real-time translation, what's the point of learning a new language?

Twenty-six years after Deep Blue's victory, chess is more popular than ever. Chess is far from over. Technically, two artificial intelligences playing each other would produce a better game. However, it's the drama and imperfections of human players that make the game interesting and worth playing. While AI can play perfect games, human players have something more fascinating: personalities and flaws.

When AI beat the chess world champion in 1997, it was claimed that "Chess is over". But 26 years later, chess is more popular than ever.

5. AI Needs to be Charming

This is from Ted Chiang's novella The Lifecycle of Software Objects, included in the collection Exhalation: Stories.

The story follows Blue Gamma, an AI company that sells digital AI pets, known as digients, which can learn from their experiences. The company's approach to AI design centers on the idea that experience is the best teacher. Instead of pre-programming an AI pet with all of its knowledge, the company sells digients to customers and encourages the customers to teach them. To motivate customers to put effort into this teaching, the company makes every aspect of the AI pets charming, including their personalities.

Revisiting this 2010 piece reminds me of ChatGPT, which shares a similar design philosophy. Usability plays an essential role in incentivizing users to engage with ChatGPT. Making ChatGPT fun to use and easy to share draws in more users, whose feedback improves the AI through Reinforcement Learning from Human Feedback (RLHF). Echoing what Sam Altman said in his interview with Lex Fridman, the success of ChatGPT owes more to its usability than to the underlying model itself.
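As a rough illustration of that feedback loop, here is a toy sketch, emphatically not OpenAI's actual pipeline, of how thumbs-up/thumbs-down ratings on chat replies could be turned into the preference pairs that RLHF reward models train on. All names and the rating scheme here are hypothetical.

```python
# A toy sketch (not OpenAI's actual pipeline) of the feedback loop:
# user ratings of chat replies become preference pairs, which a reward
# model can be trained on in RLHF. All names here are hypothetical.

from dataclasses import dataclass
from itertools import combinations


@dataclass
class RatedReply:
    prompt: str
    reply: str
    rating: int  # e.g. +1 for thumbs-up, -1 for thumbs-down


def preference_pairs(ratings: list[RatedReply]) -> list[tuple[str, str, str]]:
    """Pair replies to the same prompt so the higher-rated one is 'chosen'.

    A reward model learns to score 'chosen' above 'rejected', and the
    chat model is then fine-tuned against that reward model.
    """
    pairs = []
    for a, b in combinations(ratings, 2):
        if a.prompt == b.prompt and a.rating != b.rating:
            chosen, rejected = (a, b) if a.rating > b.rating else (b, a)
            pairs.append((a.prompt, chosen.reply, rejected.reply))
    return pairs


feedback = [
    RatedReply("Explain JPEG compression", "It discards detail our eyes barely notice.", +1),
    RatedReply("Explain JPEG compression", "It is an image format.", -1),
]
print(preference_pairs(feedback))
```

The point of the sketch is the flywheel: a more usable product attracts more raters, which yields more preference pairs, which makes the model better.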

We probably won't have to wait long before AI pet startups pop up and follow the same philosophy.

Rather than program an AI with knowledge, create one capable of learning and let users teach it. Make AI charming so that users are motivated to put effort into teaching it.

6. Blurry JPEG of the Web

I cannot state it better than Ted Chiang did in his New Yorker article:

"Think of ChatGPT as a blurry jpeg of all the text on the Web. It retains much of the information on the Web, in the same way that a jpeg retains much of the information of a higher-resolution image, but, if you’re looking for an exact sequence of bits, you won’t find it; all you will ever get is an approximation. But, because the approximation is presented in the form of grammatical text, which ChatGPT excels at creating, it’s usually acceptable. You’re still looking at a blurry jpeg, but the blurriness occurs in a way that doesn’t make the picture as a whole look less sharp"

"But I’m going to make a prediction: when assembling the vast amount of text used to train GPT-4, the people at OpenAI will have made every effort to exclude material generated by ChatGPT or any other large-language model. If this turns out to be the case, it will serve as unintentional confirmation that the analogy between large-language models and lossy compression is useful. Repeatedly resaving a creates more compression artifacts, because more information is lost every time. It’s the digital equivalent of repeatedly making photocopies of photocopies in the old days. The image quality only gets worse."

"Indeed, a useful criterion for gauging a large-language model’s quality might be the willingness of a company to use the text that it generates as training material for a new model. If the output of ChatGPT isn’t good enough for GPT-4, we might take that as an indicator that it’s not good enough for us, either. Conversely, if a model starts generating text so good that it can be used to train new models, then that should give us confidence in the quality of that text. (I suspect that such an outcome would require a major breakthrough in the techniques used to build these models.) If and when we start seeing models producing output that’s as good as their input, then the analogy of lossy compression will no longer be applicable."

We must recognize that AI is only as good as the data it is fed and the algorithms used to process that data.

ChatGPT is like a lossy compression of all the text on the Web. If material it generated itself becomes training data for the next model, the output will get worse. Can an LLM's output ever be as good as its own input?

References:

  1. ‘A Bicycle of the Mind’ — Steve Jobs on the Computer
  2. Deep Learning (South Park Archives at Fandom)
  3. Ben Thompson’s interview with Daniel Gross and Nat Friedman about the AI Product Revolution
  4. Sam Altman: OpenAI CEO on GPT-4, ChatGPT, and the Future of AI | Lex Fridman Podcast
  5. Exhalation: Stories
  6. ChatGPT Is a Blurry JPEG of the Web

Thanks to Angelica Kosasih and Robert Chang for reading the draft of this.

