The rapid advancement of generative AI systems, such as the Large Language Model (LLM) ChatGPT and the image generator DALL-E, is sparking important questions about their impact on education and society. Renowned psychologist Howard Gardner delves into these concerns, urging us to reconsider the focus of education in the age of AI.
ChatGPT's impressive capabilities have exceeded expectations, raising both excitement and anxiety. As these AI systems continue to improve, they will likely excel at tasks that can be described and captured in notation, from writing essays to creating art.
However, Gardner emphasizes that superior AI performance does not negate the value of human engagement in these activities. Instead, he envisions a future where humans and AI collaborate, leveraging each other's strengths.
One of the primary concerns surrounding AI is the potential for misuse. These systems could rapidly spread misinformation, exacerbating issues like climate change and increasing the risk of conflict. While these threats are real, Gardner acknowledges that such calamities could occur even without the digital revolution.
With AI systems capable of performing many tasks, Gardner proposes a radical shift in educational priorities. He argues that education should increasingly focus on the "human condition"—exploring what it means to be human, our strengths and weaknesses, and our accomplishments throughout history.
This approach involves understanding our human nature from biological, psychological, cultural, and historical perspectives. By doing so, we can better preserve our planet and launch successful human-computer ventures.
Gardner draws inspiration from his teacher, Jerome Bruner, whose middle school curriculum centered on three fundamental questions: What is human about human beings? How did they get that way? How can they be made more so?
Addressing these questions requires disciplinary knowledge, rigor, and tools. While existing curricular frameworks may not need to be completely abandoned, they must be rethought and adapted to address these overarching concerns. More on these educational approaches can be found at The Good Project.
Gardner acknowledges that conceptions of "good work" and "good citizenship" vary across cultures and time periods. While AI systems like ChatGPT can synthesize different perspectives, human beings may not always abide by these syntheses.
To survive as a planet, Gardner suggests the need for a global belief system that promotes the well-being of all entities, including plants, animals, and the environment. This system should be inclusive and adaptable, like a "world language" that transcends cultural boundaries.
Gardner's theory of multiple intelligences is also relevant in the context of AI. While LLMs can master tasks related to the linguistic, logical-mathematical, musical, and spatial intelligences, the question remains whether they can truly replicate the bodily-kinesthetic and personal intelligences.
Can a robot capture the artistry of ballet, or inspire people the way great historical figures have? The answer depends on how we define human intelligence: whether it rests solely on behavioral outputs or on the meanings that humans create and convey.
Ultimately, Gardner believes that only human beings can fully realize and experience what it means to be human. This understanding should be at the heart of education in our post-ChatGPT world.
Sarah Bakewell's book, Humanly Possible, offers further insights into humanism and its relevance to our current challenges.
As AI continues to evolve, it's crucial to consider its implications for education and society. By shifting our focus to the human condition, we can equip future generations with the knowledge, skills, and values they need to thrive in a human-computer era.