I did some work not too long ago with UNEP on their global foresight process, analyzing and exploring a wide range of signals that might shape the future and our ability to ensure biosphere integrity.
The process brought together massive amounts of information from around the world, and the UNEP team had the unenviable task of synthesizing the outputs submitted by each region.
The product is a 100-page report distilling the high-level insights from the process. But this isn’t about that.
Today I dumped the entire report into Google’s NotebookLM, which I’d heard could create a “podcast”-style dialogue summarizing anything you threw at it. My expectations were quite low.
I recently used ChatGPT to make speaking notes summarizing an article I’d written. They looked quite good at first, until I tried recording myself and realized that all of the nuance and core intention of the writing had been lost. It was a shell of what I had written, and it was clear the model hadn’t been able to isolate the meaning (in fairness, probably a lot of people have that issue with my writing).
But this was not that. Take a listen:
I was absolutely flabbergasted. The voices, the banter, the use of metaphor…and the apparent grasp of meaning. I haven’t been caught off guard like this by an LLM since the early days of ChatGPT.
For an extra laugh (or shudder), check out how this user got the podcasters to spiral into an existential crisis with a creative prompt.
So, while I’ve been meaning to write something about this very important report, I got completely distracted by this summary. We’ll talk foresight another day.