7 Comments
Oct 15 · edited Oct 15 · Liked by Luke Burgis, Cluny Journal

What a fascinating experiment. I'm left with two thoughts. One is that I agree with Robin's observation that there was insufficient engagement, but I think that is a consequence of your process, Luke. Seems very fixable to me.

The second also picks up from Robin's observation that "the current generation of AI tools has exposed us all as post-modernists, and they might turn us into empiricists. The way that LLMs construct knowledge is entirely contextual. Rather than defining terms and building strict hierarchies of knowledge, it looks at the context that words and ideas show up in and turns that into coordinates in some high dimensional space. The disorienting thing is it actually “works”—***at least within the context of the words and patterns that the models have collected***."

It occurs to me that there are three things that are destabilizing about AI, and I think you can order them on a continuum beginning with the rational/empirical and ending with broader ontology and meaning. The first destabilizing thing is that it "works," as Robin said. The second, arising from the first, is that whether or not it says something truthful, LLMs "react" plausibly enough to be broadly believable when we present them with inductive (rather than deductive/summarizing/reductionist) exercises. Interacting with them "feels" real, despite the fact that, from the perspective of sampling theory, there is no basis whatsoever to believe that anything the models "think" or "say" reflects truths in the population, given that they are a convenience sample of data being processed by a subjective algorithm. And finally, there is the when-not-if concern of AGI and what happens then.

It seems like your triumvirate is speaking to different aspects of this destabilizing continuum. Zohar is looking at the far end. James and Robin feel a bit closer to the near end. I think if you look at society more broadly, you see more people worried about the stuff we can see now, and it's all about trying to figure out whether we as humans are still special. But this is a humanity that, as Zohar observes (rightly in my view), demonstrates its specialness by doomscrolling and excessive consumerism.

Please carry on with this, Luke. Even if we didn't get much interaction, the differences in perspective between Athens, Silicon Valley, and Jerusalem were interesting and valuable in and of themselves.

author

Thank you, JD. I think you nailed it in your second-to-last paragraph: it does have to do with whether or not we are special. I think AI is going to usher in a return to various forms of personalism, Christian and otherwise, and a deeper understanding of the human person on an existential level. Perhaps even a return of existentialism itself.

I definitely plan to tweak the format, and I am very open to ideas on the best way to do that. I'm very encouraged by what happened here, even if it was a rough and tumble experiment this time!

Oct 16 · Liked by Luke Burgis

When talking about AI and rationality, we often miss an important point. Taking a Talebian perspective, true rationality is not about perfect logical consistency or computational power, but about survival and risk management in an uncertain world. We should be less concerned with whether AI can replicate human-style reasoning and more focused on whether it can contribute to long-term human flourishing without introducing existential risks. Which is why Zohar's perspective resonated most with me, given his emphasis on human flourishing and the importance of meaning, which has proven crucial to human survival. Survival and evolution are the ultimate arbiters of rationality.


An acutely conceived and by the looks of it pretty successful attempt to take on the communicative version of the ‘three-body problem’ – how to organise investigative conversations between more than two people (two being hard enough on its own) for maximum mutually beneficial outcome.

To my mind the most adhesive of all the many, many sticking points in this pursuit is that debate/discussion goes further when participants grasp one another’s argument before proceeding further with their own/with disputation. This has a kind of meta-reflection in the subject here because, in respect of AI and artificial reasoning, not only do people often not grasp their interlocutor’s points, they often don’t even have definitional consensus on key terms. So you have double-decker misconceptions.

I did something similar to the trialogue in a framework of my own, which I called the 'Trialectic' – a slightly more competitive format, engineered more towards a debate dynamic. I hope you won't find it too vulgar if I link it below.

https://heirtothethought.substack.com/p/maxis-tool-kit-vol1-an-introduction

Would love to compare methodologies and approaches if you’d find that interesting, Luke. I’ll look forward to future installments in the series either way.


I think Zohar made a great point about AI being more a tool of sophistry than of truth discovery. In a way, this seems to continue the trend of shortening attention spans under social media and the information age, at least as I have perceived and experienced it.


Really interesting process. I disagree with the approach of seeing AI as a tool rather than as environmental. I'm sure McLuhan would disagree with this quote: "Our desire to sing and dance will remain unchanged by stronger AI, *just as electricity didn't alter human emotions*."

In the end, rationality allows us to refocus our attention, overcoming instinctual inertia and integrating a broader contextual framework with a longer timeframe. If we see AI as environmental, then the moment the technology becomes our main medium, we will outsource our attention. AI+AR could atrophy the self as the need for mediation vanishes: user, medium, and environment become one. What will rationality be under this reality?

Oct 20 · edited Oct 20

Although the Trialogue format will inevitably evolve, I found that all of this discussion really helped me get past my immediate bias in favour of Zohar Atkins' perspective. Thank you. What worries me about the future in an AI world is that very few people seem to be paying attention to the symbolic-emotional connections that are always part of our experience of communication. How will a deeper rationality, one that includes the symbolic and emotional experience arising just behind our human awareness, survive when those symbols and emotional connections are effectively supercharged by AI processes? We can already see this in the symbolic/emotional reactivity that pervades social media: the communication process becomes effectively non-rational before you know it. What happens when AI tools add to the velocity of that reactivity? Rationality as presently conceived will become less and less feasible.
