🎧 Listen to this Episode On Spotify
🎧 Listen to this Episode On Apple Podcasts
About Podcast Episode
When I recorded this podcast with Wenjing, I was initially hesitant to delve too deeply into the current state of generative AI. The pace at which this field is evolving is astounding, with significant advancements appearing nearly every week. Even in the past couple of weeks since we recorded this podcast, numerous new products have been released. A notable example is Google’s generative AI chat product, Google Bard, which is rapidly gaining recognition as a strong competitor to ChatGPT.
Generative artificial intelligence (AI) describes algorithms (such as ChatGPT) that can be used to create new content, including audio, code, images, text, simulations, and videos.
For that reason, in our conversation I aimed to focus on more overarching “evergreen” topics that would remain relevant despite the rapid progression of these generative AI products. I wanted this discussion to retain its value not only for this week but for a long time to come.
The inspiration for this conversation was first sparked by my personal experience with these AI tools. Further, at the Internet Identity Workshop (IIW) in April, I attended a session hosted by Wenjing on “Digital Trust in the Age of AI.” This session triggered many thoughts for me about the impact of AI on digital identity and trust, as well as the intersection between advancements in digital trust and the benefits and potential risks of AI. We both decided it would be interesting to have a longer discussion in a podcast format!
We began our conversation with the parable of the blind men and the elephant. In this story, a group of blind men encounter an elephant for the first time. Each touches a different part of the elephant and describes it based on their limited, individual experience. This leads to disagreements about the elephant’s true nature, illustrating how a singular perspective can limit one’s understanding of a larger concept.
This parable is relevant when considering artificial intelligence (AI). AI is a complex, multifaceted field. Individuals often form opinions on it based on their personal experiences and knowledge, potentially resulting in a limited or skewed understanding, much like the blind men’s perception of the elephant.
For this reason, it is important that we have more nuanced discussions about the perceived positives and negatives of such a disruptive new technology.
Some of the topics Wenjing and I discussed in this podcast conversation include:
- Exponential Data Growth and AI Systems: Discussion centered on how the volume of data, particularly from AI systems like GPT-4, is increasing exponentially. As AI starts generating more content, this could create a feedback loop leading to astronomical levels of content.
- Interaction with LLM as Protocol: The question arose whether the large language model (LLM) could be equated to a protocol and whether individuals might interact directly with the LLM in the future.
- Digital Identity and Trust in the Age of AI and Deepfakes: Concerns were discussed regarding the rise of AI and deepfakes, particularly their implications for digital identity and trust. The challenges of bi-directional authentication and the potential risk to content-based authentication methods were highlighted.
- Future of Digital Trust Protocols and Authentication: The potential of AI to generate content was related to the future of digital trust protocols. The necessity of digital signatures for authentication was suggested as a possible direction for the future.
- Reframing Identity: A broader understanding of identity was proposed, questioning whether a reframing of identity could influence our understanding of concepts like authentication.
- Trust in AI Bots vs. Humans: Personal observations on the level of trust in AI bots vs. humans were shared, suggesting a quicker formation of trust with bots. The implications for future human-bot relationships were considered.
- Potential Risks and Benefits of Technological Advancements: The discussion acknowledged both the potential risks and the substantial benefits of technological advancements. A greater level of trust in bots due to perceived lower risk was noted, along with a significant move towards open-source models in the digital trust space.
- Open-source Model for LLMs and Other Systems: The pros and cons of adopting an open-source model for Large Language Models (LLMs) and other complex systems were questioned.
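To picture the digital-signature direction raised in the topics above, here is a minimal toy sketch of how signed content could be authenticated. The tiny RSA key, message text, and function names are illustrative assumptions for this post, not anything from the episode; real deployments would use a vetted cryptography library with standard key sizes and padding.

```python
import hashlib

# Toy RSA signing sketch. The key below is deliberately tiny and insecure;
# it only illustrates the sign/verify round trip.
p, q = 61, 53
n = p * q            # public modulus (3233)
e = 17               # public exponent
d = 2753             # private exponent: (e * d) % ((p - 1) * (q - 1)) == 1

def digest(content: bytes) -> int:
    # Hash the content, then reduce into the modulus range (toy only;
    # real RSA signatures use a padding scheme such as PSS instead).
    return int.from_bytes(hashlib.sha256(content).digest(), "big") % n

def sign(content: bytes) -> int:
    # Only the private-key holder can produce this value.
    return pow(digest(content), d, n)

def verify(content: bytes, signature: int) -> bool:
    # Anyone with the public key (n, e) can check the signature;
    # any change to the content changes the digest and breaks the check.
    return pow(signature, e, n) == digest(content)

msg = b"This post was published by its claimed author."
sig = sign(msg)
print(verify(msg, sig))  # True: content and signature match
```

The point of the sketch is the asymmetry: generating the signature requires the private key, while verification needs only public information, which is why signatures remain a candidate for authenticating content even when AI can generate convincing text.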
About Guest
Wenjing is a senior director of technology strategy at Futurewei, leading initiatives focused on trust in the future of computing. His long career encompasses early Internet routing development, optical Internet backbones, secure operating systems, Wi-Fi and 5G mobile networks, cloud-native services, and responsible artificial intelligence.
He is a founding Steering Committee member of the Trust over IP Foundation. He contributed as the primary author of the Trust over IP Technology Architecture specification in which he articulated the layered approach to decompose the trust protocol stack and defined the core requirements of the trust spanning layer. Following that work, he is currently a co-Chair of the Trust Spanning Protocol task force proposing the Inter-Trust Domain Protocol (ITDP) as the trust spanning protocol bridging different trust domains across the Internet. He is also a co-Chair of the AI and Metaverse task force currently drafting the white paper “Digital Trust in the Age of Generative AI”.
Wenjing is a founding Board Member of the newly launched OpenWallet Foundation, with a mission to enable a trusted digital future with interoperability for a wide range of wallet use cases, and also serves on its Technical Advisory Council (TAC). He is a strong advocate of human-centric digital trust as a foundation for the responsible deployment of advanced artificial intelligence technologies.
Where to find Wenjing ➡️ LinkedIn: https://www.linkedin.com/in/wenjingchu/