🎧 Listen to this Episode on Spotify 🎧
🎧 Listen to this Episode on Apple Podcasts 🎧
About the Podcast Episode
Earlier this year, Wenjing and I delved into the fascinating intersection of Digital Trust and Generative AI in an SSI Orbit podcast episode (here), which serves as an excellent primer on the subject for those unfamiliar with it. Since that episode, the landscape of generative AI solutions has undergone significant transformations, prompting us to record a follow-up discussion.
In this Part 2 episode, we were joined by Sankarshan, who co-chairs the ‘AI and Metaverse Task Force’ at the Trust over IP Foundation, alongside Wenjing. His insights added a valuable dimension to our conversation, particularly as we tackled prevalent misconceptions in the field of AI.
One key misconception we discussed was the belief that AI systems are purely statistical and random. We emphasized that, in reality, the world itself is inherently statistical, and AI systems are designed to mirror human decision-making processes, which also rest on statistical reasoning.
Another topic we explored was the common overestimation of AI’s logical reasoning capabilities and the underestimation of the constraints within AI systems. This discussion led us into the realm of AI governance. We debated the suitability of non-profit models for managing emerging technologies like AI, given how capital-intensive these technologies are and how quickly they are still maturing. Examples like OpenAI and Signal were brought up to illustrate these points.
We aimed to focus part of the discussion on identifying practical solutions and low-hanging fruit for building digital trust tools. These tools are crucial for consumers to discern truth in an era increasingly dominated by generative AI-driven content. Additionally, we delved into strategies to mitigate and prevent the risks of AI abuse, especially as AI systems become more integrated into our daily lives.
A topic that resonated with me was the importance of provenance and authentication in AI systems, especially in content creation. We underscored the need for more robust tools to authenticate AI-generated content. However, we also acknowledged the limitations of current tools in ensuring provenance and their inability to address all security risks comprehensively.
Our discussion also touched upon the influence of human nature on AI development and governance. We pondered whether the current ideal-driven structures would evolve towards more traditional capitalist models, accompanied by regulation. This led to a broader conversation about the purpose of technology, which its creators usually frame as ‘a benefit to mankind’. Yet, the complexity of realizing this ideal becomes apparent when considering who should govern these technologies and how they should be regulated.
This podcast episode not only sheds light on the complexities of digital trust in the age of AI but also opens up avenues for further exploration and discussion. As we move towards 2024, these conversations are more relevant than ever, guiding us towards a future where technology is both a tool for advancement and a subject of thoughtful governance.
—
The full list of topics Wenjing, Sankarshan, and I discussed in this podcast includes:
- OpenAI Drama – Discussion about the recent OpenAI drama, including Sam Altman’s firing and rehiring, and Microsoft’s involvement.
- AI Governance and AI Misconceptions – Why we should focus on AI governance rather than getting caught up in daily news and doom scenarios. Discussion about misconceptions in AI, specifically statistical and computational misconceptions. Exploring practical solutions for digital trust in AI.
- Provenance and Authentication in AI – The importance of provenance in AI applications, such as robotics or content creation. Challenges in authentication and the need for stronger solutions. The applicability of these solutions across various domains.
- Content Authenticity – Defining content authenticity and its governance. The distinction between machine-generated and human-verified content.
- Digital Trust and Content Authenticity – Discussing digital trust in the context of AI. Separating protocols from end-user platforms. The role of content authenticity in digital trust. Examples of current approaches to content authenticity on the internet.
- Truth and Trust Registries – Human pursuit of truth and the influence of context and consumption. The role of trust registries in providing inputs for trust decisions. The need for widespread inputs to aid in making trust decisions.
About the Guests
Wenjing Chu is a senior director of technology strategy at Futurewei, leading initiatives focused on trust in the future of computing. His long career encompasses early Internet routing development, optical Internet backbones, security operating systems, Wi-Fi and 5G mobile networks, cloud-native services, and responsible artificial intelligence.
He is a founding Steering Committee member of the Trust over IP Foundation. He was the primary author of the Trust over IP Technology Architecture specification, in which he articulated the layered approach to decomposing the trust protocol stack and defined the core requirements of the trust spanning layer. Following that work, he is currently a co-Chair of the Trust Spanning Protocol Task Force, which proposes the Inter-Trust Domain Protocol (ITDP) as the trust spanning protocol bridging different trust domains across the Internet. He is also a co-Chair of the AI and Metaverse Task Force.
Wenjing is a founding Board Member of the newly launched OpenWallet Foundation, whose mission is to enable a trusted digital future with interoperability for a wide range of wallet use cases, and he also serves on its Technical Advisory Council (TAC). He is a strong advocate of human-centric digital trust as a foundation for the responsible deployment of advanced artificial intelligence technologies.
Where to find Wenjing?
➡️ LinkedIn: https://www.linkedin.com/in/wenjingchu/
Sankarshan Mukhopadhyay works on Standards, Community and Customer Experience at Dhiway. He is also a Trustee at the Sovrin Foundation and a co-author of the Principles of SSI published and maintained by the Sovrin Foundation. Digital trust ecosystems, their governance models, and economic approaches that incubate innovation are a few of the topics in digital ID systems that interest him.
Where to find Sankarshan?
➡️ LinkedIn: https://www.linkedin.com/in/sankarshan/