<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="rss.xslt" ?>
<rss
    xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd"
    xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"
    xmlns:content="http://purl.org/rss/1.0/modules/content/"
    xmlns:atom="http://www.w3.org/2005/Atom"
    xmlns:spotify="http://www.spotify.com/ns/rss"
    xmlns:psc="http://podlove.org/simple-chapters/"
    xmlns:media="http://search.yahoo.com/mrss/"
    xmlns:podcast="https://podcastindex.org/namespace/1.0"
    version="2.0">
    <channel>
        <title>UNESCO’s Hands-On AI Supervision</title>
        <link>https://podcast.ausha.co/unesco-s-hands-on-ai-supervision</link>
        <atom:link rel="self" type="application/rss+xml" href="https://feed.ausha.co/egjJ3cKn5eeg"/>
        <description>
UNESCO’s Hands-On AI Supervision: Lessons from Practice is a six-episode mini podcast series showcasing concrete lessons from the 2nd Expert Roundtable on AI Supervision, convened by UNESCO. Each episode distils insights from hands-on exercises with leading experts on AI risk mapping, evaluations, red teaming, benchmarking, cybersecurity, and engagement with market actors. Designed for regulators, policymakers, and practitioners, the series explores practical methodologies, emerging challenges, and the institutional capacities needed for effective AI oversight. Through focused conversations with specialists, the series provides accessible, actionable knowledge to strengthen technical readiness and foster ongoing dialogue across the global AI supervision community.

Hosted on Ausha. See ausha.co/privacy-policy for more information.</description>
        <language>en</language>
        <copyright>UNESCO</copyright>
        <lastBuildDate>Mon, 30 Mar 2026 13:45:53 +0000</lastBuildDate>
        <pubDate>Mon, 30 Mar 2026 13:45:53 +0000</pubDate>
        <webMaster>feeds@ausha.co (Ausha)</webMaster>
        <generator>Ausha (https://www.ausha.co)</generator>
        <spotify:countryOfOrigin>fr</spotify:countryOfOrigin>
        
        <itunes:author>UNESCO</itunes:author>
        <itunes:owner>
            <itunes:name>UNESCO</itunes:name>
            <itunes:email>E.Rudowski@unesco.org</itunes:email>
        </itunes:owner>
        <itunes:summary>
UNESCO’s Hands-On AI Supervision: Lessons from Practice is a six-episode mini podcast series showcasing concrete lessons from the 2nd Expert Roundtable on AI Supervision, convened by UNESCO. Each episode distils insights from hands-on exercises with leading experts on AI risk mapping, evaluations, red teaming, benchmarking, cybersecurity, and engagement with market actors. Designed for regulators, policymakers, and practitioners, the series explores practical methodologies, emerging challenges, and the institutional capacities needed for effective AI oversight. Through focused conversations with specialists, the series provides accessible, actionable knowledge to strengthen technical readiness and foster ongoing dialogue across the global AI supervision community.

Hosted on Ausha. See ausha.co/privacy-policy for more information.</itunes:summary>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>no</itunes:block>
        <podcast:block>no</podcast:block>
        <podcast:locked>yes</podcast:locked>
        <itunes:type>episodic</itunes:type>
                
        <googleplay:author>UNESCO</googleplay:author>
        <googleplay:email>E.Rudowski@unesco.org</googleplay:email>
        <googleplay:description>
UNESCO’s Hands-On AI Supervision: Lessons from Practice is a six-episode mini podcast series showcasing concrete lessons from the 2nd Expert Roundtable on AI Supervision, convened by UNESCO. Each episode distils insights from hands-on exercises with leading experts on AI risk mapping, evaluations, red teaming, benchmarking, cybersecurity, and engagement with market actors. Designed for regulators, policymakers, and practitioners, the series explores practical methodologies, emerging challenges, and the institutional capacities needed for effective AI oversight. Through focused conversations with specialists, the series provides accessible, actionable knowledge to strengthen technical readiness and foster ongoing dialogue across the global AI supervision community.

Hosted on Ausha. See ausha.co/privacy-policy for more information.</googleplay:description>
        <googleplay:explicit>false</googleplay:explicit>

                    <podcast:funding url="">Support us!</podcast:funding>
        
        <category>Science</category>
    
        <itunes:category text="Science">
                    <itunes:category text="Social Sciences"/>
            </itunes:category>
    
        <image>
            <url>https://image.ausha.co/PDB1E0ODl04l4zEFVZJV0XBcFtDOHfpETW5qXbxT_1400x1400.jpeg?t=1769677920</url>
            <title>UNESCO’s Hands-On AI Supervision</title>
            <link>https://podcast.ausha.co/unesco-s-hands-on-ai-supervision</link>
        </image>
        <itunes:image href="https://image.ausha.co/PDB1E0ODl04l4zEFVZJV0XBcFtDOHfpETW5qXbxT_1400x1400.jpeg?t=1769677920"/>
        <googleplay:image href="https://image.ausha.co/PDB1E0ODl04l4zEFVZJV0XBcFtDOHfpETW5qXbxT_1400x1400.jpeg?t=1769677920"/>
        
                    <item>
                <title>Dialogue with Market Actors: Cooperation for Better AI Oversight</title>
                <guid isPermaLink="false">1762acdb60f07edd9bd47e7fa02a0ad99d5ea065</guid>
                <description><![CDATA[<p>Supervision cannot succeed without structured engagement with developers, deployers, and industry partners. In this episode, Huub Janssen discusses best practices for market dialogue, transparency expectations, and collaborative mechanisms that support compliance while fostering innovation.</p><p>The episode highlights how oversight bodies can build trust and shared responsibility across the AI ecosystem.</p><p>Speaker: Huub Janssen (RDI)<br>Interviewer: Dafna Feinholz, Director of the Division a.i. &amp; Chief of Section for Bioethics and Ethics of Science and Technology, UNESCO.</p><p><br></p><br/><p>Hosted on Ausha. See <a href="https://ausha.co/privacy-policy">ausha.co/privacy-policy</a> for more information.</p>]]></description>
                <content:encoded><![CDATA[<p>Supervision cannot succeed without structured engagement with developers, deployers, and industry partners. In this episode, Huub Janssen discusses best practices for market dialogue, transparency expectations, and collaborative mechanisms that support compliance while fostering innovation.</p><p>The episode highlights how oversight bodies can build trust and shared responsibility across the AI ecosystem.</p><p>Speaker: Huub Janssen (RDI)<br>Interviewer: Dafna Feinholz, Director of the Division a.i. &amp; Chief of Section for Bioethics and Ethics of Science and Technology, UNESCO.</p><p><br></p><br/><p>Hosted on Ausha. See <a href="https://ausha.co/privacy-policy">ausha.co/privacy-policy</a> for more information.</p>]]></content:encoded>
                <pubDate>Tue, 10 Mar 2026 23:00:00 +0000</pubDate>
                <enclosure url="https://audio.ausha.co/nZ8ZJFla6J6l.mp3?t=1770645863" length="26237286" type="audio/mpeg"/>
                                    <link>https://podcast.ausha.co/unesco-s-hands-on-ai-supervision/dialogue-with-market-actors-cooperation-for-better-ai-oversight</link>
                
                                <itunes:author>UNESCO</itunes:author>
                <itunes:explicit>false</itunes:explicit>
                                    <itunes:keywords>AI,unesco,Cybersecurity,Compliance,Responsible AI,ai governance,eu ai act,ai supervision,ai regulation,algorithmic accountability,ai risk assessment,ai evaluation,ai benchmarking,ai safety,red teaming,trustworthy ai,ai oversight,ai standards,supervisory authorities,ai policy</itunes:keywords>
                                <itunes:duration>27:19</itunes:duration>
                <itunes:episodeType>full</itunes:episodeType>
                                <itunes:subtitle>
Supervision cannot succeed without structured engagement with developers, deployers, and industry partners. In this episode, Huub Janssen discusses best practices for market dialogue, transparency expectations, and collaborative mechanisms that suppor...</itunes:subtitle>

                
                <googleplay:author>UNESCO</googleplay:author>
                                <googleplay:explicit>false</googleplay:explicit>

                                    <itunes:image href="https://image.ausha.co/XcN9ZkTsQ6ZlOju5xUGfvPYDJQdgVZkdqteyyUfS_1400x1400.jpeg?t=1770645964"/>
                    <googleplay:image href="https://image.ausha.co/XcN9ZkTsQ6ZlOju5xUGfvPYDJQdgVZkdqteyyUfS_1400x1400.jpeg?t=1770645964"/>
                
                                    <psc:chapters version="1.1">
                                            </psc:chapters>
                
                            </item>
                    <item>
                <title>Cybersecurity for AI Supervision: Protecting Systems, Data, and Institutions</title>
                <guid isPermaLink="false">94f71fa8bc8b9483f7d391846e9171499d62f59a</guid>
                <description><![CDATA[<p>As AI systems introduce new attack surfaces, cybersecurity becomes a foundational element of oversight. Carlos Antunes outlines key threat vectors, resilience strategies, and practical measures supervisory authorities can implement.</p><p>This episode gives listeners a clear roadmap for integrating cybersecurity considerations into AI supervision workflows.</p><p>Speaker: Carlos Antunes (Portugal National Cybersecurity Agency)<br>Interviewer: Yannic Duller, Project Consultant, Ethics of AI Unit, UNESCO</p><br/><p>Hosted on Ausha. See <a href="https://ausha.co/privacy-policy">ausha.co/privacy-policy</a> for more information.</p>]]></description>
                <content:encoded><![CDATA[<p>As AI systems introduce new attack surfaces, cybersecurity becomes a foundational element of oversight. Carlos Antunes outlines key threat vectors, resilience strategies, and practical measures supervisory authorities can implement.</p><p>This episode gives listeners a clear roadmap for integrating cybersecurity considerations into AI supervision workflows.</p><p>Speaker: Carlos Antunes (Portugal National Cybersecurity Agency)<br>Interviewer: Yannic Duller, Project Consultant, Ethics of AI Unit, UNESCO</p><br/><p>Hosted on Ausha. See <a href="https://ausha.co/privacy-policy">ausha.co/privacy-policy</a> for more information.</p>]]></content:encoded>
                <pubDate>Tue, 03 Mar 2026 23:00:00 +0000</pubDate>
                <enclosure url="https://audio.ausha.co/YD2DOIjrXp9q.mp3?t=1770645693" length="32350952" type="audio/mpeg"/>
                                    <link>https://podcast.ausha.co/unesco-s-hands-on-ai-supervision/cybersecurity-for-ai-supervision-protecting-systems-data-and-institutions</link>
                
                                <itunes:author>UNESCO</itunes:author>
                <itunes:explicit>false</itunes:explicit>
                                    <itunes:keywords>AI,unesco,Cybersecurity,Compliance,Responsible AI,ai governance,eu ai act,ai supervision,ai regulation,algorithmic accountability,ai risk assessment,ai evaluation,ai benchmarking,ai safety,red teaming,trustworthy ai,ai oversight,ai standards,supervisory authorities,ai policy</itunes:keywords>
                                <itunes:duration>33:41</itunes:duration>
                <itunes:episodeType>full</itunes:episodeType>
                                <itunes:subtitle>
As AI systems introduce new attack surfaces, cybersecurity becomes a foundational element of oversight. Carlos Antunes outlines key threat vectors, resilience strategies, and practical measures supervisory authorities can implement.
This episode gives...</itunes:subtitle>

                
                <googleplay:author>UNESCO</googleplay:author>
                                <googleplay:explicit>false</googleplay:explicit>

                                    <itunes:image href="https://image.ausha.co/ZJYbaA6JXbQJeZN8Fp1bVUIRf7A2bVRIwljDFEHc_1400x1400.jpeg?t=1770645771"/>
                    <googleplay:image href="https://image.ausha.co/ZJYbaA6JXbQJeZN8Fp1bVUIRf7A2bVRIwljDFEHc_1400x1400.jpeg?t=1770645771"/>
                
                                    <psc:chapters version="1.1">
                                            </psc:chapters>
                
                            </item>
                    <item>
                <title>Red Teaming as a Supervisory Tool: Stress-Testing AI Systems</title>
                <guid isPermaLink="false">5b5a8e99d9d73a93e92d9c90a4aee506fbb0f335</guid>
                <description><![CDATA[<p>Red teaming is rapidly becoming a critical component of AI oversight. In this episode, Rumman Chowdhury explains how structured adversarial testing can uncover system vulnerabilities, model failures, and misuse pathways.</p><p>The discussion focuses on practical red-teaming approaches that supervisory authorities can adopt, even with limited resources.</p><p>Speaker: Rumman Chowdhury (Humane Intelligence)<br>Interviewer: Mirela Kmetic-Marceau, Project Consultant, Ethics of AI Unit, UNESCO</p><br/><p>Hosted on Ausha. See <a href="https://ausha.co/privacy-policy">ausha.co/privacy-policy</a> for more information.</p>]]></description>
                <content:encoded><![CDATA[<p>Red teaming is rapidly becoming a critical component of AI oversight. In this episode, Rumman Chowdhury explains how structured adversarial testing can uncover system vulnerabilities, model failures, and misuse pathways.</p><p>The discussion focuses on practical red-teaming approaches that supervisory authorities can adopt, even with limited resources.</p><p>Speaker: Rumman Chowdhury (Humane Intelligence)<br>Interviewer: Mirela Kmetic-Marceau, Project Consultant, Ethics of AI Unit, UNESCO</p><br/><p>Hosted on Ausha. See <a href="https://ausha.co/privacy-policy">ausha.co/privacy-policy</a> for more information.</p>]]></content:encoded>
                <pubDate>Tue, 24 Feb 2026 23:00:00 +0000</pubDate>
                <enclosure url="https://audio.ausha.co/40206IDN6a9n.mp3?t=1770645446" length="25996906" type="audio/mpeg"/>
                                    <link>https://podcast.ausha.co/unesco-s-hands-on-ai-supervision/red-teaming-as-a-supervisory-tool-stress-testing-ai-systems</link>
                
                                <itunes:author>UNESCO</itunes:author>
                <itunes:explicit>false</itunes:explicit>
                                    <itunes:keywords>AI,unesco,Cybersecurity,Compliance,Responsible AI,ai governance,eu ai act,ai supervision,ai regulation,algorithmic accountability,ai risk assessment,ai evaluation,ai benchmarking,ai safety,red teaming,trustworthy ai,ai oversight,ai standards,supervisory authorities,ai policy</itunes:keywords>
                                <itunes:duration>27:04</itunes:duration>
                <itunes:episodeType>full</itunes:episodeType>
                                <itunes:subtitle>
Red teaming is rapidly becoming a critical component of AI oversight. In this episode, Rumman Chowdhury explains how structured adversarial testing can uncover system vulnerabilities, model failures, and misuse pathways.
The discussion focuses on prac...</itunes:subtitle>

                
                <googleplay:author>UNESCO</googleplay:author>
                                <googleplay:explicit>false</googleplay:explicit>

                                    <itunes:image href="https://image.ausha.co/2uFblicurMbzMF2qucsR7ojZBld500WSdtzzHcRy_1400x1400.jpeg?t=1770645534"/>
                    <googleplay:image href="https://image.ausha.co/2uFblicurMbzMF2qucsR7ojZBld500WSdtzzHcRy_1400x1400.jpeg?t=1770645534"/>
                
                                    <psc:chapters version="1.1">
                                            </psc:chapters>
                
                            </item>
                    <item>
                <title>Evaluating AI Systems: Metrics, Methods, and Measurement Gaps</title>
                <guid isPermaLink="false">087e2451f5fbadb9041a419e1a8a9b8d0edef776</guid>
                <description><![CDATA[<p>A deep dive into the metrics and methodologies essential for robust AI evaluations. Agnès Delaborde examines measurement challenges, standards alignment, and the tools supervisory authorities need to assess AI system performance.</p><p>The conversation highlights gaps between emerging benchmarks and real-world regulatory needs.</p><p>Speaker: Agnès Delaborde (Laboratoire national de métrologie et d'essais – LNE)<br>Interviewer: Lihui Xu, Programme Specialist, Ethics of AI Unit, UNESCO</p><br/><p>Hosted on Ausha. See <a href="https://ausha.co/privacy-policy">ausha.co/privacy-policy</a> for more information.</p>]]></description>
                <content:encoded><![CDATA[<p>A deep dive into the metrics and methodologies essential for robust AI evaluations. Agnès Delaborde examines measurement challenges, standards alignment, and the tools supervisory authorities need to assess AI system performance.</p><p>The conversation highlights gaps between emerging benchmarks and real-world regulatory needs.</p><p>Speaker: Agnès Delaborde (Laboratoire national de métrologie et d'essais – LNE)<br>Interviewer: Lihui Xu, Programme Specialist, Ethics of AI Unit, UNESCO</p><br/><p>Hosted on Ausha. See <a href="https://ausha.co/privacy-policy">ausha.co/privacy-policy</a> for more information.</p>]]></content:encoded>
                <pubDate>Tue, 17 Feb 2026 23:00:00 +0000</pubDate>
                <enclosure url="https://audio.ausha.co/6wRw6CX0GW9m.mp3?t=1770645334" length="31818730" type="audio/mpeg"/>
                                    <link>https://podcast.ausha.co/unesco-s-hands-on-ai-supervision/evaluating-ai-systems-metrics-methods-and-measurement-gaps</link>
                
                                <itunes:author>UNESCO</itunes:author>
                <itunes:explicit>false</itunes:explicit>
                                    <itunes:keywords>AI,unesco,Cybersecurity,Compliance,Responsible AI,ai governance,eu ai act,ai supervision,ai regulation,algorithmic accountability,ai risk assessment,ai evaluation,ai benchmarking,ai safety,red teaming,trustworthy ai,ai oversight,ai standards,supervisory authorities,ai policy</itunes:keywords>
                                <itunes:duration>33:08</itunes:duration>
                <itunes:episodeType>full</itunes:episodeType>
                                <itunes:subtitle>
A deep dive into the metrics and methodologies essential for robust AI evaluations. Agnès Delaborde examines measurement challenges, standards alignment, and the tools supervisory authorities need to assess AI system performance.
The conversation high...</itunes:subtitle>

                
                <googleplay:author>UNESCO</googleplay:author>
                                <googleplay:explicit>false</googleplay:explicit>

                                    <itunes:image href="https://image.ausha.co/Km7IK9QWmpvW0SgjnSYtj0HFR7HnRSjYrLs4DWxn_1400x1400.jpeg?t=1770645387"/>
                    <googleplay:image href="https://image.ausha.co/Km7IK9QWmpvW0SgjnSYtj0HFR7HnRSjYrLs4DWxn_1400x1400.jpeg?t=1770645387"/>
                
                                    <psc:chapters version="1.1">
                                            </psc:chapters>
                
                            </item>
                    <item>
                <title>Mapping AI Risks: From Principles to Practice</title>
                <guid isPermaLink="false">620f6708a5e487f1a9dbbd40656c70a5fc299b14</guid>
                <description><![CDATA[<p>This episode explores how supervisory authorities can translate high-level AI risk principles into practical, operational risk-mapping processes. Nathalie Cohen discusses evaluation frameworks, data considerations, and real-world challenges identified during the roundtable exercise, providing regulators with concrete steps for structuring risk identification and prioritisation.</p><p>Speaker: Nathalie Cohen (OECD)<br>Interviewer: Max Kendrick, AI Strategy Coordinator &amp; Senior Advisor, Office of the Director General, UNESCO</p><p><br></p><br/><p>Hosted on Ausha. See <a href="https://ausha.co/privacy-policy">ausha.co/privacy-policy</a> for more information.</p>]]></description>
                <content:encoded><![CDATA[<p>This episode explores how supervisory authorities can translate high-level AI risk principles into practical, operational risk-mapping processes. Nathalie Cohen discusses evaluation frameworks, data considerations, and real-world challenges identified during the roundtable exercise, providing regulators with concrete steps for structuring risk identification and prioritisation.</p><p>Speaker: Nathalie Cohen (OECD)<br>Interviewer: Max Kendrick, AI Strategy Coordinator &amp; Senior Advisor, Office of the Director General, UNESCO</p><p><br></p><br/><p>Hosted on Ausha. See <a href="https://ausha.co/privacy-policy">ausha.co/privacy-policy</a> for more information.</p>]]></content:encoded>
                <pubDate>Tue, 10 Feb 2026 23:00:00 +0000</pubDate>
                <enclosure url="https://audio.ausha.co/8PjPEUP0v8mL.mp3?t=1770645075" length="34888040" type="audio/mpeg"/>
                                    <link>https://podcast.ausha.co/unesco-s-hands-on-ai-supervision/mapping-ai-risks-from-principles-to-practice</link>
                
                                <itunes:author>UNESCO</itunes:author>
                <itunes:explicit>false</itunes:explicit>
                                    <itunes:keywords>AI,unesco,Cybersecurity,Compliance,Responsible AI,ai governance,eu ai act,ai supervision,ai regulation,algorithmic accountability,ai risk assessment,ai evaluation,ai benchmarking,ai safety,red teaming,trustworthy ai,ai oversight,ai standards,supervisory authorities,ai policy</itunes:keywords>
                                <itunes:duration>36:20</itunes:duration>
                <itunes:episodeType>full</itunes:episodeType>
                                <itunes:subtitle>
This episode explores how supervisory authorities can translate high-level AI risk principles into practical, operational risk-mapping processes. Nathalie Cohen discusses evaluation frameworks, data considerations, and real-world challenges identified...</itunes:subtitle>

                
                <googleplay:author>UNESCO</googleplay:author>
                                <googleplay:explicit>false</googleplay:explicit>

                                    <itunes:image href="https://image.ausha.co/YNSDJVKyNcS1qv9BGlThyJ8BPzBnTVWpBc1HQrmn_1400x1400.jpeg?t=1770645269"/>
                    <googleplay:image href="https://image.ausha.co/YNSDJVKyNcS1qv9BGlThyJ8BPzBnTVWpBc1HQrmn_1400x1400.jpeg?t=1770645269"/>
                
                                    <psc:chapters version="1.1">
                                            </psc:chapters>
                
                            </item>
                    <item>
                <title>AI Safety &amp; Benchmarking: Building Trustworthy Evaluation Ecosystems</title>
                <guid isPermaLink="false">3ef9cc22a575e1cb7af6cf92e84b483622d975d4</guid>
                <description><![CDATA[<p>Effective AI supervision requires reliable benchmarking ecosystems. Nicolas Miailhe discusses why benchmarks matter, how they should be constructed, and what regulators need to know about safety evaluations. The conversation highlights emerging international efforts to standardise safety testing and ensure comparability across models.</p><p>Speaker: Nicolas Miailhe (PRISM Eval)</p><p>Interviewer: Doaa Abu Elyounes, Programme Specialist, Ethics of AI Unit, UNESCO</p><br/><p>Hosted on Ausha. See <a href="https://ausha.co/privacy-policy">ausha.co/privacy-policy</a> for more information.</p>]]></description>
                <content:encoded><![CDATA[<p>Effective AI supervision requires reliable benchmarking ecosystems. Nicolas Miailhe discusses why benchmarks matter, how they should be constructed, and what regulators need to know about safety evaluations. The conversation highlights emerging international efforts to standardise safety testing and ensure comparability across models.</p><p>Speaker: Nicolas Miailhe (PRISM Eval)</p><p>Interviewer: Doaa Abu Elyounes, Programme Specialist, Ethics of AI Unit, UNESCO</p><br/><p>Hosted on Ausha. See <a href="https://ausha.co/privacy-policy">ausha.co/privacy-policy</a> for more information.</p>]]></content:encoded>
                <pubDate>Sun, 01 Feb 2026 23:00:00 +0000</pubDate>
                <enclosure url="https://audio.ausha.co/GgWgDS8LP7zQ.mp3?t=1769678322" length="33168106" type="audio/mpeg"/>
                                    <link>https://podcast.ausha.co/unesco-s-hands-on-ai-supervision/ai-safety-benchmarking-building-trustworthy-evaluation-ecosystems</link>
                
                                <itunes:author>UNESCO</itunes:author>
                <itunes:explicit>false</itunes:explicit>
                                    <itunes:keywords></itunes:keywords>
                                <itunes:duration>34:32</itunes:duration>
                <itunes:episodeType>full</itunes:episodeType>
                                <itunes:subtitle>
Effective AI supervision requires reliable benchmarking ecosystems. Nicolas Miailhe discusses why benchmarks matter, how they should be constructed, and what regulators need to know about safety evaluations. The conversation highlights emerging inter...</itunes:subtitle>

                
                <googleplay:author>UNESCO</googleplay:author>
                                <googleplay:explicit>false</googleplay:explicit>

                                    <itunes:image href="https://image.ausha.co/PKxjhFDfgkNSFcLMGA4uoJdmbR5GQRZ92BkPctG0_1400x1400.jpeg?t=1769678518"/>
                    <googleplay:image href="https://image.ausha.co/PKxjhFDfgkNSFcLMGA4uoJdmbR5GQRZ92BkPctG0_1400x1400.jpeg?t=1769678518"/>
                
                                    <psc:chapters version="1.1">
                                            </psc:chapters>
                
                            </item>
            </channel>
</rss>
