
MLOps.community

Weekly talks and fireside chats about everything related to the emerging space of DevOps for Machine Learning, a.k.a. MLOps or Machine Learning Operations.


Last Episode Date: 15 October 2024

Total Episodes: 378

Collaboration
Podcast Interviews
Affiliate and Joint Ventures
Sponsorships
Promo Swaps
Feed Swaps
Guest/Interview Swaps
Monetization
Advertising and Sponsors
Affiliate and JVs
Paid Interviews
Products, Services or Events
Memberships
Donations
Exploring the Impact of Agentic Workflows // Raj Rikhy // #268
15 October 2024

Raj Rikhy is a Senior Product Manager at Microsoft AI + R, enabling deep reinforcement learning use cases for autonomous systems. Previously, Raj was the Group Technical Product Manager in the CDO for Data Science and Deep Learning at IBM. Prior to joining IBM, Raj worked in product management for several years at Bitnami, AppDirect, and Salesforce.

// MLOps Podcast #268 with Raj Rikhy, Principal Product Manager at Microsoft.

// Abstract
In this MLOps Community podcast, Demetrios chats with Raj Rikhy, Principal Product Manager at Microsoft, about deploying AI agents in production. They discuss starting with simple tools, setting clear success criteria, and deploying agents in controlled environments for better scaling. Raj highlights real-time uses like fraud detection and optimizing inference costs with LLMs, while stressing human oversight during early deployment to manage LLM randomness. The episode offers practical advice on deploying AI agents thoughtfully and efficiently, avoiding over-engineering, and integrating AI into everyday applications.

// Bio
Raj is a Senior Product Manager at Microsoft AI + R, enabling deep reinforcement learning use cases for autonomous systems. Previously, Raj was the Group Technical Product Manager in the CDO for Data Science and Deep Learning at IBM. Prior to joining IBM, Raj worked in product management for several years at Bitnami, AppDirect, and Salesforce.

// MLOps Swag/Merch
https://mlops-community.myshopify.com/

// Related Links
Website: https://www.microsoft.com/en-us/research/focus-area/ai-and-microsoft-research/

--------------- ✌️Connect With Us ✌️ -------------
Join our slack community: https://go.mlops.community/slack
Follow us on Twitter: @mlopscommunity
Sign up for the next meetup: https://go.mlops.community/register
Catch all episodes, blogs, newsletters, and more: https://mlops.community/
Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/
Connect with Raj on LinkedIn: https://www.linkedin.com/in/rajrikhy/

51 min
The Only Constant is (Data) Change // Panel // DE4AI
11 October 2024

// Abstract
If there is one thing that is true, it is that data is constantly changing. How can we keep up with these changes? How can we make sure that every stakeholder has visibility? How can we create a culture of understanding around data change management?

// Bio
- Benjamin Rogojan: Data Science and Engineering Consultant @ Seattle Data Guy
- Chad Sanderson: CEO & Co-Founder @ Gable
- Christophe Blefari: CTO & Co-founder @ NAO
- Maggie Hays: Founding Community Product Manager, DataHub @ Acryl Data

A big thank you to our Premium Sponsors @Databricks, @tecton8241, & @onehouseHQ for their generous support!

40 min
The AI Dream Team: Strategies for ML Recruitment and Growth // Jelmer Borst and Daniela Solis // #267
9 October 2024

The AI Dream Team: Strategies for ML Recruitment and Growth // MLOps Podcast #267 with Jelmer Borst, Analytics & Machine Learning Domain Lead, and Daniela Solis, Machine Learning Product Owner, of Picnic.

// Abstract
Like many companies, Picnic started out with a small, central data science team. As the team grows larger and focuses on more complex models, questions arise about skill sets and organisational setup:
- Use an ML platform, or build our own?
- A central team vs. embedded?
- Hire data scientists vs. ML engineers vs. MLOps engineers?
- How to foster a team culture of end-to-end ownership?
- How to balance short-term and long-term impact?

// Bio
Jelmer Borst
Jelmer leads the analytics & machine learning teams at Picnic, an app-only online groceries company based in The Netherlands. Whilst his background is in aerospace engineering, he was looking for something faster-paced and found that at Picnic. He loves the intersection of solving business challenges using technology & data. In his free time he loves to cook and tinker with the latest AI developments.

Daniela Solis Morales
As a Machine Learning Lead at Picnic, I am responsible for ensuring the success of end-to-end Machine Learning systems. My work involves bringing models into production across various domains, including Personalization, Fraud Detection, and Natural Language Processing.

// MLOps Jobs board
https://mlops.pallet.xyz/jobs

// MLOps Swag/Merch
https://mlops-community.myshopify.com/

--------------- ✌️Connect With Us ✌️ -------------
Join our slack community: https://go.mlops.community/slack
Follow us on Twitter: @mlopscommunity
Sign up for the next meetup: https://go.mlops.community/register
Catch all episodes, blogs, newsletters, and more: https://mlops.community/
Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/
Connect with Jelmer on LinkedIn: https://www.linkedin.com/in/japborst
Connect with Daniela on LinkedIn: https://www.linkedin.com/in/daniela-solis-morales/

58 min
Making Your Company LLM-native // Francisco Ingham // #266
6 October 2024

Francisco Ingham, LLM consultant, NLP developer, and founder of Pampa Labs.

Making Your Company LLM-native // MLOps Podcast #266 with Francisco Ingham, Founder of Pampa Labs.

// Abstract
Being LLM-native is becoming one of the key differentiators among companies in vastly different verticals. Everyone wants to use LLMs, and everyone wants to be on top of the current tech, but what does it really mean to be LLM-native? LLM-native involves two ends of a spectrum. On the one hand, we have the product or service that the company offers, which surely offers many automation opportunities. LLMs can be applied strategically to scale at a lower cost and offer a better experience for users. But being LLM-native not only involves the company's customers, it also involves each stakeholder involved in the company's operations. How can employees integrate LLMs into their daily workflows? How can we as developers leverage the advancements in the field not only as builders but as adopters? We will tackle these and other key questions for anyone looking to capitalize on the LLM wave, prioritizing real results over the hype.

// Bio
Currently working at Pampa Labs, where we help companies become AI-native and build AI-native products. Our expertise lies in the LLM-science side, or how to build a successful data flywheel to leverage user interactions to continuously improve the product. We also spearhead pampa-friends, the first Spanish-speaking community of AI Engineers. Previously worked in management consulting, was a TA for fastai in SF, and led the cross-AI + dev tools team at Mercado Libre.

// MLOps Jobs board
https://mlops.pallet.xyz/jobs

// MLOps Swag/Merch
https://mlops-community.myshopify.com/

// Related Links
Website: pampa.ai

--------------- ✌️Connect With Us ✌️ -------------
Join our slack community: https://go.mlops.community/slack
Follow us on Twitter: @mlopscommunity
Sign up for the next meetup: https://go.mlops.community/register
Catch all episodes, blogs, newsletters, and more: https://mlops.community/
Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/
Connect with Francisco on LinkedIn: https://www.linkedin.com/in/fpingham/

Timestamps:
[00:00] Francisco's preferred coffee [00:13] Takeaways [00:37] Please like, share, leave a review, and subscribe to our MLOps channels! [00:51] A Literature Geek [02:41] LLM-native company [03:54] Integrating LLMs in workflows [07:21] Unexpected LLM applications [10:38] LLMs in the development process [14:00] Vibe check to evaluation [15:36] Experiment tracking optimizations [20:22] LLMs as judges discussion [24:43] Automated presentations for podcasts [27:48] AI operating system and agents [31:29] Importance of SEO expertise [35:33] Experimentation and evaluation [39:20] AI integration strategies [41:50] RAG approach spectrum analysis [44:40] Search vs Retrieval in AI [49:02] Recommender Systems vs RAG [52:08] LLMs in recommender systems [53:10] LLM interface design insights

57 min
Unpacking 3 Types of Feature Stores // Simba Khadder // #265
1 October 2024

Simba Khadder is the Founder & CEO of Featureform. He started his ML career in recommender systems, where he architected a multi-modal personalization engine that powered hundreds of millions of users' experiences.

Unpacking 3 Types of Feature Stores // MLOps Podcast #265 with Simba Khadder, Founder & CEO of Featureform.

// Abstract
Simba dives into how feature stores have evolved and how they now intersect with vector stores, especially in the world of machine learning and LLMs. He breaks down what embeddings are, how they power recommender systems, and why personalization is key to improving LLM prompts. Simba also sheds light on the difference between feature and vector stores, explaining how each plays its part in making ML workflows smoother. Plus, we get into the latest challenges and cool innovations happening in MLOps.

// Bio
Simba Khadder is the Founder & CEO of Featureform. After leaving Google, Simba founded his first company, TritonML. His startup grew quickly and Simba and his team built ML infrastructure that handled over 100M monthly active users. He instilled his learnings into Featureform’s virtual feature store. Featureform turns your existing infrastructure into a Feature Store. He’s also an avid surfer, a mixed martial artist, a published astrophysicist for his work on finding Planet 9, and he ran the SF marathon in basketball shoes.

// MLOps Jobs board
https://mlops.pallet.xyz/jobs

// MLOps Swag/Merch
https://mlops-community.myshopify.com/

// Related Links
Website: featureform.com
BigQuery Feature Store // Nicolas Mauti // MLOps Podcast #255: https://www.youtube.com/watch?v=NtDKbGyRHXQ&ab_channel=MLOps.community

--------------- ✌️Connect With Us ✌️ -------------
Join our slack community: https://go.mlops.community/slack
Follow us on Twitter: @mlopscommunity
Sign up for the next meetup: https://go.mlops.community/register
Catch all episodes, blogs, newsletters, and more: https://mlops.community/
Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/
Connect with Simba on LinkedIn: https://www.linkedin.com/in/simba-k/

Timestamps:
[00:00] Simba's preferred coffee [00:08] Takeaways [02:01] Coining the term 'Embedding' [07:10] Dual Tower Recommender System [10:06] Complexity vs Reliability in AI [12:39] Vector Stores and Feature Stores [17:56] Value of Data Scientists [20:27] Scalability vs Quick Solutions [23:07] MLOps vs LLMOps Debate [24:12] Feature Stores' current landscape [32:02] ML lifecycle challenges and tools [36:16] Feature Stores bundling impact [42:13] Feature Stores and BigQuery [47:42] Virtual vs Literal Feature Store [50:13] Hadoop Community Challenges [52:46] LLM data lifecycle challenges [56:30] Personalization in prompting usage [59:09] Contextualizing company variables [1:03:10] DSPy framework adoption insights [1:05:25] Wrap up

67 min
Reinvent Yourself and Be Curious // Stefano Bosisio // MLOps Podcast #264
27 September 2024

Stefano Bosisio is an accomplished MLOps Engineer with a solid background in Biomedical Engineering, focusing on cellular biology, genetics, and molecular simulations.

Reinvent Yourself and Be Curious // MLOps Podcast #264 with Stefano Bosisio, MLOps Engineer at Synthesia.

// Abstract
This talk goes through Stefano's experience as an inspirational source for anyone who wants to start a career in the MLOps sector. Moreover, Stefano also introduces his MLOps course on the MLOps Community platform.

// Bio
Stefano Bosisio is an MLOps Engineer with a versatile background that ranges from biomedical engineering to computational chemistry and data science. Stefano got an MSc in biomedical engineering from the Polytechnic of Milan, focusing on cellular biology, genetics, and molecular simulations. Then, he landed in Scotland, in Edinburgh, to earn a PhD in chemistry from the University of Edinburgh, where he developed robust physical theories and simulation methods to understand and unlock the drug discovery problem. After completing his PhD, Stefano transitioned into Data Science, where he began his career as a data scientist. His interest in machine learning engineering grew, leading him to specialize in building ML platforms that drive business success. Stefano's expertise bridges the gap between complex scientific research and practical machine learning applications, making him a key figure in the MLOps field. Bonus points beyond data: Stefano, as a proper Italian, loves cooking and (mainly) baking, playing the piano, crocheting, and running half-marathons.

// MLOps Jobs board
https://mlops.pallet.xyz/jobs

// MLOps Swag/Merch
https://mlops-community.myshopify.com/

// Related Links
Website: https://medium.com/@stefanobosisio1
First MLOps Stack Course: https://learn.mlops.community/courses/languages/your-first-mlops-stack/

--------------- ✌️Connect With Us ✌️ -------------
Join our slack community: https://go.mlops.community/slack
Follow us on Twitter: @mlopscommunity
Sign up for the next meetup: https://go.mlops.community/register
Catch all episodes, blogs, newsletters, and more: https://mlops.community/
Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/
Connect with Stefano on LinkedIn: https://www.linkedin.com/in/stefano-bosisio1/

Timestamps:
[00:00] Stefano's preferred coffee [00:12] Takeaways [01:06] Stefano's MLOps Course [01:47] From Academia to AI Industry [09:10] Data science and platforms [16:53] Persistent MLOps challenges [21:23] Internal evangelization for success [24:21] Adapt communication skills to diverse individual needs [29:43] Key components of ML pipelines are essential [33:47] Create a generalizable AI training pipeline with Kubeflow [35:44] Consider cost-effective algorithms and deployment methods [39:02] Agree with dream platform; LLMs require simple microservice [42:48] Auto scaling: crucial, tricky, prone to issues [46:28] Auto-scaling issues with Apache Beam data pipelines [49:49] Guiding students through MLOps with practical experience [53:16] Bulletproof Problem Solving: Decision trees for problem analysis [55:03] Evaluate tools critically; appreciate educational opportunities [57:01] Wrap up

57 min
Global Feature Store // Gottam Sai Bharath & Cole Bailey // #263
24 September 2024

Global Feature Store: Optimizing Locally and Scaling Globally at Delivery Hero // MLOps Podcast #263 with Delivery Hero's Gottam Sai Bharath, Senior Machine Learning Engineer & Cole Bailey, ML Platform Engineering Manager.

// Abstract
Delivery Hero innovates locally within each department to develop MLOps practices most effective in that particular context. We also discuss our efforts to reduce redundancy and inefficiency across the company. Hear about our experiences in creating multiple micro feature stores within our departments, and our goal to unify these into a Global Feature Store that is more powerful when combined.

// Bio
Sai Bharath Gottam
With a passion for translating complex technical concepts into practical solutions, Sai excels at making intricate topics accessible and engaging. As a Senior Machine Learning Engineer at Delivery Hero, Sai works on cutting-edge machine learning platforms that guarantee seamless delivery experiences. Always eager to share insights and innovations, Sai is committed to making technology understandable and enjoyable for all.

Cole Bailey
Bridging data science and production-grade software engineering.

// MLOps Jobs board
https://mlops.pallet.xyz/jobs

// MLOps Swag/Merch
https://mlops-community.myshopify.com/

// Related Links
Website: https://www.deliveryhero.com/

--------------- ✌️Connect With Us ✌️ -------------
Join our slack community: https://go.mlops.community/slack
Follow us on Twitter: @mlopscommunity
Sign up for the next meetup: https://go.mlops.community/register
Catch all episodes, blogs, newsletters, and more: https://mlops.community/
Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/
Connect with Sai on LinkedIn: https://www.linkedin.com/in/sai-bharath-gottam/
Connect with Cole on LinkedIn: www.linkedin.com/in/cole-bailey

Timestamps:
[00:00] Sai and Cole's preferred coffee [00:42] Takeaways [01:51] Please like, share, leave a review, and subscribe to our MLOps channels! [02:08] Life changes in Delivery Hero [05:21] Global Feature Store and Pandora [12:21] Tech integration strategies [20:08] Defining Feature and Feature Store [22:46] Feature Store vs Data Platform [26:26] Features are discoverable [32:56] Onboarding and Feature Testing [36:00] Data consistency [41:07] Future Vision Feature Store [44:17] Multi-cloud strategies [46:33] Wrap up

50 min
RAG Quality Starts with Data Quality // Adam Kamor // #262
20 September 2024

Adam Kamor is the Co-founder of Tonic, a company that specializes in creating mock data that preserves secure datasets.

RAG Quality Starts with Data Quality // MLOps Podcast #262 with Adam Kamor, Co-Founder & Head of Engineering of Tonic.ai.

// Abstract
Dive into what makes Retrieval-Augmented Generation (RAG) systems tick, and it all starts with the data. We’ll be talking with an expert in the field who knows exactly how to transform messy, unstructured enterprise data into high-quality fuel for RAG systems. Expect to learn the essentials of data prep, uncover the common challenges that can derail even the best-laid plans, and discover some insider tips on how to boost your RAG system’s performance. We’ll also touch on the critical aspects of data privacy and governance, ensuring your data stays secure while maximizing its utility. If you’re aiming to get the most out of your RAG systems or just curious about the behind-the-scenes work that makes them effective, this episode is packed with insights that can help you level up your game.

// Bio
Adam Kamor, PhD, is the Co-founder and Head of Engineering of Tonic.ai. Since completing his PhD in Physics at Georgia Tech, Adam has committed himself to enabling the work of others through the programs he develops. In his roles at Microsoft and Kabbage, he handled UI design and led the development of new features to anticipate customer needs. At Tableau, he played a role in developing the platform’s analytics/calculation capabilities. As a founder of Tonic.ai, he is leading the development of unstructured data solutions that are transforming the work of fellow developers, analysts, and data engineers alike.

// MLOps Jobs board
https://mlops.pallet.xyz/jobs

// MLOps Swag/Merch
https://mlops-community.myshopify.com/

// Related Links
Website: https://www.tonic.ai
Various topics about RAG and LLM security are available on Tonic.ai's blogs:
https://www.tonic.ai/blog
https://www.tonic.ai/blog/how-to-prevent-data-leakage-in-your-ai-applications-with-tonic-textual-and-snowpark-container-services
https://www.tonic.ai/blog/rag-evaluation-series-validating-the-rag-performance-of-the-openais-rag-assistant-vs-googles-vertex-search-and-conversation
https://www.youtube.com/watch?v=5xdyt4oRONU
https://www.tonic.ai/blog/what-is-retrieval-augmented-generation-the-benefits-of-implementing-rag-in-using-llms

--------------- ✌️Connect With Us ✌️ -------------
Join our slack community: https://go.mlops.community/slack
Follow us on Twitter: @mlopscommunity
Sign up for the next meetup: https://go.mlops.community/register
Catch all episodes, blogs, newsletters, and more: https://mlops.community/
Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/
Connect with Adam on LinkedIn: https://www.linkedin.com/in/adam-kamor-85720b48/

Timestamps:
[00:00] Adam's preferred coffee [00:24] Takeaways [00:59] Huge shout out to Tonic.ai for supporting the community! [01:03] Please like, share, leave a review, and subscribe to our MLOps channels! [01:18] Naming a product [03:38] Tonic Textual [08:00] Managing PII and Data Safety [10:16] Chunking strategies for context [14:19] Data prep for RAG [17:20] Data quality in AI systems [20:58] Data integrity in PDFs [27:12] Ensuring chatbot data freshness [33:02] Managed PostgreSQL and Vector DB [34:49] RBAC database vs file access [37:35] Slack AI data leakage solutions [42:26] Hot swapping [46:06] LLM security concerns [47:03] Privacy management best practices [49:02] Chatbot design patterns [50:39] RAG growth and impact [52:40] Retrieval Evaluation best practices [59:20] Wrap up

59 min
Who's MLOps for Anyway? // Jonathan Rioux // #261
17 September 2024

Jonathan Rioux is a Managing Principal of AI Consulting for EPAM Systems, where he advises clients on how to get from idea to realized AI products with the minimum of fuss and friction.

Who's MLOps for Anyway? // MLOps Podcast #261 with Jonathan Rioux, Managing Principal, AI Consulting at EPAM Systems.

// Abstract
The year is 2024 and we are all staring over the cliff toward the abyss of disillusionment for Generative AI. Every organization, developer, and AI-adjacent individual is now talking about "making AI real" and "turning a ROI on AI initiatives". MLOps and LLMOps are taking the stage as the solution: equip your AI teams with the best tools money can buy, grab tokens by the fistful, and watch the value rake in. Sound familiar, and eerily similar to previous ML hype cycles? From solo devs to large organizations, how can we avoid the same pitfalls as last time and get off the endless hamster wheel?

// Bio
Jonathan is a Managing Principal of AI Consulting for EPAM, where he advises clients on how to get from idea to realized AI products with the minimum of fuss and friction. He's obsessed with the mental models of ML and how to organize harmonious AI practices. Jonathan published "Data Analysis with Python and PySpark" (Manning, 2022).

// MLOps Jobs board
https://mlops.pallet.xyz/jobs

// MLOps Swag/Merch
https://mlops-community.myshopify.com/

// Related Links
Website: raiks.ca

--------------- ✌️Connect With Us ✌️ -------------
Join our slack community: https://go.mlops.community/slack
Follow us on Twitter: @mlopscommunity
Sign up for the next meetup: https://go.mlops.community/register
Catch all episodes, blogs, newsletters, and more: https://mlops.community/
Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/
Connect with Jonathan on LinkedIn: https://www.linkedin.com/in/jonathanrx/

Timestamps:
[00:00] Jonathan's preferred coffee [00:25] Takeaways [01:44] MLOps as not being sexy [03:49] Do not conflate MLOps with ROI [06:21] ML Certification Business Idea [11:02] AI Adoption Missteps [15:40] Slack AI Privacy Risks [18:17] Decentralized AI success [22:00] Michelangelo Hub-Spoke Model [27:45] Engineering tools for everyone [33:38 - 35:20] SAS Ad [35:21] POC to ROI transition [42:08] Repurposing project learnings [46:24] Balancing Innovation and ROI [55:35] Using classification model [1:00:24] Chatbot evolution comparison [1:01:20] Balancing Automation and Trust [1:06:30] Manual to AI transition [1:09:57] Wrap up

70 min
Alignment is Real // Shiva Bhattacharjee // #260
13 September 2024

Shiva Bhattacharjee is the Co-founder and CTO of TrueLaw, where the team builds bespoke models for law firms for a wide variety of tasks.

Alignment is Real // MLOps Podcast #260 with Shiva Bhattacharjee, CTO of TrueLaw Inc.

// Abstract
If an off-the-shelf model can understand and solve a domain-specific task well enough, either your task isn't that nuanced or you have achieved AGI. We discuss when fine-tuning is necessary over prompting, and how we have created a loop of sampling, collecting feedback, and fine-tuning to create models that seem to perform exceedingly well in domain-specific tasks.

// Bio
20 years of experience in distributed and data-intensive systems spanning work at Apple, Arista Networks, Databricks, and Confluent. Currently CTO at TrueLaw, where we provide a framework to fold user feedback, such as lawyer critiques of a given task, into proprietary LLM models through fine-tuning mechanics, resulting in 7-10x improvements over the base model.

// MLOps Jobs board
https://mlops.pallet.xyz/jobs

// MLOps Swag/Merch
https://mlops-community.myshopify.com/

// Related Links
Website: www.truelaw.ai

--------------- ✌️Connect With Us ✌️ -------------
Join our slack community: https://go.mlops.community/slack
Follow us on Twitter: @mlopscommunity
Sign up for the next meetup: https://go.mlops.community/register
Catch all episodes, blogs, newsletters, and more: https://mlops.community/
Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/
Connect with Shiva on LinkedIn: https://www.linkedin.com/in/shivabhattacharjee/

Timestamps:
[00:00] Shiva's preferred coffee [00:58] Takeaways [01:17] DSPy Implementation [04:57] Evaluating DSPy risks [08:13] Community-driven DSPy tool [12:19] RAG implementation strategies [17:02] Cost-effective embedding fine-tuning [18:51] AI infrastructure decision-making [24:13] Prompt data flow evolution [26:32] Buy vs build decision [30:45] Tech stack insights [38:20] Wrap up

40 min