Data Science at Home

Francesco Gadaleta
May 16, 2023 • 21min

Warning! Mathematical Mayhem Ahead: Demystifying Liquid Time-Constant Networks (Ep. 228)

Hold on to your calculators and buckle up for a wild mathematical ride in this episode! Brace yourself as we dive into the fascinating realm of Liquid Time-Constant Networks (LTCs), where mathematical content reaches new heights of excitement. In this mind-bending adventure, we demystify the intricacies of LTCs, breaking complex equations and mind-boggling mathematical concepts down into digestible explanations.

References
https://www.science.org/doi/10.1126/scirobotics.adc8892
https://spectrum.ieee.org/liquid-neural-networks#toggle-gdpr
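For those who want to see what the mathematical mayhem looks like in practice, here is a minimal NumPy sketch (not material from the episode) of a single liquid time-constant cell updated with the fused explicit-Euler step x(t+dt) = (x(t) + dt·f·A) / (1 + dt·(1/τ + f)), where f is an input-dependent nonlinearity; all sizes, weights, and names below are illustrative assumptions.

```python
import numpy as np

def ltc_step(x, u, W, U, b, tau, A, dt=0.1):
    """One fused-Euler update of a liquid time-constant (LTC) cell.

    x   : hidden state, shape (hidden,)
    u   : input at this time step, shape (inputs,)
    W,U : recurrent and input weights of the gating nonlinearity f
    tau : per-neuron base time constants
    A   : per-neuron target state the dynamics are pulled toward
    """
    f = np.tanh(W @ x + U @ u + b)          # input-dependent conductance
    # x_{t+1} = (x_t + dt * f * A) / (1 + dt * (1/tau + f))
    return (x + dt * f * A) / (1.0 + dt * (1.0 / tau + f))

# Toy usage: 4 inputs, 8 hidden neurons, random weights.
rng = np.random.default_rng(0)
hidden, inputs = 8, 4
W, U = rng.normal(size=(hidden, hidden)), rng.normal(size=(hidden, inputs))
b, tau, A = np.zeros(hidden), np.ones(hidden), rng.normal(size=hidden)
x = np.zeros(hidden)
for t in range(20):                          # unroll over a short input sequence
    x = ltc_step(x, rng.normal(size=inputs), W, U, b, tau, A)
print(x.round(3))
```

The key point is that each neuron's effective time constant depends on the current input, which is what makes the dynamics "liquid" rather than fixed.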
May 11, 2023 • 34min

Efficiently Retraining Language Models: How to Level Up Without Breaking the Bank (Ep. 227)

Get ready for an eye-opening episode! 🎙️ In our latest podcast episode, we dive deep into the world of LoRA (Low-Rank Adaptation) for large language models (LLMs). This groundbreaking technique is changing the way we approach language model training by leveraging low-rank approximations. Join us as we unravel the mysteries of LoRA and discover how it lets us retrain LLMs with minimal money and resources. We'll explore the practical strategies that empower you to fine-tune your language models without breaking the bank. Whether you're a researcher, developer, or language model enthusiast, this episode is packed with invaluable insights on unlocking the potential of LLMs without draining your resources. Tune in and join the conversation as we show you how to retrain LLMs on a budget. Listen to the full episode now on your favorite podcast platform! 🎧✨

References
LoRA: Low-Rank Adaptation of Large Language Models - https://arxiv.org/abs/2106.09685
Low-rank approximation - https://en.wikipedia.org/wiki/Low-rank_approximation
Attention is all you need - https://arxiv.org/pdf/1706.03762.pdf
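As a companion to the discussion, here is a hedged sketch of the core LoRA idea: freeze the pretrained weight matrix W and learn only a low-rank update B·A, so the layer computes h = Wx + (α/r)·BAx. The class name, rank, and scaling below are illustrative choices, not the paper's reference implementation.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen dense layer plus a trainable low-rank update: h = W x + (alpha/r) * B A x."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():       # the pretrained weights stay frozen
            p.requires_grad_(False)
        d_out, d_in = base.weight.shape
        self.A = nn.Parameter(torch.randn(r, d_in) * 0.01)   # low-rank factor (r x d_in)
        self.B = nn.Parameter(torch.zeros(d_out, r))          # starts at zero: no change at init
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

# Wrapping one 768x768 projection: only A and B receive gradients.
layer = LoRALinear(nn.Linear(768, 768), r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(trainable, total)   # 12288 trainable out of 602880 total parameters
```

Because only A and B are trained, the number of trainable parameters drops by orders of magnitude, which is exactly why retraining stays cheap.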
May 3, 2023 • 44min

Revolutionize Your AI Game: How Running Large Language Models Locally Gives You an Unfair Advantage Over Big Tech Giants (Ep. 226)

This is the first episode about the latest trend in artificial intelligence that's shaking up the industry - running large language models locally on your machine. This new approach allows you to bypass the limitations and constraints of cloud-based models controlled by big tech companies, and take control of your own AI journey. We'll delve into the benefits of running models locally, such as increased speed, improved privacy and security, and greater customization and flexibility. We'll also discuss the technical requirements and considerations for running these models on your own hardware, and provide practical tips and advice to get you started. Join us as we uncover the secrets to unleashing the full potential of large language models and taking your AI game to the next level!

Sponsors
AI-powered Email Security: best-in-class protection against the most sophisticated attacks, from phishing and impersonation to BEC and zero-day threats - https://www.mimecast.com/

References
https://agi-sphere.com/llama-models/
https://crfm.stanford.edu/2023/03/13/alpaca.html
https://beebom.com/how-run-chatgpt-like-language-model-pc-offline/
https://sharegpt.com/
https://stability.ai/
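As a small, hedged illustration of what "running a model locally" can look like, the sketch below loads a causal language model from a local directory with Hugging Face transformers and generates text offline. The checkpoint path is a placeholder for whatever weights you have downloaded, and device_map="auto" assumes the accelerate package is installed.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_dir = "/path/to/local-llm"                      # placeholder for your local checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModelForCausalLM.from_pretrained(
    model_dir,
    device_map="auto",        # spread layers across available GPU/CPU memory
    torch_dtype="auto",       # keep the dtype the weights were saved in
)

prompt = "Explain why running an LLM locally improves privacy:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```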
Apr 26, 2023 • 27min

Rust: A Journey to High-Performance and Confidence in Code at Amethix Technologies (Ep. 225)

The journey of porting our projects to Rust was intense, but it was a decision we made to improve the quality of our software. The migration was not an easy task, as it required a considerable amount of time and resources. However, it was worth the effort as we have seen significant improvements in code reusability, code cleanliness, and performance. In this episode I will tell you why you should consider taking that journey too.    
Apr 18, 2023 • 36min

The Power of Graph Neural Networks: Understanding the Future of AI - Part 2/2 (Ep.224)

In this episode of our podcast, we dive deep into the fascinating world of Graph Neural Networks. First, we explore hierarchical networks, which allow for the efficient representation and analysis of complex graph structures by breaking them down into smaller, more manageable components. Next, we turn our attention to generative graph models, which enable the creation of new graph structures that are similar to those in a given dataset. We discuss the inner workings of these models and their potential applications in fields such as drug discovery and social network analysis. Finally, we delve into the essential pooling mechanism, which allows for the efficient passing of information across different parts of the graph neural network. We examine the various types of pooling mechanisms and their advantages and disadvantages. Whether you're a seasoned graph neural network expert or just starting to explore the field, this episode has something for you. So join us for a deep dive into the power and potential of Graph Neural Networks.

References
Machine Learning with Graphs - http://web.stanford.edu/class/cs224w/
A Comprehensive Survey on Graph Neural Networks - https://arxiv.org/abs/1901.00596
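To make the hierarchical pooling discussion concrete, here is a hedged sketch of DiffPool-style coarsening, where a learned soft assignment matrix S maps nodes to clusters, producing pooled features X' = SᵀX and a pooled adjacency A' = SᵀAS. Shapes and names are illustrative, not code from the episode or the referenced papers.

```python
import torch

def diffpool_coarsen(X, A, S_logits):
    """DiffPool-style hierarchical coarsening (a minimal sketch).

    X        : node features, shape (n_nodes, d)
    A        : dense adjacency matrix, shape (n_nodes, n_nodes)
    S_logits : learned cluster-assignment scores, shape (n_nodes, n_clusters)

    Returns pooled features X' = S^T X and pooled adjacency A' = S^T A S,
    i.e. a smaller graph whose "nodes" are soft clusters of the original one.
    """
    S = torch.softmax(S_logits, dim=-1)       # soft assignment of nodes to clusters
    X_pooled = S.T @ X
    A_pooled = S.T @ A @ S
    return X_pooled, A_pooled

# Toy graph: 6 nodes with 16-dim features pooled into 2 clusters.
n, d, k = 6, 16, 2
X = torch.randn(n, d)
A = (torch.rand(n, n) > 0.5).float()
A = ((A + A.T) > 0).float()                   # make the adjacency symmetric
X2, A2 = diffpool_coarsen(X, A, torch.randn(n, k))
print(X2.shape, A2.shape)                     # torch.Size([2, 16]) torch.Size([2, 2])
```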
Apr 11, 2023 • 28min

The Power of Graph Neural Networks: Understanding the Future of AI - Part 1/2 (Ep.223)

In this episode, I explore the cutting-edge technology of graph neural networks (GNNs) and how they are revolutionizing the field of artificial intelligence. I break down the complex concepts behind GNNs and explain how they work by modeling the relationships between data points in a graph structure. I also delve into the various real-world applications of GNNs, from drug discovery to recommendation systems, and how they are outperforming traditional machine learning models. Join me and demystify this exciting area of AI research and discover the power of graph neural networks.
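For a concrete picture of "modeling the relationships between data points in a graph structure", here is a minimal message-passing layer in the GCN spirit: each node averages its neighbours' features and mixes them with its own. It is an illustrative sketch, not any specific library's API.

```python
import torch
import torch.nn as nn

class GraphConvLayer(nn.Module):
    """One round of message passing: each node averages its neighbours' features
    and combines them with its own (a minimal GCN-style sketch)."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.w_self = nn.Linear(d_in, d_out)
        self.w_neigh = nn.Linear(d_in, d_out)

    def forward(self, X, A):
        deg = A.sum(dim=1, keepdim=True).clamp(min=1)   # node degrees (avoid divide-by-zero)
        neigh_mean = (A @ X) / deg                      # average of each node's neighbours
        return torch.relu(self.w_self(X) + self.w_neigh(neigh_mean))

# Toy graph: 5 nodes, 8-dim features, two stacked layers.
X = torch.randn(5, 8)
A = torch.tensor([[0, 1, 1, 0, 0],
                  [1, 0, 1, 0, 0],
                  [1, 1, 0, 1, 0],
                  [0, 0, 1, 0, 1],
                  [0, 0, 0, 1, 0]], dtype=torch.float)
layer1, layer2 = GraphConvLayer(8, 16), GraphConvLayer(16, 4)
print(layer2(layer1(X, A), A).shape)                    # torch.Size([5, 4])
```

Stacking such layers lets information flow along edges, so a node's final embedding reflects its multi-hop neighbourhood rather than its own features alone.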
Apr 4, 2023 • 25min

Leveling Up AI: Reinforcement Learning with Human Feedback (Ep. 222)

In this episode, we dive into the not-so-secret sauce of ChatGPT and what makes it a different model from its predecessors in the field of NLP and Large Language Models. We explore how human feedback can be used to speed up the learning process in reinforcement learning, making it more efficient and effective. Whether you're a machine learning practitioner, researcher, or simply curious about how machines learn, this episode will give you a fascinating glimpse into the world of reinforcement learning with human feedback.

Sponsors
This episode is supported by How to Fix the Internet, a cool podcast from the Electronic Frontier Foundation, and by Bloomberg, global provider of financial news and information, including real-time and historical price data, financial data, trading news, and analyst coverage.

References
Learning through human feedback - https://www.deepmind.com/blog/learning-through-human-feedback
Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback - https://arxiv.org/abs/2204.05862
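As a hedged illustration of the reward-modelling step at the heart of RLHF, the sketch below scores a pair of candidate completions and applies a Bradley-Terry-style preference loss, -log σ(r_chosen - r_rejected). The language model that produces the embeddings is omitted, and all names are illustrative.

```python
import torch
import torch.nn as nn

class RewardHead(nn.Module):
    """Maps a language model's sequence embedding to a scalar reward
    (a minimal sketch; the embedding model itself is omitted)."""
    def __init__(self, d_model):
        super().__init__()
        self.score = nn.Linear(d_model, 1)

    def forward(self, seq_embedding):
        return self.score(seq_embedding).squeeze(-1)

def preference_loss(r_chosen, r_rejected):
    # Bradley-Terry style objective used in RLHF reward modelling:
    # push the reward of the human-preferred answer above the rejected one.
    return -torch.log(torch.sigmoid(r_chosen - r_rejected)).mean()

# Toy batch: embeddings of a preferred and a rejected completion for 4 prompts.
d = 128
head = RewardHead(d)
chosen, rejected = torch.randn(4, d), torch.randn(4, d)
loss = preference_loss(head(chosen), head(rejected))
loss.backward()                      # gradients flow into the reward head
print(float(loss))
```

The trained reward model then stands in for the human, providing the signal that the policy is optimized against during the reinforcement learning phase.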
Mar 30, 2023 • 30min

The promise and pitfalls of GPT-4 (Ep. 221)

In this episode, we explore the potential of the highly anticipated GPT-4 language model and the challenges that come with its development. From its ability to generate highly coherent and creative text to concerns about ethical considerations and the potential misuse of such technology, we delve into the promise and pitfalls of GPT-4. Join us as we speak with experts in the field to gain insights into the latest developments and the impact that GPT-4 could have on the future of natural language processing.    
Mar 14, 2023 • 13min

AI’s Impact on Software Engineering: Killing Old Principles? (Ep. 220)

In this episode, we dive into the ways in which AI and machine learning are disrupting traditional software engineering principles. With the advent of automation and intelligent systems, developers are increasingly relying on algorithms to create efficient and effective code. However, this reliance on AI can come at a cost to the tried-and-true methods of software engineering. Join us as we explore the pros and cons of this paradigm shift and discuss what it means for the future of software development.
Mar 9, 2023 • 21min

Edge AI applications for military and space [RB] (Ep. 219)

The AI-powered Podcast Player

Save insights by tapping your headphones, chat with episodes, discover the best highlights - and more!
App store bannerPlay store banner
Get the app