

The New Stack Podcast
The New Stack
The New Stack Podcast is all about the developers, software engineers and operations people who build at-scale architectures that change the way we develop and deploy software.
For more content from The New Stack, subscribe on YouTube at: https://www.youtube.com/c/TheNewStack
Episodes

Jul 27, 2023 • 26min
Platform Engineering Not Working Out? You're Doing It Wrong.
In this episode of The New Stack Makers, Purnima Padmanabhan, a senior vice president at VMware, discusses three common mistakes organizations make when trying to move faster in meeting customer needs. The first mistake is equating application modernization with simply moving to the cloud, which often results in a mere lift and shift of applications without the full benefits. The second is a lack of automation, particularly in operations, which slows the development process. The third is adding unnecessary complexity by adopting new technologies or procedures, which slows down developers.

As a solution, Padmanabhan introduces platform engineering, which not only accelerates development but also reduces toil for operations engineers and architects. Many organizations struggle to implement it effectively, however, because they approach platform engineering in fragmented ways, investing in separate components without fully connecting them.

To succeed in adopting platform engineering, Padmanabhan emphasizes the need for a mindset shift. The platform team must treat the platform as a continuously evolving product rather than a one-time delivery, ensuring that service-level agreements are continuously met and regularly improving features and velocity. The episode also covers the benefits of a well-implemented "golden path" for entire organizations and offers advice on how to start a platform engineering team.

Learn more from The New Stack about Platform Engineering and VMware:
Platform Engineering Overview, News and Trends
Platform Engineers: Developers Are Your Customers
Open Source Platform Engineering: A Decade of Cloud Foundry

Jul 26, 2023 • 21min
What Developers Need to Know About Business Logic Attacks
In this episode of The New Stack Makers, Peter Klimek, director of technology in the Office of the CTO at Imperva, discusses the vulnerability of business logic in a distributed, cloud-native environment. Business logic refers to the rules and processes that govern how applications function and how users interact with them and with other systems. Klimek highlights the increase in attacks on APIs that exploit business logic flaws: in 2022, 17% of attacks on APIs came from malicious bots abusing business logic.

These attacks take various forms, including credential stuffing, carding (testing stolen credit cards), and newer schemes like influence fraud, in which algorithms are manipulated to deceive platforms and users. Klimek emphasizes that protecting business logic requires a cross-functional approach involving developers, operations engineers, security, and fraud teams.

To strengthen business logic security, Klimek recommends running a threat modeling exercise within the organization, which helps identify potential risk vectors. He also suggests using the Open Web Application Security Project (OWASP) list of automated threats as a checklist during the exercise. (A short illustrative sketch of one such control follows this episode's links.)

Ultimately, safeguarding business logic is crucial to securing cloud-native environments, and collaboration among teams is essential to mitigate potential threats and attacks effectively.

More from The New Stack, Imperva, and Peter Klimek:
Why Your APIs Aren’t Safe — and What to Do about It
Zero-Day Vulnerabilities Can Teach Us About Supply-Chain Security
GraphQL APIs: Greater Flexibility Breeds New Security Woes
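The episode does not prescribe a specific defense, but one common business-logic control that a threat modeling exercise might surface is a login-velocity check against credential stuffing. The sketch below is a minimal, illustrative Python version; the window size, threshold, and function names are assumptions for the example, not anything Imperva or OWASP specifies.

```python
import time
from collections import defaultdict, deque

# Hypothetical thresholds; real values come from your own threat model.
WINDOW_SECONDS = 60
MAX_ATTEMPTS = 5

_attempts = defaultdict(deque)  # client key -> timestamps of recent login attempts


def allow_login_attempt(client_key, now=None):
    """Return False when client_key exceeds MAX_ATTEMPTS within the sliding window."""
    now = time.monotonic() if now is None else now
    window = _attempts[client_key]
    # Discard attempts that have aged out of the window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= MAX_ATTEMPTS:
        return False  # escalate to CAPTCHA, MFA, or a fraud-team review queue
    window.append(now)
    return True


if __name__ == "__main__":
    # The sixth and seventh attempts inside one minute are rejected.
    for i in range(7):
        print(i + 1, allow_login_attempt("203.0.113.7", now=float(i)))
```

In practice a check like this usually lives at the API gateway or WAF layer and covers only one item on the OWASP automated-threats checklist.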

Jul 18, 2023 • 28min
Why Developers Need Vector Search
In this episode of The New Stack Makers podcast, the focus is on the challenges of handling unstructured data in today's data-rich world and the potential solutions offered by vector databases and vector search. Relational databases are of limited use for text, image, and voice data, which makes it difficult to uncover meaningful relationships between data points.

Vector databases, which enable vector search, have become increasingly popular for addressing this problem. They let organizations store, search, and index data that would be challenging to manage in traditional databases. Semantic search and large language models have further fueled interest in vector databases, opening new possibilities for developers.

Beyond standard applications such as information search and recommendation bots, vector search has also proven useful in combating copyright infringement. Social media companies like Facebook pioneered this approach, using vectors to check uploads against copyrighted media.

Vector databases excel at finding similarities between data objects: they operate in vector spaces and perform approximate nearest neighbor searches, trading a little accuracy for much greater efficiency. Developers, however, need to understand their specific use cases and the scale of their applications to get the most out of vector databases and search. (A short search sketch follows this episode's links.)

Frank Liu, the director of operations at Zilliz, advised listeners to educate themselves about vector databases, vector search, and machine learning to make the most of the existing ecosystem of tools. One notable indexing strategy for vectors is Hierarchical Navigable Small Worlds (HNSW), a graph-based algorithm created by Yury Malkov, a distinguished software engineer at VerSE Innovation, who joined the conversation along with Nils Reimers of Cohere.

Vector databases and vector search are best viewed as additional tools in the developer's toolbox rather than replacements for existing database management systems or document databases. The ultimate goal is to build applications focused on user satisfaction, not just optimizing clicks. To dig deeper into the topic and the gaps in current tooling, check out the full episode.

Listen on Podurama

Learn more about vector databases at thenewstack.io:
Vector Databases: What Devs Need to Know about How They Work
Vector Primer: Understand the Lingua Franca of Generative AI
How Large Language Models Fuel the Rise of Vector Databases
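To make the nearest-neighbor idea concrete, here is a minimal, brute-force cosine-similarity search in Python over randomly generated embeddings. It is a sketch of the underlying math only: the corpus, dimensions, and function names are invented for the example, and a real vector database would replace the exhaustive scan with an approximate index such as HNSW.

```python
import numpy as np

# Toy corpus: each row stands in for the embedding a model (e.g., a sentence
# encoder) produced for one document. Real systems store millions of these in
# a vector database and query them with an approximate index such as HNSW;
# this sketch does the exact, brute-force version to show the underlying idea.
rng = np.random.default_rng(42)
doc_vectors = rng.normal(size=(1000, 384))                      # 1,000 docs, 384-dim embeddings
doc_vectors /= np.linalg.norm(doc_vectors, axis=1, keepdims=True)


def cosine_top_k(query, k=5):
    """Return the indices of the k most similar documents by cosine similarity."""
    q = query / np.linalg.norm(query)
    scores = doc_vectors @ q            # cosine similarity, since all rows are unit length
    return np.argsort(-scores)[:k]


query_vec = rng.normal(size=384)
print(cosine_top_k(query_vec))
```

The trade-off the episode describes is exactly here: the exact scan touches every vector, while an HNSW index answers the same question approximately by walking a small neighborhood graph.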

Jul 13, 2023 • 37min
How Byteboard’s CEO Decided to Fix the Broken Tech Interview
Sargun Kaur, co-founder of Byteboard, aims to revolutionize the tech interview process, which she believes is flawed and ineffective. In an interview with The New Stack for our Tech Founder Odyssey podcast series, Kaur compared assessing technical skills in a typical interview to evaluating basketball star Steph Curry by asking him to draw plays on a whiteboard instead of watching him play on the court. Kaur, who previously worked at Symantec and Google, became motivated to change the interview process after a talented engineer she had coached failed a Google interview because of its impractical format.

Kaur believes traditional tech interviews overemphasize theoretical questions that do not reflect real-world software engineering tasks. This not only limits the talent pool but also leads to mis-hires: roughly one in four new employees turns out to be unsuitable for their role or team. To address these issues, Kaur co-founded Byteboard in 2018 with Nicole Hardson-Hurley, another former Google employee. Byteboard offers project-based technical interviews, adopted by companies like Dropbox, Lyft, and Robinhood, to make hiring more efficient and fair. In recognition of their work, Kaur and Hardson-Hurley received Forbes magazine's "30 Under 30" award for enterprise technology.

Kaur's path into the tech industry was unexpected, given her initial disinterest in her father's software engineering career. Exposure to programming and shadowing a female engineer at Microsoft sparked her curiosity, leading her to study computer science at the University of California, Berkeley. After overcoming early challenges as a minority in the field, Kaur joined Google as an engineer, content with the work environment and mentorship she received. Her dissatisfaction with the interview process, however, prompted her to apply to Google's Area 120 project incubator, which led to the creation of Byteboard. Building and growing Byteboard taught her valuable lessons about entrepreneurship, the power founders hold in fundraising meetings, and the potential impact of AI on tech hiring.

Check out more episodes in The Tech Founder Odyssey series:
A Lifelong ‘Maker’ Tackles a Developer Onboarding Problem
How Teleport’s Leader Transitioned from Engineer to CEO
How 2 Founders Sold Their Startup to Aqua Security in a Year

Jul 7, 2023 • 29min
A Lifelong ‘Maker’ Tackles a Developer Onboarding Problem
Shanea Leven, co-founder and CEO of CodeSee, shared her journey as a tech founder in an episode of the Tech Founder Odyssey podcast series. Though she came to programming later than many of her peers, Leven always had a creative spark and a passion for making things. She initially pursued fashion design but taught herself programming in college and co-founded a company that built custom websites for book authors. That experience eventually led her to a job at Google, where she worked in product development.

While at Google, Leven saw firsthand how hard it is to decipher legacy code and onboard developers to it. Inspired by a presentation by Bret Victor, she came up with the idea for CodeSee, a developer platform that helps teams understand and review code bases more effectively. She started working on CodeSee in 2019 as a side project, but it soon received venture capital funding, allowing her to quit her job and focus on the startup full time.

Leven candidly discussed the challenges of juggling a day job and a startup, particularly after receiving funding. She also shared advice on raising money from venture capitalists and building a company culture.

Listen to the full episode and check out more installments from The Tech Founder Odyssey:
How Teleport’s Leader Transitioned from Engineer to CEO
How 2 Founders Sold Their Startup to Aqua Security in a Year
How Solvo’s Co-Founder Got the ‘Guts’ to Be an Entrepreneur

Jun 29, 2023 • 16min
5 Steps to Deploy Efficient Cloud Native Foundation AI Models
Huamin Chen, an R&D professional at Red Hat's Office of the CTO, outlines five key steps for deploying cloud-native, sustainable foundation AI models. The first two steps involve using containers and Kubernetes to manage workloads and deploy them across a distributed infrastructure. Chen suggests PyTorch for programming and Jupyter Notebooks for debugging and evaluation, with Docker community files proving effective for containerizing workloads.

The third step focuses on measurement and highlights Prometheus, an open source tool for event monitoring and alerting. Prometheus enables developers to gather metrics and analyze the correlation between foundation models and their runtime environments. (A brief instrumentation sketch follows this episode's links.)

Analytics, the fourth step, involves leveraging existing analytics while establishing guidelines and benchmarks to assess energy usage and performance metrics. Chen emphasizes the need to challenge assumptions about energy consumption and model performance.

Finally, the fifth step is taking action on the insights gained from analytics. By optimizing the energy profiles of foundation models, the goal is greater energy efficiency, benefiting the community, society, and the environment. Chen underscores how important this optimization is for a more sustainable future.

Learn more at thenewstack.io:
PyTorch Takes AI/ML Back to Its Research, Open Source Roots
PyTorch Lightning and the Future of Open Source AI
Jupyter Notebooks: The Web-Based Dev Tool You've Been Seeking
Know the Hidden Costs of DIY Prometheus
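As a rough illustration of the measurement step, the sketch below uses the Python prometheus_client library to expose a request counter and a latency histogram for a stand-in inference function. The metric names and the fake model call are assumptions made for this example, not anything Chen prescribes in the episode.

```python
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

# Hypothetical metric names; pick ones that match your own conventions.
INFERENCE_REQUESTS = Counter(
    "model_inference_requests_total",
    "Inference requests served by the foundation model",
)
INFERENCE_LATENCY = Histogram(
    "model_inference_latency_seconds",
    "Wall-clock latency of a single inference call",
)


def run_inference(prompt):
    # Stand-in for a real PyTorch model call.
    time.sleep(random.uniform(0.05, 0.2))
    return f"completion for: {prompt}"


if __name__ == "__main__":
    start_http_server(8000)  # Prometheus scrapes http://localhost:8000/metrics
    while True:
        with INFERENCE_LATENCY.time():  # records one observation per call
            run_inference("hello")
        INFERENCE_REQUESTS.inc()
        time.sleep(1)
```

Once these metrics are scraped, the analytics and action steps can correlate request volume and latency with energy data from the cluster.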

Jun 22, 2023 • 26min
A Good SBOM is Hard to Find
The concept of a software bill of materials (SBOM) is to give consumers information about the components inside a piece of software, enabling better assessment of potential security issues. Justin Hutchings, Senior Director of Product Management at GitHub, emphasizes the importance of SBOMs and their potential to make patching possible without relying solely on the vendor. He spoke with Alex Williams in this episode of The New Stack Makers.

Creating a comprehensive SBOM poses challenges. Each software package is unique; an Android application, for example, combines the developer's own code with numerous open source dependencies obtained through Maven packages. The SBOM should ideally be a machine-readable inventory of all these dependencies, so that developers can evaluate their security. (A small parsing sketch follows this episode's links.)

Hutchings notes that many SBOMs fall short of being fully machine-readable, and the vulnerability landscape is even more problematic. Reaching the standard Hutchings envisions will take several changes: certain programming languages make it difficult to inspect build contents, and the lack of a centralized distribution point for dependencies in languages like C and C++ complicates the enumeration and standardization of machine-readable names and versions. These issues need to be addressed across the entire software supply chain.

SBOMs hold real potential for improving software security, but the current state of implementation and machine-readability needs work, particularly around diverse programming languages and dependency management.

Learn more at thenewstack.io:
Creating a 'Minimum Elements' SBOM Document in 5 Minutes
Enhance Your SBOM Success with SLSA
How to Create a Software Bill of Materials
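As a small example of treating an SBOM as a machine-readable inventory rather than a document for humans, the sketch below reads an SPDX-style JSON SBOM and prints each package with its version. The file name is a placeholder, and the field handling is an assumption based on the SPDX 2.x JSON format, which lists packages with name and versionInfo fields; some exports nest the document under an sbom key.

```python
import json
import sys


def list_components(sbom_path):
    """Print every package name and version found in an SPDX-style JSON SBOM."""
    with open(sbom_path, encoding="utf-8") as f:
        doc = json.load(f)
    # Some exports wrap the SPDX document under an "sbom" key; handle both layouts.
    doc = doc.get("sbom", doc)
    for pkg in doc.get("packages", []):
        name = pkg.get("name", "<unnamed>")
        version = pkg.get("versionInfo", "<no version>")
        print(f"{name}\t{version}")


if __name__ == "__main__":
    list_components(sys.argv[1] if len(sys.argv) > 1 else "sbom.spdx.json")
```

A script like this is the starting point for the kind of automated checks Hutchings describes, such as matching listed versions against a vulnerability feed.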

Jun 21, 2023 • 14min
The Developer's Career Path: Discover's Approach
Angel Diaz, Vice President of Technology, Capabilities, and Innovation at Discover Financial Services, spoke with TNS host Alex Williams at the Open Source Summit in Vancouver, BC. Diaz emphasizes the importance of learning and collaboration among software engineers. He leads The Discover Technology Academy, a community of 15,000 engineers, which he describes as a place where craftsmen come together rather than an ivory tower institution.

Developers and engineers at Discover define and develop processes for software development. They start their journey by contributing atomic elements of knowledge, such as articles, blogs, videos, and tutorials, and then democratize that knowledge. Open source principles, communities, guilds, and established practices play a vital role in their work and discovery process.

Discover's developer experience revolves around the concept of the golden path, which goes beyond consuming content and includes aspects like code, automation, and setting up development environments. Pair programming and a cultural approach to learning are also incorporated into Discover's talent system.

Diaz highlights that Discover's work extends beyond their financial services company, as they share their knowledge and open source work with the external community through platforms like technology.discovered.com. This enables engineers to gain merit badges, such as maintainers or contributors, and showcase their expertise on professional platforms like LinkedIn.

Learn more at thenewstack.io:
The Future of Developer Careers
Platform Engineer vs Software Engineer
How Donating Open Source Code Can Advance Your Career

Jun 14, 2023 • 19min
The Risks of Decomposing Software Components
The Linux Foundation's Open Source Security Foundation (OSSF) is addressing the challenge of timely software component updates to prevent security vulnerabilities like Log4j. In an interview with Alex Williams of The New Stack at the Open Source Summit in Vancouver, Omkhar Arasaratnam, the new general manager of OSSF, and Brian Behlendorf, CTO of OSSF, discuss the importance of making software secure from the start and the need for rapid response when vulnerabilities occur.

In this conversation, they highlight the significance of software bills of materials (SBOMs), which provide a complete list of software components and supply chain relationships. SBOMs offer data that can aid decision-making and enable reputation tracking of repositories. The interview also touches on the issues with package managers and the quantification of software vulnerability risks. Overall, the goal is to improve the efficiency and effectiveness of software component updates and to leverage data to enhance security in enterprise and production environments.

Learn more from The New Stack:
Creating a 'Minimum Elements' SBOM Document in 5 Minutes
Enhance Your SBOM Success with SLSA

Jun 8, 2023 • 17min
How Apache Airflow Better Manages ML Pipelines
Apache Airflow is an open source workflow orchestration platform that is widely used to build machine learning pipelines. It lets users author, schedule, and monitor workflows, making it well suited for tasks such as data management, model training, and deployment. In a discussion on The New Stack Makers, three technologists from Amazon Web Services (AWS) highlighted recent improvements to Apache Airflow and its ease of use.

Dennis Ferruzzi, a software developer at AWS, is working on updating Airflow's logging and metrics backend to the OpenTelemetry standard, which will provide more granular metrics and better visibility into Airflow environments. Niko Oliveria, a senior software development engineer at AWS, focuses on reviewing and merging pull requests as a committer and maintainer for Apache Airflow; he has worked on making Airflow a more pluggable architecture through the implementation of AIP-51.

Raphaël Vandon, also a senior software engineer at AWS, is contributing performance improvements and leveraging async capabilities in the AWS Operators, which enable seamless interactions with AWS. Airflow's simplicity is attributed to its Python base and the operator ecosystem contributed by companies like AWS, Google, and Databricks. Operators are like building blocks, each designed for a specific task, and can be chained together to create workflows across different cloud providers. (A minimal DAG sketch follows this episode's link.)

The latest version, Airflow 2.6, introduces sensors that wait for specific events and notifiers that act on workflow success or failure. These additions aim to simplify the user experience. Overall, the growing community of contributors continues to enhance Apache Airflow, making it a popular choice for building machine learning pipelines.

Check out the full article on The New Stack:
How Apache Airflow Better Manages Machine Learning Pipelines
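To show how operators chain into a workflow, here is a minimal Airflow 2.x DAG with three PythonOperator tasks standing in for the extract, train, and deploy stages of an ML pipeline. The task bodies, DAG id, and schedule are placeholders invented for this example; only the DAG and operator structure reflects how Airflow is typically used.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


# Hypothetical task functions; a real pipeline would pull data, train a model,
# and publish an artifact. Each operator is one building block in the graph.
def extract():
    print("pulling training data")


def train():
    print("fitting the model")


def deploy():
    print("publishing the model artifact")


with DAG(
    dag_id="ml_pipeline_sketch",
    start_date=datetime(2023, 6, 1),
    schedule="@daily",   # Airflow 2.4+ name for schedule_interval
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    train_task = PythonOperator(task_id="train", python_callable=train)
    deploy_task = PythonOperator(task_id="deploy", python_callable=deploy)

    # The >> chaining defines task dependencies: extract, then train, then deploy.
    extract_task >> train_task >> deploy_task
```

Swapping a PythonOperator for a provider operator from AWS, Google, or Databricks is what lets the same DAG structure span different cloud services.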