
Google Cloud Platform Podcast

Latest episodes

Aug 17, 2022 • 48min

Google Cloud for Higher Education with Laurie White and Aaron Yeats

On the podcast this week, our guests Laurie White and Aaron Yeats talk with Stephanie Wong and Kelci Mensah about higher education and how Google Cloud is helping students realize their potential. As a former educator, Laurie has seen the holes in tech education and, with Google's help, is determined to aid faculty and students in expanding learning to include cloud education alongside the standard on-prem curriculum. Aaron and Laurie work together toward this goal with programs like their Speaker Series.

Laurie's approach involves supporting faculty as they design courses that incorporate cloud technologies. Given the busy lives of students today, she recognizes that the best way to get this material into students' hands is through regular coursework, not just elective activities outside the classroom. Aaron's work with students and student organizations rounds out their support of higher education learning. He facilitates the creation of student clubs that use Cloud Skills Boost, a program in which students follow complete learning pathways to build the skills they need to create and manage cloud builds. Soon, Aaron will offer hackathons that encourage students to attend weekend events and work together on passion projects outside of regular classwork.

Our guests talk more about the specifics of Google Cloud Higher Education Programs and the importance of incorporating certifications into the higher education learning process. Aaron talks about expanding the program and his hopes for reaching more schools and students, and Laurie talks about funding for students and how Google Cloud's system of student credits lets them use real cloud tools without a credit card. Laurie and Aaron also share fun stories about past student successes, conference interactions, and hackathon projects that went well.

Laurie White
Laurie taught CS in higher ed for over 30 years, where her biggest frustration was trying to keep the curriculum up to date with the field. She thought she was retiring seven years ago, but then Google called with a job where she could help faculty around the world keep their curriculum current with cloud computing, so here she is.

Aaron Yeats
Aaron Yeats has been working in education outreach for two decades. His work in education has included Texas government education programs spanning public health, non-profit advocacy, and education.

Cool things of the week
How Wayfair is reaching MLOps excellence with Vertex AI (blog)
Hidden gems of Google BigQuery (blog)
Google Cloud Innovators (site)
Google Cloud and Apollo24|7: Building Clinical Decision Support System (CDSS) together (blog)

Interview
Google Cloud Higher Education Programs (site)
Google Cloud Speaker Series (site)
Google Cloud Skills Boost (site)
CSSI (site)
Tech Equity Collective (site)
GDSC (site)

What's something cool you're working on?
Steph has been working on an AlphaFold video. You can learn more here. Kelci is developing a Neos tutorial that teaches introductory Google Cloud developers how to write HTTP functions in Python entirely within the Google Cloud environment, and she is wrapping up her summer internship with Google!

Hosts
Stephanie Wong and Kelci Mensah
Aug 10, 2022 • 41min

Cloud Functions (2nd gen) with Jaisen Mathai and Sara Ford

Stephanie Wong and Brian Dorsey are joined today by fellow Googlers Jaisen Mathai and Sara Ford to hear all about Cloud Functions (2nd gen) and how it differs from the original. Jaisen gives us some background on Cloud Functions and why it was built. Supporting seven languages, this tool allows clients to write a function without worrying about scaling, devops, and a number of other things that are handled by Cloud Functions automatically. Customer feedback led to new features, and that's how the second evolution of Cloud Functions came about. Don't worry, first gen users! It will continue to be available and supported.

Features in the 2nd gen fit into three categories: performance, cost, and control. Among other benefits, costs stay low or may even be reduced with some of the new features, larger instances and longer processing times mean better performance, and traffic splitting means better control over projects. Sara details an example illustrating the power of the new concurrency features, and Jaisen helps us understand when Cloud Functions is the right choice for your project and when it's not.

Our guests walk us through getting started with Cloud Functions and using the 2nd gen additions. Companies like Lucille Games are using Cloud Functions, and our guests talk more about how specific users are leveraging the new features of the 2nd gen.

Jaisen Mathai
Jaisen is a product manager for Cloud Functions. He's been at Google for about six years and, before joining Google, was both a developer and a product manager.

Sara Ford
Sara is a Cloud Developer Advocate focusing on Cloud Functions and enjoys working on serverless.

Cool things of the week
No pipelines needed. Stream data with Pub/Sub direct to BigQuery (blog)
Cloud IAM (Google Cloud blog)
The Diversity Annual Report is now a BigQuery public dataset (blog)

Interview
Cloud Functions (site)
Cloud Functions 2nd gen walkthrough (video)
Cloud Functions version comparison (docs)
Lucille Games: Playing to win with Google Cloud Platform (site)
BigQuery (site)
Cloud Run (site)
Eventarc (docs)
Cloud Shell (site)
GCP Podcast Episode 261: Full Stack Dart with Tony Pujals and Kevin Moore (podcast)
Working with Remote Functions (docs)
Cloud Console (site)
Where should I run my stuff? Choosing compute options (video)

What's something cool you're working on?
Stephanie has been working on GCP Support Shorts.

Hosts
Stephanie Wong and Brian Dorsey
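As a concrete illustration of the kind of workload discussed in this episode, here is a minimal sketch of an HTTP-triggered function written in Python with the open-source Functions Framework. The function name and greeting are placeholders; the runtime, region, and 2nd-gen options such as concurrency are chosen at deploy time rather than in the code itself.

```python
# A minimal HTTP function sketch using the Functions Framework for Python.
# Once deployed to Cloud Functions, the same handler runs without server code.
import functions_framework


@functions_framework.http
def hello_http(request):
    """Respond to an HTTP request; `request` is a Flask Request object."""
    name = request.args.get("name", "world")
    return f"Hello, {name}!"
```

The Functions Framework can also serve the same handler locally for quick testing before deployment.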
Aug 3, 2022 • 26min

Vertex Explainable AI with Irina Sigler and Ivan Nardini

Max Saltonstall and new host Anu Srivastava are in the studio today talking about Vertex Explainable AI with guests Irina Sigler and Ivan Nardini. Vertex Explainable AI was born from a need for developers to better understand how their models determine classifications. Trusting models' outputs in business decision making and easier debugging are two reasons this understanding is so important. Explainable models help developers understand and describe how their trained models are making decisions.

Google's managed service, Vertex Explainable AI, offers Feature Attribution and Example Based Explanations to provide a better understanding of model decision making. Irina describes these two services and how each works to foster better decision-making based on AI models. One or both services can be used in every stage of model building to create a more precise model with better results. Example Based Explanations, Irina tells us, also make it easier to explain the model to those who may not have strong technical backgrounds.

Ivan runs us through a sample build of a model taking advantage of the Vertex Explainable AI tools. Presets provide easier setup and use as well. We talk more about the benefits of being able to easily explain your models. When decision-makers understand the importance of your AI tool, it's more likely to be cleared for production, for example. And when you understand why your model is making certain choices, you can trust the model's outcomes as part of your decision-making process.

Irina Sigler
Irina Sigler is a Product Manager on the Vertex Explainable AI team. Before joining Google, Irina worked at McKinsey and did her Ph.D. in Explainable AI. She graduated from the Freie Universität Berlin and HEC Paris.

Ivan Nardini
Ivan Nardini is a customer engineer specialized in ML and passionate about Developer Advocacy and MLE. He is currently collaborating with and enabling Data Science developers and practitioners to define and implement MLOps on Vertex AI. He also leads a worldwide hackathon community initiative and is an active contributor to Google Cloud.

Cool things of the week
Unify data lakes and warehouses with BigLake, now generally available (blog)
What it's like to have a hybrid internship at Google (blog)

Interview
Vertex AI (site)
Explainable AI (site)
Vertex Explainable AI (docs)
Vertex Explainable AI Notebooks (docs)
Feature Attribution (docs)
AI Explanations Whitepaper (site)
Explainable AI with Google Cloud Vertex AI (article)
Why you need to explain machine learning models (blog)

What's something cool you're working on?
Anu just got back from a nice vacation and is picking back up on how to use our AI APIs with serverless workflows. She's working on some exciting tutorials for our AI-backed Translation API. Max just got back from family dance camp and is working to make excellent intern experiences.

Hosts
Max Saltonstall and Anu Srivastava
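For a sense of how Feature Attribution surfaces in code, here is a hedged sketch using the Vertex AI Python SDK to request explanations from a model that has already been deployed with explanation metadata. The project, region, endpoint ID, and instance fields are placeholders, and the exact shape of the returned attributions depends on how the model's explanation spec was configured.

```python
# A sketch of requesting Feature Attributions from a deployed Vertex AI model.
# Assumes the endpoint was deployed with an explanation spec; identifiers and
# feature names below are placeholders.
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")
endpoint = aiplatform.Endpoint("1234567890")  # endpoint ID or full resource name

response = endpoint.explain(instances=[{"feature_a": 1.0, "feature_b": 0.5}])
for prediction, explanation in zip(response.predictions, response.explanations):
    print("prediction:", prediction)
    for attribution in explanation.attributions:
        # Each attribution maps input features to their contribution scores.
        print("feature attributions:", attribution.feature_attributions)
```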
Jul 27, 2022 • 36min

Arm Servers on GCP with Jon Masters and Emma Haruka Iwao

We're learning all about Arm servers on Google Cloud Platform this week. Hosts Brian Dorsey and Stephanie Wong welcome fellow Googlers Jon Masters and Emma Haruka Iwao to talk about the newest VMs on GCP. To start, our guests dive into Arm, explaining what it is and how it has grown over the years. Nowadays, Arm-based chips dominate the mobile market, and this volume has enabled both advanced chips for supercomputers and beneficial partnerships.

Emma explains how having the Arm architecture available in the cloud helps keep projects efficient and walks us through an example setup of an Arm project, illustrating the ease of setup in Google Cloud. Jon and Emma talk about the T2A VMs running Arm workloads at Google, including their balance of performance and cost. Emma and Jon bust some myths about Arm, emphasizing how performant it is despite its humble beginnings.

Jon Masters
Jon Masters is a compute architect focused on Arm server architecture, platform standards, and ecosystem, with almost a dozen years of experience working on Arm.

Emma Haruka Iwao
Emma Haruka Iwao is a DevRel engineer focused on Compute products and a computer architecture enthusiast.

Cool things of the week
Introducing Batch, a new managed service for scheduling batch jobs at any scale (blog)
Examples of Batch for Transcoding (site)
Using Google Kubernetes Engine's GPU sharing to search for neutrinos (blog)

Interview
Arm (site)
Arm Documentation (docs)
Arm VMs on Compute (docs)
Expanding the Tau VM family with Arm-based processors (blog)
Run your Arm workloads on Google Kubernetes Engine with Tau T2A VMs (blog)
Compute Engine (site)
GKE (site)

What's something cool you're working on?
Brian is switching his focus from VMs to developer tooling.

Hosts
Stephanie Wong and Brian Dorsey
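To make the "ease of setup" point concrete, here is a hedged sketch of creating a Tau T2A (Arm) VM with the google-cloud-compute Python client. The project, zone, machine type, and boot image are illustrative placeholders; the zone must be one where T2A machines are actually offered.

```python
# A sketch of creating an Arm-based Tau T2A VM with the Compute Engine client
# library. Project, zone, machine type, and image below are placeholders.
from google.cloud import compute_v1

project = "my-project"
zone = "us-central1-a"

boot_disk = compute_v1.AttachedDisk(
    boot=True,
    auto_delete=True,
    initialize_params=compute_v1.AttachedDiskInitializeParams(
        source_image="projects/debian-cloud/global/images/family/debian-11-arm64",
        disk_size_gb=10,
    ),
)

instance = compute_v1.Instance(
    name="arm-demo",
    machine_type=f"zones/{zone}/machineTypes/t2a-standard-4",
    disks=[boot_disk],
    network_interfaces=[compute_v1.NetworkInterface(network="global/networks/default")],
)

operation = compute_v1.InstancesClient().insert(
    project=project, zone=zone, instance_resource=instance
)
operation.result()  # block until the VM is created
print(f"Created {instance.name} in {zone}")
```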
Jul 20, 2022 • 37min

Managed Service for Prometheus with Lee Yanco and Ashish Kumar

Hosts Carter Morgan and Anthony Bushong are in the studio this week! We're talking about Prometheus with guests Lee Yanco and Ashish Kumar, learning about the build process for Google Cloud's Managed Service for Prometheus and how Home Depot uses this tool to power their business.

To begin with, Lee helps us understand what Managed Service for Prometheus is. Prometheus, a popular monitoring solution for Kubernetes, lets you know that your project is up and running, and in the event of a failure, it lets you know what happened. But as Kubernetes projects scale and spread across the globe, Prometheus becomes a challenge to manage, and that's where Google Cloud's Managed Service for Prometheus comes in. Lee describes why Prometheus is so great for Kubernetes, and Ashish talks about how CNCF's involvement helps open source tools integrate easily. With the help of Monarch, Google's Managed Service stands above the competition, and Lee explains what Monarch is and how it works with Prometheus to benefit users.

Ashish talks about Home Depot's use of Google Cloud and the Managed Service for Prometheus, and how Home Depot's multiple data centers make data monitoring both trickier and more important. With Google Cloud, Home Depot is able to easily ensure everything is healthy and running across data centers, around the world, at immense scale. He describes how Home Depot uses Managed Service for Prometheus in each of these data center environments from the point of view of a developer and talks about how easy Prometheus and the Managed Service are to integrate and use. Lee and Ashish wrap up the show with a look at how Home Depot and Google have worked together to create and adjust tools for increased efficiency. In the future, tighter integration with the rest of Google Cloud's suite of products is the focus.

Lee Yanco
Lee Yanco is the Product Management lead for Google Cloud Managed Service for Prometheus. He also works on Monarch, Google's planet-scale in-memory time series database, and on Cloud Monitoring's Kubernetes observability experience.

Ashish Kumar
Ashish Kumar is Senior Manager for Site Reliability and Production Engineering at The Home Depot.

Cool things of the week
Cloud Next registration is open (site)
Introducing Parallel Steps for Workflows: Speed up workflow executions by running steps concurrently (blog)
How to think about threat detection in the cloud (blog)
GCP Podcast Episode 218: Chronicle Security with Dr. Anton Chuvakin and Ansh Patniak (podcast)

Interview
Prometheus (site)
PromQL (site)
Google Cloud Managed Service for Prometheus (docs)
Kubernetes (site)
CNCF (site)
Monarch: Google's Planet-Scale In-Memory Time Series Database (research)
Cloud Monitoring (site)
Cloud Logging (site)
Google Cloud's operations suite (site)

What's something cool you're working on?
Carter is focusing on getting organized, managing overwhelm, and comedy festivals. Anthony is testing a few exciting new features, working with build provenance in Cloud Build and with jobs and network file systems in Cloud Run.

Hosts
Carter Morgan and Anthony Bushong
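As a small illustration of the Prometheus side of this story, here is a hedged sketch of an application exposing a metric in the Prometheus exposition format with the open-source prometheus_client library; any Prometheus-compatible collector, self-managed or managed, can then scrape the endpoint. The metric name and port are placeholders.

```python
# A sketch of exposing a Prometheus counter from a Python app. A collector can
# scrape the samples from http://localhost:8000/metrics.
import random
import time

from prometheus_client import Counter, start_http_server

REQUESTS = Counter("demo_requests_total", "Total demo requests handled.")

if __name__ == "__main__":
    start_http_server(8000)  # serve the /metrics endpoint on port 8000
    while True:
        REQUESTS.inc()       # count one simulated request
        time.sleep(random.uniform(0.1, 1.0))
```

A PromQL query such as `rate(demo_requests_total[5m])` would then chart the request rate, whichever backend ends up storing the samples.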
Jul 13, 2022 • 36min

Distributed Cloud Edge for Telcos with DP Ayyadevara and Krishna Garimella

Stephanie Wong and Carter Morgan are back this week learning about Google's Distributed Cloud Edge for telcos with guests Krishna Garimella and DP Ayyadevara. Launched last year, Google Distributed Cloud Edge has benefited companies across many industries. Today, our guests are here to elaborate on how telecommunications companies specifically are leveraging this powerful tool.

Because telcos deliver essential services, they tend to plan their infrastructure in detail well in advance and stick with that setup for many years, DP tells us. Identifying the right tools for their projects is vital, and Google has created and improved many services to aid the telecommunications sector. Contact Center AI, for example, helps with customer service needs. Specifically, our guests elaborate on the modernization of telco networks through managed infrastructure offerings. We learn more about Google Distributed Cloud Edge and the managed hardware and software stack that's included. Container as a service for optimal network function is Google's first focus in supporting telco companies with Distributed Cloud and has been used for 5G rollouts. Google has also been working behind the scenes to make Kubernetes more telco-friendly, so that projects are more portable, scalable, and able to leverage Kubernetes benefits easily.

Krishna gives us some great real-life examples of telecommunications companies using GDC Edge in areas like virtual broadband networks. In order to dedicate maximum resources to customer workloads, the team chose to keep the Kubernetes control plane in the cloud while worker nodes run at the edge. With superior security protection, minimal service disruption, and more, GDC Edge offers fleet management as a core capability. To continue satisfying telcos' needs, Google collaborates with many businesses to grow with changing customer needs.

Krishna Garimella
Krishna is a technology evangelist who has worked with service providers across the globe in the network and media areas.

DP Ayyadevara
DP is the Product Group Product Manager leading Telco Network Modernization products and solutions at Google Cloud.

Cool things of the week
Cloud TPU v4 records fastest training times on five MLPerf 2.0 benchmarks (blog)
Show off your cloud skills by completing the #GoogleClout weekly challenge (blog)

Interview
Distributed Cloud (site)
Distributed Cloud Edge Documentation (docs)
Contact Center AI (site)
Kubernetes (site)
Anthos (site)
Nephio (site)
BigQuery (site)
Vertex AI (site)

What's something cool you're working on?
Carter made a test for a video recap version of the recent pi episode. Stephanie recently made a pi video as well and is working on an AlphaFold video and the rollout of the new Cloud client library reference docs homepage.

Hosts
Carter Morgan and Stephanie Wong
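The "control plane in the cloud, worker nodes at the edge" split still looks like an ordinary Kubernetes cluster from an operator's point of view. As a loose illustration (not specific to GDC Edge), here is a sketch using the official Kubernetes Python client to list a cluster's nodes with their standard topology labels, one way to see where the workers are placed. It assumes a kubeconfig with access to the cluster.

```python
# A sketch that lists Kubernetes nodes and their standard topology labels.
# Works against any cluster reachable from the local kubeconfig.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

for node in v1.list_node().items:
    labels = node.metadata.labels or {}
    region = labels.get("topology.kubernetes.io/region", "unknown")
    zone = labels.get("topology.kubernetes.io/zone", "unknown")
    print(f"{node.metadata.name}: region={region}, zone={zone}")
```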
Jun 29, 2022 • 36min

Disaster Recovery with Cody Ault and Jo-Anne Bourne

Your hosts Max Saltonstall and Carter Morgan talk with guests Cody Ault and Jo-Anne Bourne of Veeam. Veeam is revolutionizing the data space by minimizing data loss impacts and project downtime with easy backups and user-friendly disaster recovery solutions. As a software company, Veeam is able to stay flexible with its solutions, helping customers keep any project safe.

Cody explains what is meant by disaster recovery and how different systems might require different levels of fail-safe protection. Jo-Anne talks about the financial cost of downtime and how Veeam can help save money by planning for and preventing downtime. Veeam Backup & Replication is the main offering, and it can be customized depending on workloads, Cody tells us. He gives examples of how this works for different types of projects. Businesses can easily make plans for recovery and data backups and then implement them with the help of Veeam. Cody talks about cloud migration and how Veeam can streamline this process with its replication services, and Jo-Anne emphasizes the importance of these recovery processes for data in the cloud.

The journey from fledgling Veeam to its current suite of offerings was an interesting one, and Cody talks about this evolution, starting with the simple VM backups of version 5. As companies have brought new recovery challenges, Veeam has grown to provide these services. Its partnership with Google has grown as well, as Veeam continues to leverage Google offerings and support Google Cloud customers. We hear examples of Veeam customers and how they use the software, and Cody tells us a little about the future of Veeam.

Cody Ault
Cody has been at Veeam for over 11 years in various roles and departments, including Technical Lead for the US Support team, Advisory Architect for Presales Solutions Architecture, and Staff Solutions Architect for Product Management Alliances. He has acted as the performance, databases, security, and monitoring specialist for North America on the Presales team and has helped develop the Veeam Design Methodology and Architecture Documentation template. Cody is currently working with the Alliances team focusing on Google Cloud, Kubernetes, and Red Hat.

Jo-Anne Bourne
Jo-Anne is a Partner Marketing Strategist who works with global companies to support them in positioning company products with their customer base. She is effective in developing strategic partnerships with international resellers, CCaaS partners, systems integrators, OEM partners, and ISVs like Amazon, Microsoft, Avaya, Cisco, Five9, and BT, developing strategies that enable sales teams to generate significant revenue and, in turn, build profitability for the company. Jo-Anne is a brand steward successful in using analytics to create results-driven campaigns that increase brand awareness, generate sales leads, improve customer engagement, and strengthen partner relationships.

Cool things of the week
Announcing general availability of reCAPTCHA Enterprise password leak detection (blog)
Cloud Podcasts (site)
Bio-pharma organizations can now leverage the groundbreaking protein folding system, AlphaFold, with Vertex AI (blog)

Interview
Veeam (site)
Veeam for Google Cloud (site)
VeeamHub (site)
Google Cloud VMware Engine (site)
Cloud SQL (site)
Kasten (site)
Kubernetes (site)
GKE (site)

What's something cool you're working on?
Carter is working on the new Cloud Podcasts website. Max is working on research papers about how we built and deployed Google's Zero Trust system for employees, BeyondCorp. Kelci is working on a series of blog posts highlighting the benefits of having access to public datasets embedded within BigQuery.

Hosts
Carter Morgan and Max Saltonstall
Jun 22, 2022 • 37min

Contact Center AI with Amit Kumar and Vasili Triant

This week on the GCP Podcast, Carter Morgan and Max Saltonstall are joined by Amit Kumar and Vasili Triant. Our guests are here to talk about new features in Contact Center AI. Amit starts the show by helping us understand what Contact Center as a Service is and what makes this unified platform so useful for enterprise companies. The scalability helps keep costs down and overall satisfaction up while leveraging advances in the cloud. UJET and Google Cloud have worked together to bring about this AI advancement, and our guests describe the partnership and the evolution of CCAI.

CCAI has streamlined the Contact Center as a Service space, helping businesses work efficiently while putting an emphasis on positive experiences for the end customer. CCAI users can use the platform straight out of the box or customize it to build specific experiences with tools like Dialogflow. Amit further describes the tools available, like Interactive Voice Response, and the circumstances in which each tool is most useful. The journey to CCAI can be easily managed by a team who knows the business well. We learn more about the onboarding experience and the skills required to transition.

Vasili talks about the past and future of Contact Center and how customer information is used not just for sales purposes but for bettering the customer service experience. Our guests share success stories from companies like FitBit and how CCAI is used to handle customer interactions through the app. Features like call-backs save customers the time and frustration of waiting on hold and save businesses money.

Amit Kumar
Amit is responsible for bringing GCP's native CCaaS offering to market and helps enterprise customers modernize their contact centers. Previously, Amit worked as a Cloud AI Incubator lead, where he helped customers adopt Google's conversational AI technology. He also has an extensive background in large-scale cloud transformation efforts and has worked with enterprise software companies, mainly Salesforce and TIBCO Software.

Vasili Triant
As UJET's Chief Operating Officer, Vasili Triant oversees all Go To Market activities including Sales, Channel, Alliances, and Customer Success. Triant brings more than 20 years of experience in the Telecom, Unified Communications (UC), and Contact Center industries, having previously served as VP/GM of Contact Center at Cisco, where he achieved the fastest growth in over a decade through a focus on global alliances and enterprise cloud-readiness.

Cool things of the week
DALL-E mini (site)
EbSynth (site)
Announcing general availability of Confidential GKE Nodes (blog)

Interview
Contact Center AI Platform (site)
Contact Center AI reimagines the customer experience through full end-to-end platform expansion (blog)
UJET (site)
Dialogflow (site)
Google Assistant (site)
One United Bank (site)
FitBit (site)

What's something cool you're working on?
Max is working on expanding the podcast platform by collecting and adding more content. Carter is working on his Google Project Management: Professional Certificate. Kelci has been working on Google Cloud Skills Boost.

Hosts
Carter Morgan and Max Saltonstall
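Since Dialogflow is mentioned as the customization layer, here is a hedged sketch of the basic building block behind a virtual agent turn: sending an end-user utterance to a Dialogflow ES agent and reading back the matched intent. The project ID, session ID, and sample utterance are placeholders.

```python
# A sketch of a single Dialogflow ES detect-intent call, the primitive that a
# custom contact-center flow is built on. Identifiers below are placeholders.
from google.cloud import dialogflow

session_client = dialogflow.SessionsClient()
session = session_client.session_path("my-project", "session-123")

query_input = dialogflow.QueryInput(
    text=dialogflow.TextInput(text="Where is my order?", language_code="en-US")
)
response = session_client.detect_intent(
    request={"session": session, "query_input": query_input}
)

print("Matched intent:", response.query_result.intent.display_name)
print("Agent reply:", response.query_result.fulfillment_text)
```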
Jun 15, 2022 • 39min

New Pi World Record with Emma Haruka Iwao and Sara Ford

Carter Morgan and Brian Dorsey are working on their math skills today with guests Emma Haruka Iwao and Sara Ford. What kind of computing power does it take to break the world record for pi computation? Emma and Sara are here to tell us. Emma tells us how she got started with pi and how she and Sara came to work together to break the record. In 2019, Emma was on the show with her previous world record, and with the advancements in technology and Google products since then, she knew she could do even more this year. Her 100 trillion digit goal wasn't enough to scare people away, and Sara, along with other partners, joined Emma on the pi computation journey.

Together, Sara and Emma talk about the hardware required, building the algorithm, how it's run, and where the data is stored. Running on a personal computer was cheaper and easier than a supercomputer, and Emma explains why. Performing these immense calculations also helps illustrate just how far computers have come. The storage required for this project was immense, and Emma tells us how they worked around some of the storage limitations. We hear more about Ycruncher and how it was used to help with the calculations. Our guests talk about how things might change for computing, and specifically for pi computations, in the next few years. Sara tells us about the storage journey from the perspective of a mathematician and gives us some interesting facts about the algorithms involved, and we learn how world records are verified.

Emma Haruka Iwao
Emma is a developer advocate for Google Cloud Platform, focusing on application developers' experience and high performance computing. She has been a C++ developer for 15 years and has worked on embedded systems and the Chromium Project. Emma is passionate about learning and explaining the most fundamental technologies, such as operating systems, distributed systems, and internet protocols. Besides software engineering, she likes games, traveling, and eating delicious food.

Sara Ford
Sara Ford is a Developer Advocate on Google Cloud focusing on serverless. She received a master's degree in Human Factors (UX) because she wants to make dev tools more usable. Her lifelong dream is to be a 97-year-old weightlifter so she can be featured on the local news.

Cool things of the week
New Cloud Podcasts Website (site)
Even more pi in the sky: Calculating 100 trillion digits of pi on Google Cloud (blog)

Interview
GCP Podcast Episode 167: World Pi Day with Emma Haruka Iwao (podcast)
pi.delivery 100 Trillion Digits (site)
pi.delivery Github (site)
A History of Pi (book)
Distributing historically linear calculations of Pi with serverless (video)
Ycruncher (site)
Compute Engine (site)
Cloud Functions (site)
SRE (site)
Terraform (site)

What's something cool you're working on?
Carter and Brian are working on a new season of VM End to End.

Hosts
Carter Morgan and Brian Dorsey
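For readers curious what "building the algorithm" looks like at toy scale, here is a small sketch of the Chudnovsky series, the family of formula behind modern record attempts, computed with Python's decimal module. This is only an illustration: real record runs rely on specialized arbitrary-precision software such as y-cruncher, enormous storage, and checkpointing, none of which appears here.

```python
# A toy Chudnovsky-series calculation of pi using Python's decimal module.
from decimal import Decimal, getcontext


def chudnovsky_pi(digits: int) -> Decimal:
    getcontext().prec = digits + 10     # extra guard digits
    total = Decimal(0)
    k_factorial = Decimal(1)            # k!
    three_k_factorial = Decimal(1)      # (3k)!
    six_k_factorial = Decimal(1)        # (6k)!
    for k in range(digits // 14 + 2):   # each term adds ~14 digits
        if k > 0:
            k_factorial *= k
            for i in range(3 * k - 2, 3 * k + 1):
                three_k_factorial *= i
            for i in range(6 * k - 5, 6 * k + 1):
                six_k_factorial *= i
        numerator = six_k_factorial * (13591409 + 545140134 * k)
        denominator = (
            three_k_factorial * k_factorial ** 3 * Decimal(-640320) ** (3 * k)
        )
        total += numerator / denominator
    return Decimal(426880) * Decimal(10005).sqrt() / total


print(chudnovsky_pi(50))
```

Each term of the series contributes roughly 14 additional correct digits, which is why the loop runs about digits/14 times.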
Jun 8, 2022 • 40min

FinOps with Joe Daly

On the podcast this week, guest Joe Daly tells Stephanie Wong, Mark “Money” Mirchandani, and our listeners all about FinOps principles and how they're helping companies take advantage of the cloud while protecting their bottom lines. He describes FinOps as financial DevOps: making financial decisions in an effective and optimized way. With his experience in finance and tax accounting, Joe has developed a special knack for navigating the sometimes confusing world of cloud finance policies, and his contributions to the FinOps Foundation have been many.

For starters, collaboration with various business departments is important for developing a plan that leverages the benefits of the cloud but keeps the company using resources wisely, Joe explains. He talks about the FinOps Foundation and its focus on creating a community for knowledge sharing. By fostering collaboration among different company roles and promoting financial education, companies are better able to determine financial goals while making sure each facet of the company reaps all the benefits of cloud participation.

Following the FinOps cycle is the easiest way for community members to get started. The three phases, Joe tells us, are inform, optimize, and operate. The inform phase involves clarity in spending, so teams understand how much money is being spent. In the optimize phase, benefits of spending are matched with expenditures to ensure resources are being used to their full potential. Finally, in the operate phase, engineers and finance managers come together to understand why solutions were chosen and whether these tools are offering the right answers for the company. Every company is different, but the sooner it's possible to start the FinOps journey, the easier it will be to maintain in the future.

Joe gives us examples of how companies are using the principles for successful strategies and the challenges some of them have faced. The Foundation has monthly summits that offer perspectives from these companies as well as partner presentations. The FinOpsX conference is coming up soon as well. To wrap up, Joe offers other resources from the FinOps Foundation, including his podcast.

Joe Daly
Joe set up two FinOps teams at Fortune 100 companies. He joined the FinOps Foundation and has been setting up the ambassador program, supporting meetup groups, and producing FinOpsPod.

Cool things of the week
AlloyDB for PostgreSQL under the hood: Columnar engine (blog)
GCP Podcast Episode 304: AlloyDB with Sandy Ghai and Gurmeet “GG” Goindi (podcast)
How Google Cloud is helping more startups build, grow, and scale their businesses (blog)
Automate identity document processing with Document AI (blog)

Interview
FinOps Foundation (site)
FinOpsX (site)
FinOpsPod (podcast)
Cloud FinOps: The Secret To Unlocking The Economic Potential Of Public Cloud (whitepaper)
Maximize Business Value with Cloud FinOps (whitepaper)
Unlocking the value of cloud FinOps with a new operating model (whitepaper)

Hosts
Stephanie Wong and Mark Mirchandani
