
PING

Latest episodes

May 28, 2025 • 59min

DELEG: Changing the DNS engine in flight again

In this episode of PING, APNIC’s Chief Scientist, Geoff Huston, revisits changes underway in how the Domain Name System (DNS) delegates authority over a given zone and how resolvers discover the new authoritative sources. We last explored this in March 2024.

In DNS, the word ‘domain’ refers to a scope of authority. Within a domain, everything is governed by its delegated authority. While that authority may only directly manage its immediate subdomains (children), its control implicitly extends to all subordinate levels (grandchildren and beyond). If a parent domain withdraws delegation from a child, everything beneath that child disappears. Think of it like a Venn diagram of nested circles: being a subdomain means being entirely within the parent’s scope.

The issue lies in how this delegation is handled: by way of nameserver (NS) records. These are part of both the child zone (where they are defined) and the parent zone (which must reference them). This becomes especially tricky with DNSSEC. The parent can’t authoritatively sign the child’s NS records because they are technically owned by the child, but if the child signs them, it breaks the trust chain from the parent.

Another complication is the emergence of third parties, distinct from the delegate, who actually operate the machinery of the DNS. We need mechanisms that give them permission to change operational aspects of a delegation without handing them all the keys a delegate holds over their domain name.

A new activity has been spun up in the IETF to address this delegation problem by creating a new kind of DNS record, the DELEG record, proposed to follow the Service Binding model defined in RFC 9460. Exactly how this works and what it means for the DNS is still up in the air.

DELEG could fundamentally change how authoritative answers are discovered, how DNS messages are transported, and how intermediaries interact with the DNS ecosystem. In the future, significant portions of DNS traffic might flow over new protocols, introducing novel behaviours in the relationships between resolvers and authoritative servers.

Read more about DELEG on the APNIC Blog and the web:
DNS and the proposed DELEG record (APNIC Blog, February 2024)
DELEG Working Group Charter (IETF website)
Service Binding and Parameter Specification via the DNS (IETF RFC 9460)
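To make the Service Binding idea concrete, here is a purely hypothetical zone-file sketch of what a parent-side DELEG record might look like, borrowing the SvcParams syntax of SVCB/HTTPS records from RFC 9460. The record name, fields, and parameters shown are illustrative assumptions only; the DELEG working group has not settled on any syntax:

```
; Parent zone (example.) delegating child.example.
; Today's delegation: NS records in the parent, which the parent cannot sign
child.example.   NS    ns1.dns-operator.net.

; Hypothetical DELEG record: parent-owned, parent-signed, SVCB-style
; parameters naming the operator and supported transports (illustrative)
child.example.   DELEG 1 config.dns-operator.net. (
                       alpn=dot,doq          ; encrypted transports
                       ipv4hint=192.0.2.53 )
```

Because the record would belong to the parent zone, the parent could sign it with DNSSEC directly, and the operator named in it could be changed without touching the child's keys.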
May 14, 2025 • 37min

DFOH, MVP & GILL: New ways of looking at BGP

In this episode of PING, Professor Cristel Pelsser, who holds the chair of critical embedded systems at UCLouvain, discusses her work measuring BGP, in particular the system described in the 2024 SIGCOMM best paper award-winning research, ‘The Next Generation of BGP Data Collection Platforms’.

Cristel and her collaborators Thomas Alfroy, Thomas Holterbach, Thomas Krenc and K.C. Claffy have built a system they call GILL, available on the web at https://bgproutes.io. This work also features a new service called MVP, to help find the ‘most valuable vantage point’ in the BGP collection system for your particular needs.

GILL has been designed for scale and will be capable of encompassing thousands of peerings. It also has an innovative approach to holding BGP data, focussed on the removal of demonstrably redundant information, and therefore achieves significantly higher compression of the data stream compared to, for example, holding MRT files.

The MVP system uses machine learning methods to aid in selecting the data collection point best suited to a researcher’s specific needs. Applying ML here allows a significant amount of data to be managed, and changes to be reflected in the selection of vantage points.

Their system has already been able to support DFOH, an approach to finding forged-origin attacks from peering relationships seen online in BGP, as opposed to the peering expected both from location and from declarations of intent inside systems like PeeringDB.

Read more about Cristel’s work and their BGP analysis tools on the web:
The Next Generation of BGP Data Collection Platforms (Best Paper Award at ACM SIGCOMM 2024)
bgproutes.io (web portal to the GILL, MVP and DFOH systems)
Measuring Internet Routing from the Most Valuable Points
A System to Detect Forged-Origin Hijacks (DFOH)
Apr 30, 2025 • 46min

The multiple ways to do multiple paths

In this episode of PING, APNIC’s Chief Scientist, Geoff Huston, discusses the history and emerging future of how Internet protocols get more than the apparent link bandwidth by using multiple links and multiple paths.

Initially, the model was quite simple, capable of handling up to four links of equal cost and delay reasonably well, typically to connect two points together. At the time, the Internet was built on telecommunications services originally designed for voice networks, with cabling laid between exchanges, from exchanges to customers, or across continents. This straightforward technique allowed the Internet to expand along available cable or fibre paths between two points. However, as the system became more complex, new path options emerged and bandwidth demands grew beyond the capacity of individual or even equal-cost links, so increasingly sophisticated methods for managing these connections had to be developed.

An interesting development at the end of this process is the impact of a fully encrypted transport layer on the intervening infrastructure’s ability to manage traffic distribution across multiple links. With encryption obscuring the contents of the dataflow, traditional methods for intelligently splitting traffic become less effective. Randomly distributing data can often worsen performance, as modern techniques rely on protocols like TCP to sustain high-speed flows by avoiding data misordering and packet loss.

This episode of PING explores how Internet protocols boost bandwidth by using multiple links and paths, and how secure transport layers affect this process.

Read more about multipath network protocols on the web:
IETF draft on Multipath for QUIC (IETF, April 2025)
Multipath TCP: Revolutionising connectivity one path at a time (Cloudflare Blog, January 2025)
RFC 8684: TCP Extensions for Multipath Operation with Multiple Addresses (IETF, 2020)
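The reason random per-packet spraying hurts TCP is that it reorders packets within a flow. A common alternative, sketched below, is to hash a flow's 5-tuple so every packet of the same flow takes the same link (the link count and tuple values here are illustrative, not any particular router's algorithm):

```python
import hashlib

def pick_link(src_ip, dst_ip, proto, src_port, dst_port, n_links):
    """Map a flow's 5-tuple to one of n_links.

    Hashing the whole flow identity keeps every packet of a flow on
    the same link, so the load balancer never introduces reordering.
    """
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:8], "big") % n_links

# Every packet of this flow lands on the same link.
link_a = pick_link("192.0.2.1", "198.51.100.7", "tcp", 40001, 443, 4)
link_b = pick_link("192.0.2.1", "198.51.100.7", "tcp", 40001, 443, 4)
assert link_a == link_b
```

The trade-off is that this only balances well when there are many flows; one elephant flow stays pinned to one link, which is exactly the limitation that multipath transports like MPTCP and Multipath QUIC address at the endpoints instead.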
Apr 16, 2025 • 41min

Pulse Internet Measurement Forum at APRICOT 2025: Part 2

Last month, during APRICOT 2025 / APNIC 59, the Internet Society hosted its first Pulse Internet Measurement Forum (PIMF). PIMF brings together people interested in Internet measurement from a wide range of perspectives, from technical details to policy, governance, and social issues. The goal is to create a space for open discussion, uniting both technologists and policy experts.

In this second special episode of PING, we continue our break from the usual one-on-one podcast format and present a recap of why the PIMF forum was held, plus the final three short interviews from the workshop.

First, we hear a repeat of Amreesh Phokeer’s presentation. Amreesh is from the Internet Society and discusses his role in managing the Pulse activity within ISOC. Alongside Robbie Mitchell, Amreesh helped organize the forum, aiming to foster collaboration between measurement experts and policy professionals.

Next, we hear from Beau Gieskens, a Senior Software Engineer from APNIC Information Products. Beau has been working on the DASH system and discusses his PIMF presentation on a redesign to an event-sourcing model, which reduced database query load and improved the speed and scaling of the service.

We then have Doug Madory from Kentik, who presented to PIMF on a quirk in how Internet Routing Registries (IRRs) are being used, which can impose massive costs in BGP filter configuration and is related to some recent route leaks seen at large in the default-free zone of BGP.

Finally, we hear from Lia Hestina from the RIPE NCC Atlas project. Lia is the community development officer focussing on the Asia Pacific and Africa for the Atlas project. Lia discusses the Atlas system and how it underpins measurements worldwide, including ones discussed in the PIMF meeting.

For more insights from PIMF, be sure to check out the Pulse Forum recording on the Internet Society YouTube feed.
Apr 2, 2025 • 44min

DNS Computer says "NO"

In this episode of PING, APNIC’s Chief Scientist, Geoff Huston, discusses the surprisingly vexed question of how to say ‘no’ in the DNS. This conversation follows a presentation by Shumon Huque at the recent DNS OARC meeting; Shumon will be on PING in a future episode talking about another aspect of the DNS protocol.

You would hope this would be a simple, straightforward answer to a question, but as usual with the DNS, there are more complexities under the surface. The DNS must indicate whether the labels in the requested name do not exist, whether the specific record type is missing, or both. Sometimes it needs to state both pieces of information, while other times it only needs to state one.

The problem is made worse by the constraints of signing answers with DNSSEC. There needs to be a way to say ‘no’ authoritatively while minimizing the risk of leaking any other information.

NSEC3 records are designed to limit this exposure by making it harder to enumerate an entire zone. Instead of explicitly listing ‘before’ and ‘after’ labels in a signed response denying a label’s existence, NSEC3 uses hashed values to obscure them. In contrast, the simpler NSEC model reveals adjacent labels, allowing an attacker to systematically map out all existing names, a serious risk for domain registries that depend on name confidentiality. This is documented in RFC 7129.

Saying ‘no’ with authority also raises the question of where signing occurs: at the zone’s centre (by the zone holder) or at the edge (by the zone server). These approaches lead to different solutions, each with its own costs and consequences.

In this episode of PING, Geoff explores the differences between a non-standard, vendor-explored solution and the emergence of a draft standard on how to say ‘no’ properly.
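To see how NSEC3 obscures names, here is a minimal sketch of the owner-name hashing defined in RFC 5155: the name in DNS wire format is salted and hashed with SHA-1, the digest is re-salted and re-hashed a configured number of extra times, and the result is encoded in base32hex. The salt and iteration count below are the example parameters from the RFC's appendix; this sketch skips wire-format corner cases a real implementation must handle:

```python
import base64
import hashlib

def nsec3_hash(name, salt_hex, iterations):
    """NSEC3 owner-name hash per RFC 5155 (SHA-1, the only defined hash).

    name is an absolute DNS name; salt_hex is the salt as hex digits;
    iterations is the number of ADDITIONAL hash applications.
    """
    # DNS wire format: length-prefixed lowercase labels, ending in a zero byte
    wire = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.lower().rstrip(".").split(".")
    ) + b"\x00"
    salt = bytes.fromhex(salt_hex)

    digest = hashlib.sha1(wire + salt).digest()
    for _ in range(iterations):
        digest = hashlib.sha1(digest + salt).digest()

    # Base32 with the "extended hex" alphabet (RFC 4648), lowercased
    b32 = base64.b32encode(digest).decode("ascii")
    return b32.translate(str.maketrans(
        "ABCDEFGHIJKLMNOPQRSTUVWXYZ234567",
        "0123456789ABCDEFGHIJKLMNOPQRSTUV")).lower()

# RFC 5155 Appendix A example parameters: salt AABBCCDD, 12 iterations
print(nsec3_hash("example", "aabbccdd", 12))
```

The signed zone then contains these opaque hashed names in sorted order, so a response can prove "nothing exists between hash X and hash Y" without revealing the actual neighbouring labels the way NSEC does.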
Mar 19, 2025 • 36min

Pulse Internet Measurement Forum at APRICOT 2025: Part 1

At the APRICOT 2025 / APNIC 59 meeting held in Petaling Jaya, Malaysia last month, the Internet Society held its first PIMF meeting. PIMF, the Pulse Internet Measurement Forum, is a gathering of people interested in Internet measurement in the widest possible sense, from technical information all the way to policy, governance and social questions. ISOC is interested in creating a space for this discussion to take place amongst the community, bringing both technologists and policy specialists into the same room.

This time on PING, instead of the usual one-on-one podcast format we’ve got five interviews from this meeting, and after the next episode from Geoff Huston at APNIC Labs we’ll play a second part, with three more of the presenters from this session.

First up we have Amreesh Phokeer from the Internet Society, who manages the Pulse activity in ISOC and, along with Robbie Mitchell, set up the meeting.

Then we hear from Christoph Visser from IIJ Labs in Tokyo, who presented on his measurements of the Steam game distribution platform used by Valve Software to share games. It’s a complex system of application-specific source selection, using multiple Content Distribution Networks (CDNs) to scale across the world, and it allows Christoph to see into link quality from a public API, with no extra measurements required, for an insight into the gamer community and their experience of the Internet.

The third interview is with Anand Raje from AIORI-IMN, India’s indigenous Internet measurement system. Anand leads a team which has built out a national measurement system using IoT ‘orchestration’ methods to manage probes and anchors, in a virtual environment which permits them to run multiple independent measurement systems hosted inside their platform.

After this there’s an interview with Andre Robachevsky from the Global Cyber Alliance (GCA). Andre established the MANRS system and its platform, and nurtured the organisation into being inside ISOC. MANRS has now moved into the care of GCA and Andre moved with it; he discusses how this complements the existing GCA activities.

Finally, we have a conversation with Champika Wijayatunga from ICANN on the KINDNS project. This is a programme designed to bring MANRS-like industry best practice to the DNS community at large, including authoritative DNS delegates, intermediate resolver operators, and the operators supporting client stub resolvers. Champika is interested in reaching into the community to get KINDNS more widely understood and to encourage its adoption, with over 2,000 entities having completed the assessment process already.

Next time we’ll hear from three more participants in the PIMF session: Doug Madory from Kentik, Beau Gieskens from APNIC Information Products, and Lia Hestina from the RIPE NCC.

Pulse Forum recording (Internet Society YouTube feed)
Mar 5, 2025 • 59min

Night of the BGP Zombies

In this episode of PING, APNIC’s Chief Scientist, Geoff Huston, explores BGP ‘zombies’: routes which should have been removed but are still there, the living dead of routes. How does this happen?

Back in the early 2000s, Gert Döring in the RIPE NCC region was collating a ‘state of BGP for IPv6’ report, and knew each of the 300 or so IPv6 announcements directly. He understood what should be seen and what was not being routed. He discovered, in this early stage of IPv6, that some routes he knew had been withdrawn in BGP still existed when he looked into the repositories of known routing state. This is some of the first evidence of a failure mode in BGP where a withdrawal fails to propagate, and some number of BGP speakers never learn that a route has been taken down. They hang on to it.

BGP only sends differences to the current routing state as and when they emerge. (If you start afresh you get a lot of differences, because everything must be sent from a ground state of nothing; after that, you’re only told when new things arrive and old things go away.) A speaker can therefore go a long time without saying anything about a particular route: if it’s stable and up, there is nothing to say, and once a withdrawal has been passed on, there is nothing left to tell. So if, somewhere in the middle of this conversation, a BGP speaker misses the news that a route is gone, then as long as it doesn’t have to tell anyone the route exists, nobody will ever know it missed that news.

In more recent times, there has been a concern this may be caused by a problem in how BGP sits inside TCP messages, and this has even led to an RFC in the IETF process defining a new way to close things out. Geoff isn’t convinced this diagnosis is correct, or that the proposed remediation is the right one. Prompted by a recent NANOG presentation, Geoff has been thinking about the problem and what to do, and he has a simpler approach which may work better.

Read more about BGP zombies at the APNIC Blog and the web:
BGP Zombies at NANOG 93 (Geoff Huston, APNIC Blog, February 2025)
NANOG 93 presentation on BGP Zombies (Iliana Xygkou, ThousandEyes)
RFC 9687: BGP Send Hold Timer (IETF RFC)
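The failure mode can be shown with a toy model. This sketch (all prefixes and AS numbers are illustrative, and a real RIB is far richer than a dict) replays the same incremental update stream to two speakers, with one speaker missing a single withdrawal; because BGP never re-states unchanged routes, the stale entry persists indefinitely:

```python
def apply_update(rib, update):
    """Apply one incremental BGP-style update to a RIB (dict: prefix -> AS path)."""
    kind, prefix, *rest = update
    if kind == "announce":
        rib[prefix] = rest[0]
    elif kind == "withdraw":
        rib.pop(prefix, None)
    return rib

updates = [
    ("announce", "2001:db8::/32", [64500, 64501]),
    ("announce", "2001:db8:1::/48", [64500, 64502]),
    ("withdraw", "2001:db8:1::/48"),
]

healthy, lossy = {}, {}
for i, update in enumerate(updates):
    apply_update(healthy, update)
    if i != 2:  # the lossy speaker never receives the withdrawal
        apply_update(lossy, update)

# The lossy speaker now holds a zombie: a route withdrawn everywhere else,
# and no future message will correct it while the route stays quiet.
zombies = set(lossy) - set(healthy)
print(zombies)  # {'2001:db8:1::/48'}
```

Nothing in the incremental protocol ever revisits the zombie, which is why proposed fixes focus on detecting the stalled session itself rather than the missing route.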
Feb 19, 2025 • 50min

RPKI Views: The archive of RPKI state

In this episode, Job Snijders discusses rpkiviews, his long-term project to collect ‘views’ of RPKI state every day and maintain an archive of BGP route validation states. The project is named to echo Route Views, the long-standing archive of BGP state maintained by the University of Oregon, which has been discussed on PING.

Job is based in the Netherlands and has worked in BGP routing for large international ISPs and content distribution networks, as well as being a board member of the RIPE NCC. He is known for his work producing the open-source rpki-client RPKI validator, implemented in C and distributed widely through the OpenBSD project.

RPKI is the Resource PKI, ‘resource’ meaning the Internet number resources: the IPv4, IPv6 and Autonomous System (AS) numbers which are used to implement routing in the global Internet. The PKI provides cryptographic proofs of delegation of these resources, and allows the delegates to sign over their intentions to originate specific prefixes in BGP, and the relationships between the ASes which speak BGP to each other.

Why rpkiviews? Job explains that there’s a necessary conversation between the people involved in the operational deployment of secure BGP and the standards development and research community. How many of the world’s BGP routes are being protected? How many places are producing Route Origin Authorizations (ROAs), the primary cryptographic object used to perform Route Origin Validation (ROV), and how many objects are made? What’s the error rate in production, and the rate of growth? A myriad of introspective ‘meta’ questions need to be asked when deploying this kind of system at scale, and one of the best tools to use is an archive of state, updated frequently and, as with Route Views, collected from a diverse range of places worldwide, to understand the dynamics of the system.

Job uses the archive to produce his annual ‘RPKI Year in Review’ report, which was published this year on the APNIC Blog (it’s normally posted to operations, research and standards development mailing lists and presented at conferences and meetings), and its products are being used by the BGPalerter service developed by Massimo Candela.

Read about the rpkiviews archive on the APNIC Blog and on the web:
RPKI’s 2024 Year in Review (Job Snijders, APNIC Blog, January 2025)
rpkiviews (the rpkiviews web archive)
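The core of ROV, the validation whose outcomes the rpkiviews archive records, can be sketched in a few lines. Per RFC 6811, an announcement is ‘valid’ if a covering ROA matches its origin AS within the ROA's maximum length, ‘invalid’ if covering ROAs exist but none match, and ‘not-found’ if no ROA covers the prefix. A simplified sketch working from already-validated ROA payloads (real validators start from the signed RPKI objects themselves; the prefixes and AS numbers are illustrative):

```python
import ipaddress

def rov(prefix, origin_as, roas):
    """Classify a BGP announcement against simplified ROA payloads.

    roas: iterable of (roa_prefix, max_length, authorized_as) tuples.
    Returns 'valid', 'invalid', or 'not-found' (RFC 6811 semantics).
    """
    net = ipaddress.ip_network(prefix)
    covered = False
    for roa_prefix, max_length, authorized_as in roas:
        roa_net = ipaddress.ip_network(roa_prefix)
        if net.version == roa_net.version and net.subnet_of(roa_net):
            covered = True  # some ROA speaks for this address space
            if origin_as == authorized_as and net.prefixlen <= max_length:
                return "valid"
    return "invalid" if covered else "not-found"

roas = [("192.0.2.0/24", 24, 64500)]
print(rov("192.0.2.0/24", 64500, roas))    # valid
print(rov("192.0.2.0/25", 64500, roas))    # invalid (exceeds maxLength)
print(rov("192.0.2.0/24", 64501, roas))    # invalid (wrong origin AS)
print(rov("198.51.100.0/24", 64500, roas)) # not-found
```

Counting these three outcomes across the whole routing table, day after day, is exactly the kind of longitudinal question the archive exists to answer.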
Feb 5, 2025 • 59min

How many DNS nameservers are enough?

In his first episode of PING for 2025, APNIC’s Chief Scientist, Geoff Huston, returns to the Domain Name System (DNS) and explores the many faces of the nameservers behind domains.

Up at the root (the very top of the namespace, where all top-level domains like .gov, .au or .com are defined to exist) there is a well-established principle of 13 root nameservers. Does this mean only 13 hosts worldwide service this space? Nothing could be further from the truth! Literally thousands of hosts act as one of those 13 root server labels, in a highly distributed worldwide mesh known as ‘anycast’, which works through BGP routing.

The thing is, exactly how the number of nameservers for any given domain is chosen, and how resolvers (the querying side of the DNS, the things which ask questions of authoritative nameservers) decide which of those servers to use, isn’t as well defined as you might think. The packet sizes, the order of data in the packet, and how it’s encoded are all very well defined, but ‘which one should I use from now on, to answer this kind of question’ is really not well defined at all.

Geoff has been using the APNIC Labs measurement system to test behaviour here, and looking at basic numbers for the delegated domains at the root. The number of servers he sees, their diversity, and the nature of their deployment technology in routing are quite variable. Even more interestingly, the diversity of ‘which one gets used’ on the resolver side suggests some very old, out-of-date and over-simplistic methods are still being used almost everywhere to decide what to do.

Read more about Geoff’s research on DNS nameserver selection and diversity on the APNIC Blog:
DNS nameservers: Service performance and resilience (Geoff Huston, APNIC Blog, February 2025)
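One classic resolver heuristic for ‘which server should I use’ keeps a smoothed RTT estimate per nameserver and usually prefers the fastest, occasionally re-probing the others so a recovered server can win back traffic. This sketch is illustrative of the general technique only, not of any particular resolver's implementation; the parameters and server names are assumptions:

```python
import random

class ServerSelector:
    """Pick among a zone's nameservers by smoothed RTT, with re-probing."""

    def __init__(self, servers, explore=0.05, alpha=0.3):
        self.srtt = {s: None for s in servers}  # None = never measured
        self.explore = explore  # probability of probing a non-best server
        self.alpha = alpha      # EWMA weight given to each new sample

    def choose(self):
        unmeasured = [s for s, r in self.srtt.items() if r is None]
        if unmeasured:
            return random.choice(unmeasured)  # learn about every server once
        if random.random() < self.explore:
            return random.choice(list(self.srtt))  # occasional re-probe
        return min(self.srtt, key=self.srtt.get)  # otherwise: fastest wins

    def record(self, server, rtt_ms):
        old = self.srtt[server]
        self.srtt[server] = rtt_ms if old is None else (
            (1 - self.alpha) * old + self.alpha * rtt_ms)

sel = ServerSelector(["ns1.example.", "ns2.example."])
sel.record("ns1.example.", 12.0)
sel.record("ns2.example.", 80.0)
# With both measured, ns1 is chosen most of the time (95% here).
```

Even this small sketch shows why behaviour diverges in practice: the exploration rate, the smoothing weight, and what to do with unmeasured servers are all unstandardized choices, which is part of what Geoff's measurements probe.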
Jan 22, 2025 • 44min

RISKY BIZ-ness

Welcome back to PING for the start of 2025. In this episode, Gautam Akiwate (now with Apple, but at the time of recording with Stanford University) talks about the 2021 Applied Networking Research Prize-winning paper, co-authored with Stefan Savage, Geoffrey Voelker and Kimberly Claffy, titled ‘Risky BIZness: Risks Derived from Registrar Name Management’.

The paper explores a situation which emerged inside the supply chain behind DNS name delegation, in the use of an IETF protocol called the Extensible Provisioning Protocol (EPP). EPP is an XML-based protocol, and is how registry-registrar communications take place on behalf of a given domain name holder (the delegate), to record which DNS nameservers have the authority to publish the delegated zone.

The problem doesn’t lie in the DNS itself, but in the operational practices which emerged in some registrars to remove dangling dependencies in their systems when domain names were de-registered. In effect, they used an EPP feature to rename the dependency, so they could move on with selling the domain name to somebody else. The problem is that this feature created valid names, which could themselves then be purchased. For some number of DNS consumers, those new valid nameservers would then be permitted to serve the domain, enabling attacks on the integrity of the DNS and the web.

Gautam and his co-authors explored a very interesting quirk of the back-end systems, and in the process helped improve the security of the DNS and identified weaknesses in a long-standing ‘daily dump’ process intended to provide audit and historical data.

Read more about Risky BIZness and the supply chain attack on the web:
The 2021 ANRP paper ‘Risky BIZness: Risks Derived from Registrar Name Management’
2017 Grand Jury indictment of Zhang et al
2022 IMC paper ‘Retroactive Identification of Targeted DNS Infrastructure Hijacking’
The prevalence, persistence, and perils of lame delegations (APNIC Blog, 2021)
