The MapScaping Podcast - GIS, Geospatial, Remote Sensing, earth observation and digital geography

MapScaping
Feb 3, 2026 • 37min

Vibe Coding and the Fragmentation of Open Source

Why Machine-Written Code is the Best (and Most Dangerous) Thing for Geospatial

The current discourse surrounding AI coding is nothing if not polarized. On one side, the technofuturists urge us to throw away our keyboards; on the other, skeptics dismiss Large Language Models (LLMs) as little more than "fancy autocomplete" that will never replace a "real" engineer. Both sides miss the nuanced reality of the shift we are living through right now.

I recently sat down with Matt Hansen, Director of Geospatial Ecosystems at Element 84, to discuss this transition. With a 30-year career spanning the death of photographic film to the birth of Cloud-Native Geospatial, Hansen has a unique vantage point on how technology shifts redefine our roles. He isn't predicting a distant future; he is describing a present where the barrier between an idea and a functioning tool has effectively collapsed.

The "D" Student Who Built the Future

Hansen's journey into the heart of open-source leadership began with what he initially thought was a terminal failure. As a freshman at the Rochester Institute of Technology, he found himself in a C programming class populated almost entirely by seasoned professionals from Kodak. Intimidated and overwhelmed by the "syntax wall," he withdrew from the class the first time and scraped by with a "D" on his second attempt. For years, he believed software simply wasn't his path.

Today, however, he is a primary architect of the SpatioTemporal Asset Catalog (STAC) ecosystem and a major open-source contributor. This trajectory is the perfect case study for the democratizing power of AI: it allows the subject matter expert—the person who understands photographic technology or imaging science—to bypass the mechanical hurdles of brackets and semicolons.

"I took your class twice and thought I was never software... and now here I am like a regular contributor to open source software for geospatial." — Matt Hansen to his former professor.

The Rise of "Vibe Coding" and the Fragmentation Trap

We are entering the era of "vibe coding," where developers prompt AI based on a general description or "vibe" of what they need. While this is exhilarating for the individual, it creates a systemic risk of bespoke implementations. When a user asks an AI for a solution without a deep architectural understanding, the machine often generates a narrow, unvetted fragment of code rather than utilizing a secure, scalable library.

The danger here is a catastrophic loss of signal. If thousands of users release these AI-generated fragments onto platforms like GitHub, we risk drowning out the vetted, high-quality solutions that the community has spent decades building. We are creating a "sea of noise" that could make it harder for both humans and future AI models to identify the standard, proper way to solve a problem.

Why Geospatial is Still "Special" (The Anti-meridian Test)

For a long time, the industry mantra has been "geospatial isn't special," pushing for spatial data to be treated as just another data type, as in GeoParquet. However, Hansen argues that AI actually proves that domain expertise is more critical than ever. Without specific guidance, AI often fails to account for the unique edge cases of a spherical world.

Consider the anti-meridian problem: polygons crossing the 180th meridian. When asked to handle spatial data, an AI will often brute-force custom logic that works for a small, localized dataset but fails the moment it encounters the wrap-around logic of a global scale. A domain expert knows to direct the AI toward Pete Gadomski's antimeridian library. AI is not a subject matter expert; it is a powerful engine that requires an expert navigator to avoid the "Valley of Despair."
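For anyone who has not hit this edge case, here is a minimal sketch of the vetted approach the episode points to, assuming the antimeridian Python package and Shapely; the polygon coordinates are a made-up example spanning the date line.

```python
# A minimal sketch: fixing a polygon that crosses the 180th meridian.
# Assumes the `antimeridian` package (pip install antimeridian) and Shapely;
# the coordinates below are a made-up example spanning the date line.
from shapely.geometry import Polygon
import antimeridian

# Naively-wound polygon running from 170°E across the date line to 170°W (-170)
crossing = Polygon([(170, 10), (-170, 10), (-170, -10), (170, -10), (170, 10)])

# fix_polygon splits the geometry at the antimeridian so every part stays
# within [-180, 180] longitude.
fixed = antimeridian.fix_polygon(crossing)
print(fixed.geom_type)  # typically "MultiPolygon" for a crossing polygon
```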
Documentation is Now SEO for the Machines

We are seeing a counterintuitive shift in how we value documentation. Traditionally, README files and tutorials were written by humans, for humans. In the age of AI, documentation has become the primary way we "market" our code to the machines. If your open-source project lacks a clean README or a rigorous specification, it is effectively invisible to the AI-driven future of development. By investing in high-quality documentation, developers are engaging in a form of technical SEO: you are ensuring that when an AI looks for the "signal" in the noise, it chooses your vetted library because it is the most readable and reliable option available.

From Software Developers to Software Designers

The role of the geospatial professional is shifting from writing syntax to what Hansen calls the "Foundry" model. Using tools like GitHub Spec Kit, the human acts as a designer, defining rigorous blueprints, constraints, and requirements in human language. The machine then executes the "how," while the human remains the sole arbiter of the "what" and "why."

Hansen's advice for the next generation—particularly those entering a job market currently hostile to junior engineers—is to abandon generalism. Don't just learn to code; become a specialist in a domain like geospatial. The ability to write Python is becoming a commodity, but the ability to design a system that accounts for the nuances of remote sensing is an increasingly rare and valuable asset.

History Repeats: The "Priesthood" of Assembly

This shift mirrors the 1950s, when the "priesthood" of assembly programmers looked at the first compilers with deep suspicion. Kathleen Booth, who wrote the first assembly language, lived in a world where manual coding was an arcane, elite skill. Those early programmers argued that compilers were untrustworthy and that a human could always write "better" code by hand. They were technically right about efficiency, but they were wrong about the future. Just as the compiler was "good enough" to allow us to move "up the stack" and take on more complex problems, AI is the next level of abstraction. We might use a "Ralph Wiggum script"—a loop that feeds AI output back into itself until the task is "done" (sketched below)—and while it may be a brute-force method, it is often more productive than the perfection of the past.

Conclusion: The Future is a Specialist's Game

We are moving away from being the writers of code and toward being the designers of systems. While the "syntax wall" has been demolished, the requirement for domain knowledge has only grown higher. The keyboard isn't dying; it is being repurposed for higher-level architectural thought.

As the industry experiences a "recursive improvement" of these tools, the question for every professional is no longer whether the machine can do your job. It's whether you have the specialized expertise to tell the machine what a "good enough" job actually looks like. Are you prepared to stop being a coder and start being a designer?
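For the curious, here is a hypothetical sketch of the "Ralph Wiggum" pattern described above. The agent command, prompt, and completion marker are placeholders rather than any specific tool's interface.

```python
# Hypothetical sketch of a "Ralph Wiggum" loop: re-run an AI coding agent on
# the same task until it reports completion (or we give up). The command name,
# prompt, and "DONE" marker are placeholders, not a real tool's interface.
import subprocess

PROMPT = "Implement the spec in SPEC.md; print DONE when all tests pass."

for attempt in range(10):                       # hard cap so the loop can't run forever
    result = subprocess.run(
        ["your-agent-cli", PROMPT],             # placeholder CLI for any coding agent
        capture_output=True, text=True,
    )
    print(f"attempt {attempt + 1}: {result.stdout[-200:]}")
    if "DONE" in result.stdout:                 # keep looping until the task is "done"
        break
```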
Jan 19, 2026 • 37min

A5 Pentagons Are the New Bestagons

Felix Palmer, a developer and maintainer of DeckGL with a background in physics, discusses the innovative A5 discrete global grid system. He explains the challenges of aggregating global data and the importance of choosing the right grid for accurate analysis. Felix highlights how A5 improves upon previous systems like S2 and H3 by reducing area distortion and enabling equal-area projections. He also talks about the role of LLMs in enhancing geospatial tooling and shares insights on building accessible multilingual libraries for analysis.
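To make the aggregation challenge concrete, here is a minimal sketch of the general DGGS workflow: assign each point to a grid cell, then summarize per cell. It is shown with h3-py (one of the systems A5 is compared against) only because that API is widely known; the sample points and resolution are arbitrary, and the same bin-and-count pattern applies to A5's pentagonal cells.

```python
# Minimal sketch of DGGS point aggregation, shown with h3-py (v4 API) only
# because that API is widely known; the same bin-and-count pattern applies
# to A5 cells. Sample points and resolution are arbitrary.
from collections import Counter
import h3

points = [(-36.85, 174.76), (51.51, -0.13), (51.50, -0.12)]  # (lat, lon)
resolution = 7

counts = Counter(h3.latlng_to_cell(lat, lon, resolution) for lat, lon in points)

for cell, n in counts.items():
    # cell_area makes the distortion visible: hexagon areas vary across the
    # globe, which is exactly the problem an equal-area grid like A5 targets.
    print(cell, n, f"~{h3.cell_area(cell, unit='km^2'):.2f} km^2")
```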
Jan 8, 2026 • 36min

The Sustainable Path for Open Source Businesses

The Open-Source Conundrum

Many successful open-source projects begin with passion, but the path from a community-driven tool to a sustainable business is often a trap.

The most common route—relying on high-value consulting contracts—can paradoxically lead to operational chaos. Instead of a "feast or famine" cycle, many companies find themselves with more than enough work, but this success comes at a cost: a fragmented codebase, an exhausted team, and a growing disconnect from the core open-source community.

This episode deconstructs a proven playbook for escaping this trap: the strategic transition from a service-based consultancy to a product-led company.

Through the story of Jeroen Ticheler and his company, GeoCat, we will analyze how this pivot creates a more stable business, a healthier open-source community, and ultimately, a better product for everyone.
Dec 26, 2025 • 34min

Free Software and Expensive Threats

Open-source software is often described as "free," a cornerstone of the modern digital world available for anyone to download, use, and modify. But this perception of "free" masks a growing and invisible cost—not one paid in dollars, but in the finite attention, time, and mounting pressure placed on volunteer and community maintainers.

This hidden tax is most acute when it comes to security.

Jody from GeoCat, a long-time contributor to the popular GeoServer project, pulled back the curtain on the immense strain that security vulnerabilities place on the open-source ecosystem.

His experiences reveal critical lessons for anyone who builds, uses, or relies on open-source software.
Dec 18, 2025 • 33min

Mapping Your Own World: Open Drones and Localized AI

What if communities could map their own worlds using low-cost drones and open AI models instead of waiting for expensive satellite imagery? In this episode with Leen from HOT (Humanitarian OpenStreetMap Team), we explore how they're putting open mapping tools directly into communities' hands—from $500 drones that fly in parallel to create high-resolution imagery across massive areas, to predictive models that speed up feature extraction without replacing human judgment.

Key topics:
Why local knowledge beats perfect accuracy
The drone tasking system: how multiple pilots map 80+ square kilometers simultaneously
AI-assisted mapping with humans in the loop at every step
Localizing AI models so they actually understand what buildings in Chad or Papua New Guinea look like
The platform approach: plugging in models for trees, roads, rooftop material, waste detection, whatever communities need
The tension between speed and OpenStreetMap's principles
Why mapping is ultimately a power game—and who decides what's on the map
Dec 9, 2025 • 46min

From Data Dump to Data Product

In this conversation, Jed Sundwall, Executive Director of Radiant Earth and an open-data advocate, emphasizes the critical distinction between raw data and cohesive data products. He critiques the current state of open data portals, advocating for intentional design with clear documentation and support. Jed introduces Source Cooperative as an invisible but powerful tool for easy data publishing. He also discusses the concept of 'gazelles'—agile organizations capable of adapting to 21st-century challenges, calling for innovative strategies to sustain long-term data stewardship.
Dec 2, 2025 • 14min

Reflections from FOSS4G 2025

Reflections from the FOSS4G 2025 conference

Processing, Analysis, and Infrastructure (FOSS4G is Critical Infrastructure)

The high volume of talks on extracting meaning from geospatial data—including Python workflows, data pipelines, and automation at scale—reinforced the idea that FOSS4G represents critical infrastructure.

AI Dominance: AI took up a lot of space at the conference. I was particularly interested in practical, near-term impact talks like AI-assisted coding and how large language models can enhance geospatial workflows in QGIS. Typically, AI discussions focus on big data and earth observation, but these topics touch a larger audience. I sometimes wonder if adding "AI" to a title is now like adding a health warning: "Caution, a machine did this."

Python Still Rules (But Rust is Chatting): Python remains the pervasive, default geospatial language. However, there was chatter about Rust. One person suggested rewriting QGIS in Rust might make it easier to attract new developers.

Data Infrastructure, Formats, and Visualization

When geospatial people meet, data infrastructure—the "plumbing" of how data is stored, organized, and accessed—always dominates.

Cloud Native Won: Cloud-native architecture captured all the attention. When thinking about formats, we are moving away from files on disk and toward objects in storage and streaming subsets of data. Key cloud-native formats covered included COGs (Cloud Optimized GeoTIFFs), Zarr, GeoParquet, and PMTiles. A key takeaway was the need to choose a format that best suits the use case, defined by who will read the file and what they will use the data for, rather than focusing solely on writing it. The SpatioTemporal Asset Catalog (STAC) "stole the show" as data infrastructure, and DuckDB was frequently mentioned (a short sketch of this streaming pattern follows these notes).

Visualization is moving beyond interactive maps and toward "interactive experiences." There were also several presentations on Discrete Global Grid Systems (DGGS).

Standards and Community Action

Standards Matter: Standards are often "really boring," but they are incredibly important for interoperability and for reaping the benefits of network effects. The focus was largely on OGC APIs replacing legacy APIs like WMS and WFS (making it hard not to mention pygeoapi).

Community Empowerment: Many stories focused on community-led projects solving real-world problems. This represents a shift away from expert-driven projects toward community action supported by experts. Many used OSM (OpenStreetMap) as critical data infrastructure, highlighting the need for locals to fill in large empty chunks of the map.

High-Level Takeaways for the Future

If I had to offer quick guidance based on the conference, it would be: Learn Python. AI coding is constantly improving and worth thinking about. Start thinking about maps as experiences. Embrace the cloud and understand cloud-native formats. Standards matter. AI is production-ready and will be an increasingly useful interface to analysis.

Reflections: What Was Missing?

The conference was brilliant, but a few areas felt underrepresented:

Sustainable Funding Models: I missed a focus on how organizations can rethink their business models to maintain FOSS4G as critical infrastructure without maintainers feeling their time is an arbitrage opportunity.

Niche Products: I would have liked more stories about side hustles and niche SaaS products people were building, although I was glad to see the "Build the Thing" product workshop on the schedule.
Natural Language Interface: Given the impact natural language is having on how we interact with maps and geo-data, I was surprised there wasn't more dedicated discussion around it. I believe it will be a dominant way we interact with the digital world.

Art and Creativity: Beyond cartography and design talks, I was surprised how few talks focused on creative passion projects built purely for the joy of creation, not necessarily tied to being part of something bigger.
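As flagged above, here is a minimal sketch of the cloud-native pattern the conference kept circling: search a STAC API for items and stream only the subset you need instead of downloading whole files. It assumes pystac-client and rasterio; the Earth Search endpoint, collection, bounding box, and asset key are illustrative choices, not recommendations.

```python
# Minimal sketch of the cloud-native "stream a subset" pattern:
# search a public STAC API, then read only a window of one COG asset.
# Assumes pystac-client and rasterio; endpoint, collection, bbox, and asset
# key are illustrative values, not a recommendation.
from pystac_client import Client
import rasterio
from rasterio.windows import Window

catalog = Client.open("https://earth-search.aws.element84.com/v1")
search = catalog.search(
    collections=["sentinel-2-l2a"],
    bbox=[174.5, -37.1, 175.0, -36.7],   # roughly Auckland, NZ
    datetime="2025-01-01/2025-03-31",
    max_items=1,
)
item = next(search.items())

# Read a small window from the cloud-optimized GeoTIFF instead of the whole file.
with rasterio.open(item.assets["red"].href) as src:
    chip = src.read(1, window=Window(0, 0, 512, 512))
print(item.id, chip.shape)
```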
Nov 27, 2025 • 42min

Building a Community of Geospatial Storytellers

Karl returns to the MapScaping podcast to discuss his latest venture, Tyche Insights - a platform aimed at building a global community of geospatial storytellers working with open data. In this conversation, we explore the evolution from his previous company, Building Footprint USA (acquired by Lightbox), to this new mission of democratizing public data storytelling. Karl walks us through the challenges and opportunities of open data, the importance of unbiased storytelling, and how geospatial professionals can apply their skills to analyze and share insights about their own communities. Karl shares his vision for creating something akin to Wikipedia, but for civic data stories - complete with style guides, editorial processes, and community collaboration.

Featured Links

Tyche Insights:
Main website: https://tycheinsights.com
Wiki platform: https://wiki.tycheinsights.com
Example project: https://albanydatastories.com

Mentioned in Episode:
USAFacts: https://usafacts.org
QField Partner Program: https://qfield.org/partner
Open Data Watch (monitoring global open data policies)
Nov 17, 2025 • 19min

I have been making AI slop and you should too

AI Slop: An Experiment in Discovery

Solo Episode Reflection: I'm back behind the mic after about a year-long break. Producing this podcast takes more time than you might imagine, and I was pretty burnt out. The last year brought some major life events, including moving my family back to New Zealand from Denmark, dealing with depression, burying my father, starting a new business with my wife, and having a teenage daughter in the house. These events took up a lot of space.

The Catalyst for Return: Eventually, you figure out how to deal with grief, stop mourning the way things were, and focus on the way things could be. When this space opened up in my life, AI came into the picture. AI got me excited about ideas again because, for the first time, I could just build things myself without needing to pitch ideas or spend limited financial resources.

On "AI Slop": I understand why some content is called "slop," but for those of us who see AI as a tool, I don't think the term is helpful. We don't refer to our first clumsy experiments with other technologies—like our first map or first lines of code—as slop. I believe that if we want to encourage curiosity and experimentation, calling the results of people trying to discover what's possible "slop" isn't going to help.

My AI Experimentation Journey

My goal in sharing these experiments is to encourage you to go out and try AI yourself.

Phase 1: SEO and Content Generation. My experimentation began with generating SEO-style articles as a marketing tool. As a dyslexic person, I previously paid freelancers thousands of dollars over the years to help create content for my website because it was too difficult or time-consuming for me to create myself.

Early Challenges & Learning: My initial SEO content wasn't great, and Google recognized this, which is why those early experiments don't rank in organic search. However, this phase taught me about context windows, the importance of prompting (prompt engineering), and which models and tools to use for specific tasks.

Automation and Agents: I played around with automation platforms like Zapier, make.com, and n8n. I built custom agents, starting with Claude projects and custom GPTs. I even experimented with voice agents using platforms like Vapi and ElevenLabs.

Unexpected GIS Capabilities: During this process, I realized you can ask platforms like ChatGPT to perform GIS-related data conversions (e.g., GeoJSON to KML or shapefile using geopandas), reproject data, create buffers around geometries, and even upload a screenshot of a table from a PDF and convert it to a CSV file (a geopandas sketch follows below). While I wouldn't blindly trust an LLM for critical work, it's been interesting to learn where they make mistakes and what I can trust them for.

AI as a Sparring Partner: I now use AI regularly to create QGIS plugins and automations. Since I often work remotely as the only GIS person on certain projects, I use AI—specifically talking to ChatGPT via voice on my phone—as a sparring partner to bounce ideas off and help me solve problems when I get stuck.

Multimodal Capabilities: The multimodal nature of Gemini is particularly interesting; if you share your screen while working in QGIS, Gemini can talk you through solving a problem (though you should consider privacy concerns).
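These are exactly the kinds of chores an LLM will happily script for you. Here is a minimal sketch of that conversion, reprojection, and buffer workflow using geopandas; the file names are placeholders, and the KML line depends on your GDAL build shipping a KML driver, so treat the whole thing as illustrative.

```python
# Minimal sketch of the GIS chores mentioned above, using geopandas.
# File names are placeholders; writing KML depends on your GDAL build
# having a KML driver, so that line is the least portable part.
import geopandas as gpd

gdf = gpd.read_file("parcels.geojson")          # hypothetical input file

# Format conversion
gdf.to_file("parcels.shp")                      # shapefile
gdf.to_file("parcels.kml", driver="KML")        # KML, if the driver is available

# Reproject to a projected CRS (NZTM2000 here) so buffer distances are in metres
nztm = gdf.to_crs(epsg=2193)
buffered = nztm.copy()
buffered["geometry"] = buffered.geometry.buffer(100)   # 100 m buffer
buffered.to_file("parcels_buffered_100m.gpkg", driver="GPKG")
```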
The Shift to Single-Serve Map Applications

I noticed that the digital landscape was changing rapidly. LLMs were becoming "answer engines," replacing traditional search on Google, which introduced AI Overviews. Since these models no longer distribute traffic to websites like mine the way they used to, I needed a new strategy.

The Problem with Informational Content: Informational content on the internet is going to be completely dominated by AI.

The Opportunity: Real Data: AI is great at generating content, but if you need actual data—like contours for your specific plot of land in New Zealand—you need real data, not generated data.

New Strategy: My new marketing strategy is to create targeted, single-serve map applications and embed them in my website. These applications do one thing and one thing only, using open and valuable data to solve very specific problems. This allows me to rank in organic search because these are problems that LLMs have not yet mastered.

Coding with AI: I started by using ChatGPT to code small client-side map applications, then moved to Claude, which is significantly better than OpenAI's models and is still my coding model of choice. Currently, I use Cursor AI as a development environment, swapping between Claude Code, OpenAI's Codex, and other models.

A Caveat: Using AI for coding can be incredibly frustrating. The quality of the code drops dramatically once it reaches a certain scale. However, even with flaws, it's a thousand times better and faster than what I could do myself, making my ideas possible. Crucially, I believe that for the vast majority of use cases, mediocre code is good enough.

Success Story: GeoHound

After practicing and refining my methods, I decided to build a Chrome extension. Every GIS professional can relate to the pain point of sifting through HTTP calls in the developer tools network tab to find the URL for a web service to use in QGIS or ArcGIS. (A sketch of that kind of URL filtering follows below.)

The Impossible Idea Made Possible: I had pitched this idea to multiple developers in the past, who were either uninterested or quoted between $10,000 and $15,000 to build it.

The AI Result: Using AI, I had a minimum viable Chrome extension—GeoHound—that filtered out common geo web services within three hours. It took a few days of intermittent work before it was published to the Chrome and Edge web stores.

Current Use: GeoHound has thousands of users (my own statistics suggest closer to, or over, 3,000 users, compared to the 1,000 shown on the Chrome store). While not perfect, it is clearly good enough, and this was something that was impossible for me just six months ago.
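To make that pain point concrete, here is a rough illustration of the kind of URL pattern matching involved. This is not GeoHound's actual implementation (the extension itself is browser code); the patterns and sample URLs are assumptions for the sake of the sketch.

```python
# Illustrative only: the sort of URL filtering GeoHound automates, not its
# actual implementation. Patterns and sample URLs are assumptions.
import re

GEO_SERVICE_PATTERNS = [
    r"service=wms", r"service=wfs", r"service=wmts",              # OGC web services
    r"/arcgis/rest/services/", r"/FeatureServer", r"/MapServer",  # Esri REST endpoints
    r"\.pbf$", r"/tiles?/\d+/\d+/\d+",                            # vector / XYZ tiles
    r"/collections/.+/items",                                     # OGC API - Features
]

def looks_like_geo_service(url: str) -> bool:
    """Return True if a captured request URL matches a known geo-service signature."""
    return any(re.search(p, url, re.IGNORECASE) for p in GEO_SERVICE_PATTERNS)

captured = [
    "https://example.com/geoserver/ows?service=WMS&request=GetMap",
    "https://example.com/arcgis/rest/services/Parcels/MapServer/0/query",
    "https://example.com/assets/app.js",
]
print([u for u in captured if looks_like_geo_service(u)])
```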
My Point: Now is the Time to Experiment

AI is here, and it will lead to profound change. Experimenting with it is vital because it will:
Help you develop the skills and knowledge needed to meet the needs of the people you serve.
Help you better understand what is hype and what is not, allowing you to decipher which voices to listen to.

We are moving from a world where information is ubiquitous to a world where knowledge is ubiquitous. Now is the time to be making sloppy mistakes. Don't let perfection stop you from learning how to make stuff that is going to be good enough. If your work consists of repetitive tasks that follow step-by-step recipes, that's going to be a tough gig going forward. Long-term, there will be new opportunities, but you need to be experimenting now to be in a position to take advantage of them.

Resources Mentioned

You will find a list of the tools I've been experimenting with in the show notes.

Automation: make.com, n8n, Zapier
Voice/Agents: ElevenLabs, Vapi, custom GPTs (MCP servers)
Coding Models: Claude (current choice), OpenAI's Codex, ChatGPT
Development Environment: Cursor AI
LLMs/Multimodal: Gemini (studio.google.com)
Browser Extension: GeoHound (for Chrome and Edge) https://chromewebstore.google.com/detail/nooldeimgcodenhncjkjagbmppdinhfe?utm_source=item-share-cb

If you build anything interesting with these tools, please let me know! I'd love to hear about your own experiments.
Nov 10, 2025 • 49min

Scribble: An AI Agent for Web Mapping

Jonathan Wagner, CEO of Scribble Maps, discusses the innovative Scribble AI agent he's integrated into his platform. Unlike a typical chatbot, Scribble can manage tools, fetch data, and enhance onboarding. Wagner shares insights on the risks of AI, privacy concerns, and the challenges of technical implementation. He highlights the democratization of mapping tools and stresses the importance of AI in geospatial interactions. Listeners will learn how Scribble aims to redefine mapping processes and interaction without replacing human expertise.
