Satellite image deep learning cover image

Satellite image deep learning

Latest episodes

Jan 8, 2025 • 17min

Building Damage Assessment

In this episode, I caught up with Caleb Robinson to learn about the building damage assessment toolkit from the Microsoft AI for Good Lab. This toolkit enables first responders to carry out an end-to-end workflow for assessing damage to buildings after natural disasters using post-disaster satellite imagery. It includes tools for annotating imagery, fine-tuning deep learning models, and visualising model predictions on a map. Caleb shared an example where an organisation was able to train a useful model with just 100 annotations and complete the entire workflow in half a day. I believe this represents a significant new capability, enabling more rapid response in times of crisis.

* 📺 Video of this conversation on YouTube
* 👤 Caleb on LinkedIn
* 🖥️ The toolkit on Github

Bio: Caleb is a Research Scientist in the Microsoft AI for Good Research Lab. His work focuses on tackling large-scale problems at the intersection of remote sensing and machine learning/computer vision. Some of the projects he works on include estimating land cover, poultry barns, solar panels, and cows from high-resolution satellite imagery. Caleb is interested in research topics that facilitate using remotely sensed imagery more effectively.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.satellite-image-deep-learning.com
Dec 19, 2024 • 13min

Deepness QGIS plugin

In this episode, I caught up with Marek Kraft to learn about the Deepness QGIS plugin. QGIS is a widely used open-source tool for working with geospatial data. It's written in Python, and its functionality can be expanded with plugins. One plugin that recently caught my attention is Deepness, developed by Marek and his team. Deepness makes it straightforward to use deep learning models in QGIS. You don't need specialised hardware like GPUs, and it offers a range of pre-trained models through a model zoo. As a long-time QGIS user, I was thrilled to discover Deepness, and I believe it has the potential to make deep learning much more accessible to geospatial practitioners without deep learning expertise. Marek shared some fascinating examples of how the plugin is being used, and discussed the growing community around it.

* 📺 Demo video showcasing Deepness in action
* 📺 Video of this conversation on YouTube
* 👤 Marek on LinkedIn
* 🖥️ PUT Vision Lab
* 📖 Deepness documentation
* 🖥️ Deepness Github page

Bio: Marek Kraft is an assistant professor at the Poznań University of Technology (PUT), where he leads the PUT Computer Vision Lab. The lab focuses on developing intelligent algorithms for extracting meaningful information from images, videos, and signals. This work has applications across diverse fields, including Earth observation, agriculture, and robotics (including space robotics). Kraft's current research involves close-range remote sensing image analysis, specialising in small object detection for environmental monitoring. He has also collaborated on European Space Agency projects aimed at extraterrestrial rover navigation and autonomy, making use of his knowledge of embedded systems. His research has led to over 80 publications, several patents, and a history of securing competitive research grants. Kraft is a member of IEEE and ACM.
Jul 17, 2024 • 19min

The FLAIR land cover mapping challenge

In this episode, I caught up with Nicolas Gonthier to learn about the FLAIR land cover mapping challenge. In this challenge, 20 cm resolution aerial imagery was used to create high-quality annotations. This data was paired with a time series of medium-resolution Sentinel-2 images to create a rich, multidimensional dataset. Participants in the challenge were able to surpass the baseline solution by 10 points in the target metric, representing a significant step forward in land cover classification capabilities. The dataset is now being expanded to cover a larger area and incorporate additional imaging modalities, which have been shown to improve performance on this task. Nicolas also provided important context about the objectives of the organisation running this challenge, such as the need to balance model performance with processing costs.

* 🖥️ FLAIR website
* 🖥️ Page on the objectives of FLAIR
* 📖 The NeurIPS paper about FLAIR
* 🤗 IGN on HuggingFace
* 🖥️ IGN datahub
* 👤 Nicolas on LinkedIn
* 📺 Video of this conversation on YouTube

Bio: Nicolas Gonthier is an R&D project manager in the innovation team at IGN, the French National Institute of Geographical and Forest Information. He received an MSc in data science from ISAE-Supaero in 2017 and a PhD in computer vision from Université Paris-Saclay - Télécom Paris in 2021. His work focuses on deep learning for Earth observation (land cover segmentation, change detection, etc.) and computer vision for geospatial data. He participates in various research and innovation projects.
Jul 4, 2024 • 16min

Meta-learning with Meteor

Expert Marc Rußwurm discusses meta-learning with Meteor, showcasing its few-shot learning potential in remote sensing tasks such as deforestation monitoring and change detection. He explores fine-tuning with minimal examples and the future of this approach in machine learning and remote sensing.
May 24, 2024 • 26min

Uncertainty Quantification for Neural Networks with Pytorch Lightning UQ Box

In this episode, I caught up with Nils Lehmann to learn about uncertainty quantification for neural networks. The conversation begins with a discussion of Bayesian neural networks and their ability to quantify the uncertainty of their predictions. Unlike regular deterministic neural networks, Bayesian neural networks offer a more principled method for providing predictions with a measure of confidence. Nils then introduces the Pytorch Lightning UQ Box project on GitHub, a tool that enables experimentation with a variety of uncertainty quantification (UQ) techniques for neural networks. Model interpretability is a crucial topic, essential for providing transparency to end users of machine learning models. The video of this conversation is also available on YouTube.

* Nils's website
* Lightning UQ Box on Github
* Further reading: A survey of uncertainty in deep neural networks

Bio: Nils Lehmann is a PhD student at the Technical University of Munich (TUM), supervised by Jonathan Bamber and Xiaoxiang Zhu, working on uncertainty quantification for sea-level rise. More broadly, his interests lie in Bayesian deep learning, uncertainty quantification, and generative modelling for Earth observation data. He is also passionate about open-source software contributions and is a maintainer of the Torchgeo package.
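The core idea behind these UQ techniques can be sketched in a few lines: run several stochastic forward passes (as in MC-dropout or a Bayesian posterior), then report their mean as the prediction and their spread as the uncertainty. This is a toy illustration of the principle only, not the Lightning UQ Box API, and the sample values are made up:

```python
import statistics

def predictive_summary(samples):
    """Summarise stochastic forward passes into a mean prediction
    and a standard deviation that serves as an uncertainty estimate."""
    mean = statistics.fmean(samples)
    std = statistics.stdev(samples)
    return mean, std

# e.g. five stochastic forward passes for one pixel's predicted probability
samples = [0.62, 0.58, 0.65, 0.60, 0.55]
mean, std = predictive_summary(samples)
# a high std relative to the mean would flag this prediction as uncertain
```

In a real Bayesian neural network each "sample" comes from a different draw of the network weights, but the summary step is the same.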
May 7, 2024 • 19min

Field boundary detection with Segment Anything

In this episode, I caught up with Samuel Bancroft to learn about segmenting field boundaries using Segment Anything, aka SAM. SAM is a foundation model for vision released by Meta, which is capable of zero-shot segmentation. However, there are many open questions about how to make use of SAM with remote sensing imagery. In this conversation, Samuel describes how he used SAM to perform segmentation of field boundaries using Sentinel-2 imagery over the UK. His best results were obtained not by fine-tuning SAM, but by carefully pre-processing a time series of images into HSV colour space and using SAM without any modifications. This is a surprising result, and this kind of approach significantly reduces the amount of work necessary to develop useful remote sensing applications utilising SAM. You can view the recording of this conversation on YouTube.

- Samuel on LinkedIn
- https://github.com/Spiruel/UKFields

Bio: Sam Bancroft is a final-year PhD student at the University of Leeds. He is assessing future food production using satellite data and machine learning. This involves exploring new self- and semi-supervised deep learning approaches that help in producing more reliable and scalable crop type maps for major crops worldwide. He is a keen supporter of democratising access to models and datasets in Earth observation and machine learning.
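The colour-space step mentioned above can be sketched with the standard library: convert an RGB composite (however the time series was aggregated, which is detailed in the UKFields repo) into HSV before handing it to SAM. The reflectance values here are made up for illustration, and this is not Samuel's exact pipeline:

```python
import colorsys

def rgb_composite_to_hsv(pixels):
    """Convert per-pixel RGB values (scaled to the 0-1 range) into
    HSV tuples, the colour space used before prompting SAM."""
    return [colorsys.rgb_to_hsv(r, g, b) for r, g, b in pixels]

# hypothetical Sentinel-2-derived RGB values for two pixels:
# a vegetated field pixel and a bright bare-soil pixel
pixels = [(0.2, 0.4, 0.1), (0.8, 0.8, 0.7)]
hsv = rgb_composite_to_hsv(pixels)
# each element is a (hue, saturation, value) tuple in the 0-1 range
```

In practice the conversion would be applied to whole arrays (e.g. with a vectorised library), but the transformation per pixel is exactly this.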
Apr 26, 2024 • 21min

Interpretable Deep Learning

In this episode, I caught up with Yotam Azriel to learn about interpretable deep learning. Deep learning models are often criticised as "black boxes" due to their complex architectures and large number of parameters. Model interpretability is crucial as it enables stakeholders to make informed decisions based on insights into how predictions were made. I think this is an important topic, and I learned a lot about the sophisticated techniques and engineering required to develop a platform for model interpretability. You can also view the video of this recording on YouTube.

* tensorleap.ai
* Yotam on LinkedIn

Bio: Yotam is an expert in machine and deep learning, with ten years of experience in these fields. He has been involved in massive military and government development projects, as well as with startups. Yotam has developed and led AI projects from research to production, and he also acts as a professional consultant to companies developing AI. His expertise includes image and video recognition, NLP, algo-trading, and signal analysis. Yotam is an autodidact with strong leadership qualities and great communication skills.
Apr 19, 2024 • 20min

Earthquake detection with Sentinel-1

In this episode, I caught up with Daniele Rege Cambrin to learn about earthquake detection with Sentinel-1 (SAR) images. Daniele has a key role in organising a new competition on this task, SMAC: the Seismic Monitoring and Analysis Challenge. The topics covered include the logistics of organising this competition, and the lessons Daniele learned from organising a previous one. You can also view the recording of this discussion on YouTube.

- Daniele on LinkedIn
- Competition website

Bio: Daniele Rege Cambrin is currently pursuing his PhD, and his research interests lie in deep learning. He is particularly interested in finding efficient and scalable solutions in areas such as remote sensing, computer vision, and natural language processing. Additionally, he has a keen interest in game development, and has worked on two machine-learning competitions related to change detection.
Mar 19, 2024 • 24min

Machine learning with SAR at ASTERRA

In this episode, Robin catches up with Inon Sharony to learn about the fascinating world of machine learning with SAR imagery. The unique attributes of SAR imagery, such as its intensity, phase, and polarisation, provide rich information for deep learning models to learn features from. The discussion covers the innovative applications ASTERRA is developing, and the nuances of machine learning with SAR imagery. The video of this episode is available on YouTube.

* https://asterra.io/
* https://www.linkedin.com/in/inonsharony/

Bio: Inon Sharony is the Head of AI at ASTERRA, where he is responsible for pushing boundaries in the field of deep learning for Earth observation. Sharony brings a decade of experience leading development of cutting-edge AI technology that meets real-world business and product needs. His previous roles include Algorithm Group Manager at Rail Vision Ltd and R&D Group Lead & Head of Automotive Intelligence at L4B Software. He was trained to PhD level in Chemical Physics at Tel Aviv University and combines his extensive academic background in physics with his hands-on experience in machine learning to develop strategic AI solutions for ASTERRA.
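The "intensity" and "phase" attributes mentioned above fall directly out of the complex-valued SAR sample: a single-look complex (SLC) pixel is a complex number whose squared magnitude is the intensity and whose argument is the phase. A minimal sketch with an illustrative value (not ASTERRA's pipeline):

```python
import cmath

def sar_intensity_phase(z):
    """Decompose a single-look complex (SLC) SAR sample:
    intensity is the squared magnitude, phase is the argument in radians."""
    return abs(z) ** 2, cmath.phase(z)

# a made-up complex sample for one pixel
intensity, phase = sar_intensity_phase(3 + 4j)
```

Deep learning pipelines typically feed models the intensity (often log-scaled), while the phase is what interferometric techniques exploit.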
Mar 7, 2024 • 26min

Major TOM: Expandable EO Datasets

In this episode, Robin catches up with Alistair Francis and Mikolaj Czerkawski to learn about Major TOM, a significant new public dataset of Sentinel-2 imagery. Noteworthy for its immense size at 45 TB, Major TOM also introduces a set of standards for dataset filtering and integration with other datasets. Their aim in releasing this dataset is to foster a community-centred ecosystem of datasets, open to bias evaluation and adaptable to new domains and sensors. The potential of Major TOM to spur innovation in our field is truly exciting. Note you can also view the video of this recording on YouTube; the video also includes a demonstration of accessing the dataset and a walkthrough of the associated Jupyter notebooks.

* Dataset on HuggingFace
* Paper

Bio: Alistair Francis is a Research Fellow at the European Space Agency's Φ-lab in Frascati, Italy. Having studied for his PhD at the Mullard Space Science Laboratory, UCL, his research is focused on image analysis problems in remote sensing, using a variety of supervised, self-supervised and unsupervised approaches to tackle problems such as cloud masking, crater detection and land use mapping. Through this work, he has been involved in the creation of several public datasets for both Earth observation and planetary science.

Mikolaj Czerkawski is a Research Fellow at the European Space Agency's Φ-lab in Frascati, Italy. He received the BEng degree in electronic and electrical engineering in 2019 from the University of Strathclyde in Glasgow, United Kingdom, and the PhD degree in 2023 at the same university, specialising in applications of computer vision to Earth observation data. His research interests include image synthesis, generative models, and use cases involving restoration tasks of satellite imagery. Furthermore, he is a keen supporter of and contributor to open-access and open-source models and datasets in the domain of AI and Earth observation.
