
THAT BUSINESS OF MEANING Podcast: Sam Gregory on Deepfakes & Human Rights
Sam Gregory is the Executive Director of WITNESS, the global human rights group using video and technology to defend rights. A human rights technologist and media authenticity expert, he has led innovation on deepfakes and generative AI, testified before US Congress, and received the Peabody Global Impact Award for WITNESS’s work.
I start all these conversations with the same question, which I borrowed from a friend of mine. She helps people tell their stories. And it’s a big, beautiful question, which is why I use it. But because it’s big and beautiful, I kind of over-explain it — the way that I’m doing right now. And so before I ask it, I want you to know that you’re in total control, and you can answer or not answer any way that you want to.
And the question is: Where do you come from? And again, you’re in total control.
That’s a great question. There’s so many ways you could answer that, I guess. I’ll give maybe two answers to it.
Where do I come from? So, I’m a transplant to the US who grew up in the UK, as part of a family that had also moved from somewhere else. So I’ve continued the evolution of my family moving from Europe to the UK to the US. So that’s one way of thinking about me — as someone who, at this point in my life, has spent exactly half my life outside the US and half my life in the US, as of this year — but still feels very much like someone from outside the US.
The second part of it — where do I come from — it’s interesting because I’ve also spent all of those 25 years, or pretty much most of those 25 years I’ve lived in the US, focused on the same sort of issues. So, like, when you talk about where I come from, it sort of comes out of, like, an endless kind of working around of what it means to trust what we see and hear — which has been what I’ve spent most of my working life thinking about. So, two answers to that question, I guess.
Yeah. Do you have a recollection — young Sam — what did you want to be when you grew up?
So, very young Sam wanted to be an archaeologist. I was fascinated by cave paintings and by medieval history. Subscribed to History Today magazine when I was a little kid, read Herodotus. So very young Sam was an archaeologist and a historian.
I think I wanted to be that till I was about 15. And then, around then, I discovered the two things that kind of have ended up linking together. One was kind of thinking about activism. So, young Sam, around the age of 15, I encountered the Tibet activist movement. The Dalai Lama spoke in an arena near where I lived, and I became part of the Tibet activist movement.
And then also, about the same time, I started thinking about kind of filmmaking, video making. And I think at the time, I wanted to become a documentary filmmaker — was how I saw the two combining. Like, documentary was the way you combine the two. So, archaeologist to historian, to activist and documentary filmmaker were probably the transitions.
That’s amazing. I mean, I can’t — we met many, many years ago, but I have in my mind this conception of Witness. And it seemed to me that Witness was doing a lot of work in far-flung places, right? I love the way you talk about small media. And I’ve been following your work, of course, but it seems like all that stuff is just — it’s now, it’s everywhere. The same questions are everywhere. Maybe I was being very, very naive at that time. Is that how Witness worked? Is that an accurate representation of the evolution of these questions around media and trust? Or not?
You know, we’ve always worked globally, right? I’d always worked also in the US, right? So, you know, human rights happen everywhere. Human rights violations happen everywhere. I think there’s always been this central question of: how do you trust what you see and hear? Which is — I remember we first met around trying to build tools to really imbue trust into media, to prove its authenticity. And those concerns — you just see them playing out in very different ways over the years.
I think one of the things that we were probably — we’d already learned it by the time you and I first met, which I think was probably a little over a decade ago — was that you had to think at multiple levels about how you defend our ability to believe what we see and hear. One is, of course, how does a human rights defender in a favela in Rio film the evidence of a brutal police raid, right? In a way that is trustworthy, ethical, protects the victim, stands up as evidence. But if you don’t do that within a system that’s then going to make sure it gets seen and trusted — and a system that’s everything from how a platform is built to how AI systems enable us to know what is AI and what is human — then that human rights defender on the front lines is fundamentally disadvantaged.
So one of the things we’ve really wrestled with in our work is how to bridge between that very direct experience of audiovisual storytelling and evidence gathering that a human rights defender has, and these systems that are being built — that can either fundamentally put them at an advantage or fundamentally disadvantage those truth tellers.
And how do you describe Witness to people, for those who aren’t familiar with the organization?
So Witness exists to enable the frontline defenders of human rights and the journalists who document what goes wrong and what’s needed, to show the visual truth of what is happening. Primarily they use video, and increasingly they use AI-mediated tools to show what is happening and to show what’s needed to change that.
Now, how we do that — we also operate at multiple levels. We often describe it as our “thousands, millions, and billions” layers. So, at one layer, we very directly support specific communities who are using video, increasingly using these AI-mediated audiovisual tools, to document war crimes, state violence, land grabs. We do that with thousands of people each year.
Then we try and share the best practices, the good practices that come out of that. What do you need to do to document the police during an election in the age of AI, when everything is going to get undermined by people’s claims that everything can be falsified? We work out how to turn that into guidance and tools that are available to millions of people.
And then the third layer is this billions layer — which is this idea that if you don’t build the fundamental infrastructure of tech and policy in a way that enables us to trust what we see and hear, then we’re fundamentally disadvantaged. An example of that — and it’s an evolution of work we did together — is that a lot of our work over the last five years has been about: how do we build the trust layer in AI that enables us to know the recipe, the mix of ingredients that are AI and human in the videos we see in our timelines? In an age when it’s increasingly hard to discern what’s true and what is synthetic — or what is real and what is synthetic.
Yeah, I’m so curious. So much of the language around this — it just seems like it’s emerging, or not even — it’s not firm yet. But I heard you use the word “synthetic.” I heard you talk about the trust layer. Can you just tell me, what is the trust layer? Where are we in the process of developing a trust layer?
Yeah. So the way we thought about trust — and it really is an evolution of working on this for 15 years — and I can sort of take you back through that evolution of how we built our understanding. So largely, what we think of as the trust layer around our current information environment is — more and more AI content is entering. And it’s sometimes purely AI. Sometimes it’s a mix of synthetic and authentic — synthetic and something that was created by humans in the real world. And sometimes there is purely authentic human content, right? It’s just something that was filmed on a cell phone in a protest and it’s not materially changed by AI. Right? Like, broadly speaking.
And in order to have a trust layer, you’ve got to be able to understand that mix of ingredients in every piece of content. Right? So you have to be able to know if something was made with AI, how it was, maybe what models were used. You need to know how it was edited. You need to understand how humans intervened.
Now, where that layer is at the moment is there’s a lot of work on the technical standards to build that out. An example is something like the C2PA standard. It’s called the Coalition for Content Provenance and Authenticity, which is a coalition of companies, including groups like Witness, that is trying to build a standard for how you show that recipe, so you can basically reveal the recipe of a piece of content. Right?
And that’s not just to show if something’s deceiving you. You might also reveal the recipe because you’re like, this was awesome and creative — how did they make this? You know? So that’s how I think about the trust layer.
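To make the “recipe” idea concrete, here is a minimal sketch of how an application might summarize the mix of AI and human ingredients recorded in a piece of content’s provenance metadata. The structure and field names below are illustrative assumptions for this conversation, not the actual C2PA data model.

```python
# Hypothetical, simplified "recipe" (provenance record) for one piece of content.
# Field names are illustrative only; the real C2PA specification defines its own
# manifest and assertion structure.
recipe = {
    "captured_with": "smartphone camera",        # human-captured source material
    "edits": ["trim", "color correction"],       # conventional, human-made edits
    "ai_ingredients": [
        {"action": "background generated", "model": "unspecified image model"},
    ],
}

def summarize_recipe(recipe: dict) -> str:
    """Return a short, human-readable summary of the AI/human mix."""
    ai_steps = recipe.get("ai_ingredients", [])
    if not ai_steps:
        return "No AI ingredients recorded; captured and edited by humans."
    actions = ", ".join(step["action"] for step in ai_steps)
    return f"Contains AI ingredients ({actions}) alongside human-captured material."

print(summarize_recipe(recipe))
# -> Contains AI ingredients (background generated) alongside human-captured material.
```

The point is simply that the recipe travels with the content and can be read on demand, whether the viewer’s motive is suspicion or curiosity.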
I think there are some lessons we learned from our work on this that inform how we think you do that right and you do it wrong. Right? So, for example, you and I first met around a tool we built called InformaCam with a group called The Guardian Project, which was a tool to create authenticatable data within videos. And we kept working on those tools with The Guardian Project, a mobile developer working in the activist space, over many years. But we increasingly started to think about how the values from those tools carry over into the mainstream — which is how we started getting involved in the AI trust layer.
And we also talked to the people we worked with. And I’ll give you a really concrete example of the types of things they said: it’s important to have this in the trust layer; it’s really important not to have this. So, for example, many of the people we worked with said, don’t build a trust layer based on identity. Right? So, we don’t want you to make it obligatory for you to say, “Sam made this,” just because you used an AI tool, or “I used an AI tool.” And the reasons they said that were to do with all the risks that we see for human rights defenders, journalists, and frankly, ordinary people — of surveillance, of privacy breaches, of the way governments are trying to track us, as well as corporations.
And so our understanding of how to build that trust layer for the internet also comes out of saying, actually, this doesn’t exist in some sort of place of perfection and an absence of misuse by governments, by states, by corporations, and by individuals. And so how you build this really matters.
What do you love about the work? You’ve been at it for a long time. Where’s the joy in it for you?
Joy comes from a bunch of places. Like, I love the community of people who work in this. I like my colleagues — that’s a good start, right? I also think there is something fundamentally affirming about working with frontline defenders. In the sense that this is really hard work — it is far harder than my work to be a frontline human rights defender — but people generally navigate that with a sense of purpose and optimism and realism, grounded in doing something that matters for their community. Right? And so when I’m working very closely with the people we work with, that is a source of joy.
I’ll also say that I actually find a lot of joy in the fact that, in our work, we’ve been able to be really sort of front-foot-forward on some issues that matter. Joy is an odd word to place there, but when you know that you’re doing the right things around something, and you see it having an impact — I draw joy out of that, or at least satisfaction. I don’t know if it’s joy, but satisfaction. So I think that’s a part of it.
The other thing that folks within my organization, Witness, know is that one of the things I really love doing is trying to make sense of the world and look ahead. Right? So a lot of my role over the last 20 years has been to say, where are we now, but where are things going? Not in an abstract way — not just guessing, not in a kind of detached, “futures” way — but like, if we look at what’s happening, if we understand existing problems and challenges, where can we look ahead to?
Over the last 15 years, I think I’ve engaged a number of times on that, and I get a lot of joy out of it. I spent a lot of time in the 2010s thinking about live streaming and how to think about live streaming in very different ways. And then, around 2017, we started working on deepfakes at a time when many people were saying that just feels like a very niche issue and probably not what a human rights group should focus on. There are bigger issues.
Part of it was — and I was driving this within the organization — a sense of how this brought together many of the issues that really matter to the success of our work: the issues around trust, the issues around how you create authentic or synthetic content, and also the issues of risks. Because the thing that was most visible in those early days of deepfakes in 2017, 2018 was that it was targeting women particularly, but also LGBTQ individuals, with these non-consensual, falsified sexual images — where someone’s face was placed in a sexual scene or on a naked body.
And so, again, it’s a weird word to say joy, but I draw satisfaction personally out of the work we can do — and I can do — to try and be proactive in being ahead of the ways these issues of trust and the ability to have human rights action and reliable information are shifting. And move an organization ahead of those things rather than reactively to them.
Yeah, I mean, I feel like you’re speaking a little bit to something — one of the reasons I reached out to you, I think, is because you’ve been present in that place during a period when people don’t really know why you’re doing what you’re doing. You know what I mean? Until we all catch up and people are like, oh, good Lord, this is what you were talking about. And I guess there’s a piece of me that feels like AI — I think of AI like a storm. It’s some sort of weather system that has arrived in a very strange and abrupt way. You know what I mean? It’s brought all these really strange phenomena with it, but you can kind of do a before and after with it.
Yeah. I agree.
You know what I mean? And I’m wondering — how do you conceptualize AI? And is that even the right question to talk about AI, or is deepfake your way of talking about AI?
Yeah, you know, it’s interesting. I’ll make two observations. One is, I think there’s been a growing swell that you could see the early signs of in 2017–18. And you could see — and I don’t know weather systems well enough to know if my analogy is correct, Peter, so we’re going to get a meteorologist critique of my description of it — but you could also see the things that were contributing to make it a bigger weather system or a bigger swell even back then.
We organized literally some of the first global meetings that brought together technologists and human rights defenders and companies. I know some of the first conversations where folks in companies met people to talk about deepfakes were actually in meetings we organized.
And some of the things we heard there — for example, from the human rights defenders — were things like, I don’t see this yet, but this sounds really similar to the issues I already face around the undermining of my evidence, the targeting of my leaders, the intersection of facial recognition and surveillance. Because they saw the way these were playing out.
So they were pointing to this little swell out in the water that we were telling them about. We were saying, this is going to be technically possible. And they were saying, we know how that swell will get bigger — from the societal context.
And so what I’ve been watching — and what we at Witness have actively been trying to intervene in over the last eight years — has been that swell growing. And our framing was always: prepare, don’t panic. There are very clear things we could be doing, in some sense to set up the flood walls — or build better flood management systems, or whatever our analogy is for this extreme weather event. There are things we can do that would make this both manageable and, in many ways, potentially positive.
And that sort of leads to how I think about this space. Although a lot of my work in the last seven or eight years has been around deepfakes — which I think people tend to think about as malicious or deceptive — we’re clearly entering what we talk about as an AI-mediated information ecosystem, where there’s just so much AI-generated information. And it’s competing, it’s supplementing, it’s creating new ways we communicate.
And as someone particularly who comes out of an audiovisual background, some of that is tremendously exciting. I love the way that AI video can be more accessible, easier to make, more translatable, more personalizable. There’s tremendous accessibility and storytelling potential in what’s happening with AI — as well as the negative consequences, as well as the underlying, fundamental problems we might worry about, like copyright theft and theft of artistic work, and all of those.
And so when I look at where we are now, I tend to think of it as: how do we adapt to a communication system where there’s more and more AI-generated information? We’re now in the sort of tsunami phase. And how I judge how we’re doing — and I’ve been quite critical in public in the last couple of months of where I think people have failed to do things they could have done to make sure that we had better flood walls, better flood defenses — is that you could see this coming seven or eight years ago. Right? This is not a surprise to folks who are close to this.
And there are things we could have done and things we could still do to make sure that we maximize the positive sides of this and find ways to adapt to the negative sides — or, in fact, reduce them or eliminate them. Right? So I think that’s the challenge now — to say that we don’t need to be passive around this. And we don’t need to be either binary — AI is bad or good. We need to take a very deliberate way of dealing with what is happening, and what we need to do in terms of safeguards, in order to channel it in the right way.
Yeah. I think in one of your talks, you talked about reality fatigue. Is that an idea that you’ve talked about — sort of being tired of having to... I mean, it’s just, we don’t — it’s so difficult to tell what is real.
Yeah. I think reality fatigue, and also just a general kind of corrosive fatigue about knowing what is real and what is synthetic, is something that’s been given a lot of sharpness in the last couple of months by the release of tools like Sora — OpenAI’s app-based approach to reality falsification, likeness appropriation.
The reason I point to that is it’s just a way in which we’ve made it very normalized to create things that are across a spectrum — from silly pratfalls to funny cat videos to slightly sinister exploitative videos to hateful videos to full-on deepfakes that are trying to deceive people.
And I think one of the things we’re all trying to calibrate in that landscape is: what is the impact of people constantly having to question not only the really important stuff, but so much of what they look at, and not trust the evidence of their eyes? And how corrosive is that?
So it becomes a problem that’s not only about the big deepfake — and there’s lots of work we do. We run this global rapid response mechanism on the big deepfakes that influence elections and things like that. But it’s also like: what is the overlap of people’s fatigue, and perhaps unwillingness to believe anything, because they’re too used to being deceived by videos in their regular timeline that appear to show reality but aren’t?
And what it does is reinforce something we’ve seen for probably four or five years with the one-off deepfakes. It’s very easy for people to plausibly deny reality and exercise something known as the liar’s dividend.
The liar’s dividend is the idea that the presence of this deceptive AI — these deepfakes — makes it much easier for someone in power to deny something real. They just say, it’s easy to falsify anything, so therefore this real footage could have been made with AI.
And so the prevalence of us all sitting in this fog of confusion also impacts the really critical stuff, because it allows people to exercise the liar’s dividend — to plausibly deny reality. And we already see that in our work. In our deepfakes rapid response force, about a third of the cases we get are cases where something is authentic, and people are trying to claim it’s AI. So two-thirds are AI where you’re trying to prove it’s AI, and one-third is authentic where you’re trying to prove it’s authentic because people are claiming it’s AI.
So you’ve got both sides of that dynamic, and that kind of reality fatigue — that corrosive doubt — has an effect. We still don’t quite know what it is yet, but it has an effect on the ability to dismiss the big stuff as well as the little stuff.
Yeah. It’s really — I find sometimes with this stuff, it’s hard to know what I’m actually talking about or thinking about — with the impact of these kinds of tools on how we communicate with each other. And I guess I’m thinking about my own experience living in a small town and how even social media made it very difficult for us to develop a kind of shared understanding about anything. You know what I mean? And so we have this continued fragmentation where we’re not really sharing anything. We’re all so isolated from each other, and ultimately the reality fatigue — it doesn’t even really matter if anything is true or real. You’re not really evaluating whether something is meant to be true — you’re really only evaluating it as to whether it either entertains you or...
I mean, I feel like there’s a total detachment from what we’re engaging with. But again, I’m thinking of a general person. I know you work with activists and people dealing with human rights abuses, so maybe my context...
No, that problem is — if we can’t trust the evidence of a human rights abuse, and just believe that it’s entirely a matter of opinion, or a matter of emotional affiliation — I think that’s incredibly damaging. And again, this is AI layered on top of what we already have.
Social media pre-existed AI — the algorithmic amplification of division, the echo chambering, the partisan divide that isn’t just about social media, it’s about far deeper economic and social ruptures. AI is layered on top of that.
The way it changes that, though, is — we’ve at least, in some sense, been able to have some contestation around: is this actually factual? Is this actually true, what we’re seeing and hearing? And in certain venues that really matters. We need to know whether something can hold up as evidence in a court. We need to know whether a government communication is real.
Once we move out of the social media realm, we start to get into a space where it really matters to be able to establish some shared basis of facts. And I think there’s something particular here — and obviously, I mainly engage with audiovisual AI, or the audiovisual manifestations, not, like, hallucinating text and stuff like that. There’s something profoundly challenging about not being able to trust the evidence of our eyes in a lot of settings where previously we might have thought we could.
So when I go to a Marvel movie or the latest Avengers movie — whenever they release the next one — I know in that context that I’m not watching reality. And it’s not like in the social media context I believe I’m watching reality, but I’m not having to constantly question, does the literal fabric of what I’m looking at — is that real or not?
We’re not cognitively designed to do that. We’re not cognitively designed to second-guess our visual cortex’s experience in every single visual interaction we have in the world. And so that worries me, because it takes us into a different place that isn’t purely about the existing contestation of facts or the fracture along partisan lines. It takes us into a place where we really can’t even look at something and know whether it is what it is.
And we may be doing that minute after minute in our social media timelines, in our information environments. And that’s where the absence of safeguards — the failure to put in those flood defenses — really matters. There are ways we could make that easier.
And going back to what I was saying about this trust layer — the reason to have that is so that you can, you know, see 15 videos in your timeline. The first five, you don’t care they’re AI — they’re funny. Like the cat jumping out of the baked loaf and running across the kitchen floor — I love that. I don’t need to know it’s AI. And if it’s not AI, I don’t care — it’s just funny.
But the sixth video that seems deceptive — I want to be able to dig down and know that AI was used there. Know that it isn’t a realistic representation of an event. I scroll through seven, eight, nine. The tenth video — I maybe need to look at the recipe again.
That ability for us to ask questions of our information environment, in order to know where AI is playing a role, is pretty critical. And that’s a safeguard that we’ve not yet generalized. We built the first parts of it, but we haven’t yet generalized it.
So although I feel this is reinforcing problems we already have in our information environment, I also think there are things we can do about it.
Yeah, yeah. And to return to the trust layer — what has to happen? What’s your vision for the next 10 years in terms of how we build the safeguard? And maybe there’s a question in here too about — where is there hope? Where do you see these safeguards or the trust layer being built, or evidence of us being able to create what we need to survive this ecosystem?
I think this is a case where technology and law and regulation fit together. Regulation’s obviously a dirty word in the US context right now — and challenging even in Europe. Even today, on the day of our conversation, the EU has just announced it’s essentially watering down its landmark AI legislation.
I think there’s a few things we need to do, and this is what they’d look like. One is: we need a robust foundation to know the mix of AI and human in the content and communication we see — that’s easily accessible, that we can look at when we want to, and that helps us as individuals. So we can look at something we find very creative, or something we think is deceptive, and not have to rely on just guessing.
At the moment, most people are just literally guessing that something is AI. They’re looking at it, looking for glitches — and that kind of forensic gaze doesn’t work. AI is getting better at producing images and video and audio, and just looking and listening hard doesn’t work.
We need a way of structurally building in a way to do that, which is probably some combination of rich metadata and ways to retain that in the information and make it super accessible to a user. And it’s interesting — that’s an area where there’s a lot of technical work happening. It just isn’t yet implemented across the internet, and it isn’t yet implemented in a way that continues to protect those key values like privacy and access that are fundamental to doing it right.
So that is totally doable. It could be the work of the next couple of years — it’s not a decade’s work. We just need the impetus there. And there are a number of places where law and regulation is pushing that. So that’s one foundation.
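As a rough illustration of what “super accessible to a user” could mean in practice, here is a minimal sketch of the decision a player or platform might make when showing a trust label. The labels and the shape of the metadata are assumptions for illustration, not an implemented standard; the key design choice, echoing the earlier point about privacy, is that missing metadata is reported as unknown rather than treated as proof of anything, and no creator identity is required.

```python
from typing import Optional

def trust_label(provenance: Optional[dict]) -> str:
    """Decide what a viewer is told about one item in their timeline.

    `provenance` is a hypothetical, already-parsed metadata record; None means
    the metadata was stripped in transit or never attached.
    """
    if provenance is None:
        # Absence of metadata is reported honestly, not treated as evidence of AI.
        return "No provenance available: origin unknown"
    if not provenance.get("signature_valid", False):
        return "Provenance present but could not be verified"
    if provenance.get("ai_ingredients"):
        return "Made or modified with AI (recipe available)"
    return "No AI ingredients recorded"

# Example: a clip whose metadata survived and records one AI step.
print(trust_label({"signature_valid": True, "ai_ingredients": ["voice cloned"]}))
```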
There’s another foundation that’s perhaps more relevant to a core constituency that I have — which is the frontline human rights defenders and the journalists. People will remove that recipe. They’ll try and find ways to pull it out or be deceptive. So you also need to be able to detect when you’re in really malicious and deceptive contexts. You need to use these AI detection tools that exist already.
People will be familiar with them — the most visible manifestations are things like going to “AI or Not” as a website or something like that. The problem with them at the moment is they don’t really work very well in the real world, and they don’t work well in most of the world.
So what I mean by that is: if you’re trying to deal with, for example, one of the cases we’d get in our deepfakes rapid response force — a piece of audio from Cambodia that’s low resolution or compressed, with someone speaking in Khmer — the detectors probably won’t work very well on that. Even if what you’re trying to prove there — and this is a real example of a case from the force — is a former premier demanding an assassination of someone. You’re trying to work out: is this real or is it falsified?
So we need to get those detection tools to actually work. So people can actually use them in real-world contexts to deal with the most high-profile cases.
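To illustrate why “just run a detector” breaks down on real-world material like that Cambodian audio, here is a conceptual sketch of a wrapper that abstains instead of guessing when a clip falls outside the conditions a detector was built for. The quality checks, thresholds, supported languages, and the detector interface are all hypothetical placeholders, not a description of any real tool.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class AudioClip:
    sample_rate_hz: int
    bitrate_kbps: int
    language: str

# Assumption for illustration: the detector was only trained on a few languages.
SUPPORTED_LANGUAGES = {"en", "es", "fr"}

def verdict(clip: AudioClip, detector: Callable[[AudioClip], float]) -> str:
    """Report a result only when the detector can plausibly be trusted."""
    # Heavy compression and re-encoding erase the artifacts detectors rely on.
    if clip.bitrate_kbps < 64 or clip.sample_rate_hz < 16_000:
        return "Inconclusive: audio too degraded for reliable detection"
    if clip.language not in SUPPORTED_LANGUAGES:
        return "Inconclusive: language outside the detector's training data"
    p_synthetic = detector(clip)  # probability the clip is synthetic
    if p_synthetic > 0.9:
        return f"Likely synthetic ({p_synthetic:.0%})"
    return f"No strong evidence of synthesis ({p_synthetic:.0%})"

# Example with a stand-in detector: low-bitrate Khmer audio comes back inconclusive.
print(verdict(AudioClip(8_000, 32, "km"), detector=lambda clip: 0.5))
```

Making detection “actually work” in the sense above would mean shrinking those inconclusive zones, so the tools hold up on the compressed, low-resolution, non-English material that real cases arrive in.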
Then I think there’s another set of things that feel really important — and this is a mix of law and policy — which is the easy likeness appropriation. You see it with Sora, the app, and also the nudification apps, where people are just dropping in their schoolmate or someone else’s face into an app that turns them into a nudified image.
There’s a whole set of problems happening around basically stealing people’s digital likeness — where we both need to make it easier for us to know when that’s happening, and we need much stronger legal safeguards that say: actually, it’s not okay to do this. It’s both morally and legally inappropriate to lose control of your digital likeness in a way that you don’t want.
So I think those are things that — it’s all against the backdrop of the fact that we’re doing this against monopoly AI power. So I think there’s also something here, which is the age-old story, or the story of the last decade, of: how do we put some controls on the platforms so that they are not just purely pumping out synthetic content to us, and have no obligations to think about how their algorithmic curation and amplification reinforces the deeper divides in our societies?
Yeah. I mean, I don’t follow the policy and the regulation side of things, but it strikes me that sometimes when I talk to people, they talk about AI as just another technology — and we should just sort of, you know, it’s a boon, because this is how we operate, and we kind of have to let it go. But I guess, what’s the temperature when it comes to regulating AI? And how does it feel different than other technologies or shifts in media that we’ve gone through? Just another?
Yeah, I think the two — like, definitely the way that the AI companies talk about AI is that it’s not just another thing. It is transformational. It’s the dawn of a new age, etc., etc. And there’s some truth in that. It is a fundamental shift in how we create information, communicate, and may do things in the future.
I think that’s part of the way they’ve also been stifling regulatory approaches — by saying this is so completely different, it’s got so much potential, that we can’t regulate it. We can’t put guardrails on because it’s going to stifle innovation. It’s going to stifle something that’s completely new.
You’re seeing that play out in both the US — where there’s really no meaningful federal regulation on this, though there is state regulation in California — and in places like Europe, where they’re trying to work out how they navigate between innovation and these guardrails, including in this EU AI Act that just got essentially watered down with some announcements just today.
As we respond to it, and as I respond to this from the position that I occupy, I think it’s useful to think: this is not just the same as some other waves before, but it’s not super different, either. You can navigate those.
When it goes back to the conversations I’ve had with people we work with — and I’m primarily thinking about the information environment in AI — it’s, in some sense, a subset of the AI universe, where they’re saying: look, issues of privacy, trust, who gets heard, who gets listened to, whose information gets seen — these are not novel. These are existing issues.
Putting in safeguards that enable transparency for how something was made, that prevent people doing terrible things like nudifying their neighbor — these are not things that are inhibiting innovation. They’re actually creating an environment where people can trust the information they see and hear, where we don’t do things that are patently illegal and should be illegal. And frankly, from a business perspective, they’re probably better.
I think a climate in which people don’t understand if what they’re seeing and hearing is real is not good for a business environment. It’s not good for human rights documentation. It’s not good for journalism. So I push back on the idea that we can’t do anything — it’s the dawn of a new age — because A) we can see the corollaries to previous and existing patterns, and B) there are things we can do that would actually reinforce the innovation side of it, because they reinforce basic human values that we care about, like interpersonal decency and transparency about what we’re seeing and hearing — things that matter to ordinary people, but also matter to the business sector.
So I think we can coexist with both of those and navigate a path that recognizes that.
How would you articulate — very often these things are framed as regulation versus innovation, technology versus values — and you’re always in opposition to the thing that’s happening, right? But how would you articulate: what’s your affirmative vision of a business or of a modern information ecosystem? What’s the utopian vision? What’s the best-case scenario?
The utopic vision is that we all have a greater capacity to create and share and access information in the ways we want to create it, in the ways we want to access it — in a way that is super accessible, at a cost point that is valuable to us, and in a way that we’re not reliant on others to do that for us.
So that’s the top layer — the ability to create, produce, share. And that we’ve built that on a foundation that makes sure that we can trust that information, we can query it, and we can be creators and sharers in a way that protects our privacy, that doesn’t lead to being weaponized.
To make that really concrete, from the world I work in: I want every human rights defender I work with, every journalist, to be able to — at their fingertips — edit real video much more easily, translate it into the languages they want, protect the identities of the people they want in it. Just trivially easy edits and changes using the power of AI.
I want them to be able to use synthetic video when it matters, as a way to tell stories they otherwise wouldn’t. I want them to be able to personalize those stories for the people who want to see them, in a way that matters to them.
I want their information to be going to LLMs in a way that — when someone wants to find out about land rights in Colombia — they can ask for a video, a podcast, a PDF. All the things you can do with multimodal AI. But when you do that, it also makes sure that it hasn’t completely lost the voice, the point of view, the agency of the source material.
So when you see that video, it hasn’t just obscured the fact that all that information came from a critical human rights defender working in a community in Colombia. It’s not anonymous information. It comes from a place, a source, a point of view.
So that’s a vision of access to information. And then you have to have that layer under it — which is, when you see a video online, you can know that it was made with AI, you can know it came from a human. You can do that in a reliable way that enables you to navigate a more complex, more rich information environment without doubting the evidence of your eyes, without feeling that reality fatigue, without saying: I just live in a morass of information and I have no idea what is real and what is false, what is authentic and what is synthetic.
Yeah, beautiful. Well, I want to thank you so much for joining me and sharing the work you do. I mean, I really — more than ever — really appreciate knowing that you’re there doing the work that you’re doing and at Witness. So thank you so much, Sam.
Yeah, I really appreciate it. Good to talk again.
All right.
