Family Office AI has become a dominant theme at the dinners where families and their advisors chart a course for incorporating new technologies. As wealthy families grapple with the risks and opportunities of AI, institutional rigor and structure haven't kept pace with the often informal world of family offices. This is a mistake. High-end governance must play a part in the family office AI space.
https://youtu.be/n_KHB_gOc9M
We're going to be talking to TIM PLUNKETT, the founder and managing partner of Plunkett PLLC. He advises families on structure, governance, and the development of procedures around these exciting but potentially dangerous concepts. We'll cover best practices for family offices as they deal with the artificial intelligence theme.
Family Office AI
"When looking at AI adoption in family offices it is important to remain true to the culture, operations, reputation and underlying trust among those who built the Office in the first instance. Remain true to your principles and don't get distracted by the new toys." - Tim Plunkett
Family Office AI Transcript
Frazer Rice (00:01) Welcome aboard, Tim.
Tim Plunkett (00:03) Hey Frazer, how are you doing? Thanks for having me.
Frazer Rice (00:05) Doing terrific. We're in the midst of Trump tariff season, so it's a little crazy for everybody, I'm sure. We're going to talk a little bit about family offices and artificial intelligence. Both themes are big unto themselves, but how family offices integrate with the space is an area where they can be very informal, and...
Tim Plunkett (00:11) We're blessed.
Frazer Rice (00:33) Getting some institutional rigor around them is important. To that end, you have a lot of broad experience advising businesses from a governance perspective. Maybe describe your firm for a few minutes and what you do.
Tim Plunkett (00:47) Sure, thanks again. I have three pillars in my firm. I can only do certain things well, so I try to limit what I do. My training is as a litigator, so I consistently think of things as having to be explained in front of a judge. That helps with a lot of risk analysis, which goes hand in hand with AI and governance.
The second part is government relations work, which means working across disciplines and organizations, advocating for certain outcomes and creating business environments that are efficient, compliant, and ethical. Again, all of that ties back to the same foundations in the world of AI. The third component is the AI work I do, which came out of working in data privacy and security over the last 10 years; the natural flow was to move toward this sector. Today my practice is mostly helping companies implement strategies that are fair, equitable, and just, but also compliant with the law and keeping pace with technological change, which is moving at breakneck speed. It's an incredible place to be right now, with a world of opportunity in front of all of us. It's very exciting.
Frazer Rice (01:57) So when you're canvassing companies and the families that are invested in them, what are the use cases that you're seeing?
Tim Plunkett (02:04) Use cases are kind of all over the place. If you look at it in terms of how you define the practices, there are operational use cases: document intelligence and automation, expense tracking and anomaly detection, dashboard creation for organizational purposes.

You have investment use cases: deal sourcing, portfolio risk management, alternative data sourcing and analysis. And you have governance use cases: succession planning, philanthropic impact analysis.

So there are a lot of different use cases out there, and each one has lots of different levels beneath it. But back-office integration in the family office space varies, like you said.

Some offices are in a single jurisdiction, some are in multiple jurisdictions; some are international, some are local; some are really formalized, and some are not. So you have basically two buckets that everything fits into.

One is AI for adoption and operational efficiency, and one is AI for investment. Those are viewed and treated very differently, and the two overlap, obviously. But when you get down to the fundamentals of building rigor around these things, and what that institutional rigor looks like, that's where everything emanates from.
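As a concrete illustration of the operational bucket Tim describes, here is a minimal sketch of expense anomaly detection, assuming a family office can export its ledger as a CSV with date, category, and amount columns. The column names, threshold, and MAD-based scoring are illustrative assumptions, not anything from the conversation.

```python
# Minimal sketch of expense anomaly detection for a family office ledger.
# Assumes a hypothetical CSV export with columns: date, category, amount.
import pandas as pd

def flag_anomalies(csv_path: str, threshold: float = 3.5) -> pd.DataFrame:
    """Return ledger rows whose amount is an outlier within its category."""
    df = pd.read_csv(csv_path, parse_dates=["date"])

    def robust_z(amounts: pd.Series) -> pd.Series:
        # Median/MAD is less distorted by the very outliers we are hunting
        # than a mean/std z-score would be.
        med = amounts.median()
        mad = (amounts - med).abs().median() or 1e-9  # guard against zero MAD
        return 0.6745 * (amounts - med) / mad         # conventional MAD scaling

    df["score"] = df.groupby("category")["amount"].transform(robust_z)
    return df[df["score"].abs() > threshold].sort_values(
        "score", key=lambda s: s.abs(), ascending=False
    )

if __name__ == "__main__":
    # Hypothetical export path; flagged rows would go to a human review queue.
    print(flag_anomalies("family_office_ledger.csv"))
```

Note that the sketch only flags transactions for human review rather than acting on them, which matches the accountability theme that runs through the rest of the conversation.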
Risks
Frazer Rice (03:31) Got it. It's difficult to put a roadmap around this. It's all evolving so quickly, and just when you think you've got everything in mind, some new use case pops up. As a litigator, and as someone advising companies and families on governance so that they stay safe from the various risks out there, how do you group those risks?
Tim Plunkett (03:54) Well, there are compliance risks. You have regulatory risk, you have reputational risks, operational risks. Then you have the obvious investment risks: due diligence, things like that. But the fundamental thing about family offices is that they're about family, and about protecting that asset more than anything else, in my mind at least.

So what are the risks that go with that? Family reputation risks, which you want to mitigate as much as possible. There are obvious data risks and security risks: once you start pulling data together in one place, it becomes a more attractive target.

Family offices also make attractive targets because people seem to assume they don't have strong data governance or security strategies, or that their security is decentralized. And there are all kinds of risks inside the office as well, between family members, between generations.

One generation looks at technology one way, and another generation may look at it differently. That creates risk from an investment perspective and from an operational perspective. The world is fraught with risks, but for pretty much every risk there's a solution. A lot of that comes down to building the governance strategy properly from day one: focusing on what your foundational documents should look like, your AI governance policy, which is, for lack of a better term, your constitution. That's what guides you.
Frazer Rice (05:32) So a client walks into your office. They've got some level of complexity, an interest in the space, and wealth and assets involved. Maybe take us through your process: how do you get them to get their arms around the issue and then put structure in place?
Tim Plunkett (05:50) I think the first thing to do in talking to anybody is to find some common ground. There are certain principles that guide people: decent people, professionals who hold licenses or have certain mandates to do certain things.
Tim Plunkett (06:08) I think that when you're looking at building the bridge, the first thing you have to establish is trust. And trust is something that is in the background of every decision that's made in the world of AI.
So once you've established a level of trust, you can start talking philosophically about what the family is looking for, whether from an investment perspective or a philanthropic perspective. You have to understand what the family is all about, what the family office is all about, and their mission.

Before you can start applying legal tools or technological tools or anything else, you have to have that trust at the beginning.
Once you do that, you start to build your frameworks, your legal frameworks, and, as I said, your AI governance policy becomes your constitution. The good news is that there's so much information available now on how to set up governance programs.

It's not that hard. Whether you're a small office or a big office, foreign or domestic, whatever, there are frameworks for everything. But at the foundational level, the first thing is to get the trust together and to get the AI governance policy document together. If you go down the line from there, we can get into what the specific guardrails are and what you're trying to accomplish with them.
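To make the "constitution" idea concrete, here is a minimal sketch of what the skeleton of an AI governance policy might look like if a family office encoded it in machine-readable form. The field names, roles, and example entries are hypothetical illustrations, not Plunkett's framework.

```python
# Minimal sketch of a machine-readable AI governance policy skeleton.
# All fields and example values are hypothetical illustrations.
from dataclasses import dataclass, field

@dataclass
class UseCasePolicy:
    name: str                        # e.g. "document intelligence"
    bucket: str                      # "operational" or "investment"
    data_classes_allowed: list[str]  # what data the tool may touch
    human_review_required: bool      # no autonomous action without sign-off
    owner: str                       # an accountable role, not a vendor

@dataclass
class AIGovernancePolicy:
    mission_statement: str
    approved_use_cases: list[UseCasePolicy] = field(default_factory=list)
    prohibited_uses: list[str] = field(default_factory=list)
    review_cadence_months: int = 6   # the policy is revisited, not set-and-forget

policy = AIGovernancePolicy(
    mission_statement="Preserve family trust, reputation, and capital.",
    approved_use_cases=[
        UseCasePolicy(
            name="document intelligence",
            bucket="operational",
            data_classes_allowed=["internal-nonconfidential"],
            human_review_required=True,
            owner="chief of staff",
        ),
    ],
    prohibited_uses=["uploading family PII to public AI tools"],
)
```

Attaching an accountable owner and a human-review flag to every approved use case mirrors the accountability structure Tim turns to next.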
Frazer Rice (07:26) Sure, let's do that. One of the things I think about when we go from paper to operation is that, in my world, the trusts or the wills are often well drafted and stand up to lots of different things; it's the people administering them who are the weakness. When you're thinking about the guardrails and the legal structures, how are you advising these families on staffing?
Tim Plunkett (07:45) Right.
Okay, so staffing. Again, this is about knowing your people. It's about knowing what you have, doing an inventory of what's inside your organization, who's good at what. And there are legal frameworks that you put around those based on what people are good at and what they aren't. So when you're looking at staffing in particular, you basically want to build a structure where there's accountability.
There are expectations in the office for returns on investments and things like that. And there are also expectations about how these places behave and how they're viewed publicly.
So you have to define the roles and responsibilities very clearly. You're going to want an executive leadership team to begin with; that's a strategic oversight role. And you're going to have ethics officers or maybe an ethics committee...