In this episode of The Visibility Brief, Yext SVP of Marketing Rebecca Colwell sits down with Bill Simpson, Director of Value Consulting at Yext, to explore a question many marketers are still working through: how far is too far when it comes to personalization?
As AI changes how people interact with technology, customers are sharing more personal information than ever before — often without realizing it. But why does AI feel so different from traditional search, and what's making people more willing to open up?
At the same time, brands are under pressure to move faster, adopt new tools, and deliver more personalized experiences. But speed introduces risk. From data exposure to inaccurate AI responses, the line between helpful and harmful is thin — and easy to cross.
Drawing on his years of experience working with brands, Bill shares insights on how marketers can move forward thoughtfully — balancing innovation with responsibility.
The episode breaks down:
How conversational interfaces create a sense of trust
Where personalization starts to feel invasive
Why transparency is the new competitive advantage
The hidden risks in everyday AI workflows
How to safely pilot AI inside your organization
Why human oversight still matters
If you're a marketing leader trying to move quickly with AI while protecting your customers and your brand, this episode will help you understand the real tradeoffs — and how to navigate them with confidence.
Transcript
00:00:00.240 — 00:00:38.280 If you're leading a marketing team, chances are you've got AI inside your workflows, and you might not even know all the places it might be touching. And that is where things can get a little tricky, because the same tools that are helping you move faster can also be creating risk, especially if you're dealing with customer data or confidential company information. Today, to unpack that, I'm joined by Bill Simpson, who is our Director of Value Consulting. He is going to help us unpack how to minimize the risk of AI adoption in a practical way. Let's get into it.
00:00:46.080 — 00:01:29.850 Hi, Bill, thank you so much for joining us today. Oh, it's a pleasure to be here. Thank you, Rebecca. So before we dive in, I would love for our listeners to get a better understanding of what you do in your role at Yext. Can you tell us a little more about that? Sure. So the official title is Director of Value Consulting, and that's a long way of explaining that I used to do compliance in my previous life, and now I use that experience to help our current clients better understand the ways they can leverage our tech platform to make their jobs a little easier. I often think of compliance as rules: creating rules, following rules, making sure everyone else is following the rules. As a kid, were you a rule follower?
00:01:30.970 — 00:01:49.930 Unfortunately for my current job, no. I was not a rule follower at all. I was kind of the class clown, and that led me into a bit of an anti-authoritarian bent. 00:01:51.050 — 00:04:36.030 I was a bit of a punk rock kid. So, you know, I was getting in mosh pits and coming out with broken bones and stuff. It was a good time. But it is not consistent with being a compliance officer, I'll say that. I love that. Well, I think you have to understand all the different ways rules can be broken before you can be really effective at compliance. So maybe we just look at it as going undercover.
That's a great point. I love that. Well, I would love to start our conversation today talking about trust. Many people who are using LLMs right now are divulging a lot of really personal information: health care concerns, financial investment questions. That probably seems like more than they would say to Google, for example. I'm curious, from your perspective, does this indicate a bigger shift in consumer attitudes? Do we care less about privacy than we did before, or is there just something different about the LLM experience that's driving this behavior?
I think you really hit the nail on the head there, because it does feel different. I was talking with somebody just recently at dinner, and they mentioned that they're using ChatGPT almost like a therapist. I think they were being facetious, but not fully. There's a lot of personal back and forth when they're having an AI conversation, and that's just not the way we used to use Google. I think it's because it feels like there's another person on the other side, and that allows us to be a little more of a person with that technology.
To your point, I feel like that's when things start to get a little creepy, because if the AI model then repeats something back to you that you either don't remember telling it, or that it found out on its own, or makes some sort of insinuation, that's where we get into the difference between what is personalization and what is a breach of data privacy. That makes a lot of sense. I mean, with Google, it doesn't seem to remember as much. Perhaps it does, but it doesn't make me feel like it's remembered things. So, yeah, that line between creepy and helpful can be really thin. I'm curious: what can brands do to help people feel more comfortable?
00:04:37.670 — 00:06:44.040 This is really where things become a question of competitive advantage, because what some firms understand, and others don't, is that it's more about transparency in how the data is used, so that you can give the people whose data it is a feeling of control over it. That's what really differentiates.
Back in school, there was a story, and I'm sure lots of people have heard about this, where a retailer was sending pregnancy-related offers to a person before she had even told anybody she was pregnant. That sticks in my mind when we talk about this stuff, because that was before AI; it was just predictive modeling. So not much has changed in the way firms make inferences about people's shopping habits or who they are. It's just that now we're seeing it through the lens of an LLM, which makes it feel different. Right.
I remember the example you're referring to, and I believe the inference was that this person was buying lotions and products similar to what other people at that stage of pregnancy might be buying. So it was almost a lookalike-audience inference, which is very different from "I've been gathering information about you specifically as a consumer and have come to a conclusion about you." It's really interesting. I really like what you said, though: it's not necessarily the data collection, it's the loss of control that concerns people. So is there a way of putting people back in control of that data, or is it more about disclosing what's being done with it? I think the best way we've seen this work is through disclosure. There are some surveys that came out that said that roughly 86% of people
00:06:45.200 — 00:12:04.210 say that transparency about how their data is going to be used is more important than the personalization itself. To me, that speaks to the fact that disclosure of how data is being used and gathered is really important. You may have seen that sometimes you can click on a thing that says, "Why am I seeing this ad?" That disclosure lets you understand that maybe you made a search before, and the inference from that led to a different collection of data. Having some transparency into that is really important. What I find, however, is that oftentimes when you click that, the disclosure just says something like "your data may be used to improve search results," which is not a very good disclosure of how data is being used. So there's a very different way of giving that information out that lends itself to transparency of use.
I'll put myself out there as an example, because I'm using AI all the time to help streamline some of my internal workflows. As a marketing department, we're also using it broadly for workflow efficiencies across the board, and to mine intelligence to make more effective decisions.
I think in some ways we assume the risk is obvious, like never putting confidential information into an LLM, but I'm guessing there are more subtle things we're doing, or that marketers in general might be doing, that are a little riskier. Could you give me an example of a use case that might feel harmless but actually carries real risk?
One thing that really jumps to mind for me is the example of a chatbot. When we put together an external-facing chatbot that's going to be talking to customers and providing help in some way, support issues or customer service issues, there may be certain systems feeding into it. You might have a CRM feeding data into it, or documentation about brand standards. All of that is well and good, but it requires a tremendous amount of maintenance. The example I think of is a case recently in Canada where an Air Canada chatbot was feeding information to a customer who was asking questions, and it hallucinated a promotion. When it hallucinated that promotion, the customer said, hey, that sounds great. I can't remember the exact details, but it was a very generous promotion. So there was a disconnect between what the customer was trying to get and what the chatbot was providing. To me, this is a really good instance of trying to be helpful and not thinking through all the different ways the chatbot might need to gather data for what a customer actually requires help with. So that's one of the things I get really nervous about when we're talking about customer-facing chatbots. Now, I will say, one of the great things we just put out, I think a week or so ago, is a support chatbot on our system that lets people ask questions, and it's based on true, real documentation. I've used it, and it's excellent. It's so good. I absolutely love it.
I'm really curious about the Air Canada hallucination around the promotion. Did they have to honor the promotion that was promised to the customer? Is the chatbot a representative of the company, and were they liable for that? That's a great question. It went through the courts, and it ended up that...
Yes, they did have to honor that promotion. Wow. Yeah. So there's a lot of risk associated with chatbots. Even if it doesn't hallucinate, the chatbot is pulling data from somewhere, and promotions have to be updated on a regular basis; if they're not, it may be pulling stale data. So there are a lot of maintenance tasks associated with feeding a chatbot the data it needs to give a customer current information. There's certainly that risk. At the same time, there's a risk of not moving forward with a chatbot, because that might mean customers don't get their questions answered at all, or there's a frustratingly long delay. So when you think about the broader picture, what's the bigger risk: moving too fast and having something like a hallucinated promotion slip through, or not innovating for fear of something like this happening?
00:12:05.530 — 00:18:44.170 I have to fight against my inner demons as a compliance person. I'm a generally anxious person, but when it comes to this particular topic, I think it's really important to move as quickly as you can. I realize that's scary, because there's a lot of risk associated with moving fast here. But the reason I say that is that there's a competitive-advantage aspect to this: other firms out there are going to get it right, and those that get it right quicker are going to be leagues ahead while everyone else is still trying to figure things out. So it is a move-quickly atmosphere. But especially when you're thinking about people's data, the most important thing I've seen work with a lot of our customers is getting the right folks at the table to have a discussion, so you can make the right decisions and have the right policies and procedures in place, and all of the things required to build out a really robust supervision and monitoring program, so that you don't have data problems and you're treating your customers with the utmost respect. So it sounds like marketing and compliance need to be tightly aligned, become best friends. What does a really healthy relationship look like between marketing and compliance? And I imagine there are breakdowns at certain points. Where do you see things break down?
This is something I've seen work really, really well, and, in my own past, sometimes not that well. It really depends on who is at the table. As you mentioned, marketing and compliance absolutely have to be at the table, but legal should be there, your IT team should be there, operations should be there. These are the sorts of things that are multi-departmental, and for things to move quickly, everybody needs to be on top of their game. I think where things get problematic is when there's a critical piece that needs to be decided, and maybe it's taking a little too long, and one team moves ahead while the other falls behind, and there's not a lot of alignment. In the past, quarterly meetings have been reasonable, but things are moving so quickly nowadays that quarterly might not be often enough. You should probably be shooting for at least monthly. Absolutely. I can definitely see why that has become mandatory at this point.
I want to shift to a couple of more practical tips, because this is really good. When you're sitting at the table talking about a potential new tool you might want to onboard that touches customer data or confidential company information, what should you be thinking about before bringing a tool like that on board?
When you're thinking about a tool, it's important to make sure the company behind it fully understands the importance of treating your data properly. Often what that means is that in the process of an RFI, a due diligence questionnaire, or any of these questionnaire processes, they've already thought of this, and they're providing this information proactively: look, I understand what you're trying to do, we've done this a million times before, we've talked to very rigorous AI committees, and we've succeeded there. Having a vendor that's really mature from that perspective is incredibly important, and you can find that out through the RFP and RFI process. Are there some AI use cases I could safely pilot today that don't take on significant risk?
Oh, certainly. Yes, we've done this a lot internally. We were talking a little before about this, but we have an internal tool that's based on existing documentation. I believe it's called RAG, retrieval-augmented generation: it essentially pulls from existing documentation and only references that documentation, so the risk of hallucination is minimized. And because it's an internal-facing AI chatbot, it's not going to hallucinate externally and put you at risk that way. So that's a really good example of AI that can make things a lot easier. I love those. I have one I use myself, where I load it up with customer interview transcripts or data and then just start asking questions. In fact, earlier today I was on a call and we were trying to remember what happened in an earlier conversation. So I pulled the transcript from the Zoom call, popped it into NotebookLM, and started querying it. It was great; it answered all of the questions we had and settled an internal dispute about the outcome of a call. I actually just built a custom GPT for me and my family.
We travel a lot, and for spring break coming up we're going to Spain. I'm not sure how excited my kids are going to be about tours, which is something we always love. So I built a Rick Steves AI: it takes all of the places they've been before, and then I can snap a picture of anything I'm looking at, and it will compare it to, say, the Duomo in Florence and tell me, like the Duomo, this piece of architecture has X, Y, and Z associated with it. It's very cool, and that's the kind of thing I think of as a really good use case in your own personal life. I love Rick Steves, I adore him, I would love access to your GPT. I almost did that myself recently when I was in Milan, and it was amazing. What use cases would you explicitly avoid at the moment?
00:18:46.010 — 00:20:46.420 I mean, there are a couple of silly ones, right? Like an AI that will trade on your behalf and provide investment recommendations. That seems a little silly to me. But I do always feel a little uncomfortable with externally facing communications that are purely developed by AI. One of the things we've tried to be very careful about in the development of Yext products is to make sure there's always a human in the loop, and that's still required from at least a best-practices standpoint. When we start to send out communications that are purely developed by an AI and not checked by anybody, that makes me extremely nervous. I just came back from a CMO conference, and one of the things we all aligned on was exactly this: there still needs to be a human in the loop. As individuals, we're quite good at detecting when something's been written by an LLM, and it doesn't feel good to realize this content was just generated. So authenticity, especially in communications, will be a competitive advantage. And I think it goes back to what you said at the beginning: the way you treat customer data and approach privacy and trust can be a competitive advantage.
I very much agree. Yeah, that's a great point. This was a wonderful conversation, and I really appreciate that you brought your perspective today. I also discovered you were into punk music, which is so great; we have so much to chat about later on that front. Thank you so much for sharing your perspective, and we'd love to have you back again soon. My pleasure, Rebecca. Thank you so much; this has been a pleasure. That's a wrap on this episode of The Visibility Brief. If you found this useful, subscribe, leave us a review, or send this to a colleague who needs to hear it. We'll see you next time.
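For readers curious about the retrieval-augmented generation (RAG) pattern Bill describes, here is a minimal, self-contained sketch of the idea: the assistant answers only from a fixed set of documents and refuses when nothing matches, which is what constrains hallucination. This is an illustrative toy, not Yext's implementation; the keyword-overlap scorer and all names are invented for the example (production systems typically use embedding-based retrieval plus an LLM to phrase the answer).

```python
# Toy sketch of RAG-style grounding: retrieve the most relevant document,
# then answer only from it. All names and scoring are illustrative.

def tokenize(text):
    """Lowercase, whitespace-split bag of words (deliberately naive)."""
    return set(text.lower().split())

def retrieve(question, docs):
    """Return the doc sharing the most words with the question, or None."""
    q = tokenize(question)
    scored = [(len(q & tokenize(d)), d) for d in docs]
    best_score, best_doc = max(scored)
    return best_doc if best_score > 0 else None

def answer(question, docs):
    doc = retrieve(question, docs)
    if doc is None:
        # Refuse rather than guess: the point of grounding in documentation.
        return "I couldn't find that in the documentation."
    return f"According to our documentation: {doc}"

docs = [
    "Refunds are processed within 5 business days.",
    "Support hours are 9am to 5pm Eastern, Monday through Friday.",
]
print(answer("When are support hours?", docs))
```

The refusal branch is the part that matters for the Air Canada scenario discussed above: a grounded bot that can say "I don't know" cannot invent a promotion that was never in its source documents.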