AI Agent Ethics & Disclosure Timing in 2025: Dear Consumer, It’s Me – Your AI Agent
Imagine this: You hop on a call with a home services brand you've been considering, and a friendly voice on the other end greets you warmly. “Hi, I noticed you’ve been browsing our site for help with HVAC maintenance and repair. Can I assist with any questions?” You start to engage, asking about pricing, availability, and service options, and the conversation flows naturally. You ultimately decide to book.
Now, picture this alternative scenario. You receive the same call, but this time, after you say hello, you’re greeted with, “Hi, I’m Jane, your AI agent here to assist you.” How do you react?
With AI agents becoming increasingly human-like and used for a wide range of contact center interactions—from qualifying prospects to answering FAQs to providing customer support—brands are facing a crucial question around AI agent ethics: Should the AI reveal its identity upfront? If so, exactly when in the conversation, and how? Or should the AI disclose its identity only if the customer asks directly? Do brands even need to worry about this, or will clear regulatory standards emerge, like the requirement to announce that a call is being recorded?
For now, no uniform standard exists. This leaves brands to decide for themselves. Here are some considerations and lessons from A/B tests run with Regal’s AI Phone Agents to guide your decision.
Legal/Regulatory Considerations
Note: Before discussing legal or regulatory considerations, we want to make clear that nothing in this article should be considered legal advice.
Two key, related regulatory concepts are currently under consideration: the need for consumer consent to interact with generative AI (for outbound calls), and transparency in disclosing when a consumer is actively interacting with generative AI.
As of January 2025, there are no concrete federal laws in the U.S. specifically regulating consent for AI-powered interactions in contact centers. However, in July 2024 the Federal Communications Commission (FCC) issued a Notice of Proposed Rulemaking (NPRM) that roughly proposes the following:
- Consent: They propose to require brands making outbound AI-generated calls or texts to specifically disclose that the consumer is consenting to receive AI-voice/generative calls or texts. (This could mean that during the opt-in process, such as when a customer agrees to receive promotional calls or texts, the language would have to clearly state that AI technology may be used in those communications.)
- Transparency: They propose to require that, at the outset of any such call, the AI voice disclose that the call is being made with AI voice technology. (Exactly how, and with what language, the AI voice agent would disclose that it’s using generative AI technology is not made clear in the proposal.)
There are also emerging state-level regulations on this issue in California, Utah and Colorado. The approaches are inconsistent, with some requiring disclosure that the consumer is interacting with AI upfront “unless it’s obvious to a reasonable person”, while others require disclosure only if asked by the consumer.
Ethical Considerations
Do customers deserve to know when they are speaking to an AI agent – even when they don’t ask? Some argue that transparency in AI is essential to building trust with customers and driving market acceptance. But does revealing the AI upfront inherently frame it as a lesser solution? Perhaps it matters what the alternative is.
If AI agents are replacing IVR (interactive voice response) systems, they are arguably offering a superior alternative to anger-inducing, automated menus of the past. If the AI agent is delivering better results, faster solutions, and a more seamless experience, does it matter whether the customer knows it’s generative AI upfront?
And even if in some use cases AI agents are replacing human agents, it’s not like all human agents perform the same. Yet they don’t have to disclose their competency levels before engaging with customers. You don't hear a customer service rep say, “Hi, I’m Dave on a recorded line. Before we get started you should just know, I joined the company 2 weeks ago, I’m an offshore contractor and I’m in the bottom quartile of performers.”
The strongest AI agent ethics argument for why AI should identify itself upfront is because some AI systems in the past have been shown to be prone to biases or errors, and therefore customers have a right to know that their experience is being shaped by an algorithm. That said, humans are also prone to biases and errors, and human agents have been guided by automated and algorithmic systems in their conversations for years now (e.g., whether to approve a refund and how much, how to respond next based on AI-sentiment detection, etc.) – so if an AI agent is proven to outperform human agents on these dimensions too, is that sufficient?
Brand Considerations
Your brand's identity and values should likely influence how you approach the question of AI disclosure, just as it influences the overall customer experience you aim to deliver.
Some brand identities are fundamentally built around bringing transparency to their industry—for example, Patagonia, which is known for its transparency in sourcing materials, manufacturing processes, and environmental impact, or Southwest Airlines, which provides an itemized breakdown of ticket costs so consumers have complete information about where their money is going. In these cases, it might be beneficial to reveal the AI upfront to stay consistent with the brand’s identity of transparency.
Other brands might prioritize the seamless experience AI can provide and could choose to keep the focus on the service provided, rather than the technology behind it. Amazon comes to mind as a brand predicated on convenient, frictionless experiences enabled by technology. Announcing that a call is generated by AI (unprompted) may actually cause customer friction and hesitation, resulting in the opposite of what the brand stands for.
Finally, still other brands may be known as always at the cutting edge of delivering and utilizing new technologies, so announcing the call is being conducted by “a friendly AI assistant” upfront may actually reinforce their brand identity. Brands like Apple or even leading Neobank and Insurtech brands like Chime and Ethos may be good candidates for this approach.
Performance Impact: What the Data Says
A/B testing has provided some insights into the impact of announcing AI upfront—and the way in which it’s announced—versus withholding that information until asked. In experiments using Regal’s AI Phone Agents, when the AI was announced immediately, customers tended to hang up more quickly or give short, robotic yes/no responses, treating the AI more like an IVR. This diminishes the value of conversational AI, which is designed to mimic human dialogue and offer a personalized customer experience.
Conversely – when the AI’s identity wasn’t revealed until after a customer had engaged in some upfront small talk or upon being asked directly – customers were more likely to engage in longer, more meaningful conversations, leading to better business outcomes.
In one experiment – an insurance qualification use case – a small change in the timing and framing of how to reveal that the call is powered by AI doubled the number of calls that became meaningful conversations and were transferred to a licensed agent (the business goal).
- Control: “Hi {{Prospect First Name}}! I'm an automated agent from {{Company}} calling from a recorded line. I'm reaching out because I saw you were looking for…”
- Variant: “Hi {{Prospect First Name}}! I'm calling from {{Company}} on a recorded line. How are you today?” Then, after acknowledging their answer: “As a virtual agent, I saw that you were looking for…”
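A test like the one above needs each prospect to be assigned to exactly one script, consistently across calls. A minimal sketch of how that bucketing and greeting rendering might work—the template text, function names, and 50/50 split are illustrative assumptions, not Regal's implementation:

```python
import hashlib

# Hypothetical greeting templates, loosely mirroring the control/variant
# scripts above (placeholder wording abbreviated for the example).
TEMPLATES = {
    "control": ("Hi {first_name}! I'm an automated agent from {company} "
                "calling from a recorded line."),
    "variant": ("Hi {first_name}! I'm calling from {company} on a "
                "recorded line. How are you today?"),
}

def assign_bucket(prospect_id: str) -> str:
    """Deterministically split prospects 50/50 by hashing their ID,
    so the same prospect always hears the same script."""
    digest = hashlib.sha256(prospect_id.encode()).hexdigest()
    return "control" if int(digest, 16) % 2 == 0 else "variant"

def opening_line(prospect_id: str, first_name: str, company: str) -> str:
    """Render the greeting for whichever bucket this prospect falls in."""
    bucket = assign_bucket(prospect_id)
    return TEMPLATES[bucket].format(first_name=first_name, company=company)
```

Hashing the ID (rather than randomizing per call) keeps the experience stable for repeat contacts and makes results reproducible when analyzing outcomes later.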
In another experiment, an education company waited to disclose AI only if a prospective student asked. In just 2.7% of conversations did a student actually ask whether the agent was AI.
Recommendations
Given the current landscape and above considerations, our recommendation around AI agent ethics and disclosure timing is to:
- A/B test different approaches for how to disclose AI to your audience (within your brand guidelines), since disclosure is likely to be required in the first sentence or two. For example, experiment with alternatives like: “Hi - I’m an AI-powered agent here to assist you” vs. “I’m Danny, a virtual agent on the support team - how are you?” vs. “This call may be recorded, monitored or powered by AI for quality assurance purposes” played before the AI agent speaks. Pay attention to the impact on the percentage of calls lasting under 15 seconds of talk time, the percentage of calls escalated to a human representative, and customer sentiment scores.
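When comparing those metrics across variants, it helps to check whether an observed gap is real or noise. A minimal sketch of a two-proportion z-test on the short-call rate—the function name and sample counts are hypothetical, and in practice a statistics library would be used:

```python
import math

def two_proportion_z(success_a: int, n_a: int, success_b: int, n_b: int) -> float:
    """Two-proportion z-test: how many standard errors apart are the
    rates observed in arm A vs. arm B?"""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)  # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical numbers: calls lasting <15 seconds in each arm.
z = two_proportion_z(420, 1000, 310, 1000)  # 42% vs. 31% short-call rate
# |z| > 1.96 indicates significance at the 95% confidence level
print(abs(z) > 1.96)
```

The same test applies to escalation rate; sentiment scores, being continuous, would call for a t-test instead.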
- Consider also tailoring your approach to different customer segments – some audiences may be more receptive to AI-powered calls than others.
- Stay tuned for regulatory changes. This is a fast-evolving space. Brands need to stay informed about any legal or regulatory updates that could impact how AI agents are introduced in customer conversations. Ensuring compliance will be essential as AI adoption continues to grow.