I’ve always been impressed by how seriously Intercom takes learning and iterating. This is their early experience with Fin, their new AI agent:
customers hold Fin to a much higher standard than they hold their human team to. Even when Fin is faster than humans, and more accurate more often, the feedback is ‘Fin is too slow’, ‘Fin made too many mistakes’.
For example, Fin can issue refunds to customers. To do so, Fin needs to:
1. Check the product purchase history
2. Check the refund policies
3. Check the customer record
4. Approve the refund
5. Talk to the payment system to issue the refund
6. Get back to the customer to tell them the refund has been approved and issued
7. Update all customer records
For any AI agent to do that accurately and consistently is impressive. But it might take 90 seconds.
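To make the shape of that work concrete, here is a minimal sketch of the orchestration as a simple sequential pipeline. Every name in it (RefundRequest, check_refund_policy, and so on) is invented for illustration, since Intercom hasn't published Fin's internals; the stubs stand in for calls to live systems.

```python
from dataclasses import dataclass

# Every name and signature below is hypothetical -- stand-ins for the
# systems an agent like Fin would actually talk to.

@dataclass
class RefundRequest:
    customer_id: str
    order_id: str
    amount_cents: int

def check_purchase_history(req: RefundRequest) -> bool:
    """Confirm the order exists and belongs to this customer (stub)."""
    return True

def check_refund_policy(req: RefundRequest) -> bool:
    """Confirm the order is still inside the refund window (stub)."""
    return True

def check_customer_record(req: RefundRequest) -> bool:
    """Confirm the account is in good standing (stub)."""
    return True

def issue_refund(req: RefundRequest) -> str:
    """Tell the payment system to pay out; returns a transaction id (stub)."""
    return "txn_12345"

def update_customer_records(req: RefundRequest, txn_id: str) -> None:
    """Write the refund back to the customer's records (stub)."""

def handle_refund(req: RefundRequest) -> str:
    # Each check mirrors one step in the list above; any failure stops
    # the flow before money moves.
    for check in (check_purchase_history, check_refund_policy, check_customer_record):
        if not check(req):
            return "Sorry, this order isn't eligible for a refund."
    txn_id = issue_refund(req)            # approve and pay
    update_customer_records(req, txn_id)  # keep records consistent
    return f"Your refund has been approved and issued (ref {txn_id})."

if __name__ == "__main__":
    print(handle_refund(RefundRequest("cust_42", "order_99", 1999)))
```

None of these steps is expensive on its own; the 90 seconds presumably come from running them in sequence against real systems, each with its own latency.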
I remember similar impatience in the early days of LLMs. The author has a couple of ideas for why this objectively fast and reliable process is perceived as slow and flaky:
We don’t like the perceived loss of control when a system takes over. Possible solution: be very explicit about processing steps (see the sketch after this list).
User education. Tell them the same process would take even longer done manually.
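The first idea maps naturally onto a streaming interface: narrate each step as it happens instead of going silent for 90 seconds. A toy sketch of that pattern, with step names and delays made up for illustration:

```python
import time
from typing import Iterator

def refund_with_narration() -> Iterator[str]:
    """Yield a human-readable status line before each (simulated) step."""
    steps = [
        ("Checking your purchase history...", 1.0),
        ("Reviewing the refund policy...", 1.0),
        ("Looking up your account...", 1.0),
        ("Issuing the refund...", 2.0),
        ("Updating your records...", 1.0),
    ]
    for message, duration in steps:
        yield message          # surface progress immediately
        time.sleep(duration)   # stand-in for the real work
    yield "Done! Your refund is on its way."

if __name__ == "__main__":
    # In a chat UI these updates would stream into the conversation;
    # printing them is enough to show the shape of the pattern.
    for update in refund_with_narration():
        print(update)
```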
I don’t think we have great behavioral norms around how to use these agents. Should you say “please” and “thank you” to Alexa? Is kicking over a robot as cruel as it looks (should it be considered assault)? When prompting an LLM, should you order it, or make a polite request? The experience is still very much a work in progress.