AI Can’t Spot Gorillas (But Your Support Team Can!)

Mar 10, 2025

A Gorilla of a Problem

Back in 2020, researchers ran a study of selective attention in data analysis. Students were split into two groups: the first received specific hypotheses to test against a dataset of daily steps and BMI values; the second was simply asked to “examine the data appropriately.”

Those given explicit hypotheses tended to miss the glaring “gorilla” in the data - a pattern so visually obvious that free-form explorers caught it easily. 

The gorilla in the dot plot

Like those students, foundation AI models struggled with this same dataset, failing nearly three times as often as humans. The root cause? AI tends to fixate on the instructions it’s given and can overlook glaring anomalies sitting right in front of it.

In customer support, this can be a real headache. AI excels at repetitive tasks and plows through known problems with impressive speed, but the moment a customer slips in a subtle new detail - our “gorilla” - the system can start hallucinating, trying to match the unfamiliar issue to an old ticket or giving an answer that’s outdated the minute it’s written.

Where AI Stumbles

This “gorilla blindness” happens because AI is only as good as its training and data. If your model has never encountered an odd configuration or a weird bug, it won’t magically spot it. It also can’t pick up on context you didn’t explicitly teach it. While AI sees the world in patterns, humans are wired to notice when something doesn’t fit - like a big ape in the middle of a dot plot.

For a support team, that means AI might gloss over crucial clues hidden in a customer’s question, or ignore context if the prompt isn’t phrased just right. Your human agents, on the other hand, think, “Wait, that doesn’t sound right,” and spot the anomaly immediately.

The Human Advantage (HITL)

Enter “Human in the Loop” training, or HITL for short. Instead of letting AI responses go straight to the customer, your agents review them first - which is why we at Stylo have built our entire product suite around this concept. By training on your agents’ use of our copilot, Stylo Assist, we’re able to close the gap and identify problems faster and more accurately. This extra layer of oversight is surprisingly powerful: AI digs through your knowledge base, drafts a reply, and a human reads it over before hitting send. Any laughable or off-base replies get corrected on the spot - no embarrassing, gorilla-sized oversights slipping through.

1. Customer submits a question
2. AI references the latest knowledge base
3. AI suggests a response
4. Agent reviews and adjusts via Stylo Assist
5. Customer receives an accurate, personalized reply
6. AI learns from agent adjustments
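The loop above can be sketched in code. This is a minimal illustration only - the class, function names, and data structures here are hypothetical, not Stylo’s actual implementation.

```python
# Hypothetical sketch of a human-in-the-loop support flow.
# These names illustrate the steps above; they are not a real API.

from dataclasses import dataclass, field

@dataclass
class Ticket:
    question: str
    draft: str = ""   # AI-suggested reply
    final: str = ""   # agent-approved reply the customer sees

@dataclass
class HITLPipeline:
    knowledge_base: dict[str, str]
    corrections: list[tuple[str, str]] = field(default_factory=list)

    def draft_reply(self, ticket: Ticket) -> None:
        # AI references the knowledge base and suggests a response.
        article = self.knowledge_base.get(ticket.question, "escalate to a human")
        ticket.draft = f"Suggested answer: {article}"

    def agent_review(self, ticket: Ticket, edited: str) -> None:
        # The agent reviews and adjusts before anything reaches the customer.
        ticket.final = edited
        if edited != ticket.draft:
            # Every correction becomes a training signal for the AI.
            self.corrections.append((ticket.draft, edited))

pipeline = HITLPipeline(
    knowledge_base={"How do I reset my password?": "Use Settings > Security > Reset."}
)
ticket = Ticket(question="How do I reset my password?")
pipeline.draft_reply(ticket)
pipeline.agent_review(ticket, edited="Go to Settings > Security > Reset, then check your email.")
```

The key design point is the last step: corrections aren’t thrown away, they’re collected so the model can learn from the gap between its draft and what the agent actually sent.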

Over time, every correction informs the AI and makes it smarter. Tools like our free Help Center Scorecard keep your knowledge base squeaky clean, so AI isn’t relying on stale tickets as a context source. That way, it’s prepared for the next oddball customer question that would stump a lesser system.

Making HITL Work for You

Implementing HITL doesn’t have to be a headache. Keep your knowledge base current so the AI always references fresh data. Then train agents to spot anything peculiar in a proposed response, from subtle product quirks to fast-changing policies. Track metrics like how often AI replies need tweaking, and watch those numbers go down as the AI improves.
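That “how often AI replies need tweaking” metric can be as simple as an edit rate - the fraction of AI drafts agents had to change. A hypothetical sketch (the function name and data shape are illustrative assumptions):

```python
# Hypothetical edit-rate metric: fraction of AI drafts agents changed.
def edit_rate(reviews: list[tuple[str, str]]) -> float:
    """reviews: (ai_draft, agent_final) pairs from one reporting period."""
    if not reviews:
        return 0.0
    edited = sum(1 for draft, final in reviews if draft != final)
    return edited / len(reviews)

# Toy data: agents edited 2 of 3 drafts in week 1, only 1 of 3 in week 2.
week1 = [("a", "a"), ("b", "B"), ("c", "C")]
week2 = [("a", "a"), ("b", "b"), ("c", "C")]
assert edit_rate(week1) > edit_rate(week2)  # a falling edit rate means the AI is improving
```

A downward trend in this number over successive weeks is the simplest evidence that the corrections are actually feeding back into the model.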

The payoff? You get the speed and scalability of AI without the risk of glaring mistakes. Customers enjoy faster replies, and agents stop wasting time battling AI-created confusion. Best of all, you set the foundation for predictive support down the line - where AI not only catches gorillas but also prevents them from appearing in the first place.