Q&A – Hilary Aylesworth, Chief Product and Technology Officer at CoachHub

Hilary Aylesworth, Chief Product & Technology Officer at CoachHub, discusses AIMY™, an on-demand, evidence-based AI coaching companion built to deliver support at the moment of need. She explains how AIMY blends validated coaching frameworks with human oversight, the ethical guardrails behind it, and how AI can scale personalised learning while elevating, and not replacing, human coaches.

What inspired the creation of AIMY™, and how does it differ from traditional coaching models?

    AIMY™ was born from a very human frustration: coaching works, but it doesn’t scale. After working with 1,000+ enterprise clients, I kept seeing brilliant talent stuck without access to the kind of support that helps people grow—not because they weren’t deserving, but because coaching was reserved for the top few.

    So we asked: What if coaching showed up when you needed it most, not weeks later after a calendar invite?

    Unlike traditional coaching—which is high-touch, scheduled, and often reserved for senior leadership—AIMY™ is available 24/7, designed to jump in right when someone’s prepping for a tough feedback conversation, reflecting on a presentation flop, or setting stretch goals. It’s built on real coaching science, not improvisation, and it delivers support at the moment of need—without waiting for a Zoom slot to open up.

    Think of it less as replacing coaches, and more like building an on-demand thought partner for foundational challenges—so that human coaches can focus on the complex, nuanced stuff they do best. That’s how we scale reflection and access, without sacrificing depth.

How did you approach integrating psychological principles into the AI’s design without losing human nuance?

    We started with a rule: no shortcuts.

    We embedded behavioural scientists, coaching psychologists, and certified coaches into every design sprint. AIMY™ was built on validated frameworks like GROW and CLEAR, not GPT guesswork. The goal wasn’t to “sound human”—it was to be helpful in a deeply human way.

    Rather than give answers, AIMY™ asks the right questions. It helps users clarify goals, unpack blockers, and prepare for challenges—without telling them what to think. That’s by design, because real growth happens through reflection, not instruction.

    And the nuance? It’s not lost—it’s protected. AIMY™ knows when to hand off to a human coach. It avoids prescriptive advice, flags sensitive topics, and creates a space users describe as “judgment-free”—a term we hear so often, it’s basically AIMY’s love language.

What kind of feedback have you received from beta partners so far?

    The most consistent feedback? “This feels surprisingly personal.”

    Beta testers—ranging from new hires to experienced managers—tell us AIMY™ helps them structure their thinking, especially in moments when they’re unsure where to start. We’ve heard things like: “It’s like having a sparring partner who never gets tired,” and “I was more honest with AIMY™ than I expected to be.”

    Many users say they feel safer opening up to AIMY™ in early moments than with a human coach—especially when trust hasn’t yet formed. And rather than seeing AIMY™ as a replacement, they see it as a warm-up: it helps them practise, gain clarity, and feel more prepared for deeper coaching conversations.

    AIMY™ is becoming a kind of “coach for the coaching gap”—and users are genuinely grateful for it.

How do you ensure AIMY™ remains ethical, unbiased, and culturally sensitive across global users?

    Ethics isn’t a feature—it’s a foundational layer.

    We designed AIMY™ with five guardrails:

    1. Cultural adaptation: AIMY™ is localised into 80+ languages and reviewed by regional experts for tone and cultural nuance.
    2. Transparency: We clearly communicate what AIMY™ can—and cannot—do. Users know when they’re talking to AI and when it’s time to talk to a person.
    3. Continuous monitoring: Every response is logged, reviewed, and fine-tuned with real user feedback. Bias audits and QA loops run constantly.
    4. Inclusive development: We brought in diverse voices at every stage—neuroscientists, DEI consultants, multilingual reviewers, and end users across continents.
    5. Regulatory alignment: We proactively align with guidance from the EU AI Act and beyond, ensuring our approach holds up under legal and ethical scrutiny.

    The result? A product that respects human complexity, avoids overreach, and scales with integrity.

In your view, what are the most common misconceptions executives hold about AI in coaching or L&D?

    The biggest misconception is viewing AI as a replacement rather than an augmentation tool. Executives often think AI coaching means eliminating human coaches entirely, when actually it’s about creating a better division of labour: AI handles foundational needs whilst coaches focus on more nuanced situations. There’s also a fear that AI will exacerbate bias, when proper implementation actually provides opportunities to identify and correct biases more systematically than traditional approaches do.

    Many C-suite executives also approach AI as a tech trend rather than a strategic transformation tool. They focus on efficiency gains rather than on how AI can support broader business objectives like retention, innovation capability, or cultural change. The most successful implementations treat AI as a strategic partner for transforming how people grow and connect within organisations.

What challenges arise when balancing AI efficiency with the emotional intelligence required for effective coaching?

    The primary challenge is knowing where to draw boundaries. AI excels at structured scenarios such as performance review preparation, feedback frameworks, and presentation anxiety, but coaching often ventures into deeply personal territory requiring sophisticated emotional intelligence that AI simply cannot replicate.

    We’ve addressed this by designing AIMY™ to recognise its limitations. It focuses on low-complexity situations where efficiency genuinely adds value, rather than attempting to handle issues that require human empathy and intuition.

    The balance comes through transparency and clear expectations. Users understand what AIMY™ can and cannot do, which actually enhances trust. They appreciate having immediate access to structured support for routine challenges, knowing they can escalate to human coaches for more specific needs.

    Rather than seeing this as a limitation, we view it as responsible AI deployment. The goal isn’t to replace emotional intelligence but to handle the foundational layer efficiently, freeing human coaches to apply their skills where they’re most needed.

How do you see AI coaches complementing human coaching relationships in organisations?

    AI coaches like AIMY™ create a foundational support layer that makes human coaching more strategic and impactful. Rather than competing, they work in tandem: AI handles immediate, accessible needs whilst human coaches focus on complex, relationship-intensive development.

    AIMY™ supports someone preparing for their first feedback conversation or working through presentation nerves, providing frameworks and building confidence through practice scenarios. When that same person faces a career crisis or leadership challenge, they’re better prepared to engage meaningfully with a human coach.

    The result is more inclusive development: everyone gets consistent baseline support through AI, whilst human coaching becomes more targeted and effective. It’s not about replacement; it’s about creating coaching ecosystems that serve entire workforces efficiently.

How do you envision the future of personalised learning at scale, and what role will AI like AIMY™ play in that evolution?

    The future lies in creating integrated development ecosystems where learning connects directly to organisational strategy and individual growth paths. We’re moving beyond isolated training sessions towards continuous, contextual development that adapts to real workplace challenges.

    AI will provide the foundational layer: immediate, personalised support available precisely when needed. AIMY™ represents this shift: rather than scheduling development, people access coaching support as challenges arise. This creates compound learning effects, where small, consistent improvements build substantial capabilities over time.

    Crucially, personalisation at scale isn’t just about individual needs, but about connecting personal development to broader organisational transformation. AI can help identify patterns across teams, suggest relevant development pathways, and ensure learning initiatives align with business objectives.

From a leadership perspective, how should executives approach AI as a growth partner rather than a tech trend?

    Start with curiosity, not pressure. The biggest mistake executives make is approaching AI with preconceived notions about efficiency gains rather than understanding how it can strategically transform their workforce development.

    Focus on real business challenges rather than technology features. Ask: where do our people need consistent support? How might AI help us instil core values or build innovation capabilities?

    Most importantly, maintain strategic oversight. Develop clear policies for AI usage, involve employees in defining how these tools support their growth, and remember that AI amplifies your organisational culture. Ensure you’re amplifying the right values and behaviours. Treat AI as a strategic partner for transformation, not just another efficiency tool.

What lessons have you learned personally while developing AIMY™ that have changed how you lead or think about talent development?

    The most significant lesson is that democratising development requires rethinking our assumptions about what effective coaching looks like. Building AIMY™ has also reinforced the importance of what I call “generalist era thinking”: connecting insights across psychology, ethics, technology, and human behaviour. You cannot build responsible AI by focusing solely on technical capabilities.