AI adoption has moved past the point of debate — it’s now a cultural stress test, exposing which organizations have the governance, trust, and leadership infrastructure to scale it responsibly and which are simply reacting to pressure. As companies push AI further into daily work, HR leaders face a complex mandate: accelerate adoption without eroding the human-centered culture that drives performance in the first place. The question is no longer whether AI will reshape your organization — it’s whether your people, managers, and systems are intentionally designed to absorb that change without fracturing trust.
In a recent GoProfiles HR Game Changers panel, senior people leaders Allyson Carr, Jim Miller, and Vanessa Monsequeira shared how they’re navigating that tension — building governance frameworks, closing critical thinking gaps, and equipping managers to lead through continuous change. Moderated by Janelle Henry, Talent and Brand at Stripe, the conversation tackled the thorniest questions in AI adoption: how to move fast without breaking trust, how to preserve the human elements of work while embracing automation, and how to prove any of it is working.
Speakers
Janelle Henry: Talent and Brand at Stripe; advisor and former VP of People at Rad AI; GoProfiles customer (moderator)
Allyson Carr: Chief People Officer, Cybereason
Vanessa Monsequeira: VP of People, Intelligence Rewired (ex-Gorilla)
Jim Miller: VP of People and Talent, Ashby
Key Takeaways:
Governance is the gap — the barrier isn’t curiosity, it’s the absence of a clear framework.
Slow down to speed up — mapping what success looks like leads to faster, more confident execution.
Saved time isn’t ROI — without a plan for redirecting those hours, the return never materializes.
Managers are the overlooked layer — without alignment and peer learning built in, even the best AI strategy stalls in the middle.
Metrics need a maturity curve — pressure to show ROI too soon undermines genuine adoption.
Trust is the foundation — transparency, honest communication, and understanding what your workforce values are what make AI adoption sustainable.
Don’t Mistake an Adoption Problem for a Governance Problem
One of the first things the panel challenged was the framing of AI adoption itself. Many organizations assume the hurdle is getting employees to use AI. The real issue runs deeper.
“Nearly 30% of employees are using AI without telling their manager. That highlights that it’s not necessarily an AI adoption issue — it’s actually a governance issue.”
—Allyson Carr, CHRO, Cybereason
People are curious. They’re experimenting on personal devices, on their own time, finding workarounds when companies haven’t given them a clear lane to operate in. The problem isn’t resistance — it’s the absence of a framework. Carr also pointed to a gap between intention and action at the leadership level: nearly 80% of companies have announced plans to reskill their workforce by 2030, yet only about 13% of workers have actually undergone reskilling.
The takeaway: before you can solve for culture, trust, or productivity, it’s worth asking whether your AI strategy is a concrete plan — or still more of an intention.
Slow Down to Speed Up
Vanessa Monsequeira’s advice runs counter to the “move fast” instincts that dominate tech culture — and she makes a compelling case for why.
When Monsequeira took on an AI rollout in a previous role, she started not with tools or training, but with a question: what could go wrong? Before anything else, she convened a cross-functional steering group and facilitated what she calls a “pre-mortem.”
“We came together to ask: what does good look like? Not just success metrics — but in six months, in 12 months, what are we seeing that’s different? What are we hearing from our customers and employees? Where AI adoption gets blocked is almost always fear and resistance.”
—Vanessa Monsequeira, VP of People, Intelligence Rewired (ex-Gorilla)
That pre-mortem surfaced real concerns: data security, governance, tooling, and clarity around what employees were and weren’t expected to do. With those mapped out, the team could move deliberately — fast where it was safe to move fast, slow where caution was warranted. Without that foundation, Monsequeira noted, organizations risk deploying tools before people know how to think with them — and that’s where things tend to go sideways.
The Hidden ROI: Time Isn’t Saved Until It’s Repurposed
Jim Miller offered a framework for thinking about AI’s return on investment — one that challenges how most organizations are currently measuring it.
“We’re deploying AI to save human time — but it’s what you do with that time that matters. Before you deploy the technology, identify where that time should be saved and what you’re going to do with it.”
—Jim Miller, VP of People & Talent, Ashby
In other words: if you implement AI to save two hours per person per week but don’t actively redirect those hours toward higher-value work, you haven’t gained anything. Miller suggests thinking in units of 40 hours: saving two hours per person per week across a 20-person team frees 40 hours — a full workweek of capacity — every single week. Each time you free up a workweek’s worth of capacity, you have a concrete story to tell about what that capacity is now doing. At Ashby, that’s translated into something equally valuable: recruiters spending more time with candidates, not less — a measurable gain in job satisfaction that sits alongside the productivity numbers.
That balance — output and fulfillment — is what makes AI adoption sustainable.
The Middle Layer: Why Manager Enablement Can’t Be an Afterthought
Every panelist circled back to the same pressure point: managers. They’re the critical conduit between executive vision and employee experience — and often the least supported through AI transitions.
Miller highlighted how misaligned adoption across functions can quietly create serious business risk. If your sales team accelerates with AI while customer success doesn’t have the same tools, you’re scaling pipeline faster than your ability to deliver on it. The fix isn’t more training. It’s alignment.
Monsequeira was direct about what she believes is the most important investment organizations can make right now:
“Every single leader needs to go through change leadership and change management training. We know that 75% of transformations fail — and in the AI space, some estimates put it at 90%. People are focused on tools without communicating the vision — again and again and again. When you feel blue in the face, some people are probably hearing it for the first time.”
—Vanessa Monsequeira, VP of People, Intelligence Rewired (ex-Gorilla)
She also advocated for making peer learning a structural habit — short weekly sessions where managers learn from each other, sharing what’s working and what isn’t, rather than waiting for top-down training that quickly goes stale.
Carr organized the manager challenge into three gaps worth addressing systematically:
The clarity gap — Managers don’t know what AI use is actually permitted.
The confidence gap — Managers don’t know how to explain the “why” or share examples with their teams.
The equity gap — Some team members feel invisible or devalued as AI takes on more tasks.
Her take on the equity gap is worth sitting with: weave both human skills and AI-driven gains into your rewards and recognition model. Acknowledge the person who applied judgment, thought critically, and made a call — alongside those who used AI to deliver measurable results. If both matter to your organization, it’s worth making sure people know it.
Critical Thinking Is the Skill AI Can’t Replace
One theme that surfaced consistently throughout the conversation was the quiet erosion of critical thinking — and the opportunity HR leaders have to get ahead of it.
Monsequeira outlined three components she believes define real critical thinking in the context of AI: structured problem-solving before touching a tool, critical analysis of the output, and decision-making that stays firmly in human hands. She described a rule she implemented with her own team:
“Before you touch the tool, you must do structured problem-solving — situation, complication, question, answer. Then work with the AI. Get the output, then critically analyze it.”
—Vanessa Monsequeira, VP of People, Intelligence Rewired (ex-Gorilla)
Carr recommended using personal projects — building a website, creating an audiobook, making a game for your kids — as a low-stakes way to get comfortable with AI before bringing those skills into the workplace. Her broader point: the skills gap around AI isn’t just technical. It’s about learning to engage critically with what these tools produce.
How to Measure What Actually Matters
On metrics, the panel agreed that most organizations are measuring the wrong things — or measuring the right things at the wrong time.
Carr outlined three workforce stability metrics she sees as underutilized:
Internal mobility — What percentage of employees are moving into AI-augmented roles?
Role redesign — What percentage of job descriptions have been updated with reskilling plans?
Manager readiness — What percentage of managers are AI-literate and actively leading change?
Monsequeira offered a useful reframe for anyone feeling pressure to show results quickly: before building a dashboard, make sure you understand what your stakeholders are actually expecting from AI — and what’s working elsewhere. And resist the urge to measure too soon. Giving people space to experiment in the early months, without the pressure of productivity targets, creates the conditions for genuine adoption. The right metrics will depend entirely on where your organization sits on the maturity curve — a company just beginning to explore AI needs a very different framework than one that has been deploying it for two years.
It All Comes Back to Trust
When asked to distill everything into one piece of advice, each panelist returned to a different dimension of the same core truth: trust is the foundation.
Carr urged leaders to resist the temptation to use AI as a shield for difficult business decisions:
“People want honest answers. What will help build trust is not reactive decisions, but a visible, multi-year plan. When people feel like there’s time, thought, and intention behind the plan, that’s what protects culture and trust.”
—Allyson Carr, CHRO, Cybereason
Miller pointed to transparency about the future of headcount as an underutilized cultural lever:
“The level of transparency — ‘we took this into consideration when building the headcount plan, here’s what it looks like moving forward’ — is how culture, change, and innovation can sit well together.”
—Jim Miller, VP of People & Talent, Ashby
And Monsequeira brought it full circle: don’t take action until you actually understand what your workforce values. Conduct a change-readiness survey. Run pulses. Ask people what culture means to them — the rituals, the stories, the structures — before you design anything around it.
“Before you take action, the only action is: understand what’s most important to your workforce. Don’t assume.”
—Vanessa Monsequeira, VP of People, Intelligence Rewired (ex-Gorilla)
The Real Work Starts Here
Scaling AI without breaking culture isn’t a technology problem — it’s a leadership problem. It requires honesty about where your organization actually is, not where you want it to be. Managers need the tools, training, and support to lead through ambiguity, and critical thinking needs to be practiced, recognized, and rewarded — even as AI accelerates the pace of work and decision-making.
The HR leaders who get this right won’t be the ones who moved fastest. They’ll be the ones who were most intentional.
Scaling AI responsibly starts with understanding your people. Discover how GoProfiles helps HR leaders build the visibility, alignment, and trust needed to make AI adoption stick.