AI has moved beyond its experimental phase — it’s reshaping how performance is measured, how work is structured, and how impact is delivered across organizations. As companies transition from curiosity to operational execution, HR leaders must reset how AI readiness is embedded into goals, workflows, incentives, and culture. The question is no longer whether AI will influence performance — it’s how intentionally organizations integrate it to accelerate outcomes without creating resistance, confusion, or misalignment.
In a recent GoProfiles HR GameChangers panel, senior people leaders from Guild, UiPath, and Vidyard shared how they’re navigating this reset — tying AI fluency directly to productivity, accountability, and measurable business results. Moderated by Janelle Henry, Talent and Brand at Stripe, the conversation explored the structural shifts, enablement investments, and leadership behaviors required to move AI from experimentation to expectation — while ensuring performance remains human-centered, transparent, and future-ready.
Speakers
Janelle Henry: Talent and Brand at Stripe; Advisor and Former VP of People at Rad AI; GoProfiles customer (moderator)
Alana Brandes: Chief People Officer, Guild
Agnes Garaba: Chief People Officer, UiPath
Sarika Lamont: Chief People Officer, Vidyard
Key Takeaways:
Urgency is accelerating — AI is moving from “interesting” to essential faster than many organizations expected.
Workflow redesign is the toughest step — adoption isn’t linear, and momentum often slows after the early wins.
Performance needs integration, not compliance — AI should be built into how results are achieved, not treated as a checkbox.
Incentives sustain adoption — aligning AI progress with recognition and rewards drives continued momentum.
Mindset matters more than tools — curiosity and judgment often outweigh prior AI experience.
Enablement is infrastructure — hands-on, cross-functional support makes AI expectations realistic and fair.
Leadership accountability is critical — without clear direction from the top, AI in performance drives anxiety, not adoption.
The Shift From Experimentation to Urgency
Over the past year, many organizations encouraged teams to explore generative AI tools — to experiment, test, and build baseline fluency. It was a period defined by curiosity and discovery.
But in recent months, the tone has shifted.
Rapid advances in AI capabilities have compressed timelines and raised expectations. What once felt exploratory now feels immediate. Leadership teams are no longer asking whether to adopt AI — they’re asking how it should reshape workflows, productivity, and performance systems right now.
For HR leaders, that acceleration brings both opportunity and pressure. Just as organizations begin to operationalize one wave of tools, the next release changes the landscape again. The pace is energizing — but it requires clarity, prioritization, and strong leadership alignment to avoid confusion or fatigue.
And for Alana Brandes, the velocity of change is redefining how quickly work itself can evolve:
“The speed at which these tools are enabling work to shift dramatically is wildly exciting — and incredibly hard to keep up with.”
—Alana Brandes, Chief People Officer, Guild
The Hardest Phase: Redesigning Work
If experimentation was phase one, workflow redesign is phase two — and it’s significantly more complex.
Many organizations have successfully encouraged employees to use generative AI as a thought partner. But lasting transformation requires more than better prompts. It demands rethinking processes, connecting previously siloed systems, and reimagining how teams collaborate and deliver results.
Adoption also isn’t linear. Leaders are seeing a familiar pattern: early enthusiasm followed by hesitation when the work shifts from task support to structural redesign. Momentum builds quickly with visible wins — then slows when teams are asked to rebuild how work actually flows.
The real transformation begins when AI moves beyond assisting individual tasks and starts reshaping operating models.
Sarika Lamont captured this inflection point clearly:
“We’ve moved past simply testing the tools — now the question is: where do we go from here? How do we truly rethink our workflows?”
—Sarika Lamont, Chief People Officer, Vidyard
Alana Brandes described a deliberate evolution from experimentation to effectiveness:
“Last year was about driving experimentation. This year is about asking: how is AI actually making you more effective?”
—Alana Brandes, Chief People Officer, Guild
Performance Management: From Usage to Impact
One of the most practical — and nuanced — parts of the discussion focused on performance management.
In the early stages of adoption, some organizations introduced AI as a standalone competency in performance reviews. But isolating AI can unintentionally dilute its purpose. When treated separately, it risks becoming a compliance exercise rather than a strategic enabler.
The panel emphasized that AI should not be positioned as an end goal. It should be embedded into conversations about outcomes and impact. As moderator Janelle Henry noted:
“There’s a tendency to make the goal AI — when the goal is actually the impact.”
—Janelle Henry, Talent and Brand at Stripe, Advisor & Former VP of People at Rad AI
That distinction reframes performance entirely. Instead of asking whether someone is using AI, leaders should explore how it changed their approach, improved execution, and elevated results.
Sarika Lamont shared why separating AI into its own question can backfire:
“When you make it a standalone question, it feels like a compliance checkbox. The better question is: where did you change how you approached your work — and where do you still have room to grow?”
—Sarika Lamont, Chief People Officer, Vidyard
Equally important, expectations must reflect role realities. A product engineer, a customer success manager, and a people operations partner will leverage AI differently. Performance frameworks must account for that nuance rather than applying uniform standards.
And in regulated or sensitive environments, effectiveness alone is not enough. Responsible use must be measured alongside outcomes. Agnes Garaba underscored the role of oversight and judgment in AI-enabled work:
“How does human judgment come into play as you use AI? Are people taking the right steps to make sure things are safe?”
—Agnes Garaba, Chief People Officer, UiPath
AI fluency, then, isn’t just about speed or efficiency. It’s about impact — delivered thoughtfully, responsibly, and in alignment with organizational standards.
Incentives and Enablement: The Infrastructure of Adoption
Adoption at scale requires more than strong messaging — it requires alignment across systems, incentives, and support structures.
One practical lever the panel discussed was incentive design. When AI readiness is tied to measurable outcomes and compensation, it sends a clear signal: this isn’t a side initiative — it’s strategic.
Sustained adoption requires deliberate, structured enablement. AI expectations must be matched with the training, tools, and cross-functional collaboration needed to support them. HR cannot carry this transformation alone — engineering, IT, operations, and leadership must all play a role.
Sarika Lamont emphasized that enablement is infrastructure, not an afterthought:
“If you want AI woven into performance, you have to prioritize the infrastructure around enablement. It’s a cross-functional effort — everyone is involved in driving it.”
—Sarika Lamont, Chief People Officer, Vidyard
Without that foundation, embedding AI into performance systems risks creating pressure rather than progress. With it, organizations can move from experimentation to sustainable, scalable adoption — grounded in clarity, capability, and shared ownership.
Hiring for the Future: Mindset Over Mastery
Recruitment is evolving alongside performance.
Organizations are embedding AI expectations directly into job descriptions and interview processes. For technical roles, this may include AI-assisted case studies or coding exercises. The goal isn’t simply to confirm familiarity with AI tools — it’s to understand how fluently and thoughtfully candidates apply them in real scenarios.
At Vidyard, AI readiness is intentionally built into the hiring flow. Sarika Lamont shared that job descriptions clearly outline AI expectations, and interview plans are designed to assess fluency in context. For technical roles, that can include incorporating AI directly into practical exercises — such as coding with AI — to evaluate how candidates think and execute in real time.
But exposure alone isn’t a reliable predictor of long-term success.
Agnes Garaba emphasized that frequency of use doesn’t automatically equal strategic capability:
“Using AI frequently can be a positive signal — but the real question is whether it’s being used thoughtfully and effectively. We’ve often seen strong results from individuals with less exposure who are deeply curious and eager to learn.”
—Agnes Garaba, Chief People Officer, UiPath
The takeaway: hiring for mindset may be more durable than hiring for mastery. Curiosity, adaptability, and the ability to rethink workflows often matter more than familiarity with a specific platform. In a rapidly evolving landscape, learning agility is the true competitive advantage.
The Leadership Mandate
As the conversation closed, one theme rose above the rest: AI transformation cannot be delegated downward without visible leadership ownership.
Embedding AI into performance systems without providing structure, clarity, and enablement is not sustainable. Leaders must define the strategy, align incentives, create clear learning pathways, and model usage themselves. Without that foundation, expectations feel like pressure rather than progress.
Alana Brandes underscored the risk of moving too quickly without support:
“It’s irresponsible to put AI fluency into performance without a clear, supported way for employees to get there — that creates anxiety, not adoption.”
—Alana Brandes, Chief People Officer, Guild
True transformation requires both ambition and infrastructure.
At the same time, waiting for perfect frameworks can stall momentum. Iteration is part of the process. Agnes Garaba emphasized the importance of moving forward — even before every detail is finalized:
“Don’t wait until everything is perfect — ship the idea, test it, and iterate.”
—Agnes Garaba, Chief People Officer, UiPath
The reset, then, is not about flawless execution. It’s about building while learning — pairing leadership accountability with a bias toward action.
Redefining Performance for an AI-Enabled Workforce
This shift isn’t about adding AI to performance forms.
It’s about redefining performance around meaningful impact in an AI-enabled workforce — and building the leadership clarity, aligned incentives, enablement infrastructure, and cultural reinforcement required to make that change sustainable.
AI is not the goal. Impact is the goal. AI is the multiplier.
Scaling AI successfully starts with visibility into your people and performance. Discover how GoProfiles helps HR leaders align teams, strengthen accountability, and lay the foundation for AI-enabled performance at scale.