
Before You Put AI on Your Performance Reviews, Read This

AI Performance Reviews · May 06, 2026

Your executive team has a great idea: "We need to add AI proficiency to our performance reviews." Now HR is in charge of rolling it out. So, what's the problem?

It's that rolling out AI in your performance review criteria before you've had the right conversations with your team isn't bold - it's a setup for confusion, resentment, and a lot of managers and team members alike staring blankly at a form, wondering what they're actually supposed to assess.

Before AI proficiency lands on anyone's performance review, there's a framework worth working through - one that applies to any new expectation, but especially to something as fast-moving and emotionally loaded as AI.

Watch a roleplay of HR having this conversation in a leadership team meeting - and keep reading for takeaways.

The 3 Questions Every Employee Should Be Able to Answer

For any performance expectation to be fair - and actually drive results - every employee needs to know three things:

  1. What's expected
  2. What's allowed
  3. What's possible

If your organization can't answer those questions clearly before review season, you're not evaluating AI proficiency. You're evaluating who happened to guess right.

What's Expected

This sounds obvious, but try it: ask five people across different levels of your organization what "AI proficiency" means for their specific role. You'll likely get five very different answers - or a lot of "Well, I'm not exactly sure" placeholders.

"What's expected" means more than just "use AI." Without a clear conversation, someone could spend hours each week prompting away and genuinely believe that effort alone should earn them a 5 out of 5. The standard needs to be specific: what does good actually look like in this role, in this function, at this stage? That's the difference between a performance standard and a vague aspiration.

Before anything goes on a review form, managers need to be able to have - and actually have - a real conversation with each team member about what they're expected to do with AI. Not just a general organization-wide statement, but something role-specific and meaningful.

What's Allowed

This is where Legal, IT Security, and HR all have a stake - and where, without that conversation, employees tend to land in one of two places: they overuse AI in ways that create real risk, or they avoid it entirely.

When no one has communicated which tools are approved, a lot of people will default to whatever free tool they've heard of - and your confidential data ends up somewhere it shouldn't be. That's an obvious problem. But the opposite isn't great either. Employees who opt out because they're not sure what's allowed are making a reasonable call - it's just not fair to then turn around and assess them on AI adoption when no one gave them the boundaries to work within.

"What's allowed" means your organization needs clear answers to questions like:

- Are there approved tools employees should be using?

- If someone finds a new tool, who do they ask - and how long does approval take?

- What data and information is off-limits in AI tools, and why?

Getting this clarity before review time isn't just good for your lawyers and IT security. It's the prerequisite for holding anyone accountable.

What's Possible

Here's the one that most organizations miss - and it matters more than people realize.

If your employees have no visibility into what's possible with AI, two things happen. First, innovation stays siloed - one person figures something out, maybe tells a coworker, but it never scales. Second, employees start doing the mental math: If I make myself more efficient with AI, does that just mean more work for free? Or worse - am I automating myself out of a job?

That fear isn't irrational. It's a normal human response to incentive structures. If you want people to genuinely explore and adopt AI, you have to think about what's in it for them. Their development still matters. Their judgment still matters. The goal isn't to use AI for the sake of using AI - it's to use it to actually be better, not just look better.

Creating space for people to share what they're discovering, celebrating innovation and being transparent about how AI adoption connects to opportunity and compensation - that's how you actually get the results you're aiming for in the executive meeting.

The Real Conversation Happening Far from the Conference Room

Here's what gets lost between the executive table and the front line: the actual human conversations.

When your organization announces that AI will be part of performance reviews, the conversations that matter most aren't the ones in the boardroom. They're the ones between a manager and a direct report who's spending the meeting wondering if she should be worried. Between two managers in a hallway asking each other, "Wait, what are we actually supposed to assess?" Between an employee and an SVP in a skip-level meeting, where the SVP has no idea what the standard even is.

Those conversations are going to happen whether you prepare for them or not. The question is whether the people having them will actually know what to say.

Before You Add It to the Form

If you're heading into a season where AI proficiency is being added to performance reviews, here's what to do before the form goes live:

- Have the Expected/Allowed/Possible conversation with your team - not as a one-time announcement, but as an ongoing dialogue, mapped to each function and role

- Ask managers what they would actually assess - if they can't answer that, the review criteria aren't ready

- Ask employees what they've heard - the gap between what leadership thinks they've communicated and what employees have actually received is almost always wider than expected

- Signal what's in it for people - be explicit that human development, judgment, and contribution still matter and will still be recognized

Adding AI to a performance review form takes about three minutes. Having the conversations that make it meaningful takes longer - but it's the difference between a review that actually drives results and one that just adds a new box no one knows how to fill out.


Want a tip like this every week? Join the Manager Method Minute - practical and tactical, with a bonus tip for reading → sign up here.

I'm Ashley Herd, Founder of Manager Method®

I worked as a lawyer in BigLaw (Ogletree Deakins) and at leading companies (including McKinsey and Yum! Brands). I've also served as General Counsel and Head of HR for the nation's largest luxury media company (Modern Luxury). I'm a LinkedIn Learning instructor on people management, co-host of the "HR Besties" podcast (a Top 10 Business Podcast on Apple Podcasts and Spotify), and have been featured by CNN, Financial Times, HR Brew and Buzzfeed — all providing a skill set to benefit your organization and redefine people leadership.

HR Besties Podcast

Your HR Besties are here to celebrate your good days, relate on your tough days, and shout from the rooftops that being human at work matters. Hosted by Ashley Herd, Leigh Elena Henderson and Jamie Jackson.

Listen to the Podcast