The research behind what separates top enterprise sellers from average ones. The insight: the best sellers teach, tailor, and take control. They lead with an insight the buyer has not considered. This is exactly what the Brief should do in every issue.
Enterprise Sales
📘
The Fearless Organization
Amy Edmondson
The definitive research on psychological safety — and the science behind why employees don't speak up about AI confusion. Required reading for anyone trying to understand why adoption theater happens at the team level.
Psychological Safety
📙
Switch
Chip & Dan Heath
How to change behavior when change is hard. The Rider, Elephant, and Path framework maps directly to every AI adoption challenge. The most practical book on human behavior change in organizational contexts.
Behavior Change
📕
Never Split the Difference
Chris Voss
Applied negotiation from a former FBI hostage negotiator. Mirroring, labeling, tactical empathy — directly applicable to enterprise fee conversations and to reading what a buyer is actually thinking when they say "we're still evaluating options."
Negotiation
📗
The Culture Code
Daniel Coyle
Three skills that build high-trust environments: safety, vulnerability, and purpose. Every chapter has a direct application to why some teams adopt AI and others resist it. Read it for the belonging cue research alone.
Culture
📘
Atomic Habits
James Clear
The most cited book on habit formation — relevant because AI fluency is ultimately a habit, not a skill. The four laws of behavior change (cue, craving, response, reward) explain exactly why most AI training programs fail to create lasting change.
Habits
📙
The Trusted Advisor
David Maister
How to move from vendor to advisor in the eyes of an enterprise client. The trust equation is the most useful single framework for understanding why some practitioners get hired repeatedly and others only once.
Enterprise Relationships
📕
Long Life Learning
Michelle Weise
A blueprint for the future of skill development in a world where careers span 50+ years. The research on skill adjacency and identity in the AI era maps directly to the Activity vs. Fluency framework and every L&D conversation happening right now.
Future of Work
🎙
WorkLife
Adam Grant
Organizational psychology applied to real workplace situations. Every episode is directly relevant to L&D and HR professionals. Grant's research on motivation, identity, and performance is the academic spine of half the Brief's Human Side content.
Organizational Psychology
🎙
Dare to Lead
Brené Brown
Vulnerability, courage, and leadership in organizations. The research on psychological safety and shame in workplace cultures connects directly to why employees hide AI confusion from their managers and why adoption theater persists.
Leadership
🎙
HBR IdeaCast
Harvard Business Review
The most consistently relevant management research podcast for L&D and HR buyers. Episodes are 20-30 minutes and routinely surface the academic research that Brief readers need to cite in budget conversations with leadership.
Management Research
🎙
No Priors
Sarah Guo & Elad Gil
The sharpest AI industry conversation happening in podcast form. Guests are founders, researchers, and investors at the frontier of AI development. Listening gives L&D leaders 6-12 months of lead time on what is coming to their organizations.
AI Industry
🎙
The Knowledge Project
Shane Parrish
Mental models and decision-making from the world's clearest thinkers. The episodes on first-principles thinking, systems, and organizational psychology are required listening for anyone building AI adoption programs that need to actually work.
Decision Making
🎙
Coaching for Leaders
Dave Stachowiak
One of the most downloaded leadership development podcasts. Interviews with L&D researchers, HR leaders, and organizational coaches. Stachowiak's audience is Terry's audience — L&D practitioners who are serious about their craft and hungry for substance.
L&D Practitioners
🎙
L&D Disrupt
David James
The most practitioner-focused L&D podcast available. James talks directly about AI in training, behavior change measurement, and the gap between learning investment and business impact. Every episode is a potential source quote for the Benchmarks section.
L&D Practitioners
🎙
How I Built This
Guy Raz
Founder stories told through the lens of failure, identity, and reinvention. The parallels to the Identity Reset framework are everywhere — every episode is a case study in the personal-level application of the four frameworks. Terry should be a guest.
Founder Stories
The Enterprise AI Brief
Curated by Terry Rice — AI Performance Systems Architect, General Assembly Faculty · ISSUE 001 — APR 8, 2026
Trusted by teams at
Google · EY · Berkshire Hathaway · Amazon
This Week
Benchmarks
Case Studies
Human Side
Team Challenges
Contribute
// This Week
Three Signals That Matter
Every edition is built around one theme. This week: the gap between AI activity and AI fluency, and why most L&D programs are measuring the wrong thing.
This edition's theme: Adoption vs. Theater — the difference between organizations genuinely changing how work gets done and organizations performing AI adoption for leadership.
01
WORKFORCE
91% of employees completed AI training in 2025. Only 11% changed how they actually work. — Gartner, Q1 2026
This is Kirkpatrick Level 3 failure at industrial scale. Organizations are measuring training completion and calling it adoption. The 80-point gap between completion and behavior change is not a training problem. It is a measurement problem dressed up as a training problem.
3 actions to take this week
01
Audit your most recent AI training program against Kirkpatrick Level 3. If you cannot name three workflows that changed because of it, the program failed, regardless of what the completion report says.
02
Replace completion-rate reporting with a 30-day behavior change survey. Ask employees to name one specific thing they do differently because of the training. One real example is worth 100 completion certificates.
03
Run this week's Team Challenge: the AI Adoption Self-Assessment. Four questions, answered privately, then discussed as a group. The gap it reveals is the data you actually need for your next budget conversation.
02
TECHNOLOGY
7 in 10 employees who became proficient AI users learned by doing, not by watching. Only 3 in 10 cited formal training as their primary path. — LinkedIn Learning, 2026
The research keeps saying the same thing and L&D keeps ignoring it: adults learn when the content is immediately applicable to a real problem. Passive video training is the wrong format for the wrong moment. The organizations closing the adoption gap are building, not watching.
3 actions to take this week
01
Identify the two or three job functions that touch the most repetitive, high-volume tasks. Those are your highest-ROI AI adoption targets. Start there, not with a company-wide rollout.
02
Build one 30-minute learn-by-doing session for each target function. Participants build something real during the session. The output is the learning artifact, not a quiz score.
03
Add a 30-day implementation commitment at the end of every training session. Participants commit to one specific workflow change. Manager follows up at day 14 and day 30. No follow-up, no change.
03
LEADERSHIP
Only 23% of CHROs report that senior leadership regularly uses AI tools in their own work, compared to 61% of frontline employees. — Josh Bersin Co., Mar 2026
Leadership is mandating adoption it has not achieved itself. Employees watch what executives do, not what they say. If the CHRO is not using AI, the message sent to the entire organization is that AI is for other people, regardless of what the memo says.
3 actions to take this week
01
Before your next executive briefing on AI adoption, ask every senior leader to demonstrate one AI tool they used this week. If they cannot, that is your priority training investment, not another employee program.
02
Build one executive-specific session: 60 minutes, one real problem, one AI-built solution. Lead by example before mandating by memo. That sequence is the difference between a culture that adopts and one that performs adoption.
03
Add AI fluency to leadership performance reviews. Not AI completion rates, but specific behaviors: workflows changed, tools adopted, team members coached. Measure what matters and you change what gets done.
// Benchmarks
Verified Data
Sourced, dated, and delta-tracked. Every number includes where it came from and how it changed from the prior period.
// FREE DOWNLOAD
Corporate AI Training Benchmark Report — Q1/Q2 2026
The full report goes deeper on every section: spending benchmarks, adoption rates by industry, training formats, the self-assessment framework, what leaders are getting wrong, and what you should actually be paying. Free to download and share.
91% of employees completed AI training last year. 11% changed how they work because of it. That 80-point gap is not a technology problem or a budget problem. It is a strategy problem. Organizations are optimizing for the metric that is easy to report — completions — and ignoring the one that actually matters: behavior change.
WHAT TO SHARE TO LOOK SMART
"91% of employees completed AI training. 11% changed how they work. That's the whole problem in two numbers."
"Only 23% of CHROs say their senior leaders regularly use AI. You can't mandate what you're not modeling."
"When someone says they have 75% AI adoption, ask which definition. McKinsey puts it at 34% using a definition that actually means something."
"Spending on AI training is up 38% year over year. The failure rate is still 70%. More money is not the answer."
The root cause hiding in the data: Only 23% of CHROs say their senior leaders regularly use AI tools in their own work. The people mandating AI adoption are not doing it themselves. That is not a cultural detail. It is the reason most AI training investments fail to produce behavior change: employees will always watch what leadership does, not what leadership says. The solution is not another training program. It is leaders using AI publicly, visibly, and imperfectly in front of their teams.
91%
Employees who completed AI training in 2025
+14pp YoY · Gartner — Jan 2026
11%
Employees who changed how they work because of AI training
+2pp YoY · Gartner — Jan 2026
27%
Organizations that feel effective at building employee AI skills
+5pp YoY · Josh Bersin — Feb 2026
23%
CHROs who say senior leadership regularly uses AI tools in their own work
+8pp YoY · Josh Bersin — Mar 2026
$4,100
Average annual investment per employee in AI training and tools
+38% YoY · ATD — Q1 2026
70%
AI change initiatives that fail to meet their stated objectives
-3pp YoY · McKinsey — Jan 2026
// Conflicting Signals
When two credible sources disagree on the same metric, both are shown. You decide which number applies to your organization.
⚡
CONFLICTING SIGNAL — AI ADOPTION RATE IN THE WORKFORCE
McKinsey — March 2026
34%
of workers regularly use AI for substantive work tasks
Microsoft / LinkedIn — February 2026
75%
of knowledge workers have used AI at work in the past six months
Why they differ: McKinsey defines "regular use" as at least weekly for tasks that meaningfully affect output. Microsoft and LinkedIn count any use within a six-month window. Both numbers are accurate. They are measuring completely different behaviors. McKinsey is measuring what L&D should be targeting. Microsoft is measuring what most executives are reporting to their boards. The 41-point gap between those two numbers is where most AI adoption strategies are currently lost.
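The definitional gap is easy to see in code. The sketch below is purely illustrative — the usage data is invented and the thresholds are assumptions standing in for the two definitions, not McKinsey's or Microsoft's actual methodology — but it shows how the same usage log produces very different "adoption rates" depending on which definition you apply:

```python
# Illustrative only: two definitions of "AI adoption" applied to the
# same (made-up) usage log produce very different rates.
from datetime import date, timedelta

today = date(2026, 3, 1)

# Hypothetical per-employee usage: list of (date, substantive) events.
usage_log = {
    "emp_a": [(today - timedelta(days=d), True) for d in (2, 9, 16, 23)],  # weekly, substantive
    "emp_b": [(today - timedelta(days=150), False)],                       # one casual use, months ago
    "emp_c": [(today - timedelta(days=40), False), (today - timedelta(days=70), True)],
    "emp_d": [],                                                           # never used AI
}

def used_in_last_six_months(events):
    """Broad definition: any use within ~180 days counts."""
    return any((today - day).days <= 180 for day, _ in events)

def regular_substantive_user(events, weeks=4):
    """Strict definition: a substantive use in each of the last N weeks."""
    return all(
        any(0 <= (today - day).days - w * 7 < 7 and substantive
            for day, substantive in events)
        for w in range(weeks)
    )

broad = sum(used_in_last_six_months(e) for e in usage_log.values())
strict = sum(regular_substantive_user(e) for e in usage_log.values())
n = len(usage_log)
print(f"Broad definition:  {broad}/{n} adopted")   # 3/4
print(f"Strict definition: {strict}/{n} adopted")  # 1/4
```

Same four employees, same log: the broad definition reports 75% adoption, the strict one 25%. Which function you run is a strategy decision, not a data decision.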
// Case Studies
Enterprise Implementations
Real organizations, documented approaches. Each case study draws from publicly available information on how these companies approached AI adoption.
CASE 01 — FINANCIAL SERVICES
JPMorgan Chase
BEHAVIOR CHANGE
18 MONTHS
The Situation
JPMorgan deployed AI tools to 60,000+ employees in 2024 with high completion rates and near-zero behavior change. Leadership realized they were tracking the wrong metric. Employees completed training and returned to their existing workflows within days.
What They Did
Shifted from training programs to workflow integration sprints. Each team identified one high-volume repetitive task, built an AI workflow for it, measured time savings over 30 days, and shared results across teams. Peer-to-peer proof replaced top-down mandates.
The Result
Document review tasks became faster, and teams that completed the sprint showed significantly higher ongoing AI usage at 90 days than employees from prior training-only cohorts. The difference was making rather than watching.
What You Can Do
Run a 30-minute workflow audit with your team this week. Pick one task everyone does repeatedly. Build an AI workflow for it together. Measure the time saved over 30 days. One concrete result beats a hundred completion certificates.
The Wider Lens
This mirrors the shift from classroom driver's ed to actual driving practice. Decades of research showed that classroom hours had almost no correlation with driving safety — logged hours behind the wheel did. JPMorgan's sprint model is the corporate equivalent of putting people behind the wheel on day one rather than showing them slides about steering.
CASE 02 — RETAIL
Walmart
SCALE
12 MONTHS
The Situation
Walmart needed to upskill 1.6 million associates across 4,600 stores simultaneously. Traditional LMS-based training was showing 73% completion and near-zero behavior change. The scale of the problem made individual coaching impossible.
What They Did
Built an AI-powered simulation platform where associates practiced real customer scenarios with AI role-playing as the customer. Practice replaced presentation. Associates completed 8-minute sessions on their phones between shifts.
The Result
Associates who completed multiple simulation sessions showed significantly higher on-the-job skill application in the weeks following, verified via manager observation. Customer satisfaction scores improved in pilot stores over 90 days. The pattern held: practice produces behavior change, completion rates do not.
What You Can Do
Build a 10-minute practice scenario for your team's most common high-stakes conversation — a difficult stakeholder, a skeptical budget meeting, a performance discussion. Use Claude to role-play the scenario before the real thing. Practice is not rehearsal. It is the work.
The Wider Lens
The Walmart model mirrors the shift from surgery residency lectures to surgical simulators. Research consistently shows that simulation-based training produces measurably better outcomes than observation-based training — in medicine, in aviation, in retail. The pattern is not industry-specific: deliberate practice beats passive exposure every time, regardless of the skill being built.
// Human Side of Change
The Psychology Behind the Gap
One behavioral science concept per edition. Always under 200 words. Always tied to a decision you can make this week.
38%
THE FINDING
of employees who resist AI tools cite fear of becoming irrelevant as their primary concern — not difficulty using the tool.
Prosci ADKAR Research — Q4 2025
// The Research
Social identity theory tells us that when employees have built their professional identity around a specific skill — writing, analysis, design, legal research — AI tools that automate that skill feel like a threat to who they are, not just how they work. This is not irrationality. It is a predictable response to identity threat that no amount of change management messaging will override.
// Why It Matters for Adoption
The way AI tools are positioned determines whether people adopt them or resist them. When AI is positioned as something that does your job, it triggers identity defense. When it is positioned as something that makes your expertise more valuable, people move toward it. Same tool. Same people. Completely different result, because the adoption barrier was never capability. It was identity.
The gap between knowing what AI can do and being willing to use it is not about tools or skills. It is about whether using the tool feels like it threatens who you are. That gap does not close with training programs. It closes with time, safety, and evidence that the new behavior is an addition, not a replacement.
— Enterprise AI Brief editorial, drawing on Prosci, Kotter, and Social Identity Theory
"
// BRING THIS TO YOUR NEXT LEADERSHIP MEETING
Are we positioning AI as something that replaces what our people do, or something that makes what they are already good at more valuable? That question is not cosmetic. It determines whether adoption actually happens, or whether you spend another quarter measuring completion rates that don't translate into changed behavior.
// How People Actually Learn
Why One Format Fails Most of Your Room
David Kolb's Experiential Learning Cycle, a foundational model in corporate L&D, identifies four stages people need to move through for learning to produce real behavior change. Most AI training programs hit one or two. The result is the 80-point gap between completion and behavior change documented in the benchmarks section.
STAGE 1 — HEAR IT
Concrete Experience
A live demo or explanation that creates the "I didn't know this was possible" moment. This is where most training starts and stops. Without the stages that follow, nothing sticks.
COVERED BY: Lectures, demos, videos
STAGE 2 — WATCH IT
Reflective Observation
Seeing others do it — peers, not presenters — creates social proof that changes what feels possible. This stage is where group showcases and team sharing earn their ROI.
COVERED BY: Team showcases, peer sharing
STAGE 3 — BUILD IT
Active Experimentation
Building something with their own hands is the anchor of the learning cycle. Gartner's data shows up to 70% of skills are never applied without this stage. No output, no change.
COVERED BY: Hands-on challenges, sprints
⚠️
// THE MISSING STAGE — WHY MOST AI TRAINING FAILS
Most corporate AI training delivers Stage 1 (demo/lecture) to a room of people who cannot retain learning without Stage 3 (building). Kinesthetic learners — people who only internalize something by doing it with their hands — make up a significant portion of most corporate rooms. Standard training formats have no mechanism for them at all, because real hands-on learning requires a deliverable. You cannot fake it with a quiz. The Team Challenges in this tool are designed around the full four-stage cycle. Each challenge is labeled with which stage it primarily addresses.
// SEE IT IN ACTION
The Maverick Lab is designed around all four stages in sequence.
Live demo. Team build. Group showcase. Leadership builds alongside. Every stage covered in one session.
Most L&D teams have never mapped their training programs against how people actually learn. This 30-minute exercise does it. Bring your last AI training program. Map every element against the four stages. See exactly where your adoption gap lives.
// LEARNING GAP AUDIT — 30 MIN — L&D TEAM
How to run it
APPLY IT — FULL CYCLE
1
Pull up the last AI training program your organization ran. Have the agenda, format, and completion data ready.
2
Map each element of the program to one of four stages: Hear It (demo/lecture), Watch It (peer observation), Build It (hands-on deliverable), Apply It (real work integration with follow-up). Be honest — most programs sit almost entirely in stage one.
3
Paste the prompt below into Claude with your audit results. Review the output together and identify one change to make before your next program launches.
// COPY THIS PROMPT INTO CLAUDE
Here is the structure of our most recent AI training program: [describe the format, length, activities, and any follow-up]. Using Kolb's Experiential Learning Cycle, audit this program against four stages: Concrete Experience (hands-on doing), Reflective Observation (watching peers, sharing results), Abstract Conceptualization (understanding the why), and Active Experimentation (applying to real work with follow-up). For each stage, tell me whether our program addresses it, partially addresses it, or skips it entirely. Then tell me the single highest-leverage change we could make to the next iteration that would most improve actual behavior change — not completion rates.
What changes: L&D teams see for the first time exactly which learning stages their programs skip — and why their completion numbers and behavior change numbers tell completely different stories. The audit takes 30 minutes. The results change how they design every program after it.
// ABOUT THE CURATOR
Terry Rice
AI Performance Systems Architect, General Assembly Faculty
Terry has designed and delivered corporate AI training for Google, EY, Berkshire Hathaway, Walmart, and Estée Lauder. As a longtime General Assembly faculty member, he has trained thousands of professionals on applying emerging technology to real business problems. Outside of work he coaches youth football and is a father of five. The Enterprise AI Brief applies that same approach in written form: every edition is built around what actually changes behavior, not what sounds good in a deck. He works 20 hours a week by design, from Brooklyn.
Every challenge uses Claude. Every prompt is copy-paste ready. Some people learn by watching, some by doing — these are designed so there's something for both. Run them in order or start wherever makes sense for your team.
01
The meeting prep assistant
STARTER
15 MIN
BETTER MEETINGS
BUILD IT — Active Experimentation
02
The email audit
STARTER
15 MIN
COMMUNICATION
WATCH IT — Reflective Observation
03
The process audit
BUILDER
30 MIN
REDUCE COSTS
BUILD IT — Active Experimentation
04
The objection handler
BUILDER
30 MIN
INCREASE REVENUE
BUILD IT — Active Experimentation
05
The AI adoption self-assessment
ADVANCED
60 MIN
CULTURE SHIFT
WATCH IT — Reflective Observation
06
The workflow redesign sprint
ADVANCED
60 MIN
REDUCE COSTS
APPLY IT — Full Cycle
07
The content repurposing machine
BUILDER
30 MIN
INCREASE REVENUE
BUILD IT — Active Experimentation
08
The Maverick Matrix team map
CULTURE
20 MIN
TEAM DIAGNOSIS
WATCH IT — Reflective Observation
09
The learning gap audit
CULTURE
30 MIN
L&D STRATEGY
APPLY IT — Full Cycle
Challenge 01 — The meeting prep assistant
STARTER
15 MIN SOLO
Before your next meeting, give Claude the agenda and attendee list. It generates smart questions, surfaces potential tension points, and tells you one thing you should know before you walk in. Run this once and you will run it before every important meeting going forward.
// HOW TO RUN IT
1
Find a meeting on your calendar in the next 48 hours that actually matters.
2
Copy the agenda, attendee names and roles, and any context you have about the situation.
3
Open Claude and paste the prompt below with your details filled in.
4
Screenshot the output and bring it to your meeting.
// COPY THIS PROMPT INTO CLAUDE
I have a meeting with [names and roles] about [topic]. The agenda includes [list the agenda items]. Generate: three smart questions I should ask in this meeting, two things that might create friction or tension, and one thing I should research or know before I walk in that I might not have thought about.
What changes: You stop walking into meetings unprepared. One prompt, five minutes, and you show up as the most prepared person in the room every time.
Challenge 02 — The email audit
STARTER
15 MIN SOLO
Paste your last five outgoing work emails into Claude and ask it to identify your communication patterns, flag anything that might be misread, and suggest one change to each. Most people discover they deflect more than they resolve, or hedge more than they decide.
// HOW TO RUN IT
1
Go to your sent folder. Find five recent emails about something that actually mattered.
2
Copy them into a document. Remove any confidential details if needed.
3
Paste the prompt below into Claude with your emails included.
4
Read the output privately first. Then decide if it's worth sharing with a trusted peer.
// COPY THIS PROMPT INTO CLAUDE
Read these five emails I sent recently: [paste emails here]. Tell me: what communication patterns do you notice, what language might create friction or confusion with the recipient, and what one specific change to each email would make it land better. Be direct. Do not soften the feedback.
What changes: You see your communication the way others receive it — not the way you intended it. That gap is where most relationship friction lives.
Challenge 03 — The process audit
BUILDER
30 MIN — TEAM OF 2-5
Map one process your team runs weekly. Describe every step, every handoff, every decision point. Then ask Claude to redesign it assuming AI is available at every step. The before-and-after comparison is what people share. The hidden risks section is the one that changes how teams think.
// HOW TO RUN IT
1
Pick one process everyone finds friction-heavy. Map every step, owner, and handoff.
2
One person pastes the prompt below into Claude with your process described.
3
Read the redesign as a team. Discuss: what would it take to implement the top two changes this quarter?
// COPY THIS PROMPT INTO CLAUDE
Here is a process my team runs regularly: [describe every step, who owns each step, where handoffs happen, and where decisions get made]. Do three things: First, identify where AI could reduce time or eliminate steps entirely. Second, tell me where human judgment is genuinely irreplaceable and why. Third, show me what this process would look like redesigned from scratch assuming AI is available at every step. Estimate the time savings if we made the top three changes.
What changes: Teams stop seeing AI as a tool they add to existing processes. They start seeing it as a reason to redesign the process entirely.
Challenge 04 — The objection handler
BUILDER
30 MIN — TEAM OF 2-5
List the 10 objections you hear most often. Paste them with your current responses. Claude will improve each and identify which ones you're deflecting rather than resolving. That distinction hits harder than expected every time.
// HOW TO RUN IT
1
As a team, write down the 10 objections you hear most and your current response to each.
2
Paste the prompt below into Claude with your objections and responses included.
3
Pay particular attention to which objections Claude says you're deflecting rather than resolving.
4
Rewrite the top three responses together using Claude's feedback.
// COPY THIS PROMPT INTO CLAUDE
Here are the 10 objections we hear most often, with our current responses to each: [list objections and responses]. Do two things: First, improve each response to actually resolve the objection rather than just address it. Second, identify which objections we are deflecting rather than resolving, and explain the difference. Be direct about which responses are weak.
What changes: Conversations get sharper immediately. The team stops rehearsing responses that sound good and starts having ones that actually move people.
Challenge 05 — The AI adoption self-assessment
ADVANCED
60 MIN — FULL TEAM
The most important challenge in the library. Each team member answers four questions independently, then the team shares and discusses. The gap between where people think they are and where they actually are is the data that changes how L&D leaders think about their next training investment.
// THE FOUR QUESTIONS — SEND TO YOUR TEAM FIRST
// COPY QUESTIONS FOR YOUR TEAM
Q1: In the last 30 days, how many specific work outputs can you point to that were different because of AI? (Zero / One or two / Three or more)
Q2: Has your process for approaching a type of work changed because of AI, or do you use AI to do the same process faster? (Same process faster / Somewhat changed / Fundamentally redesigned)
Q3: Can you name one outcome in the last 90 days that was measurably better because of AI use — not AI training? (No / I think so but can't point to it / Yes, specifically)
Q4: When you encounter a new problem at work, is AI the first place you go or something you remember to try later? (Remember later / Sometimes first / Always first)
// THEN PASTE RESULTS INTO CLAUDE
// ANALYSIS PROMPT
Here are the aggregated results from our team AI adoption self-assessment: [paste tally of responses]. Analyze these results and tell me: where is our biggest adoption gap, what does this suggest about our current AI training approach, and what is the single highest-leverage change we could make in the next 30 days to move the team forward. Be specific and direct.
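If your team submits answers in a spreadsheet or form export, a short script can produce the aggregated tally the prompt asks for. This is an optional, illustrative sketch — not part of the challenge itself — and the answer strings and response structure below are assumptions based on the four questions above:

```python
# Illustrative sketch: turn raw self-assessment answers into the tally
# the analysis prompt asks for. Answer strings are assumptions matching
# the four questions above.
from collections import Counter

# Each dict is one team member's (anonymous) answers.
responses = [
    {"Q1": "Zero", "Q2": "Same process faster", "Q3": "No", "Q4": "Remember later"},
    {"Q1": "One or two", "Q2": "Somewhat changed", "Q3": "I think so but can't point to it", "Q4": "Sometimes first"},
    {"Q1": "Three or more", "Q2": "Fundamentally redesigned", "Q3": "Yes, specifically", "Q4": "Always first"},
    {"Q1": "Zero", "Q2": "Same process faster", "Q3": "No", "Q4": "Remember later"},
]

def tally(responses):
    """Count answers per question, ready to paste into the analysis prompt."""
    return {
        question: Counter(r[question] for r in responses)
        for question in ("Q1", "Q2", "Q3", "Q4")
    }

for question, answer_counts in tally(responses).items():
    summary = ", ".join(f"{answer}: {count}" for answer, count in answer_counts.items())
    print(f"{question}: {summary}")
```

Paste the printed tally straight into the analysis prompt. Keeping individual answers anonymous and sharing only the counts preserves the private-then-discuss format the challenge depends on.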
What changes: The conversation shifts from "are we using AI" to "is AI actually changing how we work." That is the right question. Most teams have never asked it this directly.
Challenge 06 — The workflow redesign sprint
ADVANCED
60 MIN — FULL TEAM
Pick one process your team runs every week. Map it manually first — every step, every handoff, every decision. Then compare it to Claude's AI-native redesign. The gap between what you mapped and what Claude proposes is the opportunity your team has been walking past.
// HOW TO RUN IT
1
Pick one weekly process that feels heavier than it should be. Map it on a whiteboard or shared doc.
2
Paste the prompt below into Claude with your full process described.
3
Compare the two versions side by side. Vote on which changes to implement in the next sprint.
// COPY THIS PROMPT INTO CLAUDE
Here is a weekly process my team runs: [describe every step, who owns each one, how long each step takes, and where the handoffs happen]. Redesign this process from scratch assuming AI is available at every step. Show me the new process step by step. Estimate total time savings per week. Identify the three changes that would deliver the most value with the least disruption. Tell me which parts of the original process you would eliminate entirely and why.
What changes: Teams stop thinking about AI as a productivity tool and start thinking about it as a redesign tool. That shift produces real behavior change.
Challenge 07 — The content repurposing machine
BUILDER
30 MIN — TEAM OF 2-5
Paste one piece of long-form content — a strategy doc, a report, a presentation — and Claude turns it into five formats simultaneously. Teams almost always discover they have far more usable content locked in existing documents than they realized.
// HOW TO RUN IT
1
Find one piece of content your team worked hard on but under-distributed.
2
Remove confidential details, then paste the prompt below into Claude with your content.
3
Review all five outputs as a team. Assign one person to publish or send one before end of day.
// COPY THIS PROMPT INTO CLAUDE
Here is a piece of content my team created: [paste content]. Turn it into five formats: First, a one-paragraph executive summary a busy CHRO would read in 60 seconds. Second, three LinkedIn posts with three different angles on the same content. Third, five talking points for a team meeting or presentation. Fourth, a set of five FAQ questions and answers based on the content. Fifth, a two-sentence email hook that would make a senior leader want to read the full document.
What changes: Distribution becomes the bottleneck, not creation. One document becomes five touchpoints. The ROI on existing work multiplies immediately.
Challenge 08 — The Maverick Matrix team map
CULTURE
20 MIN — FULL TEAM
Each person identifies which quadrant they currently sit in — not where they aspire to be, but where they actually are right now. Share with the team. Ask: where does the team as a whole sit? What would it take to move one quadrant to the right?
// THE FOUR STATES
G
Ghost: Burned out, checked out, showing up but not present. The work has emptied you out.
S
Soldier: Active and hitting metrics but not asking if the work is right for you. Moving without meaning.
V
Visionary: Knows what's possible but can't seem to move toward it. Vision without velocity.
M
Maverick: Work feels like an expression of who you actually are. This is the goal state.
// ANALYSIS PROMPT — PASTE RESULTS INTO CLAUDE
Here are the results from our team Maverick Matrix self-assessment: [describe how many people identified as Ghost, Soldier, Visionary, or Maverick, and any patterns you noticed in the discussion]. Analyze what this distribution suggests about our team's current state, what the highest-leverage change would be to move people toward Maverick, and what leadership behaviors might be keeping people stuck in their current state.
What changes: Teams finally have the conversation they've never had directly. The distribution surprises leadership almost every time, and the discussion that follows is often the most honest the team has had in months.
Challenge 09 — The learning gap audit
CULTURE
30 MIN — L&D TEAM
APPLY IT — FULL CYCLE
Most AI training programs fail because they deliver one format to a room of people who learn in four different ways. This challenge helps your L&D team diagnose exactly which learning stages your current program addresses — and which ones it skips. The gaps you find in this session are the reason your adoption numbers are not moving.
// HOW TO RUN IT
1
Pull up the last AI training program your organization ran. Have the agenda, format description, and completion data ready.
2
As a team, map each element of the training to one of four stages: Hear It (demo/lecture), Watch It (peer observation/sharing), Build It (hands-on deliverable), Apply It (real work integration with follow-up). Be honest — most programs are almost entirely in the first stage.
3
Paste the prompt below into Claude with your audit results. Review the output together and identify one change you can make to your next program before it launches.
// COPY THIS PROMPT INTO CLAUDE
Here is the structure of our most recent AI training program: [describe the format, length, activities, and any follow-up]. Using Kolb's Experiential Learning Cycle, audit this program against four stages: Concrete Experience (hands-on doing), Reflective Observation (watching peers, sharing results), Abstract Conceptualization (understanding the why), and Active Experimentation (applying to real work with follow-up). For each stage, tell me whether our program addresses it, partially addresses it, or skips it entirely. Then tell me the single highest-leverage change we could make to the next iteration that would most improve actual behavior change — not completion rates.
What changes: L&D teams see for the first time exactly which learning stages their programs skip — and why their completion numbers and their behavior change numbers tell completely different stories. The audit takes 30 minutes. The results change how they design every program after it.
Your learning path — from first challenge to full cycle
4 LEVELS
1
Solo Explorer
Try one starter challenge alone. Build something real. See what's possible without waiting for a program to tell you to start. Stage covered: Build It.
15 MIN
2
Team Spark
Bring a challenge to your team. Run the Maverick Matrix map or the adoption self-assessment. Start a conversation that surfaces what you've been walking past. Stage covered: Watch It.
30 MIN
3
Team Builder
Run an advanced session. Redesign a real workflow. Someone's job changes. You ask what else is possible. Stage covered: Apply It.
60 MIN
4
Maverick Lab
Full-day facilitated workshop built around all four stages of Kolb's cycle. Real AI tools built for real business outcomes. Every person in the room leaves with something they built and something they believe about themselves that they didn't believe when they walked in. All four stages. One session.
FULL DAY — $15K-$25K
What did your team build?
The best results get featured in the next edition with full attribution. Takes about two minutes to complete.
The Enterprise AI Brief features practitioners with specific points of view, not general takes. If you are doing something real with AI in an organization and have something specific to say about it, we want to hear from you.
// WHAT WE'RE LOOKING FOR
Has your company done something worth sharing?
A real AI implementation that changed how your team works. Not a pilot. Not a plan. Something that shipped and produced a result — good or bad. Both are useful.
What are you seeing in the field right now?
Patterns across organizations. What's working that nobody talks about. What's failing that everyone pretends is working. What the data says versus what leaders actually do.
Where do you disagree with the consensus?
The research says one thing. Your experience says another. We're interested in that tension, especially when you can point to something specific that explains the gap.
What does AI adoption actually look like from the inside?
The view from inside an organization is different from the view in the research. If you are an L&D director, CHRO, or VP of People navigating this in real time, your perspective is the one our readers want.
// WHO WE FEATURE
L&D Leaders
Directors of Learning, CLOs, and VP L&D who are building AI training programs and measuring what actually changes.
HR and People Leaders
CHROs, VPs of People, and HR directors navigating AI adoption policy, workforce readiness, and the gap between mandate and behavior.
Practitioners with Results
Anyone inside an organization who shipped something real with AI and can speak to what worked, what didn't, and why.
// WHAT TO EXPECT
We review every submission. If your perspective is a fit for an upcoming edition, we'll reach out directly to develop it. We don't publish general takes. We're looking for something specific: a real situation, a real decision, a real result. We'll help you shape it. You'll get full attribution and a link to your LinkedIn and company. We don't pay contributors, but the audience is exactly who you want to be in front of.
Ready to contribute?
Takes about five minutes. Tell us who you are, what you're seeing, and what you'd want to say. We'll take it from there.