
From Insight to Influence – Supporting AI Leadership at the Top

Updated: Jul 31




Guest: Renée B. Lahti, Linei Kōkua Advisory


As we continue our series 'Unleashing the Power of Female Leadership in Generative AI,' we'll revisit our March 18th discussion on 'The C-Suite's and Boards' Role in Leading AI Transformation' for a deeper exploration.


Renée B. Lahti of Linei Kōkua Advisory will share tips and guidance on how influential direct reports can help C-Suites and Boards embrace AI with confidence.


In Topic #2 of this series, "The C-Suite's and Boards' Role in Leading AI Transformation," we explored how C-suite leaders and Boards must embrace AI with clarity, courage, and governance foresight. Now, let's focus on those just outside the spotlight—senior advisors, Chiefs of Staff, and other direct reports—who work behind the scenes and often shape the executive approach to AI readiness.


Whether you're a Chief of Staff, senior advisor, or other strategic right hand, your proximity, perspective, and trust give you a unique opportunity to influence how AI is understood, governed, and adopted at the top. While your role may not always be front and center, it is absolutely pivotal. And while your work may happen behind the scenes, the clarity, confidence, and momentum you help create will not go unnoticed—it will help define both your legacy and that of your leadership team.



Helping senior executives and board members move from AI hesitation to AI readiness requires finesse. The goal isn’t to teach or preach—it’s to align AI discussions with their priorities: risk mitigation, long-term growth, and sustained competitive advantage.


Just as C-level decision-makers need strategic clarity to lead AI adoption (as explored in Topic #2), influential direct reports play a critical role in translating vision into practice. The following five techniques are designed to support that translation—ensuring AI readiness is not a top-down directive, but a well-supported, cross-functional evolution—and to help you spark meaningful AI engagement at the top without ever undermining authority.


=============================================================



The “Curated Curiosity” Briefing


Objective: Provide AI insights in a non-threatening, high-level format that resonates with Board/C-suite priorities.

  • Compile a one-page briefing with AI trends specific to your industry, focusing on risk, opportunity, and governance.

  • Structure the document using the “So What?” approach:

    • What’s Happening? 

      Example: AI is transforming [industry] by automating [process].


    • Why It Matters? 

      Example: Our competitors are integrating AI into customer engagement, improving retention by 20%.


    • What Should We Consider? 

      Example: Should we explore AI-driven analytics to enhance decision-making in [area]?


  • Deliver it informally and in a storytelling way, such as in a weekly strategy meeting, a one-on-one discussion, or over lunch, rather than as a formal "AI education session," which might feel remedial to the recipient.


Why this is helpful:

  • Senior leaders stay informed without feeling pressured to “catch up”.

  • It respects their time while positioning AI as a business enabler rather than a tech trend.


=============================================================


The “Executive Curiosity Walk” – AI in Their Daily Work


Objective: Help executives personally experience AI without making it feel like a tutorial.

  • Find an AI tool already in use within your company (e.g., chatbots, predictive analytics, generative AI for reports).

  • Invite them to interact with it casually, framing it as a business tool, not a lesson.

    • “Hey, I was using this AI-driven forecasting model, and it’s giving us insights we wouldn’t normally see. Want to take a look?”

  • If they’re resistant, ask them for a business question and let AI provide an answer.

    • “If you were wondering how our competitors leverage AI, let me show you what this tool found in 10 seconds.”


Why this is helpful:

  • AI becomes real, not theoretical—it’s already in their world.

  • Removes pressure—they learn through experience rather than formal education.


=============================================================


The “What’s the Risk?” Approach – AI Governance as a Risk Mitigation Discussion


Objective: Help executives and board members engage with AI as a matter of risk oversight, fiduciary responsibility, and legal compliance — not just technology deployment.

  • Instead of asking, “What AI tools are we using?” shift the focus by suggesting questions such as:

    • “Have we assessed the legal, ethical, and reputational risks tied to AI use across the organization?”

    • “What controls are in place to detect biased or non-compliant outputs from generative AI tools?”

    • “If an AI-generated decision is challenged in court or by a regulator, do we have documentation on how that decision was made and by whom?”

    • “Are we prepared for upcoming AI-specific regulations such as the EU AI Act or state-level U.S. privacy laws?”


  • Provide relatable scenarios:

    • (Audit/compliance example) Imagine a marketing team uses a generative AI tool to create customer-facing materials. A hallucinated claim or AI-generated bias could expose the company to legal liability, brand damage, or regulatory inquiry. Has the company’s audit committee reviewed internal controls and approval workflows to prevent such risks?

    • (Legal example) AI-generated outputs (e.g., product recommendations, safety alerts, code written by generative AI tools) may reflect hidden algorithmic bias or flawed logic. If developers are using AI to generate code for a commercial product, who is accountable for verifying the code’s accuracy, compliance, and safety? If a product fails, causes harm, or exposes sensitive data due to AI-generated code, can the company’s legal team trace how that code was developed, whether appropriate safeguards and reviews were in place, and defend the company against potential product liability or negligence claims?

    • (Finance example) AI-powered forecasting tools are increasingly used for budgeting, revenue projections, and risk modeling. But if a company’s finance team relies on AI-generated outputs without understanding how the models are trained or whether assumptions are sound, is that introducing hidden volatility into financial reporting? If those forecasts inform investor guidance, capital allocation, or strategic M&A decisions, can the CFO stand behind the numbers if challenged by auditors, regulators, or shareholders? Is there sufficient oversight, documentation, and accountability for AI-derived financial insights being validated and used in decision-making?


  • As discussed in Topic #3, suggest forming a cross-functional AI Risk & Governance Task Force including legal, audit, compliance, procurement, IT, and business unit leads. This group can:

    • Develop internal AI usage guidelines.

    • Identify high-risk AI use cases across departments.

    • Review third-party AI vendor contracts for AI-related liability, IP rights, and data protection.

    • Define a protocol for AI-related disclosures and incidents.

    • Implement an AI Usage Policy aligned with emerging regulations and ethical standards.

    • Launch AI Awareness & Training Programs:

      • Mandatory training for employees (similar to cybersecurity protocols).

      • Role-based certifications for high-exposure functions.


Why this is helpful:

  • Shifts the conversation from technology to corporate governance and liability responsibility.

  • Senior executives like audit committee members and CLOs are engaged to lead from their areas of strength. They don’t need to be AI experts—what they need is visibility, accountability structures, and a clear risk posture.


=============================================================



The “Reverse Mentor” Approach – AI Insights from External Thought Leaders


Objective: Allow executives to hear from their peers or industry leaders to drive interest.

  • Casually share an article or interview where an executive they respect discusses AI.

    • “McKinsey just published a piece on how [Fortune 500 CEO X] is leading AI initiatives. Thought you’d find this perspective interesting.”

  • Invite a respected peer (another CEO, board member, or industry leader) for an informal AI discussion over lunch or a fireside chat.


Why this is helpful:

  • Senior executives trust their peers more than reports from their own teams.

  • Makes AI mainstream—not just a tech or IT conversation.


=============================================================


The “AI in 10 Words” Challenge


Objective: Help executives develop a simple, confident understanding of AI.

  • Ask them this challenge question:

    “If, during an interview, you had to explain AI’s role in our company in 10 words or less, what would you say?”

  • If they struggle, offer a simple, business-aligned explanation.

    • Example 1: “AI helps us make faster, smarter decisions with better data.”

    • Example 2: “AI improves forecasting, reduces risk, and enhances operational decision-making.”

    • Example 3: “AI is a strategic tool—we govern it like any asset.”

  • Reinforce that understanding AI doesn’t mean mastering AI—it means knowing its impact.

Why this is helpful:

  • Boosts executive confidence in discussing AI.

  • Eliminates fear of looking uninformed.


=============================================================

The Key to These Techniques' Success...

The most effective approach for a Chief of Staff or senior advisor is to make AI curiosity natural, strategic, and non-threatening:

✅ Make it about business value, not technology.

✅ Tie AI discussions to risk mitigation, governance, and competitive advantage.

✅ Use curiosity as the gateway, not pressure or fear.

 

Final Thought: Transforming Hesitation into AI Leadership Readiness

Engaging with AI doesn’t mean mastering the technology—it means asking the right questions, understanding risks, and leveraging AI strategically. This post builds on the foundation of our ongoing series on generative AI leadership. If you’re looking to better understand the executive landscape or the functional requirements for success, revisit the earlier posts in this series.


Together, these posts offer a 360° view of what GenAI transformation really requires—strategic, cross-functional, and human-centered.

 

Call to Action:

If you're in a role that quietly—but powerfully—shapes leadership thinking, your voice matters in the AI conversation. Which of these techniques resonates most with how you support your leadership team?


👇 Reach out to me and share your thoughts or experiences—I’d love to learn from you!

 
