The Tool That Quietly Becomes Your Workflow

No one announces, “We are implementing artificial intelligence.”

AI is entering your organization little by little, without you even realizing it. This has been true in the work we do at Blue Sky Partners. It has been terrifyingly true for my work as a teacher at Trinity University.

Someone in your organization is trying to be helpful. Even if they would never have thought to turn to AI before, a grant narrative, a board deck, a pile of case notes, or a donor update has to go out by 5 p.m., and suddenly they're up against a deadline and think, “I’ll just try it.” That single point of entry becomes a defining moment for them and for your organization.

The benefits of using AI tools can be significant: reduced administrative burden, accelerated drafting, improved data synthesis, and expanded capacity without proportional headcount growth. For lean teams, this is transformative.

The drawbacks are equally material: data leakage, hallucinated outputs, embedded bias, intellectual property ambiguity, overreliance, and reputational risk. In regulated environments, the exposure is amplified.

The problem is not that they used AI. It’s the absence of guardrails.

Technology adoption is rarely a single decision; it is the accumulation of small ones. Leaders who proactively define parameters will capture the upside while mitigating risk. Those who delay will find that the culture has already made the decision for them.

Every organization, however, must be intentional, and that intentionality depends entirely on context.

  • A healthcare entity must consider HIPAA implications.

  • A school system must navigate FERPA and student data protections.

  • A nonprofit receiving federal funds must assess procurement standards, data handling rules, and audit exposure.

  • A foundation must think about fiduciary duty and transparency.

AI governance is not a communications issue; it is a compliance, risk, and strategy issue.

The AI policy you don't write will become the AI policy you live with.

The AI ecosystem is expansive, and it is important to broaden the conversation beyond ChatGPT. Learn about, and truly understand, which tools you're currently using that rely on AI to operate.

Here is the BSP guide to understanding your AI ecosystem:

  • Conduct a comprehensive inventory of AI embedded across your software stack, and document who is using what, for which purposes, and with what data. Flag any software that processes sensitive or regulated information. This establishes a factual baseline before policy drafting begins. Start with:

    • CRM systems

    • Collaboration tools

    • Analytics platforms

    • HR software

    • Automation workflows
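The inventory step above can be captured in a simple structured record so that flagged tools surface automatically. This is a minimal sketch, not a BSP template; the tool names, fields, and example entries are hypothetical, and the sensitivity tiers mirror the levels (public, internal, confidential, regulated) described later in this guide.

```python
from dataclasses import dataclass

# Sensitivity tiers: public, internal, confidential, regulated.
SENSITIVITY_LEVELS = ("public", "internal", "confidential", "regulated")


@dataclass
class AIToolRecord:
    """One row in the AI inventory. All example values are hypothetical."""
    tool: str              # e.g., a CRM's built-in AI assistant
    category: str          # CRM, collaboration, analytics, HR, automation
    users: str             # who is using it
    purpose: str           # what it is used for
    data_sensitivity: str  # one of SENSITIVITY_LEVELS
    flagged: bool = False  # True if it touches sensitive or regulated data

    def __post_init__(self):
        if self.data_sensitivity not in SENSITIVITY_LEVELS:
            raise ValueError(f"unknown sensitivity: {self.data_sensitivity}")
        # Auto-flag anything handling confidential or regulated data.
        if self.data_sensitivity in ("confidential", "regulated"):
            self.flagged = True


# Hypothetical inventory entries:
inventory = [
    AIToolRecord("CRM AI assistant", "CRM", "development team",
                 "donor summaries", "confidential"),
    AIToolRecord("Meeting transcription", "collaboration", "all staff",
                 "meeting notes", "internal"),
]

flagged = [r.tool for r in inventory if r.flagged]
```

A spreadsheet works just as well; the point is that every tool gets the same fields, so flagged entries cannot hide.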

  • Audit default AI features within platforms and intentionally determine activation settings rather than allowing passive adoption.

    • Microsoft 365

    • Google Workspace

    • Zoom

    • Slack

    • CRM systems

  • Establish a formal AI governance policy. Rather than producing abstract principles, define policies around data sensitivity levels (public, internal, confidential, regulated) and align permitted AI use accordingly. Define:

    • Approved tools

    • Prohibited use cases

    • Data-sharing parameters
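The policy step above pairs each data sensitivity level with permitted AI use. The sketch below shows one way to make that pairing explicit; the tool names and rules are illustrative assumptions, not recommendations.

```python
# A minimal policy table: which AI use is permitted per sensitivity level.
# Approved tools here are hypothetical examples, not endorsements.
POLICY = {
    "public":       {"approved_tools": ["any vetted tool"], "review_required": False},
    "internal":     {"approved_tools": ["enterprise AI suite"], "review_required": False},
    "confidential": {"approved_tools": ["enterprise AI suite"], "review_required": True},
    "regulated":    {"approved_tools": [], "review_required": True},  # AI use prohibited
}


def is_permitted(sensitivity: str, tool: str) -> bool:
    """Check whether a tool may process data at this sensitivity level."""
    approved = POLICY[sensitivity]["approved_tools"]
    return tool in approved or "any vetted tool" in approved
```

Writing the policy this concretely forces the hard conversations: a blank approved-tools list for regulated data is a decision, not an oversight.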

  • Align AI usage with existing compliance. Consider cybersecurity, procurement, and grant requirements to ensure regulatory exposure is addressed before scale. What data are you comfortable making public and available to large AI providers? What must remain closed and private? Is there an intention behind those decisions?

  • Embed human accountability into workflows. Assign someone to review and validate AI-assisted outputs before those outputs influence hiring, financial reporting, donor communication, grant submissions, or public-facing materials. AI may assist; it does not decide.

  • Align AI oversight with existing governance and compliance systems. Instead of creating parallel structures, integrate AI into cybersecurity policies, procurement reviews, risk registers, and board-level oversight. AI becomes part of enterprise risk management, not a side initiative.

  • Provide structured training to staff on appropriate AI use, including:

    1. Data privacy risks

    2. Hallucination mitigation

    3. Intellectual property considerations

    4. Documentation standards

  • Assign executive and board-level accountability for AI governance, and leverage established frameworks such as the NIST AI Risk Management Framework and OECD AI Principles to guide ongoing review.

Additional resources to help build your AI policies: AI Principles, AI Risk Management, AI Skills Training, and EEOC guidance.

We work closely with friends and clients to navigate this rapidly changing landscape. If we can help or share any resource, please let us know!
