Your Next Coworker Might Be AI
Adaptation has long been a hallmark of the professional association, and we wear it like a badge of honor. Our industry adapted remarkably well when we were unexpectedly forced to work remotely during the pandemic. Now, as we move beyond that disruption, whether by normalizing a permanent remote setup or navigating a return-to-office (RTO) mandate, we face a new transformation in how we work and, possibly, in how we interact with our members.
ChatGPT launched on Nov. 30, 2022, a time when many of us were still adjusting to our new normal. Its launch marked the beginning of an arms race among tech giants over how AI will change the way we all work, play and live.
Not yet three years after its launch, many of us are being given directives to begin incorporating AI into our workflows, policies and standard operating procedures (SOPs). You may have seen headlines warning that if you haven't started thinking about implementing AI, you're already behind, and others warning of the risks of moving too fast.
Most of us, especially those reading articles like this — and the dozens more currently flagged in your inbox — have already been interacting with generative or large language model (LLM)-based platforms like ChatGPT and Claude. Whether for searches, Excel formulas or help with an Association Forum magazine article, we’re discovering new ways to use these tools daily. Now, just a few short years after the introduction of generative AI, we’ve moved from phase one (novelty and curiosity) to phase two (utility and productivity) and are well into phase three (personalization and agentic use).
Many of us are still catching up (myself included) and beginning to ask what we should know in this third phase: how AI is changing the very interactions we have with our members, and how to prepare for what's next.
In my quest for answers, I reached out to Doug Brown, president of Catalyst Fire, a custom application developer for professional associations. His team has been consulting with clients on how to best implement AI agents and agentic AI.
“The key is to differentiate between ‘AI Agent’ and ‘Agentic AI.’ One is rule-based and task-oriented; the other is dynamic and decision-capable,” Brown said.
He explained: “An AI agent is the AI-driven helper or customer-support tool that might appear as a browser popup. This is normally for things like FAQs or ‘Let me get some information and then forward you to a real person.’”
AI agents are structured assistants that some associations already use. They handle FAQ-style queries, help users navigate websites, and automate outreach tasks.
“An AI agent is typically easier to implement and can be controlled to provide standard interactions that the association wants to manage. It’s great for scheduling tasks, sending notifications at appropriate times, and sending out curated recommendations based on the recipient’s data attributes,” Brown continued.
“You might think of the AI agent as the ideal project manager. A project manager isn’t expected to have all the answers, but is expected to be able to leverage all their resources in response to the customer’s needs. Similarly, the AI agent reaches out to all the resources at the ready and can summarize those inputs for the user — gathering information from various AI components in pursuit of the best answer.”
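The "project manager" pattern Brown describes can be sketched in a few lines of code. This is a minimal illustration, not a real implementation: the resources (`faq_lookup`, `events_lookup`), their canned answers, and the fallback message are all hypothetical stand-ins for an association's actual knowledge sources.

```python
def faq_lookup(question: str) -> str:
    """Stub resource: canned answers for common member questions."""
    faqs = {"renew": "Membership renewal opens each January on the member portal."}
    for keyword, answer in faqs.items():
        if keyword in question.lower():
            return answer
    return ""

def events_lookup(question: str) -> str:
    """Stub resource: event information."""
    if "conference" in question.lower():
        return "The annual conference is held each fall; registration opens in June."
    return ""

def ai_agent(question: str) -> str:
    """Like a project manager: poll every resource, then summarize the findings.

    The agent isn't expected to know the answer itself, only to gather
    responses from the resources at its disposal and hand back a summary,
    or escalate to a human when nothing matches.
    """
    resources = [faq_lookup, events_lookup]
    findings = [answer for resource in resources if (answer := resource(question))]
    if not findings:
        return "Let me forward you to a staff member who can help."
    return " ".join(findings)
```

For example, `ai_agent("How do I renew?")` returns the renewal answer from the FAQ resource, while an unrecognized question falls through to the human handoff, mirroring the rule-based, task-oriented behavior Brown attributes to AI agents.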
Agentic AI, on the other hand, is a dynamic, adaptive system capable of autonomous decision-making within a defined domain.
“Agentic AI is great for a defined subject area. For instance, using it for security and fraud detection ensures that your security tool is always learning and reacting to the latest potential threats,” Brown said.
“Agentic AI can identify the sentiment of interactions with a customer and respond appropriately. If the customer is angry, it can defuse the situation. If the customer is anxious or time-constrained, it responds quickly. If the customer is confused about how to use your resources or website, it can offer suggestions and guide the user to their destination. These interactions involve AI that understands the situation at hand and adjusts without specific staff input.”
He added: “Agentic AI can personalize a user’s interaction with your applications. As users respond to AI recommendations, the system learns whether they were well-received and adjusts over time. It even becomes predictive, anticipating potential next steps and preparing responses in advance.”
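The sentiment-aware routing Brown describes can be sketched as a simple mapping from detected sentiment to response style. Real agentic systems use learned sentiment models rather than keyword matching; the keywords, labels, and styles below are illustrative assumptions only.

```python
def detect_sentiment(message: str) -> str:
    """Toy keyword-based stand-in for a real sentiment model."""
    text = message.lower()
    if any(word in text for word in ("angry", "frustrated", "unacceptable")):
        return "angry"
    if any(word in text for word in ("asap", "urgent", "deadline")):
        return "time_constrained"
    if any(word in text for word in ("confused", "lost", "how do i")):
        return "confused"
    return "neutral"

# Each sentiment maps to a different response strategy, as in the
# angry / time-constrained / confused cases described above.
RESPONSE_STYLE = {
    "angry": "acknowledge and de-escalate before answering",
    "time_constrained": "answer first, details later",
    "confused": "offer step-by-step guidance to the right page",
    "neutral": "standard helpful reply",
}

def choose_style(message: str) -> str:
    """Pick a response strategy based on the detected sentiment."""
    return RESPONSE_STYLE[detect_sentiment(message)]
```

The point of the sketch is the shape of the system, not the stub detector: the agent reads the situation first, then adjusts its behavior without staff intervention.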
Agentic AI goes beyond reactive tooling; it behaves less like a tool and more like a colleague.
Risks and Governance Considerations
Naturally, concerns arise around this type of autonomy. Will we lose control? What about data privacy, bias or the risk of misinformation?
Because agentic AI can act on its own, it’s essential to have appropriate controls in place. This reduces the risk of the AI making decisions that conflict with the organization’s tone, policies or brand. Establish human-in-the-loop mechanisms, especially for high-impact functions such as policy messaging or public statements. Create clear approval workflows and ensure AI systems provide logs or summaries of actions taken.
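The human-in-the-loop controls above can be sketched as a small guardrail layer: AI-proposed actions are checked against a policy, high-impact ones are queued for human approval, and every decision is logged. The action categories and class names here are hypothetical, chosen only to illustrate the pattern.

```python
from dataclasses import dataclass, field

# Illustrative policy: these action types always require a human sign-off,
# matching the "policy messaging or public statements" examples above.
HIGH_IMPACT = {"public_statement", "policy_messaging"}

@dataclass
class Guardrail:
    action_log: list = field(default_factory=list)
    approval_queue: list = field(default_factory=list)

    def submit(self, action_type: str, content: str) -> str:
        """Route an AI-proposed action through the approval workflow.

        Low-impact actions run automatically; high-impact ones are held
        for a human reviewer. Everything is logged so staff can audit
        what the AI did and when.
        """
        if action_type in HIGH_IMPACT:
            self.approval_queue.append((action_type, content))
            status = "pending_human_approval"
        else:
            status = "executed"
        self.action_log.append({"action": action_type, "status": status})
        return status
```

For example, a routine FAQ reply passes straight through, while a draft public statement lands in the approval queue; the log provides the summary of actions taken that the paragraph above calls for.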
AI tools that process member data must handle it securely and transparently. Associations should select vendors that are SOC 2 compliant, align with HIPAA or GDPR (if applicable), and follow the NIST AI Risk Management Framework. Internally, it’s also critical to define how AI systems access, store and process data.
Bias is another concern. AI systems trained on incomplete or skewed datasets may perpetuate discriminatory patterns. For mission-driven associations focused on inclusion, this is especially important. Conduct regular bias audits and require vendors to disclose how their models are trained and what steps are in place to mitigate bias.
My first AI-related article explored AI hallucinations — the phenomenon where generative models deliver inaccurate but confident-sounding information. While outputs have improved, incorrect or outdated responses still occur, especially when tools rely on static or unreliable sources. To minimize this risk, fact-check AI-generated content before releasing it to the public.
AI will continue to help us work smarter and engage members more meaningfully, but as we phase in autonomous AI, we must do so thoughtfully. Without the right guardrails, even the best tools can create risk. As we venture toward the future of member interactions together, let's remember that, as associations, we're in a unique position to lead by example.