From Chatbots to Agentic AI: How Servicely Is Turning Service Platforms Into Systems That Do the Work

For years, AI in IT service management has meant narrow use cases. A bit of predictive ticket routing here, some knowledge search there, maybe a chatbot on the portal.

Now we are entering a very different phase.

Agentic AI is about systems that act on your behalf. They do not just generate text or suggest answers. They understand context, choose the right tools, break work into steps and execute those steps safely inside (and outside) your service platform.

In a recent technical preview, the Servicely team showed how agentic AI is being embedded directly into the Servicely platform to move beyond chatbots and into true, AI-driven operations.

This article recaps the concepts and live scenarios from that session and what they mean for IT leaders.

What Is Agentic AI in an ITSM Context?

From Servicely’s perspective, agentic AI in service management is about three main shifts:

  1. Proactive, context-aware behaviour
    AI does not just respond to a single prompt. It understands service data, patterns and relationships, anticipates what is likely to happen next, and acts accordingly.
  2. Dynamic task decomposition
    Instead of one big action, the AI breaks requests into smaller tasks, calls the right tools and agents, and sequences work in the right order.
  3. Continuous learning in a changing landscape
    IT environments change constantly. Agentic AI needs to learn from that change, adapt and keep improving workflows and decisions over time.
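To make shift 2 concrete, here is a deliberately tiny sketch of dynamic task decomposition in Python. Everything in it (the `plan` and `execute` functions, the tool names) is invented for illustration, not taken from Servicely; a real agentic planner would use an LLM to produce the plan rather than hard-coding it.

```python
# Illustrative toy only: one request decomposed into ordered tool calls.
# A real planner would generate this plan with an LLM, not an if-statement.

def plan(request: str) -> list[str]:
    """Break one request into an ordered sequence of tool names."""
    if "vulnerabilities" in request:
        return ["find_servers", "scan_cves", "draft_change"]
    return []

def execute(request: str, toolbox: dict) -> list:
    """Run each planned step in order; each step can see prior results."""
    results = []
    for tool in plan(request):
        results.append(toolbox[tool](results))
    return results

# Stub tools standing in for real CMDB, scanner and change integrations.
toolbox = {
    "find_servers": lambda prior: ["wp-01", "wp-02"],
    "scan_cves": lambda prior: ["CVE-2024-0001"],
    "draft_change": lambda prior: f"CHG draft covering {prior[1]}",
}

print(execute("Check the WordPress servers for any open vulnerabilities.",
              toolbox))
```

The important property is the shape, not the stubs: one request becomes a sequence of smaller, tool-backed steps, executed in order with context flowing between them.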

Rather than a single “AI brain”, Servicely uses a layered agentic stack:

  • A conversational layer, where users and agents talk to the system.
  • Assistants, which act like smart middle managers.
  • Agents, which perform specific types of work.
  • A pool of tools, which are the concrete actions agents can take in Servicely or external systems.

This is what turns AI from a help desk add-on into a real operational engine.
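As a mental model, the layering can be sketched in a few lines of Python. To be clear, this is an illustrative toy, not Servicely code: the `Agent` and `Assistant` classes and the tool names are all hypothetical.

```python
# Hypothetical sketch of the layered pattern described above
# (assistant -> agents -> tools). Not Servicely's actual API.

from dataclasses import dataclass, field
from typing import Callable

# A "tool" is a concrete, named action an agent is allowed to take.
Tool = Callable[..., str]

@dataclass
class Agent:
    """Performs one type of work, using only an explicitly granted toolbox."""
    name: str
    tools: dict[str, Tool] = field(default_factory=dict)

    def run(self, tool_name: str, **kwargs) -> str:
        if tool_name not in self.tools:  # no free-floating automation
            raise PermissionError(f"{self.name} may not use {tool_name}")
        return self.tools[tool_name](**kwargs)

@dataclass
class Assistant:
    """The 'middle manager': routes a request to the right agent and tool."""
    domain: str
    agents: dict[str, Agent]

    def handle(self, agent_name: str, tool_name: str, **kwargs) -> str:
        return self.agents[agent_name].run(tool_name, **kwargs)

# Wiring: an IT assistant that owns an incident agent with one tool.
def create_incident(short_description: str) -> str:
    return f"INC0001 created: {short_description}"

incident_agent = Agent("incident", tools={"create_incident": create_incident})
it_assistant = Assistant("IT", agents={"incident": incident_agent})

print(it_assistant.handle("incident", "create_incident",
                          short_description="VPN outage"))
```

The conversational layer (SOFI, in Servicely's case) would sit above the assistant, translating free-text requests into structured calls like the one at the bottom of the sketch.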

Inside the Servicely Agentic Stack: Assistants, Agents and Tools

The best way to understand the stack is to see how the layers fit together.

1. Conversational interface: SOFI

At the top is the SOFI UI. This is where users type questions or requests such as:

  • “Who is Sam Herring’s manager?”
  • “How much work do I have at the minute?”
  • “Check the WordPress servers for any open vulnerabilities.”

SOFI sends these to the right assistant, which then coordinates how work gets done.

2. Assistants as middle managers

Assistants sit between the conversation and the low-level tools. Each assistant has a clear domain. For example:

  • IT assistant for general IT requests and incidents
  • Agent workload assistant for service desk agents managing their queue
  • SecOps assistant for security operations and vulnerabilities
  • Organisational structure assistant for people data like managers and contact details
  • Code assistant for developers and admins configuring the platform

The assistant decides:

  • Which agents need to be involved
  • In what order
  • With which tools

3. Agents and their toolboxes

Agents are task-focused actors. You might have:

  • An incident agent to create, update and manage incidents
  • A reporting agent to build and run reports
  • A queue agent to analyse and prioritise workload
  • A SecOps agent to work with CMDB and vulnerability data

Each agent can call a dedicated toolbox. For example:

  • The incident agent can create incidents, add tasks, update journals and interact with change management.
  • The reporting agent can generate visual reports.
  • A SecOps agent can query vulnerability data, cross-reference CMDB items, and propose change requests.

This layered model is critical for governance. You control which tools each agent can use, and which assistants can use which agents, so you do not end up with free-floating automation that bypasses policy or permissions.
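One way to picture that two-level control (which assistants may use which agents, and which tools each agent may call) is as explicit, reviewable configuration plus a single check. This is a hypothetical sketch, not Servicely's actual configuration schema:

```python
# Hypothetical grants table: nothing runs unless it is listed here.

GRANTS = {
    "assistants": {  # which assistants may use which agents
        "secops": ["secops_agent"],
        "it": ["incident_agent", "reporting_agent"],
    },
    "agents": {  # which tools each agent may call
        "secops_agent": ["query_vulns", "propose_change"],
        "incident_agent": ["create_incident", "add_task"],
        "reporting_agent": ["run_report"],
    },
}

def allowed(assistant: str, agent: str, tool: str) -> bool:
    """Both levels must grant access for an action to proceed."""
    return (agent in GRANTS["assistants"].get(assistant, [])
            and tool in GRANTS["agents"].get(agent, []))

print(allowed("secops", "secops_agent", "query_vulns"))   # permitted
print(allowed("it", "secops_agent", "query_vulns"))       # blocked by policy
```

Keeping the grants declarative means security and governance teams can review and audit them without reading agent code.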


Examples in Real Life

Scenario 1: From “Who Is Sam’s Manager?” to Self-Service Actions

The first set of scenarios focused on simple but common needs: looking up information and initiating basic requests.

A user can ask SOFI things like:

“Who is Sam’s manager?”
“What is Sam’s email address and phone number?”

The organisational structure assistant activates user data agents and tools that query the Servicely platform. The result is a clean, conversational answer that still respects the organisation’s data model and permissions.

From there, the user can go further:

“I have an expense claim to submit for the engineering team. Who would approve it?”
“Is there a catalog item for this?”

The system:

  • Looks up the approver (for example, Dion as the manager).
  • Searches the service catalog for the right reimbursement request item.
  • Presents a link the user can click to submit a structured request.

This is agentic AI working as guided self service. It is not just “chat”; it is:

  • Understanding context
  • Locating the correct process
  • Connecting the user with the right structured workflow

Scenario 2: Helping Service Desk Agents Manage and Action Their Work

The next set of scenarios focused on the service desk agent experience.

An agent can start with a simple question:

“How much work do I have at the minute?”

The agent workload assistant activates, looks at the agent’s queue and returns a view of their current workload. From there, the agent can ask:

“Help me prioritise this work.”

The system analyses the queue context and proposes an ordered set of tickets to focus on.

From there, things get more interesting.

The agent can then say, for example:

“Leave a client journal update on that incident, letting the user know we are working on the ticket and will revert.”

The AI:

  • Understands which incident the agent is referring to (based on the earlier context or a copied incident number).
  • Proposes a suitable journal update.
  • Applies it to the correct record in Servicely once the agent confirms.

All of this happens without the agent having to dig through records manually or switch between multiple screens.

Creating incidents and tasks conversationally

Agents can also create new incidents and tasks through the assistant:

  1. “Help me create a new incident.”
  2. The assistant asks for a short description.
  3. It then asks who the requester is, impact and urgency.
  4. The incident is created in Servicely with the right structure.

From there, the agent can say:

“For that incident, create three subtasks, all assigned to the database support team.”

The system creates the tasks and links them to the incident.

If the incident also needs network analysis, the agent can ask:

“What team could help us resolve any network issues?”

The AI finds the appropriate team, such as infrastructure support, and then creates a subtask for that team, with the context that initial investigation has already taken place.

This is a practical example of agentic AI doing structured work inside ITSM:

  • Creating records
  • Breaking work down
  • Assigning to the right teams
  • Maintaining links and context

All from a conversational interface.

Scenario 3: SecOps, Vulnerabilities and Change Management

The agentic stack is not limited to incidents.

In a SecOps scenario, a user interacts with the SecOps assistant:

“Check the WordPress servers for any open vulnerabilities.”

Behind the scenes, the AI:

  • Interacts with the CMDB to identify the relevant WordPress servers.
  • Calls tools that query vulnerability data.
  • Surfaces relevant CVEs that may apply.

From there, the user can ask:

“Create a change request to fix these issues.”

The assistant and agents:

  • Create a properly structured change request.
  • Check for scheduling conflicts with existing changes.
  • Respect the change calendar and rules.
  • Reschedule if needed, while preventing clashes that the change rules do not allow.
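The scheduling-conflict check described here boils down to interval overlap. A minimal sketch, with the caveat that real change calendars also handle blackout windows, freeze periods and approval rules:

```python
# Illustrative overlap check for two change windows. Dates are made up.

from datetime import datetime

def overlaps(start_a: datetime, end_a: datetime,
             start_b: datetime, end_b: datetime) -> bool:
    """Two half-open windows clash if each starts before the other ends."""
    return start_a < end_b and start_b < end_a

existing = (datetime(2025, 6, 1, 22), datetime(2025, 6, 2, 2))
proposed = (datetime(2025, 6, 2, 1), datetime(2025, 6, 2, 3))

print(overlaps(*existing, *proposed))  # True -> reschedule needed
```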

This is a strong example of agentic AI bridging multiple domains:

  • CMDB and asset data
  • Vulnerability sources
  • Change management and schedule conflict checks

All inside Servicely’s permission and governance model.

Scenario 4: Building Tools and Automation With a Code Assistant

Agentic AI is not just for end users and service desk agents.

Servicely also includes a code assistant that supports admins and developers who want to configure or extend the platform.

For example, you might ask:

“Create a server script that generates a record and an AI tool I can use to run that as part of an agent.”

The assistant:

  • Writes the server-side code.
  • Provides a ready-to-use script that can be pasted into the Servicely instance.
  • Helps you generate an AI tool that an agent or assistant can call.

In the demo, this was used to generate a report and visualise it as a bar chart, with the only manual steps being to ask for a bar chart and click run.

In practice, this could extend to:

  • Creating or updating table operations
  • Building specialised tools for agents
  • Automating repetitive configuration tasks

The end result is a faster path from idea to working automation, powered by AI but still under the control of your developers and administrators.

Governance, Permissions and LLM Choice

Throughout the session, many questions focused on safety, flexibility and enterprise controls.

Governance and permissions

Agentic AI in Servicely runs under the same permission model as the platform:

  • Actions are executed as the currently logged-in user or in line with configured roles.
  • Governed assistants and agents determine what kinds of tasks can be performed.
  • Tool access is explicitly configured, so you decide which agents can do what.

This means you can support advanced use cases like a “persona manager” for joiners, movers and leavers (JML), where an assistant:

  • Picks up a JML ticket
  • Adds users to the right groups, DLs and permissions
  • Progresses the ticket towards closure

All within the guardrails you set.

LLM flexibility

The platform is LLM agnostic. Organisations can choose the model that best fits their strategy and compliance requirements, including:

  • OpenAI
  • Azure OpenAI
  • AWS Bedrock
  • Anthropic Claude
  • Google Gemini

You can also configure your own LLM endpoints where your subscription supports that.
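A common way to achieve this kind of LLM agnosticism in application code is to program against a thin provider interface and pick the backend from configuration. This sketch is illustrative only; the class and registry names are invented, not Servicely's API:

```python
# Hypothetical provider abstraction: swap LLM backends via configuration.

from abc import ABC, abstractmethod

class LLMProvider(ABC):
    @abstractmethod
    def complete(self, prompt: str) -> str:
        """Send a prompt to the backend and return its completion."""

class EchoProvider(LLMProvider):
    """Stand-in backend used here so the example runs offline."""
    def complete(self, prompt: str) -> str:
        return f"[echo] {prompt}"

# Real deployments would register OpenAI, Azure OpenAI, Bedrock,
# Claude, Gemini or a custom endpoint here.
PROVIDERS: dict[str, type[LLMProvider]] = {"echo": EchoProvider}

def get_provider(name: str) -> LLMProvider:
    return PROVIDERS[name]()

print(get_provider("echo").complete("Who is Sam’s manager?"))
```

Because callers only ever see `LLMProvider`, changing model vendors becomes a configuration decision rather than a rewrite.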

Using existing data and external sources

The agentic tools can work on:

  • Existing monitoring data
  • Vulnerability data
  • Incident history
  • Other enterprise data sources exposed via APIs

This allows scenarios like:

  • Using monitoring data to automatically log initial incidents with subtasks to the right teams.
  • Using existing trends to support proactive and preventative problem management.
  • Pulling in relevant external knowledge from the internet when appropriate, through configured tools.

What This Means for IT Leaders

Agentic AI is not just another buzzword layered on top of chatbots.

Used well, it represents a shift from systems that simply capture and route tickets, to systems that actually do the work for your teams.

From the Servicely technical preview, a few clear themes emerge:

  • You can start small with information lookups and simple requests.
  • You can quickly add value by helping agents prioritise, update and create work through conversational interactions.
  • You can expand into richer domains like SecOps, change management and proactive problem management.
  • You stay in control through assistants, agents, tool permissions and existing role-based access.
  • You are not locked into a single LLM vendor, which keeps your AI strategy flexible.

If you are exploring how to move beyond isolated AI pilots and into everyday, operational impact, agentic AI inside a platform like Servicely is a practical next step.

You can use this article as a starting point to brief stakeholders, link from your event follow-ups, or invite your team to see a deeper demo of the scenarios described here.
