Creating a NetAI Playground for Agentic AI Experimentation

Hey there, everyone, and welcome to the latest installment of “Hank shares his AI journey.” 🙂 Artificial Intelligence (AI) continues to be all the rage, and coming back from Cisco Live in San Diego, I was excited to dive into the world of agentic AI.

With announcements like Cisco’s own agentic AI solution, AI Canvas, as well as discussions with partners and other engineers about this next phase of AI possibilities, my curiosity was piqued. What does this all mean for us network engineers? Moreover, how can we start to experiment and learn about agentic AI?

I began exploring the topic of agentic AI, reading and watching a wide range of content to gain a deeper understanding of the subject. I won’t delve into a detailed definition in this blog, but here are the basics of how I think about it:

Agentic AI is a vision for a world where AI doesn’t just answer the questions we ask, but begins to work more independently. Driven by the goals we set, and utilizing access to tools and systems we provide, an agentic AI solution can monitor the current state of the network and take actions to ensure our network operates exactly as intended.

Sounds pretty darn futuristic, right? Let’s dive into the technical aspects of how it works—roll up your sleeves, get into the lab, and let’s learn some new things.

What are AI “tools”?

The first thing I wanted to explore and better understand was the concept of “tools” within this agentic framework. As you may recall, the LLM (large language model) that powers AI systems is essentially an algorithm trained on vast amounts of data. An LLM can “understand” your questions and instructions. On its own, however, the LLM is limited to the data it was trained on. It can’t even search the web for current movie showtimes without some “tool” allowing it to perform a web search.

From the very early days of the GenAI buzz, developers have been building and adding “tools” into AI applications. Initially, the creation of these tools was ad hoc and varied depending on the developer, LLM, programming language, and the tool’s goal. But recently, a new framework for building AI tools has generated a lot of excitement and is starting to become a new “standard” for tool development.

This framework is known as the Model Context Protocol (MCP). Originally developed by Anthropic, the company behind Claude, MCP lets any developer build tools, called “MCP Servers,” and lets any AI platform act as an “MCP Client” to use those tools. It’s essential to remember that we are still in the very early days of AI and agentic AI; however, MCP currently appears to be the leading approach for tool building. So I figured I’d dig in and figure out how MCP works by building my own very basic NetAI Agent.

I’m far from the first networking engineer to want to dive into this space, so I started by reading a couple of very helpful blog posts by my buddy Kareem Iskander, Head of Technical Advocacy in Learn with Cisco.

These gave me a jumpstart on the key topics, and Kareem was helpful enough to provide some example code for creating an MCP server. I was ready to explore more on my own.

Creating a local NetAI playground lab

There is no shortage of AI tools and platforms today. There’s ChatGPT, Claude, Mistral, Gemini, and so many more. Indeed, I utilize many of them regularly for various AI tasks. However, for experimenting with agentic AI and AI tools, I wanted something that was 100% local and didn’t rely on a cloud-connected service.

A primary reason for this desire was that I wanted to ensure all of my AI interactions remained entirely on my computer and within my network. I knew I would be experimenting in an entirely new area of development. I was also going to send data about “my network” to the LLM for processing. And while I’ll be using non-production lab systems for all the testing, I still didn’t like the idea of leveraging cloud-based AI systems. I would feel freer to learn and make mistakes if I knew the risk was low. Yes, low… Nothing is completely risk-free.

Luckily, this wasn’t the first time I had considered local LLM work, and I had a couple of possible options ready to go. The first is Ollama, a powerful open-source engine for running LLMs locally, or at least on your own server. The second is LMStudio, which, while not itself open source, is built on an open-source foundation and is free to use for both personal and “at work” experimentation with AI models. When I read a recent blog by LMStudio announcing MCP support, I decided to give it a try for my experimentation.

Creating Mr Packets with LMStudio

LMStudio is a client for running LLMs, but it isn’t an LLM itself. It provides access to a large catalog of LLMs that you can download and run. With so many LLM options available, it can be overwhelming when you get started. The key requirement for this blog post and demonstration is a model that has been trained for “tool use.” Not all models are. And furthermore, not all “tool-using” models actually work with tools. For this demonstration, I’m using the google/gemma-2-9b model. It’s an “open model” built using the same research and tooling behind Gemini.

The next thing I needed for my experimentation was an initial idea for a tool to build. After some thought, I decided a good “hello world” for my new NetAI project would be a way for AI to send and process “show commands” from a network device. I chose pyATS to be my NetDevOps library of choice for this project. In addition to being a library that I’m very familiar with, it has the benefit of automatic output processing into JSON through the library of parsers included in pyATS. I could also, within just a couple of minutes, generate a basic Python function to send a show command to a network device and return the output as a starting point.

Here’s that code:

from typing import Any, Dict, Optional

from genie.testbed import load  # pyATS/Genie testbed loader


def send_show_command(
    command: str,
    device_name: str,
    username: str,
    password: str,
    ip_address: str,
    ssh_port: int = 22,
    network_os: Optional[str] = "ios",
) -> Optional[Dict[str, Any]]:

    # Structure a dictionary for the device configuration that can be loaded by pyATS
    device_dict = {
        "devices": {
            device_name: {
                "os": network_os,
                "credentials": {
                    "default": {"username": username, "password": password}
                },
                "connections": {
                    "ssh": {"protocol": "ssh", "ip": ip_address, "port": ssh_port}
                },
            }
        }
    }

    # Build a testbed from the dictionary and grab the device object
    testbed = load(device_dict)
    device = testbed.devices[device_name]

    # Connect, run the command through the pyATS parsers to get JSON, and disconnect
    device.connect()
    output = device.parse(command)
    device.disconnect()

    return output
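
For a quick sanity check before wiring the function into MCP, it can be called directly. Here’s a minimal sketch; the device name, address, and credentials are placeholders for whatever lab device you have reachable over SSH, not values from my lab:

# Hypothetical lab device details for illustration only
parsed = send_show_command(
    command="show version",
    device_name="router01",
    username="admin",
    password="admin",
    ip_address="192.0.2.11",
)

# Because pyATS parses the output, we can pull individual fields out of the JSON
print(parsed["version"]["version"])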

Between Kareem’s blog posts and the getting-started guide for FastMCP 2.0, I learned it was frighteningly easy to convert my function into an MCP Server/Tool. I just needed to add five lines of code.

from fastmcp import FastMCP

mcp = FastMCP("NetAI Hello World")

@mcp.tool()
def send_show_command():
    ...  # parameters and body unchanged from the function above

if __name__ == "__main__":
    mcp.run()

Well… it was ALMOST that easy. I did have to make a few adjustments to the above basics to get it to run successfully. You can see the full working copy of the code in my newly created NetAI-Learning project on GitHub.

As for those few adjustments, the changes I made were:

  • A nice, detailed docstring for the function behind the tool. MCP clients use the details from the docstring to understand how and why to use the tool.
  • After some experimentation, I opted to use “http” transport for the MCP server rather than the default and more common “STDIO.” I went this way to prepare for the next phase of my experimentation, when my pyATS MCP server will likely run within the network lab environment itself rather than on my laptop. STDIO requires the MCP Client and Server to run on the same host system. (See the sketch after this list for what both adjustments look like.)
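
Here’s a rough sketch of what those two adjustments look like. The docstring wording and the exact run() arguments are my own illustration based on the FastMCP 2.x documentation; the full working version is in the GitHub repo:

from fastmcp import FastMCP

mcp = FastMCP("NetAI Hello World")

@mcp.tool()
def send_show_command(
    command: str,
    device_name: str,
    username: str,
    password: str,
    ip_address: str,
    ssh_port: int = 22,
    network_os: str = "ios",
) -> dict:
    """Run a show command on a network device and return the parsed JSON output.

    Use this tool when the user asks about the state, configuration, or health
    of a specific device. Provide the device name, IP address, SSH port,
    credentials, and network OS along with the exact show command to execute.
    """
    ...  # pyATS logic from the earlier function


if __name__ == "__main__":
    # Streamable HTTP instead of the default STDIO, so the client and server
    # can eventually run on different hosts
    mcp.run(transport="http", host="127.0.0.1", port=8002)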

So I fired up the MCP server, hoping that there wouldn’t be any errors. (Okay, to be honest, it took a couple of iterations in development to get it working without errors… but I’m doing this blog post “cooking show style,” where the boring work along the way is hidden. 😉)

python netai-mcp-hello-world.py 

╭─ FastMCP 2.0 ──────────────────────────────────────────────────────────────╮
│                                                                            │
│        _ __ ___ ______           __  __  _____________    ____    ____     │
│       _ __ ___ / ____/___ ______/ /_/  |/  / ____/ __ \  |___ \  / __ \    │
│      _ __ ___ / /_  / __ `/ ___/ __/ /|_/ / /   / /_/ /  ___/ / / / / /    │
│     _ __ ___ / __/ / /_/ (__  ) /_/ /  / / /___/ ____/  /  __/_/ /_/ /     │
│    _ __ ___ /_/    \__,_/____/\__/_/  /_/\____/_/      /_____(_)____/      │
│                                                                            │
│                                                                            │
│                                                                            │
│    🖥️  Server name:     FastMCP                                             │
│    📦 Transport:       Streamable-HTTP                                     │
│    🔗 Server URL:      http://127.0.0.1:8002/mcp/                          │
│                                                                            │
│    📚 Docs:            https://gofastmcp.com                               │
│    🚀 Deploy:          https://fastmcp.cloud                               │
│                                                                            │
│    🏎️  FastMCP version: 2.10.5                                              │
│    🤝 MCP version:     1.11.0                                              │
│                                                                            │
╰────────────────────────────────────────────────────────────────────────────╯


[07/18/25 14:03:53] INFO     Starting MCP server 'FastMCP' with transport 'http' on http://127.0.0.1:8002/mcp/     server.py:1448
INFO:     Started server process [63417]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://127.0.0.1:8002 (Press CTRL+C to quit)

The next step was to configure LMStudio to act as the MCP Client and connect to the server so it would have access to the new “send_show_command” tool. While not “standardized,” most MCP Clients use a very common JSON configuration to define the servers. LMStudio is one of these clients.
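
For reference, here’s roughly what my entry looks like. The server name (“pyats-netai”) is arbitrary, and the URL matches what the FastMCP server printed at startup; check LMStudio’s documentation for the exact schema it expects:

{
  "mcpServers": {
    "pyats-netai": {
      "url": "http://127.0.0.1:8002/mcp/"
    }
  }
}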

Adding the pyATS MCP server to LMStudio

Wait… if you’re wondering, ‘Where’s the network, Hank? What device are you sending the ‘show commands’ to?’ No worries, my inquisitive friend: I created a very simple Cisco Modeling Labs (CML) topology with a couple of IOL devices configured for direct SSH access using the PATty feature.

NetAI Hello World CML Network

Let’s see it in action!

Okay, I’m sure you are ready to see it in action. I know I sure was as I was building it. So let’s do it!

To start, I instructed the LLM on how to connect to my network devices in the initial message.

Telling the LLM about my devices

I did this because the pyATS tool needs the address and credential information for the devices. In the future, I’d like to look at MCP servers for different source-of-truth options, like NetBox and Vault, so the agent can “look them up” as needed. But for now, we’ll start simple.

First question: Let’s ask about software version info.

Short video of asking the LLM what version of software is running.

You can see the details of the tool call by diving into the input/output screen.

Tool inputs and outputs

This is pretty cool, but what exactly is happening here? Let’s walk through the steps involved.

  1. The LLM client starts and queries the configured MCP servers to discover the tools available.
  2. I send a “prompt” to the LLM to consider.
  3. The LLM processes my prompt. It “considers” the different tools available and whether they might be relevant as part of building a response to the prompt.
  4. The LLM determines that the “send_show_command” tool is relevant to the prompt and builds a proper payload to call the tool.
  5. The LLM invokes the tool with the proper arguments from the prompt. (See the example payload after this list.)
  6. The MCP server processes the called request from the LLM and returns the result.
  7. The LLM takes the returned results, along with the original prompt/question as the new input to use to generate the response.
  8. The LLM generates and returns a response to the query.
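
To make steps 5 and 6 a little more concrete: under the hood, the tool call is just a JSON-RPC message defined by MCP. For the first example, the request the client sends would look roughly like this (the id and argument values here are illustrative, not captured from my session):

{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "send_show_command",
    "arguments": {
      "command": "show version",
      "device_name": "router01",
      "username": "admin",
      "password": "admin",
      "ip_address": "192.0.2.11",
      "ssh_port": 22,
      "network_os": "ios"
    }
  }
}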

This isn’t all that different from what you might do if you were asked the same question.

  1. You would consider the question, “What software version is router01 running?”
  2. You’d think about the different ways you could get the information needed to answer the question. Your “tools,” so to speak.
  3. You’d decide on a tool and use it to gather the information you needed. Probably SSH to the router and run “show version.”
  4. You’d review the returned output from the command.
  5. You’d then reply to whoever asked you the question with the proper answer.

Hopefully, this helps demystify a little about how these “AI Agents” work under the hood.

How about one more example? Perhaps something a bit more complex than simply “show version.” Let’s see if the NetAI agent can help identify which switch port the host is connected to by describing the basic process involved.

Here’s the question—sorry, prompt, that I submit to the LLM:

Prompt asking a multi-step question of the LLM.

What we should notice about this prompt is that it will require the LLM to send and process show commands from two different network devices. Just like with the first example, I do NOT tell the LLM which command to run. I only ask for the information I need. There isn’t a “tool” that knows the IOS commands. That knowledge is part of the LLM’s training data.

Let’s see how it does with this prompt:

The LLM successfully executes the multi-step plan.

And look at that, it was able to handle the multi-step procedure to answer my question. The LLM even explained which commands it was going to run and how it would use the output. And if you scroll back up to the CML network diagram, you’ll see that it correctly identified interface Ethernet0/2 as the switch port to which the host is connected.

So what’s next, Hank?

Hopefully, you found this exploration of agentic AI tool creation and experimentation as interesting as I have. And maybe you’re starting to see the possibilities for your own daily use. If you’d like to try some of this out on your own, you can find everything you need on my netai-learning GitHub project.

  1. The mcp-pyats code for the MCP Server. You’ll find both the simple “hello world” example and a more developed work-in-progress tool that I’m adding additional features to. Feel free to use either.
  2. The CML topology I used for this blog post. Though any network that is SSH reachable will work.
  3. The mcp-server-config.json file that you can reference for configuring LMStudio.
  4. A “System Prompt Library” where I’ve included the System Prompts for both a basic “Mr. Packets” network assistant and the agentic AI tool. These aren’t required for experimenting with NetAI use cases, but System Prompts can be useful for ensuring you get the results you’re after from an LLM.

A couple of “gotchas” I wanted to share that I encountered during this learning process, which I hope might save you some time:

First, not all LLMs that claim to be “trained for tool use” will work with MCP servers and tools, or at least not with the ones I’ve been building and testing. Specifically, I struggled with Llama 3.1 and Phi 4. Both seemed to indicate they were “tool users,” but they failed to call my tools. At first, I thought this was due to my code, but once I switched to Gemma 2, the tool calls worked immediately. (I also tested with Qwen3 and had good results.)

Second, once you add the MCP Server to LMStudio’s “mcp.json” configuration file, LMStudio initiates a connection and maintains an active session. This means that if you stop and restart the MCP server code, the session is broken, giving you an error in LMStudio on your next prompt submission. To fix this issue, you’ll need to either close and restart LMStudio or edit the “mcp.json” file to delete the server, save it, and then re-add it. (There is a bug filed with LMStudio on this problem. Hopefully, they’ll fix it in an upcoming release, but for now, it does make development a bit annoying.)

As for me, I’ll continue exploring the concept of NetAI and how AI agents and tools can make our lives as network engineers more productive. I’ll be back here with my next blog once I have something new and interesting to share.

In the meantime, how are you experimenting with agentic AI? Are you excited about the potential? Any suggestions for an LLM that works well with network engineering knowledge? Let me know in the comments below. Talk to you all soon!

Sign up for Cisco U. | Join the Cisco Learning Network today for free.
