Cisco

AI Assistant for Networking Labs

#AI #LearningTools

Overview

Cisco Packet Tracer

Cisco Packet Tracer is a widely used network simulation tool for students learning computer networking. It allows them to build topologies, configure devices, and test communication flows without physical hardware. In Packet Tracer, even a small mistake can break the whole lab. As a teaching assistant, I often saw students end up troubleshooting the tool instead of focusing on learning networking concepts.

I designed an AI assistant for Cisco Packet Tracer to help students troubleshoot labs in real time. I observed student struggles, mapped common questions into intent patterns, and prototyped an assistant that supported tasks like device selection, connection tracing, and CLI help. In student testing, the prototype was strongly validated as reducing frustration and building confidence.

Duration

4 months

Role

Product Designer

Contributions

UX/UI Design
User Interviews
Prototyping

Problem

Cisco Packet Tracer enables complex labs, but overwhelms new learners.

Many students lost motivation as errors kept happening

Students often said they felt “stuck on the same error for hours.” Small mistakes, like a wrong IP or cable, could break the entire lab, making learning feel frustrating.

TAs spent more time troubleshooting than teaching

In course reviews, students admitted they lined up in office hours with “the same broken file issue.” TAs became fixers of Packet Tracer errors, not mentors in networking, which drained teaching time.

The software disrupted the overall learning experience

Some students thought Cisco’s tool felt “outdated” and “hard to use.” Instead of supporting confidence, the rough interface risked discouraging beginners and shaping a poor impression of Cisco’s learning tools.

Research

I focused on understanding where students got stuck in Packet Tracer by direct observation during lab sessions.

While they were troubleshooting, I often heard recurring questions like: “Why isn’t this port/router working?” or “Why can’t this device reach that one?” These weren’t just casual doubts — they were signs that students were losing confidence when small errors blocked their progress.

The real barrier was not just the networking content but the heavy cognitive load of troubleshooting inside Packet Tracer.

I noticed patterns across different students: some were confused about the logic of networking itself, others misconfigured devices or commands, and many simply couldn’t trace why communication was failing between endpoints. Even when the concept was clear, the tool’s lack of feedback made mistakes harder to detect and correct.

Insights

Organizing student struggles into patterns

In lab sessions, students’ struggles often surfaced as repeated mistakes or half-formed questions. I organized these into four core intents, then matched each one with a clear UX pattern in Packet Tracer (see the sketch after this list).

Understand a concept

Natural-language input without selecting anything in the topology.

Fix a device/port

Click directly on a device for targeted help.

Trace a broken connection

Select source and destination to reveal drop points.

Scan a network zone

Loop-select an area to scan for logic or IP errors.
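
To make the mapping concrete, here is a minimal sketch of how these gesture-to-intent rules could be expressed. Every name in it (Intent, Selection, infer_intent) is a hypothetical illustration, not the prototype’s actual code.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Intent(Enum):
    CONCEPT = auto()      # understand a concept
    DEVICE_FIX = auto()   # fix a device/port
    TRACE_FLOW = auto()   # trace a broken connection
    ZONE_SCAN = auto()    # scan a network zone

@dataclass
class Selection:
    devices: list[str]     # IDs of devices clicked or enclosed by the loop
    is_area: bool = False  # True when the selection came from a loop gesture

def infer_intent(selection: Selection | None) -> Intent:
    """Map the student's gesture to an intent, so no prompt-crafting is needed."""
    if selection is None or not selection.devices:
        return Intent.CONCEPT      # free-form question, nothing selected
    if selection.is_area:
        return Intent.ZONE_SCAN    # loop-selected zone
    if len(selection.devices) == 1:
        return Intent.DEVICE_FIX   # single device or port clicked
    return Intent.TRACE_FLOW       # source + destination chosen
```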

Why generic AI couldn’t help

Students struggled to explain what was wrong in Packet Tracer. ChatGPT couldn’t “see” their lab setup, and screenshots or vague questions weren’t enough to pinpoint the exact problem.

A built-in AI assistant that understands context

Instead of relying on perfect prompts, students could express intent through simple actions: clicking a device, selecting endpoints, or highlighting an area. These gestures gave the AI the missing context, enabling it to deliver targeted, structured help that generic AI tools couldn’t provide.

Solution

I mapped student struggles into four clear intents. Students signaled intent with simple actions (clicking devices, selecting endpoints), and the AI read these gestures to deliver real-time, task-specific help. No perfect prompts needed.

General (Concept / CLI help)

Students could ask high-level questions (e.g., IPv4 vs. IPv6) without selecting a device. The AI replied with explanations and sample commands.
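
As an illustration, a concept answer might pair a short explanation with sample commands. The wording and the CONCEPT_HELP structure below are assumptions about how such an entry could look; the IOS commands themselves (ip address, ipv6 address, no shutdown) are standard interface configuration.

```python
# Hypothetical knowledge entry for a general question such as "IPv4 vs IPv6".
CONCEPT_HELP = {
    "ipv4 vs ipv6": {
        "explanation": (
            "IPv4 uses 32-bit dotted-decimal addresses; IPv6 uses 128-bit "
            "hexadecimal addresses, and one interface can carry both."
        ),
        "sample_commands": [
            "interface GigabitEthernet0/0",
            " ip address 192.168.1.1 255.255.255.0",  # IPv4 on the interface
            " ipv6 address 2001:DB8::1/64",           # IPv6 on the same interface
            " no shutdown",
        ],
    },
}

print(CONCEPT_HELP["ipv4 vs ipv6"]["explanation"])
```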

Select Device/Link (Configuration help)

Clicking a device or port flagged a config-level issue. The AI scanned for misconfigurations, suggested CLI fixes, and explained errors in context.
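
A minimal sketch of what a config-level scan could check, assuming a simplified Interface model; the field names and rules here are illustrative, not the prototype’s actual logic.

```python
from dataclasses import dataclass

@dataclass
class Interface:
    """Simplified view of a selected port (illustrative fields only)."""
    name: str
    ip: str | None  # configured IPv4 address, if any
    is_up: bool     # administratively up?

def scan_interface(iface: Interface) -> list[str]:
    """Return each finding paired with the CLI fix the assistant would suggest."""
    findings = []
    if not iface.is_up:
        findings.append(f"{iface.name} is shut down -> enter interface config and run 'no shutdown'")
    if iface.ip is None:
        findings.append(f"{iface.name} has no address -> run 'ip address <addr> <mask>'")
    return findings

# Example: a port that was never brought up.
print(scan_interface(Interface(name="Fa0/1", ip=None, is_up=False)))
```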

Select Connection (Flow help)

Choosing two devices let students trace a failed ping or communication flow. The AI pinpointed the drop point and suggested the next fix.
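
The drop-point idea can be sketched as a hop-by-hop walk between the two selected endpoints. Hop and find_drop_point are hypothetical names for illustration, assuming each hop exposes a couple of health checks.

```python
from dataclasses import dataclass

@dataclass
class Hop:
    """One interface along the path between the two selected endpoints."""
    name: str
    is_up: bool   # port administratively up?
    has_ip: bool  # address configured?

def find_drop_point(path: list[Hop]) -> str:
    """Walk the hops in order and report the first one that blocks traffic."""
    for hop in path:
        if not hop.is_up:
            return f"Traffic stops at {hop.name}: the port is shut down."
        if not hop.has_ip:
            return f"Traffic stops at {hop.name}: no IP address is configured."
    return "No drop point on this path; check routing tables or ACLs next."
```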

Select Area (Zone help)

Loop-selecting a section triggered a scan for logic or IP issues. The AI returned a prioritized issue list so students knew where to focus.
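
One common IP issue in a zone is a duplicate address, so here is a minimal sketch of that single check, under the assumption that the selected area can be reduced to a device-to-address mapping; scan_zone is an illustrative name.

```python
from collections import Counter

def scan_zone(device_ips: dict[str, str]) -> list[str]:
    """Flag duplicate addresses inside the selected zone, most widespread first."""
    issues = []
    for ip, count in Counter(device_ips.values()).most_common():
        if count > 1:
            owners = [name for name, addr in device_ips.items() if addr == ip]
            issues.append(f"Duplicate IP {ip} shared by {', '.join(owners)}")
    return issues

# Example: two PCs in the loop-selected area were given the same address.
print(scan_zone({"PC0": "192.168.1.10", "PC1": "192.168.1.10", "PC2": "192.168.1.11"}))
# -> ['Duplicate IP 192.168.1.10 shared by PC0, PC1']
```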

Each session generated a concept review log, organizing all AI interactions into a searchable, reusable learning history.
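
A sketch of what one entry in that log could hold, with hypothetical names (ReviewLogEntry, search_log); the tag-based search shown is one assumed way to make the history reusable.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class ReviewLogEntry:
    """One AI interaction, kept so the session becomes searchable study material."""
    timestamp: datetime
    intent: str    # e.g. "TRACE_FLOW"
    question: str  # what the student asked or selected
    answer: str    # what the assistant returned
    concepts: list[str] = field(default_factory=list)  # tags such as "subnetting"

def search_log(log: list[ReviewLogEntry], concept: str) -> list[ReviewLogEntry]:
    """Retrieve every past interaction tagged with a given concept."""
    return [entry for entry in log if concept in entry.concepts]
```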

Result

Even as a concept sprint, the prototype showed strong potential:

→ 8 of 9 students said they would choose the AI assistant over office hours for common troubleshooting, reducing support demand.
→ 6 students felt more confident experimenting with CLI tasks when backed by AI guidance.
→ Other course staff saw it as a way to cut repetitive support work and free time for deeper teaching.

Reflection

This project taught me how critical context-aware, task-specific AI can be for adoption. Students were already turning to ChatGPT for help, but generic answers weren’t enough — they needed AI that could “see” their actual workflow and guide them in the moment.

I also learned that effective AI design isn’t about replacing human expertise, but about reducing cognitive load and keeping users in flow. By shifting from command-driven interactions to intent-driven guidance, the assistant became less of a chatbot and more of a mentor embedded in the workflow.

The key takeaway: building trust in AI means grounding it in real user context, familiar interactions, and clear paths to action. These lessons apply beyond Packet Tracer — they’re central to designing AI tools that people not only try, but rely on.