
Auto-Scheduling

Transforming K-12 Student Scheduling Through Automation
Platform: FACTS SIS (School Information System)
Timeline: [Add timeline - e.g., Q2 2024 - Q4 2024]
Team: Product Design Manager (Lead), [Designer name], Product Management, Engineering


Results:

4.8/5.0: Average User Satisfaction Score (CSAT) from beta administrators. High satisfaction with conflict detection and auto-scheduler intelligence positioned FACTS ahead of competitors in scheduling capabilities.

52%: Reduction in time spent on manual scheduling conflicts, reported by pilot school administrators (measured through time-tracking studies and administrator interviews).


Overview:

My Role: Leading the Design Vision

Strategic UX & Systems Design Leadership: Drove the design vision for transforming scheduling from a reactive to proactive workflow, establishing design principles that guide decision-making across complex interdependencies.
Design Execution & Prototyping: Led the design team through user flows, wireframes, and high-fidelity prototypes for auto-scheduling configuration and conflict management.
Cross-Functional Collaboration: Partnered with product management on business/user goal alignment and with engineering on technical feasibility, especially for AI integration.
Usability Testing & Iteration: Planned and led usability testing, iterating on designs based on user feedback on AI interactions.
Success Metric Definition: Collaborated on defining KPIs for scheduling efficiency, conflict reduction, and administrator satisfaction.
Design System Leadership: Extended NPDS (Nelnet Product Design System) with 4+ new components for scheduling interfaces, establishing patterns now used across FACTS products.

Every spring, K-12 school administrators face one of their most stressful tasks: building student schedules for the next academic year, juggling 500 students, 40 teachers, limited classroom space, complex graduation requirements, and state compliance, all within a three-week deadline. The current auto-scheduler has significant gaps and provides poor feedback to administrators, creating more manual work for them in other areas. With this revamp, we aimed to make auto-scheduling capable of lifting much of that burden, freeing administrators to focus on the problems where their input is genuinely needed.

Designing for AI: Our MVP Process & Principles

The traditional design process involves understanding users, defining goals, ideating solutions, prototyping, and testing. Designing for AI is similar but requires additional and critical steps.

Identifying User Needs


Competitive Market Positioning:

The EdTech market is rapidly evolving with AI-driven solutions. Nelnet identified an opportunity to explore and validate a competitive market proposition to establish a foundation and demonstrate innovation in this space.

Students (Grades 9-12)

New Product Validation:

This MVP was designed to test the feasibility of an AI-driven student-support category. Its purpose was to learn whether AI could successfully analyze varied assignment documents and guide students through questioning, validating the fundamental technical concept and measuring interest from both students and teachers before substantial investment.

Teachers (Grades 9-12)

Identify User Needs

Assess & Map AI Capabilities

Design for Data & Feedback Loops

Who is Dialogic for?

For students who:

Are anxious or hesitant to ask questions in class

Need support to complete assignments

Get stuck while self-studying

Informing Future AI Strategy:

A key purpose was to gather crucial learnings and data (user engagement, AI effectiveness, technical challenges) from a real-world application. This would inform Nelnet's broader AI strategy, potential for new product lines, and future investment in AI-driven educational tools.

Ethical Considerations

For teachers who:

Are burdened with other responsibilities

Lack tools for post-class engagement

Assessing & Mapping AI Capabilities to User Needs

Focused Intelligence Assessment:

We collaborated with engineering teams to assess the underlying AI technology for our MVP: an LLM fine-tuned for document analysis and Socratic dialogue. We catalogued its strengths (analyzing structured text and files, generating relevant questions based on keywords) alongside its MVP limitations (limited contextual memory, potential struggles with highly complex or poorly formatted documents, and no general world knowledge).

Strategic Mapping:

We then strategically mapped these AI capabilities directly to the identified user needs. For instance, the AI's ability to extract key themes from a document was mapped to the student's need to understand the core requirements of an assignment. Its Socratic questioning capability was mapped to the need for guided learning rather than direct answers, and to the teacher's need to foster critical thinking. Features were scoped to what the AI could reliably deliver for the MVP, meeting the most pressing aspects of the user needs.


Dialogic’s Purpose

Step by Step teaching
Doubt Resolution 24/7

Adaptive Learning

Actionable Insights

Dialogic’s Tone

Approachable & Supportive
Informal
Reassuring

Student flow:

We identified the scenarios in which a student might use Dialogic and distilled them into a simple flow for our MVP.

Designing Data and Feedback Loops:

Since data is crucial for AI, our MVP focuses on clear data that teachers can readily supply. To continuously improve and facilitate adaptive AI learning, we've integrated a feedback loop to collect data on what is and isn't effective.


Data as the Engine:

We recognized that the primary input data (teacher-uploaded assignment documents) was crucial. The design for teachers focused on making this upload process simple and clear.


Explicit Feedback:

Simple feedback options on AI responses were included as an initial feedback loop. This MVP-level mechanism aimed to gather preliminary data on the perceived helpfulness of AI interactions, which could inform future prompt engineering or model adjustments. The purpose was to establish the habit of feedback collection.


Teacher Flow:

We designed a simple page where teachers can quickly add assignments for their classes, assign them to students, and view students' progress on those assignments.


Student Flow

We designed a dashboard where students can review their assignments, filter by completed and incomplete status, see which classes are available within Dialogic, and browse a history of their interactions with Dialogic.

Mobile Experience for Students:


Key Learnings for Designing AI (from the MVP):

Start with Solvable User Needs, Not Just AI Capabilities: It's tempting to be technology-first with AI. However, grounding the project in clearly defined user needs (student assignment clarity, teacher workload reduction) and then mapping AI capabilities to those needs, rather than the other way around, ensured the MVP remained focused and purposeful. We learned to ask "How can AI best solve this specific user problem?" not "What cool things can this AI do?"

Data (document) Quality is a Key Dependency: For an AI that analyzes documents, the quality and structure of those input documents significantly impact AI performance. This learning highlighted the need for clear guidelines for teachers on document preparation or, in future iterations, more robust AI parsing capabilities.

Feedback Loops are Non-Negotiable, Even in an MVP: AI products thrive on data and iteration. Establishing even simple data and feedback loops (like the thumbs up/down) in the MVP was crucial. It sets the precedent for a data-driven improvement cycle and provides invaluable, although initial, signals on AI performance and user perception.

Prototyping AI is Prototyping Conversations & Probabilities: Prototyping for AI, especially conversational AI, goes beyond static screens. We learned the importance of prototyping interaction flows that account for the AI's probabilistic nature and potential variations in responses. This helped us anticipate conversational challenges and design more resilient interactions.

MVP for AI Means Managing Expectations Rigorously: With an AI MVP, especially one with "bare minimum" features, managing user expectations about the AI's intelligence and scope is paramount. Clear UI communication, onboarding, and even the AI's conversational design must reinforce its specific purpose and limitations to prevent user frustration and build initial trust. Our negative impact analysis highlighted this early on.


Together, these learnings from the MVP proved crucial for designing effective and responsible AI-powered products.
