
AI UX Research Assistant

I built this as a structured AI workflow that turns raw usability notes into extracted findings, routes them through domain-aware review, and produces synthesized research reports.

I built the AI UX Research Assistant as a structured workflow for turning raw usability notes into more usable research outputs. Instead of treating research synthesis as a single prompt, I approached it as a multi-step process that extracts findings, routes the work through domain-aware logic, includes human review, and generates multiple reports before producing a final synthesis.

Research Synthesis · Human in the Loop · Workflow Design · Structured Outputs

What I built

I designed this as a staged research-support workflow rather than a single prompt.

I designed the workflow to start with raw usability test notes. From there, it extracts three foundational layers:

  • raw findings
  • themes
  • key evidence
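Those three layers can be modeled as a simple structured container. This is a minimal sketch; the field names and schema are my assumption, not the project's actual data model.

```python
from dataclasses import dataclass, field

@dataclass
class ExtractionResult:
    """The three foundational layers extracted from raw usability notes.

    Field names are illustrative; the real schema is an assumption.
    """
    raw_findings: list[str] = field(default_factory=list)
    themes: list[str] = field(default_factory=list)
    key_evidence: list[str] = field(default_factory=list)

result = ExtractionResult(
    raw_findings=["Users missed the export button"],
    themes=["discoverability"],
    key_evidence=["P3: 'I didn't see any way to export.'"],
)
```

Keeping the layers as separate fields, rather than one blob of text, is what makes the later routing and reporting steps able to reason over them independently.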

I then use those outputs to recommend the most relevant subject matter expert domain, choosing from categories like healthcare, financial services, government services, e-commerce, SaaS/product, and general business.
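The write-up doesn't show how routing works internally, so here is a hypothetical keyword-overlap router over those same categories, purely as a stand-in for the real (likely LLM-based) recommendation step:

```python
# Hypothetical keyword-overlap router; the real routing logic is not
# described in the write-up, so this is a minimal illustrative stand-in.
SME_DOMAINS = {
    "healthcare": {"patient", "clinician", "ehr", "hipaa"},
    "financial services": {"payment", "account", "transaction", "banking"},
    "government services": {"citizen", "permit", "benefits", "agency"},
    "e-commerce": {"cart", "checkout", "catalog", "shipping"},
    "saas/product": {"dashboard", "onboarding", "subscription", "workspace"},
}

def recommend_sme(themes: list[str]) -> str:
    """Score each domain by keyword overlap; fall back to general business."""
    words = {w.lower() for theme in themes for w in theme.split()}
    scores = {d: len(words & kws) for d, kws in SME_DOMAINS.items()}
    best, score = max(scores.items(), key=lambda kv: kv[1])
    return best if score > 0 else "general business"

print(recommend_sme(["checkout friction", "cart abandonment"]))  # e-commerce
```

Whatever the scoring mechanism, the key design point is that the router only produces a recommendation, which the next step puts in front of a person.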

Once the domain is identified, I keep a human-in-the-loop step in place so the recommendation can be accepted or overridden before the reporting stage continues.

Workflow structure

Extraction, routing, review, parallel reporting, and synthesis.

Extract findings → Route to SME → Human review → Generate reports → Synthesize

I built the assistant as a LangGraph workflow rather than a single chat interaction. The sequence is:

  1. extract findings from raw notes
  2. route the research to a recommended SME domain
  3. allow human review of that recommendation
  4. generate three reports in parallel
  5. synthesize those reports into a final output
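The five stages above can be sketched as a plain-Python pipeline. In the real system these are LangGraph nodes and edges; every helper here is a placeholder for an LLM-backed step, and all names are illustrative.

```python
# Illustrative sketch of the five-stage sequence. The real system wires
# these as LangGraph nodes; the helpers are placeholder stand-ins.

def extract(notes: str) -> list[str]:
    return [line.strip() for line in notes.splitlines() if line.strip()]

def route(findings: list[str]) -> str:
    return "saas/product"  # placeholder domain recommendation

def report(kind: str, sme: str) -> str:
    return f"{kind} report ({sme})"

def run_workflow(raw_notes: str, reviewer=None) -> dict:
    state = {"findings": extract(raw_notes)}            # 1. extract
    state["sme"] = route(state["findings"])             # 2. route
    if reviewer is not None:                            # 3. human review
        state["sme"] = reviewer(state["sme"])
    state["reports"] = {                                # 4. three reports
        k: report(k, state["sme"])
        for k in ("ux_research", "product_design", "sme_specific")
    }
    state["synthesis"] = " | ".join(state["reports"].values())  # 5. synthesize
    return state
```

Note that the review step sits between routing and reporting: the reports are generated from whatever domain survives review, not from the raw recommendation.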

The three reports I currently generate are:

  • a UX research report
  • a product design report
  • an SME-specific report

Those are then combined into a final synthesized report for stakeholders.
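Since the three reports are independent once the SME domain is fixed, they can run concurrently. A sketch using `concurrent.futures` (the `write_report` helper is a hypothetical stand-in for an LLM call):

```python
from concurrent.futures import ThreadPoolExecutor

def write_report(kind: str, sme: str) -> str:
    # Stand-in for an LLM call producing one report.
    return f"{kind} report (reviewed SME: {sme})"

def generate_reports(sme: str) -> dict[str, str]:
    """Generate the three reports concurrently from the reviewed SME choice."""
    kinds = ["ux_research", "product_design", "sme_specific"]
    with ThreadPoolExecutor(max_workers=3) as pool:
        futures = {k: pool.submit(write_report, k, sme) for k in kinds}
    return {k: f.result() for k, f in futures.items()}
```

Threads are enough here because the work is I/O-bound (waiting on model responses), which is the usual reason to parallelize this stage.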

Human in the loop

I designed review into the workflow on purpose.

One of the important parts of this project for me is that I did not try to remove judgment from the process. The system can recommend an SME based on the extracted findings, but that recommendation can still be reviewed and changed before the workflow continues.

That matters to me because research work benefits from structure, but it still needs human judgment in the places where context and interpretation matter most.

  • I use the extracted structure to recommend an SME.
  • I keep a person in the loop so that recommendation can be accepted or overridden before the workflow continues.
  • I generate reports from the reviewed SME choice, not from a hidden automatic decision.

That keeps interpretation visible where domain context still matters.
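The review gate itself reduces to a small, explicit function: the workflow only proceeds with a domain a person has had the chance to change. A minimal sketch, assuming a simple accept-or-override interface:

```python
from typing import Optional

def review_sme(recommended: str, override: Optional[str] = None) -> str:
    """Human-in-the-loop gate: return the reviewer's override when one is
    supplied, otherwise accept the system's recommendation."""
    return override if override else recommended

accepted = review_sme("saas/product")                   # reviewer accepts
changed = review_sme("saas/product", "healthcare")      # reviewer overrides
```

Making this a named step, rather than burying the decision inside the router, is what keeps the override visible in the workflow.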

Outputs

Multiple report layers before the final synthesis.

The workflow I built can produce:

  • extracted findings
  • extracted themes
  • key evidence
  • a UX research report
  • a product design report
  • an SME-specific report
  • a final synthesized report

That gives the project a useful range of outputs instead of reducing everything to one generic summary.

I already have the core workflow in place, and I’m presenting this project as a live, usable part of the portfolio. I want visitors to be able to try the workflow directly on the site while keeping the overall experience guided and structured.

There are still parts of the workflow I want to keep refining over time, especially around the review step and how the outputs are presented, but the underlying system is already strong enough to demonstrate the concept in a real way.

Live demo / current implementation

Try the workflow I built, from raw notes through report generation.

I made the extraction, review, and output steps explicit so the workflow feels like a structured tool rather than a generic AI summary field.

Live workflow

Turn raw notes into structured outputs

Start with raw research notes, review the recommended SME, then generate the UX, product, SME, and synthesis outputs.


Why this project matters

I’m interested in AI workflows that support research judgment instead of replacing it.

What interests me most about this project is that it applies AI to a part of UX work that often benefits from more structure: organizing raw notes, identifying patterns, shaping outputs for different audiences, and keeping human review in the loop where it matters.

It reflects the direction I’m increasingly interested in: using AI to support real workflows in a way that is useful, structured, and grounded in how people actually work.