Protosynthetic generated mockups collage

PROTOSYNTHETIC

When the Interview Question Becomes the Product

How I turned an open-ended design challenge into an AI-powered requirements gathering system that generates design artifacts

Solo Designer & Developer
3 Weeks
React + Node.js + Claude API

At a Glance

A tool that helps designers figure out what to build before they build it. I got an interview question, "design a 1000-floor elevator", and instead of answering it, I built a system that asks all the questions you'd need answered first. It generates a branching question tree, pulls answers from uploaded documents, and writes the requirements doc for you.

3 Weeks: Concept to Prototype
Interview → Tool: From Question to Product
4 AI Agents: Multi-Agent System
3 Modes: Manual, Suggested, Auto
2 Strategies: Breadth-First & Depth-First
Solo Project: Designer & Developer
Protosynthetic interface showing multiple generated mockup variations from a single design challenge

THE CHALLENGE

During an interview process, I received a design task:

"Design the interface for a 1000-floor elevator."

It sounds simple, but you can't answer it well without a lot more information.

Who uses this elevator? What building type? What constraints exist? What problems are we actually solving? The question was deliberately vague - a test of how I'd handle ambiguity.

I could have made assumptions and designed something generic. Instead, I built a tool that systematically eliminates assumptions through AI-powered requirements gathering.

THE APPROACH

Rather than answer the question, I created a system that helps anyone answer similar open-ended design challenges:

  1. AI asks progressively tailored questions about user needs and constraints
  2. Branching question tree explores both breadth and depth of requirements
  3. Knowledge base integration suggests answers from existing documentation
  4. Requirements automatically distilled from accumulated Q&A
  5. Mockups generated on demand at any point in the exploration

That became Protosynthetic. An AI asks the questions, you answer them, and the system assembles requirements and generates mockups as you go.

RESEARCH: WHY THIS QUESTION IS HARD

LEARNING FROM OTHERS

I looked into how others had approached this classic interview question. What I found was unanimous: everyone emphasized that any definitive answer would require making assumptions that might not be true.

The simplicity of the question was the problem. Without understanding the users, their needs, the building context, accessibility requirements, traffic patterns, or technical constraints, you're just guessing.

From my architecture studies, I knew this intimately. So many parameters influence a building and its interior systems. Elevators aren't just boxes with buttons - they're complex systems shaped by human behavior, building codes, structural limitations, and usage patterns.

Making assumptions and designing from there would have been fine, but the question was clearly testing something else.

Discussion screenshots 1-4

Online discussions revealed the core problem: the question is deliberately vague. Any solution requires assumptions about users, context, and constraints that may or may not be valid.

Original elevator design question

THE ASSIGNMENT

"Design the interface for a 1000-floor elevator."

That's it. No additional context. No user research. No requirements document. Just seven words and a blank canvas.

FROM CHILDHOOD GAMES TO PROFESSIONAL TOOLS

THE 20 QUESTIONS BREAKTHROUGH

Then I remembered something from childhood: a handheld 20 Questions game. You'd think of any object, and through a series of progressively tailored yes/no questions, the device would pinpoint what you had in mind with remarkable accuracy.

What made it work was the questions. Each one built on previous responses and narrowed the possibilities.

What if I could create a professional version of this for design requirements?

Instead of me making assumptions about the elevator, what if an AI asked questions about what the stakeholder wanted to build? Questions that built on each other. Questions that surfaced considerations you might otherwise miss.

I wanted to make requirements gathering feel more like that game, where each answer shapes what gets asked next.

20 Questions game inspiration from childhood

The 20 Questions handheld game used progressive questioning to narrow down infinite possibilities. Could this same principle apply to design requirements?

THE CORE CONCEPT

Traditional requirements gathering is linear: stakeholder describes needs, designer asks clarifying questions, document gets written.

This approach is different: the AI becomes the interviewer, systematically exploring the problem space through a branching question tree. Each answer spawns new questions. The tree structure ensures both breadth (covering all major considerations) and depth (diving into specifics where needed).
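
To make this concrete, here's a minimal sketch of how a node in that tree might be typed - the field names are illustrative, not the project's actual schema:

```typescript
// Illustrative question-tree node; field names are assumptions,
// not the project's actual schema.
interface QuestionNode {
  id: string;
  question: string;
  answer?: string;           // unset until the user (or AI) answers
  parentId: string | null;   // null for top-level questions
  children: QuestionNode[];  // follow-ups spawned by this answer
  depth: number;             // 0 = top-level; capped at 5 in depth-first mode
}
```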

At any point, generate mockups based on the accumulated requirements. Keep exploring, generate again. The requirements evolve, the designs evolve.

The line between gathering requirements and actually designing got blurry in a useful way.

Initial concept sketch

Early sketch exploring how AI-guided questioning could systematically surface design requirements without forcing assumptions

Two approaches: parallel vs tree

Deciding how the AI should think

With the core concept defined, I needed to figure out how the AI would structure its questioning. Two architectures emerged:

Parallel Sequences

OPTION 1: Parallel Sequences

Pros:

  • Diversified exploration
  • Different temperature levels
  • Independent progression

Cons:

  • Siloed questioning
  • Missed interconnections
  • Harder to see relationships
  • Doesn't reflect unified solution

Branching Tree

✓ SELECTED
OPTION 2: Branching Tree

Pros:

  • Breadth and depth simultaneously
  • Contextually connected
  • Explores relationships
  • Unified solution space
  • Visual structure makes sense

Cons:

  • More complex to implement
  • Requires careful prompt engineering

Option 2's tree structure better reflected how design considerations interconnect. Every question relates to the central problem while exploring distinct angles.

The complete system design

Comprehensive system architecture sketch

Comprehensive sketch showing the full system: tree visualization on the left, chat interface on the right, with automation features and mockup generation. This became the implementation blueprint.

BUILDING THE SYSTEM

FROM SKETCHES TO SPECIFICATION

With the architecture decided, I needed a comprehensive plan. I put together a detailed specification document outlining all functionality, data structures, API endpoints, and user flows.

The process was iterative - sketching, prompting an LLM to pressure-test the plan, refining, repeating. The goal was to ensure the specification was both comprehensive and executable before writing any code.

Having the spec nailed down before writing code saved me a lot of backtracking.

Comprehensive specification document

Comprehensive specification document covering system architecture, data models, API design, and user flows. Created through iterative refinement with LLM feedback to ensure completeness.

MULTI-AGENT ARCHITECTURE

The system required four distinct AI agent roles, each with specific responsibilities and its own prompt engineering:

Knowledge Base Processor

Takes uploaded documents or entered text at project initialization and distills them into a unified knowledge corpus. This corpus becomes the source for suggesting answers to questions throughout the session.

Responsibilities:
  • Parse and chunk uploaded documents
  • Extract key facts, requirements, constraints
  • Create searchable knowledge base
  • Maintain semantic understanding for retrieval
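
As a rough sketch, the parse-and-chunk step above might look like this - the chunk size and overlap values are assumptions, not the project's actual settings:

```typescript
// Illustrative chunker for uploaded documents; the size and overlap
// values are assumptions, not the project's actual settings.
interface Chunk {
  text: string;
  source: string; // originating document name
  index: number;  // position within that document
}

function chunkDocument(text: string, source: string, size = 1000, overlap = 200): Chunk[] {
  const chunks: Chunk[] = [];
  // Overlapping windows preserve context that would be lost at hard cuts.
  for (let start = 0, i = 0; start < text.length; start += size - overlap, i++) {
    chunks.push({ text: text.slice(start, start + size), source, index: i });
  }
  return chunks;
}
```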

Four specialized AI agents handle distinct aspects of the system: knowledge processing, question generation, requirements synthesis, and mockup creation.

SEEING IT IN ACTION

FROM CONCEPT TO WORKING TOOL

After three weeks of design and development, Protosynthetic was ready. The system works in four stages:

  1. SETUP: Define the project and optionally upload knowledge materials
  2. EXPLORATION: Answer questions as the AI builds a requirement tree
  3. SYNTHESIS: Requirements automatically documented from accumulated answers
  4. GENERATION: Create mockups on demand, iterate, regenerate

Let me walk you through each stage.

STAGE 1: PROJECT INITIALIZATION

Every session starts with context. Describe what you're building, upload any relevant documents (RFPs, specs, research), and choose your exploration strategy (breadth-first or depth-first). The knowledge base processor immediately goes to work, distilling uploaded materials into a searchable corpus.

Project initialization: describe what you're building, upload knowledge materials, and select exploration strategy. The system processes everything before generating the first questions.

STAGE 2: SYSTEMATIC EXPLORATION

The core of Protosynthetic is the Q&A interface. Tree visualization on the left shows the evolving question structure. Chat interface on the right lets you answer the current question.

Main Q&A interface

Three ways to answer each question:

Method 1: Manual Entry

Type your own answers for complete control. Best when you have specific requirements or want to think through answers carefully.

Method 2: AI-Suggested Answers

Let the AI suggest answers based on your knowledge base. Extracted answers are editable before submission. Only available if you uploaded materials during setup.

Method 3: Full Automation

Auto-Answer mode continuously generates and submits AI-suggested answers without user intervention. Rapidly builds out the requirement tree. Can be stopped at any point. Only available with knowledge base.
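
Under the hood, Auto-Answer is essentially a loop. A hedged sketch - every helper name here is an assumption, not the project's actual API:

```typescript
// Hypothetical Auto-Answer loop; all helper names are illustrative.
type Question = { id: string; text: string };

async function autoAnswer(
  nextQuestion: () => Question | null,                     // next unanswered node
  suggestAnswer: (q: Question) => Promise<string>,         // knowledge-base agent
  submitAnswer: (q: Question, a: string) => Promise<void>, // stores answer, spawns follow-ups
  stopped: () => boolean,                                  // user can stop at any point
): Promise<void> {
  for (let q = nextQuestion(); q && !stopped(); q = nextQuestion()) {
    const answer = await suggestAnswer(q);
    await submitAnswer(q, answer);
  }
}
```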

BREADTH-FIRST VS DEPTH-FIRST

The two exploration modes produce noticeably different question trees:

Breadth-First Exploration

Breadth-First mode systematically covers high-level considerations across the problem space before diving into specifics. Notice how the tree grows wide first, then deep - ensuring major aspects aren't missed.

Four top-level questions generated first. Then three follow-ups per question. Then three follow-ups for each of those. Systematic coverage.

Depth-First Exploration

Depth-First mode prioritizes exploring follow-up questions to maximum depth (5 levels) before backtracking to cover sibling branches. Better for projects needing detailed requirement exploration quickly.

Goes deep on one branch first, exploring nuances and edge cases. Then backtracks to explore alternative angles. Prioritizes specificity.
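
The difference between the two modes boils down to how the next unanswered question is picked from the frontier. A minimal sketch, assuming the frontier is a plain array:

```typescript
// Queue vs. stack: the only difference between the two modes in this sketch.
type Strategy = "breadth-first" | "depth-first";

function pickNext<Q>(frontier: Q[], strategy: Strategy): Q | undefined {
  // Breadth-first: dequeue from the front, so every sibling branch gets a
  // shallow question before any branch goes deep.
  // Depth-first: pop from the back, so the newest follow-up is pursued to
  // maximum depth before backtracking to siblings.
  return strategy === "breadth-first" ? frontier.shift() : frontier.pop();
}
```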

STAGES 3 & 4: SYNTHESIS AND GENERATION

At any point during exploration - after 5 questions or 50 - you can generate mockups. The UX Researcher agent synthesizes all answers into a requirements document. The UX Wireframe Designer agent creates working mockups based on those requirements. Generate once. Answer more questions. Generate again.
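
The synthesis step is a single Claude call over the accumulated Q&A. A simplified sketch - the system prompt is my paraphrase, though the model ID matches what the project uses:

```typescript
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

// Simplified UX Researcher call; the prompt wording is a paraphrase,
// not the project's actual prompt.
async function synthesizeRequirements(qa: { question: string; answer: string }[]): Promise<string> {
  const transcript = qa.map((p) => `Q: ${p.question}\nA: ${p.answer}`).join("\n\n");
  const response = await client.messages.create({
    model: "claude-sonnet-4-20250514",
    max_tokens: 4096,
    temperature: 0.3, // low temperature keeps synthesis faithful to the answers
    system: "You are a UX researcher. Distill this Q&A transcript into a structured requirements document.",
    messages: [{ role: "user", content: transcript }],
  });
  const block = response.content[0];
  return block.type === "text" ? block.text : "";
}
```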

Iterative Mockup Generation

Generate mockups at any point. As you answer more questions and requirements become more detailed, regenerate to see designs evolve. Each generation incorporates all accumulated context.

THREE VIEWS OF THE SOLUTION

Each generation produces three outputs:

Mockup tab output

The Mockup tab shows the live, interactive design. Built with React, Tailwind, and DaisyUI - not static images. You can interact with the UI, test flows, and see real component behavior.

WHAT CAME OUT OF IT

Here's what emerged from systematically exploring the original interview question: "Design the interface for a 1000-floor elevator." The system didn't produce one design. It produced different designs depending on what I told it about the building, the users, and the constraints.

Collage of all generated mockups

Mockups generated through iterative questioning, each revealing different design directions based on user answers. The variety demonstrates how systematic exploration uncovers nuances that assumptions would miss.

What this project taught me

LESSONS FROM META-DESIGN

1

QUESTIONING THE QUESTION

The interview question was testing how I'd handle ambiguity. Most people would ask a few clarifying questions, make assumptions, and start designing. Instead of answering, I built a tool that generates every clarifying question you'd need. That turned out to be more useful than any single elevator interface.

2

WHERE THE HUMAN STILL MATTERS

The AI is good at generating questions and pulling answers from documents. It's not good at deciding which answers actually matter for a specific project. I still chose which branches to explore and when requirements were detailed enough to generate mockups. That judgment is the design work.

3

A DIFFERENT KIND OF TOOL

Figma waits for you to do something. Protosynthetic pushes back. It asks questions I hadn't considered and surfaces trade-offs I would have missed working alone. Using it felt less like operating software and more like thinking with a second brain.

4

REQUIREMENTS AS DESIGN

Answering the system's questions didn't feel like filling out a form. It felt like designing. Each answer shaped the next question, and the branching tree itself became a map of the problem space.

5

CONTEXT CHANGES EVERYTHING

A 1000-floor elevator in a residential tower needs completely different solutions than one in a freight hub or a hospital. Running the same question through different contexts produced wildly different requirements and designs each time.

How it's built

Stack and structure

FRONTEND: React + Next.js + TailwindCSS + DaisyUI

  • Tree visualization using custom recursive components
  • Real-time chat interface with streaming AI responses
  • Split-panel layout with resizable sections
  • Mockup rendering with live React component preview

BACKEND: Next.js 15 (App Router)

  • API Routes for agent orchestration (/app/api/*)
  • Session management for multi-turn conversations
  • Knowledge base processing and chunking
  • Prompt template management for different agent roles
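
A representative route handler might look like this - the endpoint path and payload shape are illustrative, not the project's actual API:

```typescript
// app/api/answer/route.ts - hypothetical endpoint; path and payload
// shapes are illustrative.
import { NextResponse } from "next/server";

export async function POST(request: Request) {
  const { sessionId, questionId, answer } = await request.json();
  // 1. Store the answer in the session's question tree.
  // 2. Ask the question-generation agent for follow-ups.
  const followUps: string[] = []; // placeholder for generated questions
  return NextResponse.json({ sessionId, questionId, followUps });
}
```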

AI LAYER: Anthropic Claude API

  • Claude Sonnet 4 (claude-sonnet-4-20250514) for all generation tasks
  • Question generation and answer suggestions
  • Requirements document synthesis
  • UI mockup code generation
  • Temperature tuning per agent type (0.3-0.7 range)
  • Context window management for long conversations
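
Per-agent temperatures stay in that 0.3-0.7 range; the exact value assigned to each agent below is an assumption:

```typescript
// Illustrative per-agent temperature map within the project's 0.3-0.7
// range; the individual assignments are assumptions.
const AGENT_TEMPERATURE = {
  knowledgeProcessor: 0.3, // deterministic extraction from documents
  questionGenerator: 0.7,  // diverse exploration of the problem space
  uxResearcher: 0.3,       // faithful requirements synthesis
  wireframeDesigner: 0.5,  // creative but grounded mockup code
} as const;
```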

DATA STRUCTURES:

  • Tree structure storing Q&A with parent/child relationships
  • Knowledge base as processed content with categories
  • Requirements document as structured JSON
  • Mockup specifications as executable code + metadata
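
In TypeScript terms, the requirements document and mockup specification might be shaped roughly like this - field names are illustrative:

```typescript
// Hypothetical shapes; field names are illustrative, not the actual schema.
interface RequirementsDoc {
  summary: string;
  requirements: { category: string; items: string[] }[];
  openQuestions: string[]; // branches not yet explored
}

interface MockupSpec {
  code: string; // executable React + Tailwind + DaisyUI source
  metadata: { generatedAt: string; answeredQuestions: number };
}
```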

GENERATION PIPELINE:

  1. User answers question → stored in tree
  2. UX Researcher synthesizes all answers → requirements doc
  3. UX Wireframe Designer receives requirements → generates code
  4. Code executed and rendered → live interactive mockup
  5. Specs and next steps documented automatically
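
Wired together, the pipeline reduces to a couple of sequenced agent calls. A sketch with assumed helper names:

```typescript
// End-to-end pipeline sketch; both agent helpers are assumed names.
type QA = { question: string; answer: string };

async function generate(
  answers: QA[],
  uxResearcher: (qa: QA[]) => Promise<string>,          // step 2: requirements doc
  wireframeDesigner: (reqs: string) => Promise<string>, // step 3: mockup code
) {
  const requirements = await uxResearcher(answers);
  const mockupCode = await wireframeDesigner(requirements);
  return { requirements, mockupCode }; // step 4 renders this code live
}
```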

Try it yourself

Protosynthetic is live and the code is open source.

Full Project Walkthrough

Full demonstration showing Protosynthetic in action: from project setup through question exploration to mockup generation. This 10-minute walkthrough covers all features and demonstrates both exploration modes.