
LoopHaven
Music producers don’t have a volume problem; they have a discovery efficiency problem.
Modern sample libraries contain thousands of sounds, but finding the right one often interrupts creative momentum.
LoopHaven explores an audio-first discovery system that replaces text-based search with sound-to-sound interaction and instant cloud-to-VST synchronization, enabling producers to discover, audition, and use sounds without leaving their creative flow.

Challenge
During active production sessions, creators frequently switch contexts between listening, browsing folders, applying filters, and translating auditory intent into keywords. This cognitive overhead disrupts creative flow and slows iteration.

Opportunity
Create a discovery experience that allows producers to interact directly with sound itself, minimizing translation, decision fatigue, and interruption during music creation.

Timeline
Nov–Dec 2025

Responsibilities
UX research
Sketching & ideation
Interaction modeling
End-to-end flow design
Mobile UX/UI
System definition

Disciplines
Product Design
UX Strategy
Interaction Design
Visual Design
Design Systems
AI-Assisted Experience Design

Tools
Figma
Adobe CS
Miro

Initial Problem Discovery
Early exploration focused on understanding how producers actually discover sounds during active sessions — not how tools expect them to search.
Research revealed that discovery often happens mid-composition, when creative intent is fluid and difficult to describe verbally. Keyword search, filters, and folders require explicit intent at moments when producers are operating intuitively.
This mismatch between creative behavior and traditional discovery tools frequently results in stalled sessions, abandoned ideas, and unnecessary cognitive load.

RESEARCH

Vision
LoopHaven was designed to support discovery as an active part of music creation rather than a separate preparatory step. Instead of requiring producers to define intent upfront, the system responds directly to audio input, allowing ideas to evolve through listening and iteration.
By shifting discovery from language-based search to sound-based interaction, the experience prioritizes momentum over precision and reduces the friction that typically pulls creators out of flow.

Competitive Landscape
To better understand how producers currently discover and use sounds, we reviewed workflows across established platforms and emerging tools.
The goal was not to evaluate individual products, but to identify recurring patterns across the ecosystem.

Observed Industry Patterns
Across platforms, several consistent behaviors emerged:
Discovery is primarily driven by text-based metadata (tags, filters, folders, descriptors)
Musical intent is often expressed indirectly through language rather than sound
Discovery, analysis, and usage typically occur in separate environments
File-based workflows remain the dominant method of moving sounds into the DAW
Learning systems, where present, are usually limited to browsing behavior rather than production context

Identified Opportunity
When viewed holistically, the market reveals a shared gap: the moment between hearing an idea and using a sound remains fragmented.
Producers frequently need to:
pause playback
leave the DAW
browse external tools
audition unrelated samples
manually import files
Each step introduces friction during moments when creative momentum is most fragile.

Strategic Position
Rather than replacing existing tools or competing on library size, LoopHaven focuses on:
audio-first discovery instead of keyword search
musical compatibility over categorical similarity
continuous flow instead of discrete steps
cloud-synced usage rather than file management
Its role is to act as an intelligent connective layer between discovery and production.

Strategic Takeaway
The opportunity was not found by outperforming individual platforms, but by stepping back and observing how producers move across tools, not within them.
LoopHaven addresses friction at the system level — where discovery, intelligence, and execution intersect.

Early Direction: Metadata-Based Discovery
The initial direction explored improving traditional sample discovery through familiar search and browsing patterns. Discovery relied on keywords, BPM, key, tags, and category-based navigation — reflecting how most existing sample libraries organize large sound collections.
While this approach improved visual clarity and navigation speed, discovery still required producers to pause playback and translate auditory intent into descriptive language.
Early exploration revealed that the core friction was not interface quality, but the discovery model itself. Despite refinement, search remained a visual and cognitive task disconnected from listening — reinforcing interruption at moments when creative flow was most fragile.
This insight led to a strategic pivot away from metadata-driven browsing toward an audio-first, sound-to-sound discovery system.

Key Insights & Strategic Pivot

The Problem
Browsing folders and metadata requires producers to translate auditory ideas into descriptive language — a task that interrupts creative flow.

The Principle
Music discovery should respond to sound itself, not require users to describe what they are hearing.

The Pivot
Shift discovery from search-driven interaction (tags, filters, categories) to audio-first interaction based on sound similarity.

Sound-to-Sound Discovery
Discovery begins from listening rather than searching, enabling rapid exploration without keywords, filters, or manual browsing.
Analyze mode allows any sound to be used as a starting point for discovery. Instead of browsing libraries, producers input audio directly and receive musically compatible alternatives based on spectral balance, rhythm, and harmonic structure.
This approach allows iteration within the same creative context, eliminating the need to restart discovery or search for alternatives mid-session.
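
To make the interaction concrete, here is a minimal sketch of how sound-to-sound matching along those three axes could work. It assumes librosa for feature extraction; the feature choices, weighting, and function names are illustrative, not LoopHaven's production pipeline.

```python
# Sketch: rank library sounds by similarity to a query clip.
# Features roughly map to the three axes above: spectral balance
# (MFCC means), harmonic structure (chroma means), and rhythm (tempo).
import numpy as np
import librosa

def sound_features(path: str) -> np.ndarray:
    y, sr = librosa.load(path, sr=22050, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)  # spectral balance
    chroma = librosa.feature.chroma_cqt(y=y, sr=sr).mean(axis=1)     # harmonic profile
    tempo, _ = librosa.beat.beat_track(y=y, sr=sr)                   # rhythm
    return np.concatenate([mfcc, chroma, [np.mean(tempo) / 200.0]])  # crude tempo scaling

def rank_by_similarity(query: str, library: list[str]) -> list[tuple[str, float]]:
    q = sound_features(query)
    scored = []
    for path in library:
        v = sound_features(path)
        cos = float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v) + 1e-9))
        scored.append((path, cos))
    return sorted(scored, key=lambda s: s[1], reverse=True)          # most similar first
```

A real system would precompute and index library embeddings rather than re-analyze every file per query; the linear scan here is only for clarity.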

Remix Mode
Remix mode supports forward exploration from a single idea. Rather than replacing sounds, producers generate structured variations that preserve musical identity while exploring alternate timing, rhythm, and phrasing.
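
As a rough illustration of "structured variation", the sketch below re-phrases a loop using only its own material: it slices the audio at detected beats and reorders the slices while keeping the downbeat anchored. It assumes librosa and numpy, and is a stand-in for the idea rather than the actual Remix engine.

```python
# Sketch: produce a timing variation of a loop from its own material.
# The same audio slices are kept (preserving sonic identity) but
# re-phrased in time by reordering beat-aligned segments.
import numpy as np
import librosa

def remix_variation(path: str, seed: int = 0) -> tuple[np.ndarray, int]:
    y, sr = librosa.load(path, sr=None, mono=True)
    _, beat_frames = librosa.beat.beat_track(y=y, sr=sr)
    cuts = librosa.frames_to_samples(beat_frames)
    bounds = [0, *cuts.tolist(), len(y)]
    slices = [y[a:b] for a, b in zip(bounds[:-1], bounds[1:]) if b > a]
    rng = np.random.default_rng(seed)
    head, tail = slices[0], slices[1:]  # keep the downbeat as an identity anchor
    rng.shuffle(tail)                   # vary phrasing in the remainder
    return np.concatenate([head, *tail]), sr
```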

Radio Mode
Radio mode is designed for moments when producers do not yet have a clear direction — only the desire to keep creating. The system plays a continuous stream of musically compatible sounds based on listening behavior and production history. Discovery occurs passively, without filters, categories, or manual input.
Radio mode removes decision-making from discovery. Simple interactions such as Like or Skip update the preference model in real time, shaping what plays next while keeping attention focused on listening rather than interface.
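
A minimal sketch of that Like/Skip loop, assuming each sound is already represented as a feature vector (for example, from the similarity sketch above): a running taste vector drifts toward liked sounds and away from skips, and the next track is chosen by alignment with it. The class and update rule are illustrative, not the real preference model.

```python
# Sketch: an online preference model driven by Like/Skip.
# A running "taste" vector drifts toward liked sounds and away from
# skipped ones; the next track is the candidate best aligned with it.
import numpy as np

class RadioModel:
    def __init__(self, dim: int, learning_rate: float = 0.2):
        self.taste = np.zeros(dim)
        self.lr = learning_rate

    def feedback(self, features: np.ndarray, liked: bool) -> None:
        direction = 1.0 if liked else -0.5  # skips push away, but more gently
        self.taste += self.lr * direction * (features - self.taste)

    def next_track(self, candidates: dict[str, np.ndarray]) -> str:
        def alignment(v: np.ndarray) -> float:
            return float(np.dot(self.taste, v) /
                         (np.linalg.norm(self.taste) * np.linalg.norm(v) + 1e-9))
        return max(candidates, key=lambda name: alignment(candidates[name]))
```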

Ecosystem Architecture
LoopHaven operates as a connected system across mobile, desktop, and plugin environments.
All discovery actions feed a shared preference model stored in the cloud. When a sound is saved on any device, it becomes immediately available inside the LoopHaven VST plugin, removing the need for manual downloads, file management, or importing workflows.
This architecture allows discovery to happen anywhere while usage remains anchored inside the production environment.
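
The sync behavior can be pictured as a small companion process beside the plugin. The sketch below polls a hypothetical account endpoint and mirrors newly saved sounds into a local cache folder the plugin reads from; the endpoint, payload shape, and cache path are all assumptions for illustration.

```python
# Sketch: a companion process that mirrors cloud saves into the plugin's
# local cache. Endpoint, payload shape, and cache path are illustrative.
import json
import time
import urllib.request
from pathlib import Path

API = "https://example.com/api/v1"            # hypothetical endpoint
CACHE = Path.home() / ".loophaven" / "cache"  # hypothetical plugin cache

def fetch_saved_sounds(since: float) -> list[dict]:
    with urllib.request.urlopen(f"{API}/saved?since={since}") as resp:
        return json.load(resp)                # e.g. [{"id": ..., "url": ...}]

def sync_once(since: float) -> float:
    CACHE.mkdir(parents=True, exist_ok=True)
    for sound in fetch_saved_sounds(since):
        target = CACHE / f"{sound['id']}.wav"
        if not target.exists():               # download only new saves
            urllib.request.urlretrieve(sound["url"], target)
    return time.time()

if __name__ == "__main__":
    cursor = 0.0
    while True:                   # simple polling; a production system
        cursor = sync_once(cursor)  # would likely push changes instead
        time.sleep(5)
```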

Validation & Evidence
To evaluate the direction before expanding the system, I validated the core interaction model through early concept testing and internal critique sessions with producers and designers.
What was tested
Sound-to-sound discovery versus traditional tag-based browsing
Time-to-first-usable-sound from a cold start
Clarity of AI output without requiring technical explanation
Key signals observed
Producers consistently preferred triggering discovery from audio rather than filters or keywords.
Participants described the experience as “faster” and “less interruptive” compared to browsing folders.
The AI overview was most effective when framed in musical language (key, tempo, role) rather than abstract ML terminology.
Resulting iterations
Reduced visible filters in early discovery states.
Prioritized harmonic compatibility over exact similarity scoring.
Introduced the AI Overview card to build trust without overwhelming detail.
These insights reinforced the core hypothesis:
momentum matters more than precision during active music creation.

Design Constraints & Trade-offs

Speed over precision
Similarity results prioritize fast auditioning rather than perfect theoretical matching. Creative momentum was valued over technical accuracy.

Reduced parameters
Advanced filtering was intentionally minimized to avoid decision fatigue during active sessions.

Controlled exploration
Radio mode balances relevance with novelty to prevent over-personalization and repetitive output.
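
That controlled-exploration trade-off can be expressed as a simple epsilon-greedy rule, sketched below with illustrative values: most picks follow the ranked recommendations, while an occasional random pick keeps novelty in the stream.

```python
# Sketch: balance relevance and novelty with an epsilon-greedy pick.
# Epsilon and the recency filter are illustrative values.
import random

def pick_next(ranked_ids: list[str], recently_played: set[str],
              epsilon: float = 0.15) -> str:
    fresh = [s for s in ranked_ids if s not in recently_played]
    pool = fresh or ranked_ids      # fall back if everything is recent
    if random.random() < epsilon:
        return random.choice(pool)  # novelty: occasional exploration
    return pool[0]                  # relevance: best-ranked fresh sound
```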

Measuring Success
Success is defined by how quickly producers can return to creation.

Primary metrics
Time from initial sound input to usable VST-ready sample.

Secondary signals
Discovery-to-use conversion rate
Saved sounds used inside projects
Session interruption frequency
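
Instrumenting the primary metric is straightforward; the sketch below times the span from sound input to a VST-ready save. The event hook names are hypothetical.

```python
# Sketch: timing the primary metric. Hook names are hypothetical.
import time

class SessionMetrics:
    def __init__(self) -> None:
        self.input_at: float | None = None

    def on_sound_input(self) -> None:
        self.input_at = time.monotonic()   # discovery starts here

    def on_sample_saved(self) -> float | None:
        if self.input_at is None:
            return None
        elapsed = time.monotonic() - self.input_at
        self.input_at = None
        return elapsed                     # seconds to a VST-ready sample
```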

Final Reflection
If I had more time on this project, I would test the experience with more producers and observe how discovery fits into real production sessions.
Based on those insights, I would refine interaction clarity, onboarding cues, and feedback states to ensure users feel confident when exploring new sounds.
The focus would remain on reducing friction and supporting creative flow throughout the discovery process.