Jatin Leuva

Founder

The year was 1982. 

Celebrity chef Wolfgang Puck had just opened Spago, and it opened with an interesting twist on the dining experience.

The restaurant had an open kitchen, which was almost unheard of in the 80s. 


Wolfgang Puck was the cynosure of all eyes, grilling and sautéing dishes like a showman.

It became all the rage.

Customers came in just to get a peek behind the curtains. It is said that this move transformed the dining experience forever.  

By letting customers see exactly how their food was prepared, this simple act of transparency built trust and enhanced the entire experience. 

Now, what does the open kitchen have to do with AI-first user interfaces, you ask?

It all comes down to trust.

AI interfaces stand at a similar crossroads today: can we create ‘open kitchens’ for AI-first products, where users can understand how their digital experiences are being crafted?

In an era where artificial intelligence is becoming increasingly integrated into our daily digital interactions, we face an interesting paradox: users tend to either over-trust or under-trust AI systems.

The onus is, therefore, on designers and engineers (the company, really) to ‘guide’ users and build the trust required to use their products.

However, before we go into the how, it is only prudent that we answer the why.

Why aren’t traditional UI patterns sufficient?

A decade ago, saving a Word document was a ritual of anxiety: clicking 'save' multiple times, checking the timestamp, making a backup copy (just in case, you know), and still whispering a silent prayer when closing the file. That was a ‘traditional’ user interface. It was predictable and deterministic. Every button press led to a known outcome.

Click 'save,' and your document saves. Press 'print,' and papers emerge. Press A, and B happens…you get the drift.

But AI is a whole different beast. It is probabilistic, creative, and, dare we say, sometimes surprising. 

When you ask an AI to ‘make this image more colourful’ or to ‘help brainstorm ideas,’ you aren’t triggering a pre-programmed response; you are initiating a complex generative process that requires entirely new patterns of interaction.

To put it plainly, you have no clue what will come out as a response. 

Mind you, this shift from deterministic to generative interfaces isn’t simply a technical evolution; it's a fundamental reimagining of human-computer interaction.

Now, the questions to ask are: how do you design an interface that can handle open-ended creativity? How do you make complex AI processes transparent and trustworthy? How do you give users the right balance of control and automation?

Make no mistake, this is a pivotal moment, almost akin to the Macintosh’s revolutionary graphical user interface.

Just to jog your memory, in 1979, a visit to Xerox PARC would change computing forever. When Steve Jobs witnessed Smalltalk's graphical user interface, with its revolutionary windows, icons, and mouse, he saw the future of personal computing. This vision materialized in Apple’s Lisa (1983) and then the Macintosh (1984), transforming the arcane world of command-line computers into something anyone could use.

The Macintosh, with its intuitive desktop metaphor, became the first mass-market computer to feature a GUI, marking the beginning of the modern computing era.



Similarly, the interfaces we create today will shape how billions of people interact with AI tomorrow. And unlike the graphical interfaces of the 1980s, which took years to reach mainstream users, AI interfaces are evolving and scaling at breakneck speed.

Making AI Systems Transparent Through Interface Design

Just as Apple's desktop made complex computing operations ‘visible’ and ‘understandable’, today’s AI interfaces face an even greater challenge: making the ‘black box’ of AI transparent. This transparency manifests through three key interface elements:

Interpretability through visual parameters

When users adjust AI parameters, whether they’re tweaking image generation settings or fine-tuning text outputs, interfaces need to translate complex mathematical operations into human-understandable concepts.

Think of Midjourney's interface, where abstract concepts like ‘stylize’ or ‘chaos’ are represented through simple sliders, making the internal workings of AI more accessible without sacrificing sophistication.
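To make this concrete, here is a minimal Python sketch of how a slider position might map onto a generation parameter. The 0–1000 ‘stylize’ and 0–100 ‘chaos’ target ranges follow Midjourney’s documented flags, but the function name and the linear mapping are illustrative assumptions, not Midjourney’s actual implementation.

```python
def slider_to_params(stylize_pct: float, chaos_pct: float) -> dict:
    """Translate 0-100 slider positions into generation parameters.

    Illustrative only: the linear mapping and names are assumptions;
    the target ranges mirror Midjourney's --stylize (0-1000) and
    --chaos (0-100) flags.
    """
    def clamp(v: float) -> float:
        return max(0.0, min(100.0, v))

    return {
        "stylize": round(clamp(stylize_pct) / 100 * 1000),  # scaled to 0-1000
        "chaos": round(clamp(chaos_pct)),                   # kept at 0-100
    }
```

The point is not the arithmetic but the translation layer: the user reasons in one simple, human dimension (‘more stylized’) while the interface quietly handles the model-facing units.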


Explainability through interactive feedback

Simply put, modern AI interfaces must ‘show’ their work.

When Notion AI suggests a revision or GitHub Copilot proposes a code snippet, the interface should highlight which parts of the input influenced the output. This isn’t just about showing results; it’s about building trust through visibility. 

For instance, when an AI writing assistant suggests a change, it could highlight the original text that prompted the suggestion, creating a clear cause-and-effect relationship that users can understand.
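One way to implement that cause-and-effect link is to make every suggestion carry the span of input text that produced it. A minimal Python sketch (all names are hypothetical, not any product’s actual API):

```python
from dataclasses import dataclass

@dataclass
class Suggestion:
    """An AI edit suggestion that records its own provenance."""
    replacement: str
    source_start: int  # start index of the text that prompted it
    source_end: int    # end index (exclusive)

def highlight_source(text: str, s: Suggestion) -> str:
    """Wrap the influencing span in [[...]] so a UI layer can highlight it."""
    return (
        text[:s.source_start]
        + "[[" + text[s.source_start:s.source_end] + "]]"
        + text[s.source_end:]
    )
```

Because provenance travels with the suggestion itself, the interface can always answer ‘why is the AI proposing this?’ without re-querying the model.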

Accountability through user control

The most effective AI interfaces maintain a clear division of responsibility between users and AI.

Consider Google Docs’ AI writing suggestions. They appear as proposals that users can accept, modify, or reject, maintaining human agency in the creative process. This approach shifts AI from a black-box decision-maker to a transparent collaborator, where users remain in control and understand their role in the outcome.

We recently worked with Omny AI, an AI platform designed to boost Amazon sales and simplify operations for brands and agencies. Their initial interface pre-populated AI-suggested product benefits and use cases, prioritizing efficiency through minimal clicks. Despite their AI boasting 90% accuracy, this ‘frictionless’ approach wasn’t very successful as users skimmed past important decisions and felt betrayed when occasional errors slipped through.

The solution?

Rather than presenting AI outputs as decisions, we worked with Omny to redesign the interface to encourage collaboration. The final product benefits area remained empty by default, with AI suggestions presented in a side panel for users to actively review and select.

This seemingly simple shift transformed the experience: users became more forgiving of AI imperfections because they were now active participants in the decision-making process. By creating this deliberate moment of engagement, the interface established clear accountability: the AI suggests, but the user decides.

We used purple to indicate AI-generated content and blue for user inputs, creating a consistent visual language across the interface. This visual and interactive separation of suggestions from decisions became a cornerstone of user trust in the platform.
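The ‘AI suggests, user decides’ flow can be sketched as a small state model in Python (class and field names are ours, not Omny’s): suggestions stay pending until the user explicitly accepts them, and every accepted entry keeps its origin so the interface can color it consistently.

```python
AI_COLOR, USER_COLOR = "purple", "blue"  # the visual language described above

class SuggestionPanel:
    """Side panel of pending AI suggestions: the AI proposes, the user decides."""

    def __init__(self, suggestions: list):
        self.pending = list(suggestions)  # AI output is never auto-applied
        self.accepted = []                # filled only by explicit user action

    def accept(self, index: int) -> None:
        """Move one AI suggestion into the final content, tagged by origin."""
        item = self.pending.pop(index)
        self.accepted.append({"text": item, "origin": "ai", "color": AI_COLOR})

    def add_user_entry(self, text: str) -> None:
        """User-written content carries its own color, keeping provenance visible."""
        self.accepted.append({"text": text, "origin": "user", "color": USER_COLOR})
```

The empty-by-default content area falls out of this model naturally: `accepted` starts empty, so nothing reaches the final product benefits without a deliberate user action.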

We believe that effective AI interfaces strike a delicate balance between automation and human agency through four key principles: appropriate automation, clear feedback loops, progressive disclosure, and maintained user agency.

At their core, these principles ensure users understand when and how AI is working on their behalf while retaining meaningful control over outcomes. Each principle manifests in specific interface patterns: automation levels are communicated through confidence indicators and override options; feedback loops provide real-time processing status and uncertainty alerts; progressive disclosure gradually reveals AI capabilities to match user understanding; and user agency is preserved through natural input methods and multiple paths to achieve goals.

Together, these create a foundation of trust by making AI systems not just powerful but predictable and controllable.
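As a sketch of the first of these principles, here is confidence-gated automation in Python: when the model is confident, the result is applied automatically but an override stays visible; when it is not, the result is demoted to a suggestion the user must confirm. The 0.8 threshold, the names, and the return shape are all illustrative assumptions.

```python
def present_output(result: str, confidence: float, threshold: float = 0.8) -> dict:
    """Decide how an AI result is surfaced, given the model's confidence.

    Illustrative sketch: the threshold and return shape are assumptions,
    not any real product's API.
    """
    if confidence >= threshold:
        # Confident enough to auto-apply, but the user can still override.
        return {"mode": "auto_apply", "result": result, "show_override": True}
    # Below threshold: surface as a suggestion and flag the uncertainty.
    return {
        "mode": "suggest",
        "result": result,
        "note": f"Low confidence ({confidence:.0%}); please review.",
    }
```

Notice that even the high-confidence branch keeps an override visible: automation level changes, but user agency never drops to zero.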

Practical Examples of AI-first Interfaces That Have Built Trust

Just as Wolfgang Puck's open kitchen transformed dining by making the culinary process visible, these modern AI interfaces create their own versions of "open kitchens" in digital spaces. Each example demonstrates how making AI processes visible and interactive builds user trust and engagement.

Making AI Work Visible

Example: Granola's Note-taking Interface

What it does: Shows step-by-step progress indication and micro-interaction patterns.

Why it works:
- Real-time processing visualization helps users understand what's happening.
- Human-in-the-loop processing maintains user engagement.
- Clear feedback systems build confidence in the AI's decision-making.

Example: Kaiber's Prompt Builder

What it does: Provides real-time generation visualization and style preservation.

Why it works:
- Sketch interpretation patterns show how the AI understands user input.
- Interaction confidence indicators help set appropriate expectations.
- Progressive feedback builds trust through transparency.

Spatial Organization and User Control

Example: Heuristica's Non-linear Research Canvas

What it does: Enables non-linear prompt construction and concept mapping.

Why it works:
- Spatial relationship visualization gives users control over the input structure.
- Concept mapping for AI input makes the process more tangible.
- Design considerations for spatial interfaces maintain user agency.

Example: Ideogram Canvas

What it does: Allows multiple image combinations with real-time feedback.

Why it works:
- Spatial arrangement patterns give users precise control.
- Interactive refinement tools maintain user agency.
- Real-time composition feedback builds understanding.

Natural Input Paradigms

Example: iPadOS Math Feature

What it does: Transforms handwritten input into structured mathematical results.

Why it works:
- Natural input reduces cognitive load.
- Style-matched output maintains context.
- Real-time processing visualization builds confidence.

Example: Voice Command Generation

What it does: Converts voice input into visual content.

Why it works:
- Minimal interaction patterns reduce friction.
- Real-time preview systems maintain user engagement.
- The balance between control and automation maintains trust.

Progressive Refinement

Example: Image Generation with Drag and Drop

What it does: Enables intuitive image manipulation and generation.

Why it works:
- Real-time preview mechanisms show immediate results.
- Visual parameter adjustment maintains user control.
- Progressive refinement patterns allow for iteration.

Example: Storyboard to Video Generation

What it does: Transforms static content into dynamic video.

Why it works:
- Sequential content creation maintains narrative control.
- Temporal preview systems show progress.
- Transition visualization builds understanding.

The Limits of AI Transparency: Why Human Oversight Matters

While transparency is crucial for building trust in AI interfaces, we must acknowledge that complete transparency isn’t always possible or, for that matter, even desirable. Modern AI systems, particularly large language models and deep neural networks, operate at levels of complexity that can make their decision-making processes inherently opaque.

This inherent opacity creates a paradox in AI interface design: we must build trustworthy systems while acknowledging their limitations. The solution lies in shifting our focus from complete transparency to meaningful human oversight.

Rather than attempting to explain every neural connection, effective AI interfaces should prioritize practical transparency, showing users what they need to know to make informed decisions while maintaining clear paths for human intervention when AI suggestions don’t align with user goals or values.

Conclusion

When Wolfgang Puck opened Spago's doors, his open kitchen didn’t just revolutionize dining; it redefined the relationship between chefs and diners.

By making the mysterious art of haute cuisine visible, he transformed mere customers into engaged participants in the culinary journey. Today’s AI interfaces stand at a similar crossroads. The future belongs not to black-box systems that weave their magic behind closed doors but to transparent, collaborative interfaces that invite users into the process.

As AI capabilities grow more sophisticated, the challenge isn't just to make them robust; it is to make them trustworthy partners in our daily digital experiences. The most successful AI interfaces will be those that find the sweet spot between automation and agency, between efficiency and understanding.

Like Puck’s open kitchen, they’ll turn what could be an opaque, intimidating process into an engaging, collaborative experience that enriches both the journey and the outcome.

After all, the goal isn’t to hide behind the cloak of complexity but to make this complexity accessible, understandable, and, ultimately, useful. In doing so, we’re not just designing better AI interfaces; we’re shaping how users will collaborate with artificial intelligence.
