Part 2 • Augmented IQ
Product Research for an Enterprise Productivity AI
When our CEO at Left Right Mind outlined a vision for an enterprise AI assistant, I worked cross-functionally to ground its feature list in user reality.
My role
UX Designer
Industry
Enterprise AI
Team
2 Designers, Lead Developer, Project Manager
Challenge
Organizations and knowledge workers face a persistent problem: information, expertise, and insights remain scattered across disconnected systems, departments, and platforms. This fragmentation is a fundamental barrier to organizational effectiveness, resulting in slower learning cycles, reduced adaptability, and constrained innovation capacity.
Product Overview
Augmented IQ is a customizable AI assistant built for your private cloud, designed to amplify enterprise productivity. It brings together all your organizational data, documents, and applications into one unified knowledge fabric, securely connected to your preferred LLM and ready for your knowledge workers to access. The platform features a RAG-powered UX tailored to your preferences, with permission-aware access controls, Human-in-the-Loop (HITL) feedback mechanisms, and agentic workflows that handle complex tasks autonomously. Think of it as a Living Library that grows smarter with every interaction.
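To make "agentic workflows that handle complex tasks autonomously" concrete, here is a minimal sketch of the pattern: a multi-step task decomposed into chained tool calls, with a human-approval checkpoint before anything is committed. All names here (summarize, draft_email, run_workflow, approve) are illustrative stand-ins, not the product's actual API.

```python
def summarize(doc: str) -> str:
    # Stand-in for an LLM summarization call over a retrieved document
    return doc[:40] + "..."

def draft_email(summary: str) -> str:
    # Stand-in for a second, dependent step that consumes the first step's output
    return f"Hi team,\n\nKey points: {summary}\n"

def run_workflow(doc: str, approve):
    """Chain the steps autonomously, but pause for HITL sign-off at the end.

    The agent does the multi-step work; a human gates the final action."""
    summary = summarize(doc)
    draft = draft_email(summary)
    return draft if approve(draft) else None

# A human reviewer would sit behind `approve`; a lambda stands in here.
result = run_workflow(
    "Quarterly planning notes: headcount, budget, roadmap...",
    approve=lambda draft: True,
)
```

The point of the sketch is the shape, not the steps: each tool call is an ordinary function, so the workflow stays inspectable and the approval gate is just another step in the chain.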
What I executed
User Research
I immersed myself in understanding how new tools like smart intranets, AI search, dashboards, and LLM assistants were addressing the fragmentation problem. The goal was to see how organizing information, automating tasks, and filtering for relevance could genuinely change the way people work.
Understanding AI Architecture
I dove deep into the layers that make an LLM truly useful in enterprise contexts. This meant exploring data connectors that integrate disparate platforms, the knowledge fabric that maps queries to relevant answers, and agentic workflows that automate multi-step tasks. I studied the concept of a Living Library and how HITL workflows enable continuous improvement. Finally, I investigated how RAG-powered UX delivers context-aware answers pulled from private data sources.
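The retrieval layer described above can be sketched in a few lines. This is a toy model, not the product's implementation: the "knowledge fabric" is an in-memory list, scoring is naive keyword overlap instead of vector search, and the role tags and source names are invented for illustration. What it does show is where permission-aware access control sits: documents are filtered by the user's roles before retrieval, so restricted content never reaches the prompt.

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    # A chunk in the knowledge fabric, tagged with the roles allowed to see it
    source: str
    text: str
    allowed_roles: set = field(default_factory=set)

def retrieve(query: str, fabric: list, user_roles: set, top_k: int = 2):
    """Permission-aware retrieval: score by keyword overlap, but only
    over documents the user's roles may access."""
    visible = [d for d in fabric if d.allowed_roles & user_roles]
    q_terms = set(query.lower().split())
    scored = sorted(
        visible,
        key=lambda d: len(q_terms & set(d.text.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(query: str, context: list) -> str:
    # Ground the LLM call in retrieved, source-attributed context (RAG)
    snippets = "\n".join(f"[{d.source}] {d.text}" for d in context)
    return f"Answer using only these sources:\n{snippets}\n\nQuestion: {query}"

# Hypothetical fabric: an HR wiki page everyone can read, a finance record only finance can
fabric = [
    Document("hr-wiki", "parental leave policy is twelve weeks", {"hr", "all-staff"}),
    Document("finance-db", "q3 revenue figures by region", {"finance"}),
]

hits = retrieve("what is the parental leave policy", fabric, {"all-staff"})
prompt = build_prompt("what is the parental leave policy", hits)
```

Keeping attribution (`[hr-wiki] …`) inside the prompt is also what makes source citations in the answer possible later, which matters for the trust features discussed below.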
Designing AI-Native Experiences
Designing for AI meant reimagining the entire interaction model. Instead of fixed outputs, I focused on teaching users how to collaborate with intelligent systems. This included setting clear expectations about what the AI can do, showing capabilities upfront, and making interactions predictable enough that users know when to trust the system. I designed features to surface AI confidence levels, provide source attribution, offer ready-made prompts based on company data and trends, enable HITL corrections, and give admins control over AI behavior and role-based permissions.
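Two of the features above, surfacing confidence levels with source attribution and capturing HITL corrections, can be sketched as simple data structures. This is an illustrative model, not the shipped design: the thresholds, labels, and field names are assumptions chosen to show the idea that transparency signals travel with the answer, and corrections are recorded for later improvement rather than discarded.

```python
from dataclasses import dataclass, field

@dataclass
class AIResponse:
    # A transparent answer payload: text plus the signals users need to judge it
    answer: str
    sources: list        # attributions displayed alongside the answer
    confidence: float    # 0.0-1.0, surfaced in the UI as a badge

    def confidence_label(self) -> str:
        # Map the raw score to the badge the interface displays
        # (thresholds here are illustrative, not the product's values)
        if self.confidence >= 0.8:
            return "high"
        if self.confidence >= 0.5:
            return "medium"
        return "low - please verify"

@dataclass
class FeedbackLog:
    # HITL: user corrections are captured so the system can improve
    corrections: list = field(default_factory=list)

    def record(self, response: AIResponse, corrected_text: str):
        self.corrections.append({
            "original": response.answer,
            "corrected": corrected_text,
            "sources": response.sources,
        })

resp = AIResponse("Leave policy is 12 weeks.", ["hr-wiki"], 0.62)
badge = resp.confidence_label()  # shown next to the answer, not hidden
log = FeedbackLog()
log.record(resp, "Leave policy is 12 weeks, 16 for primary caregivers.")
```

The design choice worth noting is that confidence and sources are part of the response object itself, so no UI surface can render an answer without also having the trust signals at hand.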
What I Learned
Designing AI-native products requires a fundamental shift in thinking. You're not just delivering answers; you're orchestrating a partnership between human intelligence and machine capability. The key is transparency: show which features are AI-powered, reveal where answers come from, display confidence levels, and always give users the ability to correct the system.
I learned that the best AI assistants don't try to be invisible. They make their reasoning visible, their limitations clear, and their improvements tangible. Human-in-the-Loop isn't just a feature; it's the engine that keeps the system aligned with real-world needs.
Most importantly, I discovered that unifying data is only half the battle. The real challenge is designing interfaces that help people trust, question, and refine AI outputs so that technology becomes a genuine thought partner rather than just another tool to manage.