We’ve all been there. Staring at a blank screen, the cursor blinking with mocking patience. The deadline looms, the stakeholders are expectant, and the pressure is on to create something not just functional, but inspired. It’s in these moments that the siren song of AI-powered design tools is most seductive. They promise to fill that void, to transform a prompt into a prototype in minutes, saving us from the grind.
But this promise is shadowed by a legitimate fear. Is this our salvation, or are we just outsourcing our creativity? Will AI become a revolutionary assistant, or will it become a digital assembly line, churning out an endless stream of polished, predictable, and ultimately soulless designs? We stand at a fork in the road, a choice straight out of 80s sci-fi.
The initial spark for this thought came after watching a talk titled “AI-Assisted Prototyping: Promise and Pitfalls” by NNgroup.
The metaphor that resonates most strongly with me, the only fitting parallel I can recall, is the classic film franchise: The Terminator.
The moment AI tools started generating wireframes, mockups, and even functional code, a familiar concern spread across the design community. Many designers initially viewed AI as Skynet: a self-aware, superior intelligence bent on replacing human roles, starting with the repetitive tasks that once paid the bills.
I believe that initial fear was misplaced. We were never facing Skynet; we are being handed a T-800 Co-Pilot. The real threat is not the technology itself, but the designer who refuses to learn how to wield it.
In the Terminator films, Skynet’s core mission was to eliminate humanity, which it perceived as a threat. The parallel for us as designers is stark. Allowing the uncritical adoption of AI — letting algorithms dictate our creative choices — risks eliminating genuine, human-centered creativity from our work. It’s a path toward a homogenous design landscape where everything looks correct but feels empty.
Generic design is the AI’s default operating mode. Like the T-800’s emotionless protocol, it is functional, correct, and utterly devoid of unique, human-centric intention.
Ironically, the Terminator franchise itself became a real-world cautionary tale of this very principle. After the monumental success of the first two films, the visionary behind them, James Cameron, lost creative control. The results were a series of sequels that tried to replicate the formula but failed to capture the original spirit.
James Cameron, the original visionary, famously sold the rights to the first film for just $1 to producer Gale Anne Hurd in exchange for a guarantee that he could direct it.
After the massive success of Terminator 2, Cameron was no longer at the helm as the rights were passed between various corporate entities.
Subsequent films, like Terminator Genisys and Terminator: Dark Fate, were panned by critics and underperformed at the box office. Despite a $185 million budget and the return of original cast members, Dark Fate suffered an estimated loss of $110–130 million.
The lesson is unavoidable: without the original human vision, even a powerful and proven formula leads to failed, generic outputs. This is the danger of the Skynet Protocol. You get a perfect copy of the surface, but you lose the soul.
AI operates on patterns. It is, by its very nature, predictive. It excels at analyzing vast datasets of what has come before to generate a probable version of what should come next. Human creativity, on the other hand, thrives on the unpredictable. It finds inspiration in nuance, contradiction, and the beautiful messiness of human behavior. This is our Evolution Mandate: our duty to bring essential human intelligence to AI-driven systems in ways algorithms simply cannot replicate.
A perfect example of this is a methodology from Intuit called “Design for Delight”. It’s a simplified approach to what Stanford’s d.school calls Design Thinking, and at its heart is a technique that no AI can replicate: Design Ethnography.
This is the gold standard for gaining deep customer understanding. Instead of relying on secondhand data or surveys, Design Ethnography involves observing real users in their natural habitat — their homes, their offices, wherever they experience the problem you’re trying to solve.
The goal isn’t just to validate assumptions; it’s to gain empathy and find surprises. You’re looking for the unexpected workarounds, the strange habits, and the unarticulated pain points that users have developed. These are the insights that algorithms, trained on predictable patterns, will never find. This is the core of the human resistance: our unique ability to find groundbreaking inspiration in the unpredictable reality of lived experience.
This direct, empathetic research is precisely what allows human designers to triumph. An AI trained on a thousand existing Fintech dashboards would likely optimize for displaying portfolio value above all else. But a human designer, after Design Ethnography, would discover the deep, unspoken user anxiety around risk. This surprise insight leads to a strategic choice: prioritize a proactive, trust-building feature like an “AI Risk Alert” at the top of the information hierarchy. This is how the “Evolution Mandate” wins.
If we are to pilot the T-800, we must learn to communicate with it effectively. I’ve been actively learning the techniques of Prompt Engineering — the art of giving the algorithm the right constraints and vision to execute our ideas perfectly — and it’s been transformative. This learning is best done by doing, and for me, that meant testing the limits of AI-assisted prototyping.
The sheer visual quality of AI-generated prototypes can be deceiving. Even if an AI delivers a visually stunning product, we must never be tempted to take it as the final solution. The human designer’s due diligence requires rigorous testing (usability, accessibility, A/B testing) to validate the AI’s structural assumptions. A product is never truly “finished,” but must keep evolving with the times and user needs to survive — a truth that applies just as much to software as it does to the Terminator’s relentless upgrades.
My experiments in AI-assisted prototyping quickly exposed the core challenge: AI’s predictable output.
I created a coded prototype for a complex Fintech dashboard called Nebula Fin. This project was purely for personal learning and experimentation — a deep dive into the limits of generative AI. I focused strictly on the UI design, directing the AI to manage the coding and structure while I dictated the visual style. I experimented with contemporary design trends like glassmorphism and claymorphism, dictating the color palettes and fonts. The AI rapidly delivered a functional, generic design, but it was up to me to inject the visual nuance and style that makes a design feel intentional. This confirmed that the AI can build a house, but the human must provide the architectural vision and interior design.
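For context, the glassmorphism look I dictated typically reduces to a handful of CSS rules: a translucent surface, a backdrop blur, and a subtle border. A minimal sketch of the style (the class name and exact values are illustrative, not taken from the Nebula Fin prototype):

```css
/* Illustrative glassmorphism card; selector and values are hypothetical */
.nebula-card {
  background: rgba(255, 255, 255, 0.12);        /* translucent surface */
  backdrop-filter: blur(12px);                  /* frosted-glass blur */
  -webkit-backdrop-filter: blur(12px);          /* Safari support */
  border: 1px solid rgba(255, 255, 255, 0.25);  /* subtle glass edge */
  border-radius: 16px;
  box-shadow: 0 8px 32px rgba(0, 0, 0, 0.2);    /* soft depth */
}
```

An AI will happily generate rules like these on request; the human decision is where, and how sparingly, to apply the effect so it serves hierarchy rather than decoration.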
Similarly, I tackled my personal portfolio. My goal was to move beyond the visually dull, ATS-compliant resume and create a dynamic, visually engaging interactive experience for my website. I used an AI tool to rapidly generate the foundational HTML/CSS structure. While the result looked great on a laptop (my primary design viewport), it had minor scaling issues on mobile. The AI provided the initial speed, but the necessary iteration to fix the mobile UX flaw remains a purely human-led mission to ensure quality and responsiveness.
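For readers curious what such a mobile fix tends to involve, it is usually a matter of fluid widths plus a breakpoint override. A hedged sketch, assuming a generic hero section (the selector and values here are hypothetical, not the actual portfolio code):

```css
/* Hypothetical responsive fix; selector and breakpoints are illustrative */
.hero-section {
  max-width: 960px;
  width: 100%;          /* scale down instead of overflowing on small screens */
  margin: 0 auto;
}

@media (max-width: 600px) {
  .hero-section {
    padding: 1rem;                          /* tighter spacing on phones */
    font-size: clamp(1rem, 4vw, 1.25rem);   /* fluid type that never shrinks too far */
  }
}
```

It is also worth confirming that the generated page includes `<meta name="viewport" content="width=device-width, initial-scale=1">`, since its absence is a common cause of mobile scaling issues in AI-generated HTML.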
This shift in communication is leading to a concept known as Vibe Coding. The designer no longer needs to focus intensely on the how (the specific code or component details), but instead on the what (the creative intent, the vibe, the purpose). We provide the vision, and the AI handles the functional execution. The term, coined by Andrej Karpathy in early 2025 and since documented on Wikipedia, describes exactly this workflow: expressing intent in natural language and letting the AI write the code.
In Terminator 2, the humans reprogram a T-800 unit. The same terrifying machine that was once their enemy becomes their most powerful asset, now aligned with their mission. The AI tools emerging today are our T-800s. Here are a few I’ve been working with, including both specialized prompt-to-app platforms and powerful general-purpose LLMs.
Figma Make: The Infiltrator Unit
Figma Make operates as an infiltrator unit. This AI-driven, prompt-to-app tool takes your existing Figma designs and turns them into functional, interactive prototypes; it can also create initial designs solely from prompts. Its key impact is bridging the gap between static design and interactive code, and it works directly within the Figma ecosystem designers already know and love.
Lovable AI: The Heavy Support Unit
When you need to build something more substantial, Lovable AI acts as heavy support. It can generate full-stack applications — including a React frontend, a Node.js backend, and the database logic — from natural language prompts. However, its power comes with a critical caveat: the generated UI can be generic, and it often requires coding knowledge to fix or refine the output. This reinforces our core theme: even with the most powerful tools, human oversight and expertise are non-negotiable.
Google Gemini: The Code Architect
General-purpose LLMs like Google Gemini are invaluable for designers who have adopted Prompt Engineering. While it doesn’t offer a visual canvas, it excels as a Code Architect. I use Gemini to quickly generate and debug specific HTML/CSS snippets, complex JavaScript functions, or boilerplate component code when I’m refining an interactive prototype. Its value lies in instantly translating a precise functional need into clean code, allowing me to focus entirely on the design implementation rather than syntax.
Claude: The Code Scaffolding Unit
Claude (by Anthropic) acts as a specialized Code Scaffolding Unit for the design process. It can generate full component structures, code artifacts, and boilerplate for complex application frameworks directly from a prompt. I utilize Claude to quickly lay the foundational code for new interactive prototypes. This speed of generation frees up critical time to focus on the human-centered problems of interaction flow and visual hierarchy.
The Skynet future — a world of generic, soulless, AI-driven design — is not inevitable. It’s a choice. The responsibility lies with us, the human designers, to reject the Skynet Protocol and embrace the Co-Pilot Mission. We must be the visionaries who guide these powerful tools, not the operators who are guided by them.
Our AI tools are powerful T-800 co-pilots. The mission is not to fight the machine, but to reprogram it with human values.
In her final monologue in Terminator 2, Sarah Connor looks toward an uncertain horizon, not with fear, but with a newfound sense of agency. A philosophical paper on the film highlights her closing words: “The unknown future rolls toward us. I face it for the first time with a sense of hope. Because if a machine, a Terminator, can learn the value of a human life, maybe we can too.”
As another analysis of the film, rooted in Heidegger’s critique of technology, notes, this moment represents the “saving power” that can arise from within the “extreme danger” of technology itself. The T-800 learned because it was taught. Our AI co-pilots will only learn the values we instill in them. The future of design is not set; there is no fate but what we make. The ultimate victory is secured by human agency. As Screen Rant explains, the entire film argues that the future is not predetermined. So, what is the single most important principle you will teach yours?