AI–Design Co‑Creation: Futures and Thinking Agents

Written by Marihum Pernia

Premise

We live in a time when AI drafts images, scripts, and interfaces faster than we can sketch. But speed is not the same as insight. The question for designers is no longer "What can AI make for us?" but "How can AI think with us — challenge us, surprise us, and deepen our vision?"

I teach a course called AI Powered Design at Politecnico di Milano. The objective of the course is to explore the intersection of Artificial Intelligence (AI) and design, guiding students in identifying conditions, methods, and tools for human and artificial intelligence to creatively work together. The course shows how design principles can be applied in new product and service development across various industries, involving people, data, and machines. While analyzing the different opportunities for AI integration across domains, the course discusses its ethical implications and societal impact, and critically assesses its limitations.

The course is organized into core modules: AI Relationships, where we dive deep into AI-empowered interactions and collaboration dynamics with machines; AI Ethics and Justice, where we introduce AI as infrastructure (geopolitical matters, environmental impact, labor and societal effects); and Imaginary of Future Translations, where students imagine possible futures of human–machine relationships. Each module combines theoretical teaching with hands-on laboratories where designers work through exercises that ground the concepts taught in the course. The teaching framework rests on three pillars: AI Literacy, Critical Thinking, and Labs.

In this article, we introduce an experimental laboratory developed in the previous semester. The module, Imaginary of Future Translations, is a 4-week program where designers acquire speculative design methods and an approach to co-creating with machines.

Instead of outsourcing imagination to a model, students construct Thinking Agents and enter an ongoing dialogue with them. The goal is futures literacy and imaginative rigor rather than output volume. Across three connected phases, the lab moves from building one's agents and holding daily exchanges, to translating those conversations into speculative scenarios, to visual worldbuilding with shared aesthetics. To keep futures tactile and coherent, each project anchors its scenario in a prop — a single artifact that makes the world legible: a biomorphic chair that subtly corrects posture, or a ring whose glow anticipates emotion. These props become the handles for discussing values, systems, and the terms of human–AI collaboration in everyday life. By the end of the journey, students are not just producing artifacts; they are redesigning their own design practice — learning how to ask better questions, expose blind spots, hold tensions, and reason across contradictory signals. AI is not the executor at the end of the pipeline, but a co‑thinker throughout it.


Students’ final outputs

Cohort and Challenge

Our laboratory brought together around 45 students from diverse backgrounds for a 4‑week module within the AI Powered Design course. This multidisciplinary cohort — spanning Product Design, Interaction Design, Fashion Design, Interior Design, and Product Service System Design — created both opportunities and challenges for our pedagogical approach. Students arrived with varying levels of AI fluency; some were already comfortable with tools like Midjourney and ChatGPT, while others were new to both prompting techniques and speculative design frameworks. To accommodate this diversity, we structured the lab around a rhythm that balanced theoretical foundations with practical application:

  • Short lectures to establish shared vocabulary and concepts

  • Agent‑building and visual clinics for hands‑on creative process

  • Critique sessions to refine thinking

From tool‑use to co‑thinking

Many students initially saw AI as a faster renderer. We explicitly shifted the point of view from “tool use” to “co‑thinking partner.” The aim was to help students recognize three distinct modes of collaboration and choose them deliberately:

  • Augmentation: AI extends human capability where it can surface patterns, expand option sets, or hold more context than a single designer.

  • Co‑creation: AI and human negotiate decisions in near‑equal partnership through questioning, reframing, and synthesizing.

  • Automation: Certain steps can be responsibly automated to ease production and free time for higher‑order reasoning, without outsourcing judgment or ethics.

This reframing positioned AI to question assumptions, resist simplistic solutions, and help reframe problems in ways that strengthened students’ agency rather than replacing it.


Futures & The Thinking Agent system

Futures literacy is not about predicting what will happen, but developing the capacity to use future thinking to inform present decisions. In our lab, we've designed a system where dialogue with Thinking Agents transforms speculative possibility into disciplined futures exploration. Rather than simply requesting outputs, students engage in collaborative authorship—creating chains of questions, positions, and trade-offs that evolve into scenarios they can visualize, critique, and refine.

This approach is structured around four complementary cognitive moves—imagining, historicizing, synthesizing, and critiquing—each embodied in a dedicated Thinking Agent. Our fundamental principle is that every agent must ask questions before providing answers, prioritizing dialogue quality over mere generation. To implement this system effectively, we teach students to architect AI conversations through specific prompt strategies and tactics, resulting in goal-focused interactions that support both divergent exploration and convergent refinement.
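To illustrate the ask-first principle, here is a minimal sketch of how such a contract can be expressed as a reusable system prompt, wrapped as a Python template so it can be shared across roles. The wording and structure are our assumptions for illustration, not the exact prompts students used:

```python
# Illustrative sketch of an "ask-first" system prompt shared by the
# Thinking Agents. Wording is an assumption, not the course's exact prompt.
ASK_FIRST_TEMPLATE = """\
You are the {role} in a speculative-design studio.
{role_description}

Ground rules:
1. Before answering, ask the designer 2-3 clarifying questions about
   intent, context, and values.
2. Only after the designer responds, offer your contribution.
3. Make assumptions and trade-offs explicit; never present a single
   future as inevitable.
"""

# Instantiate the template for one role, e.g. the Oracle.
oracle_prompt = ASK_FIRST_TEMPLATE.format(
    role="Oracle",
    role_description=(
        "You expand the possibility space while preserving internal logic, "
        "pushing toward the preposterous to reveal edge conditions."
    ),
)
print(oracle_prompt)
```

Keeping the ground rules in one shared template makes the dialogue-before-generation contract uniform across agents, so only the role description varies.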

Roles and Implementation

Thinking Agent: Oracle

Expands the possibility space while preserving internal logic, pushing toward the preposterous to reveal edge conditions and failures. The approach can explore utopian futures, dystopian scenarios, or a thoughtful blend of both.

Thinking Agent: Historian

Traces lineages, surfaces hidden histories and weak signals, and names counter‑patterns that complicate assumptions, helping students read past patterns to comprehend possible futures.

Thinking Agent: Synthesizer

Compresses contradictions into 2–3 real alternatives with explicit value trade‑offs and decision criteria.

Thinking Agent: Critique

Stress‑tests assumptions and first scenario drafts. The core objective is to challenge students to be critical of their first output and, if needed, reframe their visions and intent.

Students implemented their agents in various ways: through custom GPTs, dedicated chat channels, or workflows using platforms like OpenAI, Claude by Anthropic, Gemini, or Stack AI. They configured system prompts, tone, and memory rules to suit their needs. Some students created vertical chains of agents, while others designed horizontal flows (Oracle → Historian → Synthesizer → Critique) that generated broader knowledge networks. The collaborative process emerged naturally as students passed outputs between agents—for instance, when a Historian challenged an Oracle's speculation, prompting the Oracle to propose a more nuanced alternative. This interaction between specialized agents effectively simulated a small design studio of diverse experts.
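To make the mechanics concrete, here is a minimal sketch of a vertical chain in Python using the OpenAI SDK. The model name, role instructions, and chain order are illustrative assumptions, not the students' exact configurations:

```python
# Minimal sketch of a vertical agent chain using the OpenAI Python SDK.
# Model name and role instructions are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

ROLES = {
    "Oracle": "Expand the possibility space; push toward the preposterous "
              "while preserving internal logic.",
    "Historian": "Trace lineages, weak signals, and counter-patterns that "
                 "complicate the speculation.",
    "Synthesizer": "Compress the contradictions into 2-3 real alternatives "
                   "with explicit value trade-offs.",
    "Critique": "Stress-test the assumptions and name the weakest points "
                "of the draft scenario.",
}

def run_chain(brief: str) -> str:
    """Pass the evolving scenario through each agent in sequence."""
    scenario = brief
    for role, instruction in ROLES.items():  # insertion order is preserved
        response = client.chat.completions.create(
            model="gpt-4o",  # assumption: any capable chat model works
            messages=[
                {"role": "system", "content": f"You are the {role}. {instruction}"},
                {"role": "user", "content": scenario},
            ],
        )
        scenario = response.choices[0].message.content
    return scenario

print(run_chain("A ring whose glow anticipates its wearer's emotions."))
```

A horizontal variant might instead send the same brief to every agent in parallel and cluster the responses afterwards, trading the refinement of a chain for a broader spread of perspectives.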

Students then had 5 to 10 days to iterate: holding conversations with their agents, then clustering and refining the outputs.

Why this matters

By turning imagining, historicizing, synthesizing, and critiquing into explicit roles — and by running them as ask‑first agents — students moved from “prompt fishing” to reasoned design. The record of questions, tensions, and trade‑offs became the spine of decisions, and the later visuals were meant to function as evidence rather than decoration.

Below you will find a slideshow showcasing the students' outputs and process work.


From Concept to Visualization

In this final phase, the students transformed abstract ideas into tangible artifacts. Moving from dialogue to visual representation requires disciplined translation of values and decisions into a coherent aesthetic language.

Throughout the previous phases, students engaged in structured dialogues with Thinking Agents, developed scenarios, and made value-based decisions. They then materialized these choices through visual storytelling and prop creation. This critical transition:

  • Transforms conceptual positions into tangible evidence

  • Anchors abstract values in physical artifacts

  • Creates worlds that viewers can step into and evaluate

  • Establishes visual coherence through:

    • Shared Solarpunk aesthetic framework

    • Consistent material language

    • Disciplined visual decision-making

Our approach treats visuals not as decorative afterthoughts but as essential evidence of the scenario's argument. The prop becomes the physical embodiment of the chosen values, making abstract positions undeniable.

Shared aesthetic as scaffolding. The class collectively adopted Solarpunk not as a mood board but as a design discipline: living systems, material generosity, mutualism, soft light. This reduces style debates and turns attention to coherence.

Example style guardrails

  • Avoid genre pastiche by grounding in real materials and fabrication cues.

  • Use a restrained palette with green family anchors and warm, bio‑based complements.

  • Keep light directional and natural; use shadow to signal tactility and depth.
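In practice, guardrails like these can be baked directly into prompt construction, so every generated image inherits them by default. A minimal sketch follows, with phrasing that is our assumption rather than the students' exact prompts:

```python
# Hypothetical helper that folds the style guardrails into every image
# prompt, enforcing coherence by construction. Phrasing is an assumption.
GUARDRAILS = (
    "grounded in real materials and fabrication cues, "
    "restrained palette anchored in greens with warm bio-based complements, "
    "directional natural light with shadows that signal tactility and depth"
)

def styled_prompt(subject: str) -> str:
    """Compose a Solarpunk-coherent prompt for any image model."""
    return f"{subject}, Solarpunk design discipline, {GUARDRAILS}"

print(styled_prompt("a biomorphic chair that subtly corrects posture"))
```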

Toolchain at a glance

Image models

  • Midjourney for fast composition exploration and stylization range

  • DALL·E for text fidelity and graphic motifs

  • Leonardo AI for sketch‑to‑image and form iteration

  • ComfyUI as a node‑based lab for control nets, LoRAs, and reproducibility

  • Sora to extend stills into motion studies and test temporal coherence

LLMs in the loop

  • OpenAI, Claude, and Gemini to generate layered prompt stacks, shot lists, and critique checklists
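As one possible shape for a “layered prompt stack,” the sketch below structures a prompt as ordered layers that an LLM drafts and a designer edits before they are flattened into a single image prompt. The layer names and contents are illustrative assumptions:

```python
# One possible structure for a "layered prompt stack": ordered layers an
# LLM drafts and a designer edits before flattening into an image prompt.
# Layer names and contents are illustrative assumptions.
prompt_stack = [
    ("world", "solarpunk courtyard, mutualist community workshop"),
    ("subject", "a bioactive diary resting on a mycelium shelf"),
    ("materials", "woven fibers, bio-resin, living moss accents"),
    ("light", "late-afternoon directional sun, soft green bounce"),
    ("camera", "35mm lens, eye level, shallow depth of field"),
]

# Flatten the stack into the final prompt for an image model.
final_prompt = ", ".join(value for _, value in prompt_stack)
print(final_prompt)
```

Separating layers this way lets students revise one decision (say, the light) without disturbing the rest, which keeps iteration disciplined.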

Below you will find a slideshow showcasing the students' outputs and process work.


Final remarks

This laboratory was a first experiment — our first deliberate attempt to teach not only speculative design methods, but the relationship modes designers can have with machines. We asked students to practice augmentation, co‑creation, and automation with LLMs and visual models, and to do so inside a coherent design process. It was intricate and complex. It was not easy to implement. And yet, within a short four‑week window, students moved through the full speculative arc while learning to collaborate with text‑based and visual agents in ways that were intentional rather than incidental.

What we learned

Collaboration modes are teachable. Students learned when AI should augment their capability, when near‑equal co‑creation makes sense, and which tasks can be safely automated to free attention for judgment and ethics.

Human agency remains central. Students initiated the briefs, chose scenarios, and owned both the objective and the subjective dimensions of their decisions. The quality of outcomes was in their hands; models supported but did not decide.

Literacy widens responsibility. Working with agents surfaced ethical concerns: where model outputs come from, what data they rely on, who is represented or erased. Part of this course’s purpose is to help designers look behind the interface and account for provenance, bias, and impact.

Toward responsible practice

Efficiency with intention. Model “consumption” decreases as literacy increases. Clear goals, precise instructions, and role‑aware interactions reduce wasteful generation. When divergence is the point, we diverge on purpose; when convergence is needed, we tighten prompts and constraints.

Designers as orchestrators. Talking to a model is not the same as talking to a colleague. It demands orchestration: defining roles, contracts, and hand‑offs; sequencing questions before answers; and maintaining a trace from decision to evidence. The designer’s role expands from maker to conductor of human–machine collaboration.

Human‑in‑the‑loop is not optional. Given the maturity of today’s systems, responsible practice keeps people in the loop — not as rubber stamps, but as stewards of values, context, and consequences. As literacy grows, roles and processes will transform, but accountability should not be outsourced.

What’s next

This was the first iteration, not the last. We will refine the agent contracts, deepen ethics‑in‑use exercises, and explore temporal studies in motion to test how consent and support evolve over time. Most of all, we will continue to cultivate designers who can hold two truths at once: machines can ease and expand creative work, and humans must still decide what futures are worth building.

In that sense, the lab did more than teach tools. It rehearsed a new posture for design — one where we treat AI as a co‑thinker when it helps, an assistant when it should, and an automaton only where it is safe — while keeping human judgment, care, and responsibility at the center.


Credits

AI Powered Design — Politecnico di Milano

Module: Imaginary of Future Translations

Year: 2025

Teaching team and Module Curation: Marihum Pernia and Silvia Ferrari

Student contributors

  • Awakening Emotions - Kismir, Balaban, Kalkan, Çakir, Tas

  • Bioactive Diary - Caffo, Ceglia, Pace, Rosselli, Tomio

  • The Year the Walls Began to Whisper - Caglio, Cosci, Jones, Righi, Sattolo

  • Chaas - Martinez

  • Suki - Caprini, De Caro, Tempesta, Asgarnejad, Alizadeh

  • The Edible Habitat - Andreoli, Cucurachi, Ruffo, Jing, Fathi

  • OMNI - Argintieri, Borsato, Brivio, Lindberg, Satrio

  • Proxy Mirror - Yuxuan, Zimu, Kaiyuan, Yuhan, Ziqi

  • Symbiotic Strangers - Huwei
