Industry

Technology

Client

Experiment

Pixel Me – Turning Words into Retro Art

Building an 8-bit image generator as a no-code experiment to explore AI, product design, and development fundamentals.

Year

2025

Role

Product Designer (UX/UI)

As a low-technical Product Designer, I had already been integrating AI tools into my day-to-day workflow to boost efficiency and creativity, supporting tasks like technical documentation, UX writing, heuristic evaluations, synthetic user profiles, user flows, and low-fidelity UI concepts. These tools became essential co-pilots across product discovery and delivery.

With Pixel Me, I wanted to take that relationship further: using AI not only to assist in design, but as a core part of the product experience itself. The project became a space to experiment with end-to-end product creation, connecting AI, no-code tools, and user interaction in one functional prototype. It also let me explore foundational topics like web development, API integration, and user data handling, sharpening my collaboration with technical teams and deepening my ability to evaluate where and how AI can create meaningful value and ROI based on product maturity.

Results

  • Designed and built a functioning AI-powered MVP using Lovable, Supabase, and OpenAI

  • Over 40 users tested the tool and had fun experimenting with their own 8-bit creations

  • Identified UX, tech, and prompt challenges that informed future product iterations

  • Gained hands-on experience integrating web design, user auth, and generative AI into a usable product

Partial view of the Pixel Me homepage

Problem

I set out to build a playful, functional experience: transforming written descriptions into pixel-style images. Initially, I wanted to allow users to upload photos and receive a pixelated version of themselves, but at that time (three months ago), OpenAI’s image models didn’t support this. Hugging Face offered a potential solution, but I chose to simplify the scope for the MVP and focus on text-to-image generation first.

This was also an opportunity to test user login flows, prompt design for visuals, and UI constraints inside no-code tools.

Design Goal

Build a small, usable product that:

  • Converts user descriptions into 8-bit-style images using AI

  • Tests user login and account creation via Supabase

  • Creates a lightweight, fun, and shareable experience

  • Helps me learn hands-on how to integrate design, AI prompts, and data flows

Role

This was a solo side project where I:

  • Sketched the wireframe and initial UI style guide manually

  • Iterated and built the front-end using Lovable

  • Used Claude to refine image-generation prompts and GPT to solve implementation questions

  • Connected Supabase for login and user control

  • Integrated OpenAI’s API for image generation

  • Debugged, tested, and launched the product on my own
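
The image-generation integration can be sketched as a single call to OpenAI's `/v1/images/generations` endpoint. The helper below only builds the request (it never sends it), and the model name, size, and fixed style prefix are my illustrative assumptions, not the exact values used in the MVP:

```python
import json
import urllib.request

OPENAI_IMAGES_URL = "https://api.openai.com/v1/images/generations"

def build_image_request(api_key: str, description: str) -> urllib.request.Request:
    """Build (but do not send) a request to OpenAI's image endpoint.

    A fixed style prefix keeps outputs in an 8-bit aesthetic regardless of
    what the user types; model and size here are illustrative defaults.
    """
    payload = {
        "model": "dall-e-3",  # assumed model; swap in whichever one you use
        "prompt": f"8-bit pixel-art style, retro video game sprite: {description}",
        "n": 1,
        "size": "1024x1024",
    }
    return urllib.request.Request(
        OPENAI_IMAGES_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Sending it (requires a real key and network access):
# with urllib.request.urlopen(build_image_request(key, "a cat astronaut")) as r:
#     image_url = json.loads(r.read())["data"][0]["url"]
```

In the actual product, Lovable generated the front-end wiring around this call; the sketch just makes the request shape explicit.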

Screen of the MVP where users write a short description and generate a custom 8-bit image using AI. Designed with a retro aesthetic, minimal flow, and no photo upload required.

Process

  • Wireframing & Setup: Defined the product structure on paper, then recreated and refined it in Lovable

  • Prompt Design: Created prompts for UI generation and debugging using Claude and GPT

  • AI Integration: Connected OpenAI’s image endpoint to dynamically generate visuals

  • Login Flow: Tested account creation using Supabase to understand data storage and UX impact

  • User Flow:

    • Landing → Get Started → Sign up (pain point)

    • “Create” screen → User enters text → Clicks “Generate 8-bit version”

    • The image is generated and can be downloaded

  • Visual Testing & Sharing: Over 40 people interacted with the MVP and shared their results

  • Feedback Loop: Observed friction in initial login and UI limitations inside Lovable
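
The login step in the flow above runs through Supabase, which exposes sign-up via its auth REST API (`/auth/v1/signup`). The sketch below shapes that request without sending it; the project URL and key are placeholders, and in the MVP Lovable's Supabase integration handled this wiring rather than hand-written code:

```python
import json
import urllib.request

def build_signup_request(project_url: str, anon_key: str,
                         email: str, password: str) -> urllib.request.Request:
    """Build (but do not send) a sign-up request to Supabase's auth endpoint.

    `project_url` looks like https://<project-ref>.supabase.co; the anon key
    identifies the project and is designed to be shipped to the client.
    """
    payload = {"email": email, "password": password}
    return urllib.request.Request(
        f"{project_url}/auth/v1/signup",
        data=json.dumps(payload).encode("utf-8"),
        headers={"apikey": anon_key, "Content-Type": "application/json"},
        method="POST",
    )

# Example with placeholder values (not real credentials):
# req = build_signup_request("https://example.supabase.co", "anon-key",
#                            "user@example.com", "a-strong-password")
```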

Learnings & Insights

  • User friction from mandatory login: Requiring sign-up before generating the image reduced engagement. I plan to test deferring login to the download step as a lead-gen strategy in the next iteration.

  • No image upload functionality (yet): OpenAI didn’t support image-to-image transformation at that time; Hugging Face did, but I postponed that path for future testing.

  • Lovable’s UI constraints: Some UI components and layout logic are too generic unless pre-shaped via prompts, which limited control over the look and feel.

  • Prompt consistency: Some prompts returned less coherent images, showing the importance of tightening input rules and experimenting with temperature/parameters.
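
One way to tighten input rules, as the last point suggests, is to normalize user text before it reaches the model: collapse whitespace, cap the length, and wrap it in a fixed style directive so the free-text part cannot override the aesthetic. The function below is my own illustration of that idea, not code from the MVP:

```python
MAX_INPUT_CHARS = 200  # illustrative cap to keep prompts short and coherent

STYLE_DIRECTIVE = (
    "8-bit pixel-art style, retro video game sprite, "
    "limited color palette, clean outlines"
)

def tighten_prompt(user_text: str) -> str:
    """Normalize free-form user input into a constrained image prompt."""
    cleaned = " ".join(user_text.split())         # collapse whitespace/newlines
    cleaned = cleaned[:MAX_INPUT_CHARS].rstrip()  # cap the user-controlled part
    if not cleaned:
        cleaned = "a smiling character"           # safe fallback for empty input
    return f"{STYLE_DIRECTIVE}. Subject: {cleaned}"
```

Keeping the style directive outside the user-controlled text is what makes outputs consistent: the model sees the same aesthetic instructions on every call, and only the subject varies.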

This project gave me hands-on exposure to the real-world limitations and powers of no-code tools, especially for rapid prototyping and testing product concepts. It also gave me a better sense of how AI prompts behave under different conditions, how token limits affect output, and how to simplify UX flows when technical constraints are tight.

I now feel more confident collaborating with engineers and evaluating where AI features make sense in early-stage products. This project also inspired me to build and present additional demos for Quipu.