Designing how AI systems communicate — not just what they say.
The most important content decisions in AI products aren't about what a product says in a given moment. They're about how the system communicates overall — how it interprets intent, handles ambiguity, expresses confidence, and earns trust. That's the work I do. And I think it's the future of the discipline.
"Content design asked: What does the user need this interface to say? Content model design asks something harder: How should this system communicate?"
Content design, as we've practiced it, is over.
Writing strings and polishing UI copy isn't enough in a world where products generate language dynamically. The work has changed.
Today, content design is about shaping model behavior — designing the prompts, structures, and systems that determine how language is produced, not just how it reads.
I approach content as a system, not a surface. My focus is on making AI-driven experiences understandable, trustworthy, and useful at scale — by defining how they work, not just what they say.
"More and more, the work is about shaping the systems that generate communication, not just refining the output after the fact."
"It's the work of shaping how AI systems communicate with people: how they create understanding, reduce friction, build trust, and make a product feel coherent, useful, and safe."
A year ago, I wrote that content engineering was the future of content design. I still believe that — but it doesn't go far enough. We're not just engineering content systems. We're designing model behavior. This essay is about what that shift means for the discipline, why naming it matters, and what content model design actually looks like in practice.
I've spent my career at the intersection of words and systems — starting in journalism and creative writing, moving through UX writing and content strategy, and arriving somewhere that doesn't have a clean job title yet. I've been calling it content model design: the practice of designing how AI systems communicate, not just what they say.
I've led content design teams at Pinterest, Meta, and Thumbtack. I care deeply about the discipline — about elevating what content designers do, creating space for craft, and making sure teams aren't just shipping strings but doing genuinely impactful work.
When I joined Pinterest as Head of Content Design, the team was underwater: requests were ad hoc, there was no prioritization framework, and designers were churning out strings. My first six months were about changing that.
Our initial proposal was to adapt content guidelines into core criteria for an LLM-as-judge evaluation. What we learned reshaped the approach: too many criteria degrade output quality, and highly context-dependent rules are hard for models to evaluate reliably.
The final framework used a highly focused set of rule-based core criteria, a weighted evaluation strategy, and critical fail criteria to trigger human review — with an adaptable rubric for feature-specific evaluation post-launch.
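To make the shape of that framework concrete, here is a minimal sketch of the weighted, critical-fail pattern it describes. The criterion names, weights, and thresholds are illustrative placeholders, not Thumbtack's actual rubric:

```python
from dataclasses import dataclass

@dataclass
class Criterion:
    name: str
    weight: float           # relative importance in the overall score
    critical: bool = False  # a failing critical criterion escalates to a human

# A deliberately small rubric: too many criteria degrade judge reliability.
RUBRIC = [
    Criterion("accuracy", weight=0.4, critical=True),
    Criterion("clarity", weight=0.35),
    Criterion("tone", weight=0.25),
]

PASS_THRESHOLD = 0.8  # illustrative cutoff for auto-approval

def evaluate(scores: dict[str, float]) -> dict:
    """Combine per-criterion judge scores (0.0-1.0) into a verdict."""
    total_weight = sum(c.weight for c in RUBRIC)
    weighted = sum(c.weight * scores[c.name] for c in RUBRIC) / total_weight
    # Critical failures route to human review regardless of the overall score.
    critical_fail = any(c.critical and scores[c.name] < 0.5 for c in RUBRIC)
    if critical_fail:
        verdict = "human_review"
    elif weighted >= PASS_THRESHOLD:
        verdict = "pass"
    else:
        verdict = "revise"
    return {"score": round(weighted, 2), "verdict": verdict}
```

The point of the structure is that the two mechanisms are independent: the weighted score governs routine approval, while a single failed critical criterion forces human review even when everything else scores well.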
We also identified a gap: no heuristic existed for the ethical use of AI in our content. We built one.
The work on match explanations became the foundation for how Thumbtack approaches generative AI content more broadly.
I worked closely with Legal and Creator Operations to ensure every message was accurate, compliant, and timed correctly. Tone moved from informational to increasingly urgent — but never alarmist.
I also negotiated with the support team to route creators directly to a support ticket form when funds were about to expire. This created a clear, actionable path for creators who genuinely couldn't resolve the issue themselves — and reduced creator frustration.
Every message named the specific amount at risk. Making the loss concrete was critical to driving action without manufactured panic.
I started with a Custom GPT on chatgpt.com — no infrastructure, just me and the system prompt. I spent weeks refining the instructions until the model reliably applied Thumbtack's guidelines, asked the right clarifying questions before writing emails, and always gave full rewrites instead of inline edits.
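For flavor, the kinds of instructions involved looked roughly like this. This is a paraphrased, hypothetical excerpt, not the production prompt:

```
You are an email-writing assistant for Thumbtack's team.
- Before drafting, ask clarifying questions about audience, goal, and timing.
- Apply the voice and style guidelines below to every draft.
- Always return a complete rewrite of the email, never inline edits.
```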
Once the prompt was right, I moved to building the plugin. I used Claude Code — an AI coding tool — to write all four files, describing the behavior I wanted in plain language rather than writing code myself. The whole thing runs on Cloudflare Workers to keep the API key secure.
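The key-security piece relies on a standard proxy pattern: the client never holds the API key; a small server-side handler attaches it before forwarding the request upstream. A rough sketch of that idea, in Python for illustration (the actual plugin runs as a Cloudflare Worker; the URL, header format, and names here are hypothetical):

```python
import os

PROVIDER_URL = "https://api.example.com/v1/chat"  # hypothetical upstream API

def build_upstream_request(client_payload: dict) -> dict:
    """Attach the secret key server-side so it never ships to the client."""
    api_key = os.environ.get("PROVIDER_API_KEY", "test-key")
    return {
        "url": PROVIDER_URL,
        "headers": {
            "Authorization": f"Bearer {api_key}",  # added server-side only
            "Content-Type": "application/json",
        },
        "body": client_payload,  # forwarded unchanged
    }
```

Because the key lives in the server environment and is injected at forwarding time, nothing in the browser-side plugin ever contains it.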
This project is also how I learned that the skills content designers already have — defining constraints, writing to a brief, iterating on output — are exactly the skills that make someone good at building AI tools.
Across Meta and Pinterest, I've built programs that didn't exist before — not just guidelines on a page, but structures that give underrepresented voices ongoing influence on the products that affect them.
There was no forum for teams to get feedback from people with disabilities about the products and communications affecting them. Teams relied on a handful of individuals who were open about their disabilities — and many disability types had no designated reviewer at all.
I founded the Disability Review Board: a group of people with disabilities who provide input based on lived experience — from wheelchair representations in avatars to training for advertisers.
There was no comprehensive guide to inclusive terminology at the company. I recruited contributors from across Pinterest, with each section led by someone from the relevant ERG to center lived experience. I coordinated reviews with PR, Legal, DEI, Learning & Development, and executives, and launched the guide in three months.