Exploring the Role of AI in Typeface Design with Glyphs

Hi everyone,

I’m writing to explore the potential use of Claude MCP (Model Context Protocol) servers or OpenAI agents within Glyphs.

Before you roll your eyes, hear me out—I promise this is not just another AI hype pitch.

Designing a new typeface is both an artistic endeavor and a technical process. While the creative aspect demands originality and nuance, it’s also filled with repetitive and time-consuming tasks. Glyphs already helps simplify much of that manual workload—and in the same spirit, why not consider how AI could further support this goal?

One idea is to use AI for initial checks—spotting common issues or inconsistencies automatically, to save time during review and corrections.

Another direction could be generating font components or variations, helping expand creative exploration without replacing the designer’s intent.

A further example would be improving tools like HT Letterspacer Manager, where AI could assist in refining spacing suggestions based on context or learned patterns, speeding up yet another meticulous part of the process.

I’d love to hear your thoughts. Does this sound useful?


My experience so far is that AI fails miserably with vector data. I am not an expert on AI, but apparently, the abstraction level of describing shapes with splines does not fit well into language models. I think it could help with spacing and kerning, but through the detour of rasterization. However, there are good (non-AI) solutions in place for that already, Letterspacer and KernOn. That will make it hard for AI solutions to find a foothold.


I agree with the first part. In my experience, AIs are still quite bad at working directly with vector shapes. I’ve had a few interesting results with SVGs, but nothing truly usable yet.

I remember what we used to say about AI-generated images just three years ago—like hands with seven fingers. So maybe it’s just a matter of months, not years, before vectors follow the same curve.

For letter spacing, I definitely think AI could bring a useful layer on top of tools like Letterspacer. Pattern recognition is one of its strengths, especially when dealing with massive, complex data.

That said, I’m no expert—how could anyone be when the field is shifting every day?

I’ve also experimented a bit with an MCP server driving Blender, helping users shape 3D meshes. Those aren’t vectors, but they are complex spatial geometries. So maybe AI can help with font design down the line too. But how is the question!

The field where AI could be really helpful is design-stage processes that are repetitive but can’t be automated with a script for some reason.

Tasks and communication

On the one hand, the AI model could be focused only on specific tasks; this is easier to implement. On the other hand, the model could be flexible enough for any purpose. The second case requires a level of communication between the user and the model, such as prompts like “Make all the arms 10% shorter” or “Make all the counters 10% more squared for lowercase and 15% for uppercase”. That is, the AI model should understand the context and typography in general: what a counter or an arm is, what shorter or rounder means, and so on. Humans do all of this visually, by eye. AI acts differently, usually by comparing something to its own gathered samples and mixing them, or by using rules and patterns. In other words, the model should be trained to translate tasks into machine commands.
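As a toy illustration of that last step, here is a sketch of translating a prompt into a structured command. The part names, verbs, and the command shape are invented for the example; they are not a real Glyphs API, and a real system would use a trained model rather than a regex.

```python
import re

# Hypothetical sketch: turning a natural-language prompt into a
# structured edit command that a Glyphs script could then execute.
# The part names and attributes here are illustrative only.

PARTS = {"arms", "counters", "stems", "bowls"}
VERBS = {"shorter": ("length", -1), "longer": ("length", +1),
         "wider": ("width", +1), "narrower": ("width", -1)}

def parse_prompt(prompt):
    """Parse e.g. 'Make all the arms 10% shorter' into a command dict."""
    m = re.search(r"\b(\w+)\s+(\d+)%\s+(\w+)", prompt.lower())
    if not m:
        return None
    part, amount, verb = m.group(1), int(m.group(2)), m.group(3)
    if part not in PARTS or verb not in VERBS:
        return None
    attribute, sign = VERBS[verb]
    return {"part": part, "attribute": attribute,
            "delta_percent": sign * amount}

print(parse_prompt("Make all the arms 10% shorter"))
# → {'part': 'arms', 'attribute': 'length', 'delta_percent': -10}
```

A real version would hand this command to a script that selects the matching path segments and applies the transform, which is exactly where the exceptions and compensations discussed below come in.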

Rules and patterns. Exceptions and compensations

One of the issues here is exceptions and compensations. Typography is all about them. It’s not enough to just write hundreds of rules and patterns. AI should know about the exceptions and compensations that should be applied after (or instead of) the rules. Exceptions differ between shapes, styles, designs, scripts, and so on. AI should know (feel) when things should be done mathematically and when they should be done optically. It should understand where exceptions and compensations are required and where they aren’t, not by using rules, but by analysing the current context.

Pros and Cons

Humans train the eye for decades (con) to transform practice into experience. Humans are also slow to perform (con). However, that slow pace can be profitable, because it takes time to see the issues that should be compensated for (pro). And, importantly, a human understands what he sees (pro).
AI can learn fast (pro) and execute fast (pro), but it can’t see the way people do (con).

Sentimental thoughts

Will a designer stop training his eye without such routine practice?

I completely agree — 100%. @michaelrafailyk

From what I understand, AI can already “see” a lot of things. It can likely do a solid first pass at identifying where optical adjustments are needed. With the right training on a large dataset, this is definitely achievable. It’s probably no more complex than recognizing objects or faces.

AI excels at spotting patterns and extrapolating rules — often more subtly and accurately than a human can.

Will designers stop training their eyes without the routine practice?
For me, that’s not really the question. Some might forget to practice, others will keep refining their skills and continue to improve.

It all depends on how you choose to use the tool. It’s a personal choice, a matter of mindset.

:sleeping_face: When I’m feeling lazy and the stakes are low, I let the AI handle it while I move on to something else. :fishing_pole:

:high_voltage: But when I want to understand more deeply — when it really matters — I ask the AI to support and inform me. :books:

So maybe the real question is: how much does laziness factor into our relationship with AI?

@mekkablue

I will try to develop an MCP server for a basic AI client to interact with Glyphs. I’d appreciate your guidance. I have several libraries that could be used to create a script capable of monitoring I/O events.

However, I am contemplating incorporating it as a plugin within Glyphs.

The plugin should possess a single user function: start and stop.

Which template would be the most suitable option:

  • a dialog,
  • a palette,
  • or a menu item?

Is the plugin the best option?
Can Python macros run indefinitely?

Thank you in advance for your help.
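For context, here is roughly the start/stop shape I have in mind, sketched with plain threading and a dummy worker loop standing in for the real I/O monitoring (nothing Glyphs-specific here, just the lifecycle):

```python
import threading
import time

# Hypothetical sketch of the single start/stop control such a plug-in
# could expose. The worker loop below is a placeholder for whatever
# I/O monitoring or MCP request handling the real plug-in would do.

class BridgeWorker:
    def __init__(self, poll_interval=0.05):
        self._stop = threading.Event()
        self._thread = None
        self.poll_interval = poll_interval
        self.ticks = 0  # stand-in for "requests handled"

    def start(self):
        if self._thread and self._thread.is_alive():
            return  # already running
        self._stop.clear()
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def stop(self):
        self._stop.set()
        if self._thread:
            self._thread.join()

    def _run(self):
        # Long-running loop; exits promptly when stop() is called.
        while not self._stop.is_set():
            self.ticks += 1
            self._stop.wait(self.poll_interval)

worker = BridgeWorker()
worker.start()
time.sleep(0.2)
worker.stop()
print(worker.ticks > 0)  # True: the loop did some work, then shut down
```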

While macros can kick off a task that runs in the background, they are not well-suited for such long-running work. A plug-in would be a better choice.

However, I recommend first focusing on a task you want to accomplish instead of creating an AI plug-in and then figuring out what it should do and how it should work. Maybe there are better technologies for the task at hand.


Of course, thank you for your advice.
For now, I’m looking at the Anthropic MCP server options.

TC

For information:

This is a fascinating discussion, and it resonates deeply with my own experiments.

I agree that using AI directly for creative production feels unpredictable right now. The real power, I believe, lies in using AI to accelerate the creation of our own custom tools.

A major hurdle, however, is the reliability of AI-generated code. General-purpose models often hallucinate methods or use outdated information, which can be frustrating. To address this, I’ve been working on a personal project that takes a different approach.

Instead of letting the AI guess, it’s built around a Model Context Protocol (MCP). This provides the AI with a strict set of tools to interact with a verified knowledge base of the Glyphs API and handbook. It is required to use these tools to fetch ground-truth data before generating any code. This dramatically improves the accuracy of the resulting scripts, as they are based on reliable, structured information, not just the model’s memory.
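The core idea can be sketched in a few lines. The tiny doc index below is a made-up stand-in for the real verified knowledge base, which would be built from the Glyphs API reference and handbook:

```python
# Hypothetical sketch of the "fetch ground truth first" idea: the model
# may only use a symbol in generated code after looking it up in a
# verified index. The two entries here are illustrative placeholders.

API_INDEX = {
    "GSFont.glyphs": "List of GSGlyph objects in the font.",
    "GSGlyph.layers": "List of GSLayer objects for this glyph.",
}

def lookup(symbol):
    """Tool the model must call before using a symbol in generated code."""
    return API_INDEX.get(symbol)

def verify_symbols(symbols):
    """Reject a draft script that references anything outside the index."""
    unknown = [s for s in symbols if lookup(s) is None]
    return (len(unknown) == 0, unknown)

print(verify_symbols(["GSFont.glyphs", "GSGlyph.madeUpMethod"]))
# → (False, ['GSGlyph.madeUpMethod'])
```

A hallucinated method simply fails the lookup, so the model is forced back to the documented API before any script reaches the user.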

Ultimately, the goal is to make scripting more accessible and trustworthy, empowering more designers—especially those who aren’t expert coders—to create the specific small tools they need. It’s exciting to see the community exploring these possibilities!


The hallucinated methods are indeed frustrating (and they are so certain of themselves too. haha). I’d love to learn more about your MCP project. Is there a place to follow it?


Thanks so much! I’m glad that point about hallucinated methods resonated with you. It’s a surprisingly common frustration, right?

I really appreciate you asking about the project. To be honest, it’s still my personal ‘workbench’ at a very early stage. I’m currently focused on refactoring the architecture and preparing the data for internationalization before I’d feel comfortable sharing it more widely.

So there isn’t a public repo to follow just yet, but I’d be happy to share some progress with you directly down the line. Your interest is a great motivator!

@erikyin

Any progress on your MCP experiment?

Fonts have quirks that generic AI still misreads, especially around spacing, diacritic placement, and kerning. A dedicated MCP service—housing a small vision model fine-tuned on glyph metrics—could handle those chores, while an LLM agent (Claude, Llama-3.1, etc.) orchestrates the workflow inside Glyphs.

I’m sketching a proof-of-concept now. Anyone keen to think it through or hack on it together? Happy to share scripts and notes.

TC

@gor.jious + @michaelrafailyk :light_bulb: ?


I 100% agree with that.


Keep me posted


As long as AI stays the hell away from the design process and focuses on lessening the burden of the tedious things, I am fine with it.


Model context is already handled by the AI tooling, e.g. the concept of @Docs in Cursor or similar features in Claude Code, Copilot, and Cline (so no need for an MCP server). What’s the use case for natural-language instruction in Glyphs itself? You don’t want an AI screwing up a bunch of times on your live file/vectors; you probably want it to iterate in software. If the macro panel became a full-blown IDE, that would be a different case.


focuses on lessening the burden of the tedious things

exactly. writing software :slight_smile:
Petr van Blokland had a good talk about “sketching” with software/AI at ATD3: ATD3_2-08 Petr VAN BLOKLAND Combining ML & rule based digital assistants to create short design cycles for type design projects


I’ve got two projects in mind and I’m looking for collaborators or feedback:

  1. Glyphs-LLM bridge (local MCP server) – a lightweight MCP service running on your Mac that lets Claude / ChatGPT talk to Glyphs, similar to their Figma MCP integrations.
    Help wanted: if you know the Glyphs Python API, let’s pair up. :right_facing_fist: :left_facing_fist:

[Claude / ChatGPT agent] ↔ HTTP JSON ↔ [local MCP server] ↔ Glyphs Python API

or

[Glyphs UI] ↔ HTTP JSON ↔ [local MCP server] ↔ Glyphs Python API

  2. Typographic helper model – a trained model for repetitive font tasks (spacing, kerning, etc.), not full-blown type design. Keen to swap ideas on training data and scope.
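To make the bridge idea concrete, here is a minimal sketch of the HTTP JSON hop, using only the Python standard library. The tool name and payload shape are invented, a real server would follow the MCP spec, and the canned result stands in for actual Glyphs Python API calls:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical sketch of the "HTTP JSON" hop between an agent and
# Glyphs. Inside Glyphs, the handler body would call the Python API
# instead of returning canned data.

def handle_tool_call(payload):
    if payload.get("tool") == "list_glyph_names":
        # Real version: [g.name for g in Glyphs.font.glyphs]
        return {"glyphs": ["A", "B", "C"]}
    return {"error": "unknown tool"}

class BridgeHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        body = json.dumps(handle_tool_call(payload)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the console quiet

# Serve on an ephemeral local port, then act as our own client.
server = HTTPServer(("127.0.0.1", 0), BridgeHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}"
req = urllib.request.Request(
    url, data=json.dumps({"tool": "list_glyph_names"}).encode(),
    headers={"Content-Type": "application/json"})
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read()))  # → {'glyphs': ['A', 'B', 'C']}
server.shutdown()
```

The same loop works for either diagram: only the client changes (an LLM agent in the first case, the Glyphs UI in the second).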

Interested? Drop me a note and let’s chat.

Thierry,
(Nice to see you on this topic @tribby) :waving_hand:

I would be interested in the MCP stuff.
Where would be a good place to discuss this? Maybe a git repo?