Design method: Create a font from a scanned image

Hello.

A friend of mine made a font 20 years (OMG!) ago. The font was drawn freehand, and all I could find is the printed presentation of it. (Okay, it only has the basic ABC characters.)

Anyway, I just wanted to ask if there is a fast method (or script) to create a font from this scanned paper. I mean something like: slice every character from the scan, then import them all at once… and then trace them.

Like in FontLab here

(I know it’s possible to adjust the backgrounds with mekkablue’s script… and then I can trace them all together.)

Maybe that’s not the right way… Maybe there is a better method?

The image trace in Glyphs is quite okay. There is no magic button that does all the slicing etc. automatically. Alternatively, you can try Fontself for Illustrator, which is supposed to be pretty good at generating fonts from quickly drawn artwork.

How complex are the outlines? Depending on that, just tracing them by hand might be worth it, also to correct any inconsistencies you might happen upon.

Yes, I know… The image trace is really good in Glyphs.
And yes… it’s not a big deal to import just a couple of characters as background images… I was just wondering whether there is a better solution. Because, for example: what if I want to import 300 characters?
Maybe a script could help…(?)

A script probably could help, yes. Are you familiar with the Glyphs Python API?

Well, I’m still learning it. :)

I suspect I’m not telling many people anything they don’t already know, but my quick route is: scan > Photoshop to sharpen to black and white > slice in Photoshop and name the scanned letters there > import to backgrounds in Glyphs.

Or, if I want to autotrace, I skip the slicing, trace it in Illustrator, and either copy-paste each letter or use the fairly cheap Fontself Maker plugin to quickly create a font file that I can then edit in Glyphs.

Every method has its downsides though, and if you do a lot of work that needs such methods, Python is worth learning.
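A minimal sketch of what such a batch script could look like, assuming the slices are exported as one image file per glyph and each file is named after its glyph (e.g. `A.png`). The folder path and naming scheme are made-up examples, and the `Glyphs`/`GSBackgroundImage` objects only exist inside the Glyphs Macro panel:

```python
# Sketch: batch-import sliced scan images as layer backgrounds in Glyphs.
# Assumes one image per glyph, named after the glyph, e.g. "A.png", "B.png".
# Folder path and naming scheme are hypothetical -- adjust to your setup.
import os

def glyph_name_for_file(filename):
    """Map an image file name like 'A.png' to a glyph name ('A').
    Returns None for files that are not images."""
    base, ext = os.path.splitext(filename)
    if ext.lower() not in ('.png', '.tif', '.tiff', '.jpg'):
        return None
    return base

def import_backgrounds(folder):
    # Glyphs and GSBackgroundImage are only defined inside Glyphs' Macro panel.
    font = Glyphs.font
    for filename in sorted(os.listdir(folder)):
        name = glyph_name_for_file(filename)
        if name is None or font.glyphs[name] is None:
            continue  # skip non-images and files without a matching glyph
        layer = font.glyphs[name].layers[0]
        layer.backgroundImage = GSBackgroundImage(os.path.join(folder, filename))

# Inside Glyphs, run e.g.:
# import_backgrounds('/Users/me/Desktop/slices')
```

With 300 slices this is one run of the Macro panel instead of 300 drag-and-drops; the tracing itself still happens per glyph afterwards.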

If you want to save time and speed things up, ChatGPT 4 (the paid one) is getting decent at writing script code. I tried making it write Glyphs scripts with ChatGPT 3.5 (the free one) and it was all rubbish. I haven’t tried Glyphs scripting in 4, but it did Illustrator and InDesign scripts for me, no problem.

You still have to learn, since it’s up to you to talk to it in the right language, but it’ll figure out the structuring for you, and if you ask it, it’ll also explain the structuring.

Just remember that it still hallucinates, it’s not a source of truth. If you see it like a teenage intern, capable, enthusiastic but ultimately fallible, you’ll get along fine.


Yes, I did it that way. I sliced the scan in Photoshop, then added the slices to the Glyphs backgrounds with drag and drop. And then I used “Trace Image” in Glyphs.

But… I didn’t know that ChatGPT can help me with scripting in Glyphs. :) I will try it.

I don’t bother slicing each glyph before putting the image into the background. I like to trace everything in a couple of temporary glyphs and then copy the traced outlines to their correct glyphs.

In this image you see the glyph box behind the M, which I used to scale the scan correctly.

You can use “Crop Image to Layer Bounds” (from the context menu) before tracing to get only the part you need. Then copy-paste the image to the next glyph, shift the image, crop again…
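If the characters on the scan sheet happen to sit on a regular grid, the shifting step could itself be scripted: every glyph gets the same sheet as background, just offset by one grid cell. A sketch of that idea, where the cell size and column count are made-up example values and the commented `backgroundImage.position` part assumes you are running it in the Glyphs Macro panel:

```python
# Sketch of the shift-and-crop idea: if the characters on the scan sit on a
# regular grid, each glyph's background image just needs a different offset.
# Cell size and column count below are hypothetical example values.

def image_offset(index, columns, cell_width, cell_height):
    """Offset that moves grid cell number `index` (0-based, counted
    left-to-right and bottom-to-top on the sheet) to the glyph origin."""
    col = index % columns
    row = index // columns
    # Shift the whole sheet left and down so the wanted cell lands at (0, 0).
    return (-col * cell_width, -row * cell_height)

# Inside Glyphs (Macro panel), roughly:
# for i, name in enumerate('ABCDEFGHIJKLMNOPQRSTUVWXYZ'):
#     layer = Glyphs.font.glyphs[name].layers[0]
#     layer.backgroundImage = GSBackgroundImage('/path/to/sheet.png')
#     layer.backgroundImage.position = image_offset(
#         i, columns=6, cell_width=500, cell_height=700)
```

You would still crop and trace per glyph, but the tedious manual shifting is reduced to measuring one cell.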