PUA encoding best practices

Could you clear up some of the discussion on PUA encoding? I have looked at previous discussions, and it seems to me you have previously said that if the Unicode value is blank, just leave it blank. I have several fonts that people are using in the program Silhouette, and it requires PUA encoding in order for these end swashes to be visible. So, several questions:

  1. Is there an easy way to add PUA encoding?
  2. Does PUA encoding cause problems or glitches?

Thanks for your help, you have the best customer support ever!
Lori

If you want to use PUA, then you can use it without a problem. If you go to Glyph > Add Glyphs… and type uniE000 uniE001, then you’ll get the first two PUA glyphs.

To add to what Tosche said: you can use uniE000:uniE0FF (note the colon) to create a whole range.
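For anyone curious what that colon notation expands to, here is a minimal standalone sketch (plain Python, not the Glyphs API; the function name `expand_uni_range` is made up for illustration) that turns a `uniXXXX:uniYYYY` range into the individual production glyph names:

```python
# Expand a range spec like "uniE000:uniE0FF" into the individual
# production glyph names: uniE000, uniE001, ..., uniE0FF.
def expand_uni_range(range_spec):
    start_name, end_name = range_spec.split(":")
    start = int(start_name[3:], 16)  # strip the "uni" prefix, parse hex
    end = int(end_name[3:], 16)
    return ["uni%04X" % cp for cp in range(start, end + 1)]

names = expand_uni_range("uniE000:uniE0FF")
print(len(names))   # 256 glyph names
print(names[0], names[-1])
```

Pasting the resulting names into Glyph > Add Glyphs… should give the same result as typing the range directly.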

There is a mekkablue script for applying Unicode ranges to existing glyphs.

Thanks Tosche and Mekkablue. I will look into that script.

Can I make a feature request for your script here? A button/option to automatically start the PUA numbering at the next lowest available code point? I don’t know how complicated it would be to also check for existing code points so as not to accidentally overwrite them…

Good idea. Update your scripts with Scripts > mekkablue > App > Update git Repositories in Scripts Folder, then reload scripts (Cmd-Opt-Shift-Y). The script now has this button that resets the Unicode value to the first available PUA after the highest existing PUA:
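For reference, the logic of "first available PUA after the highest existing PUA" is simple to sketch. This is not the mekkablue script itself, just a hedged standalone illustration (the helper name `next_pua` is made up), restricted to the BMP Private Use Area U+E000–U+F8FF:

```python
PUA_START, PUA_END = 0xE000, 0xF8FF  # BMP Private Use Area

def next_pua(existing_codes):
    """Return the first free PUA code point after the highest one
    already in use, or PUA_START if none are used yet."""
    used = [c for c in existing_codes if PUA_START <= c <= PUA_END]
    candidate = max(used) + 1 if used else PUA_START
    if candidate > PUA_END:
        raise ValueError("BMP Private Use Area exhausted")
    return candidate

# Example: a font with "a" (U+0061) and two PUA glyphs already assigned.
print(hex(next_pua([0x0061, 0xE000, 0xE001])))  # 0xe002
print(hex(next_pua([0x0061])))                   # 0xe000
```

In a real Glyphs script you would collect the existing codes from the font's glyphs instead of passing a hand-written list.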


That’s awesome Rainer! Thank you!!!


Thanks so much!!

I’d love to revive this discussion for a moment to get to the underlying question of best practices.

It’s always been my understanding that the best practice when creating OpenType fonts is to NOT add PUA-encoding to the OpenType characters because this can lead to problems in professional design software. The main problem I’ve seen mentioned is, for example, “breaking” the ability of the design software to “find and replace.” The logic behind this as I’ve seen it is this: when your find-and-replace tool is looking for the letter a, it’s looking for Unicode 0061. In an OpenType font that doesn’t have PUA encoding, every version of a maps to Unicode 0061. But in one that has PUA codes assigned to all of the alternate versions of letter a, they each have their own code. So one version of a is still 0061, but one version might be Unicode E0AC and another version of a might be E0B9. So find-and-replace can’t work properly.
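The find-and-replace problem described above can be demonstrated in two lines of plain Python (U+E0AC is just the hypothetical PUA code point from the example; any PUA value behaves the same):

```python
# A swash alternate of "a" encoded at a PUA code point is a different
# character from U+0061, so a plain-text search for "a" cannot match it.
swash_a = "\ue0ac"          # hypothetical PUA code for an alternate "a"
text = "b" + swash_a + "c"  # text containing the swash alternate

print("a" in text)       # False: the search for U+0061 misses the swash
print(swash_a in text)   # True: only the exact PUA code point matches
```

With an unencoded alternate selected via OpenType features, the underlying text would still contain U+0061 and the search would succeed.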

So I have always created a second PUA-encoded version of my fonts for those users who need to be able to use the fonts with Character Map software. I always name it FontName-PUA so that both can be installed without interfering with one another.

But I’m wondering if in 2023 this is still necessary. I’ve played around with my PUA-encoded versions in programs like Adobe Illustrator and others and I can’t see any discernible difference between how the software renders the PUA-encoded and non-PUA-encoded versions of the fonts. Find and replace seems to work as expected, etc.

Am I creating more work for myself by creating two versions of the font? Is it best, now, to just PUA-encode all of my fonts so that everyone can use them everywhere without needing to ask for a separate file?

Should you encode PUA? If you’re not sure, the answer is no.

If you’re not developing a new script, or additions to keyboards for future generations, you’re very likely misusing Unicode.

If it is something for this one font only, I’d bet money it’s completely wrong.

I use a script in Glyphs 3, and it works perfectly! I remember the times when I had to enter everything manually. Since my fonts had numerous alternates, adding PUA codes used to take me days, and I still ended up with some minor errors.

Do you know if there is a similar solution for FontLab? I’m trying to help a friend who uses FontLab and has to enter names manually, which is quite a hassle. :frowning:

It seems to me your friend would do better to ask that question in the FontLab forums.
