Algorithm for spacing idea

I’ve been thinking about developing an algorithm for calculating spacing and eventually kerning.

The algorithm idea is based on these steps:

  1. Creating a blur effect layer around every letter (all directions)
  2. Calculating the darkness of overlapping blurred layers
  3. The user sets a parameter for how much darkness every blur overlap can produce
  4. The algorithm calculates metrics for a set of glyphs
  5. The median value of metrics defines the norm
  6. All other values deviating from the median can be defined as kerning
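
The steps above, collapsed to one dimension, can be sketched in plain Python (toy per-column ink profiles and a box blur stand in for real outlines and a Gaussian; all numbers are made up):

```python
from statistics import median

def blur_profile(columns, radius=2):
    """Step 1, collapsed to 1D: box-blur a per-column ink profile so that
    darkness leaks `radius` columns past the glyph's edges."""
    padded = [0.0] * radius + list(columns) + [0.0] * radius
    return [
        sum(padded[max(0, i - radius): i + radius + 1]) / (2 * radius + 1)
        for i in range(len(padded))
    ]

def overlap_darkness(left, right, gap, radius=2):
    """Step 2: total darkness where the two blurred profiles overlap when
    the right glyph starts `gap` blank columns after the left glyph ends."""
    lb, rb = blur_profile(left, radius), blur_profile(right, radius)
    total = 0.0
    for i, lv in enumerate(lb):
        j = i - len(left) - gap  # same column, indexed in the right blur
        if 0 <= j < len(rb):
            total += lv * rb[j]
    return total

def fit_gap(left, right, max_dark, radius=2):
    """Steps 3-4: the smallest gap whose overlap darkness stays within
    the user-set limit."""
    gap = -2 * radius
    while overlap_darkness(left, right, gap, radius) > max_dark:
        gap += 1
    return gap

# Hypothetical ink profiles, not real glyph data.
H = [3.0, 0.0, 0.0, 3.0]  # flat, stem-heavy sides, like H
A = [0.5, 1.0, 2.0, 3.0]  # ink sloping toward the right edge, like A

pairs = {"HH": (H, H), "HA": (H, A), "AH": (A, H), "AA": (A, A)}
gaps = {p: fit_gap(l, r, max_dark=1.0) for p, (l, r) in pairs.items()}
norm = median(gaps.values())                                   # step 5
kerning = {p: g - norm for p, g in gaps.items() if g != norm}  # step 6
print("gaps:", gaps, "norm:", norm, "kerning:", kerning)
```

Even this toy version shows the HH-versus-AV issue discussed below: pairs of flat sides and pairs of sloped sides settle on different gaps for the same darkness limit.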

The idea seems quite obvious. Has anyone tried such an approach before?
Is it worth it to try, or is it too naive?

This is a valid idea. But calculating the darkness is not trivial. If you find a good algorithm …

Algorithmic spacing and kerning have been considered since at least 1979, and no one has brought it to market that I know of. I would not use it, especially for kerning, because no computer can outdo human perception of what looks right.

Of course, things do change and in a future world it might be… different, but not necessarily better.

KernOn?

iKern?

@xyzajacf What you describe is exactly what Behdad uses for Half Kern. The tool is meant to be used for auditing kerning but it achieves that goal by doing automated kerning and then comparing the results against what was originally in the font.

I’ve got some ideas on how to do it.

However, I’ve got stuck with the visualisation of the blur/shadow effect.

Currently, I am able to display the effect only on the active glyph layer.

However, for visualisation, it would be much better to be able to display the blur on all the glyphs on the canvas.

My current approach uses a GeneralPlugin with the DRAWBACKGROUND callback.

Any guidance on how to apply it for all the glyphs in the view would be greatly appreciated.

May I ask how you would deal with the difference between HH and AV? I assume blur will suggest making them the same, as well as over-kerning LT. My hot take currently is that spacing is less about the space and more about centering the letters between each other, plus traditions (i.e. we just expect certain pairs to be spaced looser than the math would suggest, LT for instance).

Can’t answer now. It’s still a hypothesis, and I just need to prove that it’s either wrong or there is something in it. For now, I suspect that a linearly diminishing shadow will not work.

there is a “DRAWINACTIVE” callback, too.

Thanks, it helped.

I have seen people try this approach, including Siva’s talk at ATypI Copenhagen. I myself have tried this kind of approach in combination with my BubbleKern, without success. What tends to happen is over-kerning; this method does not take into account the open counter of each letter (e.g. the T-ness of T includes the space below the arms, and you want to keep it somewhat clear). It does come in super handy when super-tight kerning is exactly what’s needed, since manually kerning a very tight typeface is very time-consuming.

For this kind of algorithm to work, you need a clever way to maintain open counter, which is where the true difficulty lies in my opinion.

Thanks for the insights @Tosche. The inspiration for the blur/shadow effect came from your BubbleKern. I haven’t seen Siva’s talk at ATypI Copenhagen and can’t find the video.

Currently, the algorithm sums the darkness of the intersected areas between letters. However, the results varied too much. I then tried mapping the area and taking median values, which produced more stable numbers, but still not satisfactory ones.

I have to think more about what exactly to take into consideration. Whether the complete area of the pair, or just some space in between, or a completely blurred intersection area.
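
For what it’s worth, the difference between the two aggregations is easy to see with made-up numbers: a raw sum of intersection darkness scales with the size of the overlap area, while a per-area statistic like the median does not, which may be why it gave steadier values:

```python
from statistics import median

# Hypothetical per-pixel darkness samples in the intersection of a pair.
short_pair = [0.8, 0.6, 0.5]        # small overlap area
tall_pair = [0.8, 0.6, 0.5] * 3     # same local darkness, three times the area

# A plain sum rewards sheer area, so larger overlaps always look "darker":
print(sum(short_pair), sum(tall_pair))

# The median of the same samples is area-invariant:
print(median(short_pair), median(tall_pair))
```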

Whoops, was drafting a response but pressed “reply” too soon.

this sort of thing is hard not just for technical reasons. in a given style or genre, you can follow conventions to achieve better results (whether optically, or based on a reader’s subconscious familiarity with these conventions), but not every type designer or reader is familiar with (or partial to) the same conventions. so I think adding potentially very many parameters might be necessary, even if you develop a really good general purpose algorithm. and if you start from this position it might change how you think about developing the algorithm.

nick sherman’s revival of franklin gothic is interesting - it prescribes a range of good results to some of the typographer’s needs that deviate from mathematical purity with the “tyght” and “touching” axes, but those are only two parameters. an ideal auto spacing engine would allow for these kinds of adjustments (and many more) to establish the default for the font, IMO

I am trying CIImage with a blur filter, converting the result to NSImage, and drawing it with drawInRect_() using expanded dimensions.

The blur naturally expands the image bounds (original glyph + blur radius padding), but when drawn, the result appears much larger than the original glyph (see attached image) and is shifted toward the top right.

Question: for blur effects that inherently change image dimensions, should I:
a) counter-scale using the options[“Scale”] parameter to fit within the original bounds, or
b) apply a different approach for effects that expand beyond the original glyph bounds?

Edit: I’ve tried to implement the position and scaling of the image, but still can’t find the proper way to calculate the scale and position. This is quite a crucial step, as I am then unable to provide the correct data to the blur analyser to calculate side bearings.
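
For reference, the mapping I’m attempting for (b): if the raster is padded by a fixed amount on every side, its origin in glyph coordinates is known, and whatever extent growth the filter reports can be mapped back by translation alone, with no counter-scaling. A sketch with made-up numbers (`bounds` and `extent` as plain `(x, y, w, h)` tuples; in the real code they would be the path bounds and the CIImage output extent):

```python
def dest_rect(bounds, padding, extent):
    """Map a blurred image's extent (image space, where the padded input
    starts at (0, 0)) back into glyph coordinates."""
    bx, by, _, _ = bounds
    ex, ey, ew, eh = extent
    # The padded raster's origin sits `padding` units below/left of the glyph.
    return (bx - padding + ex, by - padding + ey, ew, eh)

# Glyph bounds x 30..130, y 0..700; padding 12; the filter grew the extent
# by 8 units on every side of the 124x724 padded input image.
rect = dest_rect((30, 0, 100, 700), 12, (-8, -8, 140, 740))
print(rect)  # drawn at this rect, the blur stays registered with the outline
```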

Thanks

[Screenshot: CleanShot 2025-08-12 at 10.43.47]

Here is the current implementation.


from AppKit import (
    NSColor,
    NSGraphicsContext,
    NSImage,
    NSAffineTransform,
    NSRect,
    NSZeroRect,
    NSCompositingOperationSourceOver,
)
from Quartz import CIImage, CIFilter, CIContext


class CoreImageBlurDrawer:
    def __init__(self, blur_radius=1):
        self.blur_radius = blur_radius
        self.context = CIContext.contextWithOptions_(None)
        # Pad the raster on every side so the blur has room to spread
        # (a Gaussian is effectively contained within ~3x its radius).
        self.padding = self.blur_radius * 3

    def convert_path_to_ciimage(self, bezier_path, bounds):
        if not bezier_path or bezier_path.isEmpty():
            return None

        try:
            expanded_width = bounds.size.width + 2 * self.padding
            expanded_height = bounds.size.height + 2 * self.padding
            if expanded_width <= 0 or expanded_height <= 0:
                return None

            ns_image = NSImage.alloc().initWithSize_((expanded_width, expanded_height))
            ns_image.lockFocus()

            try:
                NSGraphicsContext.currentContext().saveGraphicsState()

                # Shift the path so the glyph sits inset by the padding,
                # instead of flush against the bottom-left corner.
                transform = NSAffineTransform.alloc().init()
                transform.translateXBy_yBy_(
                    -bounds.origin.x + self.padding,
                    -bounds.origin.y + self.padding,
                )
                transform.concat()

                NSColor.blackColor().set()
                bezier_path.fill()

                NSGraphicsContext.currentContext().restoreGraphicsState()

            finally:
                ns_image.unlockFocus()

            # NSImage has no plain CGImage() method; go through TIFF data.
            return CIImage.imageWithData_(ns_image.TIFFRepresentation())

        except Exception as e:
            print(f"Error converting path to CIImage: {e}")
            return None

    def apply_blur_filter(self, ci_image):
        if not ci_image:
            return None

        try:
            blur_filter = CIFilter.filterWithName_("CIGaussianBlur")
            blur_filter.setDefaults()
            blur_filter.setValue_forKey_(ci_image, "inputImage")
            blur_filter.setValue_forKey_(self.blur_radius, "inputRadius")
            return blur_filter.valueForKey_("outputImage")

        except Exception as e:
            print(f"Error applying blur filter: {e}")
            return None

    def draw_to_canvas(self, layer, context, scale=1.0):
        if not context or not layer:
            return

        try:
            bezier_path = layer.completeBezierPath
            if not bezier_path or bezier_path.isEmpty():
                return

            bounds = bezier_path.bounds()
            if bounds.size.width <= 0 or bounds.size.height <= 0:
                return

            ci_image = self.convert_path_to_ciimage(bezier_path, bounds)
            blurred_image = self.apply_blur_filter(ci_image)
            if not blurred_image:
                return

            context.saveGraphicsState()

            extent = blurred_image.extent()
            cg_image = self.context.createCGImage_fromRect_(blurred_image, extent)
            if cg_image:
                ns_image = NSImage.alloc().initWithCGImage_size_(cg_image, extent.size)

                # The padded raster's origin sits `padding` units below and
                # left of the glyph bounds; any extent growth the filter
                # reports is mapped back additively.  No scaling heuristics.
                dest_rect = NSRect(
                    (
                        bounds.origin.x - self.padding + extent.origin.x,
                        bounds.origin.y - self.padding + extent.origin.y,
                    ),
                    (extent.size.width, extent.size.height),
                )

                ns_image.drawInRect_fromRect_operation_fraction_(
                    dest_rect,
                    NSZeroRect,
                    NSCompositingOperationSourceOver,
                    1.0,
                )

            context.restoreGraphicsState()

        except Exception as e:
            print(f"Error in draw_to_canvas: {e}")

