Filter lozenge not updating when assigning Unicode to an existing glyph

This is a very obscure bug, but:
If a list filter shows that I am missing a glyph from the list, and instead of covering that gap by generating the missing glyph I assign its Unicode to another, existing glyph, the lozenge is not updated to reflect that nothing is missing any more, provided that was the last missing glyph in the list.
For context, I found this while working on a unicase design with double encoding throughout. The lozenge does update until it reaches that last missing glyph (i.e. it goes from 245/248 to 246/248 to 247/248, but then gets stuck there instead of reaching 248/248).
Closing and reopening the file fixes the issue.
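For anyone wanting to reproduce the data side of this without clicking through the UI, here is a minimal Macro-panel sketch, assuming the Glyphs Python API (`Glyphs.font`, `Glyphs.glyphInfoForName()`, `glyph.unicodes`). The glyph names are placeholders, and scripting the change may not exercise the exact UI path that triggers the stuck lozenge, but it produces the same end state: the missing glyph's code point ends up on an existing glyph rather than on a newly generated one.

```python
# Sketch only: reassign a "missing" glyph's code point to an existing glyph.
from GlyphsApp import *  # not needed when run inside the Macro panel

font = Glyphs.font  # frontmost open font

missing_name = "a"   # placeholder: glyph the list filter marks as missing
existing_name = "A"  # placeholder: existing glyph that should carry the code point

# Look up the code point Glyphs associates with the missing glyph's name.
code_point = Glyphs.glyphInfoForName(missing_name).unicode

# Add that code point to the existing glyph (double encoding, as in the
# unicase setup described above).
existing = font.glyphs[existing_name]
existing.unicodes = list(existing.unicodes or []) + [code_point]
```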