We’re generally content to control font outlines by pushing points around on a screen, but an intuitive interface for managing the entire gestalt of a type family remains elusive. H&FJ’s Andy Clymer tends to develop fonts and tools together (one always seems to be the excuse to create the other), and this is his latest exploration: using facial recognition to control the basic parameters of a font’s design.
Behold Andy modeling his latest creation, which employs Kyle McDonald’s FaceOSC library, GlyphMath from RoboFab, and Tal Leming’s Vanilla to mutate the geometries behind our Ideal Sans typeface in real time. I’m intrigued by the potential to control local and global qualities of a typeface at the same time: fingers and mouse to design the details, faces and cameras to determine their position in a whole realm of design possibilities. I wonder about the possibilities of a facial feedback loop, in which one’s expression of wonder and delight could instantly undo a moment of evanescent beauty. And then there are the possibilities of environmental pathogens affecting letterforms: what might too much caffeine, air conditioning, or ragweed pollen do to a typeface? Listening to Louis C.K.? Too many whiskey sours? —JH
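For the curious, the mechanics can be sketched roughly: FaceOSC broadcasts tracked facial parameters as OSC messages (addresses such as /gesture/mouth/width, on port 8338 by default), and glyph math amounts to a weighted average of corresponding outline points between two point-compatible masters. Here is a minimal, self-contained sketch of that interpolation step; plain coordinate tuples stand in for RoboFab glyph objects, and the calibration range in face_to_t is an illustrative assumption, not anything FaceOSC specifies:

```python
# Sketch of the core of RoboFab-style glyph math:
# glyphC = glyphA + (glyphB - glyphA) * t, applied point by point.
# Point lists below are simplified stand-ins for real glyph contours.

def lerp_point(p0, p1, t):
    """Linear interpolation between two (x, y) points."""
    return (p0[0] + (p1[0] - p0[0]) * t,
            p0[1] + (p1[1] - p0[1]) * t)

def interpolate_glyph(light, bold, t):
    """Blend two point-compatible outlines; t=0 gives light, t=1 gives bold."""
    if len(light) != len(bold):
        raise ValueError("masters must be point-compatible")
    return [lerp_point(a, b, t) for a, b in zip(light, bold)]

def face_to_t(mouth_width, lo=10.0, hi=18.0):
    """Map a FaceOSC /gesture/mouth/width reading onto the 0..1 design axis.
    The lo/hi calibration range here is an assumption for illustration."""
    return max(0.0, min(1.0, (mouth_width - lo) / (hi - lo)))

# Two imaginary masters for one contour of a letterform:
light = [(0, 0), (100, 0), (100, 700), (0, 700)]
bold  = [(0, 0), (140, 0), (140, 700), (0, 700)]

t = face_to_t(14.0)  # a mid-range mouth width maps to t = 0.5
print(interpolate_glyph(light, bold, t))
```

In the actual tool, t would be refreshed continuously from incoming OSC messages and the interpolated outlines redrawn through a Vanilla interface; only the weighted-average step above is a safe inference from how glyph math works.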