This research blog is a collection of thoughts, internet fragments and pieces of information I find noteworthy. In general, you can expect posts about technology, politics, art and design.

For weekly updates, follow me on Facebook or subscribe to the Atom/RSS feed.

Tags: Typography, Machine Learning, Article, Graphic Design

Font pairing is a classic part of developing the typography concept during the design process. Different font pairings have different effects on how content is presented: they can draw or divert attention, express personality, shape an identity or attitude, and of course improve legibility and the user experience. Important factors to aid your decision are visual contrast, various typographic measures and features (like x-height and ascenders/descenders), and the history and origin of the fonts. The developer of Fontjoy tried to quantify and analyse what generally makes a good font pairing: "Good font combinations tend to be fonts that share certain similarities, but contrast in some specific way." As highlighted multiple times in this blog, neural networks are great at finding similarities and correlations in big data sets. For Fontjoy, the developers used machine learning to analyse more than 1,800 fonts; the algorithm itself identifies the features, orders the fonts in a multi-dimensional space, and outputs pairings based on the developer's definition.
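Fontjoy's exact model isn't spelled out here, but the "similar, yet contrasting" criterion is easy to sketch. Below is a minimal, hypothetical Python example: the font names, the four feature axes and the scoring weight are all invented for illustration, and a real system would learn the embeddings from font data rather than hand-assigning them.

```python
import numpy as np

# Hypothetical font embeddings: each font is a point in a feature space
# whose axes might encode weight, stroke contrast, x-height and serif-ness.
fonts = {
    "Sans A":  np.array([0.2, 0.8, 0.5, 0.1]),
    "Sans B":  np.array([0.3, 0.7, 0.6, 0.2]),
    "Serif C": np.array([0.3, 0.7, 0.5, 0.9]),
}

def pairing_score(a, b, contrast_axis=3, weight=1.0):
    """Reward overall similarity, but demand contrast on one chosen axis."""
    diff = a - b
    similarity = -np.linalg.norm(np.delete(diff, contrast_axis))
    contrast = abs(diff[contrast_axis])
    return similarity + weight * contrast

names = list(fonts)
pairs = [(x, y) for i, x in enumerate(names) for y in names[i + 1:]]
best = max(pairs, key=lambda p: pairing_score(fonts[p[0]], fonts[p[1]]))
print("Suggested pairing:", best)  # similar overall, contrasting in serif-ness
```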

Of course, this interests me, because it actively enters my own domain as a graphic designer and, in a way, questions my own creativity and decision-making. Fontjoy is not the only AI-driven, design-focused tool out there; it is clear that a trend towards AI-assisted design features is emerging. Take Wix, for example: one of the more popular website-building tools, it uses an algorithmic approach to make it easy for amateurs to build websites that are pleasing to the eye. Wix feeds the algorithm high-quality websites and tries to make style suggestions relevant to the client's industry. Firedrop.ai generates landing pages with an AI assistant called Sacha: you write your changes and desired features in a chat, and Sacha talks back and delivers. Autodesk Dreamcatcher generates thousands of iterations and alternative design solutions for industrial designers and CAD users. The Grid is a paid service that offers "websites that design themselves". LogoJoy claims to generate logos "you will be proud of".

So that's it, right? Graphic design will be obsolete in the future, a fragment of the past like a pixelated photograph of a Blockbuster store you took on your rad new flip phone.

Well, the answer is not a clear yes or no. While it is somewhat true that the design process can often be reduced to variables, to inputs and desired outputs, the reality is, as so often, more complex. Probably not so complex that an algorithm could not grasp it, distill its essence and reflect it like a mirror; what I mean is the history and development of design as a cultural phenomenon, as an expression of the zeitgeist and society. In that context, I think we are in a transition phase: we will look back at graphic design in the 2010s the way we look back on the days when typesetting was done by hand, either through phototypesetting or hot metal typesetting. Maybe we will be nostalgic about how, back in the day, we actually did mock-ups by hand in Photoshop, or how we had to apply and define 200 pages of branding guidelines for every medium manually.

In university I learned how to set type by hand using hot metal typesetting, not only to get a better understanding of the origin of the technical terminology in InDesign, but also to grasp why typesetting works the way it does, why certain dogmas are valid, and what it means to break them. So I can imagine a future where students have to, for example, write stylesheets for different screen sizes "by hand", just to understand how and why the program/app/digital assistant/algorithm/[...] acts the way it does. And that's a good thing. Understanding the tools, the "why and how" behind the GUI or machine, leads us to the "why not and how else", to experiments, new forms of expression and solutions better suited to unique problems. And ultimately back to a zeitgeist mirrored in graphic design.

To stay with the example of typesetting, we can look at desktop publishing (DTP), which replaced phototypesetting with a digital equivalent in the form of layout programs. In a matter of years, the job of the graphic designer changed: layout work that used to take hours could now be done in minutes, with instant visual feedback. The new tools not only pushed productivity and reduced costs, but also opened up new forms of expression. For example, an explosion of new typefaces hit the market and a new aesthetic developed. Take Emigre, Neville Brody or David Carson: all shaped the zeitgeist of the 90s with their aesthetic. That development is ultimately tied closely to the technical possibilities of its time, because the tools that are so deeply connected to the aesthetic simply did not exist before.

That is why I am optimistic about AI-assisted design: it will be a powerful new addition to the designer's toolbelt, able to free us from mundane tasks. I think we should cherish this phase of transition as what it is: a possibility for something new. Algorithms, automation and AI-assisted design will change the job of the graphic designer once again; productivity will rise while costs decrease. At the same time, new challenges, demands and problems will emerge, but so will new solutions, applications and ideas. The examples above may still look like signals of graphic design's demise, but for me they are the equivalent of fast food, an instant and short-lived gratification. LogoJoy is for the eye what a Big Mac is for the stomach. For the everyday user, this will suffice, but, well, so did WordArt.

More information:
"The Automation of Design" by Kai Brunner (TechCrunch)
"Algorithm-Driven Design: How Artificial Intelligence Is Changing Design" by Yury Vetrov (Smashing Magazine)
"Taking the Robots to Design School, Part 1" by Jon Gold, who worked at The Grid (a great read)

Tags: Machine Learning, Neural Network, Computer Generated

Mike Tyka's “Portraits of Imaginary People” is an experiment in which he looks for new ways to use generative neural networks to make portraits of, well, as the title suggests, imaginary people. His approach combines multiple networks in different stages. The actual generation of the faces is restricted to a resolution of roughly 256 × 256 pixels. To overcome this technical restriction of conventional neural networks, he upscales the output to higher resolutions using multiple stages of machine learning methods, achieving printable pictures with a resolution of up to 4000 × 4000 pixels. The aesthetic of the outcome is rough, almost tactile, and has its very own quality, sometimes evoking associations with oil paintings or surrealism. Two things to note: this is still a work in progress, an experiment with uncertain outcomes, and the results are highly cherry-picked. Visit his page for more information.
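Tyka has not published every detail of his pipeline, so treat the following only as a rough sketch of the staged idea: a generator produces a low-resolution face, and a chain of super-resolution stages doubles the size step by step. The PyTorch modules below are untrained placeholders, not his actual models.

```python
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Placeholder generator: latent vector -> 3 x 256 x 256 image."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent_dim, 3 * 256 * 256), nn.Tanh())

    def forward(self, z):
        return self.net(z).view(-1, 3, 256, 256)

class TinyUpscaler(nn.Module):
    """Placeholder super-resolution stage: doubles height and width."""
    def __init__(self):
        super().__init__()
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.refine = nn.Conv2d(3, 3, kernel_size=3, padding=1)

    def forward(self, x):
        return self.refine(self.up(x))

generator = TinyGenerator()
stages = [TinyUpscaler() for _ in range(4)]  # 256 -> 512 -> 1024 -> 2048 -> 4096

with torch.no_grad():
    img = generator(torch.randn(1, 128))  # stage 1: generate a low-res face
    for stage in stages:                  # later stages: upscale step by step
        img = stage(img)
print(img.shape)  # torch.Size([1, 3, 4096, 4096]), printable territory
```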

On a side note: the results reminded me of a project called “Composite” by Brian Joseph Davis, in which he generated police sketches of literary characters by running their descriptions from the books through composite-sketch software used by law enforcement. Seeing Davis's results and the implications of Tyka's experiment, it is apparent that law enforcement is also going to be changed by machine learning, computer vision and neural networks.

Tags: Machine Learning, Neural Network, Music Video, Computer Generated

This project by Damien Henry is an hour-long video set to music by Steve Reich. What you are seeing is not a style or filter applied to a video, but entirely new footage generated by a neural network. The network is trained on videos recorded from train windows, with landscapes that move from right to left. The algorithm uses a motion-prediction technique: essentially, it tries to predict the next frame of the video. Once trained, the network needs only a single frame as input to start generating new frames indefinitely.
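Regardless of the network's internals, the generation loop of such a next-frame predictor is simple: feed in a frame, take the prediction, and feed it back in. Here is a minimal PyTorch sketch, with an untrained placeholder network standing in for Henry's trained model:

```python
import torch
import torch.nn as nn

class NextFramePredictor(nn.Module):
    """Placeholder for a trained motion-prediction network:
    maps the current frame to a prediction of the next one."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 3, kernel_size=3, padding=1),
            nn.Sigmoid(),  # keep pixel values in [0, 1]
        )

    def forward(self, frame):
        return self.net(frame)

model = NextFramePredictor().eval()

# A single seed frame (e.g. a real photo from a train window) is enough:
frame = torch.rand(1, 3, 64, 64)
frames = [frame]
with torch.no_grad():
    for _ in range(240):      # ~10 seconds of video at 24 fps
        frame = model(frame)  # the prediction becomes the next input
        frames.append(frame)
print(len(frames), frames[-1].shape)
```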

Eerily enough, the predicted footage captures the feeling of riding a train pretty well. Even though the landscapes are more dreamlike than realistic, it is remarkable that the algorithm figured out on its own what makes a train ride a train ride: for example, that the background has to move more slowly than the foreground. It is important to note that the resolution is currently pretty low due to the technical restrictions of neural networks, but you can expect an increase in the resolution and quality of such experiments in the not-so-distant future. Machine learning is still in its infancy, and engineers, artists and coders are still trying to figure out how it actually works and what it can and cannot do. I guess you can now cross "dreaming of train rides" off that list.