This research blog is a collection of thoughts, internet fragments and pieces of information I find noteworthy. In general, you can expect posts about technology, politics, art and design.
Together with Monotype, Google engineered a universal typeface family that spans more than 800 languages, 100 writing systems and hundreds of thousands of characters. The name "Noto" is short for "no more tofu". Tofu, in this case, is a nickname for the blank boxes (▯) that appear when a computer or website lacks font support for a specific character. The amount of work, research and dedication is breathtaking; I am surprised that the team realized this mammoth project in only five years. Some of the languages in this typeface family had never been digitised before: they are niche languages that existed only in spoken form or are found mostly on monuments and manuscripts. For Adlam, for example, a writing system for the Fulani language of Africa, Monotype worked with the script's original creators. Having direct access to the inventors of this writing system allowed the designers to incorporate stylistic choices and features that reflect the creators' original intentions, and gave the Fulani-speaking community its first chance to use the script digitally.
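To make the "tofu" idea concrete, here is a minimal Python sketch using only the standard library's `unicodedata` module. It shows that Adlam characters are defined in Unicode itself; whether you see the letter or a blank box (▯) depends entirely on whether an installed font, such as Noto Sans Adlam, supplies the glyph:

```python
import unicodedata

# U+1E900 is the first code point of the Adlam block (added in Unicode 9.0).
alif = chr(0x1E900)

# Unicode defines the character regardless of fonts ...
print(unicodedata.name(alif))  # ADLAM CAPITAL LETTER ALIF

# ... but rendering it is the font's job: without a font that covers
# the Adlam block, most systems fall back to "tofu" boxes (▯).
sample = alif + chr(0x1E922)  # two code points from the Adlam block
print(sample)
```

In other words, Unicode solved the encoding half of the problem years ago; Noto is an attempt to solve the rendering half for every script at once.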
This cultural preservation is what I love most about this project. Some of these typefaces and writing systems would probably otherwise be forgotten, so the font family serves as a kind of contemporary digital Tower of Babel. The fact that the whole project is open source, free to use and constantly expanding is a great example of how graphic design can connect mankind, democratize communication and preserve culture and tradition in our digital age.
The short cartoon "A Moment in Time" by Goro Fujita is quite remarkable. The cartoon itself is filled with life and detail, creating a vivid and engaging atmosphere. But what makes it stand out is how this virtual painting was created. Everything you see was painted "by hand" in virtual reality, every animation was done frame by frame, and the rendering happens in real time in the VR app Quill itself. Positional sound was added to complete the immersion. Goro Fujita explored the possibilities of the app and talked about the process:
"What if I painted and animated a moment in time that people could explore and experience from multiple angles at their own pace?
This is when I started working on my first animated Quillustration. I started with a street and animated a guy walking down this street frame by frame. Then I added a guy smoking a cigarette on the other side of the street all as looping animations. The more I added to the scene the more magical it became. Seeing my painted characters come alive and being inside my painting with them was incredible. The scene kept growing and as I added positional audio to the mix it became complete. This piece took me about 80 hours to finish and the fact that Quill allows a single person to create something like this is still mind boggling to me. Important to note, what you are about to see is a real time capture of the animated Quillustration meaning it’s all running in real time inside Quill. This new medium is truly something extraordinary, it's more than I could have ever dreamed of!"
I have to agree; I am quite intrigued to see more like this. To really grasp the scale and the implications of this, check out the behind-the-scenes clip. I also highly recommend another video by Goro Fujita, called "Worlds in Worlds".
⇝ His Instagram
This project by Damien Henry is an hour-long video set to music by Steve Reich. What you are seeing is not a style or filter applied to a video, but entirely new footage generated by a neural network. The network was trained on videos recorded from train windows, with landscapes that move from right to left. The algorithm uses a motion-prediction technique: essentially, it tries to predict the next frame of the video. After training, the network needs only a single frame as input to start generating new frames indefinitely.
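The generate-forever-from-one-frame loop can be sketched in a few lines of Python. This is only an illustration of the autoregressive idea, not Damien Henry's actual model: the real predictor is a trained neural network, while the `predict_next` stand-in here simply scrolls the image one pixel to the left, mimicking the kind of motion the network learned from train-window footage.

```python
import numpy as np

def predict_next(frame):
    # Stand-in for the trained network: shift the "landscape" one pixel
    # to the left (with wrap-around), the dominant motion in train videos.
    return np.roll(frame, shift=-1, axis=1)

def generate(seed_frame, n_frames):
    """Autoregressive generation: each predicted frame is fed back as input."""
    frames = [seed_frame]
    for _ in range(n_frames):
        frames.append(predict_next(frames[-1]))
    return frames

seed = np.arange(12).reshape(3, 4)  # a tiny 3x4 stand-in "image"
clip = generate(seed, n_frames=5)   # 1 seed frame + 5 generated frames
```

Because each output becomes the next input, small prediction errors compound over time, which is one reason the real footage drifts into dreamlike territory the longer it runs.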
Eerily enough, the predicted footage captures the feeling of riding a train rather well. Even though the landscapes are more dreamlike than realistic, it is remarkable that the algorithm figured out on its own what makes a train ride a train ride: for example, that the background has to move more slowly than the foreground. Note that the resolution is currently quite low due to the technical limitations of neural networks, but you can expect an increase in the resolution and quality of such experiments in the not-so-distant future. Machine learning is still in its infancy, and engineers, artists and coders are trying to figure out how these systems actually work and what they can and cannot do. I guess you can now cross "dreaming of train rides" off that list.
Interesting project by the Hasso Plattner Institute in Potsdam. The software can design structures built from plastic soda bottles and outputs the necessary 3D-printable connector parts, allowing the rapid construction of reliable furniture prototypes and structures.
TrussFab is an integrated end-to-end system that allows users to fabricate large scale structures that are sturdy enough to carry human weight. TrussFab achieves the large scale by complementing 3D print with plastic bottles.
Unlike previous systems that stacked bottles as if they were “bricks”, TrussFab considers them as beams and uses them to form structurally sound node link structures based on closed triangles, also known as trusses. TrussFab embodies the required engineering knowledge, allowing non-engineers to design such structures.