Pix2Pix Rendering

The first neural network I was introduced to was pix2pix, and it was a good starting point because it makes the concept of a neural network easy to grasp. Simply put, the pix2pix network takes a pair of images and attempts to learn how the two relate to each other. Pictured below are image pairs with an architectural photograph on the right and an edge trace of that photograph on the left. A collection of these pairs is given to the pix2pix algorithm, which learns to map the shapes in the line drawings to the corresponding areas of the photographs. By training on a large dataset of pairs, the network updates to best account for all the data it sees and tries to find similarities across the images.
After the training phase the network can take one side of an image pair (in the cases below, the line drawing) and generate a corresponding image based on the generalizations it learned from the dataset. While far from a perfect render, the outputs from these initial tests on the architectural image dataset suggested that if the input images were more carefully curated toward a specific aesthetic, pix2pix could be used to create quick stylized renders from basic line drawings.
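The paired format described above can be sketched in code. The snippet below is a minimal illustration, not part of the original workflow: it derives a crude edge trace from a photograph with a simple gradient threshold (a stand-in for whatever edge-tracing tool was actually used) and concatenates trace and photo side by side, the layout that common pix2pix implementations expect for training pairs. The function names and the threshold value are my own assumptions.

```python
import numpy as np

def edge_trace(gray: np.ndarray, threshold: float = 0.2) -> np.ndarray:
    """Crude edge trace: gradient magnitude thresholded to dark lines on white.

    gray: 2-D float array in [0, 1]. (A stand-in for a real edge detector.)
    """
    gy, gx = np.gradient(gray.astype(float))
    magnitude = np.hypot(gx, gy)
    # Black lines (0.0) where the gradient exceeds the threshold, white elsewhere.
    return np.where(magnitude > threshold, 0.0, 1.0)

def make_pair(photo: np.ndarray) -> np.ndarray:
    """Concatenate [trace | photo] side by side, as paired pix2pix data is laid out."""
    trace = edge_trace(photo)
    return np.concatenate([trace, photo], axis=1)

# Synthetic "photograph": a dark square on a light ground.
photo = np.ones((64, 64))
photo[16:48, 16:48] = 0.2
pair = make_pair(photo)
print(pair.shape)  # (64, 128): trace on the left half, photo on the right
```

In a real dataset each such pair is saved as one image; the trained network then receives only the left half at inference time.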
Pairs dataset with architectural image on right and line tracing on left for pix2pix training









Line tracing drawings from images not in the training dataset, paired with their pix2pix outputs









Counterfeiting Daily Renders
After experimenting with other algorithms, I returned to pix2pix as a way to take what I had output as a designer and push it back toward the aesthetic of the AI that had initially informed my design process, an aesthetic that was lost as I moved toward more conventional design outputs. To accomplish this I created training pairs from outputs of the DCGAN algorithm in my Counterfeiting Daily project, for both the interior and exterior perspective categories. Once training was complete, I took line drawings created with Rhino's Make2D command and stylized them in Photoshop to bring them closer to the line qualities found in the original traced sketches. While still far from functioning as realistic renders, the pix2pix outputs were able to recapture the GAN aesthetic I was searching for, bringing work shaped by the heavy hand of conventional design back to a space that challenged how I needed to look at and perceive the images and designs being created.
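The stylization step described above, nudging thin Make2D linework toward the heavier traced lines the network was trained on, can be approximated programmatically. This is a hedged sketch of that idea, not the Photoshop process actually used: it binarizes the drawing and thickens the lines with a crude dilation. The function name, threshold, and thickening amount are all illustrative assumptions.

```python
import numpy as np

def stylize_lines(drawing: np.ndarray, threshold: float = 0.5, thicken: int = 1) -> np.ndarray:
    """Binarize thin CAD-style linework and thicken it, approximating the kind of
    adjustment used to match the training set's line quality.

    drawing: 2-D float array in [0, 1], dark lines on a light ground.
    """
    # Binarize: anything darker than the threshold counts as a line pixel.
    lines = drawing < threshold
    # Thicken by OR-ing shifted copies (a crude dilation): Make2D-style output is
    # typically one pixel wide, while hand-traced sketches read much heavier.
    thick = lines.copy()
    for _ in range(thicken):
        shifted = np.zeros_like(thick)
        shifted[1:, :] |= thick[:-1, :]
        shifted[:-1, :] |= thick[1:, :]
        shifted[:, 1:] |= thick[:, :-1]
        shifted[:, :-1] |= thick[:, 1:]
        thick |= shifted
    return np.where(thick, 0.0, 1.0)

# One-pixel horizontal line through a white field.
drawing = np.ones((32, 32))
drawing[16, 4:28] = 0.0
stylized = stylize_lines(drawing)  # the line is now three pixels tall
```

The closer the inference-time drawings sit to the training distribution's line quality, the less the generator has to extrapolate, which is why this preprocessing mattered.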
Exterior Renders
Pairs dataset with Counterfeiting Daily Outputs on right and line tracing on left for pix2pix training









Line drawings made from stylized Make2D output of the Rhino model on left, paired with pix2pix outputs on right









Interior Renders
Pairs dataset with Counterfeiting Daily Outputs on right and line tracing on left for pix2pix training









Line drawings made from stylized Make2D output of the Rhino model on left, paired with pix2pix outputs on right








