Jan Hünermann

One video generated using this technique.

Abstract Art with ML

Randomly initialised neural networks are able to produce visually appealing images. In this post, we explore what compositional pattern producing networks are, how they can be parameterised, and what kind of images can be obtained using this technique.

Filed under

  • cppn,
  • art,
  • generation

Did you know that randomly initialised neural networks actually produce pretty cool pictures?

Well, I didn’t, until I recently discovered pattern producing networks. This post is all about them, and it includes a pattern generator running in your browser, so you can experience these networks for yourself. But first, let me explain.

Compositional Pattern Producing Networks

A compositional pattern producing network, or CPPN for short, is a network (here we focus mainly on neural nets) that, given some parameters, produces a visual pattern. The main idea is to slide the network over all the x-y coordinates in an image and produce a 3-channel (RGB) colour output for each pixel in the output image. We can look at this network $f$ in the following way:

$$\begin{pmatrix}r\\g\\b\end{pmatrix} = f\begin{pmatrix}x\\y\end{pmatrix}$$

Since this network is continuous and differentiable, it outputs locally correlated values: sampling the network at two points that are very close together leads to very similar output values. This gives the generated image a property we could call smoothness.

Another cool property is that it has “infinite” resolution, because you can simply scale the coordinates the network receives as inputs.

One example of a compositional pattern producing network: a simple 3-layer network with tanh as the activation function.
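To make this concrete, here is a minimal sketch of such a network in Python/NumPy. It is my own illustration under arbitrary choices (layer sizes, image size, seed), not the exact code behind the figures:

```python
import numpy as np

def cppn(coords, weights):
    """Evaluate a small tanh MLP at every coordinate (one row per pixel)."""
    h = coords
    for w in weights[:-1]:
        h = np.tanh(h @ w)
    # A sigmoid on the last layer squashes the output into [0, 1] for RGB.
    return 1.0 / (1.0 + np.exp(-(h @ weights[-1])))

size = 256
# Coordinate grid over [-1, 1] x [-1, 1], flattened to shape (size * size, 2).
xs, ys = np.meshgrid(np.linspace(-1, 1, size), np.linspace(-1, 1, size))
coords = np.stack([xs.ravel(), ys.ravel()], axis=1)

rng = np.random.default_rng(42)               # seeded, so results repeat
sizes = [2, 32, 32, 3]                        # 2 inputs -> hidden -> RGB
weights = [rng.standard_normal((m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
image = cppn(coords, weights).reshape(size, size, 3)
```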

Parameters

Now we could simply run the network as it is, and this in fact works. But we can take it a step further by adding certain other inputs to the network, with the aim of having it generate more complex images.

For example, we can add the radius $r$ and an adjustable parameter $\alpha$. With these modifications our network $f$ looks like this:

$$\begin{pmatrix}r\\g\\b\end{pmatrix} = f\begin{pmatrix}x\\y\\r\\\alpha\end{pmatrix} \quad \text{with} \quad r = \sqrt{x^2 + y^2}$$

The radius not only provides a nice non-linearity, it also enables the network to correlate the output colour with the distance to the origin, because points on the same circumference receive the same value for $r$.

While the radius $r$ changes with $x$ and $y$, the $\alpha$ parameter is static over the course of the image. In essence, you can think of this parameter as a z-parameter: when sampling from the 3-dimensional (x, y, z) cube, we look at the slice at position $z = \alpha$.
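Building on the sketch above, the extended input could be constructed like this (the concrete $\alpha$ value and layer sizes are again arbitrary choices):

```python
# Extend the coordinate grid with the radius r and a constant alpha channel.
alpha = 0.5
r = np.sqrt(coords[:, 0] ** 2 + coords[:, 1] ** 2)         # r = sqrt(x^2 + y^2)
inputs = np.column_stack([coords,                          # x, y
                          r,                               # distance to origin
                          np.full(len(coords), alpha)])    # constant per image

sizes = [4, 32, 32, 3]                        # four inputs now: x, y, r, alpha
weights = [rng.standard_normal((m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
image = cppn(inputs, weights).reshape(size, size, 3)
```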

You can get very creative with these parameters, and we’ll explore more exotic configurations later on.

What about a 9-layer DenseNet? Well, see for yourself.

Animating along $\alpha$.
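An animation like this can be produced by rendering one frame per $\alpha$ value while keeping the weights fixed; each frame is then the $z = \alpha$ slice of the same underlying function. A sketch (frame count and range are arbitrary):

```python
# Sweep alpha with fixed weights; only the alpha column of the input changes.
frames = []
for a in np.linspace(0.0, 1.0, 60):           # 60 frames, an arbitrary choice
    inputs[:, 3] = a
    frames.append(cppn(inputs, weights).reshape(size, size, 3))
```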

Initialisation

The output of a neural network is defined (a) by its inputs, which we talked about in the last section, and (b) by its weights. The weights therefore play a crucial role in how the network behaves and thus in what the output image will look like.

In the example images throughout this post, I mainly sampled the weights $W$ from a Gaussian distribution $\mathcal{N}$, with a mean of zero and a spread dependent on the number of input neurons $N_{in}$ and a parameter $\beta$ which I could adjust to my taste:

$$W(N_{in}, \beta) = \mathcal{N}\left(\mu = 0,\; \sigma = \beta \frac{1}{N_{in}}\right)$$
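In code, this initialisation scheme might look as follows, reading the $\sigma$ in the formula as the scale parameter handed to the sampler (the $\beta$ value and layer sizes below are illustrative):

```python
def make_weights(sizes, rng, beta=1.0):
    """Sample W ~ N(0, beta / N_in) for each layer: zero mean, fan-in scaled."""
    return [rng.normal(0.0, beta / m, size=(m, n))        # sigma = beta / N_in
            for m, n in zip(sizes[:-1], sizes[1:])]

weights = make_weights([4, 32, 32, 3], np.random.default_rng(7), beta=2.0)
```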

We can also ask the network to output just a single value, and interpret that as a black-and-white image.
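Reusing the pieces from above, that amounts to a single output neuron whose value is repeated across the colour channels for display:

```python
# One output neuron instead of three gives a black-and-white image.
weights = make_weights([4, 32, 32, 1], rng)
gray = cppn(inputs, weights).reshape(size, size)
bw_image = np.repeat(gray[:, :, None], 3, axis=2)
```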

Now to the fun part. Here is a progressive image generator based on the principle of CPPNs. You can adjust the z-value and the variance (which equals $\beta$ in the description above), choose whether you want a black-and-white image (B/W), and explore the seed space. Note that the random number generator used for the Gaussian distribution will always produce the same values for the same seed, which is cool, because you can share a seed and retrieve the same result.
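That determinism is easy to verify with the sketch’s initialiser:

```python
# The same seed yields the same weights, hence the same image: this is
# what makes the seeds below shareable.
w_a = make_weights([4, 32, 32, 3], np.random.default_rng(1234))
w_b = make_weights([4, 32, 32, 3], np.random.default_rng(1234))
assert all(np.allclose(a, b) for a, b in zip(w_a, w_b))
```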

I compiled a list of example seeds that I found quite compelling while playing with the tool. Try these seeds to start with, and then explore the space yourself! :)

And these are just some examples. Try changing the seed, the time or the variance yourself and maybe, just maybe, you’ll come across a masterpiece! 😛

Exploring other architectures

In this section, I want to show you some results I’ve been getting with more exotic architectures.

As you can see, these images behave surprisingly differently. A single additional parameter makes a huge difference in the activations of the networks.

Wrapping up

Now that we have gone through several architectures, explored multiple configurations and looked at a bunch of images, it’s time to wrap up. But I want to do so by pointing to some possible improvements.

In terms of use cases: these images make great colour and gradient inspiration! Also, I just recently replaced my Spotify playlist covers with these. They make pretty great album artworks (don’t they?). (Flume, hit me up ;-) )

Anyway, that’s it for now. I hope you enjoyed my first blog post. If you did, I’d appreciate it if you’d consider subscribing to my blog (via RSS or JSON Feed). I’ll try to publish blog posts regularly here. See you then!

Acknowledgements

I want to thank David Ha (@hardmaru) for his blog series on CPPNs [1] and his open-source publications, which were a great resource to me during the development of this blog post. Also, Dennis Kerzig’s implementation of CPPNs [2] served as a great base for generating these pictures in Python.

Code

I published the code used to generate the images in this post, as well as the source code for figure 1, on GitHub. As mentioned, the Python code is based on [2] and runs using TensorFlow. You can find the repository here.

References

  1. Generating Abstract Patterns with TensorFlow [link]
    David Ha
  2. CPPN in Keras 2 [link]
    Dennis Kerzig