I’ve been doing some self-study around machine learning and deep learning, and at some point stumbled on an image-to-image translation network called pix2pix.

It’s a very interesting network architecture. I’ve looked at it from the perspective of image colorization, where it seems like a great option and trains quite quickly. Although I haven’t had a large enough training set to gather any specific metrics, it feels like it would generalize well across a range of images.
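Colorization fits pix2pix’s paired setup naturally: the input is a grayscale version of a photo and the target is the original color image, so training pairs can be generated automatically from any set of color photos. Here is a minimal sketch of that pairing (the helper name, normalization, and layout are just my assumptions, not code from the repo):

```python
import numpy as np
from PIL import Image

def make_colorization_pair(path, size=256):
    """Build one (input, target) training pair from a color photo.

    The target is the original RGB image resized to size x size;
    the input is its grayscale version, replicated to 3 channels so
    it matches the generator's expected input shape.
    """
    color = Image.open(path).convert("RGB").resize((size, size))
    gray = color.convert("L")

    target = np.asarray(color, dtype=np.float32) / 127.5 - 1.0  # scale to [-1, 1]
    source = np.asarray(gray, dtype=np.float32) / 127.5 - 1.0
    source = np.repeat(source[:, :, np.newaxis], 3, axis=2)     # H x W -> H x W x 3

    # MXNet convolutions expect NCHW, so move channels first.
    return source.transpose(2, 0, 1), target.transpose(2, 0, 1)
```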

There are several implementations of pix2pix available in frameworks like TensorFlow, Caffe, and Scala/MXNet.

My version of pix2pix is written in Python with MXNet, and aims to colorize 256×256 images on a GPU with CUDA.
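I won’t reproduce the repo here, but to give a feel for the shape of the network: pix2pix pairs a U-Net-style encoder-decoder generator with a PatchGAN discriminator, both built from stacked convolution blocks. A rough sketch of those building blocks in MXNet Gluon (layer sizes and names are illustrative, not the exact ones from my implementation) could look like:

```python
from mxnet.gluon import nn

def encoder_block(channels, use_bn=True):
    """Downsampling block: a stride-2 convolution halves the spatial size."""
    block = nn.HybridSequential()
    block.add(nn.Conv2D(channels, kernel_size=4, strides=2, padding=1,
                        use_bias=not use_bn))
    if use_bn:
        block.add(nn.BatchNorm())
    block.add(nn.LeakyReLU(0.2))
    return block

def decoder_block(channels, use_dropout=False):
    """Upsampling block: a stride-2 transposed convolution doubles the spatial size."""
    block = nn.HybridSequential()
    block.add(nn.Conv2DTranspose(channels, kernel_size=4, strides=2, padding=1,
                                 use_bias=False))
    block.add(nn.BatchNorm())
    if use_dropout:
        block.add(nn.Dropout(0.5))
    block.add(nn.Activation("relu"))
    return block

# A 256x256 input passes through 8 encoder blocks (down to 1x1) and back up
# through 8 decoder blocks, with skip connections between matching levels.
```

Running on the GPU then just means initializing the networks with `ctx=mx.gpu(0)` and keeping the batches on that context during training.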

https://github.com/skirdey/mxnet-pix2pix