
DeOldify


Get more updates on Twitter

Simply put, the mission of this project is to colorize and restore old images and film footage. I'll get into the details in a bit, but first let's see some pretty pictures and videos!


Example Images

Maria Anderson as the Fairy Fleur de farine and Lyubov Rabtsova as her page in the ballet “Sleeping Beauty” at the Imperial Theater, St. Petersburg, Russia, 1890

Woman relaxing in her living room (1920, Sweden)

Medical Students pose with a cadaver around 1890

Surfer in Hawaii, 1890

Whirling Horse, 1898

Interior of Miller and Shoemaker Soda Fountain, 1899

Paris in the 1880s

Edinburgh from the sky in the 1920s

Texas Woman in 1938

People watching a television set for the first time at Waterloo station, London, 1936

Geography Lessons in 1850

Chinese Opium Smokers in 1880

Note that even really old and/or poor-quality photos will still turn out looking pretty cool:

Deadwood, South Dakota, 1877

Siblings in 1877

Portsmouth Square in San Francisco, 1851

Samurais, circa 1860s

Granted, the model isn't always perfect. This one's red hand drives me nuts because it's otherwise fantastic:

Seneca Native in 1908

It can also colorize b&w line drawings.


The Technical Details

This is a deep learning based model. More specifically, what I've done is combine the following approaches:

Self-Attention Generative Adversarial Network (https://arxiv.org/abs/1805.08318)

Except the generator is a pretrained U-Net, and I've just modified it to have the spectral normalization and self-attention. It's a pretty straightforward translation.
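As a rough illustration, here's a minimal sketch of those two modifications in plain PyTorch: a SAGAN-style self-attention block built from spectral-normalized 1x1 convolutions. This is illustrative code under my own naming, not DeOldify's actual classes (those live in the fasterai folder):

```python
# Illustrative sketch only (plain PyTorch) - not DeOldify's actual classes.
import torch
import torch.nn as nn
from torch.nn.utils import spectral_norm

class SelfAttention(nn.Module):
    """SAGAN-style self-attention over the spatial positions of a feature map."""
    def __init__(self, n_channels):
        super().__init__()
        # 1x1 convolutions project features into query/key/value spaces;
        # spectral_norm constrains their weights, as in SAGAN.
        self.query = spectral_norm(nn.Conv1d(n_channels, n_channels // 8, 1))
        self.key   = spectral_norm(nn.Conv1d(n_channels, n_channels // 8, 1))
        self.value = spectral_norm(nn.Conv1d(n_channels, n_channels, 1))
        self.gamma = nn.Parameter(torch.zeros(1))  # starts at 0, so attention fades in

    def forward(self, x):
        b, c, h, w = x.shape
        flat = x.view(b, c, h * w)                  # flatten spatial dimensions
        q, k, v = self.query(flat), self.key(flat), self.value(flat)
        attn = torch.softmax(torch.bmm(q.transpose(1, 2), k), dim=1)  # (b, hw, hw)
        out = torch.bmm(v, attn).view(b, c, h, w)
        return self.gamma * out + x                 # residual: attention added to input
```

The idea is that blocks like this get inserted into the decoder half of the pretrained U-Net generator.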

Two Time-Scale Update Rule (https://arxiv.org/abs/1706.08500)

This is also very straightforward – it's just one-to-one generator/critic iterations and a higher critic learning rate. This is modified to incorporate a "threshold" critic loss that makes sure the critic is "caught up" before moving on to generator training. This is particularly useful for the "NoGAN" method described below.
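For a sense of what that schedule looks like in practice, here's a hedged sketch of the loop – the function names, learning rates, and threshold value are illustrative placeholders, not DeOldify's actual trainer:

```python
# Illustrative training-loop sketch - names and numbers are examples only.
import torch

def train_gan(generator, critic, critic_loss_fn, gen_loss_fn, loader,
              gen_lr=1e-4, critic_lr=4e-4, critic_thresh=0.65):
    # TTUR: the critic gets a higher learning rate than the generator.
    opt_g = torch.optim.Adam(generator.parameters(), lr=gen_lr)
    opt_c = torch.optim.Adam(critic.parameters(), lr=critic_lr)
    for grayscale, real_color in loader:
        # Train the critic until its loss drops below the threshold, so the
        # generator never trains against a badly lagging critic.
        for _ in range(50):  # cap catch-up steps so the loop always terminates
            fake = generator(grayscale).detach()  # no generator gradients here
            loss_c = critic_loss_fn(critic(real_color), critic(fake))
            opt_c.zero_grad(); loss_c.backward(); opt_c.step()
            if loss_c.item() < critic_thresh:
                break
        # Then exactly one generator step (the one-to-one schedule).
        fake = generator(grayscale)
        loss_g = gen_loss_fn(fake, real_color, critic(fake))
        opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```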

NoGAN

There's no paper here! This is a new type of GAN training that I've developed to solve some key problems in the previous DeOldify model. The gist is that you get the benefits of GAN training while spending minimal time doing direct GAN training. During this very short amount of GAN training the generator not only gets the full realistic colorization capabilities that used to take days of progressively resized GAN training, but it also doesn't accrue any of the artifacts and other ugly baggage of GANs. As far as I know this is a new technique. And it's incredibly effective.

The steps are as follows: First, train the generator in a conventional way by itself with just the feature loss. Next, generate images from that generator and train the critic on distinguishing between those outputs and real images as a basic binary classifier. Finally, train the generator and critic together in a GAN setting (starting right at the target size of 192px in this case). This training is super fast – only 5%-40% of the ImageNet dataset is iterated through, once! More data is required for larger models.
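To make the second step concrete, here's a minimal sketch of critic pre-training as a plain binary classifier – illustrative code only, assuming a loader that yields pairs of saved generator outputs and real photos:

```python
# Illustrative sketch of critic pre-training: generated vs. real, plain BCE.
import torch
import torch.nn.functional as F

def pretrain_critic(critic, loader, epochs=1, lr=1e-4):
    opt = torch.optim.Adam(critic.parameters(), lr=lr)
    for _ in range(epochs):
        # loader yields (saved generator outputs, real photos)
        for fake_batch, real_batch in loader:
            logits_real = critic(real_batch)
            logits_fake = critic(fake_batch)
            # Standard binary cross-entropy: real -> 1, generated -> 0.
            loss = (F.binary_cross_entropy_with_logits(logits_real, torch.ones_like(logits_real))
                    + F.binary_cross_entropy_with_logits(logits_fake, torch.zeros_like(logits_fake)))
            opt.zero_grad(); loss.backward(); opt.step()
```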

This builds upon a technique developed in collaboration with Jeremy Howard and Sylvain Gugger for Fast.AI's Lesson 7 in version 3 of Practical Deep Learning for Coders Part I. The particular lesson notebook can be found here: https://github.com/fastai/course-v3/blob/master/nbs/dl1/lesson7-superres-gan.ipynb

Generator Loss

Loss during NoGAN learning has two parts: one is a basic perceptual loss (or feature loss) based on VGG16 – this just biases the generator model to replicate the input image. The second is the loss score from the critic. For the curious – perceptual loss isn't sufficient by itself to produce good results. It tends to just encourage a bunch of brown/green/blue – you know, cheating on the test, basically, which neural networks are really good at doing! The key thing to realize here is that GANs are essentially learning the loss function for you – which is really one big step closer to the ideal we're shooting for in machine learning. And of course you generally get much better results when you get the machine to learn something you were previously hand-coding. That's certainly the case here.
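Sketched in code, the two-part loss looks something like this – the VGG16 layer cutoff, input normalization assumption, and critic weighting are illustrative assumptions, not DeOldify's actual values:

```python
# Illustrative sketch of the two-part NoGAN generator loss.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import vgg16

class PerceptualLoss(nn.Module):
    """Compare images in VGG16 feature space rather than raw pixel space."""
    def __init__(self):
        super().__init__()
        # Frozen feature extractor up through relu4_3 (layer choice is illustrative);
        # assumes inputs are already normalized the way VGG expects.
        self.features = vgg16(pretrained=True).features[:23].eval()
        for p in self.features.parameters():
            p.requires_grad_(False)

    def forward(self, pred, target):
        return F.l1_loss(self.features(pred), self.features(target))

def generator_loss(pred, target, critic, perceptual, critic_weight=1.0):
    feat_loss = perceptual(pred, target)  # biases the generator to match the target
    adv_loss = -critic(pred).mean()       # "fool the critic" term: the learned loss
    return feat_loss + critic_weight * adv_loss
```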

Of note: there's no longer any "Progressive Growing of GANs" type training going on here. It's just not needed, given the superior results obtained with the "NoGAN" technique described above.

The beauty of this model is that it should be generally useful for all sorts of image modification, and it should do it quite well. What you're seeing above are the results of the colorization model, but that's just one component in a pipeline that I'm developing with the exact same approach.


This Project, Going Forward

So that's the gist of this project – I'm looking to make old photos and film footage look reeeeaaally good with GANs, and more importantly, make the project useful. In the meantime, this is going to be my baby and I'll be actively updating and improving the code for the foreseeable future. I'll try to make this as user-friendly as possible, but I'm sure there are going to be hiccups along the way.

Oh and I swear I'll document the code properly...eventually. Admittedly I'm one of those people who believes in "self documenting code" (LOL).


Getting Started Yourself – Easiest Approach

The easiest way to get started is to go straight to the Colab notebooks:

Image | Video

Special thanks to Matt Robinson and María Benavente for their image Colab notebook contributions, and Robert Bell for the video Colab notebook work!


Getting Started Yourself – Your Own Machine (not as easy)

Hardware and Operating System Requirements

  • (Training Only) A BEEFY graphics card. I'd really like to have more memory than the 11 GB in my GeForce 1080 Ti. You'll have a tough time with less. The generators and critic are ridiculously large.
  • (Colorization Alone) A decent graphics card. A card with roughly 4 GB+ of video memory should be sufficient.
  • Linux (or maybe Windows 10). I'm using Ubuntu 16.04, but nothing about this precludes Windows 10 support as far as I know. I just haven't tested it and am not going to make it a priority for now.

Easy Install

You should now be able to do a simple install with Anaconda. Here are the steps:

Open the command line and navigate to the root folder you wish to install into. Then type the following commands:

git clone https://github.com/jantic/DeOldify.git DeOldify
cd DeOldify
conda env create -f environment.yml

Then start running with these commands:

source activate deoldify
jupyter lab

From there you can start running the notebooks in Jupyter Lab, via the URL it provides in the console.

More Details for Those So Inclined

This project is built around the wonderful Fast.AI library. Prereqs, in summary:

  • Fast.AI 1.0.46 (and its dependencies)
  • Jupyter Lab: conda install -c conda-forge jupyterlab
  • Tensorboard (i.e. install Tensorflow) and TensorboardX (https://github.com/lanpa/tensorboardX). I guess you don't have to, but man, life is so much better with it. Fast.AI now comes with built-in support for this – you just need to install the prereqs: conda install -c anaconda tensorflow-gpu and pip install tensorboardX
  • ImageNet – Only if you're training, of course. It has proven to be a great dataset for my purposes. http://www.image-net.org/download-images

Pretrained Weights

To start right away on your own machine with your own images or videos without training the models yourself, you'll need to download the weights and drop them in the /models/ folder.

Download image weights here

Download video weights here

You can then do image colorization in this notebook: ImageColorizer.ipynb

And you can do video colorization in this notebook: VideoColorizer.ipynb

The notebooks should be able to guide you from here.

Want More?

I'll be posting more results on Twitter.