@@ -1,65 +1,84 @@
# DeOldify
+

**NEW!** Try out colorization here on Colab: https://colab.research.google.com/github/mc-robinson/DeOldify/blob/master/DeOldify_colab.ipynb . Huge thanks to Matt Robinson.

Simply put, the mission of this project is to colorize and restore old images. I'll get into the details in a bit, but first let's get to the pictures! BTW – most of these source images originally came from the r/TheWayWeWere subreddit, so credit to them for finding such great photos.
-#### Some of many results- These are pretty typical!
+
+#### Some of many results - These are pretty typical!

Maria Anderson as the Fairy Fleur de farine and Lyubov Rabtsova as her page in the ballet “Sleeping Beauty” at the Imperial Theater, St. Petersburg, Russia, 1890.
+


Woman relaxing in her living room (1920, Sweden)
+


Medical students pose with a cadaver around 1890
+


Surfer in Hawaii, 1890
+


Whirling Horse, 1898
+


Interior of Miller and Shoemaker Soda Fountain, 1899
+


Paris in the 1880s
+


Edinburgh from the sky in the 1920s
+


Texas woman in 1938
+


People watching a television set for the first time at Waterloo station, London, 1936
+


Geography lessons in 1850
+


Chinese opium smokers in 1880
+


#### Note that even really old and/or poor-quality photos will still turn out looking pretty cool:

Deadwood, South Dakota, 1877
+


Siblings in 1877
+


Portsmouth Square in San Francisco, 1851
+


Samurai, circa 1860s
+


#### Granted, the model isn't always perfect. This one's red hand drives me nuts because it's otherwise fantastic:

Seneca Native in 1908
+

#### It can also colorize b&w line drawings:
@@ -67,7 +86,6 @@ Seneca Native in 1908


-

### The Technical Details

This is a deep learning-based model. More specifically, what I've done is combine the following approaches:
@@ -88,50 +106,59 @@ So that's the gist of this project – I'm looking to make old photos look reeee

Oh and I swear I'll document the code properly...eventually. Admittedly I'm *one of those* people who believes in "self-documenting code" (LOL).
### Getting Started Yourself

-

The easiest way to get started is to simply try out colorization here on Colab: https://colab.research.google.com/github/mc-robinson/DeOldify/blob/master/DeOldify_colab.ipynb . This was contributed by Matt Robinson, and it's simply awesome.

This project is built around the wonderful Fast.AI library. Unfortunately, it's the -old- version and I have yet to upgrade it to the new version. (That's definitely on the agenda.) So prereqs, in summary:

* ***Old* Fast.AI library** [**UPDATE 11/7/2018**] Easiest thing to do, in my mind, is just to take the fastai/fastai folder and drop it in the root of this project, right next to fasterai's folder. Just today I found this thread on installing fast.ai 0.7, which is probably your best resource on this subject: https://forums.fast.ai/t/fastai-v0-7-install-issues-thread/24652 . Do this first; it will take you most of the way, including dependencies.
* **Pytorch 0.4.1** (needs `spectral_norm`, so the latest stable release is required). https://pytorch.org/get-started/locally/
-* **Jupyter Lab** [conda install -c conda-forge jupyterlab]
-* **Tensorboard** (i.e. install Tensorflow) and **TensorboardX** (https://github.com/lanpa/tensorboardX). I guess you don't *have* to but man, life is so much better with it. And I've conveniently provided hooks/callbacks to automatically write all kinds of stuff to tensorboard for you already! The notebooks have examples of these being instantiated (or commented out since I didn't really need the ones doing histograms of the model weights). Noteably, progress images will be written to Tensorboard every 200 iterations by default, so you get a constant and convenient look at what the model is doing. [conda install -c anaconda tensorflow-gpu]
+* **Jupyter Lab** `conda install -c conda-forge jupyterlab`
+* **Tensorboard** (i.e. install Tensorflow) and **TensorboardX** (https://github.com/lanpa/tensorboardX). I guess you don't *have* to but man, life is so much better with it. And I've conveniently provided hooks/callbacks to automatically write all kinds of stuff to tensorboard for you already! The notebooks have examples of these being instantiated (or commented out since I didn't really need the ones doing histograms of the model weights). Notably, progress images will be written to Tensorboard every 200 iterations by default, so you get a constant and convenient look at what the model is doing (see the short logging sketch after this list). `conda install -c anaconda tensorflow-gpu`
* **ImageNet** – only needed if you're training, of course. It proved to be a great dataset. http://www.image-net.org/download-images
* **BEEFY graphics card**. I'd really like to have more memory than the 11 GB in my GeForce 1080TI. You'll have a tough time with less. The Unet and Critic are ridiculously large, but honestly I just kept getting better results the bigger I made them.
* **Linux** (I'm using Ubuntu 16.04) is assumed, but nothing from the above precludes Windows 10 support as far as I know. I just haven't tested it and am not going to make it a priority for now.

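To make the Tensorboard bullet concrete, here's a minimal logging sketch using TensorboardX. Only the 200-iteration image cadence comes from the text above; the writer path, tag names, and `log_progress` helper are illustrative assumptions, not the repo's actual hooks:

```python
# A hedged sketch of TensorboardX progress logging - tag names and the
# log_progress helper are hypothetical; the every-200-iterations image cadence
# mirrors the default described above.
from tensorboardX import SummaryWriter

writer = SummaryWriter('runs/deoldify')  # illustrative log directory

def log_progress(iteration, gen_loss, colorized_batch):
    # Scalars are cheap, so write them on every iteration.
    writer.add_scalar('loss/generator', gen_loss, iteration)
    # Images are heavier; every 200 iterations matches the stated default.
    if iteration % 200 == 0:
        writer.add_image('progress/colorized', colorized_batch[0], iteration)
```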
-**For those wanting to start transforming their own images right away:** To start right away with your own images without training the model yourself, download the weights here: https://www.dropbox.com/s/7r2wu0af6okv280/colorize_gen_192.h5 (right click and download from this link). Then open the ColorizationVisualization.ipynb in Jupyter Lab. Make sure that there's this sort of line in the notebook referencing the weights:
+**For those wanting to start transforming their own images right away:** To get going with your own images without training the model yourself, [download the weights here](https://www.dropbox.com/s/7r2wu0af6okv280/colorize_gen_192.h5) (right click and download from this link). Then open [ColorizationVisualization.ipynb](ColorizationVisualization.ipynb) in Jupyter Lab. Make sure there's a line like this in the notebook referencing the weights:
- colorizer_path = Path('/path/to/colorizer_gen_192.h5')
+```python
+colorizer_path = Path('/path/to/colorizer_gen_192.h5')
+```

-Then the colorizer model needs to be loaded via this line after netG is initialized:
+Then the colorizer model needs to be loaded via this line after `netG` is initialized:

- load_model(netG, colorizer_path)
+```python
+load_model(netG, colorizer_path)
+```
-Then you'd just drop whatever images in the /test_images/ folder you want to run this against and you can visualize the results inside the notebook with lines like this:
+Then you'd just drop whatever images you want to run this against into the `/test_images/` folder, and you can visualize the results inside the notebook with lines like this:

+```python
vis.plot_transformed_image("test_images/derp.jpg", netG, md.val_ds, tfms=x_tfms, sz=500)
+```

-I'd keep the size around 500px, give or take, given you're running this on a gpu with plenty of memory (11 GB GeForce 1080Ti, for example). If you have less than that, you'll have to go smaller or try running it on CPU. I actually tried the latter but for some reason it was -really- absurdly slow and I didn't take the time to investigate why that was other than to find out that the Pytorch people were recommending building from source to get a big performance boost. Yeah...I didn't want to bother at that point.
+I'd keep the size around 500px, give or take, assuming you're running this on a GPU with plenty of memory (an 11 GB GeForce 1080Ti, for example). If you have less than that, you'll have to go smaller or try running it on CPU. I actually tried the latter but for some reason it was -really- absurdly slow, and I didn't take the time to investigate why other than to find out that the Pytorch people were recommending building from source to get a big performance boost. Yeah... I didn't want to bother at that point.
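If you want to run a whole folder at once, a loop like the one below works too. This is a hypothetical convenience snippet, not something from the repo's notebooks; it assumes `vis`, `netG`, `md`, and `x_tfms` are already set up as in ColorizationVisualization.ipynb:

```python
# Hypothetical batch run over test_images/ - assumes the notebook's vis, netG,
# md, and x_tfms objects already exist. sz=500 follows the sizing advice above.
from pathlib import Path

for img_path in sorted(Path('test_images').glob('*.jpg')):
    vis.plot_transformed_image(str(img_path), netG, md.val_ds, tfms=x_tfms, sz=500)
```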
### Additional Things to Know

-Visualizations of generated images as training progresses -can- be done in Jupyter as well – it's just a simple boolean flag here when you instantiate this visualization hook: GANVisualizationHook(TENSORBOARD_PATH, trainer, 'trainer', jupyter=True, visual_iters=100)
+Visualizations of generated images as training progresses -can- be done in Jupyter as well – it's just a simple boolean flag here when you instantiate this visualization hook:
+
+```python
+GANVisualizationHook(TENSORBOARD_PATH, trainer, 'trainer', jupyter=True, visual_iters=100)
+```

I prefer keeping this set to `False` and just using Tensorboard though. Trust me – you'll want it. Plus, if you leave it running too long, Jupyter will eat up a lot of memory with said images.

-Model weight saves are also done automatically during the training runs by the GANTrainer – defaulting to saving every 1000 iterations (it's an expensive operation). They're stored in the root training folder you provide, and the name goes by the save_base_name you provide to the training schedule. Weights are saved for each training size separately.
+Model weight saves are also done automatically during the training runs by the `GANTrainer` – defaulting to saving every 1000 iterations (it's an expensive operation). They're stored in the root training folder you provide, and the name goes by the `save_base_name` you provide to the training schedule. Weights are saved for each training size separately.
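As a purely illustrative example of what that per-size saving could produce - the exact naming scheme is the GANTrainer's, so treat these file names as hypothetical:

```python
# Hypothetical per-size checkpoint names for save_base_name='colorize_gen';
# only colorize_gen_192.h5 is confirmed (it's the pretrained file linked below).
expected_saves = [
    'colorize_gen_64.h5',   # weights saved while training at 64px
    'colorize_gen_128.h5',  # weights saved while training at 128px
    'colorize_gen_192.h5',  # weights saved while training at 192px
]
```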
-I'd recommend navigating the code top down – the Jupyter notebooks are the place to start. I treat them just as a convenient interface to prototype and visualize – everything else goes into .py files (and therefore a proper IDE) as soon as I can find a place for them. I already have visualization examples conveniently included – just open the xVisualization notebooks to run these – they point to test images already included in the project so you can start right away (in test_images).
+I'd recommend navigating the code top down – the Jupyter notebooks are the place to start. I treat them just as a convenient interface to prototype and visualize – everything else goes into `.py` files (and therefore a proper IDE) as soon as I can find a place for them. I already have visualization examples conveniently included – just open the `xVisualization` notebooks to run these – they point to test images already included in the project (in `test_images/`) so you can start right away.

The "GAN Schedules" you'll see in the notebooks are probably the ugliest-looking thing I've put in the code, but they're just my version of implementing progressive GAN training, suited to a Unet generator. That's all that's going on there really.
-Pretrained weights for the colorizer generator again are here: https://www.dropbox.com/s/7r2wu0af6okv280/colorize_gen_192.h5 (right click and download from this link). The DeFade stuff is still a work in progress so I'll try to get good weights for those up in a few days.
+Again, [pretrained weights for the colorizer generator are here](https://www.dropbox.com/s/7r2wu0af6okv280/colorize_gen_192.h5) (right click and download from this link). The DeFade stuff is still a work in progress, so I'll try to get good weights for those up in a few days.

Generally with training, you'll start seeing good results when you get midway through size 192px (assuming you're following the progressive training examples I laid out in the notebooks).

-I'm sure I screwed up something putting this up, so please let me know if that's the case.
+I'm sure I screwed up something putting this up, so [please let me know](https://github.com/jantic/DeOldify/issues/new) if that's the case.

### Known Issues
@@ -143,10 +170,7 @@ I'm sure I screwed up something putting this up, so please let me know if that's

### Want More?

-I'll be posting more results here on Twitter: https://twitter.com/citnaj
+I'll be posting more results [here on Twitter](https://twitter.com/citnaj).
### UPDATE 11/6/2018

-Wow this project blew up in popularity way more than I expected, in just a few days. As you might have gathered from the state of the project- I don't know what the hell I'm doing with managing a large GitHub project with lots of people. Never expected that I'd need to. So I'll be trying to get things caught up over the next few days with things like documentation and whatnot. Looks like even a Colab notebook! The whole point is to make this useful, after all, right?
-
-
-
+Wow, this project blew up in popularity way more than I expected, in just a few days. As you might have gathered from the state of the project, I don't know what the hell I'm doing with managing a large GitHub project with lots of people. Never expected that I'd need to. So I'll be trying to get things caught up over the next few days with things like documentation and whatnot. Looks like even a Colab notebook! The whole point is to make this useful, after all, right?