
Seems like an easy problem for DL, as you have an enormous amount of data available (just take any color image, convert it to grayscale and you have a pair of training images).

(This is also the case for e.g. the superresolution problem.)
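The pair-generation trick described above can be sketched in a few lines. This is a minimal illustration, not code from the article; the image representation (nested lists of RGB tuples) and the helper name `make_training_pair` are assumptions, and the grayscale conversion uses the standard BT.601 luma weights.

```python
# Sketch: build a (grayscale input, color target) training pair from
# any RGB image. Here an image is a nested list of (r, g, b) tuples;
# a real pipeline would use tensors, but the idea is identical.
def make_training_pair(color_image):
    """Return (grayscale copy, original) — input and target for a colorizer."""
    def to_gray(pixel):
        r, g, b = pixel
        # ITU-R BT.601 luma weights, the usual RGB-to-grayscale conversion
        y = int(0.299 * r + 0.587 * g + 0.114 * b)
        return (y, y, y)
    gray = [[to_gray(p) for p in row] for row in color_image]
    return gray, color_image
```

Because the target is just the untouched original, any collection of color photos becomes labeled training data for free; super-resolution does the same thing with downscaling instead of desaturation.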



You probably need an enormous GPU (24 GB of RAM) as well, to make the model as large as possible for the best generalization you can get (there are so many different types of objects, surfaces, and fabrics, and compositions of them).


There's something amusing about needing a ridiculously large model to claim good generalization. Analytical models typically go the other way, right?


It's deep learning, not much to do with any analytical model; it's not thinking like a human :-(. Lately even good NLP training needs 24 GB+ (it won't fit into 16 GB), and good-quality colorizing (no color spills, natural colors) can be expected to be just as demanding.

From the article:

"BEEFY Graphics card. I'd really like to have more memory than the 11 GB in my GeForce 1080TI (11GB). You'll have a tough time with less. The Unet and Critic are ridiculously large but honestly I just kept getting better results the bigger I made them."


I get that. I just have a hard time thinking that is "generalizing" the model, so much as making the model all encompassing.


It's the difference between training the model to be a "generalist" and the model actually "generalizing".

I strongly doubt that you can "generalize" colourization in the sense you're talking about (over a wide variety of subject matter).



