Title: Fine-tuning deep convolutional neural networks for distinguishing illustrations from photographs
Authors: Gando, Gota; Yamada, Taiga; Sato, Haruhiko; Oyama, Satoshi; Kurihara, Masahito
Keywords: Aggregation systems; Machine learning; Deep learning; Illustrations

Abstract: Systems for aggregating illustrations require a function that automatically distinguishes illustrations from photographs as they crawl the network to collect images. A previous attempt to implement this functionality by hand-designing features deemed useful for classification achieved an accuracy of only about 58%. Deep neural networks, on the other hand, have been successful in computer vision tasks, and convolutional neural networks (CNNs) have performed well at extracting such useful image features automatically. We evaluated alternative methods for implementing this classification functionality, with a focus on deep neural networks. In our experiments, fine-tuning a deep convolutional neural network (DCNN) achieved 96.8% accuracy, outperforming the other models, including custom CNN models trained from scratch. We conclude that a fine-tuned DCNN is the best method for implementing a function that automatically distinguishes illustrations from photographs.

Publisher: Elsevier
Type: Journal Article (author manuscript, PDF)
Journal: Expert Systems with Applications, Vol. 66, pp. 295-301, 2016-12-30
ISSN: 0957-4174
DOI: 10.1016/j.eswa.2016.08.057
Handle: http://hdl.handle.net/2115/72243
Full text: https://eprints.lib.hokudai.ac.jp/dspace/bitstream/2115/72243/1/manuscript_eswa0820.pdf
Language: English
Rights: © 2016, Elsevier. Licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International license, http://creativecommons.org/licenses/by-nc-nd/4.0/
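The abstract's core technique, fine-tuning a pretrained DCNN for a two-class task, can be sketched as follows. This is a minimal illustration of the general pattern only: the paper's actual architecture, framework, and training details are not given in this record, so the small stand-in backbone, layer sizes, and label convention below are all assumptions for demonstration.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a convolutional backbone whose weights would,
# in practice, come from pretraining on a large dataset such as ImageNet.
backbone = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
)

# Freeze the "pretrained" weights so only the new head is updated.
for p in backbone.parameters():
    p.requires_grad = False

# Fresh classification head for the two-class task
# (illustration vs. photograph).
head = nn.Linear(16, 2)
model = nn.Sequential(backbone, head)

# Fine-tuning optimizer: only the trainable (head) parameters, small LR.
optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)

# One illustrative training step on a dummy batch of 224x224 RGB images.
x = torch.randn(4, 3, 224, 224)
y = torch.tensor([0, 1, 0, 1])  # assumed: 0 = illustration, 1 = photograph
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
optimizer.step()
```

In a fuller version one would typically load real pretrained weights (e.g. from torchvision), optionally unfreeze later backbone layers with a lower learning rate, and train on a labeled crawl of illustrations and photographs.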