Where Do I Get My Pretrained Networks?

Pretrained networks are extremely useful: they save you the time, data, and compute it takes to train a large model from scratch.

A pretrained network is a deep learning model which has already been trained on some dataset and whose weights have been made publicly available for anyone to use.

The most famous example of a pretrained network is probably the VGG series of networks.

VGG stands for “Visual Geometry Group”, a research group in the Department of Engineering Science at the University of Oxford.

Back in 2015, they released a paper, which you can find here, presenting the model they created for the ILSVRC (ImageNet Large-Scale Visual Recognition Challenge) 2014 classification challenge. The ILSVRC is an annual competition that provides a huge dataset of labeled images and asks you to build a model that correctly guesses the class of each image. They used a convolutional neural network and got very good results.

The paper shows results for a couple of configurations of the convolutional network. The two best-performing ones were the versions with 16 and 19 weight layers, so they made those two models publicly available.

Their original model was written in Caffe, but people have converted those weights to be compatible with various deep learning frameworks. Some of those frameworks have officially supported versions of those weights. Unfortunately, Tensorflow is not one of them.

But in a way, it kinda is.

You see, a couple of weeks ago, I started trying to write a Tensorflow style transfer model. To get that kind of model working, most people use one of the VGG networks (either VGG16 or VGG19). I downloaded a couple of models I found online… None of them worked for me.

It was probably down to my own shortcomings as a programmer, but nonetheless, it didn't work.

So then, I gave up and started working on the model using Keras. Keras has multiple officially supported model weights available.

The reason I said that Tensorflow in a way does and doesn't have officially supported models is that, since Tensorflow 1.4, Keras has been included as its official high-level API.

If you don’t already know, Keras is a high-level deep learning framework which needs a low-level framework underneath to actually run. You can use Theano, Tensorflow, Microsoft’s Cognitive Toolkit (CNTK), and a couple of others as that backend.
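
And since Tensorflow 1.4 bundles Keras as tf.keras, you don't even need a separate Keras install. As a quick sketch (assuming a 1.x-style Tensorflow), a Keras tensor really is just a Tensorflow tensor, which is what makes mixing the two so painless:

```python
import tensorflow as tf

# tf.keras ships with Tensorflow 1.4+, so no separate Keras install is needed
inputs = tf.keras.layers.Input(shape=(224, 224, 3))

print(type(inputs))      # a plain Tensorflow tensor in 1.x
scaled = inputs * 255.0  # so ordinary Tensorflow ops work on it directly
```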

I used that to my advantage. I did find a blog post on using pretrained Keras models with Tensorflow, but the implementation was a bit wonky, so I worked out a new way, which I then used to build the style transfer model I described here.

What you’ll need

In order to get these models working with Tensorflow, you’ll need:

  • Tensorflow 1.4 or higher
  • h5py

You could use an older version of Tensorflow and some version of Keras, but there isn’t really a valid reason why you couldn’t just update your version of Tensorflow.

And h5py is needed because these models are all stored in .h5 files.
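
If you're curious what's actually inside one of those .h5 files, here's a quick peek with h5py (the filename below is just an example of a downloaded Keras weights file):

```python
import h5py

# Example filename of a downloaded Keras weights file; adjust to whatever you have
with h5py.File('vgg19_weights_tf_dim_ordering_tf_kernels_notop.h5', 'r') as f:
    # Top-level groups correspond to layer names
    for layer_name in f.keys():
        print(layer_name, list(f[layer_name].keys()))
```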

VGG models

The difference between the regular VGG models and the “no top” ones is that the regular ones include the fully connected layers used for classification. Oh, and about 240 MB.

If you need one of these models for stuff like style transfer or super resolution, then just use the no top version.
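
To make that concrete, here's a minimal sketch of grabbing the no top VGG19 through tf.keras.applications (available from Tensorflow 1.4 onward; treat the exact layer choices as an example) and pulling out intermediate activations for a style transfer loss:

```python
import tensorflow as tf

# Build VGG19 without the fully connected "top"; the weights download automatically
vgg = tf.keras.applications.VGG19(include_top=False, weights='imagenet')
vgg.trainable = False  # we only want a fixed feature extractor

# Style transfer only needs intermediate activations, so pick some layers by name
content_output = vgg.get_layer('block4_conv2').output
style_outputs = [vgg.get_layer(name).output
                 for name in ('block1_conv1', 'block2_conv1', 'block3_conv1')]

features = tf.keras.models.Model(vgg.input, [content_output] + style_outputs)
```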

The hash used in the download sketch below the list is the VGG19 no top one. To use another model, swap in the matching hash:

  • VGG19: cbe5617147190e668d6c5d5026f83318
  • VGG19 no top: 253f8cb515780f3b799900260a226db6
  • VGG16: 64373286793e3c8b2b4e3219cbf3544b
  • VGG16 no top: 6d6bbae143d832006294945121d1f1fc
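
For reference, here's roughly how one of those hashes can be used to download and verify a weights file. The URL is the keras-team “deep-learning-models” GitHub release where these files are hosted, but treat the exact URL and filename as assumptions and adjust them for the model you want:

```python
import tensorflow as tf

# Download and md5-verify the VGG19 "no top" weights (filename, URL, and hash
# are for that model; swap in the matching values for any other model above)
weights_path = tf.keras.utils.get_file(
    'vgg19_weights_tf_dim_ordering_tf_kernels_notop.h5',
    ('https://github.com/fchollet/deep-learning-models/releases/download/'
     'v0.1/vgg19_weights_tf_dim_ordering_tf_kernels_notop.h5'),
    file_hash='253f8cb515780f3b799900260a226db6')

print(weights_path)  # cached under ~/.keras/models/ by default
```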


ResNet50

ResNet50 uses the deep residual architecture: shortcut (identity) connections that make it practical to train a network 50 layers deep.

And the file hashes:

  • ResNet50: a7b3fe01876f51b976af0dea6bc144eb
  • ResNet50 no top: a268eb855778b3df3c7506639542a6af
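
Loading it follows the same pattern as VGG (again via tf.keras.applications); the one extra thing to remember is that each model family has its own preprocess_input function:

```python
import tensorflow as tf

# Same pattern as with VGG: include_top=False drops the final classifier layers
resnet = tf.keras.applications.ResNet50(include_top=False, weights='imagenet')

# Each model family expects its own input preprocessing
preprocess = tf.keras.applications.resnet50.preprocess_input
```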


InceptionV3

InceptionV3 is another model trained on the ImageNet dataset, this one coming out of Google.

And the file hashes:

  • InceptionV3: 9a0d58056eeedaa3f26cb7ebd46da564
  • InceptionV3 no top: bcbd6486424b2319ff4ef7d526e38f63
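
It loads the same way as the others (again via tf.keras.applications); just keep in mind that InceptionV3 defaults to 299×299 inputs rather than the 224×224 that VGG and ResNet50 use:

```python
import tensorflow as tf

# InceptionV3 defaults to 299x299 inputs, unlike the 224x224 of VGG and ResNet50
inception = tf.keras.applications.InceptionV3(include_top=False, weights='imagenet')
```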