chengds edited section_First_Idea_The_first__.tex almost 9 years ago (commit 534cdc8d9fa0d8c748eadf276a5fecb7ec583795)
diff --git a/section_First_Idea_The_first__.tex b/section_First_Idea_The_first__.tex
index 53fc9c1..3ac5f1d 100644
--- a/section_First_Idea_The_first__.tex
+++ b/section_First_Idea_The_first__.tex
...
Flickr datasets as data and the trait scores, both self-assessed and attributed, as targets, and fine-tune the ImageNet model, trained to classify 1000 object classes, for our task of predicting personality traits given an image. The initial goal was indeed to perform surgery on the prototxt, changing the last layer so that the net has to relearn it for the new task, and replacing the classification layer with a regression layer.
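A minimal sketch of how the tail of the train\_val.prototxt could look after this surgery; the layer name fc8\_traits, the 10 outputs and the choice of EuclideanLoss as regression layer are illustrative assumptions, not necessarily the exact definition we used:

\begin{verbatim}
# Renamed last layer: Caffe does not copy the 1000-way fc8 weights
# from the ImageNet model, so this layer is learned for the new task.
layer {
  name: "fc8_traits"
  type: "InnerProduct"
  bottom: "fc7"
  top: "fc8_traits"
  inner_product_param { num_output: 10 }   # one output per trait score
}
# Classification loss (SoftmaxWithLoss) replaced by a regression loss.
layer {
  name: "loss"
  type: "EuclideanLoss"
  bottom: "fc8_traits"
  bottom: "label"
  top: "loss"
}
\end{verbatim}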
We divide the dataset into 75% for the training set and the remaining 25% for the testing set. One observation: we cannot build the files listing the training and testing images together with all the labels needed for training.
To work with the regression we build a txt file containing the paths of the images and an HDF5 file for the 10 trait labels, since they are float numbers and not integers.
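The following Python sketch shows one way these files could be built; the file names, the dummy integer label in the txt list, and the placeholder paths and scores are assumptions for illustration only:

\begin{verbatim}
import numpy as np
import h5py

# Placeholder data: real paths and trait scores would come from the
# Flickr datasets (names and shapes here are assumptions).
image_paths = ['images/img_%05d.jpg' % i for i in range(45000)]
scores = np.random.rand(45000, 10).astype(np.float32)

def write_split(prefix, paths, labels):
    # txt list read by an ImageData layer ("path label", dummy int label)
    with open(prefix + '.txt', 'w') as f:
        for p in paths:
            f.write('%s 0\n' % p)
    # HDF5 file read by an HDF5Data layer, holding the float trait scores
    with h5py.File(prefix + '.h5', 'w') as f:
        f.create_dataset('label', data=labels)
    # list of HDF5 files expected as "source" by the HDF5Data layer
    with open(prefix + '_h5.txt', 'w') as f:
        f.write(prefix + '.h5\n')

cut = int(0.75 * len(image_paths))            # 75% train / 25% test
write_split('train', image_paths[:cut], scores[:cut])
write_split('test',  image_paths[cut:], scores[cut:])
\end{verbatim}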
When we launched training on the first trait we noticed that all the network layers, both weights and data, were set to zero. To overcome this problem we decreased the learning rate and built the training files using a random permutation of the images.
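The shuffled list files can be regenerated by permuting the indices with numpy (np.random.permutation) before the split in the sketch above; the learning-rate change lives in the solver. A hedged example of the relevant solver.prototxt fields (the specific values are assumptions, not the exact ones we used):

\begin{verbatim}
# Illustrative solver settings for fine-tuning (values are assumptions).
net: "train_val.prototxt"
base_lr: 0.0001        # lowered from the usual 0.01 to keep training stable
lr_policy: "step"
gamma: 0.1
stepsize: 20000
momentum: 0.9
weight_decay: 0.0005
max_iter: 100000
snapshot_prefix: "traits_finetune"
solver_mode: GPU
\end{verbatim}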
Then we tested the fine-tuned model by building a deploy.prototxt file that takes images as input and predicts their traits. Testing the net, we noticed that it was not able to learn much; maybe the task is too hard. We aim to generalize personality traits from 45,000 images and to predict a float number given a new image.
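A hedged pycaffe sketch of how such a deploy-time test could look; the file names (deploy.prototxt, the fine-tuned caffemodel, the ImageNet mean file) and the output blob name fc8\_traits are assumptions consistent with the earlier sketches:

\begin{verbatim}
import numpy as np
import caffe

# Load the fine-tuned net in test mode (file names are assumptions).
net = caffe.Net('deploy.prototxt', 'traits_finetuned.caffemodel', caffe.TEST)

# Standard Caffe preprocessing for ImageNet-style models.
transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
transformer.set_transpose('data', (2, 0, 1))        # HWC -> CHW
transformer.set_mean('data', np.load('ilsvrc_2012_mean.npy').mean(1).mean(1))
transformer.set_raw_scale('data', 255)              # load_image returns [0, 1]
transformer.set_channel_swap('data', (2, 1, 0))     # RGB -> BGR

img = caffe.io.load_image('new_image.jpg')
net.blobs['data'].data[...] = transformer.preprocess('data', img)
out = net.forward()
traits = out['fc8_traits'].flatten()   # 10 predicted float trait scores
print(traits)
\end{verbatim}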