\section{First Idea}
The first idea was to adapt the prototxt file describing the structure of the network so that it could take as input the images of the PsychoFlickr dataset, with the trait scores, both self-assessed and attributed, as targets, and fine-tune the ImageNet model, originally trained to classify 1000 classes of objects, for our task of predicting the personality traits given an image.
The initial goal was indeed to perform surgery on the prototxt, renaming the last layer so that the net has to learn it from scratch for the new task, and changing the classification layer into a regression layer.
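As a sketch of what this surgery might look like (layer names and the single-output size are illustrative assumptions, not taken from our actual files): in Caffe, renaming the last fully connected layer makes the framework re-initialize it instead of copying the pretrained ImageNet weights, and the softmax classification loss is replaced by a Euclidean loss for regression:

\begin{verbatim}
# Hypothetical fragment of the modified train prototxt.
layer {
  name: "fc8_trait"         # renamed from "fc8": pretrained
  type: "InnerProduct"      # weights are not copied, so the
  bottom: "fc7"             # layer is learned from scratch
  top: "fc8_trait"
  inner_product_param {
    num_output: 1           # one float score per trait
  }
}
layer {
  name: "loss"
  type: "EuclideanLoss"     # regression loss in place of
  bottom: "fc8_trait"       # SoftmaxWithLoss
  bottom: "label"
  top: "loss"
}
\end{verbatim}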
We divide the dataset into 75\% for the training set and the remaining 25\% for the testing set. One observation: we can't list the images together with all their labels in a single file, because to work with the regression we need a txt file containing the paths of the images and an HDF5 file for the 10 trait labels, as the scores are float numbers and not integers.
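A minimal sketch of how these files can be written, assuming hypothetical file names and stand-in data (the real paths and trait scores come from the dataset): the txt list feeds Caffe's ImageData layer, while the float labels go into an HDF5 file for the HDF5Data layer, which itself reads a small text file listing the .h5 files.

\begin{verbatim}
import numpy as np
import h5py

# Hypothetical stand-ins: 45000 image paths and 10 float
# trait scores per image.
image_paths = ["images/img_%05d.jpg" % i for i in range(45000)]
labels = (np.arange(45000 * 10, dtype=np.float32)
          .reshape(45000, 10) / 10.0)

split = int(0.75 * len(image_paths))  # 75% train / 25% test

# Image list: "path label" per line. The integer label here is
# a dummy; the real float targets live in the HDF5 file below.
with open("train.txt", "w") as f:
    for path in image_paths[:split]:
        f.write("%s 0\n" % path)

# Float labels for the HDF5Data layer, plus the list file that
# the layer's source parameter points to.
with h5py.File("train_labels.h5", "w") as f:
    f.create_dataset("label", data=labels[:split])
with open("train_h5_list.txt", "w") as f:
    f.write("train_labels.h5\n")
\end{verbatim}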
When we launched the training for the first trait we noticed that all the network layers, both weights and data, were set to zero. To overcome this problem we decrease the learning rate and build the training files using a random permutation of the images. Then we test the fine-tuned model, building a deploy.prototxt file that takes images as input and predicts their traits. Testing the net we noticed that it wasn't able to learn much... maybe the task is too hard... We expect to generalize a personality trait from 45000 images and to predict a float number given a new image.
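The shuffling step above can be sketched as follows, with hypothetical stand-ins for the real image list and trait scores; the key point is applying the same permutation to both paths and labels so that each image stays paired with its own scores:

\begin{verbatim}
import numpy as np

# Hypothetical stand-ins for the real image list and scores.
image_paths = ["images/img_%05d.jpg" % i for i in range(45000)]
labels = np.arange(45000 * 10, dtype=np.float32).reshape(45000, 10)

# Draw ONE permutation and index both structures with it.
rng = np.random.RandomState(0)  # fixed seed for reproducibility
perm = rng.permutation(len(image_paths))
shuffled_paths = [image_paths[i] for i in perm]
shuffled_labels = labels[perm]
\end{verbatim}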