[Attached code file: Ace_Code]

1.) I chose to use the RMSprop optimizer because it keeps the per-parameter learning rates from shrinking too quickly, which helps when gradient magnitudes vary a lot during training. It is similar to Adagrad: Adagrad adds element-wise scaling of the gradient based on the historical sum of squared gradients in each dimension, but because that sum only grows, Adagrad's effective learning rate keeps diminishing and training can stall. RMSprop uses a moving average of the squared gradients instead, so it avoids that rapidly diminishing learning rate.
Another option was Rprop. This method also lies in the family of adaptive learning rate methods, but its algorithm is meant for full-batch optimization. Rprop tries to resolve the issues caused by large variations in gradient magnitudes: it combines the idea of using only the sign of the gradient with the idea of adapting the step size individually for each weight. However, Rprop does not work well with large datasets because it does not handle mini-batches well.
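As a rough illustration of this choice, the model could be compiled with RMSprop along the lines of the sketch below. This is a minimal sketch, not the attached code: the learning rate of 1e-4 and the build_model() helper are assumptions made here for illustration.

```python
from tensorflow import keras

# Assumed helper: builds the cat/dog CNN used in this assignment (not shown here).
model = build_model()

# RMSprop keeps a moving average of squared gradients, so step sizes do not
# shrink monotonically the way they do under Adagrad's growing sum of squares.
optimizer = keras.optimizers.RMSprop(learning_rate=1e-4)  # assumed learning rate

model.compile(
    optimizer=optimizer,
    loss="binary_crossentropy",   # see question 2
    metrics=["accuracy"],         # see question 3
)
```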

2.) Describe your selected loss function and its implementation. How is it effectively penalizing bad predictions?

My selected loss function is binary cross-entropy, because the model is deciding between exactly two classes, cat or dog. Log loss penalizes both types of errors, but it especially punishes predictions that are confident and incorrect. Cross-entropy and log loss are slightly different concepts, but in machine learning, when calculating error on probabilities between 0 and 1, they resolve to the same thing, and both work by penalizing bad predictions. If the probability assigned to the true class is 1.0, the loss should be zero; by taking the negative log of that probability we achieve exactly this behavior for the model.
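To make the penalty concrete, here is a small sketch (not taken from the attached code) that computes binary cross-entropy by hand for a few predictions; the example probabilities are made up purely for illustration.

```python
import math

def binary_cross_entropy(y_true, p_pred, eps=1e-7):
    """Log loss for one prediction: -[y*log(p) + (1-y)*log(1-p)]."""
    p = min(max(p_pred, eps), 1 - eps)  # clip to avoid log(0)
    return -(y_true * math.log(p) + (1 - y_true) * math.log(1 - p))

# True class is "dog" (label 1); the predicted probabilities below are made up.
print(binary_cross_entropy(1, 0.99))  # ~0.01 -> confident and correct: tiny loss
print(binary_cross_entropy(1, 0.60))  # ~0.51 -> unsure: moderate loss
print(binary_cross_entropy(1, 0.01))  # ~4.61 -> confident and wrong: large loss
```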

3.) What is the purpose of the metrics= argument in your model.compile() function? (look here - https://keras.io/api/metrics/)

The purpose of the metrics= argument is to judge the performance of your model. Metric functions are similar to loss functions, except that the results from evaluating a metric are not used when training the model.
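For example, the same accuracy metric that is passed to compile() can be evaluated on its own, which makes it clear that it only reports performance and does not drive the gradient updates. The labels and predicted probabilities below are made up for illustration.

```python
from tensorflow import keras

# A metric is computed for reporting only; it never produces gradients.
metric = keras.metrics.BinaryAccuracy()

# Hypothetical labels and predictions, just to show how a metric is updated and read.
metric.update_state([0, 1, 1, 0], [0.1, 0.8, 0.4, 0.2])
print(metric.result().numpy())  # 0.75 -> three of the four predictions round to the right class
```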

4.) Plot the accuracy and loss results for both the training and test datasets. Include these in your response. Assess the model and describe how good you think it performed.

I do not think the model performed well at all. Looking at the graphs, one can see that the model is overfitting. See the images of the plots below:

[Plots C&D and C&D2: training vs. validation accuracy and loss]
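The plots come from the training history; a minimal sketch of how they can be produced is shown below. It assumes the History object returned by model.fit(..., validation_data=...) was kept in a variable named history, which is an assumption about the attached code rather than something shown in it.

```python
import matplotlib.pyplot as plt

# `history` is assumed to be the object returned by model.fit(..., validation_data=...).
acc = history.history["accuracy"]
val_acc = history.history["val_accuracy"]
loss = history.history["loss"]
val_loss = history.history["val_loss"]
epochs = range(1, len(acc) + 1)

plt.figure()
plt.plot(epochs, acc, label="training accuracy")
plt.plot(epochs, val_acc, label="validation accuracy")
plt.legend()

plt.figure()
plt.plot(epochs, loss, label="training loss")
plt.plot(epochs, val_loss, label="validation loss")
plt.legend()
plt.show()
```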

5.) Use the model to evaluate 3 dog images and 3 cat images. Upload your images and the predictions. How did your model perform in practice? Do you have any ideas of how to improve the model's performance?

[Attached test images: Cat_2, Cat_3, cat-read-to-pounce-julie-austin-photography, IMG_20170520_055548_741, Foster-Tesla, Dog_3]

I ran the images of 3 cats and 3 dogs through the model, and it correctly identified all of them. I suspect I could make the model better by lowering the number of epochs in order to prevent overfitting. Training with more data would also improve the model.
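For reference, the per-image predictions can be produced with something like the sketch below. The 150x150 input size, the 1/255 rescaling, and the class mapping of 1 = dog are assumptions about the attached code, not facts taken from it, and the file name in the example is hypothetical.

```python
import numpy as np
from tensorflow import keras

def predict_image(model, path, target_size=(150, 150)):
    """Load one image, scale it the same way as the training data, and predict."""
    img = keras.preprocessing.image.load_img(path, target_size=target_size)
    x = keras.preprocessing.image.img_to_array(img) / 255.0  # assumes training used 1/255 rescaling
    x = np.expand_dims(x, axis=0)                            # add the batch dimension
    prob = float(model.predict(x)[0][0])                     # sigmoid output in [0, 1]
    return ("dog" if prob >= 0.5 else "cat"), prob           # assumes label 1 = dog

# Hypothetical file name, only to show the call.
label, prob = predict_image(model, "Dog_3.jpg")
print(label, prob)
```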