Responses:
A.
Using NLP to build a sarcasm classifier
1. Pick two or three news sources and select a few news titles from their feeds (about 5 is likely enough). For example, you could select CNN, Fox News, MSNBC, NPR, PBS, Al Jazeera, RT (Russia Today), Deutsche Welle, Facebook, BBC, France24, CCTV, NHK World, or another source you wish to analyze. Run your sarcasm model to predict whether the titles are interpreted as sarcastic or not. Analyze the results and comment on the different news sources you have selected.
“Police searching for man”, “Broncos president says it will back players who kneel”, “Politicians demand answers”, “Georgia mother arrested for hiding guns”, “Covid symptoms that never go away”
Predicted sarcasm scores for each headline, respectively:
[7.8158194e-01] [1.6591918e-07] [4.2859977e-03] [4.1089302e-06] [6.4585358e-01]
The higher the value, the more sarcastic the headline was judged to be. I think this model is of little use here: it did label some headlines accurately as “not sarcastic”, but in reality none of the headlines I used as inputs were sarcastic, yet two of them still received high sarcasm scores (0.78 and 0.65). Perhaps with more training data this model could become more accurate at identifying sarcasm in text.
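For reference, the prediction step looks roughly like the sketch below. The names tokenizer, model, max_length, padding_type, and trunc_type are assumed to come from the training notebook and may differ in your copy of the exercise.

```python
from tensorflow.keras.preprocessing.sequence import pad_sequences

# Headlines collected from the news feeds above.
headlines = [
    "Police searching for man",
    "Broncos president says it will back players who kneel",
    "Politicians demand answers",
    "Georgia mother arrested for hiding guns",
    "Covid symptoms that never go away",
]

# `tokenizer`, `model`, `max_length`, `padding_type`, and `trunc_type` are
# assumed to exist from the training notebook (hypothetical names here).
sequences = tokenizer.texts_to_sequences(headlines)
padded = pad_sequences(sequences, maxlen=max_length,
                       padding=padding_type, truncating=trunc_type)

# Each score is the model's estimated probability that a headline is sarcastic.
scores = model.predict(padded)
for headline, score in zip(headlines, scores):
    print(f"{score[0]:.4f}  {headline}")
```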
B.
Text generation with an RNN
1. Use the generate_text() command at the end of the exercise to produce synthetic output from your RNN model. Run it a second time and review the output. How has your RNN model been able to “learn” and “remember” the Shakespeare text in order to reproduce a similar output?
2. Stretch goal - replace the Shakespeare text with your own selected text and run the model again. Modify the source text as needed in order to generate_text() from the newly trained model.
The model generates text one character at a time, starting from a seed string. The RNN state returned by the model is fed back in along with the next input, so each prediction is conditioned on more context than just a single character. After predicting the next character, the updated RNN state is again fed back into the model, which is how it accumulates context from the previously predicted characters. Looking at the output, the model has learned when to capitalize, how to break the text into paragraphs, and how to imitate a Shakespeare-like vocabulary. With the small number of training epochs, it has not yet learned to form coherent sentences.
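As a reminder of what happens inside generate_text(), the loop below is a minimal sketch of that sampling procedure. It assumes the char2idx and idx2char lookups and a model rebuilt with batch_size=1, as in the notebook; the exact names may differ.

```python
import tensorflow as tf

def generate_text(model, start_string, num_generate=1000, temperature=1.0):
    # Convert the seed string to token IDs (char2idx/idx2char are the
    # character lookups built earlier in the exercise).
    input_eval = tf.expand_dims([char2idx[c] for c in start_string], 0)
    text_generated = []

    # Reset the hidden state, then repeatedly sample a character and feed it
    # (plus the carried hidden state) back into the model.
    model.reset_states()
    for _ in range(num_generate):
        predictions = tf.squeeze(model(input_eval), 0)   # (seq_len, vocab)
        predictions = predictions / temperature          # sampling "risk"
        predicted_id = tf.random.categorical(
            predictions, num_samples=1)[-1, 0].numpy()

        # The sampled character becomes the next input; the RNN state kept
        # inside the stateful model is the "memory" described above.
        input_eval = tf.expand_dims([predicted_id], 0)
        text_generated.append(idx2char[predicted_id])

    return start_string + ''.join(text_generated)
```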
The training procedure, as laid out in the exercise, works as follows:
First, initialize the RNN state. We do this by calling the tf.keras.Model.reset_states method.
Next, iterate over the dataset (batch by batch) and calculate the predictions associated with each.
Open a tf.GradientTape, and calculate the predictions and loss in that context.
Calculate the gradients of the loss with respect to the model variables using the tf.GradientTape.gradient method.
Finally, take a step downwards by applying those gradients with the optimizer’s apply_gradients method (tf.keras.optimizers.Optimizer.apply_gradients).
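A minimal sketch of that training step, assuming the model and dataset built earlier in the exercise, an Adam optimizer, and an EPOCHS constant (illustrative names), might look like this:

```python
import tensorflow as tf

optimizer = tf.keras.optimizers.Adam()

@tf.function
def train_step(inp, target):
    # Open a GradientTape so the forward pass and loss are recorded.
    with tf.GradientTape() as tape:
        predictions = model(inp)
        loss = tf.reduce_mean(
            tf.keras.losses.sparse_categorical_crossentropy(
                target, predictions, from_logits=True))
    # Gradients of the loss with respect to the trainable variables...
    grads = tape.gradient(loss, model.trainable_variables)
    # ...and one optimizer step "downwards".
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

for epoch in range(EPOCHS):
    model.reset_states()  # re-initialize the RNN state at the start of each epoch
    for inp, target in dataset:
        loss = train_step(inp, target)
```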
C.
Neural machine translation with attention
1. Use the translate() command at the end of the exercise to translate three sentences from Spanish to English. How did your translations turn out?
translate(u'¿todavia estan en casa?')
Output: Predicted translation: are you still at home ?
The accurate translation is “are they still at home?”, so the model substituted “you” for “they” here. The other two translations were correct.
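For completeness, the checks were run roughly as in the sketch below; translate() is the helper provided at the end of the exercise, and the reference translation shown is my own (the other two sentences are omitted here).

```python
# translate() is the helper defined at the end of the exercise; it prints the
# input sentence and the model's predicted English translation.
examples = [
    (u'¿todavia estan en casa?', 'are they still at home?'),
    # ...the other two Spanish sentences and their reference translations
]
for spanish, reference in examples:
    translate(spanish)
    print('Reference:', reference)
```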