This paper is available on arXiv under a CC 4.0 license.
Authors:
(1) D. Sinclair, Imense Ltd, and email: [email protected];

(2) W. T. Pye, Warwick University, and email: [email protected].
Table of Links
- Abstract and Introduction
- Interrogating the LLM with an Emotion Eliciting Tail Prompt
- PCA analysis of the Emotion of Amazon reviews
- Future Work
- Conclusions
- Acknowledgments and References
5. Conclusions
LLMs are by their nature designed to return text strings in response to a text prompt. This is not always the most useful format in which to return information. Internally, an LLM maintains probability distributions over tokens. This paper presents an example of how to build part of an emotion-based synthetic consciousness by deriving a vector of emotion descriptor probabilities over a dictionary of emotional terms. This emotion probability vector supports a range of applications, including fine-grained review analysis, predicting responses to marketing messages, and offence detection. It is possible that the emotion probability vector is a step on the road to synthetic consciousness, and that it could make robots more empathetic by allowing them to predict how something they are about to say will make the recipient feel.
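To make the mechanism concrete, the sketch below shows one way such an emotion probability vector could be derived from an off-the-shelf causal language model: append an emotion-eliciting tail prompt to the input text, read the model's next-token probability distribution, and renormalise it over a small dictionary of emotion words. The model name, tail prompt, and emotion dictionary here are illustrative assumptions, not the authors' exact choices.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # assumption: any causal LM with accessible logits works
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

# Illustrative emotion dictionary; the paper's dictionary is larger.
EMOTION_WORDS = ["happy", "sad", "angry", "surprised", "disgusted", "afraid"]

def emotion_vector(text, tail=" Reading this made me feel"):
    """Probability vector over EMOTION_WORDS for `text` + tail prompt."""
    inputs = tokenizer(text + tail, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]   # logits for the next token
    probs = torch.softmax(logits, dim=-1)
    # Probability of each emotion word's first token (a single-token
    # approximation; multi-token words would need their pieces combined).
    ids = [tokenizer.encode(" " + w)[0] for w in EMOTION_WORDS]
    raw = probs[ids]
    vec = raw / raw.sum()                        # renormalise over the dictionary
    return dict(zip(EMOTION_WORDS, vec.tolist()))

print(emotion_vector("The product broke after one day and support ignored me."))
```

Renormalising over the dictionary, rather than the whole vocabulary, yields a vector that sums to one over the emotional terms, which is the form used for the downstream PCA and review analysis described earlier in the paper.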
If reasonable responses are desired from an LLM, it might be a good policy not to train it on the mad shouting that pervades anti-social media; analogously, it might be a good idea not to train young minds on it either.