Ouyang, L., Wu, J., Jiang, X., Almeida, D., Wainwright, C. L., Mishkin, P., ... & Lowe, R.
(2022). Training language models to follow instructions with human feedback. arXiv
preprint arXiv:2203.02155.
Pennington, J., Socher, R., & Manning, C. D. (2014, October). GloVe: Global vectors for word
representation. In Proceedings of the 2014 conference on empirical methods in natural
language processing (EMNLP) (pp. 1532-1543).
Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., & Sutskever, I. (2019). Language models are
unsupervised multitask learners. OpenAI Blog, 1(8), 9.
Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., ... & Liu, P. J. (2020).
Exploring the limits of transfer learning with a unified text-to-text transformer. The
Journal of Machine Learning Research, 21(1), 5485-5551.
Raina, V., & Gales, M. (2022). Multiple-choice question generation: Towards an automated
assessment framework. arXiv preprint arXiv:2209.11830.
Sharif Razavian, A., Azizpour, H., Sullivan, J., & Carlsson, S. (2014). CNN features off-the-shelf:
An astounding baseline for recognition. In Proceedings of the IEEE conference on
computer vision and pattern recognition workshops (pp. 806-813).
Reynolds, L., & McDonell, K. (2021, May). Prompt programming for large language models:
Beyond the few-shot paradigm. In Extended Abstracts of the 2021 CHI Conference on
Human Factors in Computing Systems (pp. 1-7).
Ruder, S. (2018). A review of the neural history of natural language processing.
http://ruder.io/a-review-of-the-recent-history-of-nlp/
Settles, B., LaFlair, G. T., & Hagiwara, M. (2020). Machine learning–driven language
assessment. Transactions of the Association for Computational Linguistics, 8, 247-263.