
Neural Search Talks [6] — OPT: Open Pre-trained Transformer Language Models (Meta AI)

Andrew Yates (Assistant Professor at the University of Amsterdam) and Sergi Castella i Sapé discuss the recent "Open Pre-trained Transformer (OPT) Language Models" from Meta AI (formerly Facebook).


In this replication effort, Meta AI reproduced OpenAI's GPT-3 training as described in the original paper, documenting the process in detail, down to the nitty-gritty, and sharing their findings with the community. The code, pretrained weights, and training logbook are available on their GitHub repository (links below).
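For readers who want to try the released models directly, here is a minimal sketch of loading a small OPT checkpoint and generating a completion. It assumes the Hugging Face-hosted weights (e.g. facebook/opt-125m); the canonical release and training code live in Meta's metaseq repository.

```python
# Minimal sketch: load a small OPT checkpoint and sample a completion.
# Assumes the Hugging Face-hosted weights (facebook/opt-125m); Meta's own
# release and training code live in the metaseq repository.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "facebook/opt-125m"  # smallest OPT variant; larger sizes are also released
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Open-sourcing large language models matters because"
inputs = tokenizer(prompt, return_tensors="pt")

# Greedy decoding keeps the example deterministic; swap in sampling as needed.
output_ids = model.generate(**inputs, max_new_tokens=40, do_sample=False)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```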


Timestamps:

00:00 Introduction and housekeeping: new feedback form, ACL conference highlights

02:42 The convergence between NLP and Neural IR techniques

06:43 Open Pre-trained Transformer: motivation and scope, reproducing GPT-3 and open-sourcing

08:16 Basics of OPT: architecture, pre-training objective, teacher forcing, tokenizer, training data

13:40 Findings from preliminary experiments: hyperparameters, training stability, loss spikes

20:08 Problems that appear at scale when training with 992 GPUs

23:01 Using temperature to check whether GPUs are working

25:00 Training the largest model: what to do when the loss explodes? (which happens quite often)

29:15 When they switched from AdamW to SGD

32:00 Results: successful but not quite GPT-3 level. Toxicity?

35:45 Replicability of Large Language Model research. Was GPT-3 replicable? What difference does it make?

37:25 What makes a paper replicable?

40:33 Directions in which Large Language Models are applied to Information Retrieval

45:15 Final thoughts and takeaways
