EleutherAI/pythia-6.9b · Hugging Face (2024)

The Pythia Scaling Suite is a collection of models developed to facilitate interpretability research (see paper). It contains two sets of eight models of sizes 70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two models: one trained on the Pile, and one trained on the Pile after the dataset has been globally deduplicated. All 8 model sizes are trained on the exact same data, in the exact same order. We also provide 154 intermediate checkpoints per model, hosted on Hugging Face as branches.

The Pythia model suite was deliberately designed to promote scientific research on large language models, especially interpretability research. Despite not centering downstream performance as a design goal, we find the models match or exceed the performance of similar and same-sized models, such as those in the OPT and GPT-Neo suites.

Details on previous early release and naming convention.

Previously, we released an early version of the Pythia suite to the public. However, we decided to retrain the model suite to address a few hyperparameter discrepancies. This model card lists the changes; see appendix B in the Pythia paper for further discussion. We found no difference in benchmark performance between the two Pythia versions. The old models are still available, but we suggest the retrained suite if you are just starting to use Pythia.
This is the current release.

Please note that all models in the Pythia suite were renamed in January 2023. For clarity, a table comparing the old and new names is provided in this model card, together with exact parameter counts.


Model Details

  • Developed by: EleutherAI
  • Model type: Transformer-based Language Model
  • Language: English
  • Learn more: Pythia's GitHub repository for the training procedure, config files, and details on how to use the models. See the paper for more evals and implementation details.
  • Library: GPT-NeoX
  • License: Apache 2.0
  • Contact: To ask questions about this model, join the EleutherAI Discord and post them in #release-discussion. Please read the existing Pythia documentation before asking about it in the EleutherAI Discord. For general correspondence: contact@eleuther.ai.
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 × 10⁻³ | |
| 160M | 85,056,000 | 12 | 768 | 12 | 2M | 6.0 × 10⁻⁴ | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 2M | 3.0 × 10⁻⁴ | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 × 10⁻⁴ | |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 2M | 2.0 × 10⁻⁴ | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 × 10⁻⁴ | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 × 10⁻⁴ | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 × 10⁻⁴ | |
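
As a rough sanity check on these counts, the attention and MLP weight matrices of a GPT-NeoX-style decoder contribute about 12 × layers × model_dim² parameters, with biases and layer norms making up the small remainder. A minimal sketch for the 6.9B row (an illustration, not an exact parameter-count formula):

```python
# Approximate non-embedding parameter count for the 6.9B row above.
n_layers, d_model = 32, 4096
approx = 12 * n_layers * d_model ** 2        # attention + MLP weight matrices only
print(f"{approx:,}")                         # 6,442,450,944 vs. 6,444,163,072 in the table
```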

Uses and Limitations

Intended Use

The primary intended use of Pythia is research on the behavior, functionality, and limitations of large language models. This suite is intended to provide a controlled setting for performing scientific experiments. We also provide 154 checkpoints per model: initial step0, 10 log-spaced checkpoints step{1,2,4...512}, and 143 evenly-spaced checkpoints from step1000 to step143000. These checkpoints are hosted on Hugging Face as branches. Note that branch 143000 corresponds exactly to the model checkpoint on the main branch of each model.
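
For scripting across checkpoints, the 154 branch names can be enumerated from the pattern above. A minimal sketch, assuming every revision is literally named step<N>:

```python
# Branch names for the 154 checkpoints described above.
log_spaced = [0] + [2 ** i for i in range(10)]      # step0 plus step1, step2, ..., step512
evenly_spaced = range(1000, 143001, 1000)           # step1000 through step143000
branches = [f"step{n}" for n in [*log_spaced, *evenly_spaced]]
assert len(branches) == 154
```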

You may also further fine-tune and adapt Pythia-6.9B for deployment, as long as your use is in accordance with the Apache 2.0 license. Pythia models work with the Hugging Face Transformers Library. If you decide to use pre-trained Pythia-6.9B as a basis for your fine-tuned model, please conduct your own risk and bias assessment.
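
Below is a minimal fine-tuning sketch using the Transformers Trainer. The dataset file, sequence length, and hyperparameters are placeholders, and full fine-tuning of a 6.9B-parameter model assumes substantial GPU memory (parameter-efficient methods are not shown here):

```python
from datasets import load_dataset
from transformers import (AutoTokenizer, GPTNeoXForCausalLM, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)

model_name = "EleutherAI/pythia-6.9b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token            # GPT-NeoX tokenizer has no pad token
model = GPTNeoXForCausalLM.from_pretrained(model_name)

# "my_corpus.txt" is a placeholder for your own training text.
dataset = load_dataset("text", data_files={"train": "my_corpus.txt"})["train"]
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="pythia-6.9b-finetuned",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=16,
        learning_rate=1e-5,
        num_train_epochs=1,
        bf16=True,
    ),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```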

Out-of-scope use

The Pythia Suite is not intended for deployment. It is not in itself a product and cannot be used for human-facing interactions. For example, the model may generate harmful or offensive text. Please evaluate the risks associated with your particular use case.

Pythia models are English-language only, and are not suitable for translation or generating text in other languages.

Pythia-6.9B has not been fine-tuned for downstream contexts in which language models are commonly deployed, such as writing genre prose or powering commercial chatbots. This means Pythia-6.9B will not respond to a given prompt the way a product like ChatGPT does. This is because, unlike Pythia-6.9B, ChatGPT was fine-tuned using methods such as Reinforcement Learning from Human Feedback (RLHF) to better “follow” human instructions.

Limitations and biases

The core functionality of a large language model is to take a string of text and predict the next token. The token deemed statistically most likely by the model need not produce the most “accurate” text. Never rely on Pythia-6.9B to produce factually accurate output.
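
To make this concrete, the model's next-token distribution can be inspected directly. A small sketch (using pythia-70m to keep the example light; the same code applies to pythia-6.9b):

```python
import torch
from transformers import GPTNeoXForCausalLM, AutoTokenizer

model = GPTNeoXForCausalLM.from_pretrained("EleutherAI/pythia-70m")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-70m")

inputs = tokenizer("The capital of France is", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]           # scores for the next token only
probs = torch.softmax(logits, dim=-1)
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(idx))!r}  {p:.3f}")  # statistically likely, not necessarily true
```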

This model was trained on the Pile, a dataset known to contain profanity and texts that are lewd or otherwise offensive. See Section 6 of the Pile paper for a discussion of documented biases with regards to gender, religion, and race. Pythia-6.9B may produce socially unacceptable or undesirable text, even if the prompt itself does not include anything explicitly offensive.

If you plan on using text generated through, for example, the Hosted Inference API, we recommend having a human curate the outputs of this language model before presenting it to other people. Please inform your audience that the text was generated by Pythia-6.9B.

Quickstart

Pythia models can be loaded and used via the following code, demonstrated here for the third pythia-70m-deduped checkpoint:

```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer

# Load the weights saved at training step 3000 from the "step3000" branch.
model = GPTNeoXForCausalLM.from_pretrained(
    "EleutherAI/pythia-70m-deduped",
    revision="step3000",
    cache_dir="./pythia-70m-deduped/step3000",
)

tokenizer = AutoTokenizer.from_pretrained(
    "EleutherAI/pythia-70m-deduped",
    revision="step3000",
    cache_dir="./pythia-70m-deduped/step3000",
)

# Generate a short continuation of the prompt and decode it back to text.
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```

Revision/branch step143000 corresponds exactly to the model checkpoint on the main branch of each model.
For more information on how to use all Pythia models, see documentation on GitHub.
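
The same pattern applies to the model described by this card. A sketch for loading the final (main-branch) weights, where the float16 cast is an assumption about available memory rather than a requirement:

```python
import torch
from transformers import GPTNeoXForCausalLM, AutoTokenizer

# Omitting `revision` loads the main branch, i.e. the step143000 checkpoint.
model = GPTNeoXForCausalLM.from_pretrained(
    "EleutherAI/pythia-6.9b", torch_dtype=torch.float16
)
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-6.9b")
```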

Training

Training data

The Pile is an 825GiB general-purpose dataset in English. It was created by EleutherAI specifically for training large language models. It contains texts from 22 diverse sources, roughly broken down into five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl), prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and miscellaneous (e.g. GitHub, Enron Emails). See the Pile paper for a breakdown of all data sources, methodology, and a discussion of ethical implications. Consult the datasheet for more detailed documentation about the Pile and its component datasets. The Pile can be downloaded from the official website, or from a community mirror.
The Pile was not deduplicated before being used to train Pythia-6.9B.

Training procedure

All models were trained on the exact same data, in the exact same order. Each model saw 299,892,736,000 tokens during training, and 143 checkpoints for each model are saved every 2,097,152,000 tokens, spaced evenly throughout training, from step1000 to step143000 (which is the same as main). In addition, we also provide frequent early checkpoints: step0 and step{1,2,4...512}. This corresponds to training for just under 1 epoch on the Pile for non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.

All Pythia models were trained for 143,000 steps at a batch size of 2M (2,097,152) tokens.
See GitHub for more details on training procedure, including how to reproduce it.
Pythia uses the same tokenizer as GPT-NeoX-20B.
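
These figures are mutually consistent, as a quick arithmetic check shows:

```python
# Sanity-checking the token counts quoted above.
tokens_per_step = 2_097_152                                # 2M-token batch size
total_steps = 143_000
assert tokens_per_step * total_steps == 299_892_736_000    # tokens seen per model
assert tokens_per_step * 1_000 == 2_097_152_000            # tokens between checkpoints
```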

Evaluations

All 16 Pythia models were evaluated using the LM Evaluation Harness. You can access the results by model and step at results/json/* in the GitHub repository.
Expand the sections below to see plots of evaluation results for all Pythia and Pythia-deduped models compared with OPT and BLOOM.

  • LAMBADA (OpenAI)
  • Physical Interaction: Question Answering (PIQA)
  • WinoGrande
  • AI2 Reasoning Challenge (ARC), Easy Set
  • SciQ
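
As a hedged sketch of running the same benchmarks yourself, the harness exposes a Python entry point; the backend and task names below follow recent lm-evaluation-harness releases and may differ in older versions:

```python
from lm_eval import evaluator

results = evaluator.simple_evaluate(
    model="hf",                                     # Hugging Face causal-LM backend
    model_args="pretrained=EleutherAI/pythia-6.9b",
    tasks=["lambada_openai", "piqa", "winogrande", "arc_easy", "sciq"],
)
print(results["results"])
```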

Changelog

This section compares differences between previously released Pythia v0 and the current models. See Appendix B of the Pythia paper for further discussion of these changes and the motivation behind them. We found that retraining Pythia had no impact on benchmark performance.

  • All model sizes are now trained with uniform batch size of 2M tokens. Previously, the models of size 160M, 410M, and 1.4B parameters were trained with batch sizes of 4M tokens.
  • We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,128,256,512} in addition to every 1000 training steps.
  • Flash Attention was used in the new retrained suite.
  • We remedied a minor inconsistency that existed in the original suite: all models of size 2.8B parameters or smaller had a learning rate (LR) schedule which decayed to a minimum LR of 10% of the starting LR, but the 6.9B and 12B models all used an LR schedule which decayed to a minimum LR of 0. In the redone training runs, we rectified this inconsistency: all models were now trained with the LR decaying to a minimum of 0.1× their maximum LR. An illustrative sketch of such a schedule follows below.
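
For illustration only (this is not the training code, and the helper name and warmup length below are placeholders), a cosine schedule decaying to 10% of the maximum LR looks like:

```python
import math

def illustrative_lr(step, max_lr, total_steps, warmup_steps, min_ratio=0.1):
    """Linear warmup followed by cosine decay to min_ratio * max_lr."""
    if step < warmup_steps:
        return max_lr * step / warmup_steps
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return max_lr * (min_ratio + (1 - min_ratio) * 0.5 * (1 + math.cos(math.pi * progress)))

# e.g. the 6.9B max LR of 1.2e-4 ends at 1.2e-5 after 143,000 steps
print(illustrative_lr(143_000, 1.2e-4, 143_000, 1_430))
```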

Naming convention and parameter count

Pythia models were renamed in January 2023. It is possible that the old naming convention still persists in some documentation by accident. The current naming convention (70M, 160M, etc.) is based on total parameter count.

| current Pythia suffix | old suffix | total params | non-embedding params |
| --- | --- | --- | --- |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
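
The gap between total and non-embedding parameters corresponds to two untied vocabulary × model-dimension matrices (the input embedding and the output unembedding). The padded vocabulary size recovered below (50,432 for the 6.9B configuration) is inferred from these numbers, not quoted from the table, and varies slightly across model sizes:

```python
# Embedding parameters implied by the 6.9B row of the table above.
total_params, non_embedding_params, d_model = 6_857_302_016, 6_444_163_072, 4096
embedding_params = total_params - non_embedding_params    # 413,138,944
padded_vocab = embedding_params // (2 * d_model)          # embedding + unembedding matrices
print(padded_vocab)                                       # 50432
```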