Hugging Face Transformers is a library that provides pre-trained models for NLP, computer vision, and audio tasks.
Example: load and run a pre-trained model in just a few lines of code!
from transformers import pipeline

# The first call downloads a default sentiment-analysis model from the Hub
classifier = pipeline("sentiment-analysis")
result = classifier("I love Hugging Face!")
print(result)  # e.g. [{'label': 'POSITIVE', 'score': 0.9998}]
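Each call returns a list of dictionaries, one per input, with a predicted label and a confidence score. You can also pin the pipeline to a specific checkpoint instead of the task default; the checkpoint below is just one example from the Hub:

# Any sequence-classification checkpoint on the Hub works here
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english"
)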
Going a step further, you can load specific models and tokenizers directly and customize the processing to your needs.
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Note: bert-base-uncased ships without a fine-tuned classification head,
# so the head is randomly initialized and predictions are not meaningful
# until the model is fine-tuned (see the next section)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

inputs = tokenizer(
    "Hello, Hugging Face!",
    padding=True,
    truncation=True,
    return_tensors="pt"  # return PyTorch tensors
)

with torch.no_grad():  # inference only, no gradients needed
    outputs = model(**inputs)

predictions = torch.nn.functional.softmax(outputs.logits, dim=-1)
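To turn the probabilities into a readable prediction, take the argmax and look it up in the model's label map (for the untuned head above these are just the generic LABEL_0/LABEL_1 placeholders):

predicted_class = predictions.argmax(dim=-1).item()
print(model.config.id2label[predicted_class])  # e.g. 'LABEL_1'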
Finally, fine-tune models on your own data with the Trainer API and publish them to the Hub for production use.
from transformers import (
    Trainer, TrainingArguments,
    DataCollatorWithPadding
)
from datasets import load_dataset

# IMDB movie reviews with binary sentiment labels (train/test splits)
dataset = load_dataset("imdb")

def preprocess_function(examples):
    # Truncate long reviews; padding is handled later by the data collator,
    # which pads each batch dynamically instead of padding everything to 512
    return tokenizer(
        examples["text"],
        truncation=True,
        max_length=512
    )

tokenized_datasets = dataset.map(preprocess_function, batched=True)
training_args = TrainingArguments(
    output_dir="./results",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=3,
    weight_decay=0.01,
    evaluation_strategy="epoch",  # renamed to eval_strategy in recent transformers releases
    save_strategy="epoch",        # must match the evaluation strategy for load_best_model_at_end
    load_best_model_at_end=True,
    push_to_hub=True,             # requires being logged in to the Hub (huggingface-cli login)
    hub_model_id="my-awesome-model"
)
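By default the Trainer reports only the evaluation loss. To also track accuracy, you can pass a compute_metrics function; a minimal sketch (not part of the original snippet) that you would hand to the Trainer below via compute_metrics=compute_metrics:

import numpy as np

def compute_metrics(eval_pred):
    # eval_pred is a (logits, labels) pair collected over the eval set
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return {"accuracy": float((predictions == labels).mean())}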
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_datasets["train"],
    eval_dataset=tokenized_datasets["test"],
    tokenizer=tokenizer,
    data_collator=DataCollatorWithPadding(tokenizer=tokenizer)  # dynamic per-batch padding
)

trainer.train()
trainer.push_to_hub()  # uploads the model, tokenizer, and a model card to the Hub
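Once pushed, the fine-tuned model can be loaded straight from the Hub, pipeline included. The repository id below is a placeholder built from the hub_model_id above; substitute your own Hub username:

# "your-username/my-awesome-model" is a placeholder, not a real repo
classifier = pipeline("sentiment-analysis", model="your-username/my-awesome-model")
print(classifier("This movie was an absolute delight!"))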