- cross-posted to:
- fosai
cross-posted from: https://lemmy.world/post/4069150
In this video, we’ll learn how to build Large Language Model (LLM) + Retrieval Augmented Generation (RAG) pipelines using open-source models from Hugging Face deployed on AWS SageMaker. We use the MiniLM sentence transformer to generate embeddings for the semantic search component, with Pinecone serving as the vector database.
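For anyone who wants the gist without watching, here is a minimal sketch of the pipeline described above: MiniLM embeddings for retrieval, Pinecone as the vector store, and a Hugging Face LLM served from a SageMaker endpoint for generation. The index name, endpoint name, API key, and the response format of the endpoint are placeholders/assumptions — they depend on how you set things up (the video uses its own names), and the Pinecone call details vary a bit between SDK versions.

```python
import json
import boto3
from sentence_transformers import SentenceTransformer
from pinecone import Pinecone

# Embedding model for the semantic search component (MiniLM, as in the video).
embedder = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Pinecone index holding the document embeddings.
# The API key and index name ("rag-demo") are placeholders.
pc = Pinecone(api_key="YOUR_PINECONE_API_KEY")
index = pc.Index("rag-demo")

# SageMaker runtime client for the deployed Hugging Face LLM endpoint.
# Endpoint name and payload/response format depend on how the model was deployed.
sm_runtime = boto3.client("sagemaker-runtime")
LLM_ENDPOINT = "huggingface-llm-endpoint"  # placeholder


def index_documents(docs: list[str]) -> None:
    """Embed documents with MiniLM and upsert them into Pinecone."""
    vectors = embedder.encode(docs).tolist()
    index.upsert(
        vectors=[
            (f"doc-{i}", vec, {"text": doc})
            for i, (vec, doc) in enumerate(zip(vectors, docs))
        ]
    )


def answer(question: str, top_k: int = 3) -> str:
    """Retrieve relevant passages from Pinecone and ask the LLM with that context."""
    query_vec = embedder.encode(question).tolist()
    results = index.query(vector=query_vec, top_k=top_k, include_metadata=True)
    context = "\n".join(m["metadata"]["text"] for m in results["matches"])

    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
    response = sm_runtime.invoke_endpoint(
        EndpointName=LLM_ENDPOINT,
        ContentType="application/json",
        Body=json.dumps({"inputs": prompt}),
    )
    # Assumes the Hugging Face text-generation container's response shape;
    # adjust parsing for your container/model.
    return json.loads(response["Body"].read())[0]["generated_text"]


if __name__ == "__main__":
    index_documents(["SageMaker hosts Hugging Face models as real-time endpoints."])
    print(answer("How can Hugging Face models be served on AWS?"))
```

The retrieval and generation stages are decoupled on purpose: you can swap the embedding model, the vector database, or the hosted LLM independently without touching the rest of the pipeline.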