In this video, we’ll learn how to build Large Language Model (LLM) + Retrieval-Augmented Generation (RAG) pipelines using open-source models from Hugging Face deployed on AWS SageMaker. We use the MiniLM sentence transformer to generate embeddings for the semantic search component, with Pinecone as the vector database.
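
Below is a minimal sketch of how these pieces fit together, assuming the `sentence-transformers` package for MiniLM, the current Pinecone Python client, and an LLM already deployed behind a SageMaker real-time endpoint. The index name `rag-demo`, the endpoint name `huggingface-llm-endpoint`, and the request payload format are illustrative assumptions, not the exact values used in the video.

```python
# Minimal RAG sketch: embed documents with MiniLM, index them in Pinecone,
# retrieve context for a question, and send it to an LLM endpoint on SageMaker.
import json
import os

import boto3
from pinecone import Pinecone, ServerlessSpec
from sentence_transformers import SentenceTransformer

# MiniLM produces 384-dimensional sentence embeddings.
embedder = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

pc = Pinecone(api_key=os.environ["PINECONE_API_KEY"])
index_name = "rag-demo"  # assumed index name
if index_name not in pc.list_indexes().names():
    pc.create_index(
        name=index_name,
        dimension=384,
        metric="cosine",
        spec=ServerlessSpec(cloud="aws", region="us-east-1"),
    )
index = pc.Index(index_name)

# Index a few example passages; in practice this would be your document corpus.
docs = [
    "SageMaker lets you deploy Hugging Face models behind a real-time endpoint.",
    "Pinecone stores dense vectors and returns the nearest neighbours to a query.",
]
vectors = embedder.encode(docs).tolist()
index.upsert(
    vectors=[(str(i), vec, {"text": doc}) for i, (vec, doc) in enumerate(zip(vectors, docs))]
)

def answer(question: str, top_k: int = 2) -> str:
    """Retrieve the most relevant passages and ask the LLM to answer from them."""
    query_vec = embedder.encode(question).tolist()
    results = index.query(vector=query_vec, top_k=top_k, include_metadata=True)
    context = "\n".join(m.metadata["text"] for m in results.matches)
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}\nAnswer:"

    # Call the LLM deployed on SageMaker; the payload format depends on the
    # container used to deploy the model, so treat this one as an example.
    runtime = boto3.client("sagemaker-runtime")
    response = runtime.invoke_endpoint(
        EndpointName="huggingface-llm-endpoint",  # assumed endpoint name
        ContentType="application/json",
        Body=json.dumps({"inputs": prompt, "parameters": {"max_new_tokens": 200}}),
    )
    return json.loads(response["Body"].read())

print(answer("How do MiniLM and Pinecone work together in this pipeline?"))
```

The retrieval step and the generation step are independent: MiniLM and Pinecone only decide *which* passages reach the prompt, so the SageMaker endpoint can be swapped for any other hosted LLM without changing the indexing code.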