@xzskywalkersun515

This lecturer should be given credit for such an amazing explanation.

@vt1454

IBM should start a learning platform. Their videos are so good.

@geopopos

I love seeing a large company like IBM invest in educating the public with free content! You all rock!

@ericadar

Marina is a talented teacher.  This was brief, clear and enjoyable.

@jordonkash

4:15 Marina combines the colors of the word prompt to emphasize her point. Nice touch

@ntoscano01

Very well explained!!!  Thank you for your explanation of this.  I'm so tired of 45-minute YouTube videos with a college-educated professional trying to explain ML topics.  If you can't explain a topic in your own language in 10 minutes or less, then you have failed either to understand it yourself or to communicate it effectively.

@TheAllnun21

Wow, this is the best beginner's introduction I've seen on RAG!

@digvijaysingh6882

Einstein said, "If you can't explain it simply, you don't understand it well enough." And you explained it beautifully in the most simple and easy-to-understand way 👍👍. Thank you

@ReflectionOcean

1. Understanding the challenges with LLMs - 0:36

2. Introducing Retrieval-Augmented Generation (RAG) to solve LLM issues - 0:18

3. Using RAG to provide accurate, up-to-date information - 1:26

4. Demonstrating how RAG uses a content store to improve responses - 3:02

5. Explaining the three-part prompt in the RAG framework - 4:13

6. Addressing how RAG keeps LLMs current without retraining - 4:38

7. Highlighting the use of primary sources to prevent data hallucination - 5:02

8. Discussing the importance of improving both the retriever and the generative model - 6:01
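The "three-part prompt" from item 5 (an instruction, the retrieved content, and the user's question) can be sketched as a simple template. This is an illustrative assumption about the structure described in the video, not its exact wording; the function and field names are hypothetical:

```python
# Hypothetical sketch of a three-part RAG prompt:
# instruction + retrieved context + user question.
def three_part_prompt(instruction: str, retrieved: list[str], question: str) -> str:
    # Join the retrieved passages into a bulleted context section.
    context = "\n".join(f"- {passage}" for passage in retrieved)
    return f"{instruction}\n\nContext:\n{context}\n\nQuestion: {question}"

print(three_part_prompt(
    "Answer using only the context below; say 'I don't know' otherwise.",
    ["Saturn has 146 known moons as of 2023."],
    "Which planet has the most moons?",
))
```

The instruction part is what lets the model admit ignorance instead of hallucinating when the retrieved context doesn't contain the answer.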

@natoreus

I'm sure it was already said, but this video is the most thorough, simple way I've seen RAG explained on YT hands down. Well done.

@aam50

That's a really great explanation of RAG in terms most people will understand. I was also sufficiently fascinated by how the writing on glass was done to go hunt down the answer from other comments!

@m.kaschi2741

Wow, I opened youtube coming from the ibm blog just to leave a comment. Clearly explained, very good example, and well presented as well!! :) Thank you

@vikramn2190

I believe the video is slightly inaccurate. As one of the commenters mentioned, the LLM is frozen and the act of interfacing with external sources and vector datastores is not carried out by the LLM. 

The following is the actual flow: 


Step 1: User makes a prompt
Step 2: Prompt is converted to a vector embedding
Step 3: Nearby documents in vector space are selected
Step 4: Prompt is sent along with selected documents as context
Step 5: LLM responds with given context

Please correct me if I'm wrong.
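The five steps above can be sketched end to end. This is a toy illustration of the flow the commenter describes, not a real system: the "embedding" is a bag-of-words stand-in for a trained encoder, and the final step stops at assembling the prompt rather than calling an actual LLM:

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    """Step 2: convert text to a vector. A real system uses a trained
    embedding model; word counts stand in for one here."""
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Similarity between two count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(prompt: str, docs: list[str], k: int = 1) -> list[str]:
    """Step 3: select the documents nearest the prompt in vector space."""
    q = embed(prompt)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_rag_prompt(prompt: str, docs: list[str]) -> str:
    """Step 4: send the prompt along with the selected documents as context.
    Step 5 would pass this string to the (frozen) LLM."""
    context = "\n".join(retrieve(prompt, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {prompt}"

docs = [
    "Jupiter has 95 confirmed moons as of 2023.",
    "The Great Red Spot is a storm on Jupiter.",
]
print(build_rag_prompt("How many moons does Jupiter have?", docs))
```

Note that the LLM itself never touches the vector store: retrieval happens entirely outside the model, which only ever sees the assembled prompt string.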

@maruthuk

Loved the simple example to describe how RAG can be used to augment the responses of LLM models.

@redwinsh258

The interesting part is not retrieval from the internet but retrieval from long-term memory, with a stated objective that builds on that memory and continually gives it "maintenance" so it stays efficient and effective to answer from. LLMs are awesome because, even though there are many challenges ahead, they give us a hint of what's possible; without them it would be hard to have the motivation to follow the road.

@AlexandraSteskal

I love IBM teachers/trainers, I used to work at IBM and their in-house education quality was AMAZING!

@kallamamran

We also need the models to cross-check their own answers against their sources of information before printing out the answer to the user. There is no self-control today; models just say things. "I don't know" is actually a perfectly fine answer sometimes!

@hamidapremani6151

The explanation was spot on! 
IBM is the go-to platform for learning about new technology, with high-quality content explained and illustrated with so much simplicity.

@javi_park

hold up - the fact that the board is flipped is the most underrated modern education marvel nobody's talking about

@ghtgillen

Your ability to write backwards on the glass is amazing! ;-)