AI in Education

How Does Curiously™ Enhance Its AI Learning Companions With Retrieval-Augmented Generation (RAG)?

Apr 5, 2023


In the evolving landscape of education, the ability to provide personalized and precise answers to student inquiries is critical. That's where Curiously™'s AI Learning Companions come into play, leveraging the cutting-edge technique known as Retrieval-Augmented Generation (RAG) to enhance the learning experience.

What is Retrieval-Augmented Generation (RAG)?

Retrieval-Augmented Generation (RAG) combines a large language model with an external information retrieval system, improving performance especially on tasks that require access to a broad range of factual information (Ji et al., 2023).

How Curiously™’s AI Learning Companions Use RAG

To employ RAG, we first created an external vector database to index different kinds of class materials (e.g., lecture notes, lecture recording transcripts, assignments, slides). Each type of material was parsed and segmented into chunks of a size optimal for its characteristics and usage.
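
The sketch below illustrates this indexing step. Everything in it is an illustrative assumption rather than our production setup: the per-type chunk sizes are invented, and the toy `embed` function stands in for a real sentence-embedding model.

```python
# Sketch: index class materials into a simple in-memory vector store,
# chunked per material type. Sizes and names are illustrative assumptions.
from dataclasses import dataclass

# Hypothetical per-type chunking parameters: (characters per chunk, overlap).
CHUNK_CONFIG = {
    "lecture_notes": (1000, 200),
    "transcript":    (500, 100),   # spoken text tends to suit smaller chunks
    "assignment":    (800, 150),
    "slides":        (300, 0),     # roughly one chunk per slide
}

@dataclass
class Chunk:
    source: str          # e.g. "Lecture 3 notes" -- used later for citations
    text: str
    vector: list[float]

def embed(text: str, dim: int = 64) -> list[float]:
    """Toy hashing embedding; a real system would use a sentence-embedding model."""
    vec = [0.0] * dim
    for word in text.lower().split():
        vec[hash(word) % dim] += 1.0
    return vec

def chunk_text(text: str, size: int, overlap: int) -> list[str]:
    """Split text into fixed-size chunks with the given character overlap."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def index_material(store: list[Chunk], material_type: str, source: str, text: str) -> None:
    """Chunk one document with its type-specific settings and add it to the store."""
    size, overlap = CHUNK_CONFIG[material_type]
    for piece in chunk_text(text, size, overlap):
        store.append(Chunk(source=source, text=piece, vector=embed(piece)))
```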

As shown in Figure 1, the language model generates text while the information retrieval system fetches relevant external information. Given an inquiry, the RAG system first uses its retrieval component to search the external vector database of class materials and gather the most relevant documents. The retrieved information is then combined with the original query to form the new input for the language generation model.
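
Continuing the toy index above, a minimal sketch of that retrieve-then-augment path might look like this (the cosine-similarity ranking and the prompt template are illustrative choices, not our exact pipeline):

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(store: list[Chunk], query: str, top_k: int = 4) -> list[Chunk]:
    """Rank indexed chunks by similarity to the query embedding."""
    qv = embed(query)
    return sorted(store, key=lambda c: cosine(qv, c.vector), reverse=True)[:top_k]

def build_prompt(query: str, chunks: list[Chunk]) -> str:
    """Combine retrieved context with the original query as the new LLM input."""
    context = "\n\n".join(f"[{i + 1}] {c.text}" for i, c in enumerate(chunks))
    return (
        "Answer the student's question using only the class materials below.\n\n"
        f"Class materials:\n{context}\n\nQuestion: {query}"
    )
```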

Using this method, our AI Learning Companion can answer highly specific questions that require up-to-date, class-related information. It also includes citation and source information for every relevant document retrieved from the external vector database in its responses, enabling fact-checking and helping to avoid "hallucination" (Zhang et al., 2023).
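
Attaching citations can be as simple as carrying each chunk's source label through to the final response, as in this sketch (the `llm` argument is a placeholder for any text-generation callable):

```python
def answer_with_citations(store: list[Chunk], query: str, llm) -> str:
    """Generate an answer and append the sources of every retrieved chunk."""
    chunks = retrieve(store, query)
    answer = llm(build_prompt(query, chunks))   # `llm`: any text-generation callable
    sources = "\n".join(f"[{i + 1}] {c.source}" for i, c in enumerate(chunks))
    return f"{answer}\n\nSources:\n{sources}"
```

Because the numbered source list mirrors the numbered context passages in the prompt, a reader can trace each part of the answer back to the class material it came from.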

Figure 1: Flow diagram illustrating how Retrieval-Augmented Generation (RAG) works.

Advanced Techniques in Retrieval-Augmented Generation (RAG)

Advanced RAG techniques such as Hyperparameter Optimization, Hybrid Search, and Small-to-Big Retrieval are used to improve the performance of our AI Learning Companions. The setting of hyperparameters such as chunk size, overlap, and top K is crucial to the performance of a RAG system (Lyu et al., 2024).

For example, selecting an optimal chunk size requires balance: too small, and the model may focus too narrowly, overlooking essential contextual information; too large, and it may include extraneous information, diluting the relevance and focus of what is retrieved. Likewise, splitting articles into non-overlapping passages may strip useful context from answer spans near chunk boundaries (Wang et al., 2019).

In designing our chunking and retrieval strategy, we ran multiple trials with careful consideration of several vital factors: the nature of the indexed content, the embedding model and its optimal block size, the expected length and complexity of user queries, and how the specific application uses the retrieved results.
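
As a rough illustration, such trials can be framed as a grid search over chunk size, overlap, and top K, scored against questions whose source documents are known. The `build_store` re-indexing hook and the hit-rate metric below are illustrative assumptions, not our internal evaluation harness:

```python
import itertools

def hit_rate(build_store, eval_set, size: int, overlap: int, top_k: int) -> float:
    """Fraction of questions whose known source appears in the top-k results."""
    store = build_store(size, overlap)   # re-index the materials with these settings
    hits = sum(
        any(c.source == gold_source for c in retrieve(store, question, top_k))
        for question, gold_source in eval_set
    )
    return hits / len(eval_set)

def grid_search(build_store, eval_set):
    """Return the best (score, (chunk_size, overlap, top_k)) over a small grid."""
    grid = itertools.product([300, 500, 1000], [0, 100, 200], [2, 4, 8])
    return max((hit_rate(build_store, eval_set, s, o, k), (s, o, k))
               for s, o, k in grid)
```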

In addition, we use Hybrid Search and Small-to-Big Retrieval to ensure our AI Learning Companion produces precise responses. Since class materials contain many academic terms, we want the AI Learning Companion to answer questions using information directly related to the term in question, rather than information that discusses similar but not identical topics.
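
Conceptually, hybrid search fuses an exact-term (keyword) ranking with an embedding-based ranking. The sketch below combines the two with reciprocal rank fusion (RRF), one common fusion method, reusing `embed` and `cosine` from the earlier sketches; it is an illustration, not our production ranker:

```python
def keyword_rank(store: list[Chunk], query: str) -> list[Chunk]:
    """Rank chunks by exact term overlap -- catches precise academic terms."""
    terms = set(query.lower().split())
    return sorted(store,
                  key=lambda c: len(terms & set(c.text.lower().split())),
                  reverse=True)

def vector_rank(store: list[Chunk], query: str) -> list[Chunk]:
    """Rank chunks by embedding similarity -- catches related phrasings."""
    qv = embed(query)
    return sorted(store, key=lambda c: cosine(qv, c.vector), reverse=True)

def hybrid_search(store: list[Chunk], query: str, top_k: int = 4, k: int = 60) -> list[Chunk]:
    """Fuse the keyword and vector rankings with reciprocal rank fusion (RRF)."""
    scores: dict[int, float] = {}
    by_id = {id(c): c for c in store}
    for ranking in (keyword_rank(store, query), vector_rank(store, query)):
        for rank, chunk in enumerate(ranking):
            scores[id(chunk)] = scores.get(id(chunk), 0.0) + 1.0 / (k + rank + 1)
    best = sorted(scores, key=scores.get, reverse=True)[:top_k]
    return [by_id[i] for i in best]
```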

Furthermore, instead of using the same chunks for both search and generation, we use smaller text chunks during retrieval and then pass the larger text chunk to which each retrieved chunk belongs to the large language model. Smaller chunks improve retrieval accuracy, since irrelevant text within larger chunks can distort their semantic representation; larger chunks, in turn, give the large language model more contextual information to work with.
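
A sketch of that small-to-big pattern, again reusing the helpers above: small windows are embedded and matched, but the larger parent chunk is what gets handed to the language model (the 200-character window size is an arbitrary illustrative value):

```python
from dataclasses import dataclass

@dataclass
class SmallChunk:
    text: str            # small window used only for matching
    vector: list[float]
    parent: str          # the larger chunk handed to the language model

def split_small(parent_text: str, small_size: int = 200, overlap: int = 50) -> list[SmallChunk]:
    """Index small windows of a larger chunk so retrieval stays precise."""
    return [SmallChunk(text=t, vector=embed(t), parent=parent_text)
            for t in chunk_text(parent_text, small_size, overlap)]

def small_to_big_retrieve(small_store: list[SmallChunk], query: str, top_k: int = 4) -> list[str]:
    """Match the query against small chunks, then return their deduplicated parents."""
    qv = embed(query)
    ranked = sorted(small_store, key=lambda s: cosine(qv, s.vector), reverse=True)
    parents: list[str] = []
    for s in ranked:
        if s.parent not in parents:
            parents.append(s.parent)
        if len(parents) == top_k:
            break
    return parents
```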

References:

Ji, Z., Lee, N., Frieske, R., Yu, T., Su, D., Xu, Y., ... & Fung, P. (2023). Survey of hallucination in natural language generation. ACM Computing Surveys, 55(12), 1-38.

Zhang, Y., Li, Y., Cui, L., Cai, D., Liu, L., Fu, T., ... & Shi, S. (2023). Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models. arXiv preprint arXiv:2309.01219.

Lyu, Y., Li, Z., Niu, S., Xiong, F., Tang, B., Wang, W., ... & Chen, E. (2024). CRUD-RAG: A comprehensive Chinese benchmark for retrieval-augmented generation of large language models. arXiv preprint arXiv:2401.17043.

Wang, Z., Ng, P., Ma, X., Nallapati, R., & Xiang, B. (2019). Multi-passage BERT: A globally normalized BERT model for open-domain question answering. arXiv preprint arXiv:1908.08167.

Ready to see how Curiously can elevate your classroom? Request a demo today and discover how personalized learning at scale can become a reality for your students!

Want to customize an AI Learning Companion for your students?

Discover how easy it is with a live demo. Click the button to see Curiously™ in action!
