![Quickly build high-accuracy Generative AI applications on enterprise data using Amazon Kendra, LangChain, and large language models | AWS Machine Learning Blog](https://d2908q01vomqb2.cloudfront.net/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59/2023/05/02/ML-13807-image001-new.png)
Quickly build high-accuracy Generative AI applications on enterprise data using Amazon Kendra, LangChain, and large language models | AWS Machine Learning Blog
![RAG vs Finetuning — Which Is the Best Tool to Boost Your LLM Application? | by Heiko Hotz | Aug, 2023 | Towards Data Science](https://miro.medium.com/v2/resize:fit:1127/1*Jq9bEbitg1Pv4oASwEQwJg.png)
RAG vs Finetuning — Which Is the Best Tool to Boost Your LLM Application? | by Heiko Hotz | Aug, 2023 | Towards Data Science
![Semantic Search in Confluence Wiki With LlamaIndex and Pinecone | by Wenqi Glantz | Better Programming](https://miro.medium.com/v2/resize:fit:1400/1*XYrzXkFcHf0nb8Ine-8QcA.png)
Semantic Search in Confluence Wiki With LlamaIndex and Pinecone | by Wenqi Glantz | Better Programming
![Tackling Hallucinations: Microsoft's LLM-Augmenter Boosts ChatGPT's Factual Answer Score | by Synced | SyncedReview | Medium](https://miro.medium.com/v2/resize:fit:1086/0*eliDT_3aSRX9J10L.png)
Tackling Hallucinations: Microsoft's LLM-Augmenter Boosts ChatGPT's Factual Answer Score | by Synced | SyncedReview | Medium
![Tackling Hallucinations: Microsoft's LLM-Augmenter Boosts ChatGPT's Factual Answer Score | by Synced | SyncedReview | Medium](https://miro.medium.com/v2/resize:fit:1400/0*CCD_Dbb9nj8d0Trv.png)
Tackling Hallucinations: Microsoft's LLM-Augmenter Boosts ChatGPT's Factual Answer Score | by Synced | SyncedReview | Medium
![RAG vs Finetuning — Which Is the Best Tool to Boost Your LLM Application? | by Heiko Hotz | Aug, 2023 | Towards Data Science](https://miro.medium.com/v2/resize:fit:1400/1*JSJBBnslBE9S5i77Rz9r_g.png)
RAG vs Finetuning — Which Is the Best Tool to Boost Your LLM Application? | by Heiko Hotz | Aug, 2023 | Towards Data Science
![Check Your Facts and Try Again: Improving Large Language Models with External Knowledge and Automated Feedback - Microsoft Research](https://www.microsoft.com/en-us/research/uploads/prod/2023/03/llm-augmenter-diagram-1024x721.png)
Check Your Facts and Try Again: Improving Large Language Models with External Knowledge and Automated Feedback - Microsoft Research
![Ankit on X: "(2/4) We took @cohereai 's recently released wikipedia embeddings and put them in a vector database (@pinecone). Our Verifier LLM uses the statement to find the k nearest sources](https://pbs.twimg.com/media/FugPvhjaIAA77uP.jpg:large)
Ankit on X: "(2/4) We took @cohereai 's recently released wikipedia embeddings and put them in a vector database (@pinecone). Our Verifier LLM uses the statement to find the k nearest sources
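The retrieval step this thread describes (embed a statement, store source embeddings in a vector database, then fetch the k nearest sources) can be sketched without Pinecone using plain cosine similarity over an in-memory matrix. The embeddings below are random toy stand-ins for Cohere's Wikipedia embeddings, not real data:

```python
import numpy as np

def k_nearest(statement_vec: np.ndarray, source_embeddings: np.ndarray, k: int = 3) -> np.ndarray:
    """Return indices of the k sources most similar to the statement (cosine similarity)."""
    # Normalize rows so a dot product equals cosine similarity.
    s = statement_vec / np.linalg.norm(statement_vec)
    e = source_embeddings / np.linalg.norm(source_embeddings, axis=1, keepdims=True)
    scores = e @ s
    # argsort is ascending; take the last k and reverse for best-first order.
    return np.argsort(scores)[-k:][::-1]

# Toy stand-ins: 5 "Wikipedia" source embeddings and one statement embedding
# deliberately placed near source 2.
rng = np.random.default_rng(0)
sources = rng.normal(size=(5, 8))
statement = sources[2] + 0.01 * rng.normal(size=8)

print(k_nearest(statement, sources, k=3))
```

In a real deployment the brute-force scan would be replaced by the vector database's approximate nearest-neighbor query, but the ranking semantics are the same.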
![Andrej Karpathy on X: "Two notes I wanted to add: 1) In addition to parallel inference and training, prompt encoding is also parallelizable even at batch_size=1 because the prompt tokens can be](https://pbs.twimg.com/media/F3qjqQ0bYAAxb_B.jpg:large)
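The point in Karpathy's note — prompt encoding parallelizes even at batch size 1 — can be illustrated with a minimal single-head causal self-attention pass in NumPy: all prompt positions are scored in one matrix multiply under a causal mask, and the result matches feeding the tokens in one at a time. Shapes and the identity projections are toy assumptions for illustration, not any particular model:

```python
import numpy as np

def causal_attention(x: np.ndarray) -> np.ndarray:
    """Single-head self-attention over all prompt positions at once.
    x: (T, d) token embeddings; returns (T, d) attention outputs."""
    T, d = x.shape
    q = kk = v = x                            # toy identity projections (assumption)
    scores = q @ kk.T / np.sqrt(d)            # (T, T): all token pairs in one matmul
    mask = np.triu(np.ones((T, T), dtype=bool), k=1)
    scores = np.where(mask, -np.inf, scores)  # causal: no attention to future tokens
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)        # softmax row-wise
    return w @ v                              # every prompt position computed in parallel

# Equivalence check: one parallel pass over the full prompt equals
# sequential, token-at-a-time processing of growing prefixes.
rng = np.random.default_rng(1)
x = rng.normal(size=(6, 4))
parallel = causal_attention(x)
sequential = np.vstack([causal_attention(x[: t + 1])[-1] for t in range(6)])
print(np.allclose(parallel, sequential))  # True
```

This is why the prompt phase of inference is compute-bound (one big matmul) while generation is sequential: each new token depends on the previous one.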