The no-code angle here is underrated. Most RAG tutorials assume you are building everything from scratch with Python and a vector database, which puts the approach out of reach for teams that live in the Microsoft ecosystem. Being able to point Azure AI Search at OneLake through the portal, with chunking, embedding, and indexing handled as a managed service, dramatically lowers the barrier to entry. For enterprise teams already invested in Fabric, this is a much more realistic starting point than spinning up a custom LangChain pipeline. Good practical post.