aniketj.dev

Building an Enterprise RAG Search System

GenAI-Powered Documentation Intelligence

Challenge

Vanderlande's engineering teams needed a way to query a vast body of technical documentation intelligently. Traditional keyword search broke down on complex, context-dependent questions about system specifications, maintenance procedures, and integration guides.

Approach

Designed a Retrieval-Augmented Generation (RAG) pipeline that ingests technical documents, splits them into chunks, vectorizes the chunks with embedding models, and stores the vectors in a searchable index. At query time, the most relevant chunks are retrieved and passed to an LLM, which generates a context-aware answer grounded in the retrieved text.
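At a high level, the retrieve-then-generate flow can be sketched in Java. The class below is illustrative only: a toy term-frequency embedding stands in for the real embedding model, and the chunking and similarity logic are simplified stand-ins for the production index.

```java
import java.util.*;
import java.util.stream.Collectors;

// Minimal sketch of a RAG retrieval step: chunk documents, embed them,
// and rank chunks against the query by cosine similarity.
// All names and the toy embedding are assumptions, not production code.
public class RagSketch {

    // Split text into fixed-size, overlapping chunks so context that
    // spans a chunk boundary is not lost. Assumes size > overlap.
    public static List<String> chunk(String text, int size, int overlap) {
        List<String> chunks = new ArrayList<>();
        for (int start = 0; start < text.length(); start += size - overlap) {
            chunks.add(text.substring(start, Math.min(start + size, text.length())));
            if (start + size >= text.length()) break;
        }
        return chunks;
    }

    // Toy embedding: term counts over a tiny fixed vocabulary.
    // A real pipeline would call an embedding model here instead.
    public static double[] embed(String text, List<String> vocab) {
        double[] v = new double[vocab.size()];
        String lower = text.toLowerCase();
        for (int i = 0; i < vocab.size(); i++) {
            int idx = 0, count = 0;
            while ((idx = lower.indexOf(vocab.get(i), idx)) != -1) { count++; idx++; }
            v[i] = count;
        }
        return v;
    }

    public static double cosine(double[] a, double[] b) {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i]; na += a[i] * a[i]; nb += b[i] * b[i];
        }
        return (na == 0 || nb == 0) ? 0 : dot / (Math.sqrt(na) * Math.sqrt(nb));
    }

    // Return the k chunks most similar to the query; these become the
    // context handed to the LLM at generation time.
    public static List<String> retrieve(String query, List<String> chunks,
                                        List<String> vocab, int k) {
        double[] q = embed(query, vocab);
        return chunks.stream()
            .sorted(Comparator.comparingDouble((String c) -> -cosine(q, embed(c, vocab))))
            .limit(k)
            .collect(Collectors.toList());
    }
}
```

The overlap parameter is the interesting design choice: without it, a specification that straddles two chunks would never be retrieved whole.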

Architecture

Documents are ingested through Java backend services that handle parsing, chunking, and vectorization. Embeddings are stored in Unity Catalog via Databricks. Azure Functions serve as the orchestration layer, coordinating between the search index and Microsoft AI Services for response generation. Infrastructure is fully automated with Bicep templates.
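The orchestration layer's control flow can be illustrated with hypothetical interfaces standing in for the vector index and the LLM endpoint. Everything below is an assumption for illustration; the real system wires this flow through Azure Functions, Unity Catalog, and Microsoft AI Services.

```java
import java.util.List;

// Sketch of the orchestration step: retrieve supporting chunks, ground
// the prompt in them, and have the model answer from that context only.
// SearchIndex and LlmClient are hypothetical stand-ins, not real SDK types.
public class OrchestratorSketch {

    public interface SearchIndex {
        // Vector-similarity search over the document index.
        List<String> topChunks(String query, int k);
    }

    public interface LlmClient {
        // Completion call against the model endpoint.
        String generate(String prompt);
    }

    private final SearchIndex index;
    private final LlmClient llm;

    public OrchestratorSketch(SearchIndex index, LlmClient llm) {
        this.index = index;
        this.llm = llm;
    }

    // The core request path for a single user question.
    public String answer(String question) {
        List<String> context = index.topChunks(question, 3);
        String prompt = "Answer using only the context below.\n\nContext:\n"
                + String.join("\n---\n", context)
                + "\n\nQuestion: " + question;
        return llm.generate(prompt);
    }
}
```

Keeping the index and model behind interfaces is what makes the orchestrator testable in isolation and lets the backing services change without touching the request path.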

Outcomes

  • Enabled context-aware querying across thousands of technical documents
  • Reduced time-to-answer for engineering queries significantly
  • Built secure integration patterns between Azure Functions and Unity Catalog
  • Fully automated infrastructure provisioning via Bicep

Tech Stack

Java · Spring Boot · Databricks · Unity Catalog · Azure Functions · Microsoft AI Services · Bicep · RAG