Tag: RAG

  • Build Your Own and Free AI Health Assistant, Personalized Healthcare

    Build Your Own and Free AI Health Assistant, Personalized Healthcare

    Imagine having a 24/7 health companion that analyzes your medical history, tracks real-time vitals, and offers tailored advice—all while keeping your data private. This is the reality of AI health assistants: open-source tools merging artificial intelligence with healthcare to empower individuals and professionals alike. Let’s dive into how these systems work, their transformative benefits, and how you can build one using platforms like OpenHealthForAll.

    What Is an AI Health Assistant?

    An AI health assistant is a digital tool that leverages machine learning, natural language processing (NLP), and data analytics to provide personalized health insights. For example:

    • OpenHealth consolidates blood tests, wearable data, and family history into structured formats, enabling GPT-powered conversations about your health.
    • Aiden, another assistant, uses WhatsApp to deliver habit-building prompts based on anonymized data from Apple Health or Fitbit.

    These systems prioritize privacy, often running locally or using encryption to protect sensitive information.


    Why AI Health Assistants Matter: 5 Key Benefits

    1. Centralized Health Management
      Integrate wearables, lab reports, and EHRs into one platform. OpenHealth, for instance, parses blood tests and symptoms into actionable insights using LLMs like Claude or Gemini.
    2. Real-Time Anomaly Detection
      Projects like Kavya Prabahar’s virtual assistant use RNNs to flag abnormal heart rates or predict fractures from X-rays.
    3. Privacy-First Design
      Tools like Aiden anonymize data via Evervault and store records on blockchain (e.g., NearestDoctor’s smart contracts) to ensure compliance with regulations like HIPAA.
    4. Empathetic Patient Interaction
      Assistants like OpenHealth use emotion-aware AI to provide compassionate guidance, reducing anxiety for users managing chronic conditions.
    5. Cost-Effective Scalability
      Open-source frameworks like Google’s Open Health Stack (OHS) help developers build offline-capable solutions for low-resource regions, accelerating global healthcare access.

    Challenges and Ethical Considerations

    While promising, AI health assistants face hurdles:

    • Data Bias: Models trained on limited datasets may misdiagnose underrepresented groups.
    • Interoperability: Bridging EHR systems (e.g., HL7 FHIR) with AI requires standardization efforts like OHS.
    • Regulatory Compliance: Solutions must balance innovation with safety, as highlighted in Nature’s call for mandatory feedback loops in AI health tech.

    Build Your Own AI Health Assistant: A Developer’s Guide

    Step 1: Choose Your Stack

    • Data Parsing: Use OpenHealth’s Python-based parser (migrating to TypeScript soon) to structure inputs from wearables or lab reports.
    • AI Models: Integrate LLaMA or GPT-4 via APIs, or run Ollama locally for privacy (see the sketch below).
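
    As a rough, hedged illustration of the "run locally for privacy" option, the snippet below queries a local Ollama server over its default REST endpoint; the model name and prompt are placeholders, and this is not part of OpenHealth's codebase.

    # Minimal sketch: query a locally running Ollama server so health data never leaves the machine.
    # Assumes `ollama serve` is running and a model such as llama3 has already been pulled.
    import json
    import urllib.request

    payload = {
        "model": "llama3",  # placeholder model name
        "prompt": "Summarize these blood test results in plain language: ...",
        "stream": False,
    }
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",  # Ollama's default generate endpoint
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["response"])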

    Step 2: Prioritize Security

    • Encrypt user data with Supabase or Evervault.
    • Implement blockchain for audit trails, as seen in NearestDoctor’s medical records system.

    Step 3: Set Up and Run OpenHealth

    Clone the Repository:

    git clone https://github.com/OpenHealthForAll/open-health.git
    cd open-health

    Setup and Run:

    # Copy environment file
    cp .env.example .env
    
    # Add API keys to .env file:
    # UPSTAGE_API_KEY - For parsing (You can get $10 credit without card registration by signing up at https://www.upstage.ai)
    # OPENAI_API_KEY - For enhanced parsing capabilities
    
    # Start the application using Docker Compose
    docker compose --env-file .env up

    If you are an existing user updating to a new version, rebuild the images with:

    docker compose --env-file .env up --build
    Access OpenHealth: Open your browser and navigate to http://localhost:3000 to begin using OpenHealth.

    The Future of AI Health Assistants

    1. Decentralized AI Marketplaces: Platforms like Ocean Protocol could let users monetize health models securely.
    2. AI-Powered Diagnostics: Google’s Health AI Developer Foundations aim to simplify building diagnostic tools for conditions like diabetes.
    3. Global Accessibility: Initiatives like OHS workshops in Kenya and India are democratizing AI health tech.

    Your Next Step

    • Contribute to OpenHealth’s GitHub repo to enhance its multilingual support.
  • How to Install and Run Virtuoso-Medium-v2 Locally: A Step-by-Step Guide

    How to Install and Run Virtuoso-Medium-v2 Locally: A Step-by-Step Guide

    Virtuoso-Medium-v2 is here. Are you ready to harness the power of Virtuoso-Medium-v2, the next-generation 32-billion-parameter language model? Whether you’re building advanced chatbots, automating workflows, or diving into research simulations, this guide will walk you through installing and running Virtuoso-Medium-v2 on your local machine. Let’s get started!

    Why Choose Virtuoso-Medium-v2?

    Before we dive into the installation process, let’s briefly understand why Virtuoso-Medium-v2 stands out:

    • Distilled from Deepseek-v3: With over 5 billion tokens’ worth of logits, it delivers unparalleled performance in technical queries, code generation, and mathematical problem-solving.
    • Cross-Architecture Compatibility: Thanks to “tokenizer surgery,” it integrates seamlessly with Qwen and Deepseek tokenizers.
    • Apache-2.0 License: Use it freely for commercial or non-commercial projects.

    Now that you know its capabilities, let’s set it up locally.

    Prerequisites

    Before installing Virtuoso-Medium-v2, ensure your system meets the following requirements:

    1. Hardware :
      • GPU with at least 24GB VRAM (recommended for optimal performance).
      • Sufficient disk space (~50GB for model files).
    2. Software :
      • Python 3.8 or higher.
      • PyTorch installed (pip install torch).
      • Hugging Face transformers library (pip install transformers).

    Step 1: Download the Model

    The first step is to download the Virtuoso-Medium-v2 model from Hugging Face. Open your terminal and run the following commands:

    # Install the necessary libraries (run in your shell)
    pip install transformers torch
    
    # Then, in Python, download the model and tokenizer from Hugging Face
    from transformers import AutoTokenizer, AutoModelForCausalLM
    
    model_name = "arcee-ai/Virtuoso-Medium-v2"
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)

    This will fetch the model and tokenizer directly from Hugging Face.


    Step 2: Prepare Your Environment

    Ensure your environment is configured correctly:
    1. Set up a virtual environment to avoid dependency conflicts:

    python -m venv virtuoso-env
    source virtuoso-env/bin/activate  # On Windows: virtuoso-env\Scripts\activate

    2. Install additional dependencies if needed:

    pip install accelerate

    Step 3: Run the Model

    Once the model is downloaded, you can test it with a simple prompt. Here’s an example script:

    from transformers import AutoTokenizer, AutoModelForCausalLM
    
    # Load the model and tokenizer
    model_name = "arcee-ai/Virtuoso-Medium-v2"
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    
    # Define your input prompt
    prompt = "Explain the concept of quantum entanglement in simple terms."
    inputs = tokenizer(prompt, return_tensors="pt")
    
    # Generate output
    outputs = model.generate(**inputs, max_new_tokens=150)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))

    Run the script, and you’ll see the model generate a concise explanation of quantum entanglement!

    Step 4: Optimize Performance

    To maximize performance:

    • Use quantization techniques to reduce memory usage.
    • Enable GPU acceleration by setting device_map="auto" during model loading:

    model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")
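
    If you want to go further with quantization, here is a hedged sketch of 4-bit loading through transformers' BitsAndBytesConfig; it assumes the bitsandbytes and accelerate packages are installed and is an illustration rather than an official recipe for this model.

    # Hedged sketch: load Virtuoso-Medium-v2 in 4-bit to reduce VRAM usage.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

    model_name = "arcee-ai/Virtuoso-Medium-v2"
    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_compute_dtype=torch.float16,  # compute in fp16 while weights stay 4-bit
    )
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(
        model_name,
        quantization_config=bnb_config,
        device_map="auto",  # spread layers across available devices
    )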

    Troubleshooting Tips

    • Out of Memory Errors : Reduce the max_new_tokens parameter or use quantized versions of the model.
    • Slow Inference : Ensure your GPU drivers are updated and CUDA is properly configured.

    With Virtuoso-Medium-v2 installed locally, you’re now equipped to build cutting-edge AI applications. Whether you’re developing enterprise tools or exploring STEM education, this model’s advanced reasoning capabilities will elevate your projects.

    Ready to take the next step? Experiment with Virtuoso-Medium-v2 today and share your experiences with the community! For more details, visit the official Hugging Face repository.

  • Build Local RAG with DeepSeek models using LangChain

    Build Local RAG with DeepSeek models using LangChain

    Could DeepSeek be a game-changer in the AI landscape? There’s a buzz in the tech world about DeepSeek outperforming models like ChatGPT. With its DeepSeek-V3 boasting 671 billion parameters and a development cost of just $5.6 million, it’s definitely turning heads. Interestingly, Sam Altman himself has acknowledged some challenges with ChatGPT, whose Pro tier is priced at $200 per month, while DeepSeek remains free. This makes the integration of DeepSeek with LangChain even more exciting, opening up a world of possibilities for building sophisticated AI-powered solutions without breaking the bank. Let’s explore how you can get started.

    What is DeepSeek?

    DeepSeek provides a range of open-source AI models that can be deployed locally or through various inference providers. These models are known for their high performance and versatility, making them a valuable asset for any AI project. You can utilize these models for a variety of tasks such as text generation, translation, and more.

    Why use LangChain with DeepSeek?

    LangChain simplifies the development of applications using large language models (LLMs), and using it with DeepSeek provides the following benefits:

    • Simplified Workflow: LangChain abstracts away complexities, making it easier to interact with DeepSeek models.
    • Chaining Capabilities: Chain operations like prompting and translation to create sophisticated AI applications.
    • Seamless Integration: A consistent interface for various LLMs, including DeepSeek, for smooth transitions and experiments.

    Setting Up DeepSeek with LangChain

    To begin, create a DeepSeek account and obtain an API key:

    1. Get an API Key: Visit DeepSeek’s API Key page to sign up and generate your API key.
    2. Set Environment Variables: Set the DEEPSEEK_API_KEY environment variable.
    import getpass
    import os
    
    if not os.getenv("DEEPSEEK_API_KEY"):
        os.environ["DEEPSEEK_API_KEY"] = getpass.getpass("Enter your DeepSeek API key: ")
    
    # Optional LangSmith tracing
    # os.environ["LANGSMITH_TRACING"] = "true"
    # os.environ["LANGSMITH_API_KEY"] = getpass.getpass("Enter your LangSmith API key: ")

    3. Install the Integration Package: Install the langchain-deepseek-official package.

    pip install -qU langchain-deepseek-official

    Instantiating and Using ChatDeepSeek

    Instantiate ChatDeepSeek model:

    from langchain_deepseek import ChatDeepSeek
    
    llm = ChatDeepSeek(
        model="deepseek-chat",
        temperature=0,
        max_tokens=None,
        timeout=None,
        max_retries=2,
        # other params...
    )

    Invoke the model:

    messages = [
        (
            "system",
            "You are a helpful assistant that translates English to French. Translate the user sentence.",
        ),
        ("human", "I love programming."),
    ]
    ai_msg = llm.invoke(messages)
    print(ai_msg.content)

    This will output the translated sentence in French.

    Chaining DeepSeek with LangChain Prompts

    Use ChatPromptTemplate to create a translation chain:

    from langchain_core.prompts import ChatPromptTemplate
    
    prompt = ChatPromptTemplate(
        [
            (
                "system",
                "You are a helpful assistant that translates {input_language} to {output_language}.",
            ),
            ("human", "{input}"),
        ]
    )
    
    chain = prompt | llm
    result = chain.invoke(
        {
            "input_language": "English",
            "output_language": "German",
            "input": "I love programming.",
        }
    )
    print(result.content)

    This demonstrates how easily you can configure language translation using prompt templates and DeepSeek models.
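
    Since this post is about building local RAG with DeepSeek, here is a minimal, hedged sketch that grounds ChatDeepSeek in retrieved context. It assumes the extra packages langchain-huggingface and sentence-transformers are installed; the documents, embedding model, and prompt are illustrative placeholders.

    from langchain_core.documents import Document
    from langchain_core.prompts import ChatPromptTemplate
    from langchain_core.vectorstores import InMemoryVectorStore
    from langchain_deepseek import ChatDeepSeek
    from langchain_huggingface import HuggingFaceEmbeddings

    # Index a tiny illustrative corpus (replace with your own documents).
    docs = [
        Document(page_content="LangChain chains prompts, models, and retrievers together."),
        Document(page_content="DeepSeek provides open-source LLMs that can also be called via an API."),
    ]
    embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
    retriever = InMemoryVectorStore.from_documents(docs, embeddings).as_retriever(search_kwargs={"k": 2})

    prompt = ChatPromptTemplate.from_messages([
        ("system", "Answer using only the provided context:\n\n{context}"),
        ("human", "{question}"),
    ])
    llm = ChatDeepSeek(model="deepseek-chat", temperature=0)

    question = "What does LangChain do?"
    context = "\n\n".join(doc.page_content for doc in retriever.invoke(question))
    answer = (prompt | llm).invoke({"context": context, "question": question})
    print(answer.content)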

    Integrating DeepSeek with LangChain allows you to create advanced AI applications with ease and efficiency, and offers a potential alternative to other, more expensive models on the market. By following this guide, you can set up, use, and chain DeepSeek models to perform various tasks. Explore the API Reference for more detailed information.

  • AI Agents by Google: Revolutionizing AI with Reasoning and Tools

    AI Agents by Google: Revolutionizing AI with Reasoning and Tools

    Artificial Intelligence is rapidly changing, and AI Agents by Google are at the forefront. These aren’t typical AI models. Instead, they are complex systems. They can reason, make logical decisions, and interact with the world using tools. This article explores what makes them special. Furthermore, it will examine how they are changing AI applications.

    Understanding AI Agents

    Essentially, AI Agents by Google are applications. Their aim is to achieve goals. They do this by observing their environment. They also use available tools. Unlike basic AI, agents are autonomous. They act independently. Moreover, they proactively make decisions. This helps them meet objectives, even without direct instructions. This is possible through their cognitive architecture, which includes three key parts (a toy sketch of the cycle follows the list):

    • The Model: This is the core language model. It is the central decision-maker. It uses reasoning frameworks like ReAct. Also, it uses Chain-of-Thought and Tree-of-Thoughts.
    • The Tools: These are crucial for external interaction. They allow the agent to connect to real-time data and services. For example, APIs can be used. They bridge the gap between internal knowledge and outside resources.
    • The Orchestration Layer: This layer manages the agent’s process. It determines how it takes in data. Then, it reasons internally. Finally, it informs the next action or decision in a continuous cycle.
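
    To make that cycle concrete, here is a toy sketch of a single observe-reason-act pass. Every name in it is invented for illustration; it is not Google's agent API, and a real agent would let the LLM choose the tool and its arguments.

    # Toy illustration of the model / tools / orchestration split (hypothetical names).
    def get_weather(city: str) -> str:
        # Stand-in for a real external call, i.e. what an Extension or Function would wrap.
        return f"22°C and sunny in {city}"

    TOOLS = {"get_weather": get_weather}

    def orchestrate(user_query: str) -> str:
        history = [f"User: {user_query}"]
        # Model step: a real agent would have the LLM reason (e.g. ReAct) and pick the tool.
        tool_name, args = "get_weather", {"city": "Paris"}
        observation = TOOLS[tool_name](**args)         # act: call the external tool
        history.append(f"Observation: {observation}")  # observe: feed the result back
        return f"{tool_name} reports: {observation}"   # reason: compose the final answer

    print(orchestrate("What's the weather in Paris?"))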

    AI Agents vs. Traditional AI Models

    Traditional AI models have limitations. They are restricted by training data. They perform single inferences. In contrast, AI Agents by Google overcome these limits. They do this through several capabilities:

    • External System Access: They connect to external systems via tools. Thus, they interact with real-time data.
    • Session History Management: Agents track and manage session history. This enables multi-turn interactions with context.
    • Native Tool Implementation: They include built-in tools. This allows seamless execution of external tasks.
    • Cognitive Architectures: They utilize advanced frameworks. For instance, they use CoT and ReAct for reasoning.

    The Role of Tools: Extensions, Functions, and Data Stores

    AI Agents by Google interact with the outside world through three key tools:

    Extensions

    These tools bridge agents and APIs. Through worked examples, they teach agents how to call APIs to carry out actions. For instance, they can use the Google Flights API. Extensions run on the agent side. They are designed to make integrations scalable and robust.

    Functions

    Functions are self-contained code modules. Models use them for specific tasks. Unlike Extensions, these run on the client side. They don’t directly interact with APIs. This gives developers greater control over data flow and system execution.
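
    For illustration, a function is typically described to the model as a JSON-style declaration like the hypothetical one below; the model returns structured arguments, and client-side code decides when and how to execute the real call.

    # Hypothetical function declaration in the common JSON-schema style used for
    # LLM function calling; the name and fields are invented for this example.
    list_flights = {
        "name": "list_flights",
        "description": "List available flights between two cities on a given date.",
        "parameters": {
            "type": "object",
            "properties": {
                "origin": {"type": "string"},
                "destination": {"type": "string"},
                "date": {"type": "string", "description": "YYYY-MM-DD"},
            },
            "required": ["origin", "destination", "date"],
        },
    }
    # The model fills in the arguments; the client performs the actual API request.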

    Data Stores

    Data Stores enable agents to access diverse data. This includes structured and unstructured data from various sources. For instance, they can access websites, PDFs, and databases. This dynamic interaction with current data enhances the model’s knowledge. Furthermore, it aids applications using Retrieval Augmented Generation (RAG).

    Improving Agent Performance

    To get the best results, AI Agents need targeted learning. These methods include:

    • In-context learning: Examples provided during inference let the model learn “on-the-fly” (see the prompt sketch after this list).
    • Retrieval-based in-context learning: External memory enhances this process. It provides more relevant examples.
    • Fine-tuning based learning: Pre-training the model is key. This improves its understanding of tools. Moreover, it improves its ability to know when to use them.
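
    As a small illustration of in-context learning, the snippet below packs a couple of worked examples in front of a new query so the model can infer the expected behavior on the fly; the example pairs and tool names are invented.

    # Invented few-shot examples; retrieval-based in-context learning would select
    # these dynamically from an external memory instead of hard-coding them.
    examples = [
        ("Book a table for two at 7pm", "call reservations.create(party_size=2, time='19:00')"),
        ("What's on my calendar today?", "call calendar.list(day='today')"),
    ]
    query = "Find a flight to Tokyo next Friday"
    prompt = "\n".join(f"User: {u}\nAgent: {a}" for u, a in examples) + f"\nUser: {query}\nAgent:"
    print(prompt)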

    Getting Started with AI Agents

    If you’re interested in building with AI Agents, consider using libraries like LangChain. Also, you might use platforms such as Google’s Vertex AI. LangChain helps users ‘chain’ sequences of logic and tool calls. Meanwhile, Vertex AI offers a managed environment. It supports building and deploying production-ready agents.

    AI Agents by Google are transforming AI. They go beyond traditional limits. They can reason, use tools, and interact with the external world. Therefore, they are a major step forward. They create more flexible and capable AI systems. As these agents evolve, their ability to solve complex problems will also grow. In addition, their capacity to drive real-world value will expand.

    Read more in Google’s AI Agents whitepaper.

  • Enterprise Agentic RAG Template by Dell AI Factory with NVIDIA

    In today’s data-driven world, organizations are constantly seeking innovative solutions to extract value from their vast troves of information. The convergence of powerful hardware, advanced AI frameworks, and efficient data management systems is critical for success. This post will delve into a cutting-edge solution: Enterprise Agentic RAG on Dell AI Factory with NVIDIA and Elasticsearch vector database. This architecture provides a scalable, compliant, and high-performance platform for complex data retrieval and decision-making, with particular relevance to healthcare and other data-intensive industries.

    Understanding the Core Components

    Before diving into the specifics, let’s define the key components of this powerful solution:

    • Agentic RAG: Agentic Retrieval-Augmented Generation (RAG) is an advanced AI framework that combines the power of Large Language Models (LLMs) with the precision of dynamic data retrieval. Unlike traditional LLMs that rely solely on pre-trained knowledge, Agentic RAG uses intelligent agents to connect with various data sources, ensuring contextually relevant, up-to-date, and accurate responses. It goes beyond simple retrieval to create a dynamic workflow for decision-making.
    • Dell AI Factory with NVIDIA: This refers to a robust hardware and software infrastructure provided by Dell Technologies in collaboration with NVIDIA. It leverages NVIDIA GPUs, Dell PowerEdge servers, and NVIDIA networking technologies to provide an efficient platform for AI training, inference, and deployment. This partnership brings together industry-leading hardware with AI microservices and libraries, ensuring optimal performance and reliability.
    • Elasticsearch Vector Database: Elasticsearch is a powerful, scalable search and analytics engine. When configured as a vector database, it stores vector embeddings of data (e.g., text, images) and enables efficient similarity searches. This is essential for the RAG process, where relevant information needs to be retrieved quickly from large datasets.

    The Synergy of Enterprise Agentic RAG, Dell AI Factory, and Elasticsearch

    The integration of Agentic RAG on Dell AI Factory with NVIDIA and Elasticsearch vector database creates a powerful ecosystem for handling complex data challenges. Here’s how these components work together:

    1. Data Ingestion: The process begins with the ingestion of structured and unstructured data from various sources. This includes documents, PDFs, text files, and structured databases. Dell AI Factory leverages specialized tools like the NVIDIA Multimodal PDF Extraction Tool to convert unstructured data (e.g., images and charts in PDFs) into searchable formats.
    2. Data Storage and Indexing: The extracted data is then transformed into vector embeddings using NVIDIA NeMo Embedding NIMs. These embeddings are stored in the Elasticsearch vector database, which allows for efficient semantic searches. Elasticsearch’s fast search capabilities ensure that relevant data can be accessed quickly.
    3. Data Retrieval: Upon receiving a query, the system utilizes the NeMo Retriever NIM to fetch the most pertinent information from the Elasticsearch vector database. The NVIDIA NeMo Reranking NIM refines these results to ensure that the highest quality, contextually relevant content is delivered.
    4. Response Generation: The LLM agent, powered by NVIDIA’s Llama-3.1-8B-instruct NIM or similar LLMs, analyzes the retrieved data to generate a contextually aware and accurate response. The entire process is orchestrated by LangGraph, which ensures smooth data flow through the system (a minimal orchestration sketch follows this list).
    5. Validation: Before providing the final answer, a hallucination check module ensures that the response is grounded in the retrieved data and avoids generating false or unsupported claims. This step is particularly crucial in sensitive fields like healthcare.
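
    As a rough sketch of that orchestration, the LangGraph snippet below wires stub retrieve, rerank, generate, and validate nodes into a linear graph. The node bodies are placeholders; in the actual design each would call the corresponding NVIDIA NIM microservice.

    from typing import List, TypedDict
    from langgraph.graph import END, StateGraph

    class RAGState(TypedDict):
        question: str
        docs: List[str]
        answer: str

    # Placeholder nodes standing in for the NeMo Retriever, Reranking, LLM,
    # and hallucination-check services described above.
    def retrieve(state: RAGState) -> dict:
        return {"docs": ["document about " + state["question"]]}

    def rerank(state: RAGState) -> dict:
        return {"docs": state["docs"][:3]}

    def generate(state: RAGState) -> dict:
        return {"answer": f"Answer grounded in {len(state['docs'])} document(s)."}

    def validate(state: RAGState) -> dict:
        return {}  # e.g. flag the answer if it is not supported by state["docs"]

    graph = StateGraph(RAGState)
    for name, node in [("retrieve", retrieve), ("rerank", rerank), ("generate", generate), ("validate", validate)]:
        graph.add_node(name, node)
    graph.set_entry_point("retrieve")
    graph.add_edge("retrieve", "rerank")
    graph.add_edge("rerank", "generate")
    graph.add_edge("generate", "validate")
    graph.add_edge("validate", END)

    app = graph.compile()
    print(app.invoke({"question": "What is agentic RAG?", "docs": [], "answer": ""}))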

    Benefits of Agentic RAG on Dell AI Factory with NVIDIA and Elasticsearch

    This powerful combination offers numerous benefits across various industries:

    • Scalability: The Dell AI Factory’s robust infrastructure, coupled with the scalability of Elasticsearch, ensures that the solution can handle massive amounts of data and user requests without performance bottlenecks.
    • Compliance: The solution is designed to adhere to stringent security and compliance requirements, particularly relevant in healthcare where HIPAA compliance is essential.
    • Real-Time Decision-Making: Through efficient data retrieval and analysis, professionals can access timely, accurate, and context-aware information.
    • Enhanced Accuracy: The combination of a strong retrieval system and a powerful LLM ensures that the responses are not only contextually relevant but also highly accurate and reliable.
    • Flexibility: The modular design of the Agentic RAG framework, with its use of LangGraph, makes it adaptable to diverse use cases, whether for chatbots, data analysis, or other AI-powered applications.
    • Comprehensive Data Support: This solution effectively manages a wide range of data, including both structured and unstructured formats.
    • Improved Efficiency: By automating the data retrieval and analysis process, the framework reduces the need for manual data sifting and improves overall productivity.

    Real-World Use Cases for Enterprise Agentic RAG

    This solution can transform workflows in many different industries and has particular relevance for use cases in healthcare settings:

    • Healthcare:
      • Providing clinicians with fast access to patient data, medical protocols, and research findings to support better decision-making.
      • Enhancing patient interactions through AI-driven chatbots that provide accurate, secure information.
      • Streamlining processes related to diagnosis, treatment planning, and drug discovery.
    • Finance:
      • Enabling rapid access to financial data, market analysis, and regulations for better investment decisions.
      • Automating processes related to fraud detection, risk analysis, and regulatory compliance.
    • Legal:
      • Providing legal professionals with quick access to case laws, contracts, and legal documents.
      • Supporting faster research and improved decision-making in legal proceedings.
    • Manufacturing:
      • Providing access to operational data, maintenance logs, and training manuals to improve efficiency.
      • Improving workflows related to predictive maintenance, quality control, and production management.

    Getting Started with Enterprise Agentic RAG

    The Dell AI Factory with NVIDIA, when combined with Elasticsearch, is designed for enterprises that require scalability and reliability. To implement this solution:

    1. Leverage Dell PowerEdge servers with NVIDIA GPUs: These powerful hardware components provide the computational resources needed for real-time processing.
    2. Set up Elasticsearch Vector Database: This stores and indexes your data for efficient retrieval.
    3. Install NVIDIA NeMo NIMs: Integrate NVIDIA’s NeMo Retriever, Embedding, and Reranking NIMs for optimal data retrieval and processing.
    4. Utilize the Llama-3.1-8B-instruct LLM: Utilize NVIDIA’s optimized LLM for high-performance response generation.
    5. Orchestrate workflows with LangGraph: Connect all components with LangGraph to manage the end-to-end process.

    Enterprise Agentic RAG on Dell AI Factory with NVIDIA and Elasticsearch vector database is not just an integration; it’s a paradigm shift in how we approach complex data challenges. By combining the precision of enterprise-grade hardware, the power of NVIDIA AI libraries, and the efficiency of Elasticsearch, this framework offers a robust and scalable solution for various industries. This is especially true in fields such as healthcare where reliable data access can significantly impact outcomes. This solution empowers organizations to make informed decisions, optimize workflows, and improve efficiency, setting a new standard for AI-driven data management and decision-making.

    Read More by Dell: https://infohub.delltechnologies.com/en-us/t/agentic-rag-on-dell-ai-factory-with-nvidia/

    Start Learning Enterprise Agentic RAG Template by Dell

  • NVIDIA NV Ingest for Complex Unstructured PDFs, Enterprise Documents

    What is NVIDIA NV Ingest?

    NVIDIA NV Ingest is not a static pipeline; it’s a dynamic microservice designed for processing various document formats, including PDF, DOCX, and PPTX. It uses NVIDIA NIM microservices to identify, extract, and contextualize information, such as text, tables, charts, and images. The core aim is to transform unstructured data into structured metadata and text, facilitating its use in downstream applications.

    At its core, NVIDIA NV Ingest is a performance-oriented, scalable microservice designed for document content and metadata extraction. Leveraging specialized NVIDIA NIM microservices, this tool goes beyond simple text extraction. It intelligently identifies, contextualizes, and extracts text, tables, charts, and images from a variety of document formats, including PDFs, Word, and PowerPoint files. This enables a streamlined workflow for feeding data into downstream generative AI applications, such as retrieval-augmented generation (RAG) systems.

    NVIDIA Ingest works by accepting a JSON job description, outlining the document payload and the desired ingestion tasks. The result is a JSON dictionary containing a wealth of metadata about the extracted objects and associated processing details. It’s crucial to note that NVIDIA Ingest doesn’t simply act as a wrapper around existing parsing libraries; rather, it’s a flexible and adaptable system that is designed to manage complex document processing workflows.

    Key Capabilities

    Here’s what NVIDIA NV Ingest is capable of:

    • Multi-Format Support: Handles a variety of documents, including PDF, DOCX, PPTX, and image formats.
    • Versatile Extraction Methods: Offers multiple extraction methods per document type, balancing throughput and accuracy. For PDFs, you can leverage options like pdfium, Unstructured.io, and Adobe Content Extraction Services.
    • Advanced Pre- and Post-Processing: Supports text splitting, chunking, filtering, embedding generation, and image offloading.
    • Parallel Processing: Enables parallel document splitting, content classification (tables, charts, images, text), extraction, and contextualization via Optical Character Recognition (OCR).
    • Vector Database Integration: NVIDIA Ingest also manages the computation of embeddings and can optionally store them in a vector database such as Milvus.

    Why NVIDIA NV Ingest?

    Unlike static pipelines, NVIDIA Ingest provides a flexible framework. It is not a wrapper for any specific parsing library. Instead, it orchestrates the document processing workflow based on your job description.

    The need to parse hundreds of thousands of complex, messy unstructured PDFs is often a major hurdle. NVIDIA Ingest is designed for exactly this scenario, providing a robust and scalable system for large-scale data processing. It breaks down complex PDFs into discrete content, contextualizes it through OCR, and outputs a structured JSON schema which is very easy to use for AI applications.

    Getting Started with NVIDIA NV Ingest

    To get started, you’ll need:

    • Hardware: NVIDIA GPUs (H100 or A100 with at least 80GB of memory, and a minimum of 2 GPUs)

    Software

    • Operating System: Linux (Ubuntu 22.04 or later is recommended)
    • Docker: For containerizing and managing microservices
    • Docker Compose: For multi-container application deployment
    • CUDA Toolkit: (NVIDIA Driver >= 535, CUDA >= 12.2)
    • NVIDIA Container Toolkit: For running NVIDIA GPU-accelerated containers
    • NVIDIA API Key: Required for accessing pre-built containers from NVIDIA NGC. To request early access to NVIDIA Ingest, visit https://developer.nvidia.com/nemo-microservices-early-access/join

    Step-by-Step Setup and Usage

    1. Starting NVIDIA NIM Microservices Containers

    1. Clone the repository:
      git clone https://github.com/nvidia/nv-ingest
      cd nv-ingest
    2. Log in to NVIDIA GPU Cloud (NGC):
      docker login nvcr.io
      # Username: $oauthtoken
      # Password: <Your API Key>
    3. Create a .env file: 
      Add your NGC API key and any other required paths:
      NGC_API_KEY=your_api_key
      NVIDIA_BUILD_API_KEY=optional_build_api_key
    4. Start the containers:
      sudo nvidia-ctk runtime configure --runtime=docker --set-as-default
      docker compose up

    Note: NIM containers might take 10-15 minutes to fully load models on first startup.

    2. Installing Python Client Dependencies

    1. Create a Python environment (optional but recommended):
      conda create --name nv-ingest-dev --file ./conda/environments/nv_ingest_environment.yml
      conda activate nv-ingest-dev
    2. Install the client:
      cd client
      pip install .

    If you are not using conda, you can install the client dependencies directly:

    pip install -r requirements.txt
    pip install .

    Note: You can perform these steps from your host machine or within the nv-ingest container.

    3. Submitting Ingestion Jobs

    Python Client Example:

    import logging, time
    
    from nv_ingest_client.client import NvIngestClient
    from nv_ingest_client.primitives import JobSpec
    from nv_ingest_client.primitives.tasks import ExtractTask
    from nv_ingest_client.util.file_processing.extract import extract_file_content
    
    logger = logging.getLogger("nv_ingest_client")
    
    file_name = "data/multimodal_test.pdf"
    file_content, file_type = extract_file_content(file_name)
    
    job_spec = JobSpec(
     document_type=file_type,
     payload=file_content,
     source_id=file_name,
     source_name=file_name,
     extended_options={
         "tracing_options": {
             "trace": True,
             "ts_send": time.time_ns()
         }
     }
    )
    
    extract_task = ExtractTask(
     document_type=file_type,
     extract_text=True,
     extract_images=True,
     extract_tables=True
    )
    
    job_spec.add_task(extract_task)
    
    client = NvIngestClient(
     message_client_hostname="localhost",  # Host where nv-ingest-ms-runtime is running
     message_client_port=7670  # REST port, defaults to 7670
    )
    
    job_id = client.add_job(job_spec)
    client.submit_job(job_id, "morpheus_task_queue")
    result = client.fetch_job_result(job_id, timeout=60)
    print(f"Got {len(result)} results")

    Command Line (nv-ingest-cli) Example:

    nv-ingest-cli \
        --doc ./data/multimodal_test.pdf \
        --output_directory ./processed_docs \
        --task='extract:{"document_type": "pdf", "extract_method": "pdfium", "extract_tables": "true", "extract_images": "true"}' \
        --client_host=localhost \
        --client_port=7670

    Note: Make sure to adjust the file path, client_host, and client_port to match your setup.

    Note: extract_tables controls both table and chart extraction; you can disable chart extraction by setting the extract_charts parameter to false.

    4. Inspecting Results

    After ingestion, results can be found in the processed_docs directory, under the text, image, and structured subdirectories. Each result has a corresponding JSON metadata file. You can inspect the extracted images using the provided image viewer script:

    1. First, install tkinter by running the following commands depending on your OS.

      # For Ubuntu/Debian:
      sudo apt-get update
      sudo apt-get install python3-tk

      # For Fedora/RHEL:
      sudo dnf install python3-tkinter

      # For macOS:
      brew install python-tk
    2. Run image viewer:
      python src/util/image_viewer.py --file_path ./processed_docs/image/multimodal_test.pdf.metadata.json

    Understanding the Output

    The output of NVIDIA NV Ingest is a structured JSON document, which contains:

    • Extracted Text: Text content from the document.
    • Extracted Tables: Table data in structured format.
    • Extracted Charts: Information about charts present in the document.
    • Extracted Images: Metadata for extracted images.
    • Processing Annotations: Timing and tracing data for analysis.

    This output can be easily integrated into various systems, including vector databases for semantic search and LLM applications.
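
    As a hedged sketch of that integration, the snippet below runs a semantic search over already-extracted text; the chunks list stands in for content pulled out of the NV Ingest JSON results, and the embedding model is an arbitrary choice rather than something NV Ingest requires.

    # Minimal semantic-search sketch over extracted chunks (illustrative data and model).
    from sentence_transformers import SentenceTransformer, util

    chunks = [
        "Quarterly revenue grew 12% year over year.",
        "The chart compares GPU utilization across three workloads.",
    ]
    model = SentenceTransformer("all-MiniLM-L6-v2")
    hits = util.semantic_search(
        model.encode("How did revenue change?", convert_to_tensor=True),
        model.encode(chunks, convert_to_tensor=True),
        top_k=1,
    )[0]
    print(chunks[hits[0]["corpus_id"]], round(hits[0]["score"], 3))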

    NVIDIA NV Ingest Use Cases

    NVIDIA NV Ingest is ideal for various applications, including:

    • Retrieval-Augmented Generation (RAG): Enhance LLMs with accurate and contextualized data from your documents.
    • Enterprise Search: Improve search capabilities by indexing text and metadata from large document repositories.
    • Data Analysis: Unlock hidden patterns and insights within unstructured data.
    • Automated Document Processing: Streamline workflows by automating the extraction process from unstructured documents.

    Troubleshooting

    Common Issues

    • NIM Containers Not Starting: Check resource availability (GPU memory, CPU), verify NGC login details, and ensure the correct CUDA driver is installed.
    • Python Client Errors: Verify dependencies are installed correctly and the client is configured to connect with the running service.
    • Job Failures: Examine the logs for detailed error messages, check the input document for errors, and verify task configuration.

    Tips

    • Verbose Logging: Enable verbose logging by setting NIM_TRITON_LOG_VERBOSE=1 in docker-compose.yaml to help diagnose issues.
    • Container Logs: Use docker logs to inspect logs for each container to identify problems.
    • GPU Utilization: Use nvidia-smi to monitor GPU activity. If the nvidia-smi command takes more than a minute to return, there is a high chance that the GPU is busy setting up the models.

  • Cache-Augmented Generation (CAG): Superior Alternative to RAG

    In the rapidly evolving world of AI and Large Language Models (LLMs), the quest for efficient and accurate information retrieval is paramount. While Retrieval-Augmented Generation (RAG) has become a popular technique, a new paradigm called Cache-Augmented Generation (CAG) is emerging as a more streamlined and effective solution. This post will delve into Cache-Augmented Generation (CAG), comparing it to RAG, and highlight when CAG is the better choice for enhanced performance.

    What is Cache-Augmented Generation (CAG)?

    Cache-Augmented Generation (CAG) is a method that leverages the power of large language models with extended context windows to bypass the need for real-time retrieval systems, which are required by the RAG approach. Unlike RAG, which retrieves relevant information from external sources during the inference phase, CAG preloads all relevant resources into the LLM’s extended context. This includes pre-computing and caching the model’s key-value (KV) pairs.

    Here are the key steps involved in CAG:

    1. External Knowledge Preloading: A curated collection of documents or relevant knowledge is processed and formatted to fit within the LLM’s extended context window. The LLM then converts this data into a precomputed KV cache.
    2. Inference: The user’s query is loaded alongside the precomputed KV cache. The LLM uses this cached context to generate responses without needing any retrieval at this step.
    3. Cache Reset: The KV cache is managed to allow for rapid re-initialization, ensuring sustained speed and responsiveness across multiple inference sessions.

    Essentially, CAG trades real-time retrieval for pre-computed knowledge, leading to significant performance gains.
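
    A rough sketch of this idea with Hugging Face transformers is shown below: the knowledge is run through the model once to build a KV cache, and each query then reuses a copy of that cache instead of re-reading the documents. The model name and knowledge text are placeholders, and the exact cache-reuse API can vary between transformers versions.

    # Hedged sketch of KV-cache preloading (CAG-style); model and text are placeholders.
    import copy
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer, DynamicCache

    model_name = "Qwen/Qwen2.5-0.5B-Instruct"  # any causal LM with a long enough context
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)

    knowledge = "Company policy: refunds are accepted within 30 days of purchase.\n"
    with torch.no_grad():  # 1) preloading: compute the KV cache for the knowledge once
        cache = model(**tok(knowledge, return_tensors="pt"),
                      past_key_values=DynamicCache(), use_cache=True).past_key_values

    # 2) inference: append the query and reuse a copy of the precomputed cache
    query = knowledge + "Q: How long do customers have to request a refund?\nA:"
    inputs = tok(query, return_tensors="pt")
    out = model.generate(**inputs, past_key_values=copy.deepcopy(cache), max_new_tokens=20)
    print(tok.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))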

    CAG vs RAG: A Direct Comparison

    Understanding the difference between CAG vs RAG is crucial for determining the most appropriate approach for your needs. Let’s look at a direct comparison:

    | Feature | RAG (Retrieval-Augmented Generation) | CAG (Cache-Augmented Generation) |
    | --- | --- | --- |
    | Retrieval | Performs real-time retrieval of information during inference. | Preloads all relevant knowledge into the model’s context beforehand. |
    | Latency | Introduces retrieval latency, potentially slowing down response times. | Eliminates retrieval latency, providing much faster response times. |
    | Errors | Subject to potential errors in document selection and ranking. | Minimizes retrieval errors by ensuring a holistic context is present. |
    | Complexity | Integrates retrieval and generation components, which increases system complexity. | Simplifies the architecture by removing the need for separate retrieval components. |
    | Context | Context is dynamically added with each new query. | A complete and unified context from preloaded data. |
    | Performance | Performance can suffer with retrieval failures. | Maintains consistent, high-quality responses by leveraging the whole context. |
    | Memory Usage | Uses additional memory and resources for external retrieval. | Uses a preloaded KV cache for efficient resource management. |
    | Efficiency | Can be inefficient and require resource-heavy real-time retrieval. | Faster and more efficient due to the elimination of real-time retrieval. |

    Which is Better: CAG or RAG?

    The question of which is better, CAG or RAG, depends on the specific context and requirements. However, CAG offers significant advantages in certain scenarios, especially:

    • For limited knowledge base: When the relevant knowledge fits within the extended context window of the LLM, CAG is more effective.
    • When real-time performance is critical: By eliminating retrieval, CAG provides faster, more consistent response times.
    • When consistent and accurate information is required: CAG avoids the errors caused by real-time retrieval systems and ensures the LLM uses the complete dataset.
    • When streamlined architecture is essential: By combining knowledge and model in one approach, it simplifies the development process.

    When to Use CAG and When to Use RAG

    While CAG provides numerous benefits, RAG is still relevant in certain use cases. Here are general guidelines:

    Use CAG When:

    • The relevant knowledge base is relatively small and manageable.
    • You need fast and consistent responses without the latency of retrieval systems.
    • System simplification is a key requirement.
    • You want to avoid the errors associated with real-time retrieval.
    • Working with Large Language Models supporting long contexts

    Use RAG When:

    • The knowledge base is very large or constantly changing.
    • The required information varies greatly with each query.
    • You need to access real-time data from diverse or external sources.
    • The cost of retrieving information in real time is acceptable for your use case.

    Use Cases of Cache-Augmented Generation (CAG)

    CAG is particularly well-suited for the following use cases:

    • Specialized Domain Q&A: Answering questions based on specific domains, like legal, medical, or financial, where all relevant documentation can be preloaded.
    • Document Summarization: Summarizing lengthy documents by utilizing the complete document as preloaded knowledge.
    • Technical Documentation Access: Allowing users to quickly find information in product manuals and technical guidelines.
    • Internal Knowledge Base Access: Providing employees with quick access to corporate policies, guidelines, and procedures.
    • Chatbots and Virtual Assistants: For specific functions requiring reliable responses.
    • Research and Analysis: Where large datasets with known context are used.

    Cache-Augmented Generation (CAG) represents a significant advancement in how we leverage LLMs for knowledge-intensive tasks. By preloading all relevant information, CAG eliminates the issues associated with real-time retrieval, resulting in faster, more accurate, and more efficient AI systems. While RAG remains useful in certain circumstances, CAG presents a compelling alternative, particularly when dealing with manageable knowledge bases and when high-performance, accurate responses are needed. Make the move to CAG and experience the next evolution in AI-driven knowledge retrieval.

  • ECL vs RAG, What is ETL: AI Learning, Data, and Transformation

    ECL vs RAG, What is ETL: AI Learning, Data, and Transformation

    ECL vs RAG: A Deep Dive into Two Innovative AI Approaches

    In the world of advanced AI, particularly with large language models (LLMs), two innovative approaches stand out: the External Continual Learner (ECL) and Retrieval-Augmented Generation (RAG). While both aim to enhance the capabilities of AI models, they serve different purposes and use distinct mechanisms. Understanding the nuances of ECL vs RAG is essential for choosing the right method for your specific needs.


    What is an External Continual Learner (ECL)?

    An External Continual Learner (ECL) is a method designed to assist large language models (LLMs) in incremental learning without suffering from catastrophic forgetting. The ECL functions as an external module that intelligently selects relevant information for each new input, ensuring that the LLM can learn new tasks without losing its previously acquired knowledge.

    The core features of the ECL include:

    • Incremental Learning: The ability to learn continuously without forgetting past knowledge.
    • Tag Generation: Using the LLM to generate descriptive tags for input text.
    • Gaussian Class Representation: Representing each class with a statistical distribution of its tag embeddings.
    • Mahalanobis Distance Scoring: Selecting the most relevant classes for each input using distance calculations.

    The goal of the ECL is to streamline the in-context learning (ICL) process by reducing the number of relevant examples that need to be included in the prompt, addressing scalability issues.
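
    As a toy illustration of the Gaussian-class and Mahalanobis-distance idea (random vectors stand in for tag embeddings; this is not the paper's implementation):

    import numpy as np

    class_stats = {}  # class label -> (mean vector, inverse covariance of its tag embeddings)

    def update_class(label, tag_embeddings):
        mu = tag_embeddings.mean(axis=0)
        cov = np.cov(tag_embeddings, rowvar=False) + 1e-3 * np.eye(tag_embeddings.shape[1])
        class_stats[label] = (mu, np.linalg.inv(cov))  # only statistics are kept, not training instances

    def mahalanobis(x, mu, inv_cov):
        d = x - mu
        return float(np.sqrt(d @ inv_cov @ d))

    rng = np.random.default_rng(0)
    update_class("sports", rng.normal(0.0, 1.0, size=(50, 8)))
    update_class("finance", rng.normal(2.0, 1.0, size=(50, 8)))

    query = rng.normal(2.0, 1.0, size=8)  # embedding of the new input's tags
    scores = {c: mahalanobis(query, mu, ic) for c, (mu, ic) in class_stats.items()}
    print(min(scores, key=scores.get))  # closest class -> the one worth including in the ICL prompt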

    What is Retrieval-Augmented Generation (RAG)?

    Retrieval-Augmented Generation (RAG) is a framework that enhances the performance of large language models by providing them with external information during the generation process. Instead of relying solely on their pre-trained knowledge, RAG models access a knowledge base and retrieve relevant snippets of information to inform the generation.

    The key aspects of RAG include:

    • External Knowledge Retrieval: Accessing an external repository (e.g., a database or document collection) for relevant information.
    • Contextual Augmentation: Using the retrieved information to enhance the input given to the LLM.
    • Generation Phase: The LLM generates text based on the augmented input.
    • Focus on Content: RAG aims to add domain-specific or real-time knowledge to content generation.

    Key Differences: ECL vs RAG

    While both ECL and RAG aim to enhance LLMs, their fundamental approaches differ. Here’s a breakdown of the key distinctions between ECL vs RAG:

    • Purpose: The ECL is focused on enabling continual learning and preventing forgetting, while RAG is centered around providing external knowledge for enhanced generation.
    • Method of Information Use: The ECL filters context to select relevant classes for an in-context learning prompt, using statistical measures. RAG retrieves specific text snippets from an external source and uses that for text generation.
    • Learning Mechanism: The ECL learns class statistics incrementally and does not store training instances, which helps it handle catastrophic forgetting (CF) and inter-task class separation (ICS). RAG does not directly learn from external data but retrieves and uses it during the generation process.
    • Scalability and Efficiency: The ECL focuses on managing the context length of the prompt, making ICL scalable. RAG adds extra steps in content retrieval and processing, which can be less efficient and more computationally demanding.
    • Application: ECL is well-suited for class-incremental learning, where the goal is to learn a sequence of classification tasks. RAG excels in scenarios that require up-to-date information or context from an external knowledge base.
    • Text Retrieval vs Tag-based Classification: RAG uses text-based similarity search to find similar instances, whereas the ECL uses tag embeddings to classify and determine class similarity.

    When to Use ECL vs RAG

    The choice between ECL and RAG depends on the specific problem you are trying to solve.

    • Choose ECL when:
      • You need to train a classifier with class-incremental learning.
      • You want to avoid catastrophic forgetting and improve scalability in ICL settings.
      • Your task requires focus on relevant class information from past experiences.
    • Choose RAG when:
      • You need to incorporate external knowledge into the output of LLMs.
      • You are working with information that is not present in the model’s pre-training.
      • The aim is to provide up-to-date information or domain-specific context for text generation.

    What is ETL? A Simple Explanation of Extract, Transform, Load

    In the realm of data management, ETL stands for Extract, Transform, Load. It’s a fundamental process used to integrate data from multiple sources into a unified, centralized repository, such as a data warehouse or data lake. Understanding what is ETL is crucial for anyone working with data, as it forms the backbone of data warehousing and business intelligence (BI) systems.

    Breaking Down the ETL Process

    The ETL process involves three main stages: Extract, Transform, and Load. Let’s explore each of these steps in detail:

    1. Extract

    The extract stage is the initial step in the ETL process, where data is gathered from various sources. These sources can be diverse, including:

    • Relational Databases: Such as MySQL, PostgreSQL, Oracle, and SQL Server.
    • NoSQL Databases: Like MongoDB, Cassandra, and Couchbase.
    • APIs: Data extracted from various applications or platforms via their APIs.
    • Flat Files: Data from CSV, TXT, JSON, and XML files.
    • Cloud Services: Data sources like AWS, Google Cloud, and Azure platforms.

    During the extract stage, the ETL tool reads data from these sources, ensuring all required data is captured while minimizing the impact on the source system’s performance. This data is often pulled in its raw format.

    2. Transform

    The transform stage is where the extracted data is cleaned, processed, and converted into a format that is suitable for the target system. The data is transformed and prepared for analysis. This stage often involves various tasks:

    • Data Cleaning: Removing or correcting errors, inconsistencies, duplicates, and incomplete data.
    • Data Standardization: Converting data to a common format (e.g., date and time, units of measure) for consistency.
    • Data Mapping: Ensuring that the data fields from source systems correspond correctly to fields in the target system.
    • Data Aggregation: Combining data to provide summary views and derived calculations.
    • Data Enrichment: Enhancing the data with additional information from other sources.
    • Data Filtering: Removing unnecessary data based on specific rules.
    • Data Validation: Ensuring that the data conforms to predefined business rules and constraints.

    The transformation process is crucial for ensuring the quality, reliability, and consistency of the data.

    3. Load

    The load stage is the final step, where the transformed data is written into the target system. This target can be a:

    • Data Warehouse: A central repository for large amounts of structured data.
    • Data Lake: A repository for storing both structured and unstructured data in its raw format.
    • Relational Databases: Where processed data will be used for reporting and analysis.
    • Specific Application Systems: Data used by business applications for various purposes.

    The load process can involve a full load, which loads all data, or an incremental load, which loads only the changes since the last load. The goal is to ensure data is written efficiently and accurately.
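
    To make the three stages concrete, here is a compact, illustrative ETL run in Python: extract rows from a CSV file, clean and standardize them, and load them into SQLite. File, table, and column names are made up for the example.

    import csv
    import sqlite3

    def extract(path):
        # Extract: read raw records from a flat-file source.
        with open(path, newline="") as f:
            return list(csv.DictReader(f))

    def transform(rows):
        # Transform: filter incomplete records, standardize formats, convert types.
        cleaned = []
        for r in rows:
            if not r.get("email"):
                continue  # data filtering: drop incomplete records
            cleaned.append({
                "email": r["email"].strip().lower(),          # standardization
                "amount_usd": round(float(r["amount"]), 2),   # type conversion
            })
        return cleaned

    def load(rows, db="warehouse.db"):
        # Load: write the prepared rows into the target store.
        con = sqlite3.connect(db)
        con.execute("CREATE TABLE IF NOT EXISTS orders (email TEXT, amount_usd REAL)")
        con.executemany("INSERT INTO orders VALUES (:email, :amount_usd)", rows)
        con.commit()
        con.close()

    load(transform(extract("orders.csv")))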

    Why is ETL Important?

    The ETL process is critical for several reasons:

    • Data Consolidation: It brings together data from different sources into a unified view, breaking down data silos.
    • Data Quality: By cleaning, standardizing, and validating data, ETL enhances the reliability and accuracy of the information.
    • Data Preparation: It transforms the raw data to be analysis ready, making it usable for reporting and business intelligence.
    • Data Accessibility: ETL makes data accessible and actionable, allowing organizations to gain insights and make data-driven decisions.
    • Improved Efficiency: By automating data integration, ETL saves time and resources while reducing the risk of human errors.

    When to use ETL?

    The ETL process is particularly useful for organizations that:

    • Handle a diverse range of data from various sources.
    • Require high-quality, consistent, and reliable data.
    • Need to create data warehouses or data lakes.
    • Use data to enable business intelligence or data-driven decision making.

    ECL vs RAG

    | Feature | ECL (External Continual Learner) | RAG (Retrieval-Augmented Generation) |
    | --- | --- | --- |
    | Purpose | Incremental learning, prevent forgetting | Enhanced text generation via external knowledge |
    | Method | Tag-based filtering and statistical selection of relevant classes | Text-based retrieval of relevant information from an external source |
    | Learning | Incremental statistical learning; no LLM parameter update. | No learning; rather, retrieval of external information. |
    | Data Handling | Uses tagged data to optimize prompts. | Uses text queries to retrieve from external knowledge bases |
    | Focus | Managing prompt size for effective ICL. | Augmenting text generation with external knowledge |
    | Parameter Updates | External module parameters updated; no LLM parameter update. | No parameter updates at all. |

    ETL vs RAG

    | Feature | ETL (Extract, Transform, Load) | RAG (Retrieval-Augmented Generation) |
    | --- | --- | --- |
    | Purpose | Data migration, transformation, and preparation | Enhanced text generation via external knowledge |
    | Method | Data extraction, transformation, and loading. | Text-based retrieval of relevant information from an external source |
    | Learning | No machine learning; a data processing pipeline. | No learning; rather, retrieval of external information. |
    | Data Handling | Works with bulk data at rest. | Utilizes text-based queries for dynamic data retrieval. |
    | Focus | Preparing data for storage or analytics. | Augmenting text generation with external knowledge |
    | Parameter Updates | No parameter update; rules are predefined | No parameter updates at all. |

    The terms ECL, RAG, and ETL represent distinct but important approaches in AI and data management. The External Continual Learner (ECL) helps LLMs to learn incrementally. Retrieval-Augmented Generation (RAG) enhances text generation with external knowledge. ETL is a data management process for data migration and preparation. A clear understanding of ECL vs RAG vs ETL allows developers and data professionals to select the right tools for the right tasks. By understanding these core differences, you can effectively enhance your AI capabilities and optimize your data management workflows, thereby improving project outcomes.