Category: Data Engineering

  • NVIDIA NV Ingest for Complex Unstructured PDFs, Enterprise Documents

    What is NVIDIA NV Ingest?

    NVIDIA NV Ingest is not a static pipeline; it’s a dynamic microservice designed for processing a variety of document formats, including PDF, DOCX, and PPTX. It uses NVIDIA NIM microservices to identify, extract, and contextualize information such as text, tables, charts, and images. The core aim is to transform unstructured data into structured metadata and text, facilitating its use in downstream applications.

    At its core, NVIDIA NV Ingest is a performance-oriented, scalable microservice designed for document content and metadata extraction. Leveraging specialized NVIDIA NIM microservices, this tool goes beyond simple text extraction. It intelligently identifies, contextualizes, and extracts text, tables, charts, and images from a variety of document formats, including PDFs, Word, and PowerPoint files. This enables a streamlined workflow for feeding data into downstream generative AI applications, such as retrieval-augmented generation (RAG) systems.

    NVIDIA Ingest works by accepting a JSON job description that outlines the document payload and the desired ingestion tasks. The result is a JSON dictionary containing a wealth of metadata about the extracted objects and associated processing details. It’s crucial to note that NVIDIA Ingest doesn’t simply act as a wrapper around existing parsing libraries; rather, it’s a flexible, adaptable system designed to manage complex document processing workflows.
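    To make the job-description idea concrete, here is a sketch of what such a JSON payload might look like. The field names below are illustrative assumptions, not the exact NV Ingest schema; the Python client example later in this section shows the actual API for building jobs.

    ```python
    import json

    # Hypothetical job description: a document payload plus the tasks to run on it.
    # Field names are for illustration only, not the official NV Ingest schema.
    job = {
        "document": {
            "source_id": "data/multimodal_test.pdf",
            "document_type": "pdf",
            "payload": "<base64-encoded file content>",
        },
        "tasks": [
            {"type": "extract", "params": {"extract_text": True, "extract_tables": True}},
            {"type": "split", "params": {"chunk_size": 512}},
        ],
    }

    print(json.dumps(job, indent=2))
    ```

    The service would return a JSON dictionary describing each extracted object plus processing metadata.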

    Key Capabilities

    Here’s what NVIDIA NV Ingest is capable of:

    • Multi-Format Support: Handles a variety of documents, including PDF, DOCX, PPTX, and image formats.
    • Versatile Extraction Methods: Offers multiple extraction methods per document type, balancing throughput and accuracy. For PDFs, you can leverage options like pdfium, Unstructured.io, and Adobe Content Extraction Services.
    • Advanced Pre- and Post-Processing: Supports text splitting, chunking, filtering, embedding generation, and image offloading.
    • Parallel Processing: Enables parallel document splitting, content classification (tables, charts, images, text), extraction, and contextualization via Optical Character Recognition (OCR).
    • Vector Database Integration: NVIDIA Ingest also manages the computation of embeddings and can optionally store them in a vector database such as Milvus.
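    To illustrate the embed-and-store step conceptually, here is a toy, self-contained stand-in. The bag-of-words "embedding" and in-memory store are simplifications for illustration; in a real deployment NV Ingest computes embeddings with a NIM model and stores them in a vector database such as Milvus.

    ```python
    import math
    from collections import Counter

    def embed(text: str) -> Counter:
        # Toy embedding: lower-cased word counts; a real pipeline uses a neural model.
        return Counter(text.lower().split())

    def cosine(a: Counter, b: Counter) -> float:
        dot = sum(a[k] * b[k] for k in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    # In-memory stand-in for a vector database.
    store = [(chunk, embed(chunk)) for chunk in [
        "Quarterly revenue grew 12 percent.",
        "The chart shows GPU utilization over time.",
    ]]

    query = embed("gpu utilization chart")
    best = max(store, key=lambda item: cosine(query, item[1]))
    print(best[0])  # the chunk most similar to the query
    ```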

    Why NVIDIA NV Ingest?

    Unlike static pipelines, NVIDIA Ingest provides a flexible framework. It is not a wrapper for any specific parsing library. Instead, it orchestrates the document processing workflow based on your job description.

    The need to parse hundreds of thousands of complex, messy unstructured PDFs is often a major hurdle. NVIDIA Ingest is designed for exactly this scenario, providing a robust and scalable system for large-scale data processing. It breaks down complex PDFs into discrete content, contextualizes it through OCR, and outputs structured JSON that is easy to consume in AI applications.

    Getting Started with NVIDIA NV Ingest

    To get started, you’ll need:

    • Hardware: NVIDIA GPUs (H100 or A100 with at least 80 GB of memory; a minimum of 2 GPUs)

    Software

    • Operating System: Linux (Ubuntu 22.04 or later is recommended)
    • Docker: For containerizing and managing microservices
    • Docker Compose: For multi-container application deployment
    • CUDA Toolkit: (NVIDIA Driver >= 535, CUDA >= 12.2)
    • NVIDIA Container Toolkit: For running NVIDIA GPU-accelerated containers
    • NVIDIA API Key: Required for accessing pre-built containers from NVIDIA NGC. To get early access to NVIDIA Ingest, apply at https://developer.nvidia.com/nemo-microservices-early-access/join

    Step-by-Step Setup and Usage

    1. Starting NVIDIA NIM Microservices Containers

    1. Clone the repository:
      git clone https://github.com/nvidia/nv-ingest
      cd nv-ingest
    2. Log in to NVIDIA GPU Cloud (NGC):
      docker login nvcr.io
      # Username: $oauthtoken
      # Password: <Your API Key>
    3. Create a .env file: 
      Add your NGC API key and any other required paths:
      NGC_API_KEY=your_api_key
      NVIDIA_BUILD_API_KEY=optional_build_api_key
    4. Start the containers:
      sudo nvidia-ctk runtime configure --runtime=docker --set-as-default
      docker compose up

    Note: NIM containers might take 10-15 minutes to fully load models on first startup.

    2. Installing Python Client Dependencies

    1. Create a Python environment (optional but recommended):
      conda env create --name nv-ingest-dev --file ./conda/environments/nv_ingest_environment.yml
      conda activate nv-ingest-dev
    2. Install the client:
      cd client
      pip install .

    If you are not using conda, you can install the dependencies directly with pip:

      pip install -r requirements.txt
      pip install .

    Note: You can perform these steps from your host machine or within the nv-ingest container.

    3. Submitting Ingestion Jobs

    Python Client Example:

    import logging, time
    
    from nv_ingest_client.client import NvIngestClient
    from nv_ingest_client.primitives import JobSpec
    from nv_ingest_client.primitives.tasks import ExtractTask
    from nv_ingest_client.util.file_processing.extract import extract_file_content
    
    logger = logging.getLogger("nv_ingest_client")
    
    file_name = "data/multimodal_test.pdf"
    file_content, file_type = extract_file_content(file_name)
    
    job_spec = JobSpec(
        document_type=file_type,
        payload=file_content,
        source_id=file_name,
        source_name=file_name,
        extended_options={
            "tracing_options": {
                "trace": True,
                "ts_send": time.time_ns()
            }
        }
    )
    
    extract_task = ExtractTask(
        document_type=file_type,
        extract_text=True,
        extract_images=True,
        extract_tables=True
    )
    
    job_spec.add_task(extract_task)
    
    client = NvIngestClient(
        message_client_hostname="localhost",  # Host where nv-ingest-ms-runtime is running
        message_client_port=7670  # REST port, defaults to 7670
    )
    
    job_id = client.add_job(job_spec)
    client.submit_job(job_id, "morpheus_task_queue")
    result = client.fetch_job_result(job_id, timeout=60)
    print(f"Got {len(result)} results")
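    Once fetch_job_result returns, the payload is ordinary JSON-like data that you can walk with plain Python. The record shape below (a document_type field plus a metadata dict with a content key) is an assumption for illustration; inspect your own results to confirm the exact keys.

    ```python
    # Hypothetical result records, for illustration of post-processing only;
    # verify the real schema against your own fetch_job_result output.
    result = [
        {"document_type": "text", "metadata": {"content": "Page 1 text ..."}},
        {"document_type": "structured", "metadata": {"content": "| a | b |"}},
    ]

    # Group extracted content by type (text, structured, image, ...).
    by_type = {}
    for item in result:
        by_type.setdefault(item["document_type"], []).append(item["metadata"]["content"])

    for doc_type, contents in sorted(by_type.items()):
        print(f"{doc_type}: {len(contents)} item(s)")
    ```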

    Command Line (nv-ingest-cli) Example:

    nv-ingest-cli \
        --doc ./data/multimodal_test.pdf \
        --output_directory ./processed_docs \
        --task='extract:{"document_type": "pdf", "extract_method": "pdfium", "extract_tables": "true", "extract_images": "true"}' \
        --client_host=localhost \
        --client_port=7670

    Note: Adjust the file path, client_host, and client_port to match your setup.

    Note: extract_tables controls both table and chart extraction; to disable chart extraction, set the extract_charts parameter to false.

    4. Inspecting Results

    After ingestion, results are written to the processed_docs directory under the text, image, and structured subdirectories, each with corresponding JSON metadata files. You can inspect the extracted images using the provided image viewer script:

    1. First, install tkinter by running the commands for your OS:

      # For Ubuntu/Debian:
      sudo apt-get update
      sudo apt-get install python3-tk

      # For Fedora/RHEL:
      sudo dnf install python3-tkinter

      # For macOS:
      brew install python-tk
    2. Run image viewer:
      python src/util/image_viewer.py --file_path ./processed_docs/image/multimodal_test.pdf.metadata.json
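    Beyond the image viewer, the per-document .metadata.json files are ordinary JSON and can be inspected directly. A minimal sketch (the file layout follows the CLI example above; treat any nested keys you find as schema to discover, not assume):

    ```python
    import json
    from pathlib import Path

    def summarize_metadata(path: str) -> str:
        """Return a one-line summary of an NV Ingest metadata JSON file."""
        p = Path(path)
        if not p.exists():
            return f"{p} not found; run an ingestion job first."
        records = json.loads(p.read_text())
        return f"{len(records)} extracted objects"

    # Path from the CLI example above; adjust to your setup.
    print(summarize_metadata("./processed_docs/image/multimodal_test.pdf.metadata.json"))
    ```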

    Understanding the Output

    The output of NVIDIA NV Ingest is a structured JSON document, which contains:

    • Extracted Text: Text content from the document.
    • Extracted Tables: Table data in structured format.
    • Extracted Charts: Information about charts present in the document.
    • Extracted Images: Metadata for extracted images.
    • Processing Annotations: Timing and tracing data for analysis.

    This output can be easily integrated into various systems, including vector databases for semantic search and LLM applications.


    NVIDIA NV Ingest Use Cases

    NVIDIA NV Ingest is ideal for various applications, including:

    • Retrieval-Augmented Generation (RAG): Enhance LLMs with accurate and contextualized data from your documents.
    • Enterprise Search: Improve search capabilities by indexing text and metadata from large document repositories.
    • Data Analysis: Unlock hidden patterns and insights within unstructured data.
    • Automated Document Processing: Streamline workflows by automating the extraction process from unstructured documents.

    Troubleshooting

    Common Issues

    • NIM Containers Not Starting: Check resource availability (GPU memory, CPU), verify NGC login details, and ensure the correct CUDA driver is installed.
    • Python Client Errors: Verify dependencies are installed correctly and the client is configured to connect with the running service.
    • Job Failures: Examine the logs for detailed error messages, check the input document for errors, and verify task configuration.

    Tips

    • Verbose Logging: Enable verbose logging by setting NIM_TRITON_LOG_VERBOSE=1 in docker-compose.yaml to help diagnose issues.
    • Container Logs: Use docker logs to inspect logs for each container to identify problems.
    • GPU Utilization: Use nvidia-smi to monitor GPU activity. If nvidia-smi takes more than a minute to return, the GPU is most likely still busy setting up the models.

  • ECL vs RAG, What is ETL: AI Learning, Data, and Transformation


    ECL vs RAG: A Deep Dive into Two Innovative AI Approaches

    In the world of advanced AI, particularly with large language models (LLMs), two innovative approaches stand out: the External Continual Learner (ECL) and Retrieval-Augmented Generation (RAG). While both aim to enhance the capabilities of AI models, they serve different purposes and use distinct mechanisms. Understanding the nuances of ECL vs RAG is essential for choosing the right method for your specific needs.

    ECL vs ETL vs RAG

    What is an External Continual Learner (ECL)?

    An External Continual Learner (ECL) is a method designed to assist large language models (LLMs) in incremental learning without suffering from catastrophic forgetting. The ECL functions as an external module that intelligently selects relevant information for each new input, ensuring that the LLM can learn new tasks without losing its previously acquired knowledge.

    The core features of the ECL include:

    • Incremental Learning: The ability to learn continuously without forgetting past knowledge.
    • Tag Generation: Using the LLM to generate descriptive tags for input text.
    • Gaussian Class Representation: Representing each class with a statistical distribution of its tag embeddings.
    • Mahalanobis Distance Scoring: Selecting the most relevant classes for each input using distance calculations.

    The goal of the ECL is to streamline the in-context learning (ICL) process by reducing the number of relevant examples that need to be included in the prompt, addressing scalability issues.
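    The Gaussian class representation and Mahalanobis scoring described above can be sketched in a few lines. For simplicity this toy version uses a diagonal covariance (one variance per dimension) and made-up two-dimensional tag embeddings; an actual ECL implementation may use the full covariance matrix and real embedding vectors.

    ```python
    import math

    def mahalanobis_diag(x, mean, var):
        # Mahalanobis distance with diagonal covariance:
        # sqrt(sum_i (x_i - mu_i)^2 / sigma_i^2)
        return math.sqrt(sum((xi - mi) ** 2 / vi for xi, mi, vi in zip(x, mean, var)))

    # Each class is summarized by the mean and variance of its tag embeddings.
    classes = {
        "sports":  {"mean": [1.0, 0.0], "var": [0.5, 0.5]},
        "finance": {"mean": [0.0, 1.0], "var": [0.1, 0.1]},
    }

    x = [0.9, 0.2]  # tag embedding of a new input
    # Select the most relevant classes for the ICL prompt: smallest distance first.
    ranked = sorted(classes, key=lambda c: mahalanobis_diag(x, classes[c]["mean"], classes[c]["var"]))
    print(ranked)
    ```

    Only the top-ranked classes need to be included in the prompt, which is how the ECL keeps the in-context learning prompt short.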

    What is Retrieval-Augmented Generation (RAG)?

    Retrieval-Augmented Generation (RAG) is a framework that enhances the performance of large language models by providing them with external information during the generation process. Instead of relying solely on their pre-trained knowledge, RAG models access a knowledge base and retrieve relevant snippets of information to inform the generation.

    The key aspects of RAG include:

    • External Knowledge Retrieval: Accessing an external repository (e.g., a database or document collection) for relevant information.
    • Contextual Augmentation: Using the retrieved information to enhance the input given to the LLM.
    • Generation Phase: The LLM generates text based on the augmented input.
    • Focus on Content: RAG aims to add domain-specific or real-time knowledge to content generation.
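    The retrieve-augment-generate loop can be sketched end to end. Retrieval here is naive word overlap and the "LLM call" is a placeholder function; a real RAG system would use embedding-based retrieval and an actual LLM API.

    ```python
    knowledge_base = [
        "The Eiffel Tower is 330 metres tall.",
        "Python 3.12 was released in October 2023.",
        "RAG retrieves external documents to ground LLM answers.",
    ]

    def retrieve(query, docs, k=1):
        # Naive retrieval: rank documents by word overlap with the query.
        q = set(query.lower().split())
        return sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)[:k]

    def generate(prompt):
        # Placeholder for a real LLM call.
        return f"[LLM answer conditioned on prompt of {len(prompt)} chars]"

    query = "How tall is the Eiffel Tower?"
    context = "\n".join(retrieve(query, knowledge_base))
    answer = generate(f"Context:\n{context}\n\nQuestion: {query}")
    print(context)
    ```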

    Key Differences: ECL vs RAG

    While both ECL and RAG aim to enhance LLMs, their fundamental approaches differ. Here’s a breakdown of the key distinctions between ECL vs RAG:

    • Purpose: The ECL is focused on enabling continual learning and preventing forgetting, while RAG is centered around providing external knowledge for enhanced generation.
    • Method of Information Use: The ECL filters context to select relevant classes for an in-context learning prompt, using statistical measures. RAG retrieves specific text snippets from an external source and uses that for text generation.
    • Learning Mechanism: The ECL learns class statistics incrementally and does not store training instances, which addresses catastrophic forgetting (CF) and inter-task class separation (ICS). RAG does not directly learn from external data but retrieves and uses it during the generation process.
    • Scalability and Efficiency: The ECL focuses on managing the context length of the prompt, making ICL scalable. RAG adds extra steps in content retrieval and processing, which can be less efficient and more computationally demanding.
    • Application: ECL is well-suited for class-incremental learning, where the goal is to learn a sequence of classification tasks. RAG excels in scenarios that require up-to-date information or context from an external knowledge base.
    • Text Retrieval vs Tag-based Classification: RAG uses text-based similarity search to find similar instances, whereas the ECL uses tag embeddings to classify and determine class similarity.

    When to Use ECL vs RAG

    The choice between ECL and RAG depends on the specific problem you are trying to solve.

    • Choose ECL when:
      • You need to train a classifier with class-incremental learning.
      • You want to avoid catastrophic forgetting and improve scalability in ICL settings.
      • Your task requires focus on relevant class information from past experiences.
    • Choose RAG when:
      • You need to incorporate external knowledge into the output of LLMs.
      • You are working with information that is not present in the model’s pre-training.
      • The aim is to provide up-to-date information or domain-specific context for text generation.

    What is ETL? A Simple Explanation of Extract, Transform, Load

    In the realm of data management, ETL stands for Extract, Transform, Load. It’s a fundamental process used to integrate data from multiple sources into a unified, centralized repository, such as a data warehouse or data lake. Understanding what is ETL is crucial for anyone working with data, as it forms the backbone of data warehousing and business intelligence (BI) systems.

    Breaking Down the ETL Process

    The ETL process involves three main stages: Extract, Transform, and Load. Let’s explore each of these steps in detail:

    1. Extract

    The extract stage is the initial step in the ETL process, where data is gathered from various sources. These sources can be diverse, including:

    • Relational Databases: Such as MySQL, PostgreSQL, Oracle, and SQL Server.
    • NoSQL Databases: Like MongoDB, Cassandra, and Couchbase.
    • APIs: Data extracted from various applications or platforms via their APIs.
    • Flat Files: Data from CSV, TXT, JSON, and XML files.
    • Cloud Services: Data sources like AWS, Google Cloud, and Azure platforms.

    During the extract stage, the ETL tool reads data from these sources, ensuring all required data is captured while minimizing the impact on the source system’s performance. This data is often pulled in its raw format.
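    A minimal extract step using only the standard library might look as follows (the inline CSV is a stand-in for a real flat-file or API source; in production an ETL tool or connector handles this):

    ```python
    import csv
    import io

    # Stand-in for a flat-file source; in practice this is a file, table, or API response.
    raw = io.StringIO("id,amount,date\n1,100,2024-01-05\n2,250,2024-01-06\n")

    # Pull the data in its raw form: every value arrives as a string.
    rows = list(csv.DictReader(raw))
    print(rows[0])
    ```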

    2. Transform

    The transform stage is where the extracted data is cleaned, processed, and converted into a format that is suitable for the target system. The data is transformed and prepared for analysis. This stage often involves various tasks:

    • Data Cleaning: Removing or correcting errors, inconsistencies, duplicates, and incomplete data.
    • Data Standardization: Converting data to a common format (e.g., date and time, units of measure) for consistency.
    • Data Mapping: Ensuring that the data fields from source systems correspond correctly to fields in the target system.
    • Data Aggregation: Combining data to provide summary views and derived calculations.
    • Data Enrichment: Enhancing the data with additional information from other sources.
    • Data Filtering: Removing unnecessary data based on specific rules.
    • Data Validation: Ensuring that the data conforms to predefined business rules and constraints.

    The transformation process is crucial for ensuring the quality, reliability, and consistency of the data.
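    Several of the transformation tasks above (cleaning, standardization, filtering) can be sketched in one small function. The rules here are invented for illustration; real transforms encode your own business rules.

    ```python
    def transform(rows):
        """Clean, standardize, and filter raw rows (all values arrive as strings)."""
        out = []
        seen = set()
        for row in rows:
            key = row["id"]
            if key in seen:            # data cleaning: drop duplicates
                continue
            seen.add(key)
            amount = row.get("amount", "").strip()
            if not amount:             # data filtering: drop incomplete records
                continue
            out.append({
                "id": int(key),                  # standardize types
                "amount": float(amount),
                "date": row["date"].strip(),     # e.g. already ISO 8601
            })
        return out

    raw_rows = [
        {"id": "1", "amount": " 100 ", "date": "2024-01-05"},
        {"id": "1", "amount": "100", "date": "2024-01-05"},   # duplicate
        {"id": "2", "amount": "", "date": "2024-01-06"},      # incomplete
    ]
    print(transform(raw_rows))
    ```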

    3. Load

    The load stage is the final step, where the transformed data is written into the target system. This target can be a:

    • Data Warehouse: A central repository for large amounts of structured data.
    • Data Lake: A repository for storing both structured and unstructured data in its raw format.
    • Relational Databases: Where processed data will be used for reporting and analysis.
    • Specific Application Systems: Data used by business applications for various purposes.

    The load process can involve a full load, which loads all data, or an incremental load, which loads only the changes since the last load. The goal is to ensure data is written efficiently and accurately.
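    The incremental-load idea can be sketched with SQLite as a stand-in target (the table and rows are invented for illustration): new rows are inserted and changed rows are upserted, rather than reloading everything.

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")  # stand-in for the target warehouse
    conn.execute("CREATE TABLE sales (id INTEGER PRIMARY KEY, amount REAL)")

    def load(rows):
        """Incremental load: insert new rows, update changed ones (upsert)."""
        conn.executemany(
            "INSERT INTO sales (id, amount) VALUES (:id, :amount) "
            "ON CONFLICT(id) DO UPDATE SET amount = excluded.amount",
            rows,
        )
        conn.commit()

    load([{"id": 1, "amount": 100.0}])                              # initial load
    load([{"id": 1, "amount": 120.0}, {"id": 2, "amount": 250.0}])  # incremental batch
    print(conn.execute("SELECT id, amount FROM sales ORDER BY id").fetchall())
    ```

    A full load would instead truncate the target and rewrite it; the upsert pattern is what makes incremental loads cheap.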

    Why is ETL Important?

    The ETL process is critical for several reasons:

    • Data Consolidation: It brings together data from different sources into a unified view, breaking down data silos.
    • Data Quality: By cleaning, standardizing, and validating data, ETL enhances the reliability and accuracy of the information.
    • Data Preparation: It transforms the raw data to be analysis ready, making it usable for reporting and business intelligence.
    • Data Accessibility: ETL makes data accessible and actionable, allowing organizations to gain insights and make data-driven decisions.
    • Improved Efficiency: By automating data integration, ETL saves time and resources while reducing the risk of human errors.

    When to use ETL?

    The ETL process is particularly useful for organizations that:

    • Handle a diverse range of data from various sources.
    • Require high-quality, consistent, and reliable data.
    • Need to create data warehouses or data lakes.
    • Use data to enable Business Intelligence or data driven decision making.

    ECL vs RAG

    Feature           | ECL (External Continual Learner)                                   | RAG (Retrieval-Augmented Generation)
    Purpose           | Incremental learning, prevent forgetting                           | Enhanced text generation via external knowledge
    Method            | Tag-based filtering and statistical selection of relevant classes  | Text-based retrieval of relevant information from an external source
    Learning          | Incremental statistical learning; no LLM parameter update          | No learning; rather, retrieval of external information
    Data Handling     | Uses tagged data to optimize prompts                               | Uses text queries to retrieve from external knowledge bases
    Focus             | Managing prompt size for effective ICL                             | Augmenting text generation with external knowledge
    Parameter Updates | External module parameters updated; no LLM parameter update        | No parameter updates at all

    ETL vs RAG

    Feature           | ETL (Extract, Transform, Load)                                     | RAG (Retrieval-Augmented Generation)
    Purpose           | Data migration, transformation, and preparation                    | Enhanced text generation via external knowledge
    Method            | Data extraction, transformation, and loading                       | Text-based retrieval of relevant information from an external source
    Learning          | No machine learning; a data processing pipeline                    | No learning; rather, retrieval of external information
    Data Handling     | Works with bulk data at rest                                       | Utilizes text-based queries for dynamic data retrieval
    Focus             | Preparing data for storage or analytics                            | Augmenting text generation with external knowledge
    Parameter Updates | No parameter updates; rules are predefined                         | No parameter updates at all

    The terms ECL, RAG, and ETL represent distinct but important approaches in AI and data management. The External Continual Learner (ECL) helps LLMs to learn incrementally. Retrieval-Augmented Generation (RAG) enhances text generation with external knowledge. ETL is a data management process for data migration and preparation. A clear understanding of ECL vs RAG vs ETL allows developers and data professionals to select the right tools for the right tasks. By understanding these core differences, you can effectively enhance your AI capabilities and optimize your data management workflows, thereby improving project outcomes.

  • Garbage In, Garbage Out: Why Data Quality is the Cornerstone of AI Success


    AI projects fail more often due to poor data quality than flawed algorithms. Learn why focusing on data cleansing, preparation, and governance is crucial for successful AI, Machine Learning, and Generative AI initiatives.

    We all know AI is the buzzword of the decade. From chatbots and virtual assistants to advanced predictive analytics, the possibilities seem limitless. But behind every successful AI application lies a critical, often overlooked, component: data.

    Wrong AI response and hallucination due to bad data


    It’s easy to get caught up in the excitement of cutting-edge algorithms and powerful models, but the reality is stark: if your data is poor, your AI will be poor. The old adage “Garbage In, Garbage Out” (GIGO) has never been more relevant than in the world of Artificial Intelligence. This isn’t just about missing values or misspellings; it’s about a fundamental understanding that data quality is the bedrock of any AI initiative.

    Why Data Quality Matters More Than You Think

    Data flow for a good AI response

    You might be thinking, “Yeah, yeah, data quality. I know.” But consider this:

    • Machine Learning & Model Accuracy: Machine learning models learn from data. If the data is biased, inconsistent, or inaccurate, the model will learn to make biased, inconsistent, and inaccurate predictions. No matter how sophisticated your model is, it won’t overcome flawed input.
    • Generative AI Hallucinations: Even the most impressive generative AI models can produce nonsensical outputs (known as “hallucinations”) when fed unreliable data. These models learn patterns from data, and if the underlying data is flawed, the patterns will be flawed too.
    • The Impact on Business Decisions: Ultimately, AI is meant to drive better business decisions. If the data underlying these decisions is unreliable, the outcomes will be detrimental, leading to missed opportunities, financial losses, and damage to reputation.
    • Increased Development Time & Costs: Debugging problems caused by bad data can consume vast amounts of development time. Identifying and correcting data quality issues is time-consuming and can require specialised expertise. This significantly increases project costs and delays time-to-market.

    Beyond the Basic Clean-Up

    Data quality goes beyond just removing duplicates and correcting spelling mistakes. It involves a comprehensive approach encompassing:

    • Completeness: Ensuring all relevant data is present. Are you missing vital fields? Are critical records incomplete?
    • Accuracy: Making sure data is correct and truthful. Are values consistent across different systems?
    • Consistency: Data should be uniform across your different sources.
    • Validity: Data should conform to defined rules and formats.
    • Timeliness: Keeping data up-to-date and relevant. Outdated data can lead to inaccurate results.
    • Data Governance: Implementing policies and processes to ensure data is managed effectively.

    Key Steps to Improve Data Quality for AI:

    1. Data Audit: Start by understanding your current data landscape. Where is your data coming from? What are the potential quality issues?
    2. Define Data Quality Metrics: Identify which aspects of data quality matter most for your specific AI use case.
    3. Data Cleansing & Preparation: Develop processes to correct errors, fill missing data, and transform data into a usable format.
    4. Implement Data Governance: Define clear ownership and responsibilities for data quality.
    5. Continuous Monitoring: Data quality is an ongoing process. Implement monitoring to identify and address issues proactively.
    6. Invest in Data Engineering: A team experienced in data processing and ETL pipelines is important for the success of the project.
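    Step 5 above (continuous monitoring) is easiest when data-quality metrics are computed automatically. A minimal sketch, with invented required fields and a deliberately crude email check for illustration:

    ```python
    def audit(rows, required=("id", "email")):
        """Compute simple completeness and validity metrics for a dataset."""
        total = len(rows)
        # Completeness: fraction of rows where every required field is present and non-empty.
        complete = sum(all(row.get(f) for f in required) for row in rows)
        # Validity: crude stand-in check; real rules come from your data governance policy.
        valid_email = sum("@" in (row.get("email") or "") for row in rows)
        return {
            "rows": total,
            "completeness": complete / total,
            "email_validity": valid_email / total,
        }

    rows = [
        {"id": 1, "email": "a@example.com"},
        {"id": 2, "email": ""},               # incomplete
        {"id": 3, "email": "not-an-email"},   # invalid
    ]
    print(audit(rows))
    ```

    Tracking metrics like these over time turns "data quality is an ongoing process" into something you can alert on.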

    Don’t Neglect the Foundation

    AI has the potential to transform businesses, but its success hinges on the quality of its fuel – data. Instead of chasing the latest algorithms, make sure you’re not skipping the important part. Prioritising data quality is not just a technical consideration; it’s a strategic imperative. By investing in building a robust data foundation, you can unlock the true power of AI and realize its full potential. Remember, the best AI strategy always begins with the best data.