
Overview #

Introduction #

INFINI Pizza is a distributed hybrid search database system. Our mission is to deliver real-time, intelligent search experiences tailored for enterprises by fully harnessing the potential of modern hardware and AI capabilities. We are committed to meeting the demands of high concurrency and high throughput in challenging environments, all while providing seamless and efficient search.

Features #

The Next-Gen Real-Time Search & AI-Native Innovation Engine.

Major Features of INFINI Pizza:

  • True real-time: search results are available immediately after insertion, with no refresh step required.
  • In-place partial updates: change individual fields without pulling and re-indexing the entire document (see the sketch after this list).
  • High performance: high throughput and low latency with a reduced hardware footprint.
  • High scalability: supports very large-scale clusters, scaling beyond petabytes.
  • An in-memory index and columnar store, using object storage for persistence.
  • Native integration with LLMs and ML, empowering AI-Native enterprise innovation.
  • Designed with separation of storage and computation, as well as separation of storage and indexing.
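
The sketch below makes the first two points concrete. It assumes an Elasticsearch-style REST interface reachable at a local endpoint; the endpoint paths and request bodies are illustrative assumptions, not the documented Pizza API.

```python
import requests

PIZZA_URL = "http://localhost:9200"  # assumed local endpoint; adjust for your deployment

# Insert a document. With true real-time search there is no refresh step
# between the write and the first query that can see it.
requests.put(
    f"{PIZZA_URL}/products/_doc/1",
    json={"name": "INFINI Pizza", "category": "search", "stock": 42},
)

# The document is searchable immediately after the insert returns.
hits = requests.post(
    f"{PIZZA_URL}/products/_search",
    json={"query": {"match": {"name": "pizza"}}},
).json()

# In-place partial update: send only the changed field instead of the whole document.
requests.post(
    f"{PIZZA_URL}/products/_update/1",
    json={"doc": {"stock": 41}},
)
```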

Architecture #

Why Pizza #

The name Pizza was taken from our unique sharding design.

Documents in Pizza are persisted as Parquet files in object storage, which provides native integration with other big data systems through object storage and the standard Parquet format.
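
Because the persisted segments are plain Parquet files, any Parquet-aware tool can read them directly from object storage. The sketch below uses pyarrow; the bucket name and path layout are hypothetical.

```python
import pyarrow.dataset as ds
import pyarrow.fs as fs

# Hypothetical bucket and path; Pizza's actual object-storage layout may differ.
s3 = fs.S3FileSystem(region="us-east-1")
dataset = ds.dataset(
    "my-pizza-bucket/segments/products/", filesystem=s3, format="parquet"
)

# Any Parquet-aware engine (Spark, DuckDB, pandas, ...) can scan the same files.
table = dataset.to_table(columns=["name", "category"])
print(table.num_rows)
```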

The philosophy of INFINI Pizza is that indexes should be designed per use case rather than forcing every use case onto a single index. We therefore introduced Views, which allow combining different document sources into a single index, or splitting a document into different layers of indexes for different use cases.
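
To make the idea concrete, the sketch below defines a hypothetical view that combines two document sources and attaches two different index types to them. The `_view` endpoint, index types, and field names are assumptions for illustration only, not a documented DSL.

```python
import requests

PIZZA_URL = "http://localhost:9200"  # assumed endpoint, as in the earlier sketch

# Hypothetical view: combine two document sources into one searchable view,
# with an inverted index over text fields and a vector index over embeddings.
# The "_view" endpoint, index types, and field names are illustrative only.
view = {
    "sources": ["articles", "faq"],
    "indexes": [
        {"type": "inverted", "fields": ["title", "body"]},
        {"type": "vector", "field": "body_embedding", "dims": 768},
    ],
}

requests.put(f"{PIZZA_URL}/_view/knowledge-base", json=view)
```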

Pizza emphasizes the decoupling of storage and computation, as well as the separation of storage and indexing. This enables efficient and scalable data processing by allowing independent management and optimization of storage resources, computational resources, and indexing strategies.

Native integration with LLMs (Large Language Models) and ML (Machine Learning) technologies is a key aspect of INFINI Pizza, providing powerful capabilities for AI-Native enterprise innovation. By seamlessly integrating with LLM and ML frameworks, INFINI Pizza enables advanced natural language processing, machine learning, and data analytics directly within the search and data retrieval pipeline.

This integration empowers enterprises to leverage the full potential of AI technologies, such as sentiment analysis, language understanding, recommendation systems, and predictive analytics, to enhance their search and data-driven applications. By harnessing the power of LLMs and ML models, INFINI Pizza facilitates more accurate and context-aware search results, personalized recommendations, intelligent query understanding, and other AI-driven functionalities.
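
As an illustration of context-aware search, the sketch below embeds a user query with an off-the-shelf embedding model and issues a hypothetical hybrid request that fuses keyword matching with vector similarity. The request shape is an assumption for illustration, not the documented query DSL.

```python
import requests
from sentence_transformers import SentenceTransformer

PIZZA_URL = "http://localhost:9200"  # assumed endpoint

# Embed the user's query with an off-the-shelf model (any embedding model
# would do; this one is only an example).
model = SentenceTransformer("all-MiniLM-L6-v2")
query_text = "laptops good for travel"
query_vector = model.encode(query_text).tolist()

# Hypothetical hybrid request: keyword matching plus vector similarity, fused
# into a single ranked result list.
hits = requests.post(
    f"{PIZZA_URL}/products/_search",
    json={
        "query": {"match": {"description": query_text}},
        "knn": {"field": "description_embedding", "vector": query_vector, "k": 10},
    },
).json()
```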

The native integration with LLMs and ML in INFINI Pizza eliminates the need for separate infrastructure or complex integration efforts. It provides a seamless and efficient environment for enterprises to leverage cutting-edge AI capabilities within their search and data processing workflows. This integration opens up new possibilities for AI-driven innovation and unlocks the full potential of enterprise data for enhanced decision-making, customer experiences, and business insights.

We are building the next generation of search infrastructure, driven by our unwavering commitment to delivering real-time search experiences for enterprises, unlocking the potential of modern hardware, and meeting the demands of high concurrency and high throughput in the most challenging environments.

Next step #

Install and configure INFINI Pizza.