
We Put SQL Back in NoSQL

Query HBase with standard SQL and JDBC. Low latency OLTP and operational analytics for Hadoop.

Why Phoenix

The trusted data platform for OLTP and operational analytics on Hadoop.

Standard SQL & JDBC

Use familiar SQL queries and JDBC APIs with full ACID transaction capabilities.

Millisecond Performance

Millisecond latency for small queries, or seconds for aggregations over tens of millions of rows.

Schema Flexibility

Schema-on-read flexibility from the NoSQL world, leveraging HBase as the backing store.

Hadoop Ecosystem

Fully integrated with Spark, Hive, Pig, Flume, and MapReduce.

Use Cases

Proven patterns where Phoenix delivers value.

Operational Analytics

Real-time SQL queries on operational data with ACID guarantees for business insights.

Low Latency OLTP

Transactional workloads with millisecond response times and full ACID support.

Multi-tenant Applications

Build SaaS applications with tenant isolation using views and dynamic columns.
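As a sketch of that tenant-isolation pattern (the ZooKeeper host, table name, and tenant identifier below are hypothetical placeholders): a shared table is declared MULTI_TENANT, each tenant connects with the TenantId connection property, and defines its own view over the shared table.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;
import java.util.Properties;

public class TenantViewExample {
    public static void main(String[] args) throws Exception {
        // Global connection: create the shared, multi-tenant base table.
        // The first primary-key column holds the tenant identifier.
        try (Connection global = DriverManager.getConnection("jdbc:phoenix:zk1:2181");
             Statement stmt = global.createStatement()) {
            stmt.execute("CREATE TABLE IF NOT EXISTS base_events ("
                    + " tenant_id VARCHAR NOT NULL,"
                    + " event_id BIGINT NOT NULL,"
                    + " msg VARCHAR"
                    + " CONSTRAINT pk PRIMARY KEY (tenant_id, event_id))"
                    + " MULTI_TENANT=true");
        }

        // Tenant-specific connection: Phoenix scopes reads and writes
        // on this connection to the given TenantId automatically.
        Properties props = new Properties();
        props.setProperty("TenantId", "acme");
        try (Connection tenant = DriverManager.getConnection("jdbc:phoenix:zk1:2181", props);
             Statement stmt = tenant.createStatement()) {
            // A tenant view over the shared table; "acme" sees only its own rows.
            stmt.execute("CREATE VIEW IF NOT EXISTS acme_events"
                    + " AS SELECT * FROM base_events");
        }
    }
}
```

Running this requires a live HBase cluster with the phoenix-client jar on the classpath.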

Secondary Indexing

Fast lookups on non-primary key columns with automatic index maintenance.
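A minimal sketch of that pattern, assuming a running cluster and a hypothetical events table with host and payload columns: the index is declared once, and Phoenix maintains it on every write and uses it for matching queries.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class SecondaryIndexExample {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk1:2181");
             Statement stmt = conn.createStatement()) {
            // Index on a non-primary-key column; INCLUDE adds covered columns
            // so the query below can be answered from the index alone.
            stmt.execute("CREATE INDEX IF NOT EXISTS idx_events_host"
                    + " ON events (host) INCLUDE (payload)");
            // Phoenix maintains idx_events_host on every UPSERT and DELETE,
            // and the optimizer selects it for host lookups automatically.
            stmt.executeQuery("SELECT host, payload FROM events"
                    + " WHERE host = 'web01'");
        }
    }
}
```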

Time-Series Data

SQL queries over time-series data with efficient storage and retrieval patterns.

Data Integration

ETL pipelines with Spark, Hive, and MapReduce for comprehensive data workflows.

SQL Support

Phoenix takes your SQL query, compiles it into HBase scans, and orchestrates the execution of those scans to produce standard JDBC result sets.

Complete SQL Support

SELECT, FROM, WHERE, GROUP BY, HAVING, ORDER BY, and more. Full DML and DDL support.
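As an illustrative sketch of those clauses together (table and column names are hypothetical), an aggregate query over JDBC might look like this:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class AggregateQueryExample {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk1:2181");
             Statement stmt = conn.createStatement()) {
            // GROUP BY, HAVING, ORDER BY, and LIMIT in one query;
            // Phoenix compiles this into parallel HBase scans with
            // server-side aggregation in coprocessors.
            ResultSet rs = stmt.executeQuery(
                    "SELECT host, COUNT(*) AS hits, SUM(bytes) AS total_bytes"
                    + " FROM events"
                    + " WHERE created_at > CURRENT_DATE() - 7"
                    + " GROUP BY host"
                    + " HAVING COUNT(*) > 100"
                    + " ORDER BY total_bytes DESC"
                    + " LIMIT 10");
            while (rs.next()) {
                System.out.printf("%s %d %d%n",
                        rs.getString("host"), rs.getLong("hits"),
                        rs.getLong("total_bytes"));
            }
        }
    }
}
```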

Optimized Execution

Queries compiled into HBase scans with coprocessors and custom filters for millisecond performance.

JDBC Connection

Connect using a standard JDBC URL that names the HBase ZooKeeper quorum: jdbc:phoenix:server1,server2:port
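The connection flow can be sketched end to end as follows (the ZooKeeper hosts and table name are placeholders; the phoenix-client jar must be on the classpath so the driver registers itself):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

public class PhoenixQuickstart {
    public static void main(String[] args) throws Exception {
        // Replace zk1,zk2 with your HBase ZooKeeper quorum.
        String url = "jdbc:phoenix:zk1,zk2:2181";
        try (Connection conn = DriverManager.getConnection(url)) {
            try (Statement stmt = conn.createStatement()) {
                stmt.execute("CREATE TABLE IF NOT EXISTS events ("
                        + " id BIGINT NOT NULL PRIMARY KEY,"
                        + " host VARCHAR, payload VARCHAR)");
            }
            // Phoenix uses UPSERT rather than INSERT, and connections
            // are not auto-commit by default, so commit explicitly.
            try (PreparedStatement ps = conn.prepareStatement(
                    "UPSERT INTO events VALUES (?, ?, ?)")) {
                ps.setLong(1, 1L);
                ps.setString(2, "web01");
                ps.setString(3, "login");
                ps.executeUpdate();
            }
            conn.commit();
            try (Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT id, host FROM events")) {
                while (rs.next()) {
                    System.out.println(rs.getLong(1) + " " + rs.getString(2));
                }
            }
        }
    }
}
```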

A Vibrant Community

Apache Phoenix is a top-level Apache project with an active community of users and contributors. Join discussions, explore the language reference, and help shape the future of SQL on HBase.

Getting Started

From download to production in a few simple steps.

1. Download

Grab the latest stable release and verify checksums.

2. Read the Guide

Walk through cluster setup, schema design, and operations.

3. Connect a Client

Configure the JDBC client classpath and connection URL.