
LQ1

LAHARA QUANTUM ONE

A clean-slate Layer-1 blockchain protocol built for the post-quantum era. Native Falcon-1024 from genesis. ORCA parallel execution. Adaptive HotStuff-2 BFT. RISC-V hardware-native execution. No classical elliptic-curve operation at any protocol layer.

121K
TPS · Phase 1 Sim
232ms
Avg Finality · Phase 3
<2s
Consensus Target
NIST
FIPS 203/205/206
Explore Architecture
System Architecture

SIX-LAYER PROTOCOL STACK

LQ1 organises all protocol functions into six isolated layers — from cryptographic primitives at the base to developer tooling at the top. Each exposes well-defined interfaces, enabling governance-driven upgrades without destabilising adjacent components. Click any layer to explore its simulator.

Global Network Layer
SSSV Layer
Coordination Layer
Consensus Layer
Execution Layer
Core Trust Layer
GLOBAL NETWORK LAYER

The application-facing layer exposing NexaScript smart contracts, native account abstraction wallets, cross-chain intent protocols, and the NexaForge developer toolkit. NexaForge provides compile-time static analysis for read/write set validation, formal verification support, and proof generation tooling.

NexaScript · AA Wallets · NexaForge · Cross-Chain Intents · RPC Interface
▸ NexaForge Compile Pipeline
SSSV LAYER — cPQC-Agg Aggregation

Proprietary post-quantum signature set verification architecture. cPQC-Agg (Compact Post-Quantum Cryptographic Aggregation) produces constant-size quorum certificates of ~48 bytes amortised per validator with O(1) verification cost — based on the LaBRADOR lattice proof framework. Validators hold two PQ key pairs: a Falcon-1024 identity key and a cPQC-Agg consensus key.

cPQC-Agg · ~48B / validator · O(1) Verify · LaBRADOR Lattice
▸ cPQC-Agg Aggregation Simulation
COORDINATION LAYER — Proof & Verification

Plonky3 — FRI-based ZK proof system. No trusted setup. Post-quantum secure under BLAKE3 hash collision resistance alone. Proof generation is asynchronous — blocks finalise through HotStuff-2 first, proofs attach in a subsequent round. ZK-verified mode: ~150ms parallelised for 50k tx. Mandatory for cross-chain bridge transactions.

Plonky3 FRI · No Trusted Setup · Recursive ZK · BLAKE3
▸ Plonky3 Proof Pipeline
CONSENSUS LAYER — HotStuff-2 BFT

Adaptive HotStuff-2 BFT with O(n) communication complexity. Tolerates up to 1/3 Byzantine validators. Propose → Vote → Commit. LQ1-VRF leader election (BLAKE3 + Falcon-1024). NexaNet uses QUIC transport with Kademlia DHT. IBLT delta propagation: <150ms effective global block propagation.

HotStuff-2 BFT · O(n) Complexity · QUIC / Kademlia · IBLT Delta
▸ BFT Round Animation
EXECUTION LAYER — ORCA + NexaVM

ORCA constructs a dependency DAG from declared read/write sets and schedules independent paths across all processor cores — eliminating rollback in over 98% of transactions. NexaVM: RISC-V RV64GC ahead-of-time compiled bytecode via LLVM. Near-native performance with hardware acceleration for cryptographic operations.

ORCA DAG · RISC-V RV64GC · <2% Conflict Rate · LLVM AoT
▸ ORCA Parallel Execution
CORE TRUST LAYER — Cryptographic Foundation

The base of all protocol trust. Every primitive is NIST-standardised. Falcon-1024 for all transaction and validator signatures (~1,280 bytes). ML-KEM-1024 for key encapsulation (FIPS 203). Poseidon2 for ZK-compatible state hashing. BLAKE3 for general hashing and FRI commitments. Zero classical elliptic-curve operations.

Falcon-1024 · ML-KEM-1024 · cPQC-Agg · Poseidon2 · BLAKE3
▸ PQC Verification Throughput
The Problem

LEGACY CHAINS ARE VULNERABLE

The global blockchain ecosystem carries trillions in value secured by classical cryptographic assumptions that quantum computing renders obsolete. The threat is architectural — embedded in every major deployed chain.

Shor's Algorithm

Bitcoin and Ethereum use secp256k1 ECDSA, broken in polynomial time by Shor's algorithm. Solana's Ed25519 is equally vulnerable. EVM rollups depend on pairing-based cryptography whose security also rests on elliptic-curve hardness. Retrofitting any of these chains requires consensus-breaking hard forks.

🔐
NIST Finalised PQC — 2024

NIST finalised FIPS 203 (ML-KEM) and FIPS 205 (SLH-DSA) in 2024, with FIPS 206 (FN-DSA, the standardised form of Falcon) following in the same standards track — establishing the approved post-quantum primitive set for long-lived infrastructure. LQ1 integrates all three standard families from genesis as first-class primitives.

Sequential Execution Failure

Beyond cryptography, a second structural failure: sequential execution. Even the most advanced rollups process transactions one-by-one, leaving multi-core validator hardware almost entirely idle. A protocol designed from genesis for parallel execution delivers order-of-magnitude throughput gains.

LQ1 — Not a Retrofit

LQ1 is the only production-targeted protocol combining post-quantum security, hardware-native execution, massively parallel throughput, and BFT consensus from genesis. No currently deployed blockchain achieves all four properties simultaneously.

Cryptographic System

PQC FROM GENESIS

Every operation — transaction signing, validator consensus, key encapsulation, state hashing, and ZK proof generation — uses NIST-standardised quantum-resistant primitives exclusively. Zero classical elliptic-curve operations at any protocol layer.

Purpose | Primitive | Standard | Key Property
Transaction Signatures | Falcon-1024 | FIPS 206 | ~1,280 byte sigs · lattice-based NTRU · HSM-isolated signing
Validator Signatures | Falcon-1024 | FIPS 206 | Identity key — transaction signing & peer authentication
Consensus Aggregation | cPQC-Agg | LaBRADOR | ~48B/validator amortised · O(1) verify · 128B Groth16 proof · 99.98% detection rate (k=80, n=200)
Key Encapsulation | ML-KEM-1024 | FIPS 203 | Formerly CRYSTALS-Kyber · IND-CCA2 · NIST security category 5
ZK-Friendly Hash | Poseidon2 | ZK-std | State tree · execution witness commitments · ZK circuit native
General Hash | BLAKE3 | IETF | Block headers · FRI commitments · peer auth · Merkle trees
Long-Term Recovery | SLH-DSA-128f | FIPS 205 | Archival key rotation · ~17KB sigs · not used in consensus hot path
VRF Leader Election | LQ1-VRF | Protocol | BLAKE3 PRF output + Falcon-1024 proof of knowledge · replaces ECVRF
Validator Dual-Key Architecture

Validators hold two post-quantum key pairs: a Falcon-1024 identity key for transaction signing and peer authentication, and a cPQC-Agg consensus key for block voting. cPQC-Agg produces ~48 bytes amortised per validator with O(1) verification cost — allowing the validator set to scale without growing consensus message overhead. There is no classical elliptic-curve key anywhere in the validator credential set.
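The dual-key credential set can be sketched as a small Rust fragment. The type and method names below are illustrative, not the protocol's actual API; only the role split (Falcon-1024 identity key for transactions and peer auth, cPQC-Agg consensus key for block votes) comes from the text above.

```rust
/// Falcon-1024 identity key (public key ~1,793 bytes in the real scheme).
#[derive(Clone)]
pub struct FalconIdentityKey {
    pub public: Vec<u8>,
}

/// cPQC-Agg consensus key, aggregatable into compact quorum certificates.
#[derive(Clone)]
pub struct AggConsensusKey {
    pub public: Vec<u8>,
}

pub enum SigningRole {
    Transaction,
    PeerAuth,
    ConsensusVote,
}

/// A validator holds exactly two post-quantum key pairs and no classical
/// (secp256k1 / Ed25519) key of any kind.
pub struct ValidatorCredentials {
    pub identity: FalconIdentityKey,
    pub consensus: AggConsensusKey,
}

impl ValidatorCredentials {
    /// Route each signing role to the correct key: identity key for
    /// transaction signing and peer authentication, consensus key for votes.
    pub fn key_material(&self, role: SigningRole) -> &[u8] {
        match role {
            SigningRole::Transaction | SigningRole::PeerAuth => &self.identity.public,
            SigningRole::ConsensusVote => &self.consensus.public,
        }
    }
}
```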

Execution Engine

ORCA PARALLEL EXECUTION

ORCA constructs a directed acyclic dependency graph from declared read/write state sets, then schedules independent execution paths across all available processor cores simultaneously — eliminating rollback in over 98% of transactions.

01
Read/Write Declaration

Every transaction declares the state keys it will read and write before execution. The NexaForge toolchain performs static analysis at compile time. Over-declaration is safe and recommended. Undeclared accesses trigger mandatory post-execution conflict detection and rollback.

02
Dependency DAG Construction

ORCA builds a directed acyclic graph from declared state sets. Transactions with overlapping write sets are linked with dependency edges. Transactions with no shared state dependencies are disconnected — eligible for fully parallel execution across all CPU cores.

03
Parallel Core Assignment

Disconnected DAG nodes execute concurrently. Unlike Block-STM's optimistic concurrency (Aptos/Sui), ORCA uses declared dependencies — avoiding speculative execution overhead entirely. Phase 1 simulation confirmed ~121,000 TPS on 12-core hardware.

04
Post-Execution Conflict Check

After execution, ORCA performs a mandatory conflict check comparing actual state accesses against declared sets. Any undeclared access triggers rollback and re-queue. Conflict rate: <2% in practice. Rollback prevents malicious or faulty declarations from corrupting global state.
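The four steps above can be sketched as a minimal scheduler. This is an illustrative model, not ORCA itself: transactions declare integer state keys, conflicts are any write-write or read-write overlap, and each transaction lands in the earliest wave after every earlier conflicting transaction, so independent transactions share a wave and run concurrently.

```rust
use std::collections::HashSet;

/// A transaction with its declared read/write state sets (step 01).
#[derive(Clone)]
pub struct DeclaredTx {
    pub id: usize,
    pub reads: HashSet<u64>,
    pub writes: HashSet<u64>,
}

/// Two transactions conflict if either writes a key the other reads or
/// writes (step 02's dependency edges).
fn conflicts(a: &DeclaredTx, b: &DeclaredTx) -> bool {
    a.writes.intersection(&b.writes).next().is_some()
        || a.writes.intersection(&b.reads).next().is_some()
        || a.reads.intersection(&b.writes).next().is_some()
}

/// Step 03: group transactions into parallel waves. A transaction's wave is
/// one past the latest wave of any earlier conflicting transaction, which
/// preserves block order among dependents; disconnected transactions share
/// a wave and execute concurrently across cores.
pub fn schedule_waves(txs: &[DeclaredTx]) -> Vec<Vec<usize>> {
    let mut wave_of = vec![0usize; txs.len()];
    for i in 0..txs.len() {
        for j in 0..i {
            if conflicts(&txs[i], &txs[j]) {
                wave_of[i] = wave_of[i].max(wave_of[j] + 1);
            }
        }
    }
    let n_waves = wave_of.iter().max().map_or(0, |m| m + 1);
    let mut waves = vec![Vec::new(); n_waves];
    for (i, &w) in wave_of.iter().enumerate() {
        waves[w].push(txs[i].id);
    }
    waves
}
```

Step 04's post-execution check would then compare actual accesses against the declared sets and re-queue any transaction that touched undeclared keys.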

▸ ORCA DAG Live
RUNNING
TXS Parallel
Conflict Rate
12
Cores Active
Consensus Protocol

ADAPTIVE HOTSTUFF-2 BFT

Linear O(n) communication complexity. Deterministic block finality. Tolerates up to 1/3 Byzantine validators. Post-quantum leader election via LQ1-VRF. Phase 3 live testnet: 232ms average finality on AWS multi-region infrastructure.

Byzantine Tolerance
Up to 1/3 faulty validators. Safety and liveness guaranteed below threshold.
Communication
O(n) linear complexity. cPQC-Agg quorum certs: ~48B/validator amortised.
Finality
<2s design target. 232ms avg live testnet (Phase 3, AWS, real Falcon-1024).
Leader Election
LQ1-VRF: BLAKE3 PRF output + Falcon-1024 proof. Replaces ECVRF (Shor-broken).
Block Propagation
IBLT delta: <150ms. Header: <50ms. Full sync: <500ms. QUIC transport.
Block Stages
Propose → Vote → Commit. Supermajority QC finalises each block.
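The supermajority rule behind the Propose → Vote → Commit stages can be written down directly. This sketch only models the quorum arithmetic: for n validators tolerating f = ⌊(n−1)/3⌋ Byzantine nodes, a quorum certificate needs at least n − f votes (2f + 1 when n = 3f + 1); signature verification and the round pipeline are elided.

```rust
/// Minimum votes for a quorum certificate with n validators.
pub fn quorum_threshold(n: usize) -> usize {
    let f = (n - 1) / 3; // maximum tolerated Byzantine validators
    n - f
}

#[derive(Debug, PartialEq)]
pub enum RoundOutcome {
    Pending,
    Committed,
}

/// One logical round: the block commits once the supermajority QC forms.
pub fn tally_round(n_validators: usize, votes_for_block: usize) -> RoundOutcome {
    if votes_for_block >= quorum_threshold(n_validators) {
        RoundOutcome::Committed
    } else {
        RoundOutcome::Pending
    }
}
```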
▸ HotStuff-2 Round Simulation
LIVE
LQ1-VRF Leader Election
Block Proposal
Validator Vote
cPQC-Agg Aggregation
Quorum Certificate
Block Commit
Block Finality
Measured Performance

NUMBERS THAT ARE MEASURED

All performance figures are simulation-confirmed and live-testnet validated across three completed phases. No figures are theoretical without explicit labelling.

Measured · Rust Prototype
95.1%

ORCA parallel efficiency at 5% conflict, 8 execution lanes

<2ms DAG overhead · 50k tx block
Live TCP Testnet
232ms

Avg block finality Virginia–Oregon with real Falcon-1024 signatures

HotStuff-2 · 0 safety violations / 100 rounds
Measured · Falcon-1024
10,121 ops

Verification throughput per core (Rust prototype)

1,274 bytes compressed signature
cPQC-Agg SNARK
95,704 R1CS

Constraint count for Merkle + CBDS sub-circuits. 128-byte Groth16 proof. ~3ms verify.

4.5s snarkjs · 2s native Rust projected
ZK-Verified vs Re-Execution
Metric | Re-Exec | ZK-Verified
CPU/tx | ~0.05ms | ~10–30ms
Block (50k tx) | ~2,500ms | ~150ms
Proof size | None | 40–120KB
cPQC-Agg vs LQ-SSV Comparison
Scheme | Quorum Cert | Verify Cost
cPQC-Agg | ~48B/validator | O(1)
Sequential | scales linearly | O(n)
Propagation: <150ms global (IBLT)
Network Model

NEXANET P2P INFRASTRUCTURE

Structured peer-to-peer overlay with three node categories: Validator Nodes (consensus participants), Full Nodes (independent verifiers), and Light Clients (DAS-based verification without full block download).

Q
QUIC Transport

Multiplexed streams, 0-RTT connection, no TCP head-of-line blocking. Mandatory for all validator-to-validator consensus messaging.

target: <300ms round-trip
K
Kademlia DHT

Structured peer discovery. Nodes locate peers in O(log n) time. Bootstrap nodes seed initial connections; routing is fully distributed thereafter.

routing: O(log n)
Δ
IBLT Delta Blocks

Invertible Bloom Lookup Table delta synchronisation. Nodes exchange only differences vs local mempool — not full block payloads. Sub-150ms effective propagation.

propagation: <150ms for synced validators
NexaDA

Reed-Solomon erasure coding k=32, n=128. Any 32 of 128 shares reconstruct a complete block. DAS for light clients without full download.

RS(32,128) · light client DAS
Validator Hardware

32–128 cores, 128–256 GB RAM, 10–40 Gbps network. Data-centre grade by design — deliberate tradeoff prioritising throughput. Light validator tier planned (4–8 cores, 16–32 GB RAM).

bandwidth: 10–40 Gbps
BFT Fault Model

Up to 33% malicious validators. Multi-peer connections, adaptive slot timing, redundant propagation, geographically diverse validator set. Safety and liveness guaranteed below 1/3 threshold.

BFT: 33% faulty tolerance
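NexaDA's RS(32,128) parameters above imply a simple availability rule, sketched here. The reconstruction threshold (any 32 of 128 shares) is from the text; the sampling bound is an illustrative with-replacement estimate, not a protocol figure: if a block is not reconstructible, at most 31 shares are served, so each uniform sample hits a served share with probability at most 31/128.

```rust
pub const DATA_SHARES: usize = 32; // k: shares needed to reconstruct
pub const TOTAL_SHARES: usize = 128; // n: total erasure-coded shares

/// A block is recoverable iff at least k of the n shares are available,
/// so an adversary must withhold more than 96 shares to destroy data.
pub fn reconstructible(available_shares: usize) -> bool {
    available_shares >= DATA_SHARES
}

/// Upper bound (assumption: independent samples with replacement) on the
/// probability that a light client's `samples` random share queries all
/// succeed even though the block is NOT reconstructible.
pub fn false_accept_bound(samples: u32) -> f64 {
    let p = (DATA_SHARES - 1) as f64 / TOTAL_SHARES as f64; // ≤ 31/128
    p.powi(samples as i32)
}
```

Eight samples already push the false-accept bound below one in ten thousand, which is why DAS lets light clients skip the full block download.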
Execution Layer

NEXAVM — RISC-V NATIVE

Smart contracts compile ahead-of-time into RISC-V RV64GC bytecode via LLVM, eliminating interpreted VM overhead. Hardware-native execution enables vectorised cryptographic operations and potential ASIC/FPGA acceleration.

RISC-V RV64GC ISA

Open instruction set. Contracts compile to deterministic machine-level bytecode. Full ISA enables CPU vectorisation for Falcon verification and Poseidon2 hashing.

Ahead-of-Time Compilation

Contracts compile to RISC-V bytecode before deployment. NexaForge static analysis validates read/write completeness and supports formal verification at compile time.

Execution Witness Generation

NexaVM generates execution witnesses for ZK-verified mode. Witnesses feed into Plonky3 FRI arithmetic circuit translation — enabling opt-in cryptographic proof of correct execution.

Deterministic Execution

All execution fully deterministic — required for distributed consensus. Read/write sets enforced post-execution. Any undeclared access triggers ORCA rollback and re-queue.

NexaVM · Transaction Execution Pipeline
// Transaction lifecycle in NexaVM
fn execute_transaction(tx: Transaction) -> Result<(), VmError> {
    // 1. Falcon-1024 signature verify
    verify_pq_sig(&tx.sig, &tx.sender)?;

    // 2. ORCA dependency extraction
    let deps = extract_rw_sets(&tx.payload);
    dag.insert(tx.id, deps);

    // 3. Parallel dispatch
    if dag.is_independent(tx.id) {
        spawn(core, execute_rv64gc(tx));
    }

    // 4. Poseidon2 state commit
    state.commit(poseidon2(&changes));

    // 5. ZK witness (opt-in)
    if tx.proof_type == ZK_VERIFIED {
        let witness = plonky3.generate(circuit);
    }

    Ok(())
}
Target Applications

BUILT FOR REAL DEMAND

LQ1's quantum-resistant, high-throughput architecture is purpose-built for use cases that cannot accept deferred security risk or throughput ceilings.

💳
Financial Settlement

Sub-second finality + 100k+ TPS matches global card network capacity with quantum-safe security. Real-time settlement without deferred quantum migration risk.

🤖
Verifiable AI Compute

ZK-verified execution mode enables proof-of-correct-computation for AI inference — trustless AI output markets with cryptographic auditability.

📡
DePIN Networks

High-frequency sensor data and micropayment settlements require LQ1's throughput and finality profile. Hardware-native execution supports IoT-scale transaction volumes.

🪪
Sovereign Digital Identity

Falcon-1024 / SLH-DSA credentials remain secure over decade-scale time horizons against future quantum adversaries. Long-term identity anchoring.

🌉
Cross-Chain Settlement

L1–L2 interface with mandatory ZK proof verification for bridge transactions. Auditable, secure cross-chain transfers with cryptographic guarantees.

🏛
Institutional Infrastructure

NIST-standardised cryptographic primitives satisfy regulatory requirements for financial institutions, government systems, and critical digital infrastructure.

Roadmap

FOUR PHASES — THREE COMPLETE

All performance claims are phase-validated. No figures are presented without phase attribution and measurement method.

✓ COMPLETE
Research & Design — 2025–2026

Full protocol specification (v15.5), cPQC-Agg paper (v4.5), Rust simulator with real Falcon-1024 + ORCA. 232ms live TCP testnet finality achieved.

◌ NEXT
Devnet — Q3 2026

Core protocol in Go, single-node execution, full ORCA scheduler, NexaVM runtime. Validator onboarding begins.

◌ PLANNED
Testnet — Q1 2027

Multi-node consensus, validator onboarding, stress testing. Public benchmark scripts. TGE milestone prep.

◌ PLANNED
Mainnet — Q4 2027

Production launch with 200 initial validators. TGE Q3 2027 with 2% initial circulating supply. cPQC-Agg paper peer-reviewed.

Team

LAHARA PROTOCOL FOUNDATION

Every architectural claim in LQ1 is grounded in formal specification or controlled empirical measurement. No architectural decision was made without a formal security justification or performance model.

FOUNDER
Libin Chacko
Founder — Lahara Protocol Foundation

Founder of the Lahara Protocol Foundation and the driving force behind LQ1. Directed the complete research programme and architecture of the LQ1 protocol — including the system-level design of post-quantum cryptographic integration, the ORCA parallel execution framework, the proprietary cPQC-Agg signature aggregation architecture, and the deterministic coordination model.

CORE ARCHITECT
Lahara Protocol Foundation
Research & Architecture Team · 2026

The Lahara Protocol Foundation is the research body behind LQ1. The Foundation directed all protocol specification work — cryptographic system design, ORCA scheduler architecture, NexaVM execution model, and the cPQC-Agg signature aggregation scheme. Protocol specification v12.7 (2026) is publicly available. Reference implementation published at GitHub.

ORCA Parallel Scheduler Simulation

ORCA constructs a dependency DAG from declared transaction read/write sets. Transactions with no shared state execute concurrently across all CPU cores. Adjust parameters and observe parallel execution in real time.

▸ Dependency DAG — Live Execution
Queued
Executing
Complete
Conflict/Retry
0
TPS (projected)
0
Parallel Waves
0.0%
Conflict Rate
0
Blocks Finalised
Transaction Count
12
CPU Cores
8
Conflict Probability
15%
Execution Log
Adaptive HotStuff-2 BFT Consensus Simulation

Visualisation of the HotStuff-2 BFT protocol. Validator nodes vote in rounds — Propose → Vote → Commit. Byzantine validators are marked red. Quorum certificates aggregate via cPQC-Agg. Toggle Byzantine faults and observe protocol recovery.

▸ Validator Network — Round Visualisation
▸ Round Progress
Leader Election
Block Proposal
Validator Vote
cPQC-Agg
QC Formed
Block Commit
Validator Count
12
Byzantine Validators
0
Block Finality
0
Rounds Done
Validator Economics Simulation

Model validator incentives — staking, block rewards, slashing conditions, and fee distribution. The LQ1 economic model aligns validator incentives with network security, performance, and decentralisation.

Staking Calculator
Network Parameters
Total Network Stake
10M
Block Reward Rate (APY)
8%
Estimated Annual Reward
Daily Block Rewards
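The calculator's arithmetic is a one-liner, sketched below. The stake-proportional model is from the economics table; the 8% APY is the calculator's default slider value, not a fixed protocol constant.

```rust
/// Stake-proportional annual reward at a flat APY (e.g. 0.08 for 8%).
pub fn annual_reward(stake: f64, apy: f64) -> f64 {
    stake * apy
}

/// Daily block rewards: the annual figure spread over 365 days.
pub fn daily_reward(stake: f64, apy: f64) -> f64 {
    annual_reward(stake, apy) / 365.0
}
```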
Network Economics
Block Reward Model
Stake-proportional
Fee Structure
Base + Priority
Slashing: Equivocation
100% stake
Slashing: Downtime
Progressive
Min Validator Stake
Protocol-governed
Light Validator Tier
Planned (Phase 4)
Fee Distribution Model
Block Proposer
Priority fees + base
Validator Pool
Stake-weighted share
Protocol Treasury
Governance-controlled
Base Fee Adjustment
Auto per block
Slashing Conditions
Equivocation

Signing two conflicting blocks at the same height. Enforced deterministically. Full stake slash. No appeals.

Extended Downtime

Progressive slash rate for extended validator absence. Designed to incentivise uptime without catastrophic penalty for temporary failures.

Invalid Signature

Submitting blocks with invalid Falcon-1024 or cPQC-Agg signatures. Immediate detection via cryptographic verification. Stake at risk.
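The three conditions above map naturally onto a slashing schedule. The 100% equivocation slash is stated in the text; the invalid-signature penalty and the exact progressive downtime curve are not specified, so the numbers used for them here are assumptions for illustration only.

```rust
#[derive(Debug, PartialEq)]
pub enum Offence {
    /// Two conflicting blocks signed at the same height. Full slash, no appeals.
    Equivocation,
    /// Block carrying an invalid Falcon-1024 or cPQC-Agg signature.
    InvalidSignature,
    /// Extended absence, measured here in missed epochs.
    Downtime { missed_epochs: u32 },
}

/// Fraction of stake slashed, in [0.0, 1.0].
pub fn slash_fraction(offence: &Offence) -> f64 {
    match offence {
        Offence::Equivocation => 1.0, // per the table: 100% stake
        Offence::InvalidSignature => 0.05, // illustrative "stake at risk" penalty
        // Progressive rate growing with absence, capped well below total
        // loss so temporary failures are not catastrophic (curve assumed).
        Offence::Downtime { missed_epochs } => (0.001 * *missed_epochs as f64).min(0.10),
    }
}
```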

Emergency Quantum Switch Protocol

The Emergency Quantum Switch (EQS) is LQ1's cryptographic agility protocol — enabling live migration between post-quantum primitive sets without halting the network. Simulate a triggered EQS event and observe the protocol's response.

▸ Network Migration Visualisation
EQS Protocol Steps
01
Threat Detection

Governance oracle or cryptographic break disclosure triggers EQS flag. Propagated via emergency broadcast to all validators within 3 rounds.

02
Dual-Mode Transition

Network enters dual-signature mode — both current and successor primitive sets accepted simultaneously. No downtime. Blocks continue finalising.

03
Key Rotation Window

Validators rotate to new key pairs within governance-defined window. Key rotation is on-chain verifiable. Non-rotating validators face progressive exclusion.

04
State Migration

All state commitments re-anchored under new cryptographic parameters. ZK proof system updated to successor hash function. Full backward compatibility maintained.

05
Cutover Complete

Legacy primitive support deprecated. Network operates exclusively on successor cryptographic stack. Migration logged on-chain with full auditability.
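The acceptance rule that makes step 02's zero-downtime transition work can be sketched as a three-state check. The phase names follow the steps above; the boolean verification inputs stand in for real signature checks under the current and successor primitive sets.

```rust
#[derive(Clone, Copy, PartialEq)]
pub enum MigrationPhase {
    Normal,          // pre-EQS: current primitive set only
    DualMode,        // step 02: both sets accepted, blocks keep finalising
    CutoverComplete, // step 05: legacy support deprecated
}

/// Whether a signature is accepted in the given migration phase.
pub fn signature_accepted(
    phase: MigrationPhase,
    valid_under_current: bool,
    valid_under_successor: bool,
) -> bool {
    match phase {
        MigrationPhase::Normal => valid_under_current,
        // Dual-signature mode: either primitive set suffices, so the network
        // never halts while validators rotate keys (step 03).
        MigrationPhase::DualMode => valid_under_current || valid_under_successor,
        MigrationPhase::CutoverComplete => valid_under_successor,
    }
}
```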
