Vantage Framework
Vantage is a data entity persistence and abstraction framework for Rust.
Rather than being a traditional ORM, Vantage introduces the concept of a DataSet — an abstract, composable handle to records living in a remote data store. You define structure, conditions, relations, and operations without loading data eagerly, and Vantage translates your intent into efficient queries for whichever backend you’re using.
This documentation covers Vantage 0.4.
Getting Started
Vantage covers a lot of ground — multiple databases, type systems, entity frameworks, UI adapters — but none of that matters until you’ve seen it do something useful.
This guide introduces Vantage concepts one at a time, each building on the last. We’ll start with something you already know — SQL — and work our way up to the bigger abstractions. Along the way we’ll build a small CLI tool that grows with each chapter.
You’ll need basic Rust experience (structs, traits, async/await, cargo). No prior Vantage knowledge required.
Start here: SQLite and the Query Builder
TODO: Concept Coverage
- Expression basics (sqlite_expr!, parameter binding, injection safety)
- Identifier quoting (ident(), dialect-aware)
- SqliteSelect via Selectable trait (with_source, with_field, with_condition, with_order, with_limit) — partial, missing with_order, with_limit
- Executing queries (ExprDataSource, db.execute()) — db.associate() not covered
- Aggregates on select (as_count, as_sum) — covered via db.aggregate(), not .as_count() directly
- Table definition (Table::new, with_column_of, with_id_column, builder pattern)
- Table → select query (table.select() returns vendor-specific builder)
- DataSource concept (what it is, how you pass it to Table::new)
- Entity struct (plain Rust struct, no id field, #[entity] macro)
- Record<V> (persistence-native value bags)
- Type system (vantage_type_system! macro, AnySqliteType, typed vs untyped values) — AnySqliteType mentioned, macro not shown
- ReadableDataSet (list, get, get_some, get_count)
- WritableDataSet (insert, delete)
- ActiveEntity (get_entity, modify via DerefMut, save)
- Conditions (table["field"], .eq/.gt/.lt, with_condition returns new table)
- Sync vs async — defining a table is sync, hitting the DB is async
- Relationships (with_one, with_many, declared on table not entity)
- Reference traversal (get_ref_as, subquery-based conditions)
- Computed fields (with_expression, correlated subqueries)
- AnyTable (from_table, JSON boundary, type-erased generic code)
- Model crate pattern (entities + table constructors + relationships in one crate)
- Multiple backends per entity (surreal_table, sqlite_table, csv_table)
- DataSet trait hierarchy (ReadableDataSet, InsertableDataSet, WritableDataSet) — partial, ReadableDataSet only
- ValueSet traits (schema-less Record-based access)
- Vendor-specific query builders (SqliteSelect, PostgresSelect, SurrealSelect)
- Deferred expressions (cross-database defer/map)
- Progressive persistence model (implement only what your backend supports)
- Error handling (VantageError, context, with_context) — partial (VantageError + context shown)
- Connection management (DSN pattern, OnceLock)
- Pagination (with_pagination)
- Search expressions (search_expression, vendor-specific LIKE/CONTAINS) — via with_search
- models! macro (all_tables() generation)
- Extension traits (trait CategoryTable, ref_products, adding custom methods like print)
SQLite and the Query Builder
Vantage is a big framework. It covers SQL databases, SurrealDB, MongoDB, CSV files, REST APIs — and ties them all together with a shared type system, expression engine, and data abstraction layer.
We’ll get to all of that. But right now, let’s start with something familiar: building SQL queries.
A query builder is a tool that assembles SQL from composable parts instead of string concatenation. You’ve probably seen this pattern before:
- Knex.js (JavaScript) — knex('users').where('age', '>', 18).select('name')
- SQLAlchemy Core (Python) — select(users.c.name).where(users.c.age > 18)
- jOOQ (Java) — dsl.select(USERS.NAME).from(USERS).where(USERS.AGE.gt(18))
- Diesel (Rust) — users.filter(age.gt(18)).select(name)
Vantage has its own query builder too. Each supported database gets a dedicated builder —
SqliteSelect, PostgresSelect, SurrealSelect — so you get the right quoting, parameter
binding, and dialect features for your target.
For this chapter we’ll use SQLite. It’s lightweight, needs no server, and works with a plain file on disk.
By the end of this page you’ll be able to:
- Connect to an SQLite database from Rust
- Build SELECT queries with fields and conditions
- Execute queries and read results
- Convert results into Vec<Record> with typed field access
- Run aggregates (COUNT, SUM) with one method call
- Understand how Vantage keeps parameters separate from SQL (no injection risk)
Set up
Create a new project:
cargo init learn-1 && cd learn-1
cargo add vantage-sql --features sqlite
cargo add vantage-expressions
cargo add tokio --features full
Three dependencies — vantage-sql gives us the SQLite query builder and connection pool,
vantage-expressions is needed by the sqlite_expr! macro, and tokio provides the async
runtime because all database operations are async.
Create and populate a database
We’ll make a small product catalog from scratch. Create seed.sql in your project root:
CREATE TABLE product (
id INTEGER PRIMARY KEY,
name TEXT NOT NULL,
price INTEGER NOT NULL,
category_id INTEGER,
is_deleted BOOLEAN NOT NULL DEFAULT 0
);
INSERT INTO product VALUES (1, 'Cupcake', 120, 1, 0);
INSERT INTO product VALUES (2, 'Doughnut', 135, 1, 0);
INSERT INTO product VALUES (3, 'Tart', 220, 2, 0);
INSERT INTO product VALUES (4, 'Pie', 299, 2, 0);
INSERT INTO product VALUES (5, 'Cookies', 199, 1, 0);
INSERT INTO product VALUES (6, 'Discontinued Cake', 80, 1, 1);
INSERT INTO product VALUES (7, 'Sourdough Loaf', 350, 3, 0);
CREATE TABLE category (
id INTEGER PRIMARY KEY,
name TEXT NOT NULL
);
INSERT INTO category VALUES (1, 'Sweet Treats');
INSERT INTO category VALUES (2, 'Pastries');
INSERT INTO category VALUES (3, 'Breads');
Run it:
sqlite3 products.db < seed.sql
You now have products.db — 7 products (6 active, 1 deleted) across 3 categories. Quick check:
sqlite3 products.db "SELECT name, price FROM product WHERE is_deleted = 0"
Start with an async main
All database operations in Vantage are async, so we need a Tokio runtime. Replace src/main.rs
with:
use vantage_sql::prelude::*;
#[tokio::main]
async fn main() {
if let Err(e) = run().await {
e.report();
}
}
async fn run() -> VantageResult<()> {
println!("Ready!");
Ok(())
}
A few things going on here:
A few things going on here:
- use vantage_sql::prelude::* brings in everything we need for this chapter — SqliteDB, SqliteSelect, the sqlite_expr! macro, error types, and the traits that make builder and execution methods work.
- VantageResult<()> is Vantage’s own Result type. It uses VantageError, which tracks context and error chains for readable diagnostics.
- e.report() prints the error in a structured format. We call it from main() because Rust’s default Result-returning main uses Debug formatting, which is ugly. This pattern gives us clean error output instead.
Run cargo run to make sure it compiles.
Connect to SQLite
Add this inside run():
#![allow(unused)]
fn main() {
let db = SqliteDB::connect("sqlite:products.db?mode=ro")
.await
.context("Failed to connect to products.db")?;
}
SqliteDB wraps an sqlx connection pool. The connection string is an
sqlx URL — ?mode=ro
opens read-only, which is all we need for now.
If you’re adding Vantage to an existing project that already has a SqlitePool, wrap it directly:
#![allow(unused)]
fn main() {
let db = SqliteDB::new(existing_pool);
}
The reverse works too — db.pool() gives you the underlying SqlitePool, though once you’re using
Vantage expressions you’ll rarely need to execute queries on it directly.
.context() wraps any error with a human-readable message. If the database file doesn’t exist,
instead of a raw sqlx error you get:
Error: Failed to connect to products.db
│
╰─▶ error returned from database: (code: 14) unable to open database file
You’ll see .context() used throughout Vantage code. It comes from VantageError and works on
any Result with a standard error type.
Build a SELECT
SqliteSelect is the query builder for SQLite.
Other persistences have their own —
PostgresSelect,
MongoSelect — and they all implement the
Selectable trait, so the interface is identical apart from
vendor-specific extensions. None of them need a database connection — they’re just structs that
accumulate query parts. You build them with a chain of .with_*() calls:
#![allow(unused)]
fn main() {
let select = SqliteSelect::new()
.with_source("product")
.with_field("name")
.with_field("price");
println!("{}", select.preview());
// SELECT "name", "price" FROM "product"
}
.preview() renders the final SQL as a string — handy for debugging, but never used for execution.
.with_*() consumes the builder and returns a new one. Call .with_field() as many times as you
need; skip it entirely for SELECT *.
Every .with_*() method has a corresponding .add_*() that mutates in place instead of
consuming. Use whichever fits your code:
#![allow(unused)]
fn main() {
// Builder style
let select = SqliteSelect::new()
.with_source("product")
.with_field("name");
// Mutable style
let mut select = SqliteSelect::new();
select.add_source("product", None);
select.add_field("name");
}
Same result. The .with_*() style is nicer for one-shot construction, .add_*() is useful when
you’re building a query conditionally in a loop.
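To make the two styles concrete, here is a toy, std-only builder (a sketch, not Vantage's actual SqliteSelect) showing why .with_*() methods consume self while .add_*() methods borrow mutably:

```rust
// A toy query builder illustrating the two construction styles.
// This is an illustrative sketch, not Vantage's real SqliteSelect.
#[derive(Default)]
struct ToySelect {
    source: String,
    fields: Vec<String>,
}

impl ToySelect {
    fn new() -> Self {
        Self::default()
    }

    // Consuming style: takes ownership, returns the modified builder.
    fn with_source(mut self, s: &str) -> Self {
        self.source = format!("\"{}\"", s);
        self
    }
    fn with_field(mut self, f: &str) -> Self {
        self.fields.push(format!("\"{}\"", f));
        self
    }

    // Mutating style: borrows mutably, good for loops and conditionals.
    fn add_field(&mut self, f: &str) {
        self.fields.push(format!("\"{}\"", f));
    }

    fn preview(&self) -> String {
        let cols = if self.fields.is_empty() {
            "*".to_string()
        } else {
            self.fields.join(", ")
        };
        format!("SELECT {} FROM {}", cols, self.source)
    }
}

fn main() {
    // Builder style: one-shot chain.
    let q = ToySelect::new().with_source("product").with_field("name");
    println!("{}", q.preview()); // SELECT "name" FROM "product"

    // Mutable style: add fields in a loop.
    let mut q2 = ToySelect::new().with_source("product");
    for f in ["name", "price"] {
        q2.add_field(f);
    }
    println!("{}", q2.preview()); // SELECT "name", "price" FROM "product"
}
```

Because with_field takes self by value, a half-built query can't be accidentally reused; because add_field takes &mut self, it slots naturally into loops and if branches.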
Execute it
#![allow(unused)]
fn main() {
let result = db.execute(&select.expr()).await?;
println!("{:?}", result);
}
Two steps here: .expr() turns the builder into an Expression
— Vantage’s internal representation that keeps parameters separate from the SQL template. Then
db.execute() sends it to the database.
The result is AnySqliteType — a type-tagged wrapper around whatever came back. The Debug output
isn’t pretty, but you should see all 7 product rows in there.
The database is only hit on .await. Everything before that — with_source, with_field, with_condition — is
synchronous struct manipulation. You always know when a database call happens because you typed
.await.
Adding conditions
Our database has a soft-delete flag — “Discontinued Cake” has is_deleted = 1. Let’s filter it out:
#![allow(unused)]
fn main() {
let condition = sqlite_expr!("\"is_deleted\" = {}", false);
let select = SqliteSelect::new()
.with_source("product")
.with_field("name")
.with_field("price")
.with_condition(condition);
println!("{}", select.preview());
// SELECT "name", "price" FROM "product" WHERE "is_deleted" = 0
}
sqlite_expr! creates an Expression — a SQL template with
typed, bound parameters. That {} is not Rust’s format!: the value false is stored
separately and bound through sqlx’s parameterized query interface at execution time — no injection
risk, ever. The preview shows it inline for readability.
In SQL persistences, a condition is just an expression. When you pass it to .with_condition(),
it’s nested inside the select’s own expression tree. At execution time the whole tree is
flattened into a single template + parameter list that
the database driver can bind safely.
Run it — you should see 6 rows, with “Discontinued Cake” filtered out.
Notice that you passed false but the preview shows 0. SQLite has no native boolean — Vantage’s
SqliteType implementation for bool converts it to an integer automatically. PostgreSQL would
render FALSE instead. Each persistence maps Rust types to the correct native representation.
The {} parameter accepts any type that implements SqliteType: bool, i64, f64, String,
chrono::NaiveDate, Option<T>, and more. You can implement SqliteType for your own types too.
See Persistence-aligned Type System for details.
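The template-plus-parameters idea can be modeled in a few lines of plain Rust. This is an illustrative simplification of how an expression tree might flatten, not Vantage's real Expression type:

```rust
// A simplified model of an expression: SQL fragments, bound parameters,
// and nested sub-expressions. Flattening produces one template with
// `?` placeholders plus an ordered parameter list the driver can bind
// safely. Sketch only — not Vantage's actual internals.
enum Part {
    Sql(String),
    Param(String),
    Nested(Expr),
}

struct Expr {
    parts: Vec<Part>,
}

impl Expr {
    // Walk the tree: splice nested templates in, append their parameters.
    fn flatten(&self) -> (String, Vec<String>) {
        let mut sql = String::new();
        let mut params = Vec::new();
        for p in &self.parts {
            match p {
                Part::Sql(s) => sql.push_str(s),
                Part::Param(v) => {
                    sql.push('?');
                    params.push(v.clone());
                }
                Part::Nested(e) => {
                    let (s, ps) = e.flatten();
                    sql.push_str(&s);
                    params.extend(ps);
                }
            }
        }
        (sql, params)
    }
}

fn main() {
    // The condition is itself an expression, nested inside the select.
    let condition = Expr {
        parts: vec![
            Part::Sql("\"is_deleted\" = ".into()),
            Part::Param("0".into()),
        ],
    };
    let select = Expr {
        parts: vec![
            Part::Sql("SELECT \"name\" FROM \"product\" WHERE ".into()),
            Part::Nested(condition),
        ],
    };
    let (sql, params) = select.flatten();
    println!("{sql} -- params: {params:?}");
    // SELECT "name" FROM "product" WHERE "is_deleted" = ? -- params: ["0"]
}
```

The key property: the parameter values never touch the SQL string, so nothing a user types can change the query's structure.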
Typed columns and operators
Writing \"is_deleted\" in a raw expression works, but there’s a cleaner way.
Column<T> creates a typed column reference, then chain an
SqliteOperation like .eq() to build the
condition:
#![allow(unused)]
fn main() {
let is_deleted = Column::<bool>::new("is_deleted");
let condition = is_deleted.eq(false);
}
Same result, but the type parameter <bool> ensures you can only compare against matching types.
Try is_deleted.eq(42) — it won’t compile. Other operators — .gt(), .lt(), .ne(), .in_() —
enforce the same type safety.
The result is a SqliteCondition — the backend’s native condition type — ready to be passed
directly to .with_condition().
Each SQL backend has its own operation trait — SqliteOperation, PostgresOperation,
MysqlOperation — imported automatically via the prelude. These traits are
blanket-implemented for any Expressive<T> where T: Into<AnySqliteType>, so typed columns
(Column<i64>, Column<bool>, etc.) all get .eq(), .gt(), and friends for free.
The operation produces a SqliteCondition that wraps Expression<AnySqliteType>. Since the
condition type itself implements Expressive<AnySqliteType>, you can chain operations
across type boundaries:
#![allow(unused)]
fn main() {
let price = Column::<i64>::new("price");
price.gt(10).eq(false) // => (price > 10) = 0
}
Here .gt(10) returns a SqliteCondition, and .eq(false) works on it because bool is
Expressive<AnySqliteType>. Try price.gt(10).eq("foobar") — surprisingly, this compiles
too. That’s by design: type safety is enforced on the first operation (the column level),
but once you have a SqliteCondition, any AnySqliteType-compatible value is accepted.
You can also compare columns of the same type: price.eq(price.clone()) compiles. But
price.eq(is_deleted) won’t — Column<bool> isn’t Expressive<i64>.
The .clone() is needed because operations take ownership of their arguments — values are
stored inside the Expression tree until the query is executed. If you plan to reuse a
column in multiple conditions, clone it or create a fresh Column::new().
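The compile-time check described above is a standard phantom-type technique. Here is a minimal, hypothetical version (not Vantage's real Column) showing how a type parameter restricts which values .eq() accepts:

```rust
use std::marker::PhantomData;

// A sketch of a typed column: the phantom parameter T carries the
// column's Rust type without storing a value of it. Illustrative
// only — not Vantage's actual Column implementation.
struct Col<T> {
    name: String,
    _ty: PhantomData<T>,
}

impl<T> Col<T> {
    fn new(name: &str) -> Self {
        Col { name: name.into(), _ty: PhantomData }
    }
}

// Stand-in for a persistence type system: render a Rust value as
// a native SQLite literal (bool becomes 0/1, as SQLite has no bool).
trait Lit {
    fn render(&self) -> String;
}
impl Lit for bool {
    fn render(&self) -> String {
        (if *self { 1 } else { 0 }).to_string()
    }
}
impl Lit for i64 {
    fn render(&self) -> String {
        self.to_string()
    }
}

impl<T: Lit> Col<T> {
    // eq only accepts values of the column's own type T.
    fn eq(&self, v: T) -> String {
        format!("\"{}\" = {}", self.name, v.render())
    }
}

fn main() {
    let is_deleted = Col::<bool>::new("is_deleted");
    println!("{}", is_deleted.eq(false)); // "is_deleted" = 0
    // is_deleted.eq(42);  // does not compile: expected `bool`, found integer
}
```

Because PhantomData<T> is zero-sized, the type check costs nothing at runtime; the mismatch is rejected entirely at compile time.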
Multiple conditions combine with AND:
#![allow(unused)]
fn main() {
let is_deleted = Column::<bool>::new("is_deleted");
let price = Column::<i64>::new("price");
let select = SqliteSelect::new()
.with_source("product")
.with_field("name")
.with_condition(is_deleted.eq(false))
.with_condition(price.gt(150));
// ... WHERE "is_deleted" = 0 AND "price" > 150
}
Column objects aren’t the only way to name a field in a condition. sqlite_ident() is one of
several primitives — reusable building blocks for SQL expressions that handle quoting, escaping,
and vendor-specific syntax. To use them:
#![allow(unused)]
fn main() {
use vantage_sql::sqlite::sqlite_ident as ident;
let condition = ident("is_deleted").eq(false);
}
Each backend has its own typed identifier: sqlite_ident(), pg_ident(), mysql_ident().
These return a backend-pinned wrapper so .eq(), .gt(), etc. work without ambiguity.
There is also a generic ident() that works when the backend type can be inferred from
context — for example, inside sqlite_expr!() or when passed to a method that expects
a specific Expressive<AnySqliteType>. Use the typed variant when calling operations
directly.
Primitives are not part of the prelude — import them when needed. Besides identifiers, you get
Fx (function calls), Case, Concat, Interval, and more. See the
Primitives reference for the full list.
In Vantage, primitives, query builders, Column, conditions, and even native types
like i64 and bool all implement Expressive<T>, where T is your database’s AnyType
(AnySqliteType for SQLite, AnyMongoType for MongoDB) — making them
eligible to be parameters of expressions.
Working with Any-types
Record<V> is an ordered map (IndexMap<String, V>) — one record per row,
with column names as keys. It lives in the vantage-types crate, so add that dependency and import
its prelude:
cargo add vantage-types --features serde
#![allow(unused)]
fn main() {
use vantage_types::prelude::*;
}
So far we’ve been printing raw AnySqliteType with Debug. That works for verifying queries, but
it’s useless for real work. When db.execute() returns a multi-row result, the
AnySqliteType holds an array of maps internally. Convert it
to Vec<Record<AnySqliteType>> to work with individual rows:
#![allow(unused)]
fn main() {
let raw = db.execute(&select.expr()).await?;
let records = Vec::<Record<AnySqliteType>>::try_from(raw)
.context("Failed to convert to records")?;
for rec in &records {
let name: String = rec["name"].try_get::<String>().unwrap();
let price: i64 = rec["price"].try_get::<i64>().unwrap();
println!("{} — {} cents", name, price);
}
// Cupcake — 120 cents
// Doughnut — 135 cents
// ...
}
Access fields by column name with rec["name"]. Each value is still an AnySqliteType, so you call
.try_get::<T>() to extract a typed Rust value. If the type doesn’t match (say you call
.try_get::<i64>() on a text column), you get None — no panics, no garbage.
If you need JSON-friendly records, call .into_record() on each Record<AnySqliteType>:
#![allow(unused)]
fn main() {
let json_rec: Record<serde_json::Value> = rec.into_record();
}
This is convenient for serialization, but serde_json::Value supports a narrower set of
types — you’ll lose precision on Decimal and chrono types (dates become strings,
decimals become floats). Stick with Record<AnySqliteType> when you need full type fidelity.
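The float-precision caveat is easy to demonstrate with plain Rust: a decimal value routed through an f64 (which is how float-backed JSON numbers behave) no longer compares equal to what you'd expect.

```rust
// Demonstrates why float-backed JSON numbers lose decimal precision:
// 0.1 and 0.2 have no exact binary representation, so their sum is
// not the exact decimal 0.3. A true Decimal type would keep it exact.
fn main() {
    let as_float: f64 = 0.1 + 0.2;
    println!("{:.17}", as_float); // 0.30000000000000004
    assert!(as_float != 0.3);     // the classic float surprise
}
```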
Under the hood, each persistence has its own type system for storing values. SQLite uses
CBOR — a compact binary format that preserves types like Decimal, NaiveDate,
and NaiveDateTime through tagged values. MongoDB uses BSON natively. The
AnySqliteType / AnyMongoType wrappers hide these details — you interact with .try_get::<T>()
regardless of which persistence you’re using. If you ever need to inspect the raw representation,
.value() gives you the underlying CBOR value:
#![allow(unused)]
fn main() {
let price_cbor = rec["price"].value();
println!("{:?}", price_cbor);
// Integer(Integer(120))
}
See Persistence-aligned Type System for the full picture.
Mapping rows to structs
Calling .try_get::<T>() on every field gets tedious. The #[entity] macro generates
TryFromRecord<AnySqliteType> for your struct, so conversion
happens in one call with no type information lost:
#![allow(unused)]
fn main() {
#[entity(SqliteType)]
struct Product {
name: String,
price: i64,
}
let raw = db.execute(&select.expr()).await?;
let records = Vec::<Record<AnySqliteType>>::try_from(raw)?;
for rec in records {
let product = Product::from_record(rec)?;
println!("{} — {} cents", product.name, product.price);
}
}
The macro needs vantage-core as a direct dependency (it’s already a transitive dep through
vantage-sql, but the generated code references it in your crate):
cargo add vantage-core
The macro also supports multiple type systems in one attribute —
#[entity(SqliteType, PostgresType, MongoType)] generates a separate TryFromRecord impl for each
persistence. One struct, all backends.
#[entity] converts each field directly through the persistence’s type system — your struct
fields just need to implement SqliteType. Alternatively, you can convert a
Record<AnySqliteType> into Record<serde_json::Value> and use serde, but this may lose
type information (e.g. Decimal precision, date types):
#![allow(unused)]
fn main() {
#[derive(serde::Deserialize)]
struct Product {
name: String,
price: i64,
}
for rec in records {
let json_rec: Record<serde_json::Value> = rec.into_record();
let product = Product::from_record(json_rec)?;
}
}
For simple types like String and i64 it’s fine, but Decimal values lose precision and
dates become strings. Prefer #[entity] when your schema includes those types.
Putting it together
Here’s the complete src/main.rs — connect, query, convert to entities, print:
use vantage_sql::prelude::*;
use vantage_types::prelude::*;
#[entity(SqliteType)]
struct Product {
name: String,
price: i64,
}
#[tokio::main]
async fn main() {
if let Err(e) = run().await {
e.report();
}
}
async fn run() -> VantageResult<()> {
let db = SqliteDB::connect("sqlite:products.db?mode=ro")
.await
.context("Failed to connect to products.db")?;
let select = SqliteSelect::new()
.with_source("product")
.with_field("name")
.with_field("price")
.with_condition(Column::<bool>::new("is_deleted").eq(false));
let raw = db.execute(&select.expr()).await?;
let records = Vec::<Record<AnySqliteType>>::try_from(raw)?;
for rec in records {
let p = Product::from_record(rec)?;
println!("{:<12} {:>3} cents", p.name, p.price);
}
Ok(())
}
cargo run
# Cupcake      120 cents
# Doughnut     135 cents
# Tart         220 cents
# Pie          299 cents
# Cookies      199 cents
# Sourdough Loaf 350 cents
What we covered
| Concept | What it does | More info |
|---|---|---|
| SqliteSelect | Builds SELECT queries via builder pattern | Selectable |
| Column | Typed column reference — enforces matching operand types | |
| SqliteOperation | Ext trait giving .eq(), .gt(), etc. → SqliteCondition | |
| sqlite_expr! | Creates expressions with typed, bound parameters | Expression |
| db.execute() | Runs an expression, returns AnySqliteType | ExprDataSource |
| Record<V> | Ordered map of column names to values — row-level access | .try_get::<T>() |
| #[entity(SqliteType)] | Generates lossless record-to-struct conversion | TryFromRecord |
Tables and Typed Data Access
A table is a structure representing a collection of records in a database:
#![allow(unused)]
fn main() {
let products = Product::table(db);
for (id, product) in products.list().await? {
println!("{} — {} cents", product.name, product.price);
}
}
Think of Table<SqliteDB, Product> as a Vec<Product> — except the records aren’t in memory. They
live somewhere else (a database, a file, an API), and you don’t know how many there are or what they
contain until you list them.
If you’ve used an ORM before, you might expect Vantage to work with individual records — load one, change it, save it back. Vantage works differently: you always operate on a set of records. A set might contain one record, no records, or millions — you don’t pull them into memory to find out.
Operations like counting, filtering, and updating happen on the database side — Vantage builds the right query and sends it over. Even traversal operations and sub-queries are done by building queries and executing remotely — if the database supports it, of course.
When you .clone() a table, you don’t clone the data — you clone the definition. From there you
can narrow it down by adding conditions, turning it into a subset. Some examples of data sets you
might work with:
- all user records except those with a soft-delete flag
- orders placed today
- paid customers
- orders of paid customers
- products that sold at least 5 items today
Each of these is a set — defined by conditions, not a list of IDs. Another typical operation is expressing one set as a condition of another and then performing an action:
- notify customers who have an unpaid invoice older than 30 days
- archive orders whose products have been discontinued
- apply a discount to customers who spent more than $500 this month
- list warehouses that stock at least one out-of-stock product
- flag reviews written by users who have been banned
Put these together and even complex operations are easy to express in code:
#![allow(unused)]
fn main() {
// "notify customers who have an unpaid invoice older than 30 days"
let mut overdue = Invoice::table(db);
overdue.add_condition(overdue["status"].eq("unpaid"));
overdue.add_condition(overdue["issued_at"].lt(days_ago(30)));
overdue.ref_customer().send_reminder().await;
}
#![allow(unused)]
fn main() {
// "apply a discount to customers who spent more than $500 this month"
let mut customers = Customer::table(db);
customers.add_expression("spent_this_month", |c| {
c.subquery_orders().only_this_month().field_sum("total")
});
customers.add_expression("discount", |c| {
primitives::ternary(c["spent_this_month"].gt(500), 10, 0)
});
}
Don’t worry about the exact syntax — we’ll get to all of it. The point is that sets compose naturally: define one, use it to narrow another, then act on the result.
Defining a Table
The syntax for defining a table looks similar to query building from chapter 1, but there are important differences:
| | Query | Table |
|---|---|---|
|---|---|---|
| Lifetime | Built, executed, dropped | Sticks around, spawns many queries |
| Operations | One SQL statement (SELECT, INSERT, …) | Higher-level CRUD: list, get, add, patch, delete |
| Columns | String field names via .with_field() | Typed Column<T> definitions |
| Database | Not bound — just a struct | Holds a database reference (Arc) |
| Idempotency | INSERT fails on duplicate key | replace() and delete() are idempotent |
| Data sources | Only databases with a query language | Any data source: SQL, CSV, APIs, queues, etc. |
A Table is typically defined in its own file alongside the entity.
Create src/product.rs:
#![allow(unused)]
fn main() {
// src/product.rs
use vantage_sql::prelude::*;
use vantage_types::prelude::*;
#[entity(SqliteType)]
#[derive(Clone, Default)]
pub struct Product {
pub name: String,
pub price: i64,
}
impl Product {
pub fn table(db: SqliteDB) -> Table<SqliteDB, Product> {
let is_deleted = Column::<bool>::new("is_deleted");
Table::new("product", db)
.with_id_column("id")
.with_column_of::<String>("name")
.with_column_of::<i64>("price")
.with_column_of::<bool>("is_deleted")
.with_condition(is_deleted.eq(false))
}
}
}
In most cases you already know which database a table lives in — passing db every time
is noise. A common pattern is to wrap the connection in a global accessor:
#![allow(unused)]
fn main() {
impl Product {
fn table() -> Table<SqliteDB, Product> {
Table::new("product", get_sqlite_db())
// ...columns...
}
}
}
We’ll keep passing db explicitly in this tutorial for clarity, but real applications
typically use this pattern.
This defines the product table once. You can now generate queries from it:
#![allow(unused)]
fn main() {
let table = Product::table(db);
// Full SELECT with all columns and conditions applied
let select = table.select();
println!("{}", select.preview());
// SELECT "id", "name", "price", "is_deleted" FROM "product"
// Count query (does not execute — just builds the expression)
let count_query = table.get_count_query();
// SELECT COUNT(*) FROM "product"
// Sum query for a specific column
let sum_query = table.get_sum_query(&table["price"]);
// SELECT SUM("price") FROM "product"
}
These return the same SqliteSelect and Expression types from chapter 1 — the table just
assembles them for you. But most of the time you won’t need the raw query at all.
CRUD operations
With the table defined, all four operations — create, read, update, delete — are one-liners. Each comes from a trait in the prelude:
#![allow(unused)]
fn main() {
let table = Product::table(db);
// Read — list all, get one by ID (our seed data uses integer IDs)
let all = table.list().await?; // IndexMap<Id, Product>
let pie = table.get(&4).await?; // Option<Product> — id 4 is "Pie"
// Create — insert with a known ID
let muffin = Product { name: "Muffin".into(), price: 175 };
table.insert(&8, &muffin).await?;
// Update — replace the entire record
let updated = Product { name: "Blueberry Muffin".into(), price: 195 };
table.replace(&8, &updated).await?;
// Delete
table.delete(&8).await?;
}
That’s it. list() returns an IndexMap<Id, Product> — ordered and keyed by ID. get() returns
Option<Product> — None when the ID doesn’t exist, so you can pattern-match or .ok_or(...) into
your own not-found error. There’s also get_some() which returns an (Id, Product) pair (still
Option-wrapped) for sampling, and insert_return_id() for when you want the database to generate
the ID.
Try duplicating the replace() and delete() calls — the result is the same. Replacing
a record that already has the new values is a no-op. Deleting a record that’s already gone
succeeds silently. This makes table operations safe to retry without worrying about
side effects.
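The retry-safety argument can be sketched with an in-memory map standing in for the data store (illustrative only, not Vantage's implementation):

```rust
use std::collections::HashMap;

// A sketch of why replace() and delete() are idempotent: both describe
// a desired end state, not a transition, so repeating them changes
// nothing. The HashMap stands in for the remote data store.
struct Store {
    rows: HashMap<i64, String>,
}

impl Store {
    // replace: set the value unconditionally — same end state on retry.
    fn replace(&mut self, id: i64, value: &str) {
        self.rows.insert(id, value.to_string());
    }
    // delete: succeeds whether or not the row exists.
    fn delete(&mut self, id: i64) {
        self.rows.remove(&id);
    }
}

fn main() {
    let mut store = Store { rows: HashMap::new() };
    store.replace(8, "Muffin");
    store.replace(8, "Muffin"); // retry: no error, same state
    store.delete(8);
    store.delete(8); // retry: already gone, still succeeds
    assert!(store.rows.is_empty());
    println!("retries are safe");
}
```

Contrast with insert, which describes a transition ("a row with this ID must not exist yet") and therefore fails on retry with a duplicate-key error.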
Type-erased tables
Table<SqliteDB, Product> is great when you know the types at compile time. But what if you want to
write a function that lists any table — products, orders, customers — without knowing the entity
type?
AnyTable wraps a concrete table and erases its type parameters.
Values come back as Record<serde_json::Value> instead of typed entities. You can iterate columns
by name and build a generic display:
#![allow(unused)]
fn main() {
async fn list_table(table: &AnyTable) -> VantageResult<()> {
let columns = table.column_names();
// Header
print!(" {:<12}", "id");
for col in &columns {
print!("{:<16}", col);
}
println!();
// Rows
for (id, record) in table.list_values().await? {
print!(" {:<12}", id);
for col in &columns {
let val = record.get(col).map(|v| format!("{}", v)).unwrap_or_default();
print!("{:<16}", val);
}
println!();
}
Ok(())
}
}
Convert any typed table into an AnyTable with from_table():
#![allow(unused)]
fn main() {
let table = Product::table(db);
let any = AnyTable::from_table(table);
list_table(&any).await?;
// id name price
// 1 "Cupcake" 120
// 2 "Doughnut" 135
// ...
}
Under the hood, AnyTable converts values to and from serde_json::Value on the fly. Conditions,
pagination, and all CRUD operations still work — you just lose compile-time type safety on the
entity fields. This is the trade-off: AnyTable lets you build generic UI components, CLI tools,
and admin panels that work with any table definition.
You can also write list_table using generics — that keeps type safety but limits you
to Rust callers:
#![allow(unused)]
fn main() {
async fn list_table<E: Entity + std::fmt::Debug>(
table: &impl ReadableDataSet<E>,
) -> VantageResult<()> {
for (id, entity) in table.list().await? {
println!(" {}: {:?}", id, entity);
}
Ok(())
}
}
If you plan to expose tables outside of Rust (Python bindings, FFI, a web admin UI),
type erasure via AnyTable is the way to go.
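Type erasure itself is plain Rust: a trait object hides the concrete entity type and exposes only stringly-typed access. A hypothetical miniature of the idea (names here are made up, not Vantage's API):

```rust
// Sketch of type erasure with a trait object. The caller of list_any
// never learns the concrete entity type — it sees only column names
// and string values, like AnyTable's JSON boundary. Illustrative only.
trait AnyRows {
    fn column_names(&self) -> Vec<String>;
    fn list_values(&self) -> Vec<Vec<String>>;
}

// One concrete "table"; any other type implementing AnyRows works too.
struct Products;

impl AnyRows for Products {
    fn column_names(&self) -> Vec<String> {
        vec!["name".into(), "price".into()]
    }
    fn list_values(&self) -> Vec<Vec<String>> {
        vec![vec!["Cupcake".into(), "120".into()]]
    }
}

// Generic over the trait object, not over the entity type.
fn list_any(t: &dyn AnyRows) {
    println!("{}", t.column_names().join("  "));
    for row in t.list_values() {
        println!("{}", row.join("  "));
    }
}

fn main() {
    list_any(&Products);
}
```

The trade-off is exactly the one described above: &dyn AnyRows can cross FFI-style boundaries and power generic UIs, but the compiler can no longer check that "price" holds an integer.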
Relationships
Our database has a category table that we haven’t used yet. Let’s define it and connect it to
products.
Create src/category.rs:
#![allow(unused)]
fn main() {
// src/category.rs
use vantage_sql::prelude::*;
use vantage_types::prelude::*;
#[entity(SqliteType)]
#[derive(Debug, Clone, Default)]
pub struct Category {
pub name: String,
}
impl Category {
pub fn table(db: SqliteDB) -> Table<SqliteDB, Category> {
Table::new("category", db)
.with_id_column("id")
.with_column_of::<String>("name")
.with_many("products", "category_id", Product::table)
}
}
}
The important line is .with_many("products", "category_id", Product::table). This declares:
- “products” — the name of the relationship (you’ll use this to traverse it)
- “category_id” — the foreign key column on the product table
- Product::table — a function that builds the target table (the same one we defined earlier)
No changes to Product are needed — the relationship is declared on the side that “owns” it.
Category has many products; the foreign key lives on product.
Given a set of categories, get_ref_as traverses the relationship and returns matching products:
#![allow(unused)]
fn main() {
let categories = Category::table(db.clone());
let products = categories.get_ref_as::<Product>("products")?;
}
The result is a Table<SqliteDB, Product> with an extra condition — only products whose
category_id matches one of the category IDs. Narrow the categories first and the subquery narrows
too. Let’s use this in a CLI that accepts an optional search filter. The .with_search() method
adds a LIKE condition across all columns of the table — handy for quick filtering.
Update src/main.rs:
mod category;
mod product;
use category::Category;
use product::Product;
use vantage_sql::prelude::*;
async fn list_products(table: &Table<SqliteDB, Product>) -> VantageResult<()> {
for (id, p) in table.list().await? {
println!(" {:<4} {:<20} {:>3} cents", id, p.name, p.price);
}
Ok(())
}
#[tokio::main]
async fn main() {
if let Err(e) = run().await {
e.report();
}
}
async fn run() -> VantageResult<()> {
let db = SqliteDB::connect("sqlite:products.db")
.await
.context("Failed to connect to products.db")?;
let filter = std::env::args().nth(1);
let products = match &filter {
Some(search) => Category::table(db.clone())
.with_search(search)
.get_ref_as::<Product>("products")?,
None => Product::table(db),
};
list_products(&products).await?;
Ok(())
}
Try it:
cargo run # all active products
cargo run "Sweet" # Cupcake, Doughnut, Cookies
cargo run "Pastries" # Tart, Pie
cargo run "t" # matches "Sweet Treats" AND "Pastries" — 5 products
Notice what happened with cargo run "t": the search matched two categories ("Sweet Treats" and "Pastries"), and we got products from both — without writing a JOIN or collecting IDs manually.
This is because Vantage relationships work through sets. with_search("t") narrowed
the category table to two rows. get_ref_as then generated a subquery:
SELECT ... FROM "product"
WHERE "category_id" IN (SELECT "id" FROM "category" WHERE "name" LIKE '%t%')
AND "is_deleted" = 0
The subquery comes from whatever conditions are on the source table. Add more conditions and the subquery narrows further. The relationship definition stays the same — you never need to rewrite the traversal logic.
Computed fields with expressions
So far, list_products only shows name and price. It would be nice to show the category name too —
but Product doesn’t have a category field, and the category name lives in a different table.
Vantage solves this with expressions — computed fields that are evaluated as part of the SELECT query. You define them on the table, and they appear alongside regular columns.
First, add a with_one relationship on Product (the reverse of with_many on Category):
#![allow(unused)]
fn main() {
.with_one("category", "category_id", Category::table)
}
This says: each product has one category, linked through category_id. Now you can use
get_subquery_as inside with_expression to build a correlated subquery:
#![allow(unused)]
fn main() {
.with_expression("category", |t| {
t.get_subquery_as::<Category>("category")
.unwrap()
.select_column("name")
})
}
select_column("name") builds a subquery like
SELECT "name" FROM "category" WHERE "id" = "product"."category_id" — a correlated subquery that
fetches the category name for each product row.
Here’s the updated src/product.rs:
#![allow(unused)]
fn main() {
use vantage_sql::prelude::*;
use vantage_types::prelude::*;
use crate::category::Category;
#[entity(SqliteType)]
#[derive(Debug, Clone, Default)]
pub struct Product {
pub name: String,
pub price: i64,
pub category: Option<String>,
}
impl Product {
pub fn table(db: SqliteDB) -> Table<SqliteDB, Product> {
let is_deleted = Column::<bool>::new("is_deleted");
Table::new("product", db)
.with_id_column("id")
.with_column_of::<String>("name")
.with_column_of::<i64>("price")
.with_column_of::<bool>("is_deleted")
.with_condition(is_deleted.eq(false))
.with_one("category", "category_id", Category::table)
.with_expression("category", |t| {
t.get_subquery_as::<Category>("category")
.unwrap()
.select_column("name")
})
}
}
}
A few things to note:
- `category: Option<String>` — the field is `Option` because a product might not have a category (NULL in the database). The expression result is deserialized into this field automatically.
- No `with_column_of` for "category" — the expression replaces what would normally be a column. Vantage adds it to the SELECT as a computed field.
- `with_one` + `with_expression` — the relationship defines how to traverse; the expression defines what to fetch. They work together but serve different purposes.
Update list_products to show the category:
#![allow(unused)]
fn main() {
async fn list_products(table: &Table<SqliteDB, Product>) -> VantageResult<()> {
for (id, p) in table.list().await? {
let cat = p.category.as_deref().unwrap_or("-");
println!(" {:<4} {:<20} {:>3} cents [{}]", id, p.name, p.price, cat);
}
Ok(())
}
}
Run it:
cargo run
1 Cupcake 120 cents [Sweet Treats]
2 Doughnut 135 cents [Sweet Treats]
3 Tart 220 cents [Pastries]
4 Pie 299 cents [Pastries]
5 Cookies 199 cents [Sweet Treats]
7 Sourdough Loaf 350 cents [Breads]
The category name comes from a subquery — no JOIN, no extra round trip. And the category filter still works:
cargo run "t"
1 Cupcake 120 cents [Sweet Treats]
2 Doughnut 135 cents [Sweet Treats]
3 Tart 220 cents [Pastries]
4 Pie 299 cents [Pastries]
5 Cookies 199 cents [Sweet Treats]
Expressions can reference other expressions. For example, we could add a product_count
expression to Category that counts products via a subquery, then build a title expression
that combines the name and count:
#![allow(unused)]
fn main() {
// In Category::table()
.with_expression("product_count", |t| {
t.get_subquery_as::<Product>("products")
.unwrap()
.get_count_query()
})
.with_expression("title", |t| {
let name = t.get_column_expr("name").unwrap();
let count = t.get_column_expr("product_count").unwrap();
concat_!(name, " (", count, ")").expr()
})
}
get_column_expr returns the expression for either a real column or a computed expression
by name. Here it resolves "product_count" to the count subquery, so title becomes
something like name || ' (' || (SELECT COUNT(*) FROM product WHERE ...) || ')'.
Now change Product’s category expression to fetch title instead of name:
#![allow(unused)]
fn main() {
.with_expression("category", |t| {
t.get_subquery_as::<Category>("category")
.unwrap()
.select_column("title")
})
}
select_column("title") resolves the title expression on Category and builds it into
the subquery. The result:
1 Cupcake 120 cents [Sweet Treats (3)]
2 Doughnut 135 cents [Sweet Treats (3)]
3 Tart 220 cents [Pastries (2)]
4 Pie 299 cents [Pastries (2)]
5 Cookies 199 cents [Sweet Treats (3)]
7 Sourdough Loaf 350 cents [Breads (1)]
Vantage renders everything into a single SQL query. The nested subqueries might look redundant, but modern SQL databases optimise and execute them efficiently — there’s no extra round trip to the database.
Extension traits
Throughout this chapter you’ve seen calls like categories.get_ref_as::<Product>("products"). The
turbofish ::<Product> and the string "products" are repetitive and error-prone — get the string
wrong and you get a runtime error.
Rust’s extension traits solve this. Define a trait on Table<SqliteDB, Category> that wraps the
traversal:
#![allow(unused)]
fn main() {
pub trait CategoryTable: ReadableDataSet<Category> {
fn ref_products(&self) -> Table<SqliteDB, Product> {
self.get_ref_as("products").unwrap()
}
}
impl CategoryTable for Table<SqliteDB, Category> {}
}
The default method lives in the trait — the impl is just {}. The unwrap() is safe because we know
"products" is defined on every CategoryTable. If the string were wrong, the method would panic the
first time it's called — a loud failure you'd catch during development, not a silent one in production.
Now callers write categories.ref_products() — no turbofish, no string, no ?, and the compiler
catches typos.
The same pattern works for typed column access:
#![allow(unused)]
fn main() {
pub trait ProductTable: ReadableDataSet<Product> {
fn price(&self) -> Column<i64> {
self.get_column("price").unwrap()
}
}
impl ProductTable for Table<SqliteDB, Product> {}
}
With both traits in place, code reads naturally:
#![allow(unused)]
fn main() {
let products = categories.ref_products();
let price = products.price();
let expensive = products.with_condition(price.gt(200));
}
Define extension traits in the same file as the entity — product.rs gets ProductTable,
category.rs gets CategoryTable. They’re part of your model’s public API: anyone who
imports the entity gets the typed accessors for free.
This is how Vantage scales to large codebases. The table definition (columns, conditions, relationships, expressions) is declared once. Extension traits give it a clean, typed API. Business logic code never sees raw strings or turbofish — just method calls.
Extension traits aren’t limited to column and relationship accessors — you can add any
business logic. For example, the vantage-cli-util crate provides print_table() for
terminal output. Add it as a method on your trait:
#![allow(unused)]
fn main() {
use vantage_cli_util::table_display;
pub trait ProductTable: ReadableDataSet<Product> {
fn price(&self) -> Column<i64> {
self.get_column("price").unwrap()
}
async fn print(&self) -> VantageResult<()> {
table_display::print_table(self).await
}
}
}
Now products.print().await? gives you a formatted table in the terminal:
+----+----------------+-------+-------------------+
| id | name | price | category |
+----+----------------+-------+-------------------+
| 1 | Cupcake | 120 | Sweet Treats (3) |
| 2 | Doughnut | 135 | Sweet Treats (3) |
| 3 | Tart | 220 | Pastries (2) |
| ... |
+----+----------------+-------+-------------------+
Other methods you might add: only_expensive() that returns a filtered clone,
total_revenue() that computes a sum, or export_csv() that writes to a file.
The table is yours to extend.
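A sketch of the first of those, using only calls this chapter has already introduced — the method name only_expensive and the 200-cent threshold are hypothetical, invented for illustration:

```rust
use vantage_sql::prelude::*;

// Hypothetical business-logic extension — not part of any shipped Vantage API.
pub trait ProductTableExt {
    fn only_expensive(&self) -> Table<SqliteDB, Product>;
}

impl ProductTableExt for Table<SqliteDB, Product> {
    fn only_expensive(&self) -> Table<SqliteDB, Product> {
        // Cloning copies the definition (columns, conditions), not any rows,
        // so narrowing a clone leaves the original table untouched.
        let price: Column<i64> = self.get_column("price").unwrap();
        self.clone().with_condition(price.gt(200))
    }
}
```

Callers then write products.only_expensive().list().await? — the threshold lives in one place, next to the rest of the model.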
Persistence Abstraction
It is time for a pause and reflection. The files you have written so far — Cargo.toml,
product.rs, category.rs, main.rs — are the makings of an extensible business software
architecture. Scalable, maintainable, testable, and portable across databases without rewriting a
single line of business logic.
What you have is four distinct components, each with a clear job:
flowchart TD
A["<b>Business code</b><br/><code>main.rs</code><br/>minimal, domain-focused"]
B["<b>Model definitions</b><br/><code>product.rs</code>, <code>category.rs</code><br/>entities, tables, relationships, domain methods"]
C["<b>Persistence crate</b><br/><code>vantage-sql</code> (or surrealdb, mongodb, csv…)<br/>query builder, CRUD, type system"]
D["<b>Vantage framework</b><br/><code>vantage-table</code>, <code>vantage-expressions</code>, <code>vantage-types</code><br/>the extensible mechanism that ties it together"]
A --> B
B --> C
C --> D
The majority of your business code can work with data without any knowledge of where it’s stored or how. It operates on typed entities, relationships, and domain methods — nothing else.
The model definitions focus on describing where and how data is stored, but they don’t micro-manage the process. If storage requirements change — a new backend, a schema migration, a cached layer in front — the model is where you make the tweaks. Business code remains untouched.
This separation gives you:
- Separation of concerns — the framework establishes a sound design for your software from day one, then scales with it.
- No ripple effect — in Rust, refactorings tend to cascade through an entire codebase. Vantage's model layer contains that pain, so enterprise codebases don't unravel when requirements shift.
- Concise syntax — the ergonomics Python, JavaScript, and PHP developers enjoy, now in Rust. Cutting boilerplate, making code readable.
- Strong types, zero cost — everything TypeScript has and more, without sacrificing performance. The abstraction compiles down to direct calls.
- A usable alternative to OOP — Java and C# developers get entities, methods, and composition without inheritance hierarchies or dependency injection containers.
- Safe concurrency, async, ownership, no memory leaks — the performance of C with ergonomics that make concurrent code readable.
- Portability — take Vantage with you anywhere: server, desktop, web, mobile, embedded. One static binary per target.
This introduction gets you productive quickly, but it barely scratches the surface. Later chapters explore more complex challenges that Vantage is equipped to deal with:
- Zero-cost cross-persistence traversal — follow a relationship from a Postgres table into a MongoDB collection, all expressed in Rust, executed efficiently on each side.
- In-memory entity caching with indexing — using `vantage-redb` for powerful and transparent client-side caching.
- Facades and middleware APIs, almost for free — wrap your model to expose a filtered or transformed view to another team as an API.
- Super-efficient API clients and SDKs — your model becomes the client. Typed endpoints, retries, pagination, and rate limits — all in one abstraction.
- UI framework integration — the same model drives desktop, terminal, web, and mobile UIs through `vantage-ui-adapters`. Define once, render anywhere.
- Reactive in-memory data — make your local data reactive, receiving live updates from your database or websocket APIs.
The patterns you’ve learned scale all the way up.
A Standalone Axum Server
Chapter 2 ended with a CLI that prints products to stdout. In this chapter we put the same model behind an HTTP API:
curl "http://localhost:3001/categories"
[{ "name": "Sweet Treats" }, { "name": "Pastries" }, { "name": "Breads" }]
# Products in a specific category
curl "http://localhost:3001/categories/1/products"
# Writes
curl -X POST "http://localhost:3001/categories" \
-H 'content-type: application/json' -d '{"name":"Gluten-Free"}'
A few things about the shape of this API:
- `/categories/{id}/products` is a nested route — products narrowed by the relationship we defined in chapter 2. The handler for it is the same generic list as `/categories`, just applied to a scoped table.
- `/products` and `/categories` start out on SQLite, same as chapter 2. Toward the end of the chapter we migrate them to MongoDB. That migration is a change to the model file; the handlers and routes don't know the difference.
The handler functions are each written once and mounted against any
Table<Backend, Entity> the router has on hand — one list, one
get, one post, one patch, one delete, parameterised over the backend and entity types.
Adding another entity is a route registration, not a new handler.
The minimum Axum skeleton
Two pieces: a small adaptation to chapter 2’s entities so they round-trip through HTTP, then the server itself. One new dependency:
cargo add axum
Tokio and serde are already pulled in from earlier chapters. axum is the HTTP framework; it uses
the existing serde to encode response bodies as JSON.
Entities. Chapter 2’s Category carried a computed title field; Chapter 2’s Product carried
a computed category field. Both were assembled from subqueries — great for display, but they don’t
round-trip cleanly through a POST body because the JSON would try to write columns that don’t
exist. For a writable API we replace the computed fields with the plain FK column that sits under
them. (See with_expression for the chapter-2
computed-field setup we’re walking away from here.)
src/category.rs:
#![allow(unused)]
fn main() {
use serde::{Deserialize, Serialize};
#[entity(SqliteType)]
#[derive(Debug, Clone, Default, Serialize, Deserialize)]
pub struct Category {
pub name: String,
}
impl Category {
pub fn table(db: SqliteDB) -> Table<SqliteDB, Category> {
Table::new("category", db)
.with_id_column("id")
.with_column_of::<String>("name")
.with_many("products", "category_id", Product::table)
}
}
}
src/product.rs:
#![allow(unused)]
fn main() {
use serde::{Deserialize, Serialize};
#[entity(SqliteType)]
#[derive(Debug, Clone, Default, Serialize, Deserialize)]
pub struct Product {
pub name: String,
pub price: i64,
#[serde(skip_serializing_if = "Option::is_none", default)]
pub category_id: Option<String>,
#[serde(default)]
pub is_deleted: bool,
}
impl Product {
pub fn table(db: SqliteDB) -> Table<SqliteDB, Product> {
let is_deleted = Column::<bool>::new("is_deleted");
Table::new("product", db)
.with_id_column("id")
.with_column_of::<String>("name")
.with_column_of::<i64>("price")
.with_column_of::<String>("category_id")
.with_column_of::<bool>("is_deleted")
.with_condition(is_deleted.eq(false))
.with_one("category", "category_id", Category::table)
}
}
}
Two things to notice:
- `category_id` is now a real column in both the struct and the table, not a computed expression. Same information as before, but clients can read it from GET and supply it on POST.
- `is_deleted` picks up `#[serde(default)]` so POST /products bodies don't have to carry it; new products default to live.
The soft-delete condition from chapter 2 stays — GET /products will still hide rows with
is_deleted = true.
Now the server. Replace src/main.rs with:
mod category;
mod product;
use std::sync::OnceLock;
use axum::{routing::get, Json, Router};
use category::Category;
use vantage_sql::prelude::*;
static DB: OnceLock<SqliteDB> = OnceLock::new();
fn db() -> SqliteDB {
DB.get().expect("database not initialised").clone()
}
async fn list_categories() -> Json<Vec<Category>> {
let rows = Category::table(db()).list().await.unwrap();
Json(rows.into_values().collect())
}
#[tokio::main]
async fn main() -> VantageResult<()> {
let conn = SqliteDB::connect("sqlite:products.db")
.await
.context("Failed to connect to products.db")?;
DB.set(conn).ok();
let app = Router::new().route("/categories", get(list_categories));
let listener = tokio::net::TcpListener::bind("0.0.0.0:3001").await.unwrap();
axum::serve(listener, app).await.unwrap();
Ok(())
}
A few notes on what this is doing:
- `static DB: OnceLock<SqliteDB>`. The handler needs a database handle but we don't want to open a new connection per request. A program-wide `OnceLock` is the simplest possible holder — set once in `main`, read from anywhere.
- Each request calls `Category::table(db())` — building the table (columns, relationships) — and then lists it.
- The handler returns `Json<Vec<Category>>` directly. No DTO layer — the entity's fields become the JSON fields.
Run it:
cargo run
In another terminal:
curl http://localhost:3001/categories
[
{ "name": "Sweet Treats" },
{ "name": "Pastries" },
{ "name": "Breads" }
]
Caching table definitions
We can cache each table definition so it’s built once and handed out by reference on every
subsequent call. In src/category.rs, add use std::sync::OnceLock; at the top and wrap the body
of Category::table:
#![allow(unused)]
fn main() {
impl Category {
pub fn table(db: SqliteDB) -> &'static Table<SqliteDB, Category> {
static CACHE: OnceLock<Table<SqliteDB, Category>> = OnceLock::new();
CACHE.get_or_init(|| {
Table::new("category", db)
// ...columns unchanged...
.with_many("products", "category_id", |db| Product::table(db).clone())
})
}
}
}
Three changes from chapter 2’s version:
- Return type is `&'static Table<...>` instead of `Table<...>`. The cache owns the definition; callers get a shared reference. Most table operations (`list`, `get`, `insert`, `replace`, `delete`, `get_ref_as`) take `&self`, so they work on the reference without any clone.
- `static CACHE: OnceLock<...>` lives inside the function. A function-local static is still a program-wide single instance — Rust allows this and it keeps the cache private to the table that owns it.
- The `with_many` callback is a closure: `|db| Product::table(db).clone()`. The framework still hands us a db and expects an owned `Table<SqliteDB, Product>` back, so we call the cached accessor and clone the reference. On the first call this triggers Product's cache to build; every subsequent call just clones a pre-built definition.
src/product.rs follows the same pattern — wrap the body in OnceLock::get_or_init, change the
return type to &'static Table<SqliteDB, Product>, and rewrite with_one as
|db| Category::table(db).clone().
The handler in main.rs doesn’t change at all — Category::table(db()) used to return an owned
Table, and now it returns a &'static Table that auto-derefs through the same .list() call.
Restart the server and hit /categories — the response is identical, but now the table definitions
are built on the first request and reused forever after.
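The function-local static trick is plain Rust and works independently of Vantage — a minimal standalone demonstration:

```rust
use std::sync::OnceLock;

// A function-local static is still one program-wide instance:
// every call returns a reference to the same cached value.
fn config() -> &'static Vec<String> {
    static CACHE: OnceLock<Vec<String>> = OnceLock::new();
    CACHE.get_or_init(|| {
        // Runs exactly once, on the first call.
        vec!["built".to_string(), "once".to_string()]
    })
}

fn main() {
    let a = config();
    let b = config();
    // Both calls observe the same allocation.
    assert!(std::ptr::eq(a, b));
    assert_eq!(a.len(), 2);
    println!("cached: {:?}", a);
}
```

The same shape scales up to the cached Table definitions: the builder closure runs on the first call, and every later call hands out the shared reference.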
Most operations work on &Table<...>, but the builder methods that add conditions — like
with_condition, with_search, with_pagination, and with_order — consume self and
return a new Table. If a handler needs to narrow the cached definition, it clones first:
#![allow(unused)]
fn main() {
let narrowed = Category::table(db())
.clone() // owned copy of the cached definition
.with_condition(...); // now we can narrow it
}
That clone is what chapter 2 meant by “cloning a table clones the definition, not the data”: it’s a copy of the shape (columns, conditions, relationships), and it’s cheap.
Products of a category
The with_many relationship we kept from chapter 2 lets
us serve a nested route — /categories/{id}/products. Before writing the handler, give the
relationship a proper name with an extension trait on Table<SqliteDB, Category>. Add this to
src/category.rs:
#![allow(unused)]
fn main() {
pub trait CategoryTable {
fn ref_products(&self) -> Table<SqliteDB, Product>;
}
impl CategoryTable for Table<SqliteDB, Category> {
fn ref_products(&self) -> Table<SqliteDB, Product> {
self.get_ref_as("products").unwrap()
}
}
}
Chapter 2 introduced this pattern. The trait gives the relationship a typed, discoverable name —
ref_products() — so call sites stop carrying the "products" string and the ::<Product>
turbofish around. The .unwrap() is safe here because we’re the ones who registered "products"; a
typo surfaces immediately at startup.
Now the handler. Take the cached category table, narrow it to a single id, and traverse the relationship:
#![allow(unused)]
fn main() {
use axum::extract::Path;
use category::CategoryTable;
use product::Product;
async fn list_category_products(Path(id): Path<i64>) -> Json<Vec<Product>> {
let id_col = Column::<i64>::new("id");
let products = Category::table(db())
.clone()
.with_condition(id_col.eq(id))
.ref_products();
let rows = products.list().await.unwrap();
Json(rows.into_values().collect())
}
}
Three things going on:
- `Category::table(db()).clone()` — we need an owned `Table` to chain `with_condition` onto, so we clone the cached definition. The clone copies the shape (columns, conditions, relationships), not any rows.
- `with_condition(id_col.eq(id))` — narrows the category table to one row: the one we're asking for. Nothing hits the database yet.
- `ref_products()` — traverses the `with_many` relationship we registered on Category in chapter 2, via `get_ref_as`. The returned table is `Table<SqliteDB, Product>`, already scoped to products whose `category_id` matches the narrowed category set. Chapter 2 walked through what the emitted SQL looks like; it's the same here.
Register the route in main.rs:
#![allow(unused)]
fn main() {
let app = Router::new()
.route("/categories", get(list_categories))
.route("/categories/{id}/products", get(list_category_products));
}
Hit it:
curl http://localhost:3001/categories/1/products
[
{ "name": "Cupcake", "price": 120, "category_id": "1", "is_deleted": false },
{ "name": "Doughnut", "price": 135, "category_id": "1", "is_deleted": false },
{ "name": "Cookies", "price": 199, "category_id": "1", "is_deleted": false }
]
curl http://localhost:3001/categories/2/products
[
{ "name": "Tart", "price": 220, "category_id": "2", "is_deleted": false },
{ "name": "Pie", "price": 299, "category_id": "2", "is_deleted": false }
]
Swap the id in the URL and the products narrow accordingly. The relationship was declared once, back in chapter 2, and we haven’t touched it since — every new nested route reuses the same declaration.
A generic CRUD helper
Two handlers so far — list_categories and list_category_products — and they’re already doing the
same thing: take a narrowed table, call .list(), return JSON. Adding POST /categories,
PATCH /categories/{id}, and DELETE /categories/{id} would mean four more near-identical handlers
per entity. That’s not how you scale a codebase.
The five HTTP methods of CRUD all map onto one of two operation shapes:
- List-level at `/collection`: GET (list all) and POST (create one).
- Item-level at `/collection/{id}`: GET (read one), PATCH (update), DELETE (remove).
If we can describe the set of rows this endpoint operates on with one closure, the same handler
bodies serve every entity. That closure is Fn(SqliteDB, &Params) -> Table<SqliteDB, E> — given the
database and whatever path params axum extracted, return the Table to act on. Put it in main.rs:
#![allow(unused)]
fn main() {
use std::collections::HashMap;
use std::sync::Arc;
use axum::http::StatusCode;
use serde::{de::DeserializeOwned, Serialize};
type Params = HashMap<String, String>;
fn crud<E, F>(make_table: F) -> Router
where
F: Fn(SqliteDB, &Params) -> Table<SqliteDB, E> + Send + Sync + 'static,
E: Entity<AnySqliteType> + Serialize + DeserializeOwned + Send + Sync + 'static,
{
let f = Arc::new(make_table);
Router::new()
.route(
"/",
get({
let f = f.clone();
move |p: Option<Path<Params>>| async move {
let params = p.map(|Path(p)| p).unwrap_or_default();
let rows = f(db(), ¶ms).list().await.unwrap();
Json::<Vec<E>>(rows.into_values().collect())
}
})
.post({
let f = f.clone();
move |p: Option<Path<Params>>, Json(entity): Json<E>| async move {
let params = p.map(|Path(p)| p).unwrap_or_default();
let id = f(db(), ¶ms).insert_return_id(&entity).await.unwrap();
Json(serde_json::json!({ "id": id }))
}
}),
)
.route(
"/{id}",
get({
let f = f.clone();
move |Path(params): Path<Params>| async move {
let id = params["id"].clone();
let entity = f(db(), ¶ms).get(&id).await.unwrap().unwrap();
Json(entity)
}
})
.patch({
let f = f.clone();
move |Path(params): Path<Params>, Json(partial): Json<E>| async move {
let id = params["id"].clone();
let updated = f(db(), ¶ms).patch(&id, &partial).await.unwrap();
Json(updated)
}
})
.delete({
let f = f;
move |Path(params): Path<Params>| async move {
let id = params["id"].clone();
WritableDataSet::<E>::delete(&f(db(), ¶ms), &id).await.unwrap();
StatusCode::NO_CONTENT
}
}),
)
}
}
That’s the whole thing. Five handler bodies, each tiny, all generic over the entity type. Mount
crud(...) under a prefix with .nest(...) and every route below it gets the full CRUD verb set
for free.
/categories becomes a one-liner:
#![allow(unused)]
fn main() {
.nest("/categories", crud(|db, _| Category::table(db).clone()))
}
That gives us:
| Method | Path | Does |
|---|---|---|
| GET | /categories | list all |
| POST | /categories | insert, return new id |
| GET | /categories/{id} | fetch one |
| PATCH | /categories/{id} | partial update |
| DELETE | /categories/{id} | remove |
The nested /categories/{cat_id}/products route uses the same helper. The closure reads cat_id
out of the params map to narrow the set:
#![allow(unused)]
fn main() {
.nest(
"/categories/{cat_id}/products",
crud(|db, p| {
let cat_id: i64 = p["cat_id"].parse().unwrap();
let mut c = Category::table(db).clone();
c.add_condition(c.id().eq(cat_id));
c.ref_products()
}),
)
}
c.id() comes from extending the CategoryTable trait we already set up — add it next to
ref_products:
#![allow(unused)]
fn main() {
pub trait CategoryTable {
fn id(&self) -> Column<i64>;
fn ref_products(&self) -> Table<SqliteDB, Product>;
}
impl CategoryTable for Table<SqliteDB, Category> {
fn id(&self) -> Column<i64> {
self.get_column("id").unwrap()
}
fn ref_products(&self) -> Table<SqliteDB, Product> {
self.get_ref_as("products").unwrap()
}
}
}
Nesting crud under /categories/{cat_id}/products gives a full CRUD surface for products
belonging to that category:
| Method | Path | Does |
|---|---|---|
| GET | /categories/{cat_id}/products | list |
| POST | /categories/{cat_id}/products | create |
| GET | /categories/{cat_id}/products/{id} | fetch one |
| PATCH | /categories/{cat_id}/products/{id} | partial update |
| DELETE | /categories/{cat_id}/products/{id} | remove |
Try it:
curl -X POST http://localhost:3001/categories \
-H 'content-type: application/json' -d '{"name":"Gluten-Free"}'
# {"id":"4"}
curl http://localhost:3001/categories/1/products/1
# {"name":"Cupcake","price":120,"category_id":"1","is_deleted":false}
curl -X PATCH http://localhost:3001/categories/1 \
-H 'content-type: application/json' -d '{"name":"Sweet Things"}'
# {"name":"Sweet Things"}
curl -X DELETE http://localhost:3001/categories/4
# HTTP/1.1 204 No Content
The inline list_categories and list_category_products functions can be deleted — crud covers
them both.
Axum's Path extractor deserializes all captured params at once — with both the outer
{cat_id} and the inner {id} captured by the route, you can't pull them out with a lone
Path<i64> plus a second Path<String>. Grabbing all params in one shot as
HashMap<String, String> sidesteps the problem — the closure picks out what it needs by
name, and the item-level handlers pull id from the same map.
The cost is a small amount of stringly-typed parsing (p["cat_id"].parse::<i64>()), which
is a fair trade for letting one crud function cover every route shape in the server.
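If the stringly-typed lookups start to grate, a tiny helper — hypothetical, not part of the chapter's code — can centralise the parsing and return a readable error instead of panicking on a missing key:

```rust
use std::collections::HashMap;
use std::str::FromStr;

type Params = HashMap<String, String>;

// Hypothetical helper: look up a path param and parse it,
// turning both "missing" and "unparsable" into error messages.
fn parse_param<T: FromStr>(params: &Params, key: &str) -> Result<T, String> {
    params
        .get(key)
        .ok_or_else(|| format!("missing path param: {key}"))?
        .parse()
        .map_err(|_| format!("invalid value for path param: {key}"))
}

fn main() {
    let mut p = Params::new();
    p.insert("cat_id".to_string(), "7".to_string());

    let id: i64 = parse_param(&p, "cat_id").unwrap();
    assert_eq!(id, 7);
    assert!(parse_param::<i64>(&p, "missing").is_err());
    println!("cat_id = {id}");
}
```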
Each HTTP method gets its own closure in the Router. They all need to share the same
make_table function, but Rust closures that capture by move can’t be cloned by default, and
each axum handler is an independent Fn — so we wrap make_table in an Arc once and let
each handler clone the Arc (cheap — just a refcount bump). Inside the async block, the
closure can then invoke f(db(), ¶ms) to build the narrowed table for that request.
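The same mechanics in miniature, with plain std closures standing in for the axum handlers:

```rust
use std::sync::Arc;

fn main() {
    // One shared factory, wrapped in Arc so several "handlers" can own a handle.
    let make = Arc::new(|n: i32| n * 2);

    // Each handler clones the Arc — a refcount bump, not a copy of the closure...
    let handler_a = {
        let f = Arc::clone(&make);
        move |x: i32| f(x) + 1
    };
    let handler_b = {
        let f = Arc::clone(&make);
        move |x: i32| f(x) - 1
    };

    // ...and both invoke the same underlying function.
    assert_eq!(handler_a(10), 21);
    assert_eq!(handler_b(10), 19);
    println!("ok");
}
```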
Error handling
Every handler in crud still ends in .unwrap(). The happy path has been fine to demo, but
an API that panics on the slightest database hiccup isn’t usable. The worst part isn’t the
500 — it’s what happens on the wire when axum’s request task panics: the connection is
dropped and the client sees an empty reply, with no status code and no body to explain.
A proper REST API needs three things:
- Missing resources return 404, not 500.
- Bad JSON bodies return 400 with a useful message.
- Everything else returns 500 but with a structured JSON body, not silence.
Axum already gives us the middle one for free — it rejects malformed Json<E> bodies with 400.
The other two come down to converting VantageError into an HTTP
response. Add an ApiError type to main.rs:
#![allow(unused)]
fn main() {
use axum::response::{IntoResponse, Response};
struct ApiError {
status: StatusCode,
message: String,
}
impl IntoResponse for ApiError {
fn into_response(self) -> Response {
(
self.status,
Json(serde_json::json!({ "error": self.message })),
)
.into_response()
}
}
impl From<VantageError> for ApiError {
fn from(e: VantageError) -> Self {
eprintln!("API error: {:?}", e);
Self {
status: StatusCode::INTERNAL_SERVER_ERROR,
message: e.to_string(),
}
}
}
fn not_found(id: &str) -> ApiError {
ApiError {
status: StatusCode::NOT_FOUND,
message: format!("not found: {}", id),
}
}
type ApiResult<T> = Result<T, ApiError>;
}
Three things earn their keep:
- `IntoResponse` makes `ApiError` returnable from a handler; axum calls `into_response()` to assemble status, headers, and body.
- `From<VantageError>` lets us use `?` inside a handler — every `.await?` short-circuits to an `ApiError` mapped to a 500, which axum will render for us.
- `not_found(id)` is how we build a 404 explicitly. Vantage's `get` returns `Option<E>`, so we `.ok_or_else(|| not_found(&id))?` the missing case straight into a 404 — no error-message string-matching, no brittleness.
With that in place, the unwraps inside crud turn into ?, and each handler returns
ApiResult<T>:
#![allow(unused)]
fn main() {
get({
let f = f.clone();
move |p: Option<Path<Params>>| async move {
let params = p.map(|Path(p)| p).unwrap_or_default();
let rows = f(db(), ¶ms).list().await?;
ApiResult::Ok(Json::<Vec<E>>(rows.into_values().collect()))
}
})
}
For the GET /{id} handler, convert the Option into a 404 at the handler:
#![allow(unused)]
fn main() {
get({
let f = f.clone();
move |Path(params): Path<Params>| async move {
let id = params["id"].clone();
let entity = f(db(), ¶ms)
.get(id.clone())
.await?
.ok_or_else(|| not_found(&id))?;
ApiResult::Ok(Json(entity))
}
})
}
Do the same shape for post, patch, delete — drop the .unwrap(), add ?, wrap the
happy path in ApiResult::Ok(...). Handlers that don’t read by id just need the ?.
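As a sketch, the PATCH handler after this conversion — same shape as the GET handler shown earlier in this section, with patch being the same call used in the unwrap version:

```rust
.patch({
    let f = f.clone();
    move |Path(params): Path<Params>, Json(partial): Json<E>| async move {
        let id = params["id"].clone();
        // Any VantageError converts to ApiError (a 500) via `?`.
        let updated = f(db(), &params).patch(&id, &partial).await?;
        ApiResult::Ok(Json(updated))
    }
})
```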
Try a few error cases:
curl -w "\nstatus=%{http_code}\n" http://localhost:3001/categories/999
# {"error":"not found: 999"}
# status=404
curl -w "\nstatus=%{http_code}\n" -X PATCH http://localhost:3001/categories/999 \
-H 'content-type: application/json' -d '{"name":"Ghost"}'
# {"error":"patch_table_value: no row found (id: \"999\")"}
# status=500
curl -w "\nstatus=%{http_code}\n" -X POST http://localhost:3001/categories \
-H 'content-type: application/json' -d '{not-json}'
# Failed to parse the request body as JSON: ...
# status=400
curl -w "\nstatus=%{http_code}\n" -X DELETE http://localhost:3001/categories/999
# status=204
Missing ids produce 404s with a clean JSON body. The malformed body gets axum’s built-in 400
for free. DELETE on a missing id still returns 204 — vantage’s delete is idempotent, and
“the resource is gone” is true whether or not it was ever there. PATCH on a missing id
still 500s for now — patching doesn’t go through get, so there’s no Option to intercept;
adding a pre-flight get-and-ok_or(not_found) before the patch call would give you 404
there too.
An earlier draft of this tutorial matched e.to_string().contains("no row found") to
decide between 404 and 500 — brittle, because it hard-codes a vantage-internal error
string. Once ReadableDataSet::get switched to Result<Option<E>>, the handler can map
missing rows explicitly with .ok_or_else(|| not_found(&id))?. Errors are errors, misses
are None — no string-matching required.
eprintln!("API error: {:?}", e); prints the full error structure — location, context,
nested sources — which is exactly what a human debugging a 500 needs. The client meanwhile
only sees e.to_string(), the short one-liner message. That’s the whole point of
.context("…") from chapter 1: context accumulates on the server, a single sentence reaches
the client.
Swap eprintln! for tracing::error! if you’re wired up for structured logging. The
mechanics are identical.
Pagination and search
Three categories is fine for dev; three thousand would crush any client that naively calls
GET /categories and tries to render everything. Real APIs page through long lists and let
callers filter by a search term — two query-string features that belong inside crud so
every entity gets them.
Axum parses query strings for us via the Query<T> extractor. Add a small struct for the
parameters, and a second extractor on the list handler:
#![allow(unused)]
fn main() {
use axum::extract::Query;
use vantage_table::pagination::Pagination;
#[derive(Deserialize, Default)]
struct ListQuery {
page: Option<i64>,
per_page: Option<i64>,
q: Option<String>,
}
}
Inside crud, only the GET / handler changes. It takes both extractors, mutates the narrowed
table with whatever the caller asked for, and lists:
#![allow(unused)]
fn main() {
get({
let f = f.clone();
move |p: Option<Path<Params>>, Query(q): Query<ListQuery>| async move {
let params = p.map(|Path(p)| p).unwrap_or_default();
let mut t = f(db(), &params);
if q.page.is_some() || q.per_page.is_some() {
t.set_pagination(Some(Pagination::new(
q.page.unwrap_or(1),
q.per_page.unwrap_or(50),
)));
}
if let Some(term) = q.q.as_deref() {
t.add_search(term);
}
let rows = t.list().await?;
ApiResult::Ok(Json::<Vec<E>>(rows.into_values().collect()))
}
})
}
Two things happen:
- `set_pagination(Some(Pagination::new(page, per_page)))` takes a page number and a page size. Vantage applies these as `LIMIT … OFFSET …` on the SELECT. Missing params fall back to the defaults (page 1, 50 per page) — and if neither is supplied we don’t touch pagination at all, so the unfiltered list still hits the whole set.
- `add_search(term)` is the `.with_search` we used in chapter 2 to add a LIKE filter across all columns.

Both end up as extra clauses on the query that vantage already compiles for us — search as a `WHERE`, pagination as `LIMIT … OFFSET`.
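The page arithmetic is worth pinning down. This sketch mirrors the documented behaviour (page/size clamped to at least 1, `LIMIT`/`OFFSET` derived from them) — it is an illustration of the rule, not `Pagination`'s actual internals:

```rust
// How page/per_page translate to SQL, per the LIMIT/OFFSET scheme described
// above. This mirrors, but is not, vantage's Pagination implementation.
fn limit_offset(page: i64, per_page: i64) -> (i64, i64) {
    // Clamp both to at least 1, matching the documented clamping behaviour.
    let page = page.max(1);
    let per_page = per_page.max(1);
    (per_page, (page - 1) * per_page)
}

fn main() {
    assert_eq!(limit_offset(1, 50), (50, 0)); // first page: no offset
    assert_eq!(limit_offset(2, 2), (2, 2));   // page 2 of size 2 skips 2 rows
    assert_eq!(limit_offset(0, -5), (1, 0));  // out-of-range inputs clamp to 1
}
```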
Try it:
curl "http://localhost:3001/categories?page=1&per_page=2"
# [{"name":"Sweet Treats"},{"name":"Pastries"}]
curl "http://localhost:3001/categories?page=2&per_page=2"
# [{"name":"Breads"}]
curl "http://localhost:3001/categories?q=Pastries"
# [{"name":"Pastries"}]
curl "http://localhost:3001/categories?q=e"
# [{"name":"Sweet Treats"},{"name":"Pastries"},{"name":"Breads"}]
The nested route gets these for free — crud is the same function. per_page works on
/categories/{cat_id}/products out of the box:
curl "http://localhost:3001/categories/1/products?per_page=2"
# [
# {"name":"Cupcake","price":120,"category_id":"1","is_deleted":false},
# {"name":"Doughnut","price":135,"category_id":"1","is_deleted":false}
# ]
Because the closure for the nested mount narrows the table with with_condition before
crud applies its own pagination/search, the filters compose cleanly: the category scope stays
in effect, and pagination just counts rows within it.
A full ?order_by=price&dir=desc pairing is the obvious next thing — and the Table API
supports it via add_order(column.ascending()) — but the OrderBy type is generic over the
backend’s condition type (T::Condition), not the easier-to-hand-you Expression<T::Value>
that get_column_expr returns. Wiring it up at the generic crud level takes a small extra
layer of From conversions that would balloon this section.
For a single-entity handler, you can simply order with `let mut t = Category::table(db()).clone(); t.add_order(sqlite_expr!("{}", ident("name")).ascending());`. Adding ordering to
crud is a fine exercise once the rest of the server is in place — and a natural thing to
push back into the framework so every backend picks it up uniformly.
Our ListQuery silently accepts page=0, per_page=-5, or per_page=1000000. Pagination::new
clamps page and items-per-page to at least 1, so the first two can’t crash us — but an API
that hands out 1M rows because someone asked for it is a DoS target. For production,
extend ListQuery with a fn validate(&self) -> Result<(), ApiError> that caps per_page
at something like 200 and returns 400 otherwise. The plumbing is already there — ApiError
already knows how to render a 400.
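A sketch of that `validate` is below. The `ListQuery` fields are the ones defined earlier in this chapter; the 200-row cap and the `Result<(), String>` error shape are assumptions (the real version would return your `ApiError` 400):

```rust
// Hedged sketch of the validate() suggested above. The 200 cap is an
// arbitrary example limit; the String error stands in for ApiError.
#[derive(Default)]
struct ListQuery {
    page: Option<i64>,
    per_page: Option<i64>,
    q: Option<String>,
}

impl ListQuery {
    // Err(message) is where the real code would build a 400 ApiError.
    fn validate(&self) -> Result<(), String> {
        if let Some(pp) = self.per_page {
            if !(1..=200).contains(&pp) {
                return Err(format!("per_page must be 1..=200, got {pp}"));
            }
        }
        Ok(())
    }
}

fn main() {
    let ok = ListQuery { per_page: Some(50), ..Default::default() };
    let too_big = ListQuery { per_page: Some(1_000_000), ..Default::default() };
    assert!(ok.validate().is_ok());
    assert!(too_big.validate().is_err());
}
```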
Migrating to MongoDB
Chapter 2 closed on a claim: the model layer isolates business code from storage, so swapping
databases is a change to the model, not to the routes, handlers, or business logic. Now we
cash the check. /categories, /categories/{id}, and /categories/{cat_id}/products keep
their exact URL shape, response bodies, and behaviour — but the data lives in
MongoDB instead of SqliteDB.
Start Mongo — Docker is the easiest way:
docker run -d --name mongo-learn -p 27017:27017 mongo:7
Cargo.toml
Drop vantage-sql, add vantage-mongodb:
[dependencies]
axum = "0.8.9"
serde = { version = "1", features = ["derive"] }
serde_json = "1"
tokio = { version = "1", features = ["full"] }
vantage-core = { path = "../vantage-core" }
vantage-dataset = { path = "../vantage-dataset" }
vantage-expressions = { path = "../vantage-expressions" }
vantage-mongodb = { path = "../vantage-mongodb" }
vantage-table = { path = "../vantage-table" }
vantage-types = { path = "../vantage-types", features = ["serde"] }
Entities
Swap the #[entity] type tag from SqliteType to MongoType and the id column name from
id to the MongoDB-idiomatic _id. Imports collapse to a single prelude use-line.
src/category.rs:
#![allow(unused)]
fn main() {
use std::sync::OnceLock;
use vantage_mongodb::prelude::*;
use crate::product::Product;
#[entity(MongoType)]
#[derive(Debug, Clone, Default, Serialize, Deserialize)]
pub struct Category {
pub name: String,
}
impl Category {
pub fn table(db: MongoDB) -> &'static Table<MongoDB, Category> {
static CACHE: OnceLock<Table<MongoDB, Category>> = OnceLock::new();
CACHE.get_or_init(|| {
Table::new("category", db)
.with_id_column("_id")
.with_column_of::<String>("name")
.with_many("products", "category_id", |db| Product::table(db).clone())
})
}
}
}
src/product.rs gets the symmetric changes:
#![allow(unused)]
fn main() {
use std::sync::OnceLock;
use vantage_mongodb::prelude::*;
use crate::category::Category;
#[entity(MongoType)]
#[derive(Debug, Clone, Default, Serialize, Deserialize)]
pub struct Product {
pub name: String,
pub price: i64,
#[serde(skip_serializing_if = "Option::is_none", default)]
pub category_id: Option<String>,
#[serde(default)]
pub is_deleted: bool,
}
impl Product {
pub fn table(db: MongoDB) -> &'static Table<MongoDB, Product> {
static CACHE: OnceLock<Table<MongoDB, Product>> = OnceLock::new();
CACHE.get_or_init(|| {
let is_deleted = Column::<bool>::new("is_deleted");
Table::new("product", db)
.with_id_column("_id")
.with_column_of::<String>("name")
.with_column_of::<i64>("price")
.with_column_of::<String>("category_id")
.with_column_of::<bool>("is_deleted")
.with_condition(is_deleted.eq(false))
.with_one("category", "category_id", |db| Category::table(db).clone())
})
}
}
}
Nothing about the struct shape changed — category_id and is_deleted have been real columns
since § 2. The with_* calls line up one-for-one with the SQLite version; only the id column
name and the entity’s type tag shift.
The CategoryTable trait from § 3 also updates — MongoDB’s _id is a string, not an integer:
#![allow(unused)]
fn main() {
impl CategoryTable for Table<MongoDB, Category> {
fn id(&self) -> Column<String> {
self.get_column("_id").unwrap()
}
fn ref_products(&self) -> Table<MongoDB, Product> {
self.get_ref_as("products").unwrap()
}
}
}
The nested route’s closure gets a little simpler as a result — no parse step, just narrow by the
URL’s cat_id string directly:
#![allow(unused)]
fn main() {
crud(|db, p| {
let mut c = Category::table(db).clone();
c.add_condition(c.id().eq(p["cat_id"].as_str()));
c.ref_products()
})
}
.eq(&str) works here because vantage-mongodb’s From<&str> for
AnyMongoType auto-promotes 24-character hex strings to
ObjectId and leaves everything else as String. The comparison fires with the right BSON type
whichever _id convention the collection uses.
main.rs
The crud function’s generic bounds shift from Entity<AnySqliteType> to Entity<AnyMongoType>
and its Fn(SqliteDB, ...) becomes Fn(MongoDB, ...). Body unchanged.
#![allow(unused)]
fn main() {
fn crud<E, F>(make_table: F) -> Router
where
F: Fn(MongoDB, &Params) -> Table<MongoDB, E> + Send + Sync + 'static,
E: Entity<AnyMongoType> + Serialize + DeserializeOwned + Send + Sync + 'static,
}
Connection, id handling, and error mapping are the only handler-level changes:
#![allow(unused)]
fn main() {
// db() returns MongoDB; connect with URL + database name.
let conn = MongoDB::connect("mongodb://localhost:27017", "learn3")
.await
.context("Failed to connect to MongoDB")?;
// item-level ids are MongoId. String → MongoId dispatches to ObjectId when the
// string is a 24-char hex, otherwise stays a plain String, via a `From<String>`
// smart-parse in vantage-mongodb.
let id: MongoId = params["id"].clone().into();
// MongoDB's missing-doc error joins the 404 path.
let status = if message.contains("no row found") || message.contains("Document not found") {
StatusCode::NOT_FOUND
} else {
StatusCode::INTERNAL_SERVER_ERROR
};
}
Running it
cargo run
The collection is empty, so the first request returns an empty array:
curl http://localhost:3001/categories
# []
POST a few categories — responses carry the auto-generated MongoDB ObjectId as the new id:
SWEETS=$(curl -s -X POST http://localhost:3001/categories \
-H 'content-type: application/json' -d '{"name":"Sweet Treats"}' \
| jq -r .id)
echo "$SWEETS"
# 69e2b9101a552c206f5f8468
Create a product in that category by POSTing to the nested route, including the parent id as
category_id in the body:
curl -X POST "http://localhost:3001/categories/$SWEETS/products" \
-H 'content-type: application/json' \
-d "{\"name\":\"Cupcake\",\"price\":120,\"category_id\":\"$SWEETS\"}"
# {"id":"69e2b9101a552c206f5f846a"}
curl "http://localhost:3001/categories/$SWEETS/products"
# [{"name":"Cupcake","price":120,"category_id":"69e2b9101a552c206f5f8468","is_deleted":false}]
Same URL shape as the SQLite version. Same JSON. Same error codes. Handlers, routing, pagination, filtering — nothing in the request path learned that storage moved from a local file to a document database.
CategoryTable::ref_products() still works. MongoDB’s _id defaults to ObjectId, but
application fields like product.category_id arrive as plain JSON strings and get stored as
BSON String. A naive $in: [ObjectId(...)] wouldn’t match. vantage-mongodb sidesteps that
by pushing both representations into the $in inside related_in_condition — an ObjectId
value also emits its 24-char hex string, and a hex-shaped String value also emits the
parsed ObjectId. Traversal works regardless of which form the target stores.
Nothing in MongoDB requires _id to be an ObjectId — it can be any BSON value, including a
plain string. If the app supplies _id explicitly on insert (or the framework generates a
UUID and writes it into _id), both category._id and product.category_id are strings and
everything lines up without the $in dual-push. Useful when you want stable, human-legible ids
or ids that came from an upstream system.
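The dual-push trick can be sketched as plain string handling. In reality vantage-mongodb builds BSON values; here the string `"ObjectId(...)"` merely marks the parsed form, so the shape of the `$in` list is visible:

```rust
// Sketch of the $in dual-push described above: each id contributes both
// representations so the filter matches either storage convention. The
// real code emits BSON; "ObjectId(...)" is a stand-in marker here.
fn dual_push(id: &str) -> Vec<String> {
    let mut forms = vec![id.to_string()];
    if id.len() == 24 && id.bytes().all(|b| b.is_ascii_hexdigit()) {
        // Hex-shaped String also tries the parsed ObjectId form.
        forms.push(format!("ObjectId({id})"));
    }
    forms
}

fn main() {
    // A custom string id matches only as a string.
    assert_eq!(dual_push("my-custom-id").len(), 1);
    // A hex id matches both as String and as ObjectId.
    assert_eq!(dual_push("69e2b9101a552c206f5f8468").len(), 2);
}
```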
Scaling up: CRUD as a one-liner
The crud function, ApiError, Params, and ListQuery aren’t really tied to this app —
they’re generic over any Table<MongoDB, E>. Move them into their own module and main.rs
collapses to exactly what it’s about: connecting the database and registering routes. Three
files do the work:
src/vantage_axum.rs (one file, ~120 lines) — everything HTTP: ApiError, ListQuery,
and the crud<E, F> helper. Accepts any entity E that implements Entity<AnyMongoType>
plus the usual serde bounds.
src/db.rs (17 lines) — the static DB: OnceLock<MongoDB>, an init(url, db) helper
that connects and stores the handle, and a pub fn db() -> MongoDB accessor that hands out
cheap MongoDB clones.
src/main.rs — now down to ~30 lines:
mod category;
mod db;
mod product;
mod vantage_axum;
use axum::Router;
use category::{Category, CategoryTable};
use vantage_axum::crud;
use vantage_mongodb::prelude::*;
#[tokio::main]
async fn main() -> VantageResult<()> {
db::init("mongodb://localhost:27017", "learn3").await?;
let app = Router::new()
.nest("/categories", crud(|db, _| Category::table(db).clone()))
.nest(
"/categories/{cat_id}/products",
crud(|db, p| {
let mut c = Category::table(db).clone();
c.add_condition(c.id().eq(p["cat_id"].as_str()));
c.ref_products()
}),
);
let listener = tokio::net::TcpListener::bind("0.0.0.0:3001").await.unwrap();
axum::serve(listener, app).await.unwrap();
Ok(())
}
Adding a new entity to this server is now two things:
- Write a `Table::new(...)` constructor for it — declarative, one function, same shape we learned in chapter 2.
- Mount it.
#![allow(unused)]
fn main() {
.nest("/widgets", crud(|db, _| Widget::table(db).clone()))
}
That’s the whole surface. GET list, POST create, GET /{id}, PATCH /{id}, DELETE /{id}, pagination via ?page=&per_page=, full-text search via ?q=, 404s for missing ids,
400s for malformed bodies, structured JSON errors. No new handler code. No per-entity error
mapping. No per-entity query-param struct. One line per entity.
Every route plays by the same rules because every route is served by the same crud — the
single description of “what this route does” lives in the entity’s Table definition, and
the HTTP boundary just routes to it. That is the “one description, many operations”
principle from chapter 2’s Table carried all the way to the wire, unbroken.
The vantage_axum module is generic enough to lift directly into a larger codebase — it has
no knowledge of Category, Product, or your particular routes. Drop it into your own
binary, give it a db() accessor, write entity files, mount routes.
The Vantage Journey
Vantage didn’t start as a multi-backend entity framework. It started as a weekend experiment to see if Rust could build SQL queries without feeling like Rust.
This page walks you through each release — what changed, why it mattered, and how the API evolved from raw Postgres queries to a universal persistence layer.
- 0.0 — “DORM” (April–November 2024)
- 0.1 — Vantage is born (December 2024)
- 0.2 — Entity Framework & MDA (February 2025)
- 0.3 — The Great Separation (July–October 2025)
- 0.4 — The Type System Rewrite (November 2025–present)
- The bigger picture
graph LR
D[0.0 DORM] -->|rename| V1[0.1 Vantage]
V1 -->|entity framework| V2[0.2 MDA]
V2 -->|crate split| V3[0.3 UI Adapters]
V3 -->|type rewrite| V4[0.4 Type System]
style D fill:#555,color:#fff
style V1 fill:#4a7c59,color:#fff
style V2 fill:#2d6a8f,color:#fff
style V3 fill:#8f5a2d,color:#fff
style V4 fill:#7c2d8f,color:#fff
0.0 — “DORM” (April–November 2024)
The project was originally called DORM — the Dry ORM. It was Postgres-only, monolithic, and proudly opinionated. Everything lived in one crate.
The core idea was already there: Data Sets. Instead of loading records eagerly, you describe what you want and let the framework figure out the query.
#![allow(unused)]
fn main() {
let clients = Client::table(); // Table<Postgres, Client>
let paying = clients.with_condition(
clients.is_paying_client().eq(&true)
);
let orders = paying.ref_orders(); // Table<Postgres, Order>
for order in orders.get().await? {
println!("#{} total: ${:.2}", order.id, order.total as f64 / 100.0);
}
}
Behind this innocent-looking code, DORM generated a single SQL query with subqueries, joins, and soft-delete filters — all derived from model definitions:
SELECT id,
(SELECT name FROM client WHERE client.id = ord.client_id) AS client_name,
(SELECT SUM((SELECT price FROM product WHERE id = product_id) * quantity)
FROM order_line WHERE order_line.order_id = ord.id) AS total
FROM ord
WHERE client_id IN (SELECT id FROM client WHERE is_paying_client = true)
AND is_deleted = false
No hand-written SQL. No query strings. The framework composed everything from relationship
definitions and table extensions like SoftDelete.
Early commits show a battle with Rust’s borrow checker — the infamous “lifetime hell.”
The solution came on April 28: switching to Arc for shared ownership. This unlocked
clonable data sources and composable table references that define the framework to this day.
Milestones
| Date | What happened |
|---|---|
| Apr 11 | First commit — queries, expressions, SQLite binding |
| Apr 18 | Insert/delete support, first Postgres tests |
| Apr 28 | Arc adoption — escaped lifetime hell |
| May 14 | Query::Join implemented |
| May 25 | has_one, has_many, relationship traversal |
| May 26 | Bakery model example — all entities defined |
0.1 — Vantage is born (December 2024)
On December 12, the framework was renamed from DORM to Vantage and published to crates.io for the first time. The API stayed the same — this was a branding milestone, not an architectural one.
A vantage point gives you a clear view of the landscape below. The framework gives you a clear view of your data — no matter where it lives or how complex the relationships are.
The same month brought the first Axum integration (bakery_api), proving that data sets could drive
REST endpoints naturally:
#![allow(unused)]
fn main() {
async fn list_orders(
client: axum::extract::Query<OrderRequest>,
pager: axum::extract::Query<Pagination>,
) -> impl IntoResponse {
let orders = Client::table()
.with_id(client.client_id.into())
.ref_orders();
let mut query = orders.query();
query.add_limit(Some(pager.per_page));
Json(query.get().await.unwrap())
}
}
Tags v0.1.0 and v0.1.1 were published the same day. SQLx data source landed 11 days later.
0.2 — Entity Framework & MDA (February 2025)
Version 0.2 repositioned Vantage as a full Entity Framework with Model-Driven Architecture. The README doubled in size. The vision expanded from “clever query builder” to “how enterprises should structure business logic.”
The key insight: entities aren’t just database rows. They’re business objects that might live in SQL, NoSQL, a REST API, or a message queue — and your code shouldn’t care which.
#![allow(unused)]
fn main() {
impl Client {
fn table() -> Table<Client, Oracle> { /* ... */ }
fn registration_queue() -> impl Insertable<Client> { /* Kafka */ }
fn admin_api() -> impl DataSet<Client> { /* REST */ }
fn read_csv(file: String) -> impl ReadableDataSet<Client> { /* CSV */ }
}
}
A developer calling Client::registration_queue().insert(id, client).await doesn’t
need to know it’s Kafka underneath. The SDK hides the transport — only the entity
contract matters.
This release also introduced the idea of struct projection — using different Rust types against the same data set to control which fields get queried:
#![allow(unused)]
fn main() {
struct MiniClient { name: String }
struct FullClient { name: String, email: String, balance: Decimal }
// Only fetches `name` from the database
let name = clients.get_id_as::<MiniClient>(42).await?.name;
}
The monolith was getting heavy, though. Everything still lived in one crate, and adding a new database meant touching core code.
0.3 — The Great Separation (July–October 2025)
Version 0.3 broke the monolith into dedicated crates and bet heavily on SurrealDB as the primary backend. The trait-based architecture that defines Vantage today was born here.
One crate became many: vantage-expressions, vantage-table, vantage-dataset,
vantage-surrealdb, surreal-client, vantage-config, vantage-ui-adapters — each
with a focused responsibility.
Table definitions moved from static initialization to a builder pattern:
#![allow(unused)]
fn main() {
// 0.2 — static, Postgres-only
Table::new_with_entity("bakery", postgres())
.with_id_column("id")
.with_column("name")
.with_many("clients", "bakery_id", || Box::new(Client::table()))
// 0.3 — builder, any datasource
Table::<SurrealDB, Client>::new("client", ds.clone())
.with_id_column("id")
.with_column("name")
.with_column("email")
.with_many("orders", "client_id", || Client::order_table())
}
Field accessors now return Expression instead of column objects — making them composable across
query builders:
#![allow(unused)]
fn main() {
// Build conditions from expressions
let active = clients.is_paying_client().eq(true);
let big_spenders = clients.balance().gt(1000);
let query = clients.with_condition(active).with_condition(big_spenders);
}
The AnyTable type-erasure system arrived, enabling generic code that works with any
datasource:
#![allow(unused)]
fn main() {
let tables: Vec<AnyTable> = vec![
AnyTable::new(Client::table()), // SurrealDB
AnyTable::new(Product::table()), // SQLite
];
for table in &tables {
println!("{}: {} records", table.name(), table.count().await?);
}
}
This release culminated with UI adapters for six frameworks — egui, GPUI, Slint, Tauri, Ratatui,
and Cursive — all driven by the same AnyTable interface.
The same bakery model powered a native desktop app (GPUI), a web app (Tauri), a terminal dashboard (Ratatui), and three more — without changing a single line of business logic.

0.4 — The Type System Rewrite (November 2025–present)
Version 0.4 rewrites the type system from the ground up. Custom types per datasource, CBOR
protocol, 7 persistence backends (SurrealDB, Postgres, MySQL, SQLite, MongoDB, CSV, REST API),
ActiveEntity / ActiveRecord patterns, typed columns, unified error handling, and a progressive
trait model where each persistence only implements what its engine supports.
The bigger picture
Looking at the trajectory:
| Version | Core idea | Backends | Crates |
|---|---|---|---|
| 0.0 | Can Rust build SQL smartly? | Postgres | 1 |
| 0.1 | Let’s publish this | Postgres | 1 |
| 0.2 | Entity Framework for Rust | Postgres | 1 |
| 0.3 | Traits, not inheritance | SurrealDB, SQLite | 10+ |
| 0.4 | Strict types, any persistence | SurrealDB, SQLite, Postgres, MySQL, MongoDB, CSV, REST API | 20+ |
What started as 16 commits in April 2024 is now 470+ commits, 189+ pull requests, and a framework that can drive the same business logic across seven backends and six UI frameworks.
The destination hasn’t changed since day one: describe your data once, use it everywhere. Each version just made “everywhere” a little bigger.
What’s New in Vantage 0.4
Version 0.4 is a ground-up rewrite of the type system and a massive expansion of persistence support. If 0.3 proved the architecture, 0.4 makes it production-ready.
- Progressive model — implement only what you need
- Persistence-specific Type Systems
- Table — the interface you get for free
- ActiveEntity and ActiveRecord
- Record<V> — persistence-native value bags
- CBOR everywhere
- New persistences
- Unified error handling
- AnyTable goes universal
- What’s still coming
Progressive model — implement only what you need
In earlier versions, adding a new backend meant implementing everything. Version 0.4 flips this: capabilities unlock progressively based on which traits your persistence (backend) implements. There are two parallel trait hierarchies — one for typed entities, one for schema-less records.
DataSet works with your Rust structs:
#![allow(unused)]
fn main() {
// A CSV file is read-only — and that's fine
impl ReadableDataSet<Product> for CsvFile { /* ... */ }
// A message queue can only append
impl InsertableDataSet<Event> for KafkaTopic { /* ... */ }
// A full database gets the complete toolkit
impl ReadableDataSet<Client> for SurrealDB { /* ... */ }
impl WritableDataSet<Client> for SurrealDB { /* ... */ }
// ReadableDataSet + WritableDataSet automatically unlocks ActiveEntitySet!
}
ValueSet is the schema-less counterpart — same hierarchy, but operates on Record<Value>
instead of entities. Useful for dynamic tables, config-driven schemas, or when you don’t have (or
want) a struct:
#![allow(unused)]
fn main() {
// Read raw records — no struct needed
impl ReadableValueSet for CsvFile { /* returns Record<CsvType> */ }
// Full CRUD on raw values
impl WritableValueSet for SurrealDB { /* accepts Record<AnySurrealType> */ }
// ReadableValueSet + WritableValueSet automatically unlocks ActiveRecordSet!
}
Each layer is opt-in. A CSV persistence never pretends it can build select queries.
The DataSource layer is progressive too. These traits aren’t a strict hierarchy — they combine independently, and each persistence implements only what its engine supports:
DataSource ← marker, all backends
├── TableSource ← CRUD, columns, conditions
├── ExprDataSource ← execute expressions, defer()
└── SelectableDataSource ← query builder (Selectable)
TableExprSource ← count/sum/max/min as composable expressions
requires: TableSource + ExprDataSource
TableQuerySource ← table definition → full query
requires: TableSource + SelectableDataSource
Here’s what each persistence implements:
CSV
└── TableSource (read-only, in-memory conditions)
REST API (vantage-api-client)
├── TableSource (read-only, HTTP GET)
└── ExprDataSource (basic resolve)
MongoDB
├── TableSource (full CRUD, native bson::Document conditions)
└── SelectableDataSource (field projection, $match pipeline)
└── aggregates via native $sum/$max/$min pipeline
Postgres / MySQL / SQLite (via sqlx)
├── TableSource (full CRUD)
├── ExprDataSource (parametric SQL execution)
├── SelectableDataSource (JOINs, CTEs, window functions)
└── TableQuerySource
SurrealDB
├── TableSource (full CRUD)
├── ExprDataSource (CBOR execution)
├── SelectableDataSource (SurrealQL builder)
├── TableExprSource (composable aggregates)
├── TableQuerySource
└── vendor extensions (graph queries, RELATE, live queries)
There’s also MockTableSource in vantage-table and ImDataSource in vantage-dataset — a
fully in-memory persistence backed by HashMap<String, IndexMap<String, Record<Value>>>. Great for
tests and prototyping without any database.
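The combination rules above are ordinary Rust supertrait bounds. This toy model uses the trait names from the docs but empty bodies and a blanket impl — a simplification, not vantage-core's actual definitions:

```rust
// Toy model of the progressive DataSource traits above. Names come from the
// docs; the empty bodies and blanket impl are illustrative simplifications.
trait DataSource {}                 // marker: every backend
trait TableSource: DataSource {}    // CRUD, columns, conditions
trait ExprDataSource: DataSource {} // execute expressions

// TableExprSource exists only where BOTH capabilities are present.
trait TableExprSource: TableSource + ExprDataSource {}
impl<T: TableSource + ExprDataSource> TableExprSource for T {}

struct Csv;    // read-only: TableSource alone
struct Sqlite; // TableSource + ExprDataSource → TableExprSource for free
impl DataSource for Csv {}
impl TableSource for Csv {}
impl DataSource for Sqlite {}
impl TableSource for Sqlite {}
impl ExprDataSource for Sqlite {}

// Generic code can demand exactly the capability level it needs.
fn supports_aggregates<T: TableExprSource>(_: &T) -> bool { true }

fn main() {
    assert!(supports_aggregates(&Sqlite));
    // supports_aggregates(&Csv); // would not compile: Csv lacks ExprDataSource
}
```

A CSV backend never has to stub out expression execution; code that needs it simply cannot be called with a CSV source, and the compiler enforces that.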
Want to add your own? The Adding a New Persistence guide walks through each trait step by step — from type system to full CRUD to multi-backend CLI.
Persistence-specific Type Systems
The vantage-types crate is the foundation of 0.4. Each persistence defines its own type universe
via the vantage_type_system! macro — no more funnelling everything through JSON:
#![allow(unused)]
fn main() {
vantage_type_system!(AnySurrealType, SurrealDB); // CBOR-native types
vantage_type_system!(AnyMongoType, MongoDB); // BSON-native types
vantage_type_system!(AnySqliteType, SqliteDB); // CBOR with integer/real/text markers
vantage_type_system!(AnyPostgresType, PostgresDB); // CBOR with full PG type coverage
vantage_type_system!(AnyMysqlType, MysqlDB); // CBOR with MySQL type coverage
}
All SQL backends (SQLite, Postgres, MySQL) now use CBOR as their internal value representation —
not JSON. This preserves type fidelity that JSON loses (integer vs float, binary blobs, precise
decimals) while keeping the same vantage_type_system! ergonomics.
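One concrete instance of the fidelity claim: JSON pipelines typically represent numbers as `f64`, which cannot hold every `i64`, while CBOR keeps integers as integers. The loss is demonstrable in plain Rust, no CBOR library involved:

```rust
// Why "integer vs float" fidelity matters: routing an i64 through an f64
// (as a JSON number pipeline typically does) silently corrupts large values.
fn round_trip_via_f64(n: i64) -> i64 {
    (n as f64) as i64
}

fn main() {
    let small = 42_i64;
    let big = (1_i64 << 53) + 1; // first integer an f64 cannot represent exactly
    assert_eq!(round_trip_via_f64(small), small);
    assert_ne!(round_trip_via_f64(big), big); // precision silently lost
}
```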
Each SQL backend has detailed type conversion tables covering chrono types (DATE, TIME, TIMESTAMP), numeric types (DECIMAL, BIGINT, FLOAT), and cross-type coercion rules. See the Type Conversions reference for exact round-trip behaviour per column type and Rust type.
Strict conversions replace silent casting — try_into() is explicit and fallible:
#![allow(unused)]
fn main() {
let val: i64 = surreal_value.try_into()?; // fails if it's actually a string
let val: String = mongo_value.try_into()?; // fails if it's actually a number
}
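The same fallible pattern can be shown with a toy value enum. This is not vantage's `AnySurrealType` (those are generated by `vantage_type_system!`), just the standard `TryFrom` mechanics the strict conversions are built on:

```rust
// Toy stand-in for a persistence value type; it only mirrors the idea of
// strict, fallible extraction — no silent casting between variants.
#[derive(Debug, Clone)]
enum ToyValue {
    Int(i64),
    Text(String),
}

impl TryFrom<ToyValue> for i64 {
    type Error = String;
    fn try_from(v: ToyValue) -> Result<i64, String> {
        match v {
            ToyValue::Int(n) => Ok(n),
            other => Err(format!("expected Int, got {other:?}")),
        }
    }
}

fn main() {
    let n: i64 = ToyValue::Int(7).try_into().unwrap();
    assert_eq!(n, 7);
    // A string holding "7" is still a string — it does not coerce.
    let bad: Result<i64, _> = ToyValue::Text("7".into()).try_into();
    assert!(bad.is_err());
}
```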
These types flow through the entire framework, not just storage. Columns carry their type as a
generic parameter (Column<i64>, Column<String>) — surviving type-erasure via original_type so
UI adapters can inspect what a column actually holds. AssociatedExpression<'a, DS, T, R> is
parameterised by the value type T, so a count query on SurrealDB returns AnySurrealType while
the same query on Postgres returns AnyPostgresType. AssociatedQueryable<R> then converts
that into your expected Rust type — all type-checked at compile time.
#![allow(unused)]
fn main() {
// Column preserves type through erasure
let price = Column::<i64>::new("price").with_flag(ColumnFlag::Indexed);
let erased = Column::<AnyType>::from_column(price);
assert_eq!(erased.get_type(), "i64"); // original type survives
// AssociatedExpression carries the persistence's value type
let count: AssociatedExpression<'_, SurrealDB, AnySurrealType, usize> =
table.get_expr_count();
let n: usize = count.get().await?; // executes, converts AnySurrealType → usize
}
This means the full range of your persistence’s native types is preserved end-to-end — from column definition through query building to result extraction — without narrowing down to JSON variants.
Table — the interface you get for free
Once a persistence implements TableSource, the framework hands you Table<DB, Entity> — a
fully-featured abstraction over your data with columns, conditions, ordering, pagination,
relationships, and aggregates. You don’t build any of this yourself; it comes from vantage-table.
Table<DB, E> auto-implements a wide range of traits from the DataSet and ValueSet hierarchies:
ReadableDataSet<E>, WritableDataSet<E>, InsertableDataSet<E>, ActiveEntitySet<E>,
ReadableValueSet, WritableValueSet, InsertableValueSet, and ActiveRecordSet. All of these
come for free once your persistence implements TableSource.
#![allow(unused)]
fn main() {
let table = Product::table(db);
let products = table
    .clone()
    .with_condition(table["is_deleted"].eq(false))
    .with_order(table["price"].desc())
    .with_pagination(Pagination::ipp(25));
// ReadableDataSet, WritableDataSet, ActiveEntitySet — all auto-implemented
for (id, product) in products.list().await? { /* ... */ }
let count = products.get_count().await?;
}
Additionally, table.select() yields a vendor-specific query builder — SurrealSelect,
SqliteSelect, PostgresSelect — giving you full access to the persistence’s native query
capabilities when you need to go beyond what the generic Table API offers.
This dramatically simplifies implementing new persistences. You implement TableSource methods
(read, write, aggregate) and get the entire Table API for free — conditions compose automatically,
columns carry types, references traverse between tables, and AnyTable type-erasure just works.
graph LR
P[Select Persistence]
M[Create Model Layer]
B[Build Business Logic]
E[Export via FFI or API]
P ==> M ==> B ==> E
style P fill:#4a7c59,color:#fff
style M fill:#2d6a8f,color:#fff
style B fill:#8f5a2d,color:#fff
style E fill:#7c2d8f,color:#fff
Pick your persistence (SurrealDB, Postgres, CSV — or several). Define entities and tables in a
shared model crate. Build business logic against Table and DataSet traits — persistence-agnostic
and testable with mocks. Finally, expose your model to other languages via FFI (C ABI, PyO3, UniFFI,
WASM) or through API layers (Axum, gRPC). The model stays in Rust; consumers don’t need to know.
Read more in Model-Driven Architecture and
Three Paths for Developers.
ActiveEntity and ActiveRecord
The vantage-dataset crate introduces two flavours of the active record pattern:
ActiveEntity<D, E> — wraps a typed entity. Derefs straight to your struct, tracks the ID, and
saves back to any WritableDataSet:
#![allow(unused)]
fn main() {
let mut user = users.get_entity(&id).await?.unwrap();
user.email = "new@example.com".to_string(); // modify via DerefMut
user.save().await?; // persists the change
}
ActiveRecord<D> — same idea, but schema-less. Works with Record<Value> instead of a concrete
struct, perfect for dynamic tables or config-driven entities:
#![allow(unused)]
fn main() {
let mut rec = table.get_value_record(&id).await?;
rec["status"] = json!("active");
rec.save().await?;
}
Both auto-unlock via blanket impls — if your datasource implements ReadableDataSet +
WritableDataSet, you get ActiveEntitySet for free. No extra code.
The API is designed for real-world patterns like get-or-create:
#![allow(unused)]
fn main() {
let mut user = users.get_entity(&id).await?
.unwrap_or_else(|| users.new_entity(id, User::default()));
user.name = "Alice".into();
user.save().await?;
}
Record<V> — persistence-native value bags
Record<V> is an IndexMap<String, V> wrapper — but V is not serde_json::Value. It’s your
persistence’s own type. A SurrealDB table returns Record<AnySurrealType>, Postgres returns
Record<AnyPostgresType>, MongoDB returns Record<AnyMongoType>. This ensures the full range of
your persistence’s native types is preserved — not narrowed down to what JSON can represent.
#![allow(unused)]
fn main() {
// Each persistence speaks its native types
let record: Record<AnySurrealType> = surreal_table.get_value(&id).await?;
let record: Record<AnyPostgresType> = pg_table.get_value(&id).await?;
let record: Record<AnyMongoType> = mongo_table.get_value(&id).await?;
// Type-safe extraction — respects the persistence's type boundaries
let name: String = record["name"].try_get::<String>().unwrap();
}
Record is the common currency across the framework — ReadableValueSet, WritableValueSet,
ActiveRecord, and entity conversion all work through it. JSON conversion only happens at the
boundary when you need AnyTable type-erasure (via from_table()).
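To make the idea concrete, here is a std-only sketch of a value bag with type-checked extraction. The `AnyValue` enum, the `BTreeMap` backing, and the method names are invented for illustration — the real `Record` wraps an `IndexMap` and `V` is a persistence-specific `AnyType`:

```rust
use std::collections::BTreeMap;

// Invented stand-in for a persistence's AnyType: each value remembers
// which type family it belongs to.
#[derive(Debug, Clone, PartialEq)]
enum AnyValue {
    Int(i64),
    Text(String),
}

// Invented stand-in for Record<V>: an ordered field-name → value map.
struct Record(BTreeMap<String, AnyValue>);

impl Record {
    fn try_get_i64(&self, key: &str) -> Option<i64> {
        match self.0.get(key) {
            Some(AnyValue::Int(v)) => Some(*v),
            _ => None, // wrong variant: reject instead of coercing
        }
    }
    fn try_get_str(&self, key: &str) -> Option<&str> {
        match self.0.get(key) {
            Some(AnyValue::Text(s)) => Some(s),
            _ => None,
        }
    }
}

fn main() {
    let mut fields = BTreeMap::new();
    fields.insert("name".to_string(), AnyValue::Text("Cupcake".into()));
    fields.insert("price".to_string(), AnyValue::Int(120));
    let rec = Record(fields);
    assert_eq!(rec.try_get_str("name"), Some("Cupcake"));
    assert_eq!(rec.try_get_i64("price"), Some(120));
    assert_eq!(rec.try_get_i64("name"), None); // Text ≠ Int — rejected
}
```

The point of the sketch is the rejection path: extraction across type families returns `None` rather than quietly converting.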
CBOR everywhere
The SurrealDB client switched from JSON to binary CBOR, improving serialization performance and type fidelity. All backends followed — Postgres, MySQL, SQLite.
New persistences
MongoDB, CSV, and REST API persistences joined SurrealDB and SQLite. CSV evaluates conditions in-memory. MongoDB uses native BSON filters — no expression translation needed:
#![allow(unused)]
fn main() {
// Same handle_commands function — different persistences
handle_commands(SurrealDB::table("product")).await?;
handle_commands(CsvFile::table("products.csv")).await?;
handle_commands(MongoDB::table("product")).await?;
}
Unified error handling
vantage-core introduced VantageError with structured context — replacing the patchwork of
Box<dyn Error> and .unwrap() calls:
#![allow(unused)]
fn main() {
use vantage_core::{error, util::error::Context, Result};
connection.connect()
.with_context(|| error!("Failed to connect", dsn = &dsn, timeout = 30))?;
}
AnyTable goes universal
AnyTable::from_table() now wraps any datasource whose types convert to/from JSON, using an
internal JsonAdapter for on-the-fly conversion. This means your generic code works with
persistences that don’t even share a value type:
#![allow(unused)]
fn main() {
// Wrap a MongoDB table for use in generic code
let any: AnyTable = AnyTable::from_table(mongo_products);
let any: AnyTable = AnyTable::from_table(csv_products);
// Both work through the same interface
}
What’s still coming
Trait boundary refinements — Aggregates (get_count, get_sum) currently require
SelectableDataSource but should only need TableSource, so non-query backends like MongoDB
can use them directly. column_table_values_expr forces an ExprDataSource dependency that
document-oriented backends don’t need.
SurrealDB reference traversal — IN subqueries return record objects instead of scalar
values. Needs SELECT VALUE id — a SurrealDB-specific construct not yet in the generic
Selectable trait.
Type system gaps — Vec<u8> (binary data) and Uuid need type trait implementations
across backends. Bind/read paths exist — just missing the impl XxxType wiring.
Query builder — sql_fx!() macro for mixed-type function calls, Expression::empty()
sweep, PostgreSQL ingress scripts.
Architecture — Transaction support, Table JOIN preserving conditions and resolving alias
clashes, Condition::or() beyond two arguments, expression refactor (split Owned/Lazy).
Someday — Table aggregations (GROUP BY), disjoint subtypes pattern, replayable idempotent operations, “Realworld” example application.
Don’t force every persistence into the same mould. Let each one implement what it can, carry its own types, and unlock API surface progressively. The framework adapts to the datasource — not the other way around.
Expressions & Queries
Vantage builds queries without string concatenation. The vantage-expressions crate provides a
type-safe, composable expression system that works across all persistence backends — SQL, SurrealDB,
MongoDB, CSV, and anything you add yourself.
- The core idea
- Vendor macros
- Composing expressions
- Identifier quoting
- ExprDataSource — executing expressions
- Deferred expressions — cross-database values
- Selectable — the query builder interface
- SelectableDataSource — wiring it up
- Expressive trait
The core idea
An Expression<T> is a template string with typed parameters:
#![allow(unused)]
fn main() {
let expr = sqlite_expr!("SELECT {} FROM {} WHERE {} > {}",
(ident("name")), (ident("product")), (ident("price")), 100i64);
// → SELECT "name" FROM "product" WHERE "price" > ?1 (with 100 bound as i64)
}
Parameters are never interpolated into the string. They’re carried separately, each tagged with a
type marker from your persistence’s type system. The bind layer uses these markers to call the right
driver method — bind_i64, bind_str, bind_bool — no guessing, no silent coercion.
Three kinds of parameters:
- Scalar — a typed value: 42i64, "hello", true
- Nested — another expression, composed into the template
- Deferred — a closure that executes later (cross-database resolution)
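As a rough illustration of scalar parameters kept apart from the template, here is a std-only sketch — `Expr`, `Param`, and the `?N` rendering are invented stand-ins, not the actual vantage-expressions types:

```rust
// Invented sketch: the template keeps "{}" placeholders and carries typed
// parameters separately, so values are never spliced into the string.
#[derive(Debug, Clone, PartialEq)]
enum Param {
    Int(i64),
    Text(String),
    Bool(bool),
}

struct Expr {
    template: String,   // e.g. "SELECT * FROM t WHERE price > {}"
    params: Vec<Param>, // bound positionally at execution time
}

impl Expr {
    fn new(template: &str, params: Vec<Param>) -> Self {
        Expr { template: template.to_string(), params }
    }
    /// Render with numbered placeholders (?1, ?2, …), the shape SQLite sees.
    fn sql(&self) -> String {
        let mut out = String::new();
        let mut n = 0;
        let mut rest = self.template.as_str();
        while let Some(i) = rest.find("{}") {
            n += 1;
            out.push_str(&rest[..i]);
            out.push_str(&format!("?{}", n));
            rest = &rest[i + 2..];
        }
        out.push_str(rest);
        out
    }
}

fn main() {
    let e = Expr::new(
        "SELECT name FROM product WHERE price > {} AND deleted = {}",
        vec![Param::Int(100), Param::Bool(false)],
    );
    // Only placeholders reach the SQL string; values travel separately.
    assert_eq!(e.sql(), "SELECT name FROM product WHERE price > ?1 AND deleted = ?2");
    assert_eq!(e.params.len(), 2);
}
```

Because the values never enter the string, injection is impossible by construction — there is nothing to escape.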
Vendor macros
Each persistence provides a convenience macro that produces Expression<AnyType> with the correct
type wrapping:
#![allow(unused)]
fn main() {
let e = sqlite_expr!("SELECT * FROM product WHERE price > {}", 100i64);
let e = surreal_expr!("SELECT * FROM product WHERE price > {}", 100i64);
let e = postgres_expr!("SELECT * FROM product WHERE price > {}", 100i64);
let e = mysql_expr!("SELECT * FROM product WHERE price > {}", 100i64);
}
Same syntax, different type universes. The compiler ensures you can’t accidentally mix an
Expression<AnySqliteType> into a SurrealDB query.
Composing expressions
Expressions nest naturally. Parenthesised arguments call .expr() automatically:
#![allow(unused)]
fn main() {
let condition = sqlite_expr!("{} > {}", (ident("price")), 100i64);
let query = sqlite_expr!("SELECT {} FROM {} WHERE {}",
(ident("name")), (ident("product")), (condition));
}
The ExpressionFlattener collapses all nesting into a single flat template with positional
parameters — each one still carrying its type marker.
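A minimal sketch of what such flattening involves, using invented types (the real ExpressionFlattener also preserves type markers and handles deferred parameters):

```rust
// Invented sketch: nested templates collapse into one flat template whose
// parameters come out in left-to-right order.
#[derive(Clone)]
enum Part {
    Scalar(i64),       // a typed value (only i64 here, for brevity)
    Nested(Box<Expr>), // another expression spliced into the template
}

#[derive(Clone)]
struct Expr {
    template: String,
    parts: Vec<Part>,
}

/// Splice nested templates into the parent and collect every scalar
/// parameter positionally.
fn flatten(e: &Expr) -> (String, Vec<i64>) {
    let mut template = String::new();
    let mut params = Vec::new();
    let mut rest = e.template.as_str();
    for part in &e.parts {
        let i = rest.find("{}").expect("one placeholder per part");
        template.push_str(&rest[..i]);
        rest = &rest[i + 2..];
        match part {
            Part::Scalar(v) => {
                template.push_str("{}");
                params.push(*v);
            }
            Part::Nested(inner) => {
                let (t, p) = flatten(inner); // recurse into the nested expression
                template.push_str(&t);
                params.extend(p);
            }
        }
    }
    template.push_str(rest);
    (template, params)
}

fn main() {
    let condition = Expr {
        template: "price > {}".into(),
        parts: vec![Part::Scalar(100)],
    };
    let query = Expr {
        template: "SELECT name FROM product WHERE {} AND stock > {}".into(),
        parts: vec![Part::Nested(Box::new(condition)), Part::Scalar(5)],
    };
    let (t, p) = flatten(&query);
    assert_eq!(t, "SELECT name FROM product WHERE price > {} AND stock > {}");
    assert_eq!(p, vec![100, 5]);
}
```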
For building lists (e.g. multi-row INSERT), use Expression::from_vec:
#![allow(unused)]
fn main() {
let row1 = sqlite_expr!("({}, {})", "tart", 220i64);
let row2 = sqlite_expr!("({}, {})", "pie", 299i64);
let rows = Expression::from_vec(vec![row1, row2], ", ");
}
Identifier quoting
SQL identifiers need quoting — and each database uses different quote characters. The Identifier
struct handles this by implementing Expressive<T> for each backend type:
#![allow(unused)]
fn main() {
// Quoting adapts to the expression's type context
sqlite_expr!("SELECT {}", (ident("name"))); // → SELECT "name"
mysql_expr!("SELECT {}", (ident("name"))); // → SELECT `name`
// Qualified identifiers
sqlite_expr!("SELECT {}", (ident("name").dot_of("u"))); // → SELECT "u"."name"
// Aliases
mysql_expr!("SELECT {}", (ident("name").with_alias("n"))); // → SELECT `name` AS `n`
}
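Conceptually, the quoting rule is a small dialect-keyed function. The sketch below is an invented illustration — the real Identifier dispatches on the expression’s type parameter rather than on a runtime enum:

```rust
// Invented sketch of dialect-aware identifier quoting.
#[derive(Clone, Copy)]
enum Dialect {
    Sqlite, // "name"
    Mysql,  // `name`
}

/// Quote an identifier, escaping embedded quote characters by doubling them.
fn quote(dialect: Dialect, name: &str) -> String {
    let (q, escaped) = match dialect {
        Dialect::Sqlite => ('"', name.replace('"', "\"\"")),
        Dialect::Mysql => ('`', name.replace('`', "``")),
    };
    format!("{q}{escaped}{q}")
}

/// Qualified identifier, e.g. "u"."name".
fn dot_of(dialect: Dialect, table: &str, column: &str) -> String {
    format!("{}.{}", quote(dialect, table), quote(dialect, column))
}

fn main() {
    assert_eq!(quote(Dialect::Sqlite, "name"), "\"name\"");
    assert_eq!(quote(Dialect::Mysql, "name"), "`name`");
    assert_eq!(dot_of(Dialect::Sqlite, "u", "name"), "\"u\".\"name\"");
    // Embedded quote characters are doubled, never stripped.
    assert_eq!(quote(Dialect::Sqlite, "we\"ird"), "\"we\"\"ird\"");
}
```

Doubling the quote character is the standard escape in both dialects, which is why quoting (not blacklisting) is the safe way to handle arbitrary identifier names.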
ExprDataSource — executing expressions
The ExprDataSource<T> trait connects expressions to a live database:
#![allow(unused)]
fn main() {
// Execute directly
let result: AnySqliteType = db.execute(&expr).await?;
// Associate with an expected return type
let count: i64 = db.associate::<i64>(sqlite_expr!("SELECT COUNT(*) FROM product"))
.get().await?;
}
AssociatedExpression<'a, DS, T, R> carries both the expression and a reference to the datasource.
Call .get() to execute and convert in one step. The return type R is checked at compile time.
Deferred expressions — cross-database values
Sometimes a query on one database needs a value from another. defer() wraps a query as a closure
that resolves at execution time:
#![allow(unused)]
fn main() {
// Query config_db for a threshold — but don't execute yet
let threshold = config_db.defer(
sqlite_expr!("SELECT value FROM config WHERE key = {}", "min_price")
);
// Use the deferred value in a query against shop_db
let expensive = Expression::<AnySqliteType>::new(
"SELECT name FROM product WHERE price >= {}",
vec![ExpressiveEnum::Deferred(threshold)],
);
let result = shop_db.execute(&expensive).await?;
// 1. Resolves deferred → calls config_db, gets 150
// 2. Binds 150 as a scalar parameter
// 3. Executes against shop_db
}
This is not a subquery — the deferred query runs first, produces a concrete value, and that value gets bound as a regular parameter.
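A synchronous, std-only sketch of that resolve-then-bind order (types invented for illustration; the real defer() is async and runs a query against the other database):

```rust
// Invented sketch: a deferred parameter is a closure that is resolved to a
// concrete value before binding — it never becomes a subquery.
enum Param {
    Scalar(i64),
    Deferred(Box<dyn Fn() -> i64>),
}

/// Resolve every deferred parameter to a plain scalar before binding.
fn resolve(params: Vec<Param>) -> Vec<i64> {
    params
        .into_iter()
        .map(|p| match p {
            Param::Scalar(v) => v,
            Param::Deferred(f) => f(), // e.g. a lookup against config_db
        })
        .collect()
}

fn main() {
    // Pretend this closure queries config_db for "min_price".
    let threshold = Param::Deferred(Box::new(|| 150));
    let bound = resolve(vec![threshold, Param::Scalar(10)]);
    assert_eq!(bound, vec![150, 10]); // the deferred value became a plain bind
}
```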
Selectable — the query builder interface
The Selectable<T> trait is the standard interface for building SELECT queries. Each persistence
provides its own SELECT struct (SqliteSelect, SurrealSelect, PostgresSelect) implementing this
trait:
#![allow(unused)]
fn main() {
let select = SqliteSelect::new()
.with_source("product")
.with_field("name")
.with_field("price")
.with_condition(sqlite_expr!("{} = {}", (ident("is_deleted")), false))
.with_order(sqlite_expr!("{}", (ident("price"))), false)
.with_limit(Some(10), None);
}
Builder methods come free from the trait — with_source, with_field, with_condition,
with_order, with_limit. You only implement the mutating methods (add_field,
add_where_condition, etc.).
Aggregate shortcuts clone the query and replace fields:
#![allow(unused)]
fn main() {
let count = select.as_count(); // SELECT COUNT(*) FROM ...
let total = select.as_sum(sqlite_expr!("{}", (ident("price")))); // SELECT SUM("price") FROM ...
}
SelectableDataSource — wiring it up
SelectableDataSource<T> connects the query builder to execution:
#![allow(unused)]
fn main() {
impl SelectableDataSource<AnySqliteType> for SqliteDB {
type Select = SqliteSelect;
fn select(&self) -> Self::Select { SqliteSelect::new() }
async fn execute_select(&self, select: &Self::Select) -> Result<Vec<AnySqliteType>> {
self.execute(&select.expr()).await
}
}
}
Once implemented, table.select() returns your vendor-specific builder pre-populated with the
table’s columns, conditions, and ordering — ready for execution or further customisation.
Expressive trait
Anything that implements Expressive<T> can be used inside an expression. This includes:
- Columns — table["price"]
- Operations — table["price"].gt(100)
- Identifiers — ident("name")
- Query builders — select.expr()
- Sort orders — table["name"].desc()
- Scalar values — 42i64, "hello", true
- Closures — that’s what defer() returns
You can implement Expressive<T> for your own types to make them composable into the expression
system.
Model-Driven Architecture
Vantage is opinionated about how you structure business software. The framework prescribes a Model-Driven Architecture (MDA) where entities, relationships, and business rules live in a shared model crate — decoupled from any specific persistence, UI, or API layer.
- The idea
- Anatomy of a model crate
- Defining entities
- Table constructors
- Relationships
- Computed fields
- Connection management
- Using the model
- Type-erased access
- The layered architecture
The idea
Most Rust projects scatter database queries across handlers, services, and utilities. When the schema changes or a new backend is needed, you’re hunting through dozens of files.
Vantage inverts this. You define your entities once in a model crate, then every consumer — REST
API, CLI, desktop UI, background worker — uses the same Table and DataSet interfaces. The model
is the source of truth.
graph TD
M[Model Crate] --> API[Axum API]
M --> CLI[CLI Tool]
M --> UI[Desktop UI]
M --> Worker[Background Worker]
M --> Test[Mock Tests]
style M fill:#4a7c59,color:#fff
Anatomy of a model crate
The bakery_model3 crate in the Vantage repo demonstrates the pattern. Here’s how it’s structured:
bakery_model3/
├── src/
│ ├── lib.rs ← re-exports, DB connection helpers
│ ├── bakery.rs ← Bakery entity + table constructors
│ ├── client.rs ← Client entity + table constructors
│ ├── order.rs ← Order entity + table constructors
│ └── product.rs ← Product entity + table constructors
└── examples/
├── cli.rs ← multi-source CLI using AnyTable
└── 0-intro.rs ← direct SurrealDB queries
Each entity file follows the same pattern: struct → trait impls → table constructors.
Defining entities
An entity is a plain Rust struct. The #[entity(...)] macro generates Record conversions for each
persistence’s type system:
#![allow(unused)]
fn main() {
#[entity(CsvType, SurrealType, SqliteType, PostgresType, MongoType)]
#[derive(Debug, Clone, PartialEq, Default)]
pub struct Client {
pub name: String,
pub email: String,
pub contact_details: String,
pub is_paying_client: bool,
pub bakery_id: Option<String>,
}
}
One struct, five type systems. The same Client works with CSV files, SurrealDB, SQLite, Postgres,
and MongoDB — each using its own native Record<AnyType> representation.
The entity struct does not include an id field. IDs are managed by the table via
with_id_column() — keeping the entity focused on business data.
Table constructors
Each entity provides table constructors per persistence. These are plain functions that return
Table<DB, Entity> with columns, relationships, and computed fields pre-configured:
#![allow(unused)]
fn main() {
impl Client {
pub fn csv_table(csv: Csv) -> Table<Csv, Client> {
Table::new("client", csv)
.with_column_of::<String>("name")
.with_column_of::<String>("email")
.with_column_of::<bool>("is_paying_client")
.with_column_of::<String>("bakery_id")
.with_one("bakery", "bakery_id", Bakery::csv_table)
.with_many("orders", "client_id", Order::csv_table)
}
pub fn surreal_table(db: SurrealDB) -> Table<SurrealDB, Client> {
Table::new("client", db)
.with_id_column("id")
.with_column_of::<String>("name")
.with_column_of::<String>("email")
.with_column_of::<bool>("is_paying_client")
.with_one("bakery", "bakery", Bakery::surreal_table)
.with_many("orders", "client", Order::surreal_table)
.with_expression("order_count", |t| {
let orders = t.get_subquery_as::<Order>("orders").unwrap();
orders.get_count_query()
})
}
pub fn sqlite_table(db: SqliteDB) -> Table<SqliteDB, Client> {
Table::new("client", db)
.with_id_column("id")
.with_column_of::<String>("name")
.with_column_of::<String>("email")
.with_column_of::<bool>("is_paying_client")
.with_column_of::<String>("bakery_id")
.with_one("bakery", "bakery_id", Bakery::sqlite_table)
.with_many("orders", "client_id", Order::sqlite_table)
}
}
}
Notice how the shape is consistent but details differ — SurrealDB uses embedded document references
("bakery" not "bakery_id"), and only SurrealDB gets the computed order_count expression field
(which requires correlated subquery support).
Relationships
Relationships are declared on the table, not the entity. Two methods:
- with_one("name", "fk_field", constructor) — foreign key to parent (many-to-one)
- with_many("name", "fk_field", constructor) — parent to children (one-to-many)
Traversal is synchronous and returns a new Table with conditions applied:
#![allow(unused)]
fn main() {
let clients = Client::surreal_table(db);
let condition = clients["is_paying_client"].eq(true);
let paying = clients.with_condition(condition);
// Traverse — returns Table<SurrealDB, Order> with subquery condition
let orders = paying.get_ref_as::<SurrealDB, Order>("orders").unwrap();
// The generated query filters orders by paying clients automatically
for (id, order) in orders.list().await? {
println!("{}: {}", id, order.total);
}
}
Computed fields
with_expression adds fields that don’t exist in the database — they’re computed via correlated
subqueries:
#![allow(unused)]
fn main() {
.with_expression("order_count", |t| {
let orders = t.get_subquery_as::<Order>("orders").unwrap();
orders.get_count_query()
})
// SELECT *, (SELECT COUNT(*) FROM order WHERE order.client = client.id)
// AS order_count FROM client
}
These fields appear in ReadableValueSet results alongside physical columns.
Connection management
The model crate owns database connections. bakery_model3 uses a OnceLock<SurrealDB> pattern for
global access, with a DSN-based connection function:
#![allow(unused)]
fn main() {
pub async fn connect_surrealdb() -> Result<()> {
let dsn = std::env::var("SURREALDB_URL")
.unwrap_or_else(|_| "cbor://root:root@localhost:8000/bakery/v2".into());
let client = SurrealConnection::dsn(&dsn)?
.connect().await?;
set_surrealdb(SurrealDB::new(client))
}
}
Consumers call connect_surrealdb() once at startup, then use Client::surreal_table(surrealdb())
anywhere.
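A std-only sketch of the OnceLock pattern, with the connection simplified to a String (set_db and db are hypothetical names mirroring set_surrealdb and surrealdb):

```rust
use std::sync::OnceLock;

// The model crate owns one global, lazily-initialised connection handle.
// Here the "connection" is just a DSN string for illustration.
static DB: OnceLock<String> = OnceLock::new();

fn set_db(conn: String) {
    // set() fails if called twice — connecting is a startup-only operation.
    DB.set(conn).expect("connection already initialised");
}

fn db() -> &'static String {
    DB.get().expect("call connect() at startup first")
}

fn main() {
    set_db("cbor://localhost:8000/bakery".to_string());
    // Any consumer anywhere in the process can now borrow the connection.
    assert_eq!(db(), "cbor://localhost:8000/bakery");
}
```

OnceLock gives a thread-safe write-once cell without lazy-static dependencies, which is why it suits a connection that is established once and then shared read-only.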
Using the model
Once the model crate exists, consumers are simple. They don’t know or care about SQL, SurrealQL, or BSON — they work with entities and tables:
#![allow(unused)]
fn main() {
// In an Axum handler
async fn list_clients() -> Json<Vec<Client>> {
let clients = Client::surreal_table(surrealdb());
let condition = clients["is_paying_client"].eq(true);
let clients = clients.with_condition(condition);
Json(clients.list().await.unwrap().into_values().collect())
}
// In a CLI tool
let table = Client::sqlite_table(db);
println!("{} clients", table.get_count().await?);
// In a test — no database needed
let mock = MockTableSource::new()
.with_data("client", test_data).await;
let table = Table::<MockTableSource, Client>::new("client", mock);
assert_eq!(table.get_count().await?, 3);
}
The same model crate can expose table constructors for multiple persistences. Your production code uses Postgres, your CLI reads CSV exports, your tests use mocks — all sharing the same entity definitions and business rules.
Type-erased access
For truly generic code (UI grids, admin panels, config-driven tools), wrap tables with AnyTable:
#![allow(unused)]
fn main() {
let tables: Vec<AnyTable> = vec![
AnyTable::from_table(Client::surreal_table(db.clone())),
AnyTable::from_table(Product::sqlite_table(sqlite.clone())),
AnyTable::from_table(Order::csv_table(csv.clone())),
];
// Same code handles all three — different databases, same interface
for table in &tables {
println!("{}: {} records", table.table_name(), table.get_count().await?);
}
}
See also: bakery_model3/examples/cli.rs for a complete multi-source CLI built on this pattern.
The layered architecture
Putting it all together, Vantage prescribes four layers:
┌─────────────────────────────────────────────────┐
│ 4. Consumers │
│ Axum API · CLI · egui · Tauri · gRPC │
├─────────────────────────────────────────────────┤
│ 3. Business Logic │
│ Traits on Table · custom methods · rules │
├─────────────────────────────────────────────────┤
│ 2. Model Crate │
│ Entities · Table constructors · Relations │
├─────────────────────────────────────────────────┤
│ 1. Persistence │
│ SurrealDB · Postgres · SQLite · CSV · API │
└─────────────────────────────────────────────────┘
Layer 1 is implemented once per database (or use an existing Vantage crate). Layer 2 is your model
crate — entity definitions, relationships, computed fields. Layer 3 adds business-specific traits
and methods on top of Table. Layer 4 is any number of consumers that import the model and don’t
think about persistence.
The model is the only place that knows about database structure. Business logic works against
abstract Table and DataSet interfaces. Consumers work against business logic traits. Change
the database — only layer 1 and 2 change. Add a new UI — only layer 4 changes. The architecture
scales to hundreds of entities and dozens of developers without coupling.
Three Paths for Developers
Vantage gives you three ways to work with data. Each path builds on the previous one, and all three converge into a unified data abstraction layer.
graph TD
EF["① Entity Framework<br/>CRUD over business objects"]
QB["② Query Building<br/>Vendor-specific SQL power"]
CP["③ Custom Persistences<br/>APIs, caches, reactive data"]
EF --> DAL["Universal Data Abstraction Layer"]
QB --> DAL
CP --> DAL
DAL --> E["Export: FFI · API · UI · SDK"]
style EF fill:#4a7c59,color:#fff
style QB fill:#2d6a8f,color:#fff
style CP fill:#8f5a2d,color:#fff
style DAL fill:#7c2d8f,color:#fff
style E fill:#555,color:#fff
① Entity Framework
The most common path. Define entities, build tables, use DataSet and ActiveEntity for CRUD —
your code never touches SQL or any query language.
#![allow(unused)]
fn main() {
// Define once
#[entity(SurrealType, SqliteType, PostgresType)]
struct Product {
name: String,
price: i64,
is_deleted: bool,
}
// Use everywhere
let table = Product::surreal_table(db);
let condition = table["price"].gt(200);
let expensive = table.clone().with_condition(condition);
let count = expensive.get_count().await?;
let mut item = table.get_entity(&id).await?.unwrap();
item.price = 350;
item.save().await?;
}
This is the bread and butter of enterprise development — hundreds of entities, each with relationships, computed fields, and business rules, all persistence-agnostic and testable with mocks.
You’re building business software with well-defined entities (Client, Order, Invoice). You want clean separation between persistence and logic. You don’t need vendor-specific SQL features.
② Query Building
When you need the full power of your database — JOINs, CTEs, window functions, JSONB operators, array aggregation, recursive queries — drop into the vendor-specific query builder.
SQLite — CASE, UNION, window functions
#![allow(unused)]
fn main() {
use vantage_sql::primitives::case::Case;
let select = SqliteSelect::new()
.with_source("users")
.with_field("name")
.with_field("salary")
.with_expression(
Case::new()
.when(sqlite_expr!("{} >= {}", (ident("salary")), 100000.0f64),
sqlite_expr!("{}", "senior"))
.when(sqlite_expr!("{} >= {}", (ident("salary")), 60000.0f64),
sqlite_expr!("{}", "mid"))
.else_(sqlite_expr!("{}", "junior"))
.as_alias("band"),
)
.with_order(ident("salary"), Order::Desc);
}
PostgreSQL — DISTINCT ON, LATERAL JOIN, array ops
#![allow(unused)]
fn main() {
let select = PostgresSelect::new()
.with_distinct_on(ident("user_id").dot_of("o"))
.with_source_as("orders", "o")
.with_expression(ident("name").dot_of("u"))
.with_expression(ident("id").dot_of("o").with_alias("order_id"))
.with_expression(ident("total").dot_of("o"))
.with_join(PostgresSelectJoin::inner(
"users", "u",
ident("id").dot_of("u").eq(ident("user_id").dot_of("o")),
))
.with_order(ident("user_id").dot_of("o"), Order::Asc)
.with_order(ident("created_at").dot_of("o"), Order::Desc);
}
Both examples are type-safe — parameters are bound with the correct CBOR type markers, identifiers are quoted for the target dialect, and results deserialize into typed structs. See the full test suites: SQLite complex queries, PostgreSQL complex queries.
Complex queries don’t have to live in isolation. Wrap them as expression fields
(with_expression) or as custom methods on your Table — they become part of the entity
framework, available to every consumer of your model crate.
You need vendor-specific features — recursive CTEs, window functions, DISTINCT ON,
JSONB operators, generate_series, array aggregation. You want full control over the query
while keeping type safety and parameterised binding.
③ Custom Persistences
Vantage isn’t limited to databases. Implement the persistence traits for anything that stores or produces data:
- REST APIs — vantage-api-client wraps any paginated JSON API as a read-only TableSource
- API pools — vantage-api-pool adds connection pooling, prefetching, and rate limiting
- CSV files — vantage-csv reads structured files with in-memory conditions
- Local caches — vantage-live syncs a fast cache (ImTable, ReDB) with a slow backend (Postgres, SurrealDB) for responsive UIs
- Message queues — implement InsertableDataSet for append-only sources like Kafka topics
- Mixed sources — read from SQL, write through a queue, cache in memory — same Table interface
#![allow(unused)]
fn main() {
// Read from a REST API
let api_products = Product::api_table(RestApi::new("https://api.example.com/products"));
// Cache locally for fast UI
let live = LiveTable::new(
Product::surreal_table(db), // permanent backend
ImTable::new(&cache, "products"), // fast cache
);
// Both implement the same traits — your UI code doesn't care
let products: Vec<Product> = live.list().await?.into_values().collect();
}
You’re integrating with external APIs, building offline-first applications, implementing reactive data layers, or combining multiple data sources behind a single interface. Your custom persistence becomes a first-class citizen in the entity framework.
Convergence
All three paths converge into the same data abstraction layer. Entity framework tables, raw
query results, and custom persistence data all flow through Table, DataSet, and AnyTable —
giving you one interface for the entire organisation’s data.
┌──────────────────────────────────────────────────────────────┐
│ Data Abstraction Layer │
│ │
│ Table<SurrealDB, Client> ← entity framework │
│ Table<PostgresDB, Report> ← complex queries │
│ Table<RestApi, ExternalData> ← custom persistence │
│ Table<LiveTable, Product> ← cached + reactive │
│ │
│ All implement: DataSet · ValueSet · ActiveEntitySet │
│ All wrap into: AnyTable (type-erased, generic code) │
└──────────────────────────────────────────────────────────────┘
│
┌───────────────┼───────────────┐
▼ ▼ ▼
Axum API egui/Tauri FFI / SDK
This is where Rust’s type system pays off at scale. Every Table is checked at compile time. Every
persistence has strict type boundaries. Every consumer works against abstract traits. Add a new
database, API, or cache — nothing else changes. Add a new UI framework or language binding — the
model stays the same.
Vantage turns your data layer into a universal abstraction. Start with one database and a handful of entities. Over time, add more persistences, more entities, more consumers. The architecture doesn’t bend — Rust’s ownership model and trait system ensure each layer stays clean, each boundary stays enforced, and each team works independently.
Adding a New Persistence
So you want to connect Vantage to a new database? This guide walks through the process in six incremental steps — each one unlocks more framework features. You don’t have to implement all six; stop whenever your persistence has enough capability for your use case.
- Overview
- Step 1: Type System
- Step 2: Expressions
- Step 3: Query Builder
- Step 4: Table & CRUD
- Step 5: Relationships
- Step 6: Multi-Backend Applications
Overview
| Step | What you build | What it unlocks | Can skip? |
|---|---|---|---|
| 1. Type System | vantage_type_system! macro, AnyType, Record conversions | Type-safe values, struct ↔ record mapping | Required |
| 2. Expressions | Vendor macro, ExprDataSource | Execute raw queries, cross-database defer() | Skip for simple read-only sources (CSV) |
| 3. Query Builder | Selectable, SelectableDataSource | Composable SELECT with conditions, ordering, limits | Skip if your persistence has no query language |
| 4. Table & CRUD | TableSource, entity tables, aggregates, writes | Table<DB, Entity>, full CRUD, ReadableDataSet, WritableDataSet | Required for table support |
| 5. Relationships | with_one, with_many, correlated subqueries | Reference traversal, expression fields | Skip if you don’t need cross-table queries |
| 6. Multi-Backend | AnyTable::from_table(), CLI example | Type-erased tables, generic UI/API code | Skip if you only use one persistence |
Step 1: Type System
Every database has its own idea of what types exist. The vantage_type_system! macro generates a
type trait, variant enum, and type-erased AnyType wrapper that prevents silent casting between
incompatible types.
You’ll implement the type trait for each Rust type your database supports, set up Record
conversions (free via serde for JSON-based backends, or via #[entity] macro for custom value
types), and add TryFrom<AnyType> for scalar extraction.
Step 2: Expressions
With types in place, build a vendor macro (sqlite_expr!, surreal_expr!) that produces
Expression<AnyType> with typed parameters. Implement ExprDataSource to execute expressions
against your database — handling parameter binding, deferred cross-database resolution, and result
parsing.
Skip this step if your persistence evaluates conditions in-memory (like CSV) — you can implement
TableSource directly without an expression engine.
Step 3: Query Builder
Build a SELECT struct implementing the Selectable trait — fields, conditions, ordering, limits,
aggregates. Wire it up through SelectableDataSource so the rest of Vantage can create and execute
queries through a standard interface.
Skip this step if your persistence doesn’t have a query language. MongoDB, for instance, skips
Selectable and uses native BSON pipelines instead.
Step 4: Table & CRUD
Implement TableSource to give Vantage full table abstraction — columns, conditions, ordering,
pagination, entity CRUD, and aggregates. This is where Table<DB, Entity> comes alive and
auto-implements ReadableDataSet, WritableDataSet, and ActiveEntitySet.
Start with todo!() for every method and implement them incrementally, driven by tests.
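The incremental, test-driven approach might look like this sketch — the trait shape here is invented for illustration and far smaller than the real TableSource:

```rust
// Invented, minimal stand-in for a persistence trait. Implement methods one
// test at a time; anything not yet needed stays as todo!().
trait Source {
    fn read(&self, table: &str) -> Vec<String>;
    fn count(&self, table: &str) -> usize;
    fn delete(&self, table: &str);
}

struct MyDb;

impl Source for MyDb {
    fn read(&self, _table: &str) -> Vec<String> {
        vec!["first row".to_string()] // first method, driven by the first test
    }
    fn count(&self, table: &str) -> usize {
        // Crude default until a real aggregate query exists.
        self.read(table).len()
    }
    fn delete(&self, _table: &str) {
        todo!("implement once a delete test exists") // compiles, panics if called
    }
}

fn main() {
    let db = MyDb;
    assert_eq!(db.count("product"), 1);
    assert_eq!(db.read("product"), vec!["first row".to_string()]);
}
```

Because todo!() type-checks as any return type, the whole trait impl compiles from day one while tests pull methods into existence one by one.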
Step 5: Relationships
Declare with_one and with_many relationships on tables and traverse them with get_ref_as.
Implement column_table_values_expr for subquery-based traversal and optionally
related_correlated_condition for correlated subqueries (expression fields like computed counts).
Skip this step if your persistence is flat (no foreign keys or cross-collection references).
Step 6: Multi-Backend Applications
Wrap your tables with AnyTable::from_table() to erase the backend type. This enables generic UI,
CLI, and API code that works identically across SurrealDB, SQLite, CSV, MongoDB, or your new
persistence — all through a uniform serde_json::Value-based interface.
Step 1: Define Your Type System
Every database has its own idea of what types exist. SQLite has 5 storage classes (NULL, INTEGER, REAL, TEXT, BLOB). Postgres has dozens. SurrealDB has its own set with Things and Geometry types.
The Vantage type system gives you two things:
- Type markers — so you can tell the difference between “this is an integer” and “this is text” even when both are stored as serde_json::Value under the hood.
- Safe extraction — try_get::<i64>() on a text value returns None instead of silently coercing. This prevents the kind of bugs where a string “42” gets treated as a number somewhere downstream.
Why not just use serde_json::Value directly?
You can! And for simple cases it works fine. The problem shows up when values move between contexts.
A JSON number 42 could be an integer, a float, or even a boolean (SQLite stores bools as 0/1).
Without type markers, you lose that distinction and get silent data corruption.
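A tiny sketch of why the marker matters — the same stored 1 is accepted as a boolean only when tagged as one (types invented for illustration):

```rust
// Invented sketch: a raw storage value paired with a type marker.
#[derive(Debug, Clone, Copy, PartialEq)]
enum Marker {
    Integer,
    Boolean,
}

struct Tagged {
    raw: i64,       // what the storage layer actually holds
    marker: Marker, // what the value means
}

fn as_bool(v: &Tagged) -> Option<bool> {
    match v.marker {
        Marker::Boolean => Some(v.raw != 0),
        Marker::Integer => None, // a count of 1 is not `true`
    }
}

fn main() {
    let flag = Tagged { raw: 1, marker: Marker::Boolean };
    let count = Tagged { raw: 1, marker: Marker::Integer };
    assert_eq!(as_bool(&flag), Some(true));
    assert_eq!(as_bool(&count), None); // same raw value, safely rejected
}
```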
Setting it up
Use the vantage_type_system! macro. It generates a trait, an enum of variants, and a type-erased
AnyType wrapper:
#![allow(unused)]
fn main() {
vantage_type_system! {
type_trait: SqliteType,
method_name: json,
value_type: serde_json::Value,
type_variants: [Null, Integer, Text, Real, Numeric, Blob]
}
}
This gives you SqliteType (trait), SqliteTypeVariants (enum), and AnySqliteType (the
type-erased wrapper that remembers which variant a value belongs to).
The value_type is whatever your database driver naturally speaks. For SQL databases that’s usually
serde_json::Value. SurrealDB uses ciborium::Value (CBOR). You could use any type — even String
if your storage is that simple.
Then implement the trait for each Rust type. Here’s bool — SQLite stores it as 0/1:
#![allow(unused)]
fn main() {
impl SqliteType for bool {
type Target = SqliteTypeIntegerMarker; // bool lives in the Integer family
fn to_json(&self) -> serde_json::Value {
Value::Number(if *self { 1.into() } else { 0.into() })
}
fn from_json(value: serde_json::Value) -> Option<Self> {
match value {
Value::Number(n) => n.as_i64().map(|i| i != 0),
Value::Bool(b) => Some(b), // accept native bools too
_ => None,
}
}
}
}
And here’s how the type safety works in practice:
#![allow(unused)]
fn main() {
let val = AnySqliteType::new(42i64);
assert_eq!(val.try_get::<i64>(), Some(42)); // same variant → works
assert_eq!(val.try_get::<String>(), None); // Integer ≠ Text → rejected
let val = AnySqliteType::new("hello".to_string());
assert_eq!(val.try_get::<String>(), Some("hello".to_string()));
assert_eq!(val.try_get::<i64>(), None); // Text ≠ Integer → rejected
}
You also need From conversions so values can be created conveniently:
#![allow(unused)]
fn main() {
let val: AnySqliteType = 42i64.into();
let val: AnySqliteType = "hello".into();
let val: AnySqliteType = true.into();
}
Records and struct conversion
A Record<V> is an ordered key-value map (field name → value). It’s how vantage represents a single
row of data regardless of the backend. The question is: how do your structs become Records and vice
versa?
There are two paths depending on your value_type.
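Before looking at the two paths, the mental model for Record<V> itself can be pictured as this hypothetical simplification — an insertion-ordered list of (field name, value) pairs with map-style lookup (the real type in vantage is richer):

```rust
// Hypothetical simplification of Record<V>; names are ours, not vantage's.
struct MiniRecord<V> {
    fields: Vec<(String, V)>,
}

impl<V> MiniRecord<V> {
    fn new() -> Self {
        MiniRecord { fields: Vec::new() }
    }
    // Fields keep insertion order, which matters when rendering queries.
    fn insert(&mut self, key: &str, value: V) {
        self.fields.push((key.to_string(), value));
    }
    fn get(&self, key: &str) -> Option<&V> {
        self.fields.iter().find(|(k, _)| k.as_str() == key).map(|(_, v)| v)
    }
}

fn main() {
    let mut record = MiniRecord::new();
    record.insert("name", 1i64);
    record.insert("price", 120i64);
    assert_eq!(record.get("price"), Some(&120));
    assert_eq!(record.fields[0].0, "name"); // order preserved
}
```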
Path A: serde_json::Value (the easy path)
If your type system uses serde_json::Value as the value type (like SQLite does), you get struct
conversion for free. Vantage has blanket implementations of IntoRecord and TryFromRecord for any
type that implements serde’s Serialize/Deserialize:
#![allow(unused)]
fn main() {
#[derive(Serialize, Deserialize)]
struct Product {
name: String,
price: i64,
is_deleted: bool,
}
let product = Product { name: "Cupcake".into(), price: 120, is_deleted: false };
// Struct → Record<serde_json::Value> — automatic via serde
let record: Record<serde_json::Value> = product.into_record();
// Record<serde_json::Value> → Struct — automatic via serde
let restored: Product = Product::from_record(record).unwrap();
}
This works because serde already knows how to turn structs into JSON objects and back. No extra code needed on your part.
Path B: Custom value type (the #[entity] path)
If your type system uses something other than serde_json::Value — like SurrealDB’s
ciborium::Value — then serde’s blanket impls don’t apply. You need the #[entity] proc macro to
generate the conversion code:
#![allow(unused)]
fn main() {
#[entity(SurrealType)]
#[derive(Debug, Clone)]
struct Product {
name: String,
price: i64,
is_deleted: bool,
}
}
The #[entity(SurrealType)] macro looks at each field and generates:
- IntoRecord<AnySurrealType> — calls AnySurrealType::new(self.name) for each field
- TryFromRecord<AnySurrealType> — calls record["name"].try_get::<String>() for each field
This is where the type markers from Step 1 become critical. When you read a record back from the
database, each value is an AnySurrealType with a variant tag. The try_get::<String>() call
checks that the variant is Text before extracting. If someone stored an integer in a field that
should be a string, you get an error instead of garbage.
You can even target multiple type systems at once:
#![allow(unused)]
fn main() {
#[entity(SurrealType, CsvType)]
struct Product {
name: String,
price: i64,
}
}
This generates conversions for both Record<AnySurrealType> and Record<AnyCsvType>, so the same
struct works across different backends.
Testing Record conversions
Test Record<AnySqliteType> in both modes — typed (write path) and untyped (read path).
Typed records simulate what you’d build when inserting data. Values have variant tags, and
try_get enforces them:
#![allow(unused)]
fn main() {
#[test]
fn test_typed_record() {
let mut record: Record<AnySqliteType> = Record::new();
record.insert("name".into(), AnySqliteType::new("Cupcake".to_string()));
record.insert("price".into(), AnySqliteType::new(120i64));
assert_eq!(record["name"].try_get::<String>(), Some("Cupcake".to_string()));
assert_eq!(record["name"].try_get::<i64>(), None); // Text ≠ Integer → blocked
assert_eq!(record["price"].try_get::<i64>(), Some(120));
assert_eq!(record["price"].try_get::<String>(), None); // Integer ≠ Text → blocked
}
}
Untyped records simulate what comes back from the database. Values have type_variant: None, so
try_get is permissive — it just attempts the conversion:
#![allow(unused)]
fn main() {
#[test]
fn test_untyped_record() {
let mut record: Record<AnySqliteType> = Record::new();
record.insert("name".into(), AnySqliteType::untyped(json!("Cupcake")));
record.insert("price".into(), AnySqliteType::untyped(json!(120)));
assert_eq!(record["name"].try_get::<String>(), Some("Cupcake".to_string()));
assert_eq!(record["name"].try_get::<i64>(), None); // fails because "Cupcake" isn't a number
assert_eq!(record["price"].try_get::<i64>(), Some(120));
assert_eq!(record["price"].try_get::<f64>(), Some(120.0)); // permissive — json 120 can be f64
}
}
The key difference: a typed AnySqliteType::new(42i64) blocks try_get::<f64>() because Integer ≠
Real. An untyped AnySqliteType::untyped(json!(42)) allows it because there’s no variant to check —
it just asks “can JSON number 42 be read as f64?”
Also test Option<T> fields, null handling, and missing fields.
TryFrom<AnyType> for common types
You also need TryFrom<AnyType> implementations for scalar types and Records. These are used later
by AssociatedExpression::get() in Step 2, but they belong here because they’re part of the type
system:
#![allow(unused)]
fn main() {
// Scalars — extract single value from single-row results
impl TryFrom<AnySqliteType> for i64 { ... }
impl TryFrom<AnySqliteType> for String { ... }
// etc.
// Records — extract first row from result array
impl TryFrom<AnySqliteType> for Record<AnySqliteType> { ... }
impl TryFrom<AnySqliteType> for Record<serde_json::Value> { ... }
}
For scalars, if the result is a single-row, single-column array like [{"COUNT(*)": 3}], extract
the value automatically. For Records, extract the first row and wrap each field as an untyped
AnyType.
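The single-row, single-column rule for scalars can be sketched like this — a simplification with rows modeled as ordered (column, value) pairs rather than the JSON array the real impls receive, and with a function name of our choosing:

```rust
// Result rows modeled as ordered (column, value) pairs; the real impls
// operate on the JSON array a SELECT returns. Names are illustrative.
type Row = Vec<(String, i64)>;

// Sketch of the TryFrom<AnySqliteType> for i64 rule: succeed only for a
// single-row, single-column result like [{"COUNT(*)": 3}].
fn scalar_from_rows(rows: &[Row]) -> Option<i64> {
    match rows {
        [row] => match row.as_slice() {
            [(_, value)] => Some(*value), // exactly one column → the scalar
            _ => None,                    // several columns → ambiguous
        },
        _ => None, // zero or several rows → not a scalar
    }
}

fn main() {
    let count = vec![vec![("COUNT(*)".to_string(), 3)]];
    assert_eq!(scalar_from_rows(&count), Some(3));
    let two_rows = vec![vec![("n".to_string(), 1)], vec![("n".to_string(), 2)]];
    assert_eq!(scalar_from_rows(&two_rows), None);
}
```

Returning None (or an error, in the real TryFrom) for anything but the 1×1 shape keeps the scalar path unambiguous.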
Step 1 conclusion
At this point you should have:
- Type impls in src/<backend>/types/ — the vantage_type_system! macro call, trait implementations for each Rust type, From conversions on AnyType, variant detection in TypeVariants::from_*(), and TryFrom<AnyType> for scalars and Records.
- Tests in tests/<backend>/1_types_round_trip.rs covering:
  - In-memory AnyType round-trips for each supported type
  - Type mismatch rejections (wrong variant → None)
  - Struct ↔ Record conversions (including Option fields and error cases)
  - Values read from the actual database converting correctly
How type markers flow through the system
The AnyType wrapper has an Option<Variant> field — Some(Integer) means “I know this is an
integer”, None means “I don’t know, just try whatever conversion you need.”
Writing (you → database): Values created with AnySqliteType::new(42i64) get
type_variant: Some(Integer). The bind layer uses the variant to pick the right sqlx bind call —
Integer binds as i64, Text as &str, Real as f64. No guessing.
Reading (database → you): Values coming back from the database are created with
AnySqliteType::untyped(json_value) which sets type_variant: None. This means try_get::<i64>()
won’t be blocked by a variant mismatch — it just attempts the conversion. The type checking happens
later when you deserialize into a struct.
#![allow(unused)]
fn main() {
// Writing — typed, variant enforced
let val = AnySqliteType::new(true); // type_variant: Some(Bool)
val.try_get::<bool>(); // Some(true) — variant matches
val.try_get::<i64>(); // None — Bool ≠ Integer, blocked by type boundary
val.try_get::<String>(); // None — Bool ≠ Text, blocked
// Reading — untyped, permissive
let val = AnySqliteType::untyped(json!(1)); // type_variant: None
val.try_get::<i64>(); // Some(1) — no variant check, json 1 parses as i64
val.try_get::<bool>(); // Some(true) — no variant check, json 1 parses as bool (≠0)
val.try_get::<f64>(); // Some(1.0) — no variant check, json 1 parses as f64
val.try_get::<String>(); // None — json 1 can't parse as String
}
Both directions use Record<AnySqliteType>, but the values behave differently:
Writing: Struct → Record<AnySqliteType> (typed) → bind_sqlite_value() → sqlx
Reading: sqlx → Record<AnySqliteType> (untyped) → try_get / serde → Struct
Step 2: Make Expressions Work
With the type system in place, you can now use Expression<AnySqliteType> to build and execute
queries. This step has two deliverables: a convenience macro and the ExprDataSource trait
implementation.
The vendor macro
Define a macro that produces Expression<YourAnyType>. SurrealDB has surreal_expr!; here we create sqlite_expr!:
#![allow(unused)]
fn main() {
let expr = sqlite_expr!("SELECT * FROM product WHERE price > {}", 100i64);
}
Under the hood, 100i64 gets wrapped as AnySqliteType::new(100i64) with variant Integer. When
this expression hits the database, the bind layer knows to call query.bind(100i64) — not
query.bind("100") or query.bind(100.0).
The macro handles three kinds of parameters:
- 42i64 → scalar with type marker
- (sub_expr) → nested expression (composed into the template)
- {deferred} → lazy evaluation (resolved at execution time)
Identifier quoting
SQL identifiers (table names, column names) need quoting to handle reserved words, spaces, and
special characters. Different databases use different quote styles — PostgreSQL and SQLite use
double quotes ("name"), MySQL uses backticks (`name`), SurrealDB uses something else
entirely.
Vantage centralises this in the Identifier struct (vantage-sql/src/primitives/identifier.rs).
Identifier is quote-agnostic — it stores the name parts and optional alias, but the actual quoting
happens in the Expressive<T> implementation for each backend type:
#![allow(unused)]
fn main() {
impl Expressive<AnyMysqlType> for Identifier {
fn expr(&self) -> Expression<AnyMysqlType> {
Expression::new(self.render_with('`'), vec![]) // `name`
}
}
}
When you add a new SQL backend, add an Expressive<YourAnyType> impl with your quote character. The
compiler picks the right impl based on the expression type context.
In practice you use the ident() shorthand and pass it into the vendor macro with parentheses — the
(...) syntax calls .expr() automatically, so quoting is handled by the type context:
#![allow(unused)]
fn main() {
use vantage_sql::primitives::identifier::{Identifier, ident};
// The (ident(...)) syntax invokes Expressive — quoting is automatic
let expr = mysql_expr!("SELECT {} FROM {} WHERE {} = {}",
(ident("name")), (ident("product")), (ident("price")), 100i64);
// → SELECT `name` FROM `product` WHERE `price` = 100
let expr = postgres_expr!("SELECT {} FROM {} WHERE {} = {}",
(ident("name")), (ident("product")), (ident("price")), 100i64);
// → SELECT "name" FROM "product" WHERE "price" = 100
}
For qualified identifiers (table.column) and aliases:
#![allow(unused)]
fn main() {
let expr = sqlite_expr!("SELECT {}", (Identifier::with_dot("u", "name")));
// → SELECT "u"."name"
let expr = mysql_expr!("SELECT {}", (ident("name").with_alias("n")));
// → SELECT `name` AS `n`
}
Test identifier quoting in tests/<backend>/2_identifier.rs — cover basic names, reserved words,
spaces, hyphens, unicode, and names that start with numbers. These are all legal inside quoted
identifiers in both PostgreSQL and MySQL.
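The rendering these tests exercise can be sketched as a single function. One assumption here: an embedded quote character is escaped by doubling it, which is the rule for standard SQL double quotes ("a""b") and MySQL backticks — the real Identifier may handle more cases:

```rust
// Sketch of dialect-aware identifier rendering; the function name is ours.
// Assumption: the quote character inside a name is escaped by doubling.
fn render_ident(name: &str, quote: char) -> String {
    let escaped = name.replace(quote, &quote.to_string().repeat(2));
    format!("{quote}{escaped}{quote}")
}

fn main() {
    assert_eq!(render_ident("name", '"'), "\"name\"");  // SQLite/Postgres
    assert_eq!(render_ident("name", '`'), "`name`");    // MySQL
    // A name containing the quote character itself stays safe:
    assert_eq!(render_ident("we`ird", '`'), "`we``ird`");
}
```

Keeping the quote character a parameter is what lets one quote-agnostic Identifier serve every backend's Expressive impl.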
ExprDataSource
Implement DataSource (marker) and ExprDataSource<AnySqliteType> on your DB struct. The execute
method takes an expression, flattens nested sub-expressions, converts {} placeholders to your
driver’s syntax (?N for SQLite, $N for Postgres), binds parameters using type markers, and
returns results.
Results come back as AnySqliteType with type_variant: None — the database doesn’t preserve our
markers, so results are permissive (see Step 1). For SQLite that’s especially natural since it
doesn’t distinguish boolean from integer on the wire.
If the persistence layer you’re implementing does preserve type information in responses (like
SurrealDB with CBOR tags), set the correct type_variant when constructing result values in your
execute() implementation. That way try_get enforces type boundaries on both sides of the
round-trip.
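The placeholder conversion step can be sketched on its own — after flattening, each {} in the template becomes a numbered driver placeholder. The function name is ours; the real execute() does this alongside binding:

```rust
// Sketch of the placeholder rewrite: `{}` → `?1`, `?2`, … for SQLite,
// or `$1`, `$2`, … for Postgres, depending on the prefix.
fn number_placeholders(template: &str, prefix: char) -> String {
    let mut out = String::new();
    let mut rest = template;
    let mut n = 0;
    while let Some(pos) = rest.find("{}") {
        n += 1;
        out.push_str(&rest[..pos]); // copy text before the placeholder
        out.push(prefix);
        out.push_str(&n.to_string());
        rest = &rest[pos + 2..]; // skip past the `{}`
    }
    out.push_str(rest);
    out
}

fn main() {
    let sqlite = number_placeholders("SELECT * FROM t WHERE a = {} AND b = {}", '?');
    assert_eq!(sqlite, "SELECT * FROM t WHERE a = ?1 AND b = ?2");
    let postgres = number_placeholders("INSERT INTO t VALUES ({}, {})", '$');
    assert_eq!(postgres, "INSERT INTO t VALUES ($1, $2)");
}
```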
Validating with INSERT expressions
The best way to test this is INSERT + SELECT round-trips. A single insert exercises all the pieces — macro, parameter binding, type markers, and result parsing:
#![allow(unused)]
fn main() {
let insert = sqlite_expr!(
"INSERT INTO product (id, name, price, is_deleted) VALUES ({}, {}, {}, {})",
"cupcake", "Flux Cupcake", 120i64, false
);
db.execute(&insert).await?;
let select = sqlite_expr!("SELECT * FROM product WHERE id = {}", "cupcake");
let result = db.execute(&select).await?;
}
Nested expressions let you build multi-row inserts from composable parts:
#![allow(unused)]
fn main() {
let row1 = sqlite_expr!("({}, {}, {}, {})", "tart", "Time Tart", 220i64, false);
let row2 = sqlite_expr!("({}, {}, {}, {})", "pie", "Sea Pie", 299i64, true);
// Expression::from_vec joins sub-expressions with a delimiter
let rows = Expression::from_vec(vec![row1, row2], ", ");
// Nest into the INSERT — flattener resolves everything into a single query
let insert = Expression::<AnySqliteType>::new(
"INSERT INTO product (id, name, price, is_deleted) VALUES {}",
vec![ExpressiveEnum::Nested(rows)],
);
db.execute(&insert).await?;
}
The ExpressionFlattener collapses all nesting into one flat template with positional parameters —
each one still carrying its type marker for correct binding.
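The collapse itself can be sketched as a small recursive walk. This is a miniature with i64 scalars and names of our choosing, not vantage's ExpressionFlattener — nested templates splice into the outer {} slots while scalars stay as placeholders, so parameter order is preserved:

```rust
// Miniature expression: a template with `{}` slots, each slot filled by
// either a scalar or another (template, params) pair.
enum Param {
    Scalar(i64),
    Nested(String, Vec<Param>),
}

// Returns one flat template plus all scalars in binding order.
fn flatten(template: &str, params: &[Param]) -> (String, Vec<i64>) {
    let mut out = String::new();
    let mut values = Vec::new();
    let mut parts = template.split("{}");
    out.push_str(parts.next().unwrap_or(""));
    for (part, param) in parts.zip(params) {
        match param {
            Param::Scalar(v) => {
                out.push_str("{}"); // scalar stays a placeholder
                values.push(*v);
            }
            Param::Nested(t, inner) => {
                let (sub_tpl, sub_vals) = flatten(t, inner); // recurse
                out.push_str(&sub_tpl);
                values.extend(sub_vals);
            }
        }
        out.push_str(part);
    }
    (out, values)
}

fn main() {
    let rows = Param::Nested(
        "({}, {})".to_string(),
        vec![Param::Scalar(2), Param::Scalar(3)],
    );
    let (tpl, values) = flatten("a = {} AND b IN {}", &[Param::Scalar(1), rows]);
    assert_eq!(tpl, "a = {} AND b IN ({}, {})");
    assert_eq!(values, vec![1, 2, 3]);
}
```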
Deferring: cross-database value resolution
Sometimes a query on one database needs a value from another database. That’s what defer() is for
— it wraps a query as a closure that executes later, when the outer query runs.
This is not a subquery. The deferred query runs first, produces a concrete value, and that value gets bound as a regular parameter in the outer query.
#![allow(unused)]
fn main() {
let (config_db, shop_db) = setup().await;
// This doesn't execute yet — it's a closure
let threshold_query = sqlite_expr!("SELECT value FROM config WHERE key = {}", "min_price");
let deferred_threshold = config_db.defer(threshold_query);
// Use the deferred value as a parameter in a different database
let shop_query = Expression::<AnySqliteType>::new(
"SELECT name FROM product WHERE price >= {} ORDER BY price",
vec![ExpressiveEnum::Deferred(deferred_threshold)],
);
// When shop_db.execute() runs:
// 1. Resolves the deferred → calls config_db, gets 150
// 2. Replaces the Deferred param with Scalar(150)
// 3. Flattens and binds: SELECT name FROM product WHERE price >= ?1
let result = shop_db.execute(&shop_query).await?;
}
Your execute() implementation needs to resolve deferred parameters before flattening. Walk the
parameter list, call .call().await on any Deferred, and leave Scalar and Nested untouched.
The resolved value comes back as an untyped AnySqliteType (no variant marker), so it gets bound
via JSON-inference. For SQLite this is fine — the loose type system handles it. For stricter
databases, you may want defer() to preserve type information from the source query’s result.
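The resolve-before-flatten walk can be sketched synchronously (vantage's defer() is async, but the shape is the same; all names here are illustrative):

```rust
// A parameter is either already a concrete scalar, or a deferred closure
// standing in for "run this query against the other database first".
enum QueryParam {
    Scalar(i64),
    Deferred(Box<dyn Fn() -> i64>),
}

// Sketch of the pre-flattening pass: run every Deferred, keep Scalars.
fn resolve_deferred(params: Vec<QueryParam>) -> Vec<QueryParam> {
    params
        .into_iter()
        .map(|p| match p {
            QueryParam::Deferred(fetch) => QueryParam::Scalar(fetch()),
            other => other,
        })
        .collect()
}

fn main() {
    // Stand-in for config_db.defer(threshold_query) producing 150.
    let deferred = QueryParam::Deferred(Box::new(|| 150));
    let resolved = resolve_deferred(vec![deferred, QueryParam::Scalar(7)]);
    assert!(matches!(resolved[0], QueryParam::Scalar(150)));
    assert!(matches!(resolved[1], QueryParam::Scalar(7)));
}
```

After this pass, every parameter is concrete and the flattener can treat the query as if it had been built from scalars all along.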
Reading query results
So far we’ve been calling db.execute(&expr).await which returns AnySqliteType. For a SELECT
query, that value wraps a JSON array of row objects. To work with individual rows, you convert into
Records:
#![allow(unused)]
fn main() {
let result = db.execute(&sqlite_expr!("SELECT * FROM product")).await?;
// Result is AnySqliteType wrapping [{"id":"a","name":"Cheap","price":50}, ...]
// Convert to records manually:
let rows: Vec<JsonValue> = match result.into_value() {
JsonValue::Array(arr) => arr,
_ => panic!("expected rows"),
};
let record: Record<JsonValue> = rows[0].clone().into();
}
That works but it’s verbose. The TryFrom<AnyType> impls from Step 1 make this cleaner through
AssociatedExpression. When you call db.associate::<R>(expr), you get an expression that knows
its return type — .get() executes and converts in one step:
#![allow(unused)]
fn main() {
// Scalar — extracts single value from single-row result
let count = db.associate::<i64>(sqlite_expr!("SELECT COUNT(*) FROM product"));
assert_eq!(count.get().await?, 3);
// Record — extracts first row
let record: Record<JsonValue> = db
.associate(sqlite_expr!("SELECT * FROM product WHERE id = {}", "c"))
.get().await?;
}
From a Record, you can deserialize into a struct. For the #[entity] path:
#![allow(unused)]
fn main() {
#[entity(SqliteType)]
struct Product { id: String, name: String, price: i64 }
let record: Record<AnySqliteType> = db
.associate(sqlite_expr!("SELECT * FROM product WHERE id = {}", "c"))
.get().await?;
let product = Product::from_record(record)?;
}
Or for the serde path with Record<JsonValue>:
#![allow(unused)]
fn main() {
#[derive(Deserialize)]
struct Product { id: String, name: String, price: i64 }
let record: Record<JsonValue> = db
.associate(sqlite_expr!("SELECT * FROM product WHERE id = {}", "c"))
.get().await?;
let product: Product = Product::from_record(record)?;
}
Testing the failure modes (missing fields, NULL into required field, wrong types) can help spot issues in your implementation.
Step 2 conclusion
At this point you should have:
- A vendor macro (sqlite_expr!, surreal_expr!, etc.) that produces Expression<AnyType> with typed parameters.
- Trait impls in src/<backend>/impls/ — DataSource (marker) and ExprDataSource<AnyType> with execute() and defer().
- Tests in tests/<backend>/2_*.rs covering:
  - INSERT with typed parameters, read back and verify
  - Multi-row INSERT using nested expressions and from_vec
  - Type marker verification (bool binds as bool, not as string “true”)
  - Cross-database deferred value resolution
  - AssociatedExpression with scalar, Record, and entity results
  - Identifier quoting: basic names, reserved words, spaces, hyphens, unicode
Step 2b: Implement Operators
Expressions let you build raw queries, but users shouldn’t have to write
sqlite_expr!("{} > {}", (ident("price")), 100i64) every time they want a condition. Operators
give typed columns ergonomic methods like .eq(), .gt(), .in_() that produce your backend’s
native condition type.
This step covers how to implement a vendor-specific operation trait for your persistence.
Vendor-specific operation traits
Each persistence defines its own operation trait that returns the backend’s condition type directly.
The trait is blanket-implemented for all Expressive<T> where T: Into<AnyBackendType>, so typed
columns get the methods for free.
For SQL backends, a macro generates the trait:
#![allow(unused)]
fn main() {
// In vantage-sql/src/sqlite/operation.rs
define_sql_operation!(
SqliteOperation,
SqliteCondition,
crate::sqlite::types::AnySqliteType
);
}
This produces:
- A trait SqliteOperation<T> with .eq(), .gt(), .lt(), .ne(), .gte(), .lte(), .in_(), .in_list(), .cast() — all returning SqliteCondition
- A blanket impl for all Expressive<T> where T: Into<AnySqliteType>
- An Expressive<AnySqliteType> impl for SqliteCondition, enabling chaining
How it works internally
Each method builds an Expression<T> from the two operands, then converts it to the backend’s
condition type via From<Expression<T>>:
#![allow(unused)]
fn main() {
fn gt(&self, value: impl Expressive<T>) -> SqliteCondition {
let expr: Expression<T> = Expression::new("{} > {}", vec![
ExpressiveEnum::Nested(self.expr()),
ExpressiveEnum::Nested(value.expr()),
]);
SqliteCondition::from(expr) // maps T → AnySqliteType via Into
}
}
The From<Expression<F>> for SqliteCondition impl (from Step 1’s define_sql_condition! macro)
handles the type mapping — it calls ExpressionMap::map() to convert all F scalars into
AnySqliteType.
Chaining across type boundaries
Because SqliteCondition implements Expressive<AnySqliteType>, the blanket gives it
SqliteOperation<AnySqliteType>. This enables:
#![allow(unused)]
fn main() {
let price = Column::<i64>::new("price");
price.gt(10).eq(false)
// => SqliteCondition wrapping: (price > 10) = 0
}
The first operation (.gt(10)) enforces type safety — 10 must be Expressive<i64>. The second
operation (.eq(false)) operates on SqliteCondition where bool: Expressive<AnySqliteType>, so
any backend-compatible type is accepted.
Implementing for a non-SQL backend
For backends that don’t use expression trees for conditions (like MongoDB), you implement the operation trait manually instead of using the macro. MongoDB produces BSON documents:
#![allow(unused)]
fn main() {
pub trait MongoOperation<T>: Expressive<T> {
fn eq(&self, value: impl Into<AnyMongoType>) -> MongoCondition {
let field = self.expr().template.clone();
let value: AnyMongoType = value.into(); // bound is Into, so convert via .into()
let bson_val = value.to_bson();
MongoCondition::Doc(doc! { field: { "$eq": bson_val } })
}
fn gt(&self, value: impl Into<AnyMongoType>) -> MongoCondition {
let field = self.expr().template.clone();
let value: AnyMongoType = value.into(); // bound is Into, so convert via .into()
let bson_val = value.to_bson();
MongoCondition::Doc(doc! { field: { "$gt": bson_val } })
}
// ...
}
impl<T, S: Expressive<T>> MongoOperation<T> for S {}
}
Key differences from SQL:
- Values use Into<AnyMongoType>, not Expressive<T> — MongoDB doesn’t compose expression trees, it builds BSON documents from scalar values.
- Field name extraction — self.expr().template gives the column name for simple columns. Complex expressions produce the template string as the field path.
- Chaining — MongoCondition implements Expressive<AnyMongoType> for the blanket, but boolean chaining (.eq(false) = negate) is handled via dedicated methods like .eq_bool(false), since MongoDB negation uses $not wrappers.
Avoiding method name conflicts
When multiple backend features are enabled, types like Identifier and &str implement
Expressive<T> for multiple backends. This causes ambiguity if the operation trait is generic.
Each backend’s operation trait lives in its own module (e.g. sqlite::operation::SqliteOperation).
Users import only the trait they need:
#![allow(unused)]
fn main() {
// In your prelude:
pub use crate::sqlite::operation::SqliteOperation;
}
Condition type requirements
Your condition type must satisfy TableSource::Condition bounds — Clone + Send + Sync + 'static.
It also needs:
- From<Expression<F>> for any F: Into<AnyType> — so typed column operations convert cleanly
- From<Identifier> — so ident("field") works with with_condition()
- Expressive<AnyType> — so the condition can be chained with further operations
- Any backend-specific conversions (e.g. From<Document> for MongoDB)
For SQL backends, the define_sql_condition! macro generates all of these.
Step 2b checklist
- Define your operation trait — either via define_sql_operation! (SQL) or manually (document-oriented backends).
- Blanket-implement it for all Expressive<T> where T converts into your AnyType.
- Implement Expressive<AnyType> for your condition type — enables chaining.
- Export from your prelude — so users get the operation trait automatically.
- Tests covering:
  - Typed column operations: Column::<i64>::new("price").gt(150) → condition
  - Boolean column: Column::<bool>::new("active").eq(false) → condition
  - Chaining: price.gt(10).eq(false) compiles and produces correct output
  - Cross-type rejection: price.gt(false) does NOT compile
  - Same-type column comparison: price.eq(price.clone()) works
  - Condition usable with table.add_condition() and select.with_condition()
Step 3: Statement Builders and SelectableDataSource
In practice, nobody writes raw expressions for every query. This step adds the Selectable trait
implementation for your SELECT builder and wires it up through SelectableDataSource so the rest of
vantage can create and execute queries through a standard interface.
Implement Selectable for your SELECT builder
The Selectable<T> trait is the standard interface for building SELECT queries across all vantage
backends. Your SELECT struct needs to implement it. The trait has two kinds of methods:
Mutating methods you must implement — set_source, add_field, add_where_condition,
add_order_by, add_group_by, set_limit, set_distinct, the clear_* methods, the has_*
methods, as_count, and as_sum.
Builder methods you get for free — with_source, with_field, with_condition, with_order,
with_expression, with_limit. These are default implementations that call the mutating methods
and return self.
This means your builder code is just the struct definition and new():
#![allow(unused)]
fn main() {
pub struct SqliteSelect {
pub fields: Vec<Expr>,
pub from: Vec<Expr>,
pub where_conditions: Vec<Expr>,
pub order_by: Vec<(Expr, bool)>,
pub group_by: Vec<Expr>,
pub distinct: bool,
pub limit: Option<i64>,
pub skip: Option<i64>,
}
impl SqliteSelect {
pub fn new() -> Self { /* initialize empty */ }
}
}
The Selectable impl goes in its own file (e.g., statements/select/impls/selectable.rs) and the
builder methods come from the trait:
#![allow(unused)]
fn main() {
let select = SqliteSelect::new()
.with_source("product")
.with_field("name")
.with_field("price")
.with_condition(sqlite_expr!("\"is_deleted\" = {}", false))
.with_order(sqlite_expr!("\"price\""), false)
.with_limit(Some(10), None);
}
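The mutating/builder split can be sketched with plain trait defaults. These are simplified signatures of our own, not the real Selectable trait — the point is that implementors write only the set_*/add_* methods, and the chainable with_* builders come for free:

```rust
// Miniature of the pattern: required mutators, provided builders.
trait MiniSelectable: Sized {
    fn set_source(&mut self, source: &str);
    fn add_field(&mut self, field: &str);

    // Provided builder methods — consume self, mutate, return self.
    fn with_source(mut self, source: &str) -> Self {
        self.set_source(source);
        self
    }
    fn with_field(mut self, field: &str) -> Self {
        self.add_field(field);
        self
    }
}

#[derive(Default)]
struct MiniSelect {
    source: String,
    fields: Vec<String>,
}

// Only the mutators need implementing; the builders are inherited.
impl MiniSelectable for MiniSelect {
    fn set_source(&mut self, source: &str) {
        self.source = source.to_string();
    }
    fn add_field(&mut self, field: &str) {
        self.fields.push(field.to_string());
    }
}

fn main() {
    let select = MiniSelect::default()
        .with_source("product")
        .with_field("name")
        .with_field("price");
    assert_eq!(select.source, "product");
    assert_eq!(select.fields, vec!["name", "price"]);
}
```

This is why a new backend's builder file stays short: one struct, one new(), and the mutators.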
The as_count() and as_sum() methods should clone the current query, replace the fields with
COUNT(*) or SUM(column), drop the ORDER BY (unnecessary for aggregates), and render:
#![allow(unused)]
fn main() {
let count_expr = select.as_count(); // SELECT COUNT(*) FROM product WHERE ...
let sum_expr = select.as_sum(sqlite_expr!("\"price\"")); // SELECT SUM("price") FROM ...
}
Implement SelectableDataSource
This trait connects the SELECT builder to execution. It tells vantage “this database can create SELECT queries and run them”:
#![allow(unused)]
fn main() {
impl SelectableDataSource<AnySqliteType> for SqliteDB {
type Select = SqliteSelect;
fn select(&self) -> Self::Select {
SqliteSelect::new()
}
async fn execute_select(&self, select: &Self::Select) -> Result<Vec<AnySqliteType>> {
// delegate to ExprDataSource::execute()
}
}
}
Live tests
Up to now, most tests used in-memory databases created in setup(). For Step 3, start running tests
against a real pre-populated database. This catches issues that in-memory tests miss — schema
mismatches, type affinity surprises, data edge cases.
Set up a test database with known data (we use a bakery schema translated from SurrealDB’s test
fixture), and write tests that query it through the Selectable interface:
#![allow(unused)]
fn main() {
let db = SqliteDB::connect("sqlite:../target/bakery.sqlite?mode=ro").await?;
let select = SqliteSelect::new()
.with_source("product")
.with_condition(sqlite_expr!("\"price\" > {}", 200i64))
.with_order(sqlite_expr!("\"price\""), false);
let record: Record<AnySqliteType> = db.associate(select.expr()).get().await?;
let product = Product::from_record(record)?;
assert_eq!(product.name, "Enchantment Under the Sea Pie");
}
Implementing complex queries
The Selectable interface gives you the bare bones — fields, conditions, ordering, limits,
aggregates. Your database can do much more: JOINs, subqueries, CTEs, window functions, HAVING,
UNION, JSON operators, CASE expressions.
The best way to build this out is incrementally, driven by real queries:
- Create a test database scaffold with enough tables and data to exercise complex features. Foreign keys, self-referential hierarchies, many-to-many junctions, JSON columns, generated columns — the more variety, the better. (See scripts/sqlite/db/v3.sql for an example.)
- Write the queries first in raw SQL, as comments in your test file. Start with queries you know work against your scaffold. Each query should target specific features (JOINs, GROUP BY + HAVING, window functions, etc.).
- Implement one query at a time. For each query:
  - Read the SQL and identify which builder methods are missing
  - Add the methods to your SELECT struct (e.g., add_join, add_having, with_cte)
  - Write a render test that checks the generated SQL matches
  - Write a live test that executes against the scaffold and verifies results
This approach has two advantages. First, you don’t over-design — you only add features you actually need. Second, every new feature ships with a test that proves it works against a real database, not just string comparison.
When your queries outgrow what the select struct offers directly, extract primitives and
nested structs rather than bloating the builder. For example, Identifier (in
vantage-sql/src/primitives/) handles qualified column names ("u"."name") and aliases — it
implements Expressive<T> so it plugs straight into expressions. Similarly, SqliteSelectJoin
lives inside the select module and renders its own INNER JOIN ... ON ... clause. The select struct
just holds a Vec<SqliteSelectJoin> and calls render() on each one. This pattern — small struct
with Expressive impl, composed into the builder — scales to CASE, CTE, window specs, and anything
else without the select struct growing unbounded.
Other statements
Selectable only covers SELECT. INSERT, UPDATE, and DELETE don’t need a trait at this stage — they
just need to implement Expressive<AnySqliteType> so they can be passed to
ExprDataSource::execute(). The statement builders from earlier steps still work; they just need
their expression type migrated from JsonValue to AnySqliteType to flow directly into
execute().
Step 3 conclusion
At this point you should have:
- Selectable<AnyType> impl for your SELECT builder — all standard methods implemented, builder pattern provided by trait defaults.
- SelectableDataSource<AnyType> impl for your DB struct — select() and execute_select().
- Tests in tests/<backend>/3_*.rs covering:
  - SQL rendering via preview() — fields, conditions, ordering, limits, distinct, group by
  - as_count() and as_sum() render correctly
  - Live execution against a test database — SELECT, COUNT, SUM, ORDER+LIMIT
  - Entity deserialization from live query results
Step 4: Table Abstraction and Entity CRUD
The same entities get used hundreds of times across a codebase — constructing a query from scratch
every single time is tedious and error-prone. Vantage offers Table<> as an abstraction over your
entity definitions: it knows the table name, the columns, their types, and the ID field, so it can
build queries for you.
To use your persistence backend as a table source, you need to implement the TableSource trait. Most of the heavy lifting is done by the vantage-table crate — your job is limited to implementing the TableSource trait methods.
Implement TableSource with placeholder methods
Start by adding the required dependencies:
# in your backend's Cargo.toml
vantage-table = { path = "../vantage-table" }
async-trait = "0.1"
Create a new test file (e.g. tests/<backend>/4_table_def.rs) that defines a table and populates
its columns. The columns rely on the type system you built in Step 1.
The TableSource implementation also declares several associated types:
- Column — the Column type supplied by vantage-table is good enough for most backends.
- AnyType and Value — your type-erased value type from Step 1 (e.g. AnySqliteType).
- Id — use String for SQL databases, or a custom type if your IDs have special structure (e.g. SurrealDB’s Thing, which encodes table:id). Whatever you pick must be covered by your type system.
#![allow(unused)]
fn main() {
use async_trait::async_trait;
use vantage_table::column::core::{Column, ColumnType};
use vantage_table::traits::table_source::TableSource;
#[async_trait]
impl TableSource for SqliteDB {
type Column<Type> = Column<Type> where Type: ColumnType;
type AnyType = AnySqliteType;
type Value = AnySqliteType;
type Id = String;
// ...
}
}
Implement the following methods first — they’re all straightforward delegations:
- Column management — create_column, to_any_column, convert_any_column:
#![allow(unused)]
fn main() {
fn create_column<Type: ColumnType>(&self, name: &str) -> Self::Column<Type> {
Column::new(name)
}
fn to_any_column<Type: ColumnType>(
&self,
column: Self::Column<Type>,
) -> Self::Column<Self::AnyType> {
Column::from_column(column)
}
fn convert_any_column<Type: ColumnType>(
&self,
any_column: Self::Column<Self::AnyType>,
) -> Option<Self::Column<Type>> {
Some(Column::from_column(any_column))
}
}
- Expression factory — expr():
#![allow(unused)]
fn main() {
fn expr(
&self,
template: impl Into<String>,
parameters: Vec<ExpressiveEnum<Self::Value>>,
) -> Expression<Self::Value> {
Expression::new(template, parameters)
}
}
Every other method should start as todo!(). You’ll implement them incrementally in the following
sections, driven by tests.
Define entity tables
With TableSource in place, define your entity structs and table constructors. The pattern is the
same across all backends — #[entity(YourType)] for the struct, plus a builder method that returns
Table<YourDB, Entity>:
```rust
use vantage_sql::sqlite::{SqliteType, SqliteDB, AnySqliteType};
use vantage_table::table::Table;
use vantage_types::entity;

#[entity(SqliteType)]
#[derive(Debug, Clone, PartialEq, Default)]
struct Product {
    name: String,
    calories: i64,
    price: i64,
    bakery_id: String,
    is_deleted: bool,
    inventory_stock: i64,
}

impl Product {
    fn sqlite_table(db: SqliteDB) -> Table<SqliteDB, Product> {
        Table::new("product", db)
            .with_id_column("id")
            .with_column_of::<String>("name")
            .with_column_of::<i64>("calories")
            .with_column_of::<i64>("price")
            .with_column_of::<String>("bakery_id")
            .with_column_of::<bool>("is_deleted")
            .with_column_of::<i64>("inventory_stock")
    }
}
```
Note that the entity struct does not include the id field — that’s handled separately by
with_id_column(), which registers the column and sets the table’s ID field. The remaining columns
are added with with_column_of::<Type>(), which creates typed columns via your
TableSource::create_column implementation.
Verify with a query generation test
Your first test should build a table, then call table.select(). Just like the Step 3 tests, you
can use preview() to check the rendered SQL, and later execute it against a real database:
```rust
#[tokio::test]
async fn test_product_select() {
    let db = SqliteDB::connect("sqlite::memory:").await.unwrap();
    let table = Product::sqlite_table(db);
    let select = table.select();
    assert_eq!(
        select.preview(),
        "SELECT \"id\", \"name\", \"calories\", \"price\", \
         \"bakery_id\", \"is_deleted\", \"inventory_stock\" FROM \"product\""
    );
}
```
This works because table.select() (provided by vantage-table) calls your
SelectableDataSource::select() to get a fresh SELECT builder, then applies the table name via
set_source() and adds each registered column via add_field(). None of the todo!() methods are
hit — only the column and expression infrastructure you already implemented.
Implement the read methods
Table<T, E> implements two traits from vantage-dataset that provide read access:
- ReadableValueSet — returns raw Record&lt;Value&gt; (untyped storage values):
  - list_values() → all records as IndexMap&lt;Id, Record&lt;Value&gt;&gt;
  - get_value(id) → Option&lt;Record&lt;Value&gt;&gt; — None if no record matches the id
  - get_some_value() → one arbitrary record (or None if empty)
- ReadableDataSet&lt;E&gt; — returns deserialized entities (calls E::try_from_record() for you):
  - list() → all entities as IndexMap&lt;Id, E&gt;
  - get(id) → Option&lt;E&gt; — None if no entity matches the id
  - get_some() → one arbitrary entity
Both traits delegate to three TableSource methods: list_table_values, get_table_value, and
get_table_some_value. The pattern is the same for all three:
- Get the id field name from table.id_field() (falls back to "id")
- Build a SELECT using table.select() (which already applies columns, conditions, ordering)
- Execute via self.execute(&select.expr())
- Parse the result — split each row into an ID and a Record
For get_table_value, add a WHERE condition on the id field and return Ok(None) when the
lookup misses — errors are reserved for actual connection or parse failures.
For get_table_some_value, set LIMIT 1 and return the first row (or None if empty).
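The parse step can be sketched with plain std types. Assume each row arrives as an ordered map of column name to value; the helper below is hypothetical (not part of Vantage), and a real TableSource would work with Record&lt;Value&gt; rather than strings, but the shape of the logic is the same:

```rust
use std::collections::BTreeMap;

// Hypothetical helper: split one result row into (id, record).
// A missing id field is a parse failure — the caller turns the
// None into a VantageError.
fn split_row(
    mut row: BTreeMap<String, String>,
    id_field: &str,
) -> Option<(String, BTreeMap<String, String>)> {
    let id = row.remove(id_field)?;
    Some((id, row))
}

fn main() {
    let row = BTreeMap::from([
        ("id".to_string(), "p1".to_string()),
        ("name".to_string(), "Flatbread".to_string()),
    ]);
    let (id, record) = split_row(row, "id").unwrap();
    assert_eq!(id, "p1");
    assert_eq!(record.len(), 1);
}
```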
Write tests for both ReadableValueSet and ReadableDataSet in separate files — import the traits
from vantage_dataset and call list_values(), get_value(), get_some_value(), list(),
get(), get_some() against your pre-populated test database. Keep these tests condition-free —
conditions get their own test file next.
Error handling
All TableSource methods return vantage_core::Result<T> (an alias for Result<T, VantageError>).
Use the error! macro from vantage_core to create errors with structured context:
```rust
use vantage_core::error;

// Simple error message
return Err(error!("expected array result"));

// With key = value context (NOT format args — the macro uses a different syntax)
return Err(error!("row missing id field", field = id_field_name));

// For database-specific errors, convert them with map_err
let rows = query.fetch_all(self.pool()).await
    .map_err(|e| error!("SQLite query failed", details = e.to_string()))?;
```
The macro automatically captures file, line, and column. The key = value pairs are stored as
structured context, not interpolated into the message string.
To wrap external errors with additional context, use the Context trait:
```rust
use vantage_core::Context;

// Wraps the original error as the "source" of a new VantageError
let data = std::fs::read("config.json")
    .context(error!("failed to load config"))?;
```
This chains errors — the original io::Error is preserved as the source, so Display renders both
messages and the source chain is available via std::error::Error::source().
Operation trait — condition building
Each backend provides an operation trait (e.g. SqliteOperation) with .eq(), .ne(), .gt(),
.gte(), .lt(), .lte(), and .in_() methods for building conditions. It has a blanket
implementation for all Expressive<T> types, so your columns get these methods automatically — no
explicit impl needed.
All methods accept impl Expressive<YourAnyType>, so you can pass native Rust values (false,
42, "hello"), other columns (table["other_field"]), or full expressions. This requires your
scalar types to implement Expressive<YourAnyType> — the same impls you added in Step 1 for the
vendor macro.
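The blanket-impl pattern itself is plain Rust and can be sketched without any Vantage types. Everything below — the ToSql trait, the Col wrapper, string rendering instead of Expression building — is hypothetical scaffolding; it only shows why columns pick up .eq() and friends without an explicit impl:

```rust
// Hypothetical stand-in for Expressive<T>: anything that can render
// itself as an SQL fragment.
trait ToSql {
    fn to_sql(&self) -> String;
}

// Toy column wrapper and two scalar impls for the right-hand side.
struct Col(&'static str);

impl ToSql for Col {
    fn to_sql(&self) -> String {
        format!("\"{}\"", self.0)
    }
}

impl ToSql for &str {
    fn to_sql(&self) -> String {
        format!("'{}'", self.replace('\'', "''"))
    }
}

impl ToSql for i64 {
    fn to_sql(&self) -> String {
        self.to_string()
    }
}

// Default methods + a blanket impl: every ToSql type gets the
// comparison methods for free, mirroring how SqliteOperation is
// blanket-implemented for all Expressive<T>.
trait Operation: ToSql {
    fn eq(&self, other: impl ToSql) -> String {
        format!("{} = {}", self.to_sql(), other.to_sql())
    }
    fn gt(&self, other: impl ToSql) -> String {
        format!("{} > {}", self.to_sql(), other.to_sql())
    }
}

impl<T: ToSql> Operation for T {}

fn main() {
    assert_eq!(Col("price").gt(130i64), "\"price\" > 130");
    assert_eq!(Col("role").eq("admin"), "\"role\" = 'admin'");
}
```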
Testing conditions
Table carries conditions set via add_condition(), and table.select() applies them
automatically as WHERE clauses. Test a few patterns:
- Custom expression — pass columns as expression arguments via table["field"]:
```rust
let mut table = Product::sqlite_table(db);
table.add_condition(sqlite_expr!("{} > {}", (table["price"]), 130));
```
- Multiple conditions — combined with AND, including field-to-field comparison:
```rust
let mut table = Product::sqlite_table(db);
table.add_condition(sqlite_expr!("{} > {}", (table["price"]), 130));
table.add_condition(sqlite_expr!("{} > {}", (table["price"]), (table["calories"])));
```
- SqliteOperation::eq() — the idiomatic way:
```rust
use vantage_sql::sqlite::operation::SqliteOperation;

let mut table = Product::sqlite_table(db);
table.add_condition(table["is_deleted"].eq(false));
```
Implement aggregates
Implement get_table_count, get_table_sum, get_table_max, and get_table_min in your
TableSource. These build aggregate queries from table.select() and extract the scalar result.
Once implemented, Table exposes shorter get_count, get_sum, get_max, get_min methods
directly:
```rust
let table = Product::sqlite_table(db);
assert_eq!(table.get_count().await.unwrap(), 5);
assert_eq!(
    table.get_max(&table["price"]).await.unwrap().try_get::<i64>().unwrap(),
    299
);
```
Implement write operations
Table also implements WritableDataSet (insert, replace, patch, delete) and InsertableDataSet
(insert with auto-generated ID). These delegate to six TableSource methods:
- insert_table_value — INSERT with a known ID. Build an SqliteInsert with the id field and record fields, execute, then read back via get_table_value.
- replace_table_value — full replacement. For SQLite, use INSERT OR REPLACE INTO.
- patch_table_value — partial update. Build an SqliteUpdate with only the provided fields and a WHERE condition on the id field.
- delete_table_value — DELETE with a WHERE condition on the id field.
- delete_table_all_values — DELETE without conditions.
- insert_table_return_id_value — INSERT without a known ID (auto-increment). Use RETURNING "id" to get the generated ID back from the database.
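The statement shapes are simple to render by hand. Here is a hypothetical std-only sketch of the INSERT OR REPLACE form used by replace_table_value — the real implementation goes through the SqliteInsert builder with bound parameters, so treat this only as an illustration of the generated SQL:

```rust
// Hypothetical sketch of the SQL that replace_table_value renders.
// Values are bound via SQLite's ?N placeholders, never interpolated.
fn replace_sql(table: &str, id_field: &str, fields: &[&str]) -> String {
    let mut cols = vec![id_field];
    cols.extend_from_slice(fields);
    let col_list: Vec<String> = cols.iter().map(|c| format!("\"{}\"", c)).collect();
    let placeholders: Vec<String> =
        (1..=cols.len()).map(|i| format!("?{}", i)).collect();
    format!(
        "INSERT OR REPLACE INTO \"{}\" ({}) VALUES ({})",
        table,
        col_list.join(", "),
        placeholders.join(", ")
    )
}

fn main() {
    assert_eq!(
        replace_sql("product", "id", &["name", "price"]),
        "INSERT OR REPLACE INTO \"product\" (\"id\", \"name\", \"price\") VALUES (?1, ?2, ?3)"
    );
}
```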
Test both WritableValueSet (raw records, no entity) and WritableDataSet (typed entities) using
in-memory SQLite:
```rust
// WritableValueSet — no entity needed
let rec = record(&[("name", "Gamma".into()), ("price", 30i64.into())]);
table.insert_value(&"c".to_string(), &rec).await.unwrap();

// WritableDataSet — typed entities
let item = Item { name: "Gamma".into(), price: 30 };
table.insert(&"c".to_string(), &item).await.unwrap();

// InsertableDataSet — auto-generated ID
let id = table.insert_return_id(&item).await.unwrap();
let fetched = table.get(id).await.unwrap();
```
Step 5: Relationships
Tables can declare relationships using with_one and with_many, then traverse them at runtime
with get_ref_as. The relationship system is provided by vantage-table — your backend just needs
column_table_values_expr implemented to make it work.
Implement column_table_values_expr — it builds a subquery for a single column respecting current
conditions. For SQL backends this is a SELECT "col" FROM "table" WHERE ... expression.
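The shape of that expression is easy to see in isolation. A hypothetical, std-only sketch of what column_table_values_expr renders for a SQL backend — the real method returns an Expression with bound parameters, not a string:

```rust
// Hypothetical sketch: render the single-column subquery that
// column_table_values_expr produces, respecting current conditions.
fn column_values_sql(table: &str, column: &str, conditions: &[String]) -> String {
    let mut sql = format!("SELECT \"{}\" FROM \"{}\"", column, table);
    if !conditions.is_empty() {
        sql.push_str(" WHERE ");
        sql.push_str(&conditions.join(" AND "));
    }
    sql
}

fn main() {
    let sql = column_values_sql(
        "client",
        "id",
        &["is_paying_client = 1".to_string()],
    );
    assert_eq!(sql, "SELECT \"id\" FROM \"client\" WHERE is_paying_client = 1");
}
```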
Define relationships when constructing tables — with_one for foreign-key-to-parent, with_many
for parent-to-children. Then traverse:
```rust
let mut clients = client_table(db);
clients.add_condition(sqlite_expr!("{} = {}", (clients["is_paying_client"]), true));
let orders = clients.get_ref_as::<SqliteDB, ClientOrder>("orders").unwrap();

// The generated query includes the subquery:
//   SELECT ... FROM "client_order"
//   WHERE client_id IN (SELECT "id" FROM "client" WHERE is_paying_client = 1)
assert_eq!(orders.list().await.unwrap().len(), 3);
```
with_expression adds computed fields to a table using correlated subqueries. It pairs with
get_subquery_as which produces target.fk = source.id conditions (vs get_ref_as which uses
IN (subquery)).
```rust
.with_many("orders", "client_id", Order::sqlite_table)
.with_expression("order_count", |t| {
    t.get_subquery_as::<Order>("orders").unwrap().get_count_query()
})
// Generates: (SELECT COUNT(*) FROM "client_order"
//   WHERE "client_order"."client_id" = "client"."id") AS "order_count"
```
What to implement: override related_correlated_condition in your TableSource to produce a
table-qualified equality condition. The default implementation panics — backends without correlated
subquery support (such as CSV) simply can't use this feature.
```rust
fn related_correlated_condition(
    &self,
    target_table: &str,
    target_field: &str,
    source_table: &str,
    source_column: &str,
) -> Self::Condition {
    sqlite_expr!(
        "{} = {}",
        (ident(target_field).dot_of(target_table)),
        (ident(source_column).dot_of(source_table))
    )
}
```
Requires SelectableDataSource (Step 3) since aggregate query builders use table.select().
Step 6: Multi-Backend Applications
At this point your backend works — you can define tables, query data, and traverse relationships.
But a real application typically has a model crate that defines entities once and offers table
constructors for each backend. That’s bakery_model3 in the Vantage repo. The final piece is
AnyTable, which lets you treat tables from different backends uniformly.
AnyTable: the type-erased wrapper
AnyTable erases the backend and entity types behind a uniform serde_json::Value-based interface.
This is what makes it possible to write generic UI, CLI, or API code that doesn’t care which
database is behind it.
There are two ways to create one:
```rust
// 1. If your backend already uses serde_json::Value (rare):
let any = AnyTable::new(my_table);

// 2. For backends with custom value types (the common case):
let any = AnyTable::from_table(Product::sqlite_table(db));
```
from_table works as long as your AnyType implements Into<serde_json::Value> and
From<serde_json::Value>. The vantage_type_system! macro generates the Into conversion
automatically, and after Step 1 your backend should have the From direction covered too.
Building a multi-source CLI
The CLI example in bakery_model3/examples/cli.rs shows the pattern. A build_table function
matches on the user’s chosen source, calls the right entity constructor, and wraps it with
AnyTable::from_table(). Once you have an AnyTable, all commands are backend-agnostic —
list_values(), get_count(), get_some_value(), insert_value(), and delete() all work
identically regardless of which database is behind it.
Because the values flow through as serde_json::Value, the CLI renderer can inspect types at
runtime — booleans like is_deleted display as true/false with color highlighting, numbers stay
numeric, and nulls render cleanly. Your type system work in Step 1 ensures these values arrive with
the right JSON type rather than everything being a string.
Try it out:
```shell
# List products from CSV
cargo run --example cli -- csv product list

# Same thing from SQLite
cargo run --example cli -- sqlite product list

# Count bakeries in SurrealDB
cargo run --example cli -- surreal bakery count

# Get a single product record
cargo run --example cli -- sqlite product get

# Insert a new record
cargo run --example cli -- surreal bakery add myid '{"name":"Test","profit_margin":10}'

# Delete a record
cargo run --example cli -- surreal bakery delete myid
```
That’s the payoff of implementing a proper type system and TableSource — one line of
AnyTable::from_table() bridges the gap between your backend’s native types and a uniform
JSON-based interface.
Persistence-aligned Type System
Every Vantage persistence defines its own type trait — SqliteType, PostgresType, SurrealType,
MongoType, CsvType — that maps Rust values to and from the storage format. This is how
expressions like sqlite_expr!("price > {}", 150i64) know how to bind 150i64 as a parameter.
Built-in implementations cover the common ground: bool, i64, f64, String,
Option<T>, chrono date/time types, and more. Each persistence supports exactly the types its
backend can handle natively.
Because the trait is open, you can implement it for your own types — enums, newtypes, domain objects — and they’ll work everywhere expressions are used: conditions, inserts, updates.
See Adding Custom Types for a walkthrough.
Adding Custom Types
You can teach Vantage to store any Rust type by implementing the persistence type trait. Here’s the
pattern, using an Animal enum that maps to a text column across every backend:
- Define your type — a plain Rust enum (or struct).
- Implement the persistence trait — SqliteType, PostgresType, etc. Each impl says how to convert to/from the storage format (CBOR for SQL backends, BSON for MongoDB).
- Use it in expressions — once the trait is implemented, sqlite_expr!("species = {}", animal) just works.
A single type can implement traits for multiple backends, so the same Animal enum works with
SQLite, Postgres, SurrealDB, MongoDB, and CSV — each with its own serialization logic.
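The core of such an impl is an enum↔text mapping. A std-only sketch of that conversion — the surrounding persistence-trait plumbing is omitted, and the method names as_text/from_text are hypothetical helpers, not Vantage API:

```rust
#[derive(Debug, Clone, Copy, PartialEq)]
enum Animal {
    Cat,
    Dog,
    Ferret,
}

impl Animal {
    // Text form stored in the database column.
    fn as_text(self) -> &'static str {
        match self {
            Animal::Cat => "cat",
            Animal::Dog => "dog",
            Animal::Ferret => "ferret",
        }
    }

    // Read-back: unknown text yields None, which a persistence
    // trait impl would surface as a conversion error.
    fn from_text(s: &str) -> Option<Self> {
        match s {
            "cat" => Some(Animal::Cat),
            "dog" => Some(Animal::Dog),
            "ferret" => Some(Animal::Ferret),
            _ => None,
        }
    }
}

fn main() {
    assert_eq!(Animal::Dog.as_text(), "dog");
    assert_eq!(Animal::from_text("ferret"), Some(Animal::Ferret));
    assert_eq!(Animal::from_text("dragon"), None);
}
```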
SQL: PostgreSQL, MySQL & SQLite
The vantage-sql crate provides persistence implementations for three SQL databases via
sqlx. All three share the same architecture — CBOR-based type
systems, expression engine, query builder, and full TableSource implementation — but each has
vendor-specific behaviour around type affinity, quoting, and parameter binding.
Backends
| Backend | Struct | Type System | Param style | ID quoting |
|---|---|---|---|---|
| PostgreSQL | PostgresDB | AnyPostgresType | $1, $2 | "double_quote" |
| MySQL | MysqlDB | AnyMysqlType | ?, ? | `backtick` |
| SQLite | SqliteDB | AnySqliteType | ?1, ?2 | "double_quote" |
What they implement
All three implement the full trait stack:
- DataSource — marker
- ExprDataSource — parametric SQL execution with CBOR values
- SelectableDataSource — query builder with JOINs, CTEs, window functions
- TableSource — full CRUD, columns, conditions, aggregates
- TableQuerySource — table definition → full query
CBOR value representation
All SQL backends use CBOR (ciborium::Value) as their internal value type — not JSON. This
preserves type fidelity that JSON loses:
- Integer vs Float — JSON's 42 is ambiguous; CBOR distinguishes Integer(42) from Float(42.0)
- Binary data — CBOR has native byte arrays; JSON would need base64 encoding
- Precise decimals — stored as tagged CBOR values, not lossy floats
Values are converted to CBOR on write (via the type system’s to_cbor()) and read back as untyped
CBOR (via from_cbor()). The type markers from vantage_type_system! ensure correct binding —
integers bind as i64, text as &str, booleans as bool.
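The binding dispatch can be illustrated with a toy value enum — everything here is hypothetical (the real code works on ciborium::Value and sqlx bind calls), but it shows why keeping Integer and Float distinct matters at bind time:

```rust
// Toy stand-in for the CBOR value type.
#[derive(Debug, PartialEq)]
enum Value {
    Integer(i64),
    Float(f64),
    Text(String),
    Bool(bool),
}

// Hypothetical: decide which SQL bind call a value maps to.
fn bind_kind(v: &Value) -> &'static str {
    match v {
        Value::Integer(_) => "bind as i64",
        Value::Float(_) => "bind as f64",
        Value::Text(_) => "bind as &str",
        Value::Bool(_) => "bind as bool",
    }
}

fn main() {
    // JSON would collapse both of these to the number 42;
    // CBOR keeps them apart, so the binds stay typed.
    assert_eq!(bind_kind(&Value::Integer(42)), "bind as i64");
    assert_eq!(bind_kind(&Value::Float(42.0)), "bind as f64");
}
```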
Type conversion reference
Each database handles Rust types differently depending on the column type. See the Type Conversions reference for detailed round-trip tables covering:
- Chrono types — NaiveDate, NaiveTime, NaiveDateTime, DateTime&lt;Utc&gt;, DateTime&lt;FixedOffset&gt;
- Numeric types — Decimal, i64, f64
- Exact vs lossy vs truncated behaviour per column type
- Cross-type coercion rules and error cases
SQL Primitives
Primitives are reusable building blocks for constructing SQL expressions. They handle quoting, escaping, vendor-specific syntax, and logical composition so you don’t have to.
Conditions built with typed columns and SqliteOperation cover simple comparisons, but real queries
need more — OR groups, function calls, string concatenation, date formatting. That’s what
primitives are for.
Import them with:
```rust
use vantage_sql::primitives::*;
```
Primitives are not part of the prelude — import them explicitly when needed.
Some primitives have convenience macros that accept a variable number of arguments and call
.expr() on each one automatically: fx! → Fx, concat_! → Concat. The macros are
syntactic sugar — if you need to build arguments programmatically (e.g. from a Vec), use
the underlying struct directly.
or_() / and_() — Logical Combinators
By default, multiple calls to .with_condition() combine with AND. When you need OR, use
or_():
```rust
use vantage_sql::primitives::*;

// role = 'admin' OR role = 'superuser'
let cond = or_(ident("role").eq("admin"), ident("role").eq("superuser"));
```
For nested logic, combine with and_():
```rust
// (price > 100 AND in_stock = 1) OR (featured = 1)
let cond = or_(
    and_(ident("price").gt(100), ident("in_stock").eq(true)),
    ident("featured").eq(true),
);
```
Both return Expression<T>, so they plug directly into .with_condition().
ident() — Identifiers
Creates a quoted column or table name. Quoting adapts per backend (" for SQLite/Postgres, `
for MySQL).
```rust
let col = ident("price");                     // "price"
let qualified = ident("name").dot_of("u");    // "u"."name"
let aliased = ident("total").with_alias("t"); // "total" AS "t"
```
ident() is a shorthand for
Identifier::new(). Reserved words and
names with spaces are quoted automatically.
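The dialect-aware quoting itself is a small amount of logic. A hypothetical std-only sketch (the real Identifier also handles aliasing and dot-qualification):

```rust
#[derive(Clone, Copy)]
enum Dialect {
    Sqlite,
    Postgres,
    Mysql,
}

// Hypothetical helper: quote an identifier for a given dialect,
// doubling any embedded quote character to keep it safe.
fn quote_ident(name: &str, dialect: Dialect) -> String {
    let q = match dialect {
        Dialect::Sqlite | Dialect::Postgres => '"',
        Dialect::Mysql => '`',
    };
    let escaped = name.replace(q, &format!("{q}{q}"));
    format!("{q}{escaped}{q}")
}

fn main() {
    assert_eq!(quote_ident("price", Dialect::Sqlite), "\"price\"");
    assert_eq!(quote_ident("price", Dialect::Mysql), "`price`");
    assert_eq!(quote_ident("we\"ird", Dialect::Postgres), "\"we\"\"ird\"");
}
```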
fx! — Function Calls
The fx! macro builds a SQL function call. Arguments are passed directly —
.expr() is called on each one automatically:
```rust
fx!("count", sqlite_expr!("*"))
// => COUNT(*)

fx!("avg", ident("price"))
// => AVG("price")

// Multiple arguments
fx!("coalesce", ident("nickname"), "anonymous")
// => COALESCE("nickname", 'anonymous')

// Nested
fx!("round", fx!("avg", ident("price")), 2i64)
// => ROUND(AVG("price"), 2)
```
If you need to build arguments programmatically (e.g. from a Vec), use
Fx::new() directly:
```rust
let args: Vec<Expression<AnySqliteType>> = columns.iter().map(|c| c.expr()).collect();
let f = Fx::new("coalesce", args);
```
ternary() — Conditional Expression
Three-valued conditional. Renders as IIF() on SQLite, IF() on MySQL, and
CASE WHEN ... THEN ... ELSE ... END on PostgreSQL:
```rust
let expr = ternary(
    ident("stock").gt(0),
    "in stock",
    "sold out",
);
```
ternary() is a shorthand for
Ternary::new().
Case — CASE Expressions
For more than two branches, use Case to build a full
CASE WHEN ... END block:
```rust
let expr = Case::new()
    .when(ident("status").eq("active"), "yes")
    .when(ident("status").eq("banned"), "no")
    .else_("unknown");
```
concat_! — String Concatenation
Concatenates expressions. Renders as || on SQLite/Postgres, CONCAT() on MySQL. The
concat_! macro calls .expr() on each argument automatically:
```rust
concat_!(ident("first_name"), " ", ident("last_name"))
```
Use .ws() for a separator — it accepts any
Expressive<T>, including string literals:
```rust
concat_!(ident("first_name"), ident("last_name")).ws(", ")
// SQLite: "first_name" || ', ' || "last_name"
// MySQL:  CONCAT_WS(', ', `first_name`, `last_name`)
```
Interval — Date Intervals
Portable date interval that adapts per backend:
```rust
let i = Interval::days(30);
// SQLite:   30 (used with date functions)
// MySQL:    INTERVAL 30 DAY
// Postgres: INTERVAL '30 days'
```
See Interval for available constructors (days,
hours, months, etc.).
date_format() — Date Formatting
Portable strftime-style formatting. Translates format tokens per backend:
```rust
let formatted = date_format(ident("created_at"), "%Y-%m-%d");
// SQLite:   STRFTIME('%Y-%m-%d', "created_at")
// MySQL:    DATE_FORMAT("created_at", '%Y-%m-%d')
// Postgres: TO_CHAR("created_at", 'YYYY-MM-DD')
```
date_format() is a shorthand for
DateFormat::new(). Use
.raw_format() to skip token
translation and pass a native format string.
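Token translation is a straightforward substitution. A hypothetical sketch of the Postgres direction only — the real DateFormat covers a fuller token set and all three backends:

```rust
// Hypothetical sketch: translate a few strftime tokens into
// Postgres TO_CHAR tokens. Note %m (month) and %M (minute) are
// distinct, so replacement is case-sensitive.
fn to_pg_format(strftime: &str) -> String {
    strftime
        .replace("%Y", "YYYY")
        .replace("%m", "MM")
        .replace("%d", "DD")
        .replace("%H", "HH24")
        .replace("%M", "MI")
        .replace("%S", "SS")
}

fn main() {
    assert_eq!(to_pg_format("%Y-%m-%d"), "YYYY-MM-DD");
    assert_eq!(to_pg_format("%H:%M:%S"), "HH24:MI:SS");
}
```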
Type Conversions
How Rust native types round-trip through different SQL column types. Tested via entity insert + read-back (Table API).
Legend:
- exact = lossless round-trip, value identical after insert + read
- lossy = value stored but precision reduced (e.g. f64 rounding)
- truncated = value stored but digits beyond column precision are cut
- err = database rejects the insert or entity conversion fails on read
Chrono Types
MySQL
| Column \ Rust type | NaiveDate | NaiveTime | NaiveDateTime | DateTime<Utc> | DateTime<FixedOffset> |
|---|---|---|---|---|---|
| VARCHAR | exact | exact | exact | exact | exact |
| DATE | exact | err | err | err | — |
| TIME | err | truncating µs | err | err | — |
| TIME(6) | err | exact | err | err | — |
| DATETIME | err | err | truncating µs | truncating µs | offset → +00:00 |
| DATETIME(6) | err | err | exact | exact | — |
| TIMESTAMP | err | err | truncating µs | truncating µs | offset → +00:00 |
| TIMESTAMP(6) | err | err | exact | exact | — |
- Format: "2025-01-10 12:00:00" (space separator, no T); FixedOffset appends +05:30
- TIME/DATETIME/TIMESTAMP default to 0 fractional digits — use (6) for microseconds
- DateTime&lt;FixedOffset&gt;: VARCHAR preserves offset; DATETIME/TIMESTAMP normalize to UTC
- Cross-type coercions (e.g. NaiveTime → DATE) fail with variant mismatch
PostgreSQL
| Column \ Rust type | NaiveDate | NaiveTime | NaiveDateTime | DateTime<Utc> | DateTime<FixedOffset> |
|---|---|---|---|---|---|
| VARCHAR | exact | exact | exact | exact | offset → +00:00 |
| DATE | exact | err | err | err | — |
| TIME | err | exact | err | err | — |
| TIMESTAMP | err | err | exact | exact | offset → +00:00 |
| TIMESTAMPTZ | err | err | exact | exact | offset → +00:00 |
- Format: "2025-01-10 12:00:00+00" (space separator, abbreviated tz offset)
- Microsecond precision by default — no (6) suffix needed
- Typed binds (chrono types, not text) required for DATE/TIME/TIMESTAMP/TIMESTAMPTZ
- DateTime&lt;FixedOffset&gt;: offset always normalized to UTC (typed binds); no column preserves it
- Cross-type coercions fail with variant mismatch
SQLite
| Column \ Rust type | NaiveDate | NaiveTime | NaiveDateTime | DateTime<Utc> | DateTime<FixedOffset> |
|---|---|---|---|---|---|
| TEXT | exact | exact | exact | exact | exact |
- All dates stored as TEXT — format: ISO 8601 with T separator ("2025-01-10T12:00:00Z")
- Subsecond precision and timezone offsets preserved as-is
Numeric Types
MySQL
| Column \ Rust type | Decimal | i64 | f64 |
|---|---|---|---|
| VARCHAR | exact | exact | exact |
| DECIMAL(20,6) | truncated to 6 places | err | err |
| DECIMAL(38,15) | exact | err | err |
| DOUBLE | lossy (~15 digits) | err | exact |
| FLOAT | lossy (~7 digits) | err | lossy (~7 digits) |
| BIGINT | fractional part lost | exact | err |
- No cross-conversion between Integer, Float, and Decimal CBOR types
- VARCHAR works for all types — from_cbor parses text as fallback
PostgreSQL
| Column \ Rust type | Decimal | i64 | f64 |
|---|---|---|---|
| VARCHAR | exact | exact | exact |
| NUMERIC(20,6) | truncated to 6 places | err | err |
| NUMERIC(38,15) | exact | err | err |
| DOUBLE PRECISION | lossy (~15 digits) | err | exact |
| REAL | lossy (~7 digits) | err | lossy (~7 digits) |
| BIGINT | fractional part lost | exact | err |
- Typed binds (rust_decimal::Decimal) used for NUMERIC columns
SQLite
| Column \ Rust type | Decimal | i64 | f64 |
|---|---|---|---|
| TEXT | exact | exact | exact |
| NUMERIC | lossy (~15 digits, stored as REAL) | exact | exact |
| REAL | lossy (~15 digits) | err | exact |
| INTEGER | fractional part lost | exact | err |
- SQLite infers CBOR type from stored value, not declared column type
- NUMERIC/REAL affinities coerce to float — i64 in REAL comes back as Float and fails
- Store Decimal in TEXT for lossless precision