@drizzle-adapter/tidb-serverless
TiDB Serverless adapter implementation for the Drizzle Adapter ecosystem.
Overview
The @drizzle-adapter/tidb-serverless package provides the TiDB Serverless implementation for the Drizzle Adapter interface. While you don't interact with this package directly (you use @drizzle-adapter/core instead), it enables support for TiDB Serverless in the Drizzle Adapter ecosystem.
Why TiDB Serverless?
TiDB Serverless offers unique advantages for modern applications:
- Distributed SQL: TiDB is a distributed SQL database that scales horizontally while maintaining strong consistency
- MySQL Compatibility: Highly compatible with the MySQL protocol and syntax while offering distributed capabilities
- Auto-scaling: Automatically scales compute and storage resources based on workload
- Serverless Pricing: Usage-based billing, so you pay only for what you actually consume
- HTAP Capabilities: Hybrid Transactional and Analytical Processing for real-time analytics
- Global Distribution: Deploy across regions with automatic data replication
- High Availability: Built-in high availability with automatic failover
- Real-time Analytics: Run complex analytical queries alongside transactional workloads
Perfect For:
- Microservices: Scale individual services independently
- Global Applications: Serve users worldwide with low latency
- Real-time Analytics: Process transactional and analytical workloads simultaneously
- Cost-Sensitive Workloads: Pay only for actual usage
- High-Growth Applications: Scale seamlessly from startup to enterprise
Installation
# Install both the core package and the TiDB adapter
pnpm install @drizzle-adapter/core @drizzle-adapter/tidb-serverless
Important: Adapter Registration
For the adapter to work correctly with the DrizzleAdapterFactory, you must import it for its self-registration side effects:
// Import for side effects - adapter will self-register
import '@drizzle-adapter/tidb-serverless';
// Now you can use the factory
import { DrizzleAdapterFactory } from '@drizzle-adapter/core';
Usage
Configuration
import { DrizzleAdapterFactory, TypeDrizzleDatabaseConfig } from '@drizzle-adapter/core';
const config: TypeDrizzleDatabaseConfig = {
DATABASE_DRIVER: 'tidb-serverless',
DATABASE_URL: 'mysql://user:password@your-tidb-serverless-host:4000/your-database'
};
const factory = new DrizzleAdapterFactory();
const adapter = factory.create(config);
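In most deployments the connection string comes from an environment variable rather than being hard-coded. The sketch below shows one way to do that; the configFromEnv helper is illustrative and not part of the package API:
import { DrizzleAdapterFactory, TypeDrizzleDatabaseConfig } from '@drizzle-adapter/core';
import '@drizzle-adapter/tidb-serverless';
// Illustrative helper: builds the adapter config from the environment.
function configFromEnv(): TypeDrizzleDatabaseConfig {
  const url = process.env.DATABASE_URL;
  if (!url) {
    throw new Error('DATABASE_URL is not set');
  }
  return {
    DATABASE_DRIVER: 'tidb-serverless',
    DATABASE_URL: url
  };
}
const adapter = new DrizzleAdapterFactory().create(configFromEnv());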
Schema Definition
const dataTypes = adapter.getDataTypes();
const users = dataTypes.dbTable('users', {
id: dataTypes.dbBigInt('id', { mode: 'number' }).primaryKey().autoincrement(),
name: dataTypes.dbVarChar('name', { length: 255 }).notNull(),
email: dataTypes.dbVarChar('email', { length: 255 }).notNull().unique(),
status: dataTypes.dbEnum('status', ['active', 'inactive']).default('active'),
metadata: dataTypes.dbJson('metadata'),
createdAt: dataTypes.dbTimestamp('created_at').defaultNow()
});
const posts = dataTypes.dbTable('posts', {
id: dataTypes.dbBigInt('id', { mode: 'number' }).primaryKey().autoincrement(),
userId: dataTypes.dbBigInt('user_id', { mode: 'number' })
.references(() => users.id),
title: dataTypes.dbVarChar('title', { length: 255 }).notNull(),
content: dataTypes.dbText('content').notNull(),
published: dataTypes.dbBoolean('published').default(false),
createdAt: dataTypes.dbTimestamp('created_at').defaultNow()
});
Basic CRUD Operations
import { eq, and, or, desc } from 'drizzle-orm';
const connection = await adapter.getConnection();
const client = connection.getClient();
try {
// INSERT
// Single insert with returning
const [newUser] = await client
.insert(users)
.values({
name: 'John Doe',
email: 'john@example.com',
metadata: { role: 'user' }
})
.returning();
// Bulk insert
await client
.insert(posts)
.values([
{
userId: newUser.id,
title: 'First Post',
content: 'Hello, world!'
},
{
userId: newUser.id,
title: 'Second Post',
content: 'Another post'
}
]);
// SELECT
// Select all
const allUsers = await client
.select()
.from(users);
// Select with conditions
const activeUsers = await client
.select()
.from(users)
.where(eq(users.status, 'active'));
// Select with multiple conditions
const specificPosts = await client
.select()
.from(posts)
.where(
and(
eq(posts.userId, newUser.id),
eq(posts.published, true)
)
);
// Select with join
const userPosts = await client
.select({
userName: users.name,
postTitle: posts.title,
postContent: posts.content
})
.from(posts)
.leftJoin(users, eq(posts.userId, users.id))
.where(eq(posts.published, true))
.orderBy(desc(posts.createdAt));
// Select with complex conditions
const searchResults = await client
.select()
.from(posts)
.where(
or(
eq(posts.published, true),
eq(posts.userId, newUser.id)
)
)
.orderBy(desc(posts.createdAt))
.limit(10);
// UPDATE
// Update single record
const [updated] = await client
.update(users)
.set({ name: 'John Smith' })
.where(eq(users.id, newUser.id))
.returning();
// Update multiple records
await client
.update(posts)
.set({ published: true })
.where(eq(posts.userId, newUser.id));
// DELETE
// Delete single record
await client
.delete(users)
.where(eq(users.id, newUser.id));
// Delete with conditions
await client
.delete(posts)
.where(
and(
eq(posts.userId, newUser.id),
eq(posts.published, false)
)
);
} finally {
await connection.disconnect();
}
TiDB-Specific Features
Distributed Transactions
const orders = dataTypes.dbTable('orders', {
id: dataTypes.dbBigInt('id', { mode: 'number' }).primaryKey().autoincrement(),
userId: dataTypes.dbBigInt('user_id', { mode: 'number' }),
amount: dataTypes.dbDecimal('amount', { precision: 10, scale: 2 }),
status: dataTypes.dbEnum('status', ['pending', 'completed', 'failed']),
createdAt: dataTypes.dbTimestamp('created_at').defaultNow()
});
// TiDB handles distributed transactions automatically
async function createOrder(userId: number, amount: number) {
const connection = await adapter.getConnection();
const client = connection.getClient();
try {
const [order] = await client
.insert(orders)
.values({
userId,
amount,
status: 'pending'
})
.returning();
// Even in a distributed environment, this is ACID compliant
return order;
} finally {
await connection.disconnect();
}
}
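When a business operation spans multiple statements, you can group them into an explicit transaction. This is a hedged sketch: it assumes the client returned by getClient() is a drizzle database instance whose driver supports drizzle's transaction() API, and completeOrder is an illustrative helper rather than part of this package:
// Assumption: the underlying drizzle client exposes transaction().
async function completeOrder(orderId: number, userId: number) {
  const connection = await adapter.getConnection();
  const client = connection.getClient();
  try {
    await client.transaction(async (tx) => {
      // Both statements commit together; if either throws, the whole transaction is rolled back.
      await tx
        .update(orders)
        .set({ status: 'completed' })
        .where(eq(orders.id, orderId));
      await tx
        .update(users)
        .set({ metadata: { lastOrderId: orderId } })
        .where(eq(users.id, userId));
    });
  } finally {
    await connection.disconnect();
  }
}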
Real-time Analytics
import { desc, sql } from 'drizzle-orm';
const analytics = dataTypes.dbTable('analytics', {
id: dataTypes.dbBigInt('id', { mode: 'number' }).primaryKey().autoincrement(),
metric: dataTypes.dbVarChar('metric', { length: 255 }),
value: dataTypes.dbDouble('value'),
dimensions: dataTypes.dbJson('dimensions'),
timestamp: dataTypes.dbTimestamp('timestamp').defaultNow()
});
// TiDB can handle analytical queries alongside transactional workloads
async function getMetricsTrends() {
const connection = await adapter.getConnection();
const client = connection.getClient();
try {
return await client
.select({
metric: analytics.metric,
hourly_avg: sql`AVG(${analytics.value})`,
hour: sql`DATE_FORMAT(${analytics.timestamp}, '%Y-%m-%d %H:00:00')`
})
.from(analytics)
.groupBy(analytics.metric, sql`hour`)
.orderBy(desc(sql`hour`));
} finally {
await connection.disconnect();
}
}
Best Practices
- Connection Management: Always use try/finally to ensure connections are properly closed
- Batch Operations: Use bulk inserts/updates when possible for better performance
- Transactions: Leverage TiDB's distributed transaction capabilities
- Query Optimization: Use appropriate indexes and limit result sets
- Error Handling: Implement retry logic for transient errors (see the sketch below)
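For transient errors (for example a brief network failure while the serverless endpoint scales), a small retry wrapper is usually enough. This is a minimal sketch; withRetry and its backoff values are illustrative and not an API of this package:
// Retries an async operation with exponential backoff (100ms, 200ms, 400ms, ...).
async function withRetry<T>(operation: () => Promise<T>, maxAttempts = 3): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await operation();
    } catch (error) {
      lastError = error;
      if (attempt < maxAttempts) {
        await new Promise((resolve) => setTimeout(resolve, 100 * 2 ** (attempt - 1)));
      }
    }
  }
  throw lastError;
}
// Usage: wrap any query from the examples above.
const retriedUsers = await withRetry(() => client.select().from(users).where(eq(users.status, 'active')));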
Contributing
We welcome contributions! Please feel free to submit a Pull Request. For major changes, please open an issue first to discuss what you would like to change.
Related Packages
- @drizzle-adapter/core - Core interfaces and types
- @drizzle-adapter/mysql-core - Shared MySQL functionality