logixia 1.3.1 — Async-First Logging That Doesn't Block Your Node.js App
By Sanjeev Sharma (@webcoderspeed1)
Introduction
Winston, Pino, Bunyan — they're all good loggers. But when you need logs in a database, request IDs automatically injected in every log line, sensitive fields redacted before they hit disk, or full-text search over your log history, you're bolting on extra packages and writing glue code. logixia ships all of that as first-class features.
- Installation
- Basic Setup
- Non-Blocking by Design
- Database Transports
- File Rotation
- Request Tracing with AsyncLocalStorage
- Field Redaction
- Log Search with SearchManager
- NestJS Module
- OpenTelemetry Integration
- Adaptive Log Level
- Child Loggers
- Graceful Shutdown
- Kafka and WebSocket Transports
- Custom Transports
- Conclusion
Installation
npm install logixia
# or
yarn add logixia
For NestJS:
npm install logixia @logixia/nestjs
Basic Setup
import { Logger } from 'logixia'
const logger = new Logger({
level: 'info',
transports: ['console'],
})
logger.info('Server started', { port: 3000 })
logger.warn('Rate limit approaching', { userId: 'u123', requests: 95 })
logger.error('Payment failed', { invoiceId: 'inv_456', error: err })
All log methods are async and non-blocking. The underlying transport writes happen off the main event loop — your request handler returns immediately, the log is written in the background.
Non-Blocking by Design
The difference between synchronous and asynchronous logging matters under load:
// ❌ Synchronous file logging: blocks the event loop until the write finishes
logger.info('Request completed') // waits on the write syscall before returning
// ✅ logixia — schedules write, returns immediately
await logger.info('Request completed')
// or fire-and-forget:
logger.info('Request completed').catch(handleLogError)
At 10,000 req/s, synchronous file writes add roughly 2–5 ms of latency per request. logixia's async queue batches writes and flushes them in chunks, so the event loop stays unblocked.
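The batching idea can be sketched in a few lines. This is a toy illustration of the pattern, not logixia's actual internals: callers enqueue synchronously, and the real I/O happens in batches off the hot path.

```javascript
// A minimal batching queue: push() is O(1) and does no I/O; writes
// happen one batch at a time, either when the batch fills or on a timer.
class BatchQueue {
  constructor(writeBatch, { batchSize = 100, flushInterval = 5000 } = {}) {
    this.writeBatch = writeBatch // async (entries) => void, e.g. one INSERT
    this.batchSize = batchSize
    this.buffer = []
    this.timer = setInterval(() => { this.flush() }, flushInterval)
    this.timer.unref?.() // don't keep the process alive just for logging
  }

  push(entry) {
    this.buffer.push(entry) // no I/O here; caller returns immediately
    if (this.buffer.length >= this.batchSize) this.flush()
  }

  async flush() {
    if (this.buffer.length === 0) return
    const batch = this.buffer
    this.buffer = []
    await this.writeBatch(batch) // one write per batch
  }

  async close() {
    clearInterval(this.timer)
    await this.flush()
  }
}
```

The caller never awaits disk or network I/O; the worst case is an array push and a scheduled flush.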
Database Transports
This is logixia's standout feature: native transports for PostgreSQL, MySQL, MongoDB, and SQLite. Logs go directly into your database — queryable, indexable, and part of your existing backup strategy.
import { Logger, PostgresTransport } from 'logixia'
const logger = new Logger({
level: 'info',
transports: [
new PostgresTransport({
connectionString: process.env.DATABASE_URL,
table: 'app_logs', // auto-created if it doesn't exist
batchSize: 100, // insert 100 logs per query
flushInterval: 5000, // or every 5 seconds
}),
],
})
// MySQL
import { MySQLTransport } from 'logixia'
new MySQLTransport({ host, user, password, database, table: 'logs' })
// MongoDB
import { MongoTransport } from 'logixia'
new MongoTransport({ uri: process.env.MONGO_URI, collection: 'logs' })
// SQLite (great for local dev / edge)
import { SQLiteTransport } from 'logixia'
new SQLiteTransport({ path: './logs.db' })
The auto-created table schema (PostgreSQL shown):
CREATE TABLE app_logs (
id BIGSERIAL PRIMARY KEY,
level VARCHAR(10) NOT NULL,
message TEXT NOT NULL,
meta JSONB,
trace_id VARCHAR(64),
request_id VARCHAR(64),
timestamp TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
CREATE INDEX ON app_logs (level);
CREATE INDEX ON app_logs (trace_id);
CREATE INDEX ON app_logs (timestamp);
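To see how `batchSize: 100` turns into a single query, here is a sketch of turning a batch of entries into one multi-row, parameterized INSERT against the table above. The placeholder style is Postgres (`$1, $2, ...`); logixia's generated SQL may differ.

```javascript
// Build one INSERT for a whole batch of log entries.
function buildBatchInsert(entries) {
  const cols = ['level', 'message', 'meta', 'trace_id', 'timestamp']
  const values = []
  const rows = entries.map((e, i) => {
    values.push(e.level, e.message, JSON.stringify(e.meta ?? null),
                e.traceId ?? null, e.timestamp)
    const base = i * cols.length
    const placeholders = cols.map((_, j) => '$' + (base + j + 1))
    return '(' + placeholders.join(', ') + ')'
  })
  return {
    text: `INSERT INTO app_logs (${cols.join(', ')}) VALUES ${rows.join(', ')}`,
    values, // bound parameters, never string-concatenated into the SQL
  }
}
```

One round trip per hundred logs instead of one per log is where the throughput win comes from.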
File Rotation
For apps writing to disk, logixia supports both size-based and time-based rotation:
import { FileRotationTransport } from 'logixia'
const logger = new Logger({
transports: [
new FileRotationTransport({
dir: './logs',
filename: 'app.log',
rotation: 'daily', // or 'hourly' | 'weekly'
maxFiles: 30, // keep 30 days of logs
maxSize: '100MB', // also rotate if file exceeds 100MB
compress: true, // gzip rotated files
}),
],
})
// Creates: logs/app.log (current), logs/app-2026-03-13.log.gz (yesterday), ...
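For daily rotation, the archive name above is just the base filename stamped with the rotation date. A sketch of that naming (illustrative only; logixia's exact logic may differ):

```javascript
// Derive the rotated archive name, e.g. 'app.log' -> 'app-2026-03-13.log.gz'
function rotatedName(filename, date = new Date()) {
  const stamp = date.toISOString().slice(0, 10) // YYYY-MM-DD (UTC)
  return filename.replace(/\.log$/, `-${stamp}.log`) + '.gz'
}
```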
You can combine multiple transports — log to console in dev, file + database in production:
const logger = new Logger({
level: process.env.NODE_ENV === 'production' ? 'info' : 'debug',
transports: [
'console',
...(process.env.NODE_ENV === 'production'
? [
new PostgresTransport({ connectionString: process.env.DATABASE_URL }),
new FileRotationTransport({ dir: '/var/log/myapp', rotation: 'daily' }),
]
: []),
],
})
Request Tracing with AsyncLocalStorage
The killer feature for API logs: every log line automatically includes the current request's trace ID and request ID — without passing a logger instance through every function call.
import { Logger, RequestContext } from 'logixia'
const logger = new Logger({ transports: ['console'] })
// Express middleware — sets up AsyncLocalStorage context per request
app.use(RequestContext.middleware(logger))
// Now ANY log call in the request lifecycle includes trace/request IDs automatically
app.get('/orders', async (req, res) => {
logger.info('Fetching orders') // → includes requestId: 'req_abc123'
const orders = await db.query('SELECT * FROM orders')
logger.info('Orders fetched', { count: orders.length }) // → same requestId
res.json(orders)
})
// Even in nested service calls — no need to pass logger around
class OrderService {
async processOrder(orderId: string) {
logger.info('Processing order', { orderId }) // → still has requestId from HTTP context
// ...
}
}
This works because logixia uses Node.js AsyncLocalStorage — the context propagates through the entire async call chain automatically, including callbacks, Promises, and async/await.
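The mechanism is easy to see in miniature: one `AsyncLocalStorage` holds the per-request context, and the logger reads it at log time. This is the pattern `RequestContext.middleware` builds on; the names below are illustrative, not logixia's code.

```javascript
import { AsyncLocalStorage } from 'node:async_hooks'

const als = new AsyncLocalStorage()

// The "logger" reads whatever context als.run() established.
function log(message, meta = {}) {
  const ctx = als.getStore() ?? {}
  return { ...ctx, message, ...meta } // requestId rides along automatically
}

// Two concurrent "requests", each with its own isolated context.
function handleRequest(requestId) {
  return als.run({ requestId }, async () => {
    await new Promise((r) => setTimeout(r, Math.random() * 10))
    return log('Orders fetched', { count: 42 }) // no requestId passed in
  })
}
```

Even though both requests interleave on the same event loop, each `log()` call sees only its own request's context.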
Field Redaction
Prevent sensitive data from ever reaching your logs or database:
const logger = new Logger({
transports: ['console'],
redact: {
fields: [
'password',
'token',
'authorization',
'creditCard',
'user.ssn', // dot-notation for nested fields
'user.bankAccount',
/apiKey/i, // regex to match field names
],
replacement: '[REDACTED]',
},
})
logger.info('User signup', {
email: 'sanjeev@example.com',
password: 'supersecret123', // → '[REDACTED]'
user: {
name: 'Sanjeev',
ssn: '123-45-6789', // → '[REDACTED]'
},
})
// Output: { email: 'sanjeev@example.com', password: '[REDACTED]', user: { name: 'Sanjeev', ssn: '[REDACTED]' } }
Redaction happens before any transport writes the log — sensitive data never reaches disk, database, or log aggregators.
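A minimal sketch of dot-path plus regex redaction, matching the config shape above (illustrative, not logixia's implementation):

```javascript
// Recursively copy an object, replacing any field whose name or
// dot-path matches the redaction list.
function redact(value, fields, replacement = '[REDACTED]', path = '') {
  if (value === null || typeof value !== 'object') return value
  const out = Array.isArray(value) ? [] : {}
  for (const [key, child] of Object.entries(value)) {
    const fullPath = path ? `${path}.${key}` : key
    const matched = fields.some((f) =>
      f instanceof RegExp ? f.test(key) : f === key || f === fullPath
    )
    out[key] = matched
      ? replacement
      : redact(child, fields, replacement, fullPath)
  }
  return out
}
```

Because this runs on the log metadata before any transport sees it, the original values never leave process memory.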
Log Search with SearchManager
When logs are in a database, you can actually search them:
import { SearchManager } from 'logixia'
const search = new SearchManager({ connectionString: process.env.DATABASE_URL })
// Full-text search
const results = await search.query({
text: 'payment failed',
level: 'error',
from: new Date('2026-03-13'),
to: new Date('2026-03-14'),
limit: 50,
})
// Filter by trace ID — see all logs for one request
const traceLog = await search.query({
traceId: '4bf92f3577b34da6a3ce929d0e0e4736',
})
// Filter by metadata fields
const userLogs = await search.query({
meta: { userId: 'u_123' },
level: ['warn', 'error'],
})
This is what makes database logging compelling over file logging — you can find all errors for a specific user, trace a slow request end-to-end, or correlate logs across services by trace ID.
NestJS Module
// app.module.ts
import { Module } from '@nestjs/common'
import { PostgresTransport } from 'logixia'
import { LogixiaLoggerModule } from '@logixia/nestjs'
@Module({
imports: [
LogixiaLoggerModule.forRoot({
level: 'info',
transports: [
'console',
new PostgresTransport({
connectionString: process.env.DATABASE_URL,
}),
],
redact: { fields: ['password', 'token'] },
requestContext: true, // auto-inject request IDs via middleware
}),
],
})
export class AppModule {}
// In any service
import { Injectable } from '@nestjs/common'
import { InjectLogger, LogixiaLogger } from '@logixia/nestjs'
@Injectable()
export class OrderService {
constructor(@InjectLogger() private logger: LogixiaLogger) {}
async createOrder(dto: CreateOrderDto) {
this.logger.info('Creating order', { userId: dto.userId, items: dto.items.length })
try {
const order = await this.db.order.create(dto)
this.logger.info('Order created', { orderId: order.id })
return order
} catch (err) {
this.logger.error('Order creation failed', { error: err.message, dto })
throw err
}
}
}
OpenTelemetry Integration
logixia reads the active OTel trace context and automatically includes traceId and spanId in every log:
import { Logger, OtelContextProvider } from 'logixia'
const logger = new Logger({
transports: ['console'],
contextProvider: new OtelContextProvider(), // reads OTel trace context
})
// Inside a traced request:
// Span is active from OTel middleware/instrumentation
logger.info('Processing payment')
// → { message: 'Processing payment', traceId: '4bf92f...', spanId: 'abc123...' }
Log entries link directly to traces in Jaeger, Zipkin, or any OTel-compatible backend.
Adaptive Log Level
Automatically increase log verbosity when error rates spike:
const logger = new Logger({
level: 'info',
adaptive: {
enabled: true,
errorThreshold: 10, // errors per minute to trigger escalation
escalateTo: 'debug', // switch to debug level during incidents
cooldown: 300_000, // return to info after 5 minutes of normal error rate
},
})
// During normal operation: only info+ logs written
// After 10 errors/min: debug logs enabled automatically for deeper visibility
// After 5 minutes below threshold: back to info
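The escalation logic amounts to counting errors in a sliding window. A sketch of that mechanism (illustrative only; logixia's internals, including the cooldown handling, may differ):

```javascript
// Track error timestamps; report the escalated level while the
// error rate inside the window is at or above the threshold.
class AdaptiveLevel {
  constructor({ base = 'info', escalateTo = 'debug', errorThreshold = 10, windowMs = 60_000 } = {}) {
    Object.assign(this, { base, escalateTo, errorThreshold, windowMs })
    this.errorTimes = []
  }

  recordError(now = Date.now()) {
    this.errorTimes.push(now)
  }

  current(now = Date.now()) {
    // forget errors that fell out of the sliding window
    this.errorTimes = this.errorTimes.filter((t) => now - t < this.windowMs)
    return this.errorTimes.length >= this.errorThreshold ? this.escalateTo : this.base
  }
}
```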
Child Loggers
Create scoped loggers that inherit configuration but add fixed context:
const requestLogger = logger.child({
service: 'payment-service',
environment: process.env.NODE_ENV,
})
const orderLogger = requestLogger.child({ module: 'orders' })
orderLogger.info('Order processed', { orderId: 'ord_123' })
// → { service: 'payment-service', environment: 'production', module: 'orders', message: 'Order processed', orderId: 'ord_123' }
Child loggers are useful for adding service/module context without repeating it in every log call.
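Conceptually, a child logger is just its parent plus bound context, merged at log time, with call-site fields winning on key conflicts. A sketch (illustrative only):

```javascript
// Each child closes over the merged context of its ancestors.
function makeLogger(context = {}) {
  return {
    info: (message, meta = {}) => ({ ...context, message, ...meta }),
    child: (extra) => makeLogger({ ...context, ...extra }),
  }
}
```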
Graceful Shutdown
In-flight logs get flushed before the process exits:
process.on('SIGTERM', async () => {
console.log('Shutting down...')
await logger.flush() // wait for all buffered logs to write
await logger.close() // close transport connections
process.exit(0)
})
This matters for database transports that batch writes: without a graceful shutdown, any logs still sitting in the buffer (up to a full flush interval's worth) can be lost.
Kafka and WebSocket Transports
For streaming logs to Kafka topics or a real-time dashboard:
import { KafkaTransport, WebSocketTransport } from 'logixia'
const logger = new Logger({
transports: [
new KafkaTransport({
brokers: ['kafka:9092'],
topic: 'app-logs',
compression: 'gzip',
}),
new WebSocketTransport({
url: 'ws://log-dashboard:8080',
levels: ['error', 'warn'], // only stream high-priority logs
}),
],
})
Custom Transports
If the built-ins don't cover your use case, implement Transport:
import { Transport, LogEntry } from 'logixia'
class SlackTransport implements Transport {
async write(entry: LogEntry): Promise<void> {
if (entry.level !== 'error') return
await fetch(process.env.SLACK_WEBHOOK!, {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
text: `🚨 *${entry.message}*\n\`\`\`${JSON.stringify(entry.meta, null, 2)}\`\`\``,
}),
})
}
async flush(): Promise<void> {}
async close(): Promise<void> {}
}
const logger = new Logger({
transports: [new SlackTransport()],
})
// Any error log goes to Slack
logger.error('Database connection lost', { host: 'db.example.com' })
Conclusion
logixia 1.3.1 solves the problems that file-based logging ignores: you can't easily search log files, you can't correlate logs across services without injecting context manually, and synchronous file writes hurt throughput under load. With database transports, everything is queryable. With AsyncLocalStorage request tracing, every log line knows what request it belongs to. With field redaction, sensitive data stays out of your logs entirely. If you're on NestJS, the LogixiaLoggerModule gets all of this working in under 10 lines of config — npm install logixia @logixia/nestjs and you're done.