📅 Scheduled Bulk Operations (Bulkops v3) - Complete Guide
🎯 Overview
Scheduled Bulk Operations is the next-generation bulk operations system that allows administrators to upload, validate, and schedule bulk operations for future execution. This powerful feature combines the reliability of atomic operations with flexible scheduling capabilities, making it perfect for off-peak processing, time-zone coordination, and automated workflows.
🌟 What’s New in v3
- ⏰ Flexible Scheduling: Schedule operations for immediate or future execution
- 📊 Queue Management: Visual queue with real-time status tracking
- 🔄 Atomic Processing: All-or-nothing operations with automatic rollback on failure
- 📧 Smart Notifications: Email alerts for completion, failure, and status changes
- 📱 Mobile-Friendly: Responsive interface for on-the-go management
- 🌍 Multi-Timezone Support: Schedule operations across different time zones
🎭 Audience-Specific Quick Links
👥 For Customers & End Users
- Getting Started: Customer Quick Start Guide
- How to Schedule: Scheduling Operations
- Managing Queue: Queue Management
- Troubleshooting: Common Issues & Solutions
🎧 For Customer Success Teams
- Support Guide: CS Support Workflows
- Common Issues: CS Troubleshooting Playbook
- Customer Training: Training Resources
📈 For Product Teams
- Feature Overview: Product Feature Analysis
- User Workflows: User Journey Mapping
- Business Value: Business Impact & Metrics
🛠️ For Engineering Teams
- Architecture: Technical Architecture
- API Reference: API Documentation
- Implementation: Technical Implementation
👥 Customer Guide {#customer-guide}
🚀 Quick Start: Your First Scheduled Operation
What Are Scheduled Bulk Operations?
Scheduled Bulk Operations allow you to upload your data files and choose when they get processed - either immediately or at a specific future date and time. This is perfect for:
- Off-peak Processing: Schedule uploads during low-traffic hours (2-4 AM recommended)
- Coordination: Synchronize operations across different time zones
- Planning: Prepare operations in advance for specific deployment times
- Resource Management: Distribute processing load across different time periods
Step-by-Step Guide
1. Access Scheduled Operations
Navigation: Admin Portal → Manage → Bulk Operations → "Add New", "Update Existing", or "Operation History"
Alternative: Admin Portal → Manage → Schedules → "Bulk Schedule"
2. Choose Your Operation Type
- 📤 Add New: Create new records (users, sites, departments, etc.)
- ✏️ Update Existing: Modify existing data records with filtering options
- 📅 Bulk Schedule: Create recurring schedules for multiple sites and questionnaires
3. Upload Your File
- Supported Formats: Excel (.xlsx, .xls) and CSV files
- File Size Limit: Maximum 10MB per file
- Record Limit: Up to 5,000 records per operation
- Template: Always use the latest template from the “Download Template” button
4. Choose Execution Time
Option A: Process Immediately
✅ Select "Process Now" (available in Add New and Update Existing)
→ File validates and processes immediately
→ Get results within minutes
Option B: Schedule for Later
📅 Select "Schedule for Later" (available in Add New and Update Existing)
→ Choose date and time (up to 30 days ahead)
→ Minimum: 5 minutes from the current time (see the validation sketch after these options)
→ Recommended: Off-peak hours (2-4 AM local time)
Option C: Create Recurring Schedules
🔄 Use "Bulk Schedule" (separate menu item)
→ Create complex recurring schedules
→ Multi-step wizard for questionnaire assignments
→ Advanced scheduling patterns and site assignments
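Whichever option you pick, a scheduled time must fall inside the window above: at least 5 minutes out, at most 30 days ahead. A minimal client-side pre-check sketch, assuming the documented defaults and no org-specific overrides:

```ts
// Hypothetical pre-check of a proposed schedule time against the documented limits.
const MIN_MINUTES_AHEAD = 5;
const MAX_DAYS_AHEAD = 30;

function validateScheduleTime(scheduledAt: Date, now: Date = new Date()): string | null {
  const minTime = new Date(now.getTime() + MIN_MINUTES_AHEAD * 60 * 1000);
  const maxTime = new Date(now.getTime() + MAX_DAYS_AHEAD * 24 * 60 * 60 * 1000);
  if (scheduledAt < minTime) return `Scheduled time must be at least ${MIN_MINUTES_AHEAD} minutes from now.`;
  if (scheduledAt > maxTime) return `Scheduled time cannot be more than ${MAX_DAYS_AHEAD} days ahead.`;
  return null; // valid
}
```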
5. Monitor Progress
Queue Dashboard: Track your scheduled operations
- 🟡 Queued: Waiting for scheduled time
- 🔄 Processing: Currently being processed
- ✅ Completed: Successfully finished
- ❌ Failed: Encountered errors
- ⏹️ Cancelled: Stopped by user
📊 Queue Management {#queue-management}
Understanding the Queue Interface
Queue Capacity
- Limit: Maximum 5 active operations per organization
- “Active” includes: Queued + Processing operations
- Once completed/failed: No longer counts against limit
Queue Operations
Viewing Your Queue
Navigation: Admin Portal → Bulk Operations → "Operation History"
Filter by: Status, Date Range, Operation Type
Managing Scheduled Operations
- 📝 Reschedule: Change the execution time (only for “Queued” operations)
- 🗑️ Cancel: Remove from queue (only for “Queued” operations)
- 📥 Download File: Re-download your uploaded file
- 👁️ View Details: Check operation parameters and status
Operation Timeline
Each operation shows a detailed timeline:
- 📤 Uploaded - File received and validated
- ⏳ Queued - Waiting for scheduled execution time
- 🔄 Processing - Currently being processed
- ✅ Completed / ❌ Failed - Final status with details
🔧 Advanced Features {#advanced-features}
Smart Scheduling Recommendations
The system provides intelligent scheduling suggestions:
Off-Peak Hours (Recommended)
- Best Times: 2:00 AM - 4:00 AM local time
- Benefits: Faster processing, reduced system load
- Automatic Suggestion: System highlights optimal time slots
Peak Hours (Caution)
- Times: 9:00 AM - 5:00 PM local time
- Impact: May experience slower processing
- Warning: System alerts about potential delays
File Optimization Tips
File Size Warnings
- Small (< 1MB): ✅ Processes quickly
- Medium (1-5MB): ⚠️ May take longer during peak hours
- Large (5-10MB): ⚠️ Strongly recommend off-peak scheduling
Performance Optimization
- Break Large Files: Split 5,000+ records into multiple smaller files (see the sketch after these tips)
- Remove Empty Rows: Clean up your data before upload
- Use Templates: Always use the latest template for best compatibility
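For the "Break Large Files" tip, here is a minimal sketch that chunks a record array into upload-sized pieces; the 5,000 cap matches the record limit in step 3, and writing each chunk back out to Excel/CSV is left to your own tooling:

```ts
// Hypothetical splitter: keep each output file within the 5,000-record limit.
const MAX_RECORDS_PER_OPERATION = 5000;

function chunkRecords<T>(records: T[], size: number = MAX_RECORDS_PER_OPERATION): T[][] {
  const chunks: T[][] = [];
  for (let i = 0; i < records.length; i += size) {
    chunks.push(records.slice(i, i + size));
  }
  return chunks;
}

// Example: 12,000 rows yield three uploads of 5,000, 5,000, and 2,000 records.
```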
🚨 Customer Troubleshooting {#customer-troubleshooting}
Common Issues & Quick Solutions
❌ “Queue limit reached”
Problem: You already have 5 active operations
Solution:
- Check Operation History for active operations
- Cancel unnecessary queued operations
- Wait for current operations to complete
❌ “Scheduled time must be in the future”
Problem: Selected time is too soon or in the past
Solution:
- Select a time at least 5 minutes from now
- Check your timezone settings
- Use the time picker for accuracy
❌ “File validation failed”
Problem: Your file contains errors
Solution:
- Download the latest template
- Check the validation error details
- Fix the specific rows and columns mentioned
- Re-upload the corrected file
🔄 Operation stuck in “Processing”
Problem: Operation running longer than expected
Solution:
- Check if it’s been less than 1 hour (normal for large files)
- Contact support if stuck for >2 hours
- Provide the Operation ID for faster assistance
📧 Not receiving email notifications
Problem: Missing completion/failure emails
Solution:
- Check spam/junk folder
- Verify email address in your profile
- Contact admin to update notification settings
🎧 CS Support Guide {#cs-support-guide}
🛠️ Support Workflow for Scheduled Operations
Initial Support Triage
Customer Contact Scenarios
- Operation Status Questions: “My upload is stuck”
- Scheduling Issues: “Can’t schedule for tomorrow”
- File Problems: “Getting validation errors”
- Queue Management: “Hit the queue limit”
- Notification Issues: “Not getting emails”
Required Information Collection
Always gather:
- Organization ID: From customer’s admin panel
- Operation ID: From Operation History (format: upload-xxx)
- File Name: Original uploaded file name
- Scheduled Time: When they tried to schedule
- Error Messages: Exact error text and screenshots
- Browser Info: Chrome/Safari/Edge version
Quick Diagnostic Commands
Check Operation Status
Query: db.scheduled_bulk_uploads.findOne({_id: "upload-123"})
Look for:
- status: current operation state
- error: failure reason
- validationResults: file validation details
- taskName: Cloud Task identifier
Check Queue Capacity
Query: db.scheduled_bulk_uploads.find({
organizationID: "org-123",
status: {$in: ["queued", "processing"]}
})
Count: Should be ≤ 5 for active operations
Check Organization Limits
Environment Variables:
- SCHEDULED_UPLOAD_QUEUE_LIMIT (default: 5)
- SCHEDULED_UPLOAD_MAX_SCHEDULE_DAYS (default: 30)
- SCHEDULED_UPLOAD_CLEANUP_DAYS (default: 7)
🔍 CS Troubleshooting Playbook {#cs-troubleshooting}
Scenario 1: “Operation Stuck in Processing”
Symptoms
- Status: “processing” for >2 hours
- Customer reports no completion email
- No recent status updates
Diagnosis Steps
1. Check Processing Time:
   // If processedAt is more than 2 hours ago, the operation is likely stuck
   const upload = db.scheduled_bulk_uploads.findOne({_id: "upload-123"});
   const hoursSince = (Date.now() - new Date(upload.processedAt)) / (1000 * 60 * 60);
2. Check Cloud Task Status:
   - Look up the taskName in Google Cloud Console
   - Check execution logs for errors
   - Verify task completion status
3. Check Bulk Operation Result:
   // Check if the bulk operation completed but the status wasn't updated
   db.bulk_operations.findOne({_id: upload.bulkOperationID})
Resolution Actions
1. If Task Failed: Manual status update needed
   db.scheduled_bulk_uploads.updateOne(
     {_id: "upload-123"},
     {$set: {status: "failed", error: "Task execution failed", completedAt: new Date()}}
   )
2. If Task Completed: Update to reflect completion
   db.scheduled_bulk_uploads.updateOne(
     {_id: "upload-123"},
     {$set: {status: "completed", completedAt: new Date()}}
   )
3. Send Manual Notification: Trigger an email notification to the user (see the sketch below)
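For step 3, a minimal sketch of a manual notification via SendGrid's Node.js client; the subject line and body copy are placeholders, and the production service presumably routes this through the notification module rather than calling SendGrid directly:

```ts
import sgMail from '@sendgrid/mail';

// Hypothetical manual re-send of a status notification.
// SENDGRID_API_TOKEN and NOTIFICATION_EMAIL_FROM match the env vars
// documented in the Technical Implementation section.
sgMail.setApiKey(process.env.SENDGRID_API_TOKEN!);

async function sendManualStatusEmail(to: string, uploadId: string, status: string): Promise<void> {
  await sgMail.send({
    to,
    from: process.env.NOTIFICATION_EMAIL_FROM!,
    subject: `Update on your scheduled bulk operation (${uploadId})`,
    text: `Your scheduled bulk operation ${uploadId} is now "${status}".`,
  });
}
```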
Scenario 2: “Queue Limit Reached Error”
Symptoms
- Error: “Queue limit reached for organization”
- Customer can’t schedule new operations
- Dashboard shows <5 operations
Diagnosis Steps
1. Count Active Operations:
   db.scheduled_bulk_uploads.find({
     organizationID: "org-123",
     status: {$in: ["queued", "processing"]}
   }).count()
2. Check for Stuck Operations:
   - Look for operations in "processing" for >2 hours
   - Identify operations that should be completed/failed
Resolution Actions
- Clean Up Stuck Operations: Update the status of stuck operations (see the sketch after this list)
- Manual Override: Temporarily increase queue limit if needed
- Process Cleanup: Run the automatic cleanup process manually
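A minimal cleanup sketch for the first action, assuming "stuck" means a processing status older than the two-hour threshold used in diagnosis; confirm the underlying Cloud Task really failed before running it:

```js
// Mark operations stuck in "processing" for over 2 hours as failed.
const cutoff = new Date(Date.now() - 2 * 60 * 60 * 1000);
db.scheduled_bulk_uploads.updateMany(
  { organizationID: "org-123", status: "processing", processedAt: { $lt: cutoff } },
  { $set: { status: "failed", error: "Marked failed by support: task appears stuck", completedAt: new Date() } }
);
```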
Scenario 3: “Validation Errors Not Clear”
Symptoms
- Customer receives “validation failed” but can’t understand errors
- Error messages are technical
- File format appears correct
Diagnosis Steps
1. Check Validation Results:
   const upload = db.scheduled_bulk_uploads.findOne({_id: "upload-123"});
   console.log(upload.validationResults.errors);
2. Download Original File: Verify the file format and content
3. Compare with Template: Check against the latest template
Resolution Actions
- Translate Errors: Convert technical errors into user-friendly language (a mapping sketch follows this list)
- Provide Examples: Give specific fixes for each error type
- Template Update: If template is outdated, provide latest version
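A minimal translation sketch for the first action; the error codes and wording below are illustrative placeholders, not the actual validation error catalogue:

```ts
// Hypothetical mapping from technical validation errors to customer-friendly text.
const ERROR_TRANSLATIONS: Record<string, string> = {
  DUPLICATE_EMAIL: 'This email address already exists. Remove the duplicate row or use a different address.',
  MISSING_REQUIRED_FIELD: 'A required column is empty. Fill in every highlighted cell before re-uploading.',
  INVALID_DATE_FORMAT: 'A date is not in the expected format. Use the format shown in the template.',
};

function translateError(code: string, row?: number): string {
  const friendly = ERROR_TRANSLATIONS[code] ?? `Unrecognized error (${code}); escalate to engineering.`;
  return row !== undefined ? `Row ${row}: ${friendly}` : friendly;
}
```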
📚 CS Training Materials {#cs-training-materials}
Key Concepts to Master
1. Operation Lifecycle
Upload → Validation → Queuing → Scheduling → Processing → Completion
2. Status States
- Queued: Waiting for scheduled time ⏳
- Processing: Currently executing 🔄
- Completed: Successfully finished ✅
- Failed: Error occurred ❌
- Cancelled: User cancelled ⏹️
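One way to internalize these states is as a transition map; the sketch below is a training aid inferred from the lifecycle above, not the service's actual state machine:

```ts
type UploadStatus = 'queued' | 'processing' | 'completed' | 'failed' | 'cancelled';

// Allowed transitions, inferred from the lifecycle: queued operations can start
// processing or be cancelled; processing ends in completed or failed.
const ALLOWED_TRANSITIONS: Record<UploadStatus, UploadStatus[]> = {
  queued: ['processing', 'cancelled'],
  processing: ['completed', 'failed'],
  completed: [],
  failed: [],
  cancelled: [],
};

function canTransition(from: UploadStatus, to: UploadStatus): boolean {
  return ALLOWED_TRANSITIONS[from].includes(to);
}
```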
3. Common Error Categories
- File Format Errors: Wrong file type, encoding issues
- Validation Errors: Data format, required fields, duplicates
- System Errors: Queue limits, scheduling conflicts
- Processing Errors: Atomic operation failures, rollbacks
Customer Communication Templates
Status Update Email
Subject: Update on Your Scheduled Bulk Operation
Hi [Customer Name],
I've looked into your scheduled bulk operation (ID: upload-123) and here's the current status:
**Current Status**: [Status]
**Scheduled For**: [Date/Time]
**Expected Completion**: [Estimate]
[Status-specific details and next steps]
Best regards,
[CS Agent Name]
Resolution Follow-up
Subject: Your Bulk Operation Issue Has Been Resolved
Hi [Customer Name],
Great news! Your bulk operation issue has been resolved. Here's what we fixed:
**Issue**: [Problem description]
**Resolution**: [What we did]
**Status**: [Current status]
**Next Steps**: [What customer should do]
The operation should complete within [time estimate]. You'll receive an email notification once it's done.
Best regards,
[CS Agent Name]
📈 Product Overview {#product-overview}
🎯 Feature Analysis & Business Value
Product Positioning
Scheduled Bulk Operations positions Nimbly as a sophisticated enterprise platform that understands operational complexities. This feature addresses the gap between basic bulk uploads and enterprise-grade data management needs.
Competitive Advantages
- Atomic Reliability: Unique all-or-nothing processing prevents partial failures
- Flexible Scheduling: Unlike competitors’ immediate-only processing
- Smart Queueing: Capacity management prevents system overload
- Multi-timezone Support: Global organizations can coordinate operations
- Comprehensive Monitoring: Real-time visibility into operation status
Target User Personas
1. Enterprise IT Administrators
- Pain Point: Need reliable, large-scale data operations
- Use Case: Off-peak processing, coordinated deployments
- Value: Reduced system load, predictable processing times
2. Multi-location Managers
- Pain Point: Coordinating data updates across time zones
- Use Case: Synchronized updates, scheduled maintenance
- Value: Operational efficiency, reduced manual coordination
3. Data Migration Specialists
- Pain Point: Large migrations need careful timing
- Use Case: Phased migrations, rollback safety
- Value: Risk reduction, professional reliability
📋 User Workflows {#product-workflows}
Primary User Journey: Scheduling a Bulk Operation
journey
  title Scheduled Bulk Operation User Journey
  section Preparation
    Prepare data file: 5: User
    Download template: 4: User
    Validate data locally: 3: User
  section Upload & Schedule
    Access bulk operations: 5: User
    Upload file: 5: User
    Choose schedule time: 5: User
    Confirm scheduling: 5: User
  section Monitoring
    Check queue status: 4: User
    Receive notifications: 5: User
    Review results: 5: User
  section Management
    Reschedule if needed: 4: User
    Cancel if needed: 3: User
    Download results: 5: User
Key Decision Points
1. Immediate vs Scheduled Processing
Decision Factors:
- File size (large files → schedule for off-peak)
- Current system load (peak hours → schedule later)
- Coordination needs (multi-team → schedule for coordination)
- Risk tolerance (critical operations → schedule for careful monitoring)
2. Optimal Scheduling Times
User Behavior Patterns:
- 60% schedule for off-peak hours (2-4 AM)
- 25% schedule within 24 hours
- 15% schedule for specific business events (month-end, quarter-end)
3. Queue Management Strategy
Usage Patterns:
- Average: 2-3 operations queued per organization
- Peak: 5 operations (hitting limit)
- Typical Completion: 85% complete successfully, 15% require attention
💼 Business Impact & Metrics {#product-value}
Success Metrics
User Adoption
- Target: 40% of bulk operations use scheduling within 6 months
- Current Baseline: 100% immediate processing
- Leading Indicators: Queue utilization rate, off-peak usage
System Performance
- Target: 50% reduction in peak-hour processing load
- Measurement: Server resource utilization during business hours
- Goal: Improved response times for all users
User Satisfaction
- Target: >90% successful operation completion rate
- Current: 85% (immediate processing baseline)
- Improvement: Atomic operations reduce partial failures
Business Value Propositions
For Customers
- Operational Efficiency: Schedule operations during optimal times
- Risk Reduction: Atomic operations prevent data corruption
- Resource Optimization: Better system performance through load distribution
- Professional Reliability: Enterprise-grade operation management
For Nimbly
- Competitive Differentiation: Unique scheduling + atomic operations combination
- System Scalability: Better resource utilization and load management
- Customer Retention: Reduced support burden from failed operations
- Enterprise Positioning: Professional-grade data management capabilities
ROI Analysis
Customer ROI
- Time Savings: 60% reduction in failed operation recovery time
- Resource Optimization: 40% better staff utilization through off-peak scheduling
- Risk Reduction: 75% fewer data consistency issues
Nimbly ROI
- Support Reduction: 50% fewer bulk operation support tickets
- System Efficiency: 30% better server resource utilization
- Customer Satisfaction: 25% improvement in bulk operation NPS scores
🛠️ Engineering Architecture {#engineering-architecture}
📐 System Architecture Overview
High-Level Architecture
graph TB
  subgraph "Frontend Layer"
    UI[React Admin Interface]
    Queue[Queue Management UI]
    Monitor[Real-time Monitoring]
  end
  subgraph "API Layer"
    Router[Queue Router]
    Controller[Scheduled Uploads Controller]
    Middleware[Auth & Validation]
  end
  subgraph "Business Logic Layer"
    Usecase[Scheduled Uploads Usecase]
    QueueService[Queue Management Service]
    AtomicService[Atomic Operations Service]
  end
  subgraph "Infrastructure Layer"
    CloudTasks[Google Cloud Tasks]
    MongoDB[(MongoDB)]
    FirebaseStorage[(Firebase Storage)]
    SendGrid[SendGrid Email]
  end
  subgraph "External Systems"
    BulkOps[Existing Bulk Operations]
    Notifications[Notification System]
  end
  UI --> Router
  Queue --> Router
  Monitor --> Router
  Router --> Controller
  Controller --> Middleware
  Middleware --> Usecase
  Usecase --> QueueService
  Usecase --> AtomicService
  QueueService --> MongoDB
  QueueService --> CloudTasks
  AtomicService --> BulkOps
  Usecase --> FirebaseStorage
  Usecase --> SendGrid
  CloudTasks --> Notifications
  style UI fill:#61dafb
  style MongoDB fill:#4DB33D
  style CloudTasks fill:#4285F4
  style FirebaseStorage fill:#FFA611
Technology Stack
Frontend (admin-lite)
- Framework: React 18 with TypeScript
- State Management: TanStack Query for server state
- UI Components: Custom component library with Tailwind CSS
- Form Handling: React Hook Form with Zod validation
- Internationalization: Lingui for multi-language support
Backend (api-bulk-operations)
- Runtime: Node.js with Express
- Language: TypeScript
- Architecture: Clean Architecture (Controller → Usecase → Repository)
- Queue System: Google Cloud Tasks
- File Storage: Firebase Storage
- Database: MongoDB with Mongoose ODM
- Email Service: SendGrid
Core Components
1. Scheduled Bulk Upload Model
interface ScheduledBulkUpload {
_id: string;
organizationID: string;
fileName: string;
fileSize: number;
filePath: string;
entityType: 'users' | 'sites' | 'departments' | 'skus' | 'schedules' | 'user-role-map';
status: 'queued' | 'processing' | 'completed' | 'failed' | 'cancelled';
scheduledAt: Date;
processedAt?: Date;
completedAt?: Date;
bulkOperationID?: string;
taskName?: string;
error?: string;
validationResults?: ValidationResults;
createdBy: UserInfo;
createdAt: Date;
updatedAt: Date;
}
2. Queue Management Service
export class QueueManagementService implements IQueueManagementService {
async checkQueueCapacity(organizationID: string): Promise<void> {
const activeCount = await this.scheduledBulkUploadRepository.countActiveUploads(organizationID);
if (activeCount >= QUEUE_CONFIG.MAX_ACTIVE_UPLOADS_PER_ORG) {
throw new Error(`Queue limit reached. Maximum ${QUEUE_CONFIG.MAX_ACTIVE_UPLOADS_PER_ORG} active operations allowed.`);
}
}
async cleanupOldUploads(organizationID: string): Promise<void> {
const cutoffDate = new Date();
cutoffDate.setDate(cutoffDate.getDate() - QUEUE_CONFIG.CLEANUP_DAYS);
await this.scheduledBulkUploadRepository.deleteOldCompleted(organizationID, cutoffDate);
}
}
3. Cloud Task Scheduler
export class CloudTaskSchedulerService implements ICloudTaskSchedulerService {
async scheduleUploadTask(
scheduledBulkUploadID: string,
organizationID: string,
scheduledAt: Date,
): Promise<string> {
const task = {
httpRequest: {
httpMethod: 'POST' as const,
url: `${CLOUD_TASK_SERVICE_URL}/api/bulk-operations/queue/process-scheduled`,
headers: { 'Content-Type': 'application/json' },
body: Buffer.from(JSON.stringify({
data: { scheduledBulkUploadID, organizationID }
})).toString('base64'),
},
scheduleTime: {
seconds: Math.floor(scheduledAt.getTime() / 1000),
},
};
const [response] = await this.cloudTasksClient.createTask({
parent: this.queuePath,
task,
});
return response.name!;
}
}
🔌 API Reference {#engineering-api}
Queue Management Endpoints
Create Scheduled Upload
POST /api/bulk-operations/queue
Content-Type: multipart/form-data
Authorization: Bearer <jwt-token>
Form Data:
- file: Excel/CSV file
- entityType: 'users' | 'sites' | 'departments' | 'skus' | 'schedules' | 'user-role-map'
- scheduledAt: ISO 8601 datetime string
- immediateExecution?: boolean (optional, for immediate processing)
Response:
{
"message": "SUCCESS",
"data": {
"id": "upload-123",
"fileName": "users.xlsx",
"fileSize": 1024,
"entityType": "users",
"status": "queued",
"scheduledAt": "2024-12-25T10:00:00Z",
"taskName": "projects/.../tasks/task-123",
"createdBy": {
"id": "user-123",
"name": "John Doe",
"email": "john@example.com"
}
}
}
Get Queue Items
GET /api/bulk-operations/queue?page=1&limit=10&status=queued,processing
Authorization: Bearer <jwt-token>
Response:
{
"message": "SUCCESS",
"data": {
"data": [
{
"id": "upload-123",
"fileName": "users.xlsx",
"entityType": "users",
"status": "queued",
"scheduledAt": "2024-12-25T10:00:00Z",
"createdBy": {...}
}
],
"total": 25,
"page": 1,
"limit": 10,
"totalPages": 3
}
}
Update Queue Item
PUT /api/bulk-operations/queue/:id
Content-Type: application/json
Authorization: Bearer <jwt-token>
{
"scheduledAt": "2024-12-26T10:00:00Z", // Reschedule
// OR
"status": "cancelled" // Cancel upload
}
Process Scheduled Upload (Cloud Task Endpoint)
POST /api/bulk-operations/queue/process-scheduled
Content-Type: application/json
{
"data": {
"scheduledBulkUploadID": "upload-123",
"organizationID": "org-123"
}
}
Status Codes & Error Handling
HTTP Status Codes
- 200 OK: Successful operation
- 400 Bad Request: Invalid request parameters
- 401 Unauthorized: Missing or invalid authentication
- 403 Forbidden: Insufficient permissions
- 404 Not Found: Resource not found
- 409 Conflict: Queue limit reached
- 422 Unprocessable Entity: Validation errors
- 500 Internal Server Error: Server error
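As an illustration, a client creating a scheduled upload might branch on the 409 queue-limit case; this fetch sketch assumes the create endpoint above and the error body format shown below, while token handling and the file source are placeholders:

```ts
// Hypothetical client call to the create endpoint, handling the 409 queue-limit case.
async function scheduleUpload(file: File, scheduledAt: Date, token: string): Promise<string> {
  const form = new FormData();
  form.append('file', file);
  form.append('entityType', 'users');
  form.append('scheduledAt', scheduledAt.toISOString());

  const res = await fetch('/api/bulk-operations/queue', {
    method: 'POST',
    headers: { Authorization: `Bearer ${token}` },
    body: form, // the browser sets the multipart boundary automatically
  });

  if (res.status === 409) {
    const { error } = await res.json();
    throw new Error(`Queue full: ${error.message}`); // e.g. code QUEUE_LIMIT_REACHED
  }
  if (!res.ok) throw new Error(`Upload failed with status ${res.status}`);

  const { data } = await res.json();
  return data.id; // e.g. "upload-123"
}
```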
Error Response Format
{
"message": "FAILED",
"error": {
"code": "QUEUE_LIMIT_REACHED",
"message": "Queue limit reached. Maximum 5 active operations allowed.",
"details": {
"currentActive": 5,
"limit": 5,
"organizationID": "org-123"
}
}
}
⚙️ Technical Implementation {#engineering-implementation}
Configuration Management
Environment Variables
# Google Cloud Configuration
GCP_PROJECT_ID=your-project-id
CLOUD_TASK_LOCATION=us-central1
CLOUD_TASK_QUEUE_NAME=bulk-upload-queue
CLOUD_TASK_SERVICE_URL=https://api.nimbly.io
CLOUD_TASK_HANDLER_ENDPOINT=/api/bulk-operations/queue/process-scheduled
# Queue Management
SCHEDULED_UPLOAD_MAX_QUEUE_SIZE=5 # Maximum active operations per organization
SCHEDULED_UPLOAD_CLEANUP_DAYS=7
SCHEDULED_UPLOAD_MAX_SCHEDULE_DAYS=30
SCHEDULED_UPLOAD_MIN_SCHEDULE_MINUTES=5
# File Handling
SCHEDULED_UPLOAD_FILE_SIZE_LIMIT=10485760 # 10MB
FIREBASE_STORAGE_BUCKET=your-firebase-bucket
# Notifications
SENDGRID_API_TOKEN=your-sendgrid-key
NOTIFICATION_EMAIL_FROM=noreply@nimbly.io
NOTIFICATION_EMAIL_REPLY_TO=support@nimbly.io
Queue Configuration
export const QUEUE_CONFIG = {
MAX_ACTIVE_UPLOADS_PER_ORG: parseInt(process.env.SCHEDULED_UPLOAD_MAX_QUEUE_SIZE || '5'),
MAX_FILE_SIZE_MB: 10,
MAX_SCHEDULE_DAYS_AHEAD: parseInt(process.env.SCHEDULED_UPLOAD_MAX_SCHEDULE_DAYS || '30'),
MIN_SCHEDULE_MINUTES_AHEAD: parseInt(process.env.SCHEDULED_UPLOAD_MIN_SCHEDULE_MINUTES || '5'),
CLEANUP_DAYS: parseInt(process.env.SCHEDULED_UPLOAD_CLEANUP_DAYS || '7'),
ALLOWED_FILE_EXTENSIONS: ['xlsx', 'xls', 'csv'],
ALLOWED_ENTITY_TYPES: ['users', 'sites', 'departments', 'skus', 'schedules', 'user-role-map'],
} as const;
Database Schema & Indexing
MongoDB Collection: scheduled_bulk_uploads
// Indexes for performance
db.scheduled_bulk_uploads.createIndex({ organizationID: 1, status: 1 });
db.scheduled_bulk_uploads.createIndex({ scheduledAt: 1 });
db.scheduled_bulk_uploads.createIndex({ createdAt: 1 });
db.scheduled_bulk_uploads.createIndex({ organizationID: 1, createdAt: -1 });
// Compound index for queue queries
db.scheduled_bulk_uploads.createIndex({
organizationID: 1,
status: 1,
scheduledAt: 1
});
Data Lifecycle Management
// Automatic cleanup of old completed uploads
async cleanupOldUploads(): Promise<void> {
const cutoffDate = new Date();
cutoffDate.setDate(cutoffDate.getDate() - QUEUE_CONFIG.CLEANUP_DAYS);
const result = await this.scheduledBulkUploadRepository.deleteMany({
status: { $in: ['completed', 'failed', 'cancelled'] },
completedAt: { $lt: cutoffDate }
});
log.info(`Cleaned up ${result.deletedCount} old scheduled uploads`);
}
Integration with Atomic Operations
Atomic Processing Workflow
async processScheduledUpload(scheduledUploadId: string): Promise<void> {
// 1. Update status to processing
await this.updateStatus(scheduledUploadId, 'processing');
try {
// 2. Download and parse file
const fileContent = await this.downloadAndParseFile(scheduledUploadId);
// 3. Create atomic bulk operation
const bulkOperationId = await this.atomicOperationsService.createBulkOperation(context);
// 4. Process with atomic guarantees
const result = await this.atomicOperationsService.processBulkUpload(
context,
bulkOperationId,
fileContent
);
// 5. Update based on result
if (result.stage === 'validation-failed') {
await this.updateStatus(scheduledUploadId, 'failed', {
error: 'Validation failed',
validationResults: this.transformValidationErrors(result.validationErrors)
});
} else if (result.stage === 'done') {
await this.updateStatus(scheduledUploadId, 'completed');
} else {
await this.updateStatus(scheduledUploadId, 'failed', {
error: `Processing failed at stage: ${result.stage}`
});
}
} catch (error) {
// 6. Handle processing errors
await this.updateStatus(scheduledUploadId, 'failed', {
error: error.message
});
}
}
Monitoring & Observability
Health Checks
// Queue health endpoint
app.get('/api/bulk-operations/queue/health', async (req, res) => {
const healthStatus = {
status: 'healthy',
timestamp: new Date().toISOString(),
checks: {
database: await this.checkDatabaseConnection(),
cloudTasks: await this.checkCloudTasksConnection(),
firebaseStorage: await this.checkFirebaseStorage(),
queueSize: await this.getActiveQueueSize(),
}
};
const isHealthy = Object.values(healthStatus.checks).every(check =>
typeof check === 'boolean' ? check : check.status === 'healthy'
);
res.status(isHealthy ? 200 : 503).json(healthStatus);
});
Metrics Collection
// Metrics for monitoring
export const METRICS = {
QUEUE_SIZE: 'scheduled_uploads_queue_size',
PROCESSING_TIME: 'scheduled_uploads_processing_time',
SUCCESS_RATE: 'scheduled_uploads_success_rate',
ERROR_RATE: 'scheduled_uploads_error_rate',
} as const;
// Usage
metrics.gauge(METRICS.QUEUE_SIZE, activeQueueSize, { organizationID });
metrics.histogram(METRICS.PROCESSING_TIME, processingTimeMs, { entityType });
metrics.counter(METRICS.SUCCESS_RATE, 1, { status: 'completed' });
Testing Strategy
Unit Testing
describe('QueueManagementService', () => {
describe('checkQueueCapacity', () => {
it('should throw error when queue limit is reached', async () => {
// Arrange
mockRepository.countActiveUploads.mockResolvedValue(5);
// Act & Assert
await expect(service.checkQueueCapacity('org-123'))
.rejects.toThrow('Queue limit reached');
});
it('should pass when under queue limit', async () => {
// Arrange
mockRepository.countActiveUploads.mockResolvedValue(3);
// Act & Assert
await expect(service.checkQueueCapacity('org-123'))
.resolves.not.toThrow();
});
});
});
Integration Testing
describe('Scheduled Uploads E2E', () => {
it('should complete full scheduled upload workflow', async () => {
// 1. Upload file and schedule
const uploadResponse = await request(app)
.post('/api/bulk-operations/queue')
.attach('file', 'test-users.xlsx')
.field('entityType', 'users')
.field('scheduledAt', futureDate.toISOString())
.expect(200);
const uploadId = uploadResponse.body.data.id;
// 2. Verify queued status
const queueStatus = await request(app)
.get(`/api/bulk-operations/queue/${uploadId}`)
.expect(200);
expect(queueStatus.body.data.status).toBe('queued');
// 3. Trigger processing
await request(app)
.post('/api/bulk-operations/queue/process-scheduled')
.send({ data: { scheduledBulkUploadID: uploadId, organizationID: 'test-org' } })
.expect(200);
// 4. Verify completion
// Note: In real tests, would wait for actual processing
const finalStatus = await request(app)
.get(`/api/bulk-operations/queue/${uploadId}`)
.expect(200);
expect(finalStatus.body.data.status).toBe('completed');
});
});
🔐 Security Considerations
Authentication & Authorization
- JWT Token Validation: All endpoints require valid authentication
- Organization Isolation: Users can only access their organization’s operations
- Role-Based Access: Admin/Super Admin/Account Holder permissions required
Input Validation
- File Type Validation: Only Excel and CSV files accepted (see the sketch after this list)
- File Size Limits: Maximum 10MB to prevent resource exhaustion
- Content Validation: Excel parsing with error handling for malicious files
- Schedule Validation: Prevent scheduling too far in future or past
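A minimal sketch of the extension and size checks above; the constants mirror the documented configuration, and the actual middleware wiring (e.g. multer) is omitted:

```ts
// Hypothetical upload guard enforcing the documented file rules.
const ALLOWED_FILE_EXTENSIONS = ['xlsx', 'xls', 'csv'];
const MAX_FILE_SIZE_BYTES = 10 * 1024 * 1024; // 10MB, per SCHEDULED_UPLOAD_FILE_SIZE_LIMIT

function assertValidUpload(fileName: string, fileSizeBytes: number): void {
  const extension = fileName.split('.').pop()?.toLowerCase() ?? '';
  if (!ALLOWED_FILE_EXTENSIONS.includes(extension)) {
    throw new Error(`Unsupported file type ".${extension}". Allowed: ${ALLOWED_FILE_EXTENSIONS.join(', ')}`);
  }
  if (fileSizeBytes > MAX_FILE_SIZE_BYTES) {
    throw new Error('File exceeds the 10MB limit.');
  }
}
```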
Data Security
- Encrypted Storage: Files stored in Firebase Storage with proper access controls
- Secure Transmission: HTTPS only for all API communications
- Audit Logging: All operations logged with user and timestamp information
- Data Cleanup: Automatic removal of old files and records
Infrastructure Security
- Service Account Authentication: Cloud Tasks use dedicated service accounts
- Network Security: API endpoints behind authentication and rate limiting
- Secret Management: Environment variables for sensitive configuration
- Database Security: MongoDB with authentication and encrypted connections
🚀 Performance Optimization
Frontend Performance
- React Query Caching: Intelligent caching of queue data with background updates
- Virtual Scrolling: Efficient rendering of large operation lists
- Debounced Search: Prevent excessive API calls during filtering (sketched after this list)
- Progressive Enhancement: Mobile-first design with responsive optimizations
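As a sketch of the debounced-search idea (illustrative, not the actual admin-lite hook):

```ts
import { useEffect, useState } from 'react';

// Hypothetical debounce hook for the queue search filter: the returned value
// only updates after the user stops typing for `delayMs`, so each keystroke
// no longer triggers an API call.
export function useDebouncedValue<T>(value: T, delayMs = 300): T {
  const [debounced, setDebounced] = useState(value);
  useEffect(() => {
    const timer = setTimeout(() => setDebounced(value), delayMs);
    return () => clearTimeout(timer);
  }, [value, delayMs]);
  return debounced;
}
```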
Backend Performance
- Database Indexing: Optimized queries for queue operations
- File Streaming: Large file processing without loading the file entirely into memory (see the sketch after this list)
- Async Processing: Cloud Tasks for non-blocking operation execution
- Connection Pooling: Efficient database connection management
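As a sketch of the file-streaming idea: read a CSV line by line with Node's readline so the whole file never sits in memory; the real parser and the Firebase Storage download path are out of scope here:

```ts
import { createReadStream } from 'node:fs';
import { createInterface } from 'node:readline';

// Hypothetical streaming reader: process CSV rows one at a time.
async function streamCsvRows(path: string, onRow: (cells: string[]) => void): Promise<number> {
  const reader = createInterface({ input: createReadStream(path), crlfDelay: Infinity });
  let count = 0;
  for await (const line of reader) {
    if (line.trim() === '') continue; // skip empty rows, per the optimization tips
    onRow(line.split(',')); // naive split; a real CSV parser handles quoting
    count++;
  }
  return count;
}
```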
System Scalability
- Horizontal Scaling: Stateless API design supports multiple instances
- Load Distribution: Queue scheduling spreads processing load over time
- Resource Management: Queue limits prevent system overload
- Caching Strategy: Redis caching for frequently accessed data
🔮 Future Enhancements
Short Term (Next 3 months)
- WhatsApp Notifications: Complete WhatsApp integration for mobile notifications
- Bulk Results Download: Provide downloadable results after processing completion
- Enhanced Validation: More detailed validation error reporting with suggestions
- Performance Dashboard: Real-time queue performance metrics for admins
Medium Term (3-6 months)
- Retry Mechanism: Automatic retry for transient failures with exponential backoff
- Progress Tracking: Real-time progress updates during file processing
- Template Management: Dynamic template generation based on current schema
- Advanced Scheduling: Recurring schedules with cron-like expressions
Long Term (6+ months)
- Multi-file Operations: Support for uploading and processing multiple files together
- Workflow Automation: Chain multiple operations with dependencies
- Analytics Dashboard: Insights into upload patterns, success rates, and optimization suggestions
- API Webhooks: External system integration via webhook notifications
- AI-Powered Validation: Intelligent data validation and error correction suggestions
🏁 Conclusion
Scheduled Bulk Operations (Bulkops v3) represents a significant evolution in Nimbly’s data management capabilities. By combining the reliability of atomic operations with flexible scheduling and comprehensive queue management, this feature provides enterprise-grade bulk operation capabilities that serve the needs of customers, support teams, product managers, and engineers alike.
The system’s architecture prioritizes reliability, scalability, and user experience while maintaining the simplicity that makes Nimbly accessible to organizations of all sizes. With comprehensive monitoring, intelligent error handling, and future-ready extensibility, Scheduled Bulk Operations positions Nimbly as a leader in enterprise data management solutions.
Whether you’re a customer scheduling your first bulk upload, a CS agent helping resolve an issue, a product manager analyzing user workflows, or an engineer implementing new features, this documentation provides the comprehensive guidance needed to successfully work with Scheduled Bulk Operations.
📞 Support & Resources
For Customers
- Help Center: help.nimbly.io/bulk-operations
- Live Chat: Available 24/7 in the admin portal
- Email Support: support@nimbly.io
For Internal Teams
- CS Playbook: Internal troubleshooting and support procedures
- API Documentation: Complete technical reference for developers
- Monitoring Dashboard: Real-time system health and performance metrics
Emergency Contacts
- Critical Issues: Escalate to the engineering team via the bulk-operations-alerts Slack channel
- System Outages: Follow the incident response procedures in the incidents channel
- Customer Escalations: Follow CS escalation matrix in internal documentation