LDM College

Technical Lead

📅 Oct 2023 – July 2025 · 📍 On-site


PostgreSQL · Python · Scrapy · Playwright · Node.js
Exam submission latency: <1 second
Page load (heavy queries): Instant (indexed)
Manual data collection: Automated

What I Did Here

The initial implementation of the LDM Academic ERP was functionally complete but suffered from critical architectural flaws that prevented production readiness. The primary challenge was a fractured data model: a dual-fee system in which `StudentFee` and `FeeStructure` operated independently, threatening financial integrity.

The obvious fix was to patch the routes, but without strict data validation and edge-level security the system remained vulnerable to NoSQL regex injection and unauthenticated bypasses. I architected a zero-trust environment using Next.js Edge middleware to enforce Role-Based Access Control (RBAC) before requests reached the handlers, and replaced insecure `Math.random()` calls with `crypto.randomInt` for OTP generation.

On the frontend, I discarded scattered spinners in favor of a 'foundation-first' UI architecture, deploying unified `SkeletonLoader`, `EmptyState`, and `ErrorState` components to improve perceived performance and accessibility. The overhaul transformed a fragile MVP into a secure, production-grade enterprise platform, eliminating data orphans and cutting unnecessary database queries through strict edge caching.
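The actual OTP fix used Node's `crypto.randomInt`; the principle, drawing from the OS-level CSPRNG instead of a seeded, predictable PRNG, looks like this in Python's standard library (a minimal illustration of the idea, not the production code):

```python
import secrets

def generate_otp(digits: int = 6) -> str:
    """Generate a one-time password from a cryptographically secure source.

    secrets.randbelow() reads from the OS CSPRNG. By contrast,
    random.random() is a Mersenne Twister whose output can be
    reconstructed from observed values, making it unsafe for auth codes.
    """
    return str(secrets.randbelow(10 ** digits)).zfill(digits)
```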

What I Was Accountable For

01

Designed cost-efficient architecture using serverless components (AWS Lambda) to minimize idle compute usage

02

Owned end-to-end development and operation of ERP system serving 500+ daily users, ensuring production stability over ~2 years

03

Optimized infrastructure costs by leveraging AWS services (Lambda, scheduled workloads, resource optimization), reducing unnecessary compute usage

04

Reduced latency 8s → <1s by redesigning backend workflows using asynchronous processing and batching

05

Improved database performance through indexing, query optimization, and efficient data access patterns

06

Automated data ingestion pipelines, eliminating 6+ hours/week of manual effort

Key Wins

Designed cost-efficient architecture using serverless components (AWS Lambda) to minimize idle compute usage
Owned end-to-end development and operation of ERP system serving 500+ daily users, ensuring production stability over ~2 years
Optimized infrastructure costs by leveraging AWS services (Lambda, scheduled workloads, resource optimization), reducing unnecessary compute usage

How It Was Built

01

Async Batched Exam Submissions

The critical bottleneck was the original submission flow: every time a student clicked submit, the application opened a synchronous database connection, executed the write, and held the connection open until the database confirmed the transaction. Under the pressure of 200+ concurrent submissions, the connection pool was exhausted almost instantly, causing the entire application to queue and latency to spike to 8 seconds.

The solution was to fully decouple the HTTP response from the database write. I implemented an asynchronous queue: when a student submits, the application immediately drops the payload into an in-memory queue (backed by Redis for persistence) and returns a 'success' response to the frontend in under 50 milliseconds. A separate background worker continuously pulls from this queue and executes bulk inserts into the PostgreSQL database in controlled batches of 50, explicitly protecting the database from being overwhelmed.

The edge case I had to handle was ensuring that no student data was lost if the background worker crashed mid-batch; this was mitigated by implementing strict message acknowledgment within the queue.

submissions/handler.py

```python
import asyncio

async def submit_exam(submission: Submission) -> Response:
    # Acknowledge immediately — don't hold the user waiting
    await queue.enqueue(submission)
    return Response(status='accepted', id=submission.id)

async def batch_worker():
    while True:
        batch = await queue.dequeue_batch(size=50)
        if batch:
            await db.bulk_insert('submissions', batch)
        await asyncio.sleep(0.1)
```
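The acknowledgment protocol that guards against a worker crash mid-batch can be sketched as follows. In production this sat on Redis; the in-memory class below (names and structure are illustrative, not the production code) shows the pattern: dequeued items stay in an in-flight set until the database write is confirmed, and anything unacknowledged is re-queued on restart.

```python
from collections import deque

class AckQueue:
    """Queue with explicit acknowledgment.

    Items dequeued for processing move to an in-flight map and are only
    discarded once the worker acks them after a successful bulk insert.
    If the worker crashes mid-batch, recover() re-queues the unacked items,
    so no submission is silently lost.
    """

    def __init__(self):
        self._pending = deque()
        self._in_flight = {}

    def enqueue(self, item_id, payload):
        self._pending.append((item_id, payload))

    def dequeue_batch(self, size):
        batch = []
        while self._pending and len(batch) < size:
            item_id, payload = self._pending.popleft()
            self._in_flight[item_id] = payload  # not yet durable downstream
            batch.append((item_id, payload))
        return batch

    def ack(self, item_id):
        self._in_flight.pop(item_id, None)  # write confirmed; safe to drop

    def recover(self):
        # On worker restart: re-queue anything dequeued but never acked,
        # i.e. the batch that was in progress when the crash happened.
        for item_id, payload in self._in_flight.items():
            self._pending.append((item_id, payload))
        self._in_flight.clear()
```

With Redis, the same shape is usually built from an atomic list-to-list move (pending to processing) plus a delete on ack, so the broker, not the worker's memory, holds the in-flight state.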

02

Query Optimization — Indexes + Materialized Views

The administration dashboards were functionally broken because the attendance aggregation queries were scanning every row in the 100k+ row attendance table just to compute weekly totals. The problem was that the database was recalculating historical, immutable data on every page load. I started by running `EXPLAIN ANALYZE` on the slowest queries to pinpoint the exact bottlenecks.

First, I added highly targeted composite indexes on the columns that were frequently filtered together (e.g., `student_id`, `date`, `status`). For the heavy aggregations that ran frequently but didn't require up-to-the-second accuracy, I implemented materialized views: the database now pre-computes the attendance summaries on a schedule and stores the result. When an administrator loads the dashboard, the application queries the materialized view, a process that takes milliseconds, rather than executing a full table scan.

I handled the edge case of data staleness by configuring a lightweight background cron job to refresh the materialized view concurrently every hour, ensuring staff always had accurate, instantly available data.

migrations/optimize_attendance.sql

```sql
-- Before: full table scan on 100k+ rows
-- After: composite index on the actual filter pattern
CREATE INDEX CONCURRENTLY idx_attendance_student_date
  ON attendance(student_id, date, status);

-- Materialized view for aggregation queries
CREATE MATERIALIZED VIEW attendance_summary AS
  SELECT student_id,
         COUNT(*) FILTER (WHERE status = 'present') AS present,
         COUNT(*) FILTER (WHERE status = 'absent')  AS absent
  FROM attendance
  GROUP BY student_id;

-- Unique index required for REFRESH ... CONCURRENTLY
CREATE UNIQUE INDEX ON attendance_summary(student_id);
```
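The hourly refresh job reduces to a single statement. A minimal sketch, with the database call injected as a parameter so the logic is testable (the function name and wiring here are illustrative; the real job ran under cron):

```python
REFRESH_SQL = "REFRESH MATERIALIZED VIEW CONCURRENTLY attendance_summary;"

def refresh_summary(execute) -> None:
    """Refresh the pre-computed attendance summary.

    CONCURRENTLY lets dashboard reads continue while the view rebuilds,
    but it requires a unique index on the view (created in the migration
    above). `execute` is any callable that runs a SQL string, e.g. a
    psycopg cursor's execute method.
    """
    execute(REFRESH_SQL)
```

Scheduling this hourly keeps worst-case staleness bounded at one hour while dashboards stay read-available throughout the refresh.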

What Changed

The system achieved 100% production readiness, securing all 101+ API routes against unauthorized access, eliminating financial discrepancies caused by the dual-fee system, and vastly improving user experience across 8 core modules.

Exam submission latency: 8 seconds → <1 second (8× faster)

An 8-second delay on a high-stakes exam submission causes immense anxiety, leading students to double-click and create duplicate records in the database. Cutting this latency to under 1 second eliminated the duplicate-data issues and vastly improved the user experience during the most stressful academic weeks.

Page load (heavy queries): slow (full table scan) → instant (composite indexes)

When an administrative dashboard takes 30 seconds to load, staff simply stop using the software and revert to their manual spreadsheets. By implementing composite indexes and materialized views, the data became instantly accessible, driving 100% adoption of the new digital platform by the administration.

Manual data collection: 6+ hours/week → eliminated (automated)

The administration team was losing an entire workday every week just transcribing data manually between external portals. Automating this collection process not only eliminated human transcription errors but gave the staff back 6+ hours a week to focus on actual student support.

"Shipped a system that works. 500+ users, production-stable, still running. Not a demo — a real college using it every day."