My Projects
Day 4: Building an MTA Transit Tracker with Weather Alerts
For day four of our coding challenge, I built a personalized transit tracker for NYC's public transportation system. This app allows users to save their frequently used bus and subway stops and provides real-time departure information along with weather alerts.
Key Features
1. Personalized Transit Stations
The app allows users to save and manage their frequent transit stops:
- Add bus stops and subway stations with custom names
- Edit or delete saved stations with intuitive controls
- Store coordinates for each stop (manually or via geolocation)
- Organize stops by proximity to current location
- Persistent storage in a dedicated SQLite database
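A saved-stations table along these lines would support the features above (this is my own sketch of a plausible layout, not the app's actual schema; table and column names are assumptions):

```sql
-- Hypothetical schema for the dedicated transit database.
CREATE TABLE IF NOT EXISTS saved_stations (
    id          INTEGER PRIMARY KEY AUTOINCREMENT,
    user_id     INTEGER NOT NULL,     -- references users(id) in the auth DB
    name        TEXT NOT NULL,        -- custom display name
    kind        TEXT NOT NULL CHECK (kind IN ('bus', 'subway')),
    stop_id     TEXT NOT NULL,        -- e.g. an MTA bus stop code
    latitude    REAL,                 -- optional coordinates used for
    longitude   REAL,                 -- proximity sorting
    created_at  TEXT NOT NULL DEFAULT (datetime('now'))
);
```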
2. Real-Time Transit Data
Using the MTA's API, the app provides up-to-date transit information:
- Real-time bus arrival predictions with MTA Bus Time API
- Properly formatted stop IDs with the required "MTA_" prefix
- Minutes-away estimates and destination information
- Manual and automatic refreshing of departure times
- Support framework for subway GTFS Realtime data
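The "MTA_" prefix handling can be sketched as a small helper plus a fetch wrapper. The Bus Time SIRI stop-monitoring endpoint and its `key`/`MonitoringRef` parameters reflect MTA's published API, but the function names and response handling here are my own sketch, not the app's actual code:

```typescript
// Bus Time expects stop IDs prefixed with "MTA_"; don't double-prefix.
export function formatStopId(stopCode: string): string {
  return stopCode.startsWith("MTA_") ? stopCode : `MTA_${stopCode}`;
}

// Hypothetical server-side wrapper around the Bus Time stop-monitoring feed.
export async function fetchArrivals(stopCode: string, apiKey: string): Promise<unknown> {
  const url = new URL("https://bustime.mta.info/api/siri/stop-monitoring.json");
  url.searchParams.set("key", apiKey);
  url.searchParams.set("MonitoringRef", formatStopId(stopCode));
  const res = await fetch(url);
  if (!res.ok) throw new Error(`Bus Time request failed: ${res.status}`);
  return res.json();
}
```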
3. Location-Based Features
The app uses geolocation to enhance the experience:
- Optional location tracking to sort stops by proximity
- Visual highlighting of the closest transit stop
- Distance indicators for each saved station
- Privacy-focused with opt-in location sharing
- Appropriate error handling for location permission denials
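Proximity sorting comes down to a great-circle distance calculation over the saved coordinates. A minimal sketch (names and the exact formula choice are mine; the app could equally use a simpler equirectangular approximation at city scale):

```typescript
interface Stop { name: string; lat: number; lon: number; }

// Great-circle distance in kilometers (haversine formula).
export function distanceKm(lat1: number, lon1: number, lat2: number, lon2: number): number {
  const toRad = (d: number) => (d * Math.PI) / 180;
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 6371 * 2 * Math.asin(Math.sqrt(a)); // Earth radius ≈ 6371 km
}

// Return stops sorted nearest-first relative to the user's position;
// the first element is the one to highlight as "closest".
export function sortByProximity(stops: Stop[], lat: number, lon: number): Stop[] {
  return [...stops].sort(
    (a, b) => distanceKm(lat, lon, a.lat, a.lon) - distanceKm(lat, lon, b.lat, b.lon)
  );
}
```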
4. Weather Integration
To help users prepare for their commute, the app includes weather data:
- Current temperature and conditions from OpenWeatherMap
- Precipitation alerts for rain or snow in the next 12 hours
- "Bring an umbrella" warnings when rain is in the forecast
- Temperature range for daily planning
- Graceful fallbacks with mock data when API keys aren't available
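The umbrella warning reduces to scanning the forecast window for precipitation. The entry shape below loosely mirrors OpenWeatherMap's condition strings ("Rain", "Snow", etc.), but the field names and helper are my assumption of what the app extracts:

```typescript
interface ForecastEntry {
  timestamp: number;                                  // ms since epoch
  condition: "Rain" | "Snow" | "Clear" | "Clouds";    // OpenWeatherMap-style label
}

// True if rain or snow appears within the next 12 hours.
export function needsUmbrella(entries: ForecastEntry[], now: number): boolean {
  const horizon = now + 12 * 60 * 60 * 1000;
  return entries.some(
    (e) =>
      e.timestamp >= now &&
      e.timestamp <= horizon &&
      (e.condition === "Rain" || e.condition === "Snow")
  );
}
```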
Technical Implementation
API Integration and Security
The app connects to multiple external APIs with security in mind:
- Server-side API wrappers to protect credentials
- Environment variables for all API keys
- Proper error handling and response validation
- Mock data generation for testing and demos
- Detailed documentation for future subway API integration
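The mock-data fallback pattern can be sketched as a guard around the live fetch. This is a simplified synchronous version (the real server wrapper would be async and read the key from an environment variable); all names here are mine:

```typescript
type Weather = { tempF: number; condition: string; mock: boolean };

// Deterministic stand-in served when no API key is configured or the upstream fails.
const MOCK: Weather = { tempF: 68, condition: "Clouds", mock: true };

export function getWeatherSafely(
  apiKey: string | undefined,
  fetchLive: (key: string) => Weather
): Weather {
  if (!apiKey) return MOCK;           // local dev / demos without credentials
  try {
    return fetchLive(apiKey);
  } catch {
    return MOCK;                      // contain upstream failures instead of surfacing them
  }
}
```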
Authentication and Data Architecture
The app leverages and extends the existing authentication system:
- Protected routes requiring user login
- Hybrid database approach: auth DB + dedicated transit DB
- User-specific saved stations with proper relationships
- Clean separation of concerns for maintainability
- Transaction-safe database operations
Reactive UI with SvelteKit
The user interface leverages Svelte's reactivity for a smooth experience:
- Real-time updates without page reloads
- Toast notifications for user feedback
- Form validation and confirmation dialogs
- Dynamic station cards with transit-specific styling
- Subway-specific UI enhancements (express indicators, direction arrows)
GTFS Realtime Documentation
While the current version uses mock data for subways, I've provided detailed documentation for implementing the MTA GTFS Realtime API:
- Complete guide to Protocol Buffer implementation
- Feed URL reference for all subway lines
- Station ID mapping strategies
- Code examples for parsing complex GTFS data structures
- Testing and troubleshooting recommendations
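The feed URL reference boils down to a line-to-feed mapping, since the MTA groups subway lines into a handful of GTFS Realtime feeds. The base URL and suffixes below match the MTA's published endpoints as I understand them, but you should verify them against the current MTA developer documentation before relying on them; parsing the binary payload would additionally need a Protocol Buffer library such as gtfs-realtime-bindings:

```typescript
const FEED_BASE = "https://api-endpoint.mta.info/Dataservice/mtagtfsfeeds/nyct%2Fgtfs";

// Lines 1-7 and the shuttle share the base feed; lettered lines are grouped.
const FEED_SUFFIX: Record<string, string> = {
  "1": "", "2": "", "3": "", "4": "", "5": "", "6": "", "7": "", S: "",
  A: "-ace", C: "-ace", E: "-ace",
  B: "-bdfm", D: "-bdfm", F: "-bdfm", M: "-bdfm",
  G: "-g",
  J: "-jz", Z: "-jz",
  N: "-nqrw", Q: "-nqrw", R: "-nqrw", W: "-nqrw",
  L: "-l",
  SI: "-si",
};

export function feedUrlForLine(line: string): string | undefined {
  const suffix = FEED_SUFFIX[line.toUpperCase()];
  return suffix === undefined ? undefined : FEED_BASE + suffix;
}
```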
This project combines real-time data from multiple sources to create a practical tool for daily use. The MTA Transit Tracker demonstrates integration with external APIs, proper authentication handling, and responsive UI design, while providing actual utility for NYC commuters.
Day 3: Building the BIA Search Feature with Migration Utilities
For day three of our coding challenge, I developed a comprehensive search system for Board of Immigration Appeals (BIA) decisions. This system includes both a robust search interface and powerful data migration utilities, enabling legal researchers to efficiently find and analyze immigration precedent decisions.
Key Components
1. Full-Text Search with MeiliSearch
The search interface provides powerful text search capabilities:
- Typo-tolerant search that handles misspellings
- Highlighting of search terms in results
- Faceted filtering by year, court, and tags
- Contextual snippets showing matches in full text
- Server-side rendering for SEO and initial load performance
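The highlighting, cropping, and faceted filtering above map onto the options object passed to the MeiliSearch JS client's `index.search(query, options)`. The option keys (`filter`, `attributesToHighlight`, `attributesToCrop`, `cropLength`, `limit`, `offset`) are part of the client API; the index field names are my assumptions about this app:

```typescript
interface CaseFilters { year?: number; court?: string; tags?: string[]; }

export function buildSearchOptions(filters: CaseFilters, page: number, perPage = 20) {
  const clauses: string[] = [];
  if (filters.year) clauses.push(`year = ${filters.year}`);
  if (filters.court) clauses.push(`court = "${filters.court}"`);
  for (const tag of filters.tags ?? []) clauses.push(`tags = "${tag}"`);
  return {
    filter: clauses.length ? clauses.join(" AND ") : undefined,
    attributesToHighlight: ["caseName", "fullText"],
    attributesToCrop: ["fullText"],   // contextual snippets around matches
    cropLength: 40,
    limit: perPage,
    offset: (page - 1) * perPage,
  };
}
```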
2. Structured Case Data Model
I designed a comprehensive database schema for legal decisions:
- Core case metadata (case name, citation, court, year)
- Structured elements (holdings, issues, rules of law)
- Full-text content with page segmentation
- Relational data (citations to other cases, statutes)
- Tagging system for categorization
3. Data Repository Pattern
The application implements a clean repository pattern:
- Abstraction over database operations
- Type-safe queries with TypeScript
- JSON transformation for nested case data
- Pagination and efficient data retrieval
- Graceful fallbacks when database is unavailable
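The pagination side of the repository can be sketched as a pure helper that turns a page request into a SQL-style limit/offset window (names are mine; the real repository wraps this around its queries):

```typescript
export function pageWindow(totalRows: number, page: number, perPage: number) {
  const totalPages = Math.max(1, Math.ceil(totalRows / perPage));
  const current = Math.min(Math.max(1, page), totalPages); // clamp out-of-range pages
  return {
    limit: perPage,
    offset: (current - 1) * perPage,
    totalPages,
    current,
  };
}
```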
4. Migration Utilities
To populate the database, I built flexible migration tools:
- Import from existing SQLite databases with schema detection
- CSV and JSON import with smart field mapping
- PDF text extraction for document collections
- Transaction-safe operations with automatic rollback
- Progress tracking for long-running migrations
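"Smart field mapping" here means normalizing incoming CSV headers to canonical column names so imports tolerate varied source schemas. A minimal sketch (the alias list is illustrative, not the app's actual mapping table):

```typescript
// Canonical column name for each normalized source header.
const ALIASES: Record<string, string> = {
  case_name: "caseName", casename: "caseName", title: "caseName",
  cite: "citation", citation: "citation",
  yr: "year", year: "year", decision_year: "year",
};

// Normalize whitespace, case, and separators before the lookup.
export function mapField(header: string): string | undefined {
  const key = header.trim().toLowerCase().replace(/[\s-]+/g, "_");
  return ALIASES[key];
}
```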
5. SvelteKit UI Components
The search interface uses modern SvelteKit components:
- Responsive search results with highlighting
- Dynamic filter controls based on available facets
- Detailed case view with structured data presentation
- Real-time search with debouncing
- Server-side data fetching with client-side enhancements
Technical Challenges
Server-Side Rendering Compatibility
Making the search system work with SSR required careful architecture:
- Ensuring database connections are request-scoped
- Avoiding Node-specific operations in SSR context
- Using environment detection for browser-only code
- Proper hydration with initial server state
Search Optimization
Building efficient search for legal documents had unique challenges:
- Customizing ranking rules for legal relevance
- Balancing exact citation matching with keyword search
- Handling nested document structure in search index
- Providing fallbacks when search service is unavailable
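One way to balance exact citation matching with keyword search is to detect citation-shaped queries up front and route them to exact matching. The pattern below targets the standard "I&N Dec." reporter style (e.g. "28 I&N Dec. 123"); the routing itself is my sketch of the approach, not the app's actual code:

```typescript
// Matches queries that consist solely of a volume, "I&N Dec.", and a page number.
const CITATION_RE = /^\s*\d+\s+I&N\s+Dec\.?\s+\d+\s*$/i;

export function queryMode(q: string): "citation" | "keyword" {
  return CITATION_RE.test(q) ? "citation" : "keyword";
}
```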
Command-line Interface
I created user-friendly CLI tools for data management:
- npm run migrate -- --db-source=path/to/source.db - Import from another database
- npm run migrate -- --csv-source=path/to/cases.csv - Import from CSV file
- npm run migrate -- --json-source=path/to/cases.json - Import from JSON file
- npm run migrate -- --pdf-dir=path/to/pdfs - Import from PDF directory
- npm run index-cases - Index all cases in MeiliSearch
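The --key=value flags these commands use can be parsed with a few lines (a sketch; the real migration script might just as well use a library such as commander or yargs):

```typescript
// Collect every "--key=value" argument into a flags map, ignoring the rest.
export function parseFlags(argv: string[]): Record<string, string> {
  const flags: Record<string, string> = {};
  for (const arg of argv) {
    const m = /^--([^=]+)=(.*)$/.exec(arg);
    if (m) flags[m[1]] = m[2];
  }
  return flags;
}
```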
This project combines the power of full-text search, structured data modeling, and flexible data migration to create a practical tool for immigration law research. By focusing on both the user interface and the data infrastructure, I've created a system that can be easily populated with real-world case data and provide valuable search capabilities for legal professionals.
Day 2: Building a Whisper-Powered Audio Notes App
For my second project, I created an audio notes application that uses OpenAI's Whisper API for speech-to-text transcription. This app allows users to record audio notes, transcribe them, edit the transcriptions, and save them for later reference.
Key Features
1. Audio Recording Interface
I implemented a browser-based audio recording system with intuitive controls:
- Start/stop recording buttons
- Pause and resume functionality
- Audio playback for review
- Real-time recording state indicators
This was built using the MediaRecorder API, which allows capturing audio directly in the browser without plugins.
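Since MediaRecorder itself only runs in the browser, the testable part of those controls is the state machine behind them. The three states below ("inactive", "recording", "paused") are MediaRecorder's own; the transition function is my model of how the buttons drive it:

```typescript
type RecState = "inactive" | "recording" | "paused";
type RecAction = "start" | "stop" | "pause" | "resume";

// Apply a button press to the current recorder state; invalid
// combinations (e.g. "pause" while inactive) are no-ops.
export function transition(state: RecState, action: RecAction): RecState {
  switch (action) {
    case "start":  return state === "inactive" ? "recording" : state;
    case "stop":   return "inactive";
    case "pause":  return state === "recording" ? "paused" : state;
    case "resume": return state === "paused" ? "recording" : state;
  }
}
```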
2. OpenAI Whisper Integration
The app connects to OpenAI's Whisper API to provide high-quality speech-to-text conversion:
- Server-side API endpoint for secure processing
- Support for various audio formats (WebM, MP3)
- Fast and accurate transcription
- Error handling for API limitations
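The server endpoint forwards the audio to OpenAI as a multipart request (POST to `/v1/audio/transcriptions` with a Bearer token and `model: "whisper-1"`). The `file` and `model` field names are OpenAI's; the helper name and default filename are mine:

```typescript
// Build the multipart body for OpenAI's transcription endpoint.
export function buildTranscriptionForm(audio: Blob, filename = "note.webm"): FormData {
  const form = new FormData();
  form.append("file", audio, filename); // filename hints the audio format to the API
  form.append("model", "whisper-1");
  return form;
}
```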
3. Draft Saving with localStorage
To prevent work loss, the app automatically saves in-progress recordings and transcriptions:
- Automatic saving of audio data in browser storage
- Draft recovery on page reload
- Manual save/discard options
- Efficient binary data handling
4. SQLite Database Integration
For long-term storage, notes are saved to the SQLite database:
- Integration with existing user authentication system
- User-specific note storage and retrieval
- CRUD operations for managing notes
- Proper SQL schema with foreign keys and timestamps
Technical Challenges
Server-Side Rendering Considerations
One interesting challenge was handling browser-only APIs in SvelteKit's server-side rendering environment:
- Dynamically importing browser-only modules
- Using the browser check from $app/environment
- Properly securing API endpoints
- Managing client-side state with Svelte 5 runes
Audio Data Management
Working with binary audio data presented unique challenges:
- Converting Blobs to base64 for localStorage
- Efficiently chunking audio data during recording
- Creating object URLs for audio playback
- Proper cleanup to prevent memory leaks
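The Blob-to-base64 step reduces to a byte/string round trip, since localStorage can only hold strings. In the app the Blob is first drained to a Uint8Array (via FileReader or `blob.arrayBuffer()`); the helpers below cover the encoding itself and work in browsers and modern Node alike:

```typescript
// Encode raw audio bytes as base64 for localStorage.
export function bytesToBase64(bytes: Uint8Array): string {
  let binary = "";
  for (const b of bytes) binary += String.fromCharCode(b);
  return btoa(binary);
}

// Decode a stored draft back into bytes for playback.
export function base64ToBytes(b64: string): Uint8Array {
  const binary = atob(b64);
  return Uint8Array.from(binary, (ch) => ch.charCodeAt(0));
}
```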
This project demonstrates the power of combining modern web APIs with AI capabilities to create practical tools. The app is fully protected by the authentication system I built in my first project, ensuring that each user's notes remain private and secure.
Day 1: Building a Session-Based Authentication System with SvelteKit and SQLite
For my first project in our two-week coding challenge, I decided to implement a fundamental feature that most web applications need: user authentication. Here's how I built it using SvelteKit and SQLite.
Key Components
1. Database Setup
I used better-sqlite3 to create and manage the database. The schema includes two tables:
- users - Stores user credentials with bcrypt-hashed passwords
- sessions - Manages active user sessions with expiration times
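A sketch of those two tables (the column names are my guess at the layout, not the app's actual schema):

```sql
CREATE TABLE users (
    id            INTEGER PRIMARY KEY AUTOINCREMENT,
    username      TEXT NOT NULL UNIQUE,
    password_hash TEXT NOT NULL             -- bcrypt hash, never the raw password
);

CREATE TABLE sessions (
    id         TEXT PRIMARY KEY,            -- cryptographically random token
    user_id    INTEGER NOT NULL REFERENCES users(id),
    expires_at TEXT NOT NULL                -- checked on every request
);
```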
2. Authentication API
I created three endpoints using SvelteKit's server-side routes:
- POST - For user registration
- PUT - For user login, creates a session and sets a cookie
- DELETE - For logout, removes the session
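The login path's session creation can be sketched as generating a random token and the Set-Cookie value the PUT endpoint would emit. The cookie name and seven-day lifetime are my assumptions; the HttpOnly flag is the one called out in the security notes below:

```typescript
import { randomBytes } from "node:crypto";

export function createSessionCookie(): { token: string; cookie: string } {
  const token = randomBytes(32).toString("hex"); // cryptographically secure, 64 hex chars
  const maxAge = 60 * 60 * 24 * 7;               // one week, matching session expiry
  const cookie = `session=${token}; HttpOnly; Path=/; SameSite=Lax; Max-Age=${maxAge}`;
  return { token, cookie };
}
```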
3. Server Hooks
SvelteKit's server hooks are perfect for authentication. On each request, the hook:
- Checks for a session cookie
- Validates the session against the database
- Makes the user information available to all routes
- Protects routes that require authentication
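The per-request flow above can be modeled as a pure function: read the session cookie, look the session up, and reject anything unknown or expired. The names are mine, and the real lookup hits the sessions table from inside SvelteKit's handle() hook rather than a Map:

```typescript
interface Session { userId: number; expiresAt: number; }

export function resolveUser(
  cookieHeader: string | null,
  sessions: Map<string, Session>,
  now: number
): number | null {
  // Pull the session token out of the Cookie header, if present.
  const match = /(?:^|;\s*)session=([^;]+)/.exec(cookieHeader ?? "");
  if (!match) return null;                                // no session cookie
  const session = sessions.get(match[1]);
  if (!session || session.expiresAt <= now) return null;  // unknown or expired
  return session.userId;
}
```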
4. User Interface
The UI includes:
- Login form with validation
- Registration form with password confirmation
- Conditional content based on authentication status
- Error handling and success messaging
Security Considerations
The system implements several security best practices:
- Passwords are hashed using bcrypt
- Session IDs are cryptographically secure random tokens
- Cookies are HttpOnly to prevent JavaScript access
- Sessions expire automatically and can be invalidated
I built this first because many of my planned projects will involve access to various LLMs, and I'd prefer not to expose those to the public internet.