Neelesh Karthikeyan

AWS‑certified Software/Data Engineer (3+ years). Ex‑Tesla. I build edge‑to‑cloud AI systems, scalable data platforms, and full‑stack apps used by researchers and engineers. Pursuing an MS in Data Science at Indiana University; seeking new‑grad roles starting May 2025.

Honors

  • Awarded an $80,000 tuition remission (acceptance rate under 1%) for research contributions at Indiana University.
  • Co-authored the award-winning paper, ML Field Planner with TAPIS, accepted at the Gateways 2025 Conference.

Certifications

AWS Certified Data Engineer - Associate
Confluent Kafka Certified
Neo4j Certified Professional

Education

Indiana University Bloomington

Master’s Degree, Computational Data Science — GPA: 3.9 / 4

Aug 2023 - May 2025 • Bloomington, Indiana

Coursework: Data Structures, Algorithms, Networking, Distributed Systems, Cloud Computing, Concurrency

College of Engineering Guindy, Anna University

Bachelor’s Degree, Engineering — GPA: 3.7 / 4

Aug 2017 - May 2021 • Chennai, India

Experience

Indiana University — Software Engineer

Jan 2024 - Present • Bloomington, Indiana

  • Engineered edge-to-cloud pipelines for AI inference monitoring in a $20M NSF-funded project using Kafka and Java.
  • Released an open-source Python SDK for AI model documentation with fairness and explainability metrics.
  • Deployed full-stack web applications used by 20+ researchers to interact with containerized databases on Kubernetes.
  • Designed data models and ingested real-time streaming data from 100+ edge devices into PostgreSQL and Neo4j.
  • Devised RESTful APIs with JWT authentication for model repository access and database services using Golang.
  • Optimized API request round-trip time by 21% via A/B testing across protocols and Python web frameworks.
  • Developed MLOps framework for PyTorch/TensorFlow model lifecycle management with HuggingFace integration.

Tesla — Data Engineer Intern

May 2024 - Aug 2024 • Palo Alto, California

  • Deployed JavaScript-based full-stack applications used by 5+ internal teams to analyze large datasets and detect issues.
  • Created internal Python libraries with APIs for engineers to render custom data visualizations using SQL and Spark.
  • Engineered scalable data processing for Parquet files with 1M+ rows using SQLAlchemy and Airflow.
  • Developed custom PySpark transformations to process telemetry data with 50 TB daily ingestion rate.
  • Streamlined service workflows by 15% with a Text-to-SQL chatbot using LLaMA and prompt engineering.

Gyan Data — Software Engineer II

May 2021 - Jul 2023 • Chennai, India

  • Hosted 8+ GraphQL microservices on AWS for automobile sensor data access using TypeScript (Client: Toyota).
  • Decoupled systems using AWS Elastic Beanstalk, SQS, and Celery, and managed S3 bucket policies with Terraform.
  • Migrated 7+ retailers’ sales data from SAS, SAP, and on-prem SQL servers to Amazon Redshift (Client: Kellogg’s).
  • Designed Python tools to identify the critical 30% of airbag tests and optimized crash prototypes (Client: Ford).
  • Developed Docker-based microservices for a federated learning POC; secured deployment approval (Client: Tvarit).

Skills

Python
SQL
Go
R
C++
Java
JavaScript
TypeScript
Bash
React
Express
Node.js
Snowflake
Databricks
MongoDB
DynamoDB
Redis
Neo4j
BigQuery
Kafka
RabbitMQ
Hadoop
Spark
Kubernetes
gRPC

Projects

Prompt Engineering Test Harness (Code)
React.js, PostgreSQL, Nginx, FastAPI, Redis, OpenAI API, CI/CD

Autonomous AI Agent System for Trading Insights (Code)
LangChain, Model Context Protocol, Llama, Python

Streaming Data Pipeline for Weather Analytics (Code)
Kafka, Spark, MySQL, Airflow, Snowflake, dbt, Docker
