Hi,
I'm Hershita,
a research
enthusiast

About

I’m an engineer and researcher with a solid foundation in electronics and sensor systems, having worked on cutting-edge technologies at the intersection of mobility, perception, and intelligence.

🌍 Along the way, I’ve had the privilege of exploring new countries, interacting with global tech communities, and absorbing diverse mindsets.
I believe “innovation happens not just in code, but in conversations: learning from how people approach problems, implement ideas, and adapt technology to local challenges.”

Skills

Professional Skills

I completed my M.Tech at IIT Hyderabad under Prof. P. Rajalakshmi, focusing on autonomous driving with research in V2X applications and deep learning-based camera-radar fusion. I now work at Suzuki Motor Corporation, Japan, advancing autonomous technologies while deepening my expertise in AI, ML, and Generative AI.

Autonomous Driving Systems
V2X Communication
Sensor Fusion (Camera + Radar)
Deep Learning / Neural Networks
Python (NumPy, Pandas, scikit-learn)
PyTorch / TensorFlow
Computer Vision (OpenCV)
Generative AI

Work

Edge-Aware V2X Safety Systems: Enhancing Road Intelligence Through Cloud, Sensors, and Real-Time Communication

Python Sensors Socket Programming 802.11p Radio Module OpenCV Linux

My work involved building applications for Edge-Aware V2X Safety Systems at TiHAN IITH in collaboration with CDAC. The project combined DSRC communication modules, On-Board Units (equipped with GPRS, an 802.11p radio, Wi-Fi, GPS, Bluetooth, an accelerometer, and a gyroscope), Road-Side Units, and stereo vision cameras to enhance real-time vehicular safety and pedestrian detection.
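As a rough illustration of the socket-programming side of this work, the sketch below broadcasts a minimal safety message from an On-Board Unit over UDP. The message fields, port, and broadcast address are assumptions made for the example, not the project's actual message format.

```python
import json
import socket
import time

# Illustrative sketch only: broadcast a basic position/speed report from an
# On-Board Unit over UDP. The port, address, and field names are placeholders.
ALERT_PORT = 5005                      # assumed port
BROADCAST_ADDR = "255.255.255.255"     # local-segment broadcast

def build_alert(lat, lon, speed_mps, heading_deg):
    """Pack a minimal position/speed report as JSON bytes."""
    return json.dumps({
        "ts": time.time(),
        "lat": lat,
        "lon": lon,
        "speed_mps": speed_mps,
        "heading_deg": heading_deg,
    }).encode("utf-8")

def broadcast_alert(message: bytes) -> None:
    """Send one datagram to every listener on the local network segment."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(message, (BROADCAST_ADDR, ALERT_PORT))

if __name__ == "__main__":
    # Example values only.
    broadcast_alert(build_alert(17.54, 78.57, speed_mps=8.3, heading_deg=90.0))
```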

Multi-Sensor Fusion for Vehicle Detection and Speed Tracking using Camera and RADAR (in collaboration with Suzuki Motor Corporation)

Python Radar Camera Fusion RetinaNet Deep Learning

I developed a modified camera-radar fusion algorithm for vehicle detection and speed tracking as part of my M.Tech thesis at IITH TiHAN. The project aimed to improve single-stage (RetinaNet-based) detection accuracy by fusing radar measurements with camera imagery.
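The association step can be sketched as projecting radar returns into the image plane and attaching range and speed to each detected bounding box. The snippet below is a minimal illustration under assumed intrinsics, extrinsics, and box format, not the thesis implementation.

```python
import numpy as np

# Assumed calibration values for illustration only.
K = np.array([[800.0, 0.0, 640.0],          # camera intrinsics (placeholder)
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])
R_radar_to_cam = np.eye(3)                   # extrinsic rotation (placeholder)
t_radar_to_cam = np.array([0.0, 0.2, 0.0])   # extrinsic translation in metres (placeholder)

def project_radar_points(points_radar: np.ndarray) -> np.ndarray:
    """Project Nx3 radar points (x, y, z in the radar frame) to pixel coordinates."""
    pts_cam = points_radar @ R_radar_to_cam.T + t_radar_to_cam
    pix = pts_cam @ K.T
    return pix[:, :2] / pix[:, 2:3]

def attach_radar_to_boxes(boxes, points_radar, ranges, speeds):
    """For each (x1, y1, x2, y2) box, keep the nearest radar return that falls inside it."""
    pix = project_radar_points(points_radar)
    fused = []
    for x1, y1, x2, y2 in boxes:
        inside = (pix[:, 0] >= x1) & (pix[:, 0] <= x2) & \
                 (pix[:, 1] >= y1) & (pix[:, 1] <= y2)
        if inside.any():
            i = np.argmin(np.where(inside, ranges, np.inf))
            fused.append({"box": (x1, y1, x2, y2),
                          "range_m": float(ranges[i]),
                          "speed_mps": float(speeds[i])})
    return fused
```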

IITH TiHAN – Autonomous Driving Applications (Suzuki Phase)

Autonomous Driving VSLAM Stereo Vision Radar-Camera Fusion Deep Learning NVIDIA Orin Human Factors

I continued the IITH TiHAN work at Suzuki, focusing on converting academic research into deployable autonomous driving modules and conducting real-world experiments. Building on that earlier academic work, I contributed to implementing and testing algorithms on Suzuki’s autonomous driving platform, covering VSLAM, stereo vision, radar-camera fusion, deep learning deployment on NVIDIA Orin, and human-factors evaluation.

Generative AI for Autonomous Driving: Trajectory & Scenario Generation

Generative AI Transformers Trajectory Generation Scenario Prediction World Models COSMOS Terra Gaia LLM4AD

I am researching generative AI applications in autonomous driving, focusing on trajectory generation and next-sequence driving scenario prediction, while deepening my expertise in transformers and world-modeling frameworks.
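To give a flavor of autoregressive trajectory generation, the sketch below uses a small causal transformer in PyTorch to predict the next (x, y) waypoint from a history of waypoints. The architecture sizes and rollout loop are illustrative assumptions, not tied to COSMOS, Terra, Gaia, or any specific framework named above.

```python
import torch
import torch.nn as nn

class TrajectoryTransformer(nn.Module):
    """Toy autoregressive waypoint predictor (sizes are placeholders)."""
    def __init__(self, d_model: int = 64, n_heads: int = 4, n_layers: int = 2, max_len: int = 128):
        super().__init__()
        self.input_proj = nn.Linear(2, d_model)           # (x, y) -> embedding
        self.pos_emb = nn.Parameter(torch.zeros(1, max_len, d_model))
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, 2)                 # embedding -> next (x, y)

    def forward(self, waypoints: torch.Tensor) -> torch.Tensor:
        # waypoints: (batch, seq_len, 2); assumes seq_len <= max_len
        seq_len = waypoints.size(1)
        h = self.input_proj(waypoints) + self.pos_emb[:, :seq_len]
        causal_mask = torch.triu(
            torch.full((seq_len, seq_len), float("-inf")), diagonal=1)
        h = self.encoder(h, mask=causal_mask)             # causal self-attention
        return self.head(h)                               # next-step prediction per position

@torch.no_grad()
def rollout(model: TrajectoryTransformer, history: torch.Tensor, steps: int) -> torch.Tensor:
    """Autoregressively extend a (1, T, 2) trajectory by `steps` waypoints."""
    traj = history
    for _ in range(steps):
        next_wp = model(traj)[:, -1:, :]   # prediction at the last position
        traj = torch.cat([traj, next_wp], dim=1)
    return traj
```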

Papers and Publications

Improving Radar-Camera Fusion Network for Distance Estimation IEEE

Publisher: IEEE

Conference: 2024 16th International Conference on Computer and Automation Engineering (ICCAE)

Authors: Samuktha V., Hershita Shukla, Nitish Kumar, Tejasri N., D. Santhosh Reddy, P. Rajalakshmi

Abstract: A novel radar-camera fusion technique using deep learning improves object detection in autonomous systems by addressing occlusion, illumination changes, and sensor noise. The method enhances feature extraction and fusion to achieve better real-time performance.

View PDF

V2X Enabled Emergency Vehicle Alert System arXiv (Preprint)

Repository: arXiv

Authors: Nitish Kumar, Hershita Shukla, P. Rajalakshmi

Abstract: A major concern in today’s traffic management systems is time-efficient emergency transport. Emergency vehicles and the surrounding commercial vehicles need awareness of the environment and of each other’s vehicle information. This paper proposes a DSRC-based V2X alerting system for emergency scenarios, enabling rapid, interactive communication between vehicles and infrastructure. A real-time monitoring dashboard built with Grafana provides base-station-level tracking.

View PDF

Implementation of Edge-Cloud for Autonomous Navigation Applications IEEE

Publisher: IEEE

Conference: 2023 15th International Conference on Communication Systems & Networks (COMSNETS)

Authors: Yuvraj Chowdary Makkena, Rajashekhar Reddy Tella, Nisarg Parekh, Prem Kumar Saraf, Annu, Hershita Shukla

Abstract: This paper presents an edge-cloud infrastructure for autonomous navigation applications, addressing low-latency and compute challenges in real-time systems. Deployed at the TiHAN testbed, the solution brings computational power closer to resource-constrained devices, significantly improving performance across use cases.

View PDF

Contact