Hello, I'm
Jiwon Jeong
Passionate about foundation model training, LLM reasoning, and advancing NLP research.
About Me
AI Research Engineer with a Master's degree in Artificial Intelligence from Sungkyunkwan University. Focused on foundation model training & tuning, LLM inference optimization, and reasoning models. Published at NAACL 2025.
Currently at Lotte Innovate as a Language AI Engineer, working on enterprise LLM solutions. Deeply interested in LLM internals, scaling laws, and building more capable reasoning systems.
class AIResearchEngineer:
    def __init__(self):
        self.name = "Jiwon Jeong"
        self.role = "AI Research Engineer"
        self.edu = "SKKU M.S. in AI"
        self.stack = [
            "Python", "PyTorch",
            "FastAPI", "LangChain",
        ]
        self.research = [
            "Logical Fallacy",
            "Commonsense QA",
            "Knowledge Graph",
            "Prompt Engineering",
        ]
        self.future_research = [
            "Reasoning LLM",
            "Foundation LLM",
        ]
        self.goal = "Ph.D. in NLP/LLM"
Skills
Python / PyTorch
End-to-end ML pipeline, model training, data preprocessing, and evaluation
NLP / LLM
Text classification, language modeling, fine-tuning, prompt engineering, text generation & evaluation
FastAPI / Docker
REST API development, containerized deployment, async backend systems
MCP / LangChain
Model Context Protocol servers, LangChain-based chatbot & RAG development
Computer Vision
YOLOv5 object detection, GIT multi-modal model, image-to-text generation
ML Engineering
End-to-end ML pipeline design, model training & evaluation, from classical ML to deep learning architectures
Projects
MCP Server for ChatGPT Apps
Built MCP (Model Context Protocol) servers for ChatGPT Apps (OpenAI). Designed middleware architecture connecting frontend, backend, and LLM-powered enterprise solutions.
B-Peach LAB - Text Adaptation Service
Designed evaluation metrics for an AI-powered text adaptation service improving information accessibility for slow learners. Reduced adaptation time from 1 day to under 1 minute.
Korean LLM Benchmark
An evaluation framework for Korean LLM performance, benchmarking various language models across Korean NLP tasks with standardized metrics and reproducible pipelines.
EN-KO Neural Machine Translation
Built Seq2Seq and Transformer models from scratch for English-Korean translation. Implemented Beam Search, attention mechanism, positional encoding, and BLEU evaluation.
Experience & Education
Online Mentor @ LIKELION (AI NLP Engineer Intensive Course, 3rd Cohort)
Mentoring four teams on AI-based service development projects using corporate and public data. Providing code review, project coaching, and Q&A support via Discord. Guiding students from Python fundamentals through hands-on NLP project development.
AI Research Engineer @ Lotte Innovate
Language AI Engineer in AI Tech LAB, Lotte GPT Part. Fine-tuning GPT-OSS-120B for enterprise LLM solutions. Responsible for language AI development at the AI Innovation Center.
AI Researcher @ B-Peach LAB (Tech for Impact)
Designing evaluation metrics for a text adaptation service aimed at improving information accessibility for slow learners. Conducting ongoing research on adaptation quality assessment.
Teaching Assistant @ Sungkyunkwan University
TA for Language Model course (SWE3032-41) and Deep Learning advanced courses for non-CS educators. Supported lectures and student mentoring.
M.S. in Artificial Intelligence @ Sungkyunkwan University
Research on Commonsense QA and Logical Fallacy Detection. Published at NAACL 2025 Findings and KSC 2022 (Outstanding Paper Award). Advisor: Prof. Hogun Park.
Industry-Academia Project (Outstanding)
Built an AI-based construction site hazard detection system using YOLOv5x, GIT, and ELECTRA. Integrated the three models into a single API for real-time safety monitoring, achieving an F1 score of up to 0.97.
Text Ethics Hackathon - 2nd Place
Developed an ELECTRA-based classification model for detecting hate speech and biased text. Retrained BPE tokenizer for domain adaptation. Won 2nd place award.
B.S. in Electronic Engineering @ Kookmin University
Major in Electronic Systems.
Publications
Large Language Models Are Better Logical Fallacy Reasoners with Counterargument, Explanation, and Goal-Aware Prompt Formulation
The 2025 Annual Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics
arXiv
Improving Commonsense-based QA Model through a Cycle-Encoder
Proceedings of the Korea Software Congress, pp. 791-793, 2022. Top 10% of accepted papers.
DBpia
Get In Touch
Open to collaborations, research discussions, and new opportunities.
I'm also actively seeking a Ph.D. position in NLP/LLM — feel free to reach out anytime!