Hello, I'm

Jiwon Jeong


Passionate about foundation model training, LLM reasoning, and advancing NLP research.

About Me

AI Research Engineer with a Master's degree in Artificial Intelligence from Sungkyunkwan University. Passionate about foundation model training & tuning, LLM inference optimization, and reasoning models. Published at NAACL 2025 Findings.

Currently at Lotte Innovate as a Language AI Engineer, working on enterprise LLM solutions. Deeply interested in LLM internals, scaling laws, and building more capable reasoning systems.

2 Publications
5+ Projects
2 Awards
about.py
class AIResearchEngineer:
    def __init__(self):
        self.name = "Jiwon Jeong"
        self.role = "AI Research Engineer"
        self.edu = "SKKU M.S. in AI"
        self.stack = [
            "Python", "PyTorch",
            "FastAPI", "LangChain"
        ]
        self.research = [
            "Logical Fallacy",
            "Commonsense QA",
            "Knowledge Graph",
            "Prompt Engineering"
        ]
        self.future_research = [
            "Reasoning LLM",
            "Foundation LLM"
        ]
        self.goal = "Ph.D. in NLP/LLM"

Skills

🐍

Python / PyTorch

Model implementation and training in PyTorch, data preprocessing, and evaluation

🤖

NLP / LLM

Text classification, language modeling, fine-tuning, prompt engineering, text generation & evaluation

FastAPI / Docker

REST API development, containerized deployment, async backend systems

🔌

MCP / LangChain

Model Context Protocol servers, LangChain-based chatbot & RAG development

👁

Computer Vision

YOLOv5 object detection, GIT multi-modal model, image-to-text generation

📊

ML Engineering

End-to-end ML pipeline design, model training & evaluation, classical ML to deep learning architectures

Projects

🔌
2026.01 ~ Present

MCP Server for ChatGPT Apps

Built MCP (Model Context Protocol) servers for ChatGPT Apps (OpenAI). Designed a middleware architecture connecting the frontend, backend, and LLM-powered enterprise solutions.

Python FastAPI MCP ChatGPT Apps
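At its core, such a server is a JSON-RPC dispatcher that advertises tools and routes tool calls to handlers. A toy pure-Python sketch of that pattern follows; the `get_order_status` tool and its payload are hypothetical, and a real deployment would use the official MCP SDK over stdio or HTTP rather than hand-rolled dispatch:

```python
import json

# Hypothetical tool registry: tool name -> handler. Real MCP servers also
# publish an input schema per tool; omitted here for brevity.
TOOLS = {
    "get_order_status": lambda order_id: {"order_id": order_id, "status": "shipped"},
}

def handle(request_json: str) -> str:
    """Dispatch one JSON-RPC 2.0 request in the shape MCP uses."""
    req = json.loads(request_json)
    if req["method"] == "tools/list":
        result = {"tools": [{"name": name} for name in TOOLS]}
    elif req["method"] == "tools/call":
        tool = TOOLS[req["params"]["name"]]
        result = tool(**req["params"]["arguments"])
    else:
        return json.dumps({"jsonrpc": "2.0", "id": req["id"],
                           "error": {"code": -32601, "message": "method not found"}})
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})
```

The middleware value is in this indirection: the LLM only sees tool names and schemas, while the handlers call into backend services.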
🍑
2025.08 ~ Present

B-Peach LAB - Text Adaptation Service

Designed evaluation metrics for an AI-powered text adaptation service improving information accessibility for slow learners. Reduced adaptation time from 1 day to under 1 minute.

NLP Evaluation Text Adaptation Accessibility
📊
2025.12 ~ 2026.01

Korean LLM Benchmark

Evaluation framework for Korean LLM performance. Benchmarking various language models across Korean NLP tasks with standardized metrics and reproducible pipelines.

Python LLM Evaluation Benchmark
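One building block of such a framework is a normalized exact-match metric. A minimal sketch, not the project's actual code: Unicode NFC normalization matters for Korean, where composed syllables and decomposed jamo otherwise compare unequal.

```python
import re
import unicodedata

def exact_match(prediction: str, reference: str) -> bool:
    """Compare after NFC normalization, lowercasing, punctuation
    stripping, and whitespace collapsing."""
    def norm(text: str) -> str:
        text = unicodedata.normalize("NFC", text).lower()
        text = re.sub(r"[^\w\s]", "", text)  # \w matches Hangul in Python 3
        return " ".join(text.split())
    return norm(prediction) == norm(reference)

def accuracy(predictions: list[str], references: list[str]) -> float:
    """Fraction of predictions that exactly match their reference."""
    return sum(exact_match(p, r) for p, r in zip(predictions, references)) / len(references)
```

Keeping the normalization in one place is what makes scores reproducible across models: every system's output passes through the same canonical form before comparison.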
🤖
2025.05 ~ 2025.06

EN-KO Neural Machine Translation

Built Seq2Seq and Transformer models from scratch for English-Korean translation. Implemented beam search, attention mechanisms, positional encoding, and BLEU evaluation.

PyTorch Transformer NMT NLP
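Of those components, sinusoidal positional encoding is compact enough to show in full. A pure-Python sketch of the formulation from the original Transformer paper (the real model would build this as a PyTorch tensor):

```python
import math

def positional_encoding(seq_len: int, d_model: int) -> list[list[float]]:
    """Sinusoidal positional encoding: PE[pos, 2i] = sin(pos / 10000^(2i/d)),
    PE[pos, 2i+1] = cos(...), giving each position a unique phase pattern."""
    pe = [[0.0] * d_model for _ in range(seq_len)]
    for pos in range(seq_len):
        for i in range(0, d_model, 2):
            angle = pos / (10000 ** (i / d_model))
            pe[pos][i] = math.sin(angle)      # even dimensions: sine
            if i + 1 < d_model:
                pe[pos][i + 1] = math.cos(angle)  # odd dimensions: cosine
    return pe
```

Because the frequencies are geometrically spaced, relative offsets between positions correspond to fixed linear transformations of the encoding, which is what lets attention generalize across sequence lengths.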
2021.11 ~ 2021.12

Unethical Text Detection (Hackathon)

ELECTRA-based hate speech and bias detection model. Retrained the BPE tokenizer for domain adaptation. Won 2nd place at the 2021 Text Ethics Hackathon.

ELECTRA BPE Ethics Award
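The BPE retraining step boils down to repeatedly merging the most frequent adjacent symbol pair in the domain corpus. A minimal sketch of that learning loop, with illustrative toy word counts:

```python
from collections import Counter

def learn_bpe_merges(words: dict[str, int], num_merges: int) -> list[tuple[str, str]]:
    """Learn BPE merge rules from a word-frequency dict, e.g. {"low": 5}.
    Returns the merge pairs in the order they were learned."""
    vocab = {tuple(word): count for word, count in words.items()}
    merges = []
    for _ in range(num_merges):
        # Count every adjacent symbol pair, weighted by word frequency.
        pairs = Counter()
        for symbols, count in vocab.items():
            for a, b in zip(symbols, symbols[1:]):
                pairs[(a, b)] += count
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        # Re-segment every word with the new merge applied.
        new_vocab = {}
        for symbols, count in vocab.items():
            out, i = [], 0
            while i < len(symbols):
                if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == best:
                    out.append(symbols[i] + symbols[i + 1])
                    i += 2
                else:
                    out.append(symbols[i])
                    i += 1
            new_vocab[tuple(out)] = count
        vocab = new_vocab
    return merges
```

Retraining on in-domain text shifts which pairs win these merge rounds, so domain-specific slang and spelling variants stop being shredded into single characters.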

Experience & Education

2026.02.09 ~ 2026.02.24

Online Mentor @ LIKELION (AI NLP Engineer Intensive Course, 3rd Cohort)

Mentoring 4 teams on AI-based service development projects using corporate and public data. Providing code review, project coaching, and Q&A support via Discord. Guiding students from Python fundamentals through hands-on NLP project development.

2025.09 ~ Present

AI Research Engineer @ Lotte Innovate

Language AI Engineer in AI Tech LAB, Lotte GPT Part. Fine-tuning GPT-OSS-120B for enterprise LLM solutions. Responsible for language AI development at the AI Innovation Center.

2025.08 ~ Present

AI Researcher @ B-Peach LAB (Tech for Impact)

Designing evaluation metrics for a text adaptation service aimed at improving information accessibility for slow learners. Conducting ongoing research on adaptation quality assessment.

2023 ~ 2024

Teaching Assistant @ Sungkyunkwan University

TA for the Language Model course (SWE3032-41) and advanced deep learning courses for non-CS educators. Supported lectures and student mentoring.

2022.03 ~ 2024.08

M.S. in Artificial Intelligence @ Sungkyunkwan University

Research on Commonsense QA and Logical Fallacy Detection. Published at NAACL 2025 Findings and KSC 2022 (Outstanding Paper Award). Advisor: Prof. Hogun Park.

2023.08 ~ 2023.12

Industry-Academia Project (Outstanding)

Built an AI-based construction site hazard detection system using YOLOv5x, GIT, and ELECTRA. Integrated the three models into a single API for real-time safety monitoring, reaching an F1 score of up to 0.97.

2021.11 ~ 2021.12

Text Ethics Hackathon - 2nd Place

Developed an ELECTRA-based classification model for detecting hate speech and biased text, retraining the BPE tokenizer for domain adaptation. Won the 2nd-place award.

2016.03 ~ 2022.02

B.S. in Electronic Engineering @ Kookmin University

Major in Electronic Systems.

Publications

NAACL 2025 Findings

Large Language Models Are Better Logical Fallacy Reasoners with Counterargument, Explanation, and Goal-Aware Prompt Formulation

Jiwon Jeong, Hyeju Jang, Hogun Park

The 2025 Annual Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics

arXiv
KSC 2022 · Outstanding Paper Award

Improving Commonsense-based QA Model through a Cycle-Encoder

Jiwon Jeong, Soyoung Lee, Hogun Park

Proceedings of the Korea Software Congress, pp. 791-793, 2022. Top 10% of accepted papers.

DBpia

Get In Touch

Open to collaborations, research discussions, and new opportunities.
I'm also actively seeking a Ph.D. position in NLP/LLM — feel free to reach out anytime!