Jiwon Jeong

Hello, I'm

Jiwon Jeong


Passionate about foundation model training, LLM reasoning, and advancing NLP research.

About Me

AI Research Engineer holding a Master's degree in Artificial Intelligence from Sungkyunkwan University, with research focused on LLM reasoning, logical fallacy detection, and commonsense QA.

Currently at Lotte Innovate as a Language AI Engineer, working on enterprise LLM solutions. Deeply interested in LLM internals, scaling laws, and building more capable reasoning systems.

2 Publications
4+ Projects
2+ Awards
about.py
class AIResearchEngineer:
    def __init__(self):
        self.name = "Jiwon Jeong"
        self.role = "AI Research Engineer"
        self.edu = "SKKU M.S. in AI"
        self.stack = [
            "Python", "PyTorch",
            "FastAPI", "LangChain"
        ]
        self.research = [
            "Logical Fallacy",
            "Commonsense QA",
            "Knowledge Graph",
            "Prompt Engineering"
        ]
        self.future_research = [
            "Reasoning LLM",
            "Foundation LLM"
        ]
        self.goal = "Ph.D. in NLP/LLM"

Research Interests

My research focuses on understanding and improving large language models, with the goal of pursuing a Ph.D. in NLP/LLM.

🧠

Reasoning LLM

Enhancing multi-step reasoning and problem-solving capabilities of LLMs through inference-time scaling, chain-of-thought, and self-consistency.
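The self-consistency idea mentioned above can be sketched in a few lines: sample several reasoning chains, extract each chain's final answer, and take the majority vote. A minimal illustration (the sampler here is a hypothetical stand-in for a temperature-sampled LLM call, not a real model API):

```python
from collections import Counter

def self_consistency(sample_fn, question, n_samples=5):
    """Sample several chain-of-thought answers and return the majority vote.

    `sample_fn(question)` stands in for one stochastic LLM call that returns
    the final answer extracted from a sampled reasoning chain.
    """
    answers = [sample_fn(question) for _ in range(n_samples)]
    winner, count = Counter(answers).most_common(1)[0]
    return winner, count / n_samples  # answer plus its agreement ratio

# Deterministic toy "sampler" so the example is reproducible; a real system
# would decode with temperature > 0 and parse the answer from each chain.
votes = iter(["42", "42", "41", "42", "42"])
answer, agreement = self_consistency(lambda q: next(votes), "6 * 7 = ?",
                                     n_samples=5)
# answer == "42", agreement == 0.8
```

The agreement ratio doubles as a cheap confidence signal: low agreement often flags questions where more samples (inference-time scaling) help most.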

🏭

Foundation Models

Pre-training, post-training, and scaling laws for large language models. Understanding training dynamics and emergent capabilities.
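To make the scaling-law interest concrete, a sketch of the Chinchilla-style parametric loss form L(N, D) = E + A/N^α + B/D^β, where N is parameter count and D is training tokens. The constants below are the published Hoffmann et al. fit, used here purely as illustrative defaults:

```python
def chinchilla_loss(n_params, n_tokens,
                    E=1.69, A=406.4, B=410.7, alpha=0.34, beta=0.28):
    """Parametric loss fit L(N, D) = E + A/N^alpha + B/D^beta.

    E is the irreducible loss; the two power-law terms capture the
    finite-parameter and finite-data contributions respectively.
    """
    return E + A / n_params**alpha + B / n_tokens**beta

# Doubling either parameters or data at the other axis fixed lowers the
# predicted loss, but never below the irreducible term E.
base = chinchilla_loss(7e9, 1.4e11)      # ~7B params, 140B tokens
assert chinchilla_loss(7e9, 2.8e11) < base
assert chinchilla_loss(14e9, 1.4e11) < base
```

Fitting E, A, B, α, β on a sweep of small training runs is what lets compute-optimal N/D trade-offs be extrapolated to large budgets.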

💡

Commonsense QA

Enabling models to answer questions requiring world knowledge and commonsense reasoning beyond surface-level pattern matching.

🔬

Logical Fallacy Detection

Identifying and classifying logical fallacies in text using LLMs with counterargument and goal-aware prompt formulation.
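A rough sketch of what counterargument- and goal-aware prompt formulation can look like in practice. The field names and wording are illustrative placeholders, not the exact templates from the paper:

```python
def fallacy_prompt(text, goal=None, counterargument=None):
    """Assemble a fallacy-classification prompt, optionally enriched with
    the writer's goal and a counterargument to the text's claim."""
    parts = [f"Text: {text}"]
    if goal:
        parts.append(f"Goal of the writer: {goal}")
    if counterargument:
        parts.append(f"Counterargument: {counterargument}")
    parts.append("Question: Which logical fallacy, if any, does the text "
                 "commit? Explain your reasoning, then give the fallacy label.")
    return "\n".join(parts)

prompt = fallacy_prompt(
    "Everyone is buying this phone, so it must be the best one.",
    goal="persuade the reader to buy the phone",
    counterargument="Popularity does not establish quality.",
)
```

The intuition is that spelling out the writer's goal and an explicit counterargument gives the LLM the contrastive context a fallacy judgment actually rests on, rather than asking for a label from the text alone.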

📝

Prompt Engineering

Designing effective prompts for LLMs to improve task performance, including few-shot, chain-of-thought, and structured prompting.
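A minimal illustration of few-shot chain-of-thought prompt assembly: worked (question, reasoning, answer) triples are concatenated ahead of the new query so the model imitates the demonstrated reasoning format. The instruction string and layout are assumptions for the sketch:

```python
def few_shot_prompt(examples, query, instruction="Answer step by step."):
    """Build a few-shot chain-of-thought prompt from (question, reasoning,
    answer) triples, followed by the new query with an open answer slot."""
    blocks = [instruction]
    for question, reasoning, answer in examples:
        blocks.append(f"Q: {question}\nA: {reasoning} The answer is {answer}.")
    blocks.append(f"Q: {query}\nA:")
    return "\n\n".join(blocks)

demos = [("What is 2 + 3?", "2 plus 3 equals 5.", "5")]
prompt = few_shot_prompt(demos, "What is 4 + 9?")
```

Ending the prompt at `A:` matters: it positions the model to continue with a reasoning chain in the same shape as the demonstrations.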

🕸️

Knowledge Graph

Integrating structured knowledge into language models for improved factual grounding and reasoning capabilities.
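One common lightweight pattern for this integration: select the triples mentioning a query entity and render them as a textual context block the LLM is conditioned on. A sketch with a toy in-memory triple store (a real system would query a KG backend):

```python
def kg_context(triples, entity):
    """Render the facts involving `entity` as a textual grounding block.

    `triples` is a list of (head, relation, tail) tuples standing in for a
    knowledge-graph query result.
    """
    facts = [f"{h} {r} {t}." for h, r, t in triples
             if h == entity or t == entity]
    return "Known facts:\n" + "\n".join(facts)

kg = [
    ("Seoul", "is the capital of", "South Korea"),
    ("Seoul", "has a population of about", "9.4 million"),
    ("Busan", "is a city in", "South Korea"),
]
ctx = kg_context(kg, "Seoul")  # only the two Seoul triples survive
```

Prepending `ctx` to a question about the entity gives the model verifiable facts to ground on instead of relying on parametric memory alone.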

Publications

NAACL 2025 Findings

Large Language Models Are Better Logical Fallacy Reasoners with Counterargument, Explanation, and Goal-Aware Prompt Formulation

Jiwon Jeong, Hyeju Jang, Hogun Park

The 2025 Annual Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics

arXiv
KSC 2022 · Outstanding Paper Award

Improving Commonsense-based QA Model through a Cycle-Encoder

Jiwon Jeong, Soyoung Lee, Hogun Park

Proceedings of the Korea Software Congress, pp. 791-793, 2022. Top 10% of accepted papers.

DBpia

Projects

🤗
2026.03

Open Source Contribution: Hugging Face Transformers

Fixed a bug in continuous batching for multimodal models (e.g. Qwen3.5). Added input preprocessing and fallback mechanism for non-text-only models. Merged into the official Hugging Face Transformers repository.

Open Source Transformers Multimodal Bug Fix
🔌
2026.01 ~ Present

MCP Server for ChatGPT Apps

Built MCP (Model Context Protocol) servers for ChatGPT Apps (OpenAI). Designed middleware architecture connecting frontend, backend, and LLM-powered enterprise solutions.

Python FastAPI MCP ChatGPT Apps
🍑
2025.08 ~ 2026.01

B-Peach LAB - Text Adaptation Service

Designed evaluation metrics for an AI-powered text adaptation service improving information accessibility for slow learners. Reduced adaptation time from 1 day to under 1 minute.

NLP Evaluation Text Adaptation Accessibility
📊
2025.12 ~ 2026.01

Korean LLM Benchmark

Evaluation framework for Korean LLM performance. Benchmarking various language models across Korean NLP tasks with standardized metrics and reproducible pipelines.

Python LLM Evaluation Benchmark
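The core of such an evaluation framework can be sketched in a few lines: score a model callable on each task, then aggregate. Task names and the exact-match scoring rule here are placeholders, not the benchmark's actual task suite:

```python
def run_benchmark(model_fn, tasks):
    """Score `model_fn` on each task and report per-task exact-match
    accuracy plus the macro average across tasks.

    `tasks` maps a task name to a list of (input, gold) pairs.
    """
    scores = {}
    for name, pairs in tasks.items():
        correct = sum(model_fn(x) == gold for x, gold in pairs)
        scores[name] = correct / len(pairs)
    scores["macro_avg"] = sum(scores.values()) / len(tasks)
    return scores

# Toy model and placeholder tasks; real runs would plug in an LLM callable
# and Korean NLP datasets.
echo_model = lambda x: x.upper()
tasks = {
    "copy_upper": [("abc", "ABC"), ("ko", "KO")],
    "hard": [("abc", "xyz")],
}
result = run_benchmark(echo_model, tasks)
# result["copy_upper"] == 1.0, result["hard"] == 0.0, result["macro_avg"] == 0.5
```

Keeping the model behind a single callable is what makes the pipeline reproducible: swapping models changes one argument, not the harness.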

Experience & Education

2025.09 ~ Present

AI Research Engineer @ Lotte Innovate

Language AI Engineer in the AI Tech LAB, Lotte GPT Part, responsible for language AI research and development.

2026.02.09 ~ 2026.02.24

Online Mentor @ LIKELION (AI NLP Engineer Intensive Course, 3rd Cohort)

Mentoring 4 teams on AI-based service development projects using corporate and public data. Providing code review, project coaching, and Q&A support via Discord, and guiding students from Python fundamentals through hands-on NLP project development.

2025.08 ~ 2026.01

AI Researcher @ B-Peach LAB (Tech for Impact)

Designed evaluation metrics for a text adaptation service aimed at improving information accessibility for slow learners, and conducted research on adaptation quality assessment.

2023 ~ 2024

Teaching Assistant @ Sungkyunkwan University

TA for Language Model course (SWE3032-41) and Deep Learning advanced courses for non-CS educators. Supported lectures and student mentoring.

2022.03 ~ 2024.08

M.S. in Artificial Intelligence @ Sungkyunkwan University

Research on Commonsense QA and Logical Fallacy Detection. Published at NAACL 2025 Findings and KSC 2022 (Outstanding Paper Award). Advisor: Prof. Hogun Park.

2023.08 ~ 2023.12

Industry-Academia Project (Outstanding)

Built an AI-based construction-site hazard detection system using YOLOv5x, GIT, and ELECTRA, integrating the three models into a single API for real-time safety monitoring. Achieved an F1 score of up to 0.97.

2021.11 ~ 2021.12

Text Ethics Hackathon - 2nd Place

Developed an ELECTRA-based classification model for detecting hate speech and biased text, retraining the BPE tokenizer for domain adaptation; the entry placed 2nd overall.

2016.03 ~ 2022.02

B.S. in Electronic Engineering @ Kookmin University

Major in Electronic Systems.

Get In Touch

Open to collaborations, research discussions, and new opportunities.
I'm also actively seeking a Ph.D. position in NLP/LLM — feel free to reach out anytime!