Hello, I'm
Jiwon Jeong
Passionate about foundation model training, LLM reasoning, and advancing NLP research.
About Me
AI Research Engineer with a Master's degree in Artificial Intelligence from Sungkyunkwan University, where my research focused on LLM reasoning, logical fallacy detection, and commonsense QA.
Currently at Lotte Innovate as a Language AI Engineer, working on enterprise LLM solutions. Deeply interested in LLM internals, scaling laws, and building more capable reasoning systems.
class AIResearchEngineer:
    def __init__(self):
        self.name = "Jiwon Jeong"
        self.role = "AI Research Engineer"
        self.edu = "SKKU M.S. in AI"
        self.stack = [
            "Python", "PyTorch",
            "FastAPI", "LangChain"
        ]
        self.research = [
            "Logical Fallacy",
            "Commonsense QA",
            "Knowledge Graph",
            "Prompt Engineering"
        ]
        self.future_research = [
            "Reasoning LLM",
            "Foundation LLM"
        ]
        self.goal = "Ph.D. in NLP/LLM"
Research Interests
My research focuses on understanding and improving large language models, with a goal of pursuing a Ph.D. in NLP/LLM.
Reasoning LLM
Enhancing multi-step reasoning and problem-solving capabilities of LLMs through inference-time scaling, chain-of-thought, and self-consistency.
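Self-consistency can be sketched in a few lines: sample several chain-of-thought completions and majority-vote over their final answers. This is a minimal illustration, assuming a hypothetical generate function that returns a (reasoning, answer) pair for one stochastic decoding run; it is not tied to any particular model API.

```python
from collections import Counter

def self_consistency(generate, question, k=5):
    """Sample k chain-of-thought completions for `question` and
    majority-vote over their final answers.

    `generate` is a hypothetical callable (an assumption for this
    sketch) that performs one stochastic decoding run and returns
    a (reasoning, answer) tuple.
    """
    # Keep only the final answers; the reasoning traces are discarded.
    answers = [generate(question)[1] for _ in range(k)]
    # Counter.most_common(1) yields [(answer, count)] for the top answer.
    return Counter(answers).most_common(1)[0][0]
```

In practice `generate` would wrap a sampled LLM call (temperature above zero), so disagreeing reasoning paths still converge on the most frequent answer.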
Foundation Models
Pre-training, post-training, and scaling laws for large language models. Understanding training dynamics and emergent capabilities.
Commonsense QA
Enabling models to answer questions requiring world knowledge and commonsense reasoning beyond surface-level pattern matching.
Logical Fallacy Detection
Identifying and classifying logical fallacies in text using LLMs with counterargument and goal-aware prompt formulation.
Prompt Engineering
Designing effective prompts for LLMs to improve task performance, including few-shot, chain-of-thought, and structured prompting.
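The few-shot chain-of-thought pattern above amounts to concatenating worked exemplars before the target question. A minimal sketch, with illustrative exemplar content (not from any specific project of mine):

```python
def build_fewshot_cot_prompt(examples, question):
    """Assemble a few-shot chain-of-thought prompt.

    `examples` is a list of (question, reasoning, answer) triples;
    each exemplar shows a worked reasoning trace ending in an
    explicit answer, and the prompt ends with the new question
    followed by an open "A:" for the model to complete.
    """
    parts = []
    for q, reasoning, answer in examples:
        parts.append(f"Q: {q}\nA: {reasoning} The answer is {answer}.")
    parts.append(f"Q: {question}\nA:")
    return "\n\n".join(parts)
```

Structured variants swap the free-form reasoning trace for a fixed schema (e.g. numbered steps or JSON fields) while keeping the same exemplar-then-query layout.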
Knowledge Graph
Integrating structured knowledge into language models for improved factual grounding and reasoning capabilities.
Publications
Large Language Models Are Better Logical Fallacy Reasoners with Counterargument, Explanation, and Goal-Aware Prompt Formulation
The 2025 Annual Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics
arXiv
Improving Commonsense-based QA Model through a Cycle-Encoder
Proceedings of the Korea Software Congress, pp. 791-793, 2022. Top 10% of accepted papers.
DBpia
Projects
Open Source Contribution: Hugging Face Transformers
Fixed a bug in continuous batching for multimodal models (e.g. Qwen3.5). Added input preprocessing and a fallback mechanism for non-text-only models. Merged into the official Hugging Face Transformers repository.
MCP Server for ChatGPT Apps
Built MCP (Model Context Protocol) servers for ChatGPT Apps (OpenAI). Designed middleware architecture connecting frontend, backend, and LLM-powered enterprise solutions.
B-Peach LAB - Text Adaptation Service
Designed evaluation metrics for an AI-powered text adaptation service improving information accessibility for slow learners. Reduced adaptation time from 1 day to under 1 minute.
Experience & Education
AI Research Engineer @ Lotte Innovate
Language AI Engineer in the AI Tech LAB (Lotte GPT Part), responsible for language AI research and development at Lotte Innovate.
Online Mentor @ LIKELION (AI NLP Engineer Intensive Course, 3rd Cohort)
Mentoring 4 teams on AI-based service development projects using corporate and public data. Providing code review, project coaching, and Q&A support via Discord. Guiding students with Python fundamentals through hands-on NLP project development.
AI Researcher @ B-Peach LAB (Tech for Impact)
Designing evaluation metrics for a text adaptation service aimed at improving information accessibility for slow learners. Conducting ongoing research on adaptation quality assessment.
Teaching Assistant @ Sungkyunkwan University
TA for Language Model course (SWE3032-41) and Deep Learning advanced courses for non-CS educators. Supported lectures and student mentoring.
M.S. in Artificial Intelligence @ Sungkyunkwan University
Research on Commonsense QA and Logical Fallacy Detection. Published at NAACL 2025 Findings and KSC 2022 (Outstanding Paper Award). Advisor: Prof. Hogun Park.
Industry-Academia Project (Outstanding)
Built an AI-based construction site hazard detection system using YOLOv5x, GIT, and ELECTRA. Integrated three models into an API for real-time safety monitoring. F1 Score up to 0.97.
Text Ethics Hackathon - 2nd Place
Developed an ELECTRA-based classification model for detecting hate speech and biased text. Retrained BPE tokenizer for domain adaptation. Won 2nd place award.
B.S. in Electronic Engineering @ Kookmin University
Major in Electronic Systems.
Get In Touch
Open to collaborations, research discussions, and new opportunities.
I'm also actively seeking a Ph.D. position in NLP/LLM — feel free to reach out anytime!