AI-Generated ‘Saenggibu’ (Student Activity) Records: How Interviewers Are Filtering Out Fake Applications — Same Questions, Different Answers

In the neon-drenched dystopia of Blade Runner, the Voight-Kampff test was designed to distinguish humans from replicants by measuring emotional responses to provocative questions. Decades later, a similar ethical dilemma has emerged in Seoul, where university admissions officers are grappling with a modern parallel: how to discern authentic self-expression from AI-generated content in student applications. As South Korea’s top universities tighten scrutiny of the saenggibu (student activity record), the line between cultivated identity and fabricated narrative is being tested—not with empathy probes, but with algorithmic detection tools and intensified interview protocols.

The comparison to Ridley Scott’s 1982 sci-fi classic is not merely metaphorical. Just as the Voight-Kampff test sought to uncover whether memories were lived or implanted, today’s admissions committees face the challenge of determining whether a student’s accomplishments reflect genuine passion and growth—or were optimized by generative AI to meet perceived expectations. This tension has intensified following public statements from Seoul National University officials suggesting that AI-assisted saenggibu materials may be filtered out during review, sparking nationwide debate about authenticity, equity, and the evolving role of technology in education.

At the heart of this discussion lies a fundamental question: in an age where AI can generate convincing personal essays, extracurricular summaries, and even reflective statements, how do institutions preserve the integrity of holistic review? And more importantly, what does it mean for students navigating a system where the very tools meant to assist them may also undermine their credibility?

The Voight-Kampff Test: Measuring Humanity Through Response

In Blade Runner, the Voight-Kampff apparatus measures physiological reactions—iris fluctuation, capillary dilation, respiratory rate—to emotionally charged questions designed to provoke empathy. Replicants, lacking genuine emotional development, often fail to respond authentically, revealing their artificial nature. The test does not assess knowledge or logic, but the depth of lived experience reflected in involuntary biological responses.

Administered by blade runners like Rick Deckard, the process is intentionally unsettling. Questions about a tortoise stranded in the desert or a mother’s reaction to a child’s death are crafted to bypass intellectual defense mechanisms and tap into primal emotional responses. As depicted in the film, even advanced Nexus-6 replicants struggle to simulate these responses convincingly, highlighting the gap between programmed behavior and organic consciousness.


Though fictional, the Voight-Kampff test has become a cultural touchstone for discussions about authenticity, consciousness, and the ethics of distinguishing between the real and the simulated. Its relevance has been revisited in academic circles as AI systems grow increasingly capable of mimicking human language, tone, and narrative structure—raising concerns about where to draw the line between assistance and deception in personal expression.

Blu-ray.com notes that the Voight-Kampff scene remains one of the most analyzed sequences in science fiction cinema, frequently cited in discussions about AI ethics and human identity.

Seoul National University and the Scrutiny of the Saenggibu

In South Korea, the saenggibu (생활기록부) is a comprehensive student activity record that documents academic performance, extracurricular involvement, leadership roles, volunteer work, and personal reflections throughout high school. Unlike Western systems that rely heavily on standardized test scores and personal essays, Korean universities place significant weight on the saenggibu as a holistic portrait of the applicant’s development over time.

However, the rise of accessible generative AI tools has complicated this process. Students can now use large language models to draft or refine activity descriptions, personal statements, and reflective essays—potentially producing polished narratives that may not reflect their actual experiences or growth trajectory. Recognizing this challenge, admissions officers at Seoul National University have begun implementing stricter verification measures, including more probing interview questions designed to uncover inconsistencies between written materials and verbal accounts.

According to a 2023 report by the Korean Educational Development Institute, over 60% of university admissions officers expressed concern about the difficulty of verifying the authenticity of saenggibu entries in the age of AI. The report noted that while no formal ban on AI use exists, institutions are increasingly relying on interviews and school recommendations to cross-check submitted materials.

Korean Educational Development Institute (KEDI) published findings in late 2023 indicating that universities are adapting evaluation frameworks to address AI-generated content, though standardized detection tools remain limited in reliability and ethical acceptability.

In interviews with local media, SNU admissions officials have emphasized that their goal is not to penalize students for using AI as a study aid, but to ensure that the saenggibu represents a truthful account of personal development. One officer, speaking on condition of anonymity, described the approach as asking “the same question in different rooms”—a direct echo of the Voight-Kampff methodology—where inconsistencies in narrative under pressure may reveal whether experiences were lived or constructed.

Ethical Implications: Authenticity in the Age of AI

The growing reliance on AI in academic preparation raises profound ethical questions about fairness, access, and the purpose of holistic admissions. On one hand, AI tools can help students from under-resourced backgrounds articulate their experiences more effectively, potentially leveling the playing field in competitive admissions. On the other, unequal access to advanced AI guidance—or the temptation to over-rely on automation—could exacerbate existing inequalities or reward performance over authenticity.


Experts in education technology warn that without clear guidelines, the use of AI in application materials risks undermining the very values holistic review seeks to uphold: self-reflection, personal growth, and genuine engagement with one’s community. As noted by researchers at Stanford’s Graduate School of Education, the danger lies not in AI use itself, but in the opacity surrounding it—when applicants present AI-generated content as their own unmediated voice.

Stanford GSE has published work emphasizing the need for transparency in AI-assisted writing, particularly in high-stakes contexts like college admissions, where authenticity is a core evaluative criterion.

Some educators advocate for explicit disclosure policies, similar to those used in academic publishing, where students would acknowledge AI assistance in drafting or editing materials. Others argue that the focus should shift toward strengthening interview components and teacher recommendations—elements less susceptible to AI generation—as a means of verifying authenticity through human interaction.

In South Korea, where the saenggibu carries significant weight, any shift in evaluation methodology could have wide-ranging implications for students, schools, and private education providers. The country’s highly competitive academic culture, exemplified by the intense focus on the College Scholastic Ability Test (CSAT), means that changes in admissions criteria are closely watched and often provoke public debate.

Detection, Verification, and the Limits of Technology

While several companies offer AI detection tools claiming to identify machine-generated text, their reliability remains contested. Studies have shown that such tools often produce false positives, particularly disadvantaging non-native English writers or students with distinct linguistic styles. In 2023, OpenAI discontinued its AI classifier due to low accuracy rates, acknowledging the difficulty of reliably distinguishing between human and AI-generated text at scale.


OpenAI stated in July 2023 that its classifier could not reliably detect AI-generated text and was subsequently withdrawn, highlighting the technical limitations of current detection methods.

Given these challenges, many institutions are turning to human-centered verification strategies. At Seoul National University, admissions committees have reportedly increased the use of follow-up interviews that probe not just what students did, but how they felt, what they learned, and how experiences changed them—questions designed to elicit reflective, narrative responses that are hard to fabricate convincingly without genuine internalization.

This approach mirrors the Voight-Kampff test’s focus on emotional resonance rather than factual recall. By asking students to describe moments of failure, ethical dilemmas, or personal transformation in varied contexts, interviewers aim to assess whether the narrative reflects a coherent, evolving self—or an externally optimized performance.

Education specialists caution, however, that over-reliance on interviews may introduce bias, favoring students with greater confidence, verbal fluency, or familiarity with Western-style self-promotion—traits not necessarily correlated with merit or potential. Balancing authenticity verification with equitable evaluation remains an ongoing challenge.

What This Means for Students and Educators

For high school students preparing university applications, the evolving landscape underscores the importance of authentic engagement in extracurricular and reflective activities. Rather than focusing solely on producing impressive-sounding records, students may benefit more from pursuing meaningful experiences and developing the ability to articulate them thoughtfully—whether with or without AI assistance.

Educators, meanwhile, face the task of guiding students in ethical AI use. This includes teaching critical awareness of how AI shapes expression, encouraging reflection on personal growth, and fostering environments where students feel safe to share genuine struggles and insights—not just polished achievements.

Some schools have begun integrating AI literacy into career counseling and writing instruction, emphasizing that while AI can help refine language, it cannot substitute for lived experience or authentic self-understanding. As one Seoul-based college counselor noted in a recent interview, “The goal isn’t to eliminate AI use—it’s to ensure that the voice behind the application is still the student’s.”

In this light, the comparison to Blade Runner takes on a deeper resonance. Just as the film questioned what it means to be human in a world of near-perfect imitations, today’s admissions landscape invites reflection on what it means to be authentic in an age of intelligent assistance. The Voight-Kampff test may be fiction, but the pursuit of sincerity it represents remains profoundly real.

Looking Ahead: The Future of Holistic Review

As AI continues to evolve, universities worldwide are likely to face increasing pressure to adapt their evaluation methods. In South Korea, where the saenggibu system is deeply entrenched, any changes will require careful consideration of cultural context, educational values, and equity implications.

Seoul National University has not announced plans to overhaul its admissions process, but officials have indicated that ongoing reviews of application verification practices are underway. The next scheduled update on undergraduate admissions policy is expected ahead of the 2025 application cycle, typically released in early spring by the university’s Office of Admissions.

Prospective applicants are encouraged to consult official guidelines from SNU and the Korean Council for University Education when preparing materials, and to seek advice from trusted educators or counselors regarding ethical AI use in academic contexts.

As the line between human and machine-generated expression continues to blur, the challenge for institutions is not to eliminate AI’s role, but to preserve the space where genuine self-discovery can still be seen, heard, and valued.

What are your thoughts on the role of AI in college applications? Share your perspective in the comments below, and consider sharing this article with others navigating the evolving landscape of education and technology.
