"Driven by Social Impact: Behind the Scenes of Developing Fake Detection to Keep Up with Evolving Generative AI"
- NABLAS
- Jul 1
- 5 min read

With the rapid evolution and spread of generative AI technology, various types of fake content, such as images, videos, audio, and text, have become commonplace. Fake information spreads mainly through social media, posing a growing risk of harm to individuals, companies, and society as a whole.
NABLAS has been focusing on the social impact of generative AI and deepfake technologies from early on, and has been working on detection technology development since 2021.
In this article, we interviewed Xue-san, a research engineer in charge of KeiganAI, a proprietary product developed as part of the fake detection project, about the challenges and rewards of the work.
Please give us a brief introduction about yourself.
I am a research engineer in the R&D team, mainly engaged in research and development in the field of computer vision. I have been with NABLAS for over two years, including a one-year internship. My experience includes AI editing of cooking videos using diffusion models and an AI-based technical document classification project for an automobile manufacturer. Currently, I am involved in a fake detection project, working on the development of a detection model that can identify fake videos generated by AI.
▶ Click here for NABLAS' fake detection service, KeiganAI.
What kind of technological developments are you working on in the fake detection project?
I am mainly working on detecting fake videos, with the aim of distinguishing between real and fake videos. With the rapid evolution of generative models in the field of video generation, technologies that can reliably detect generated or altered content are becoming increasingly important. The spread of fake content poses a serious risk to the credibility of the media and social trust, so I believe this work is not only technically challenging but also has great social significance.
To detect the unnatural features (artifacts) in fake videos, we are developing a model that combines multiple approaches rather than relying on a single perspective, enabling high-precision analysis. For example, we output a comprehensive result based on various evaluation factors, such as video texture, movement, and unnatural flow. By performing fake detection through this multifaceted analysis, we are increasing the versatility of the model.
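The idea of fusing several evaluation factors into one verdict can be sketched as a weighted score fusion. This is only an illustrative outline, assuming hypothetical cue names (texture, motion, flow) and a simple normalized weighted average; the article does not describe NABLAS' actual fusion method.

```python
def combine_cue_scores(scores, weights):
    """Fuse per-cue fake probabilities (0 = real, 1 = fake) into one
    overall score via a normalized weighted average.

    scores/weights are dicts keyed by cue name; both the cue names and
    the averaging scheme here are illustrative assumptions."""
    total = sum(weights[cue] for cue in scores)
    return sum(scores[cue] * weights[cue] for cue in scores) / total


# Example: three hypothetical cues, with motion weighted more heavily.
scores = {"texture": 0.9, "motion": 0.6, "flow": 0.8}
weights = {"texture": 1.0, "motion": 2.0, "flow": 1.0}
overall = combine_cue_scores(scores, weights)  # (0.9 + 1.2 + 0.8) / 4
```

A real system would learn the fusion (e.g. with a small classifier head) rather than hand-tune weights, but the principle of aggregating independent cues is the same.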
Generation technology is evolving daily, and new fake videos that cannot be detected using traditional methods are continuously emerging. This is why developing detection models that are not dependent on specific generation methods or styles has become a critical focus for us.

What difficulties did you encounter (or are encountering) during development?
One of the challenges we faced during development was how to enable AI to detect, in high-precision fake videos, the slight unnaturalness characteristic of generative AI. Recent generative AI is extremely capable: at first glance, its output appears natural and highly realistic, with almost no visible flaws. Misuse of such high-precision fake content has already occurred in countries around the world, and given its influence, we feel the development of countermeasures is urgently needed.
The improvement in the quality of generative AI is particularly evident in video generation, making it increasingly difficult to detect fakes using conventional approaches. To detect unnaturalness, we tried various detection approaches, repeating cycles of trial, error, and revision until we achieved satisfactory results.
Furthermore, the versatility of the model was another major challenge. While our initial models achieved solid detection results, as generative technology evolved we began to struggle with increasingly realistic fake videos. As mentioned earlier, the pace of technological evolution in generative AI is extremely rapid, with new technologies emerging daily. To counter this, what matters is not only current detection accuracy but also the versatility to adapt to continuously emerging generative technologies. Therefore, developing a detection model with high generalization capabilities, able to handle a wide range of patterns, was indispensable.
To reiterate, our goal is to develop a model that can reliably detect fake videos generated by any model. Therefore, as long as generative AI technology continues to evolve, these challenges will persist.
How did you overcome the above difficulties?
Detecting unnaturalness in moving images was a very delicate and difficult task. However, even when a video as a whole appears natural, there are still slight movement discrepancies and inconsistencies unique to generative AI. We focused on the “subtle movement discrepancies” unique to videos that conventional detection approaches could not fully capture, and after trial and error, we arrived at an approach that detects unnaturalness at a more fundamental level. This approach highlights artifacts that were previously difficult to detect, enabling the model to identify fakes with higher accuracy.
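One simple way to quantify motion discrepancies of this kind is to look at how frame-to-frame change itself changes over time: natural motion tends to vary smoothly, while generated video can exhibit jitter. The sketch below is a deliberately crude proxy of that idea, not the method described in the interview.

```python
import numpy as np


def temporal_inconsistency(frames):
    """frames: (T, H, W) array of grayscale video frames.

    Returns the mean absolute second-order temporal difference --
    a crude proxy for jittery, physically implausible motion.
    A real detector would use learned motion features instead."""
    d1 = np.diff(frames.astype(np.float64), axis=0)  # frame-to-frame change
    d2 = np.diff(d1, axis=0)                         # change of that change
    return float(np.abs(d2).mean())


# A video whose brightness ramps up linearly has perfectly smooth motion,
# so its second-order difference is zero; random frames score much higher.
smooth = np.linspace(0.0, 1.0, 10).reshape(10, 1, 1) * np.ones((10, 4, 4))
noisy = np.random.default_rng(0).random((10, 4, 4))
```

In practice, detection models extract such temporal cues via learned representations (e.g. 3D convolutions or optical-flow features) rather than raw pixel differences, but the underlying intuition is the same.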
Additionally, to enhance the model's versatility, we improved the training data. Specifically, we deliberately included fake videos created with the latest, most realistic and high-precision generation technology, training the model on cases that are especially hard to detect. This enabled us to evolve the model into one with strong generalization capabilities, capable of detecting not only traditional fakes but also a wide variety of fake video patterns.
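Curating training batches that emphasize the newest, hardest fakes can be sketched as a simple sampling policy. Everything here — the 75/25 recent/legacy split, the function name, and the list-based datasets — is an illustrative assumption, not NABLAS' actual pipeline.

```python
import random


def build_training_batch(real, legacy_fakes, recent_fakes,
                         batch_size=8, recent_ratio=0.75, seed=0):
    """Compose a half-real, half-fake batch that oversamples fakes from
    the newest generators, so the detector keeps seeing the hardest
    examples. Returns (sample, label) pairs with label 1 = fake."""
    rng = random.Random(seed)
    n_fake = batch_size // 2
    n_recent = round(n_fake * recent_ratio)
    batch = ([(x, 0) for x in rng.sample(real, batch_size - n_fake)]
             + [(x, 1) for x in rng.sample(recent_fakes, n_recent)]
             + [(x, 1) for x in rng.sample(legacy_fakes, n_fake - n_recent)])
    rng.shuffle(batch)
    return batch
```

In a real training loop this policy would feed a data loader, and the recent/legacy ratio might be scheduled over training rather than fixed.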

What aspects of the fake detection project do you find most rewarding?
What I find most rewarding about the fake detection project is that I am able to tackle real-world issues that are currently expanding. With the rapid advancement of generative AI, particularly video generation technology, it is becoming increasingly difficult to distinguish between real and fake content. In today's information-driven society, fake information has the potential to cause significant negative impacts not only on businesses but also on national-level issues such as politics and the economy. I find meaning in my work because we are developing countermeasures against fake information that could lead to significant societal disruption.
Personally, I also see this project as a significant opportunity for growth. Technically challenging, constantly evolving, and demanding continuous critical thinking, it pushes me to learn and adapt. I try new ideas, fail, hit dead ends, and through that trial and error I learn something new every day. It's not always smooth sailing, but through this process I've gained confidence and improved both my problem-solving skills and my mindset. I'm also grateful for the team atmosphere. Everyone is open to discussion, and when faced with difficult problems, we support each other. This collaborative environment, where we grow together, makes the project experience even more meaningful.
Please tell us about your future prospects and goals for project development.
Going forward, we aim to further enhance the capabilities of our models to detect fake videos created by increasingly sophisticated generative models. Given the rapid evolution of generative technology, we believe it is important to continuously refine our detection strategies while adapting to these changes.
In the long term, we aim to build a more integrated, general-purpose detection framework. This will be an “all-in-one” system capable of identifying various types of tampered videos, including not only text-to-video and image-to-video generation, but also video editing, inpainting, and object replacement. By developing such a system, we aim to achieve a model with greater flexibility and scalability across a wider range of scenarios.
Personally, I aspire to deepen my knowledge in the field of fake video detection and become someone who can take on responsibilities in designing more advanced architectures. I am excited about continuing to be involved in projects that are technically challenging yet socially meaningful.
Thank you for the interview, Xue-san!
NABLAS is currently recruiting new team members!