CIFAKE: A Hybrid Approach for Detecting and Explaining AI-Generated Synthetic Images

Authors

  • Mrs. K. Anusha¹, Mohammad Bushra Fathima², Sukamanchi Chandu Sai³, Panduri Bhumesh⁴, Nelluri Vamsi Gopala Krishna⁵

DOI:

https://doi.org/10.64751/

Keywords:

AI-generated image detection, Deepfake detection, Convolutional Neural Network (CNN), Computer vision, Explainable AI (XAI), Image classification, Synthetic media, Digital forensics

Abstract

With the rapid growth of artificial intelligence in India, synthetic media generation has increased across social media, journalism, advertising, and cybercrime. India has recorded thousands of deepfake-related complaints in recent years, especially involving misinformation and identity misuse. As AI-generated images become photorealistic, traditional visual inspection fails, raising serious concerns for digital trust, national security, and data authenticity, and motivating research into automated AI-generated image detection using computer vision techniques.

The objective is to automatically distinguish real images from AI-generated images using CNN-based classification and explainable AI, ensuring image authenticity, reliability, and trustworthiness in digital ecosystems.

In the manual system, image authenticity is verified through human visual inspection, expert judgment, metadata analysis, and basic digital forensic techniques. Analysts examine image quality, lighting inconsistencies, compression artifacts, and contextual information; watermark verification or source validation is sometimes used to determine whether an image is real or artificially generated. Manual analysis is time-consuming, subjective, and error-prone. Human observers cannot reliably detect high-quality synthetic images, especially diffusion-generated content. The process does not scale to large datasets, lacks consistency, and fails when metadata is removed or manipulated, making it ineffective against modern AI-generated images.

The motivation is therefore to overcome the limitations of manual inspection by introducing an automated, scalable, and objective system. Machine learning can detect subtle visual artifacts invisible to humans, improve accuracy, reduce analysis time, and provide explainable insights, addressing the reliability and scalability challenges of traditional methods.
The proposed system uses a Convolutional Neural Network (CNN) to classify images as Real or Fake by learning discriminative visual patterns from both CIFAR-10 real images and synthetically generated images (CIFAKE dataset).
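The abstract does not specify the network architecture, but the core idea — mapping a 32×32 RGB image (the CIFAKE image size, inherited from CIFAR-10) through convolutional feature extraction to a Real/Fake probability — can be sketched as a minimal NumPy forward pass. All filter shapes, weights, and the single-layer design below are illustrative assumptions, not the authors' actual model:

```python
import numpy as np

def conv2d(image, kernels, stride=1):
    """Valid 2D convolution of an (H, W, C) image with K kernels of shape (kh, kw, C, K)."""
    kh, kw, _, K = kernels.shape
    H, W, _ = image.shape
    oh = (H - kh) // stride + 1
    ow = (W - kw) // stride + 1
    out = np.zeros((oh, ow, K))
    for i in range(oh):
        for j in range(ow):
            patch = image[i*stride:i*stride+kh, j*stride:j*stride+kw, :]
            # Correlate the patch with every kernel at once
            out[i, j] = np.tensordot(patch, kernels, axes=([0, 1, 2], [0, 1, 2]))
    return out

def predict_fake_probability(image, kernels, w, b):
    """Tiny CNN forward pass: conv -> ReLU -> global average pool -> sigmoid head."""
    feat = np.maximum(conv2d(image, kernels), 0.0)   # ReLU feature maps
    pooled = feat.mean(axis=(0, 1))                  # global average pooling per channel
    logit = pooled @ w + b                           # linear classification head
    return 1.0 / (1.0 + np.exp(-logit))              # P(image is Fake)

rng = np.random.default_rng(0)
img = rng.random((32, 32, 3))                        # one CIFAKE-sized RGB image
kernels = rng.standard_normal((3, 3, 3, 8)) * 0.1    # 8 illustrative 3x3 filters
w = rng.standard_normal(8) * 0.1                     # head weights (untrained)
b = 0.0
p_fake = predict_fake_probability(img, kernels, w, b)
label = "Fake" if p_fake >= 0.5 else "Real"
```

In a real system these weights would be learned by minimizing binary cross-entropy over the labeled CIFAKE training set, and an XAI method such as saliency mapping would highlight which pixel regions drove the Real/Fake decision.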

Published

03-04-26

How to Cite

K. Anusha, Mohammad Bushra Fathima, Sukamanchi Chandu Sai, Panduri Bhumesh, & Nelluri Vamsi Gopala Krishna. (2026). CIFAKE: A Hybrid Approach for Detecting and Explaining AI-Generated Synthetic Images. American Journal of AI Cyber Computing Management, 6(2), 209–216. https://doi.org/10.64751/