Rock-Paper-Scissors AI Classifier

TECH-STACK: TensorFlow, Python, Google Colab, PIL

This project presents a machine learning approach to the classic game of Rock-Paper-Scissors using image classification. At its core is a Convolutional Neural Network (CNN) that recognizes the hand gestures for Rock, Paper, and Scissors in static images. Images are preprocessed to a uniform size of 128×128 pixels and labeled as integers: ‘rock’ is 0, ‘scissors’ is 1, and ‘paper’ is 2.
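
A rough sketch of this preprocessing step is shown below; the folder layout, file extension, and helper names are assumptions for illustration, not the project's actual code:

    import numpy as np
    from pathlib import Path
    from PIL import Image

    LABELS = {"rock": 0, "scissors": 1, "paper": 2}

    def load_image(path):
        # Resize to the uniform 128x128 input size and scale pixels to [0, 1].
        img = Image.open(path).convert("RGB").resize((128, 128))
        return np.asarray(img, dtype=np.float32) / 255.0

    def load_dataset(root="data"):
        # Assumes one sub-folder per gesture, e.g. data/rock/*.png.
        images, labels = [], []
        for gesture, label in LABELS.items():
            for path in Path(root, gesture).glob("*.png"):
                images.append(load_image(path))
                labels.append(label)
        return np.stack(images), np.array(labels)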

CNN Architecture:

The CNN stacks two convolutional layers with ReLU activation to capture textural features from the images, each followed by a max-pooling layer that reduces spatial dimensionality. A flattening step then feeds two dense layers, the last of which uses a softmax activation to classify each image into one of the three categories.
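
A minimal Keras sketch of this architecture follows; the filter counts and the width of the hidden dense layer are assumptions, since the description only fixes the layer types and their ordering:

    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(128, 128, 3)),
        # Two ReLU convolutions for textural features, each followed by
        # max-pooling to reduce spatial dimensionality.
        tf.keras.layers.Conv2D(32, (3, 3), activation="relu"),
        tf.keras.layers.MaxPooling2D((2, 2)),
        tf.keras.layers.Conv2D(64, (3, 3), activation="relu"),
        tf.keras.layers.MaxPooling2D((2, 2)),
        # Flatten, then two dense layers; softmax yields the three class scores.
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(3, activation="softmax"),
    ])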

Training and Evaluation:

The dataset, comprising labeled images of hand gestures, is split into 80% for training and 20% for testing. After training for 10 epochs, the model reaches high accuracy on the held-out test set, indicating that it generalizes well to unseen data.
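
The split and training loop could look like the following; the shuffling code and the compile settings (Adam, sparse categorical cross-entropy for the integer labels) are assumptions beyond the stated 80/20 split and 10 epochs:

    images, labels = load_dataset()  # from the preprocessing sketch above

    # Shuffle, then hold out 20% of the data for testing.
    rng = np.random.default_rng(seed=0)
    idx = rng.permutation(len(images))
    split = int(0.8 * len(images))
    train_idx, test_idx = idx[:split], idx[split:]

    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(images[train_idx], labels[train_idx], epochs=10)
    loss, acc = model.evaluate(images[test_idx], labels[test_idx])
    print(f"Test accuracy: {acc:.2%}")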

Game Simulation:

An additional feature of this project is a simulated Rock-Paper-Scissors game in which an AI agent plays 100 rounds against a predetermined sequence of moves. Because the agent's choices are driven by the classifier's predictions, its win rate directly reflects the model's predictive accuracy.
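
One plausible version of that loop is sketched below; the exact protocol is an assumption. Here the agent classifies an image of the opponent's predetermined gesture and answers with the counter-move, so wins track correct classifications:

    MOVES = {0: "rock", 1: "scissors", 2: "paper"}
    COUNTER = {"rock": "paper", "scissors": "rock", "paper": "scissors"}

    def ai_move(opponent_image):
        # Classify the opponent's gesture, then play the move that beats it.
        probs = model.predict(opponent_image[np.newaxis], verbose=0)[0]
        return COUNTER[MOVES[int(np.argmax(probs))]]

    # `rounds` is a hypothetical list of 100 (image, gesture-name) pairs.
    wins = sum(ai_move(img) == COUNTER[move] for img, move in rounds)
    print(f"AI won {wins}/{len(rounds)} rounds")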

Real-World Application:

The model’s robustness was further tested on random images from the internet, demonstrating its potential in real-world scenarios where it could be integrated into interactive games or educational tools.
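
Running the trained model on such an image takes only a few lines on top of the helpers above (the file name here is a placeholder):

    img = load_image("downloaded_hand.jpg")  # any image fetched off the web
    probs = model.predict(img[np.newaxis], verbose=0)[0]
    print("Predicted gesture:", MOVES[int(np.argmax(probs))])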
