**DRAW 2** (which stands for **D**etect and **R**ecognize **A** **W**ide range of cards, version 2) is an object detector
trained to detect *Yu-Gi-Oh!* cards in all types of images, and in particular in dueling images.
With this new version,
DRAW 2
goes beyond its predecessor: it's more accurate, more robust, and much easier to use.
It now includes an OBS plugin that lets users, even those without any particular technical skills,
seamlessly integrate the detector directly into their live streams or recorded videos.
The plugin can display detected cards in real time for an enhanced viewing experience.
Other works exist (see Related Works) but none is capable of recognizing cards during a duel.
If you just want to use the plugin, please refer to the OBS plugin page.
You don't need to install anything from this repository.
The documentation below is for people who want to use the detector outside of OBS, which will require some coding skills.
Installation
You need Python to be installed. Python installation isn't detailed here; please refer to the documentation.
We first need to install PyTorch. It is recommended to use a package manager such as miniconda;
please refer to the documentation.
When everything is set up you can run the following command to install pytorch:
```shell
python -m pip install torch torchvision
```
If you want to use your GPU(s) to make everything run faster, please refer to the documentation.
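As a sketch, a CUDA-enabled install typically points pip at a CUDA-specific wheel index. The `cu121` index below assumes a CUDA 12.1 setup; check the official PyTorch "Get Started" selector for the URL matching your CUDA version.

```shell
# Example for a CUDA 12.1 build of PyTorch.
# The index URL varies with your CUDA version -- verify it against
# the selector on the official PyTorch installation page.
python -m pip install torch torchvision --index-url https://download.pytorch.org/whl/cu121
```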
Then you just have to clone the repository and install the requirements:
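For instance (the repository URL below is an assumption based on the author's GitHub handle; adjust it if yours differs):

```shell
# Clone the repository and install its Python dependencies.
# The URL is assumed from the author's handle, not confirmed by this README.
git clone https://github.com/HichTala/draw.git
cd draw
python -m pip install -r requirements.txt
```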
Once the installation is done, you can use the detector by executing the following command:

```shell
python -m draw
```

You can use the `--help` flag to see all available options:

```shell
python -m draw --help
```
Here are the most important options:
- `--source`: Path to your image, video, or webcam index (default is `0` for the webcam).
- `--save`: Save path for the output.
- `--show`: Display the output in a window.
- `--display-card`: Display detected cards on the output.
- `--deck-list`: Path to a `.ydk` file containing the list of cards in your deck, for better recognition.
- `--fps`: FPS of the saved video (default is `60`).
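Putting these options together, a typical run on a recorded duel video might look like the following (the file names are placeholders, not files shipped with the project):

```shell
# Detect cards in duel.mp4, overlay the recognized cards on screen,
# narrow recognition to the cards listed in deck.ydk, and save the result.
python -m draw --source duel.mp4 --save output.mp4 --show --display-card --deck-list deck.ydk
```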
💡Inspiration
This project is inspired by content creator
SuperZouloux
's idea of a
hologram bringing
Yu-Gi-Oh!
cards to life.
His project uses chips inserted under the sleeves of each card,
which are read by the play mat, enabling the cards to be recognized.
Inserting the chips into the sleeves is not only laborious, but also poses another problem:
face-down cards are read in the same way as face-up ones.
An automatic detector is therefore a much more suitable solution.
Although this project was discouraged by
KONAMI
®
, the game's publisher (which is quite understandable),
we can nevertheless imagine such a system being used to display the cards played during a live duel,
to allow viewers to read the cards.
🔗Related Works
Although, to my knowledge, draw is the first detector capable of locating and recognizing
*Yu-Gi-Oh!* cards in a dueling environment,
other works exist and were a source of inspiration for this project. It's worth mentioning them here.
Yu-Gi-Oh! NEURON
is an official application developed by
KONAMI
®
.
It's packed with features, including card recognition. The application is capable of recognizing a total of 20 cards at
a time, which is very decent.
The drawback is that the cards must be of good quality to be recognized, which is not necessarily the case in a duel
context.
What's more, it can't be integrated into other software, so the only way to use it is through the application itself.
yugioh one shot learning, made by vanstorm9, is a
*Yu-Gi-Oh!* card classification program that allows you to recognize cards. It uses a Siamese network to train its
classification model. It gives very impressive results on good-quality images, but performs less well on low-quality
ones, and it can't localize cards.
Yolov11 is the latest version of the very famous yolo family of object
detector models, and it handles oriented bounding boxes.
It hardly needs an introduction today: it represents the state of the art in real-time object detection.
ViT
is a pre-trained model for image classification based on the Vision
Transformer architecture.
It relies entirely on attention mechanisms to process image patches instead of using convolutional layers.
It fits our task well since versions pre-trained on large-scale datasets such as ImageNet-21K are available.
This is particularly relevant for our use case, as it enables handling a large number of visual categories similar to
the 13k+ unique cards found in
Yu-Gi-Oh!
.
SpellTable
is a free application designed and built by
Jonathan Rowny
and his team
for playing paper
Magic: The Gathering
from a distance.
It allows players to click on a card in any player's feed to quickly identify it.
It has some similarity with draw since it can localize and recognize any card from a built-in database of 17,000
cards.
The idea is close to this project's, although it did not originate it.
🔍Method Overview
A Medium blog post explaining the main process, from data collection to final prediction, has been written.
You can access it at this address. If you have any questions, don't hesitate to open an issue.