
TRIA: TRIchogrammA parasitized egg count.

Description

TRIA (TRIchogrammA parasitized egg count) is an Ultralytics model that identifies eggs of Ephestia kuehniella previously parasitized by Trichogramma wasps. TRIA was initialized from YOLO11 and trained on a dedicated dataset.

Installation

  • You need Ultralytics. Note that for Ultralytics, you should have Anaconda or Miniconda installed on your system. If not, download and install Anaconda or Miniconda. Then you can install the Ultralytics package using pip, with the following command (in a terminal):

pip install ultralytics

  • You need to download the trained model "tria-v1.pt".

Usage

Just call the model on the pictures you want to annotate (say the pictures are in the folder "pictures2count") with the following command (in a terminal, run from the folder where "tria-v1.pt" was saved):

yolo predict model=tria-v1.pt source=pictures2count/ save_txt=True conf=0.5

The model will create a folder "runs/detect/predict" with the annotated images and a folder "runs/detect/predict/labels/" with all the annotations in ".txt" format.
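Since each line of a YOLO ".txt" label file corresponds to one detection, the parasitized eggs in an image can be tallied simply by counting lines. The Python sketch below is an illustrative helper, not part of TRIA; it assumes the standard YOLO label format written by `save_txt=True` (one detection per line, normalized "class x_center y_center width height"):

```python
# Illustrative sketch: count parasitized eggs per image from the YOLO
# label files written by `yolo predict ... save_txt=True`.
# Assumes the standard YOLO format: one detection per line,
# "class x_center y_center width height" (normalized coordinates).
from pathlib import Path

def count_detections(labels_dir: str) -> dict:
    """Return {image_name: number_of_detections} for each ".txt" label file."""
    counts = {}
    for txt in sorted(Path(labels_dir).glob("*.txt")):
        with open(txt) as f:
            # Each non-empty line is one detected (parasitized) egg.
            counts[txt.stem] = sum(1 for line in f if line.strip())
    return counts
```

For example, `count_detections("runs/detect/predict/labels")` would return a dictionary mapping each image name to its egg count.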

If you are not satisfied with the predictions

If predictions are not perfect, they can be corrected with Yolo_Label:

To use Yolo_Label, the pictures must be in ".jpg" format. If they are not (say the pictures are in ".tif"), you can convert them with FFmpeg by running the following command in a terminal, inside the "pictures2count" folder:

for i in *.tif; do ffmpeg -i "$i" "${i%.*}.jpg"; done

Once the pictures are in ".jpg" format, copy the predicted annotations from "runs/detect/predict/labels/" into the folder, add a "names.txt" file containing a single class name (e.g. "p"), and open Yolo_Label in this folder. You can then correct the predictions manually.
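When checking annotations by hand, it helps to know what the label values mean: each line stores a class index followed by the box centre and size as fractions of the image dimensions. The following Python sketch (a hypothetical helper, assuming the standard YOLO format; the image size in the comment is an example value) converts one label line into a pixel bounding box:

```python
# Illustrative sketch: convert one YOLO label line (normalized
# "class x_center y_center width height") into a pixel bounding box,
# e.g. for an image of 400x400 pixels.

def yolo_to_pixels(line: str, img_w: int, img_h: int):
    """Return (class_id, x_min, y_min, x_max, y_max) in pixels."""
    cls, xc, yc, w, h = line.split()
    # Scale normalized values by the image dimensions.
    xc, w = float(xc) * img_w, float(w) * img_w
    yc, h = float(yc) * img_h, float(h) * img_h
    # Convert centre + size into corner coordinates.
    return int(cls), round(xc - w / 2), round(yc - h / 2), round(xc + w / 2), round(yc + h / 2)
```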

Example

Low density: initial image alongside the model's predictions.

High density: initial image alongside the model's predictions.

Limits

This model only works for images with a size and resolution similar to the examples.

Authors and acknowledgment

  • C. BRESCH (INRAE) annotated the images.
  • L. VAN OUDENHOVE (INRAE) led the project.
  • Thanks to J. DE GOËR and L. PETITCOLIN for their advice.
  • The CoLab.Ia platform provided the GPU for training.

Contact

louise.vanoudenhove@inrae.fr

License

GNU GENERAL PUBLIC LICENSE, v3.