News

Publication of "Synthesizing Training Data for Intelligent Weed Control Systems Using Generative AI" [10.10.24]

Paper published at the Architecture of Computing Systems (ARCS) 2024 conference, co-authored by Sourav Modak and our head of department, JProf. Anthony Stein.

Another paper from the work of the AI department has been published.

Here are the hard facts about the publication.

Title:
Synthesizing Training Data for Intelligent Weed Control Systems Using Generative AI

Authors:
S. Modak and A. Stein

Venue:
Architecture of Computing Systems (ARCS 2024)

Abstract:
Deep Learning already plays a pivotal role in technical systems performing various crop protection tasks, including weed detection, disease diagnosis, and pest monitoring. However, the efficacy of such data-driven models heavily relies on large and high-quality datasets, which are often scarce and costly to acquire in agricultural contexts. To address the overarching challenge of data scarcity, augmentation techniques have emerged as a popular strategy to expand training data amount and variation. Traditional data augmentation methods, however, often fall short in reliably replicating real-world conditions and also lack diversity in the augmented images, hindering robust model training. In this paper, we introduce a novel methodology for synthetic image generation designed specifically for object detection tasks in the agricultural context of weed control. We propose a pipeline architecture for synthetic image generation that incorporates a foundation model called Segment Anything Model (SAM), which allows for zero-shot transfer to new domains, along with the recent generative AI-based Stable Diffusion Model. Our methodology aims to produce synthetic training images that accurately capture characteristic weed and background features while replicating the authentic style and variability inherent in real-world images with high fidelity. In view of the integration of our approach into intelligent technical systems, such a pipeline paves the way for continual self-improvement of the perception modules when put into a self-reflection loop. First experiments on real weed image data from a current research project reveal our method's capability to reconstruct the innate features of real-world weed-infested scenes from an outdoor experimental setting.
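The paper itself is the authoritative source for the proposed pipeline. Purely as an illustrative sketch of how a SAM-plus-Stable-Diffusion pipeline of this kind is typically assembled, the following Python snippet uses the publicly available segment-anything and diffusers packages. The checkpoint name, prompt, and file paths are placeholder assumptions, not details taken from the paper.

# Illustrative sketch only, not the authors' implementation: combine
# zero-shot SAM segmentation with Stable Diffusion inpainting to produce
# a synthetic training image from a real field photo.
import numpy as np
import torch
from PIL import Image
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator
from diffusers import StableDiffusionInpaintPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"

# Step 1: segment a real field image with SAM (zero-shot, no fine-tuning).
# The checkpoint file name is a placeholder assumption.
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth").to(device)
mask_generator = SamAutomaticMaskGenerator(sam)
image = np.array(Image.open("real_field_image.jpg").convert("RGB"))
masks = mask_generator.generate(image)  # dicts with "segmentation", "area", ...

# Use the largest mask as a stand-in for the weed region of interest.
weed_mask = max(masks, key=lambda m: m["area"])["segmentation"]

# Step 2: inpaint the masked region with Stable Diffusion to create a
# realistic variation of the scene; the prompt below is an assumption.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16 if device == "cuda" else torch.float32,
).to(device)

init_image = Image.fromarray(image).resize((512, 512))
mask_image = Image.fromarray((weed_mask * 255).astype(np.uint8)).resize(
    (512, 512), Image.NEAREST
)

result = pipe(
    prompt="a young weed seedling on bare agricultural soil, photorealistic",
    image=init_image,
    mask_image=mask_image,
).images[0]
result.save("synthetic_training_image.png")

Images generated this way can then be added to the training set of a downstream weed detection model, which is the data-scarcity problem the paper targets.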

The following link takes you to the complete overview of the department's research published to date: ki-agrartechnik.uni-hohenheim.de/veroeffentlichungen

