3260 papers • 126 benchmarks • 313 datasets
Plant Phenotyping refers to the use of various techniques and methods to measure and describe the external characteristics and traits of plants. In machine learning, plant phenotyping typically involves tools such as image processing, computer vision, and sensor technologies to automatically capture and analyze data on the morphology, structure, and growth patterns of plants.
This paper investigates the use of synthetic data for leaf instance segmentation and presents UPGen: a Universal Plant Generator for bridging the species gap, which leverages domain randomisation to produce widely distributed data samples and models stochastic biological variation.
A domain-adversarial learning approach to adapting density map estimation for object counting does not assume perfectly aligned source and target distributions, which makes it more broadly applicable to general object counting and plant organ counting tasks.
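The core mechanism in domain-adversarial approaches of this kind is a gradient reversal layer (GRL): an identity on the forward pass, with gradients negated (and scaled) on the backward pass, so the feature extractor is pushed toward domain-invariant features. A minimal framework-agnostic sketch; the function names are illustrative, not from the paper:

```python
# Gradient reversal layer (GRL) sketch: identity forward, negated and
# scaled gradient backward. Real implementations hook this into an
# autograd framework; here the two passes are shown as plain functions.

def grl_forward(x):
    # forward pass: pass features through unchanged
    return x

def grl_backward(grad, lam=1.0):
    # backward pass: flip the sign of the incoming gradient, scaled by
    # lambda, before it reaches the feature extractor
    return [-lam * g for g in grad]
```

During training, the domain classifier minimizes its loss as usual, while the reversed gradient makes the feature extractor maximize it, driving source and target feature distributions together.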
Precision Agriculture, and especially the application of automated weed intervention, represents an increasingly important research area as sustainability and efficiency considerations become more relevant. While the potential of Convolutional Neural Networks for detection, classification, and segmentation tasks has been successfully demonstrated in other application areas, this relatively new field currently lacks the quantity and quality of training data required for such a highly data-driven approach. We therefore propose a novel large-scale image dataset specializing in the fine-grained identification of 74 relevant crop and weed species, with a strong emphasis on data variability. We provide annotations of labeled bounding boxes, semantic masks, and stem positions for about 112k instances in more than 8k high-resolution images of both real-world agricultural sites and specifically cultivated outdoor plots of rare weed types. Additionally, each sample is enriched with an extensive set of meta-annotations regarding environmental conditions and recording parameters. We furthermore conduct benchmark experiments for multiple learning tasks on different variants of the dataset to demonstrate its versatility, and provide examples of useful mapping schemes for tailoring the annotated data to the requirements of specific applications. In the course of the evaluation, we also demonstrate how incorporating multiple weed species into the learning process increases the accuracy of crop detection. Overall, the evaluation clearly demonstrates that our dataset represents an essential step towards closing the data gap and promoting further research in Precision Agriculture.
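As one illustration of the label-mapping schemes mentioned above, fine-grained species annotations can be collapsed into coarser classes for a specific application, e.g. binary crop/weed detection. A minimal sketch with made-up species names and a hypothetical annotation format (the dataset's actual schema may differ):

```python
# Hypothetical mapping scheme: remap fine-grained species labels onto
# coarse application-level classes while keeping the box geometry.
# Species names and annotation keys are illustrative assumptions.
SPECIES_TO_GROUP = {
    "zea_mays": "crop",
    "beta_vulgaris": "crop",
    "chenopodium_album": "weed",
    "cirsium_arvense": "weed",
}

def remap(annotations, mapping):
    # replace each annotation's species label with its coarse group
    return [{**a, "label": mapping[a["label"]]} for a in annotations]
```

The same pattern extends to other groupings (e.g. monocot vs. dicot weeds) by swapping in a different mapping table.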
This paper investigates the problem of counting rosette leaves from an RGB image, an important task in plant phenotyping, and proposes a data-driven approach that generalizes across different plant species and imaging setups using state-of-the-art deep learning architectures.
This approach proceeds by introducing a fixed number of labels and then dynamically assigning object instances to those labels during training (coloring), and a standard semantic segmentation objective is then used to train a network that can color previously unseen images.
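The dynamic assignment step can be sketched as matching each instance to the cheapest still-unused color from the fixed label pool. This greedy version is a simplification for illustration (practical implementations often use optimal Hungarian matching), and all names are assumptions rather than the paper's API:

```python
# Greedy "coloring" assignment: each instance takes the lowest-cost
# color not yet claimed by another instance. Assumes at least as many
# colors as instances.
def assign_colors(cost, n_colors):
    """cost[i][c] = cost of assigning instance i to color c (lower is better)."""
    used, assignment = set(), {}
    for i, row in enumerate(cost):
        best = min((c for c in range(n_colors) if c not in used),
                   key=lambda c: row[c])
        assignment[i] = best
        used.add(best)
    return assignment
```

With the assignment fixed for a training step, the network is then optimized with an ordinary per-pixel semantic segmentation loss over the color labels.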
This work proposes an active learning algorithm that enables an autonomous system to collect the most informative samples in the field, using a Gaussian Process model to accurately learn the distribution of phenotypes.
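Uncertainty sampling with a Gaussian Process can be sketched as: fit a GP to the locations measured so far, then choose the candidate location with the highest posterior variance. Below is a minimal pure-Python version with an RBF kernel; it is a generic illustration of the idea, not the paper's exact algorithm:

```python
import math

def rbf(a, b, ls=1.0):
    # squared-exponential kernel on scalar inputs
    return math.exp(-((a - b) ** 2) / (2 * ls * ls))

def solve(A, b):
    # solve A x = b by Gauss-Jordan elimination with partial pivoting
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                for c in range(col, n + 1):
                    M[r][c] -= f * M[col][c]
    return [M[i][n] / M[i][i] for i in range(n)]

def gp_posterior_var(X_train, x_star, noise=1e-6, ls=1.0):
    # GP predictive variance: k(x*,x*) - k*^T K^{-1} k*
    K = [[rbf(a, b, ls) + (noise if i == j else 0.0)
          for j, b in enumerate(X_train)] for i, a in enumerate(X_train)]
    k_star = [rbf(a, x_star, ls) for a in X_train]
    v = solve(K, k_star)  # K^{-1} k*
    return rbf(x_star, x_star, ls) - sum(k * vi for k, vi in zip(k_star, v))

def select_next(X_train, candidates):
    # active learning acquisition: sample where the GP is most uncertain
    return max(candidates, key=lambda x: gp_posterior_var(X_train, x))
```

Variance is lowest near already-measured locations and highest far from them, so the rule naturally spreads measurements toward unexplored regions of the field.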
The algorithm estimated plant heights across a field of 112 plots with a root mean square error (RMSE) of 6.1 cm; the accompanying data constitute the first 3D LiDAR dataset collected by an airborne robot over a wheat field.
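For reference, the RMSE metric quoted above is the square root of the mean squared difference between predicted and measured heights:

```python
import math

def rmse(pred, true):
    # root mean square error over paired height measurements
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, true)) / len(pred))
```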
The novel learning setting of explanatory interactive learning is introduced, its benefits are illustrated on a plant phenotyping research task, and it is demonstrated that explanatory interactive learning can help avoid Clever Hans moments in machine learning.
Incorporating a deep learning–based super-resolution (SR) model into the imaging process enhances the quality of low-resolution images of plant roots and boosts the performance of a machine learning system trained to separate plant roots from their background.