
Flora Capture: a citizen science application for collecting structured plant observations

Abstract

Background

Digital plant images are becoming increasingly important. First, given a large number of images, deep learning algorithms can be trained to automatically identify plants. Second, structured image-based observations provide information about plant morphological characteristics. Finally, in the course of digitalization, digital plant collections are receiving growing interest in schools and universities.

Results

We developed a freely available mobile application called Flora Capture allowing users to collect series of plant images from predefined perspectives. These images, together with accompanying metadata, are transferred to a central project server where each observation is reviewed and validated by a team of botanical experts. Currently, more than 4800 plant species, naturally occurring in the Central European region, are covered by the application. More than 200,000 images, depicting more than 1700 plant species, have been collected by thousands of users since the initial app release in 2016.

Conclusion

Flora Capture allows experts, laymen and citizen scientists to collect a digital herbarium and share structured multi-modal observations of plants. Collected images contribute, e.g., to the training of plant identification algorithms, but also suit educational purposes. Additionally, the presence records collected with each observation contribute to verifiable records of plant occurrences across the world.

Background

Efforts to automatically identify species from images have substantially increased in recent years [1, 2]. Deep learning methods revolutionize our ability to train computers in identifying organisms from image data, such as insects [3], fishes [4], plankton [5], mammals [6] and plants [7]. Specifically, convolutional neural networks (CNNs) allow for superior recognition performance [8, 9] and form the basis for successful automated plant species identification [1, 10]. Deep CNNs have been demonstrated to facilitate classification accuracies that are on par with human performance for general object recognition tasks [8] as well as for fine-grained species identification tasks [11]. The latest studies on automated image-based plant identification report accuracies that at least reach human identification abilities for common plants [1, 7, 12]. However, with more than 380,000 described species worldwide [13], automated plant identification still constitutes a challenging image recognition problem, further complicated by low interspecific variability and high intraspecific variability for many species (cp. Fig. 1).

Fig. 1

a Interspecific variability across five Ranunculus species and intraspecific variability within two of the species. b Five Ranunculus species hardly distinguishable by their flowers alone, but clearly identifiable when accompanied by leaf images

CNNs rely on vast numbers of training images. While the algorithms themselves are constantly refined and improved [1, 14], the accuracy of identification is strongly dependent on the quality of the images used in training as well as in the eventual identification process. Therefore, distinguishing very similar plants requires the provision of suitable training images depicting species-specific details. Often, a single image is not enough to reliably identify a species (cp. Fig. 1). This is true for humans as well as for algorithms [15], and the accuracy of species identification has been shown to improve considerably when more than one image perspective is analyzed in an identification process [7, 16, 17]. However, most plant image datasets (e.g., GBIF [18], iNaturalist [19], Pl@ntNet [20]) contain only one image per observed plant and were not collected in a structured manner. While prominent organs such as the flowers of angiosperms are well represented, other organs such as leaves and fruits are often underrepresented or even missing. Furthermore, the majority of images belong to single-image observations which, in many cases, do not allow for a reliable identification, especially if leaves and other important details are not depicted [15]. Unfortunately, the frequency of misidentifications is proportional to the difficulty of discriminating species. Therefore, the ability of algorithms to distinguish visually similar species is further reduced by the presence of incorrectly labelled training data. Structured multi-image observations of plants, following predefined perspectives, provide solid classification results even for species showing strong visual resemblance, such as grasses [7]. Within a structured observation, even images depicting less prominent perspectives can be reliably identified and labeled, which might not be possible without the accompanying perspectives. Having these perspectives as training data eventually enables plant identification algorithms to identify plants based on less conspicuous vegetative parts. The creation of structured image datasets is thus an important task to further improve automatic plant recognition.
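To illustrate why multiple perspectives help, the following minimal sketch fuses per-image classifier scores by simple averaging, so that one perspective can resolve the ambiguity of another. The averaging rule and the example numbers are illustrative assumptions, not the fusion method used in the cited studies.

```python
import numpy as np

def fuse_predictions(per_image_scores: list) -> int:
    """Fuse softmax score vectors from several perspectives of one plant."""
    combined = np.mean(per_image_scores, axis=0)  # average scores over images
    return int(np.argmax(combined))               # index of the predicted species

# The flower alone leaves species 0 and 1 ambiguous; the leaf resolves it.
flower = np.array([0.48, 0.47, 0.05])
leaf = np.array([0.10, 0.80, 0.10])
print(fuse_predictions([flower, leaf]))  # -> 1
```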

Upcoming trends in crowdsourcing and citizen science offer excellent opportunities to generate and continuously update image datasets. Advances in mobile technology and the ubiquity of smartphones provide billions of potential users with a powerful tool to record, collect and share images of the species surrounding them. Members of the public are therefore able to acquire data and contribute to scientific research projects with very little required knowledge of the subject. Involving citizens in accumulating such structured observations can produce a large and high-quality dataset [21]. Keeping participants engaged is challenging but can be accomplished by providing constant feedback on their performance [22]. In the following section, we introduce the mobile application Flora Capture, which allows users to create structured plant observations and to submit them to the servers of the Flora Incognita research project. Depending on a plant's life form, Flora Capture guides a user through a sequence of suitable plant perspectives to be captured. These perspectives have been carefully selected and evaluated to capture complementary information, so that an observation likely contains sufficient information for an expert botanist to identify the depicted plant or to verify a provided ID [7, 23]. Currently, each captured and accepted observation contributes to a dataset used for improving the Flora Incognita plant identification app [24]. Additionally, Flora Capture has been used to conduct other studies and may be used in various ways to support new studies [7].

Implementation

Architectural overview

We designed the Flora Capture system as a flexible client–server solution consisting of scalable microservices running in our data center and client applications realizing different plant identification scenarios (see Fig. 2). Below, we briefly describe the system's modules.

Fig. 2

Architectural overview of the Flora Capture system

Flora Capture app The Flora Capture app is a multi-platform app, freely available on Android and iOS, with a modular code base ensuring maximum reuse and consistency of functionality across the different applications developed within the project. Flora Capture enables users to take multi-image observations offline; these are identified batch-wise upon synchronization with our server and allow users to build up a digital plant collection.

Observation service After synchronization, observations collected with Flora Capture are handed over to an observation service. An observation consists of several images from predefined perspectives as well as provenance data (device, author, date, etc.). Each observation is optionally associated with a geolocation and a species name. Each observation not identified by the users themselves will instantly be analyzed by an automated identification service, providing initial feedback to the user upon synchronization. This feedback is conservative, meaning that a result is only reported if the classifier is very confident in its decision. In addition to this initial feedback, each uploaded observation will be reviewed by botanical experts using our Flora Expert app. Each confirmed observation can then be used for further analysis. Currently, we use them as training images for the Flora Incognita identification service [24].
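As a minimal sketch of such an observation record and the conservative feedback rule, consider the following; all field names and the confidence threshold are assumptions for illustration, not the project's actual data model.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class Observation:
    images: dict            # perspective name -> image file path
    device: str             # provenance: recording device
    author: str             # provenance: user ID
    recorded_at: datetime   # date and time of the observation
    location: Optional[tuple] = None  # optional (latitude, longitude)
    species: Optional[str] = None     # user-provided species name, if any

# Assumed value; the paper only states "very confident".
CONFIDENCE_THRESHOLD = 0.95

def initial_feedback(predictions: list) -> Optional[str]:
    """Report a species suggestion only if the classifier is very confident.

    predictions: list of (species_name, score) tuples from the classifier.
    """
    best_species, best_score = max(predictions, key=lambda p: p[1])
    return best_species if best_score >= CONFIDENCE_THRESHOLD else None
```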

Flora Expert app Flora Capture relies on image reviews and eventual species identifications that are solely conducted by botanical experts associated with the project. We are continuously expanding this group of expert reviewers but plan to only involve specialists. We argue that involving an open community may introduce too much noise and potentially wrong labels into the acquired data. For the purpose of reviewing incoming Flora Capture observations, we have developed the Flora Expert app, available as a website and on mobile devices. Using the app, our experts can confirm, relabel, postpone and reject observations on a per-image level (cp. Fig. 3). Observations that cannot be identified by our experts, either because an exact species cannot be unambiguously determined or because the observation's quality is not suitable, will be rejected (cp. Fig. 3b). We performed a cross-evaluation in which each of our experts re-reviewed 50 randomly drawn observations without knowing the result of a previous review conducted by another expert. These independent results matched in 96% of the evaluated observations. The differences were mainly due to the fact that different experts have different levels of experience with particular plant groups. In practice, critical observations are discussed among experts to reach a consensus decision.

In order to engage and teach our Flora Capture users, feedback on their observations, including the final species label, is transferred back to their device. The My Observations list within the app shows all observations collected by a user, and a color-coded symbol indicates their review status, i.e., not-synced, under review, accepted, or rejected. In addition, the user receives further information about the species, e.g., protection status, characteristics, and distribution maps. If our experts reject an observation, the user who contributed it will see detailed information on why her or his observation did not meet the project's general acceptance criteria and, if required, an individual message (cp. Fig. 3b).
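The review actions and status values named above could be modeled as follows. The action and status names mirror the text; the rule deriving an observation's overall status from its per-image reviews is our assumption.

```python
from enum import Enum

class ReviewAction(Enum):
    CONFIRM = "confirm"
    RELABEL = "relabel"
    POSTPONE = "postpone"
    REJECT = "reject"

class ObservationStatus(Enum):
    NOT_SYNCED = "not-synced"
    UNDER_REVIEW = "under review"
    ACCEPTED = "accepted"
    REJECTED = "rejected"

def aggregate_status(image_reviews: list) -> ObservationStatus:
    """Derive the status shown in a user's My Observations list."""
    if any(a is ReviewAction.REJECT for a in image_reviews):
        return ObservationStatus.REJECTED
    if image_reviews and all(
        a in (ReviewAction.CONFIRM, ReviewAction.RELABEL) for a in image_reviews
    ):
        return ObservationStatus.ACCEPTED
    return ObservationStatus.UNDER_REVIEW  # postponed images keep it pending
```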

Fig. 3

a Main Flora Expert user interface showing a Daucus carota observation. b Experts rejecting incorrect observations can select from a list of predefined problems and can also add individual feedback to guide the user

How Flora Capture works

After downloading the app and registering, users can create new observations, sync existing observations and view all previously taken observations (cp. Fig. 4a). When creating a new observation, a user is guided step-by-step in a questionnaire-like manner suitable for laymen as well as for experts. If a user is able to identify the plant to be observed or has a hypothesis about its species, she or he may provide it; if not, she or he may choose to only enter its growth form (cp. Fig. 4b, Table 1). Depending on the growth form, either directly entered by the user or derived from the selected species, we ask the user to take images of the plant from the most suitable perspectives. In order to reduce users' cognitive load when using the app, we group questions logically and continuously visualize the overall progress. Figure 4 shows an example workflow for the herb or shrub growth form. Each growth form is associated with a number of mandatory perspectives and may additionally offer optional perspectives that a user may or may not acquire (cp. Table 1). For example, an additional second leaf could be recorded if a plant has differently shaped basal and stem leaves. An optional perspective called characteristic feature allows users to capture specifics of a plant that she or he deems particularly relevant for identification of the species and that have not been captured sufficiently in any of the other perspectives (cp. Fig. 4d). After taking the mandatory and potentially further optional images, we display a short questionnaire asking for a rough estimate of the number of individuals of the same species on the site, the flowering state of the observed plant, and a classification of the habitat (cp. Fig. 4e). Additionally, users may take individual notes. Filling in this questionnaire completes an observation. A complete observation consists of all images, the information acquired via the questionnaire, and metadata such as time, date, and geolocation of the observation. With this final step, the user lands back on the main screen of the app, showing the number of pending observations to be synchronized to the server (cp. Fig. 4f). Decoupling recording and synchronization of observations enables users to take observations at remote locations without network coverage and to synchronize them once they are back home with a stable internet connection.
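A growth-form-dependent recording scheme like the one described above might be represented as a simple mapping; the perspective lists below are illustrative placeholders, not the exact scheme of Table 1.

```python
from typing import Optional

# Illustrative perspective lists; see Table 1 for the actual scheme.
MANDATORY = {
    "herb or shrub": ["whole plant", "flower front", "flower lateral", "leaf top"],
    "tree": ["whole plant", "leaf top", "bark"],
}
OPTIONAL = {
    "herb or shrub": ["second leaf", "characteristic feature"],
    "tree": ["flower", "fruit", "characteristic feature"],
}

def next_perspective(growth_form: str, captured: list) -> Optional[str]:
    """Return the next mandatory perspective still to be captured, if any."""
    remaining = [p for p in MANDATORY[growth_form] if p not in captured]
    return remaining[0] if remaining else None

# Example: after the whole-plant shot, the app would ask for the flower front.
print(next_perspective("herb or shrub", ["whole plant"]))  # -> "flower front"
```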

Users can delete their observations before transferring them to the server. In this case, the observation and all its content will be deleted entirely and never synced. Once transferred to the server, users can still remove observations from their profile. Such observations will no longer be shown on a user's devices, will not be reviewed, and will not be used for further analyses. Since every observation is expert-reviewed before further usage, we strictly remove any image that shows humans or other non-plant objects, except hands holding a plant. Users find a consistent privacy statement across the app, the app stores, and the project's webpage, explaining which information is collected by the app and how it may be used. We explain that the collected data is used for research purposes, that we share observation data with conservation authorities, and that collected observation data may be released in anonymized form as datasets at a later point in time.

Fig. 4

Taking an observation with Flora Capture illustrated for an unknown species with growth form “Herb or shrub”

Table 1 Observation recording scheme for different plant growth forms

Using Flora Capture to acquire plant observations

There are multiple ways in which Flora Capture can be used to acquire plant observations for a new research purpose. First, users can export all of their observations, or filtered subsets thereof, from all their devices as a comma-separated values (CSV) file containing all metadata (i.e., species ID, date, location), and can also export the associated images. Second, groups of users may share one Flora Capture account in order to independently collect plant observations with their mobile devices, have them reviewed by our experts, share them among one another and eventually export the observation data as described before. Third, we are open to scientific collaborations, both by setting up new observation projects and on the basis of already collected observation data. For example, we recently established a collaboration with an agricultural school in Austria where students create digital plant collections, replacing the traditional task of excavating and collecting plant specimens in order to prepare herbarium sheets. In 2020, the most motivated students collected up to 200 plant observations, far more than required to fulfill the school's task. Another example is a recent study that used Flora Capture observations to explore the information content of image perspectives for automated plant identification [7]. This experiment followed a strict protocol asking study participants to collect observations of 100 species, explicitly including species that are easily confused, i.e., many congeneric species and 12 Poaceae. In total, 10,000 observations were collected. This study found, e.g., that a combination of the front and lateral perspectives of flowers and the top perspective of leaves allows accuracies greater than 95%.
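As a brief sketch of working with such an export, the following counts observations per species from the exported CSV file; the column name species_id is an assumed placeholder for whatever the actual export uses.

```python
import csv
from collections import Counter

def species_counts(csv_path: str) -> Counter:
    """Count exported Flora Capture observations per species."""
    counts = Counter()
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            counts[row["species_id"]] += 1  # assumed column name
    return counts

# Example: the 20 most frequently observed species in an export.
# print(species_counts("flora_capture_export.csv").most_common(20))
```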

Flora Capture additionally offers a blog called Stories in which we provide general tips for improving the quality of observations but also guide citizens in collecting specific species. In a story series called "Species of the month", we present species for which more information about their distribution and more images depicting their trait expressions are desired. Indeed, some of the advertised "Species of the month" now rank among the most observed species (e.g., Anemone nemorosa, Ficaria verna, Alliaria petiolata, Glechoma hederacea and others; Fig. 5d), demonstrating that the blog allows us to communicate current research goals to citizens. We consider providing a clear focus important for keeping users involved in the research process. Stories are an ideal measure to communicate new research objectives and to encourage existing app users to collect observations supporting this research.

Fig. 5

a Flora Capture observations contributed by citizen scientists (colored line) and by all users including project members and partners (grey line) per month since January 2017. b Number of valid observations collected per user ID. Blue points belong to IDs associated with project members. c Number of observations per species collected so far by citizen scientists. d Top 20 most frequently observed species by Flora Capture citizen scientist users

Results

Since the release of Flora Capture in October 2016, we have collected more than 40,000 accepted plant observations. As of September 2020, these comprise more than 200,000 images and cover more than 1890 plant species (cp. Fig. 5). The number of external contributions by citizen scientists amounts to more than 5000 observations per year (2017: 1158; 2018: 5919; 2019: 5542; 2020 (as of September): 10,640). More than 3000 distinct user IDs have contributed at least one valid plant observation so far, and about 500 users uploaded more than 5 observations (cp. Fig. 5b). The most active citizen scientists have contributed hundreds (in one case even more than 2000) of high-quality observations since the app's release. The distribution of observed plants is highly skewed (Fig. 5c), with the most frequently observed species being common, broadly distributed and conspicuous. The majority of observed species represent ruderal and nitrophilic species often found on roadsides and farmland (Fig. 5d). A majority of observations originate from within Germany, where the app is being developed. Here, members and partners of the Flora Incognita project are active in promoting the app and getting in touch with potentially interested citizen scientists. However, an increasing number of observations are also taken in other parts of Europe and across the world. Flora Capture is currently available in eleven different languages and the number of observed species is steadily increasing.

Discussion

Active facilitation of research projects involving citizen scientists can ultimately inspire individual behavior and encourage public action with respect to conservation efforts [25]. The Flora Capture app provides a convenient tool for users to raise their awareness of plant diversity in their surroundings and to build up knowledge about plants. Users who take images of a plant from different perspectives, perhaps touching the leaves to arrange them appropriately or looking for specific characteristics, inevitably sharpen their eye for the details characterizing the plant of interest. At the same time, Flora Capture also helps to increase the quality of automated species identification by creating a high-quality image dataset [7]. Training images for CNNs are required to be highly variable as well as informative in order to provide reliable results when distinguishing very similar species. Images generated by thousands of users inherently increase the variability of the data through the diversity of the contributing users [26]. The diversity and quality of the contributed images will presumably match the quality and diversity of the images that are to be identified. At the same time, the structured approach of acquiring predefined perspectives, combined with an expert review, ensures high reliability and information-richness in the acquired image data [27]. The metadata collected with each observation additionally provides valuable information supporting the identification process with new aspects to consider (e.g., location, time of the year). In conclusion, the proposed multi-modal approach for recording plant observations allows creating a verifiable plant observation database and serves as an important source of information on its own [28, 29].

Future directions

Currently, Flora Capture supports 4800 species mainly distributed in Central Europe. Future versions of the app will successively cover a larger pool of species and will potentially require a revision of the available growth forms. Furthermore, the observation protocol will optionally be extended with a standardized mapping protocol, allowing plant mappers to provide even more detailed descriptions of their observations and allowing the use of Flora Capture in vegetation mapping campaigns. In fact, structured image databases for other kinds of organisms can be developed in a similar way. Physical species records in biological collections could be supplemented with in-situ observation data [30]. Current pilot projects in schools and universities where students are using Flora Capture on botanical excursions show encouraging results. Students learned to take close looks at plants while taking pictures for their digital observations. In subsequent presentations and discussions with classmates, it became clear that fusing botany and digital tools is a reliable approach to spark interest in plant taxonomy. Last but not least, various passionate users have reported that spending time in nature and collecting plant observations using Flora Capture is beneficial for their health and inspires learning. Identifying plants encourages users to collect even more observations. In fact, smartphone applications offer great potential to engage users via gamification [31, 32]. Providing convincing, appealing and well-implemented applications involving plants instead of Pokémon in nature might even inspire previously uninvolved citizen scientists to participate.

Availability and Requirements

  • Project name: Flora Capture

  • Project home page: https://www.floraincognita.com/flora-capture-app/

  • Operating systems: Android 4.4W or higher, iOS 10 or higher

  • Programming languages: TypeScript, Swift, Kotlin, SQL, PHP, Python

  • License: https://www.apple.com/legal/internet-services/itunes/dev/stdeula/

  • Any restrictions to use by non-academics: no

Availability of data and materials

App usage data analyzed within this paper are available from the corresponding author on reasonable request.

Abbreviations

CNN: Convolutional neural network

CSV: Comma-separated values

References

  1. Wäldchen J, Mäder P. Machine learning for image based species identification. Methods Ecol Evol. 2018;9(11):2216–25.


  2. Weinstein BG. A computer vision for animal ecology. J Anim Ecol. 2018;87(3):533–45. https://doi.org/10.1111/1365-2656.12780.


  3. Valan M, Makonyi K, Maki A, Vondráček D, Ronquist F. Automated taxonomic identification of insects with expert-level accuracy using effective feature transfer from convolutional networks. Syst Biol. 2019;68(6):876–95.


  4. Qin H, Li X, Liang J, Peng Y, Zhang C. Deepfish: accurate underwater live fish recognition with a deep architecture. Neurocomputing. 2016;187:49–58.


  5. Dunker S, Boho D, Wäldchen J, Mäder P. Combining high-throughput imaging flow cytometry and deep learning for efficient species and life-cycle stage identification of phytoplankton. BMC Ecol. 2018;18(1):51.


  6. Norouzzadeh MS, Nguyen A, Kosmala M, Swanson A, Palmer MS, Packer C, Clune J. Automatically identifying, counting, and describing wild animals in camera-trap images with deep learning. Proc Natl Acad Sci. 2018;115(25):5716–25. https://doi.org/10.1073/pnas.1719367115.


  7. Rzanny M, Mäder P, Deggelmann A, Chen M, Wäldchen J. Flowers, leaves or both? How to obtain suitable images for automated plant identification. Plant Methods. 2019;15(1):77. https://doi.org/10.1186/s13007-019-0462-4.


  8. Russakovsky O, Deng J, Su H, Krause J, Satheesh S, Ma S, Huang Z, Karpathy A, Khosla A, Bernstein M, Berg AC, Fei-Fei L. ImageNet large scale visual recognition challenge. Int J Comput Vis (IJCV). 2015;115(3):211–52. https://doi.org/10.1007/s11263-015-0816-y.


  9. Krizhevsky A, Sutskever I, Hinton GE. Imagenet classification with deep convolutional neural networks. Commun ACM. 2012;60:84–90.


  10. Christin S, Hervet E, Lecomte N. Applications for deep learning in ecology. Methods Ecol Evol. 2019;10(10):1632–44.


  11. Goëau H, Joly A, Bonnet P, Lasseck M, Šulc M, Hang ST. Deep learning for plant identification: how the web can compete with human experts. Biodivers Inf Sci Standards. 2018;2:25637. https://doi.org/10.3897/biss.2.25637.


  12. Seeland M, Rzanny M, Boho D, Wäldchen J, Mäder P. Image-based classification of plant genus and family for trained and untrained plant species. BMC Bioinform. 2019;20(1):4.


  13. Willis K. State of the world's plants 2017. Report. Royal Botanic Gardens, Kew; 2017.

  14. Rawat W, Wang Z. Deep convolutional neural networks for image classification: a comprehensive review. Neural Comput. 2017;29(9):2352–449 PMID: 28599112.


  15. Joly A, Bonnet P, Goëau H, Barbe J, Selmi S, Champ J, Dufour-Kowalski S, Affouard A, Carré J, Molino J-F, et al. A look inside the pl@ntnet experience. Multimed Syst. 2016;22(6):751–66.


  16. Lee SH, Chan CS, Remagnino P. Multi-organ plant classification based on convolutional and recurrent neural networks. IEEE Trans Image Process. 2018;27(9):4287–301. https://doi.org/10.1109/TIP.2018.2836321.


  17. He A, Tian X. Multi-organ plant identification with multi-column deep convolutional neural networks. In: 2016 IEEE international conference on systems, man, and cybernetics (SMC), 2016;002020–002025.

  18. GBIF (2020). http://www.gbif.org. Accessed on 09 Jan 2020

  19. iNaturalist (2020). http://www.inaturalist.org. Accessed on 09 Jan 2020

  20. Joly A, Goëau H, Bonnet P, Bakić V, Barbe J, Selmi S, Yahiaoui I, Carré J, Mouysset E, Molino J-F, Boujemaa N, Barthélémy D. Interactive plant identification based on social image data. Ecol Inform. 2014;23:22–34. https://doi.org/10.1016/j.ecoinf.2013.07.006 Special Issue on Multimedia in Ecology and Environment.


  21. Kosmala M, Wiggins A, Swanson A, Simmons B. Assessing data quality in citizen science. Front Ecol Environ. 2016;14(10):551–60. https://doi.org/10.1002/fee.1436.


  22. De Moor T, Rijpma A, Prats López M. Dynamics of engagement in citizen science: results from the “yes, i do!” project. Citiz Sci Theory Pract. 2019;4(1):1–17.


  23. Rzanny M, Seeland M, Wäldchen J, Mäder P. Acquiring and preprocessing leaf images for automated plant identification: understanding the tradeoff between effort and information gain. Plant Methods. 2017;13(1):1–11.


  24. Mäder P, Boho D, Rzanny M, Wittich HC, Seeland M, Deggelmann A, Wäldchen J. Flora incognita—automated species identification enables effective species monitoring. Submitted.

  25. McKinley DC, Miller-Rushing AJ, Ballard HL, Bonney R, Brown H, Cook-Patton SC, Evans DM, French RA, Parrish JK, Phillips TB, Ryan SF, Shanley LA, Shirk JL, Stepenuck KF, Weltzin JF, Wiggins A, Boyle OD, Briggs RD, Chapin SF, Hewitt DA, Preuss PW, Soukup MA. Citizen science can improve conservation science, natural resource management, and environmental protection. Biol Conserv. 2017;208:15–28. https://doi.org/10.1016/j.biocon.2016.05.015 The role of citizen science in biological conservation.


  26. Pocock MJO, Tweddle JC, Savage J, Robinson LD, Roy HE. The diversity and evolution of ecological and environmental citizen science. PLoS ONE. 2017;12(4):1–17. https://doi.org/10.1371/journal.pone.0172579.


  27. Kelling S, Johnston A, Bonn A, Fink D, Ruiz-Gutierrez V, Bonney R, Fernandez M, Hochachka WM, Julliard R, Kraemer R, et al. Using semistructured surveys to improve citizen science data for monitoring biodiversity. BioScience. 2019;69(3):170–9.


  28. Terry JCD, Roy HE, August TA. Thinking like a naturalist: enhancing computer vision of citizen science images by harnessing contextual data. Methods Ecol Evol. 2019;11:303–15.


  29. Wittich HC, Seeland M, Wäldchen J, Rzanny M, Mäder P. Recommending plant taxa for supporting on-site species identification. BMC Bioinform. 2018;19(1):190.


  30. Heberling JM, Isaac BL. Inaturalist as a tool to expand the research value of museum specimens. Appl Plant Sci. 2018;6(11):e01193.


  31. Balmford A, Clegg L, Coulson T, Taylor J. Why conservationists should heed pokémon. Science. 2002;295(5564):2367.


  32. Dorward LJ, Mittermeier JC, Sandbrook C, Spooner F. Pokémon go: benefits, costs, and lessons for the conservation movement. Conserv Lett. 2017;10(1):160–5.



Acknowledgements

We thank all users of the Flora Capture app, in particular Martina Hartel. We thank all colleagues and student helpers involved in the project, in particular Christian Engelhardt, Karl Amende, Benedict Stephan, and Anke Bebber. Images in Fig. 2 were provided by OpenClipart-Vectors on Pixabay under the CC0 Creative Commons license.

Funding

Open Access funding enabled and organized by Projekt DEAL. We are funded by the German Federal Ministry for the Environment, Nature Conservation, Building and Nuclear Safety (BMUB), Grants 3514685C19, 3519685A08 and 3519685B08; the German Ministry of Education and Research (BMBF), Grants 01LC1319 and 01IS20062; the Stiftung Naturschutz Thüringen (SNT), Grant SNT-082-248-03/2014; and the Thuringian Ministry for Environment, Energy and Nature Conservation, Grant 68678. The funders had no role in the implementation of the app, the design of the study, the analysis and interpretation of data, or the writing of the manuscript.

Author information

Authors and Affiliations

Authors

Contributions

Conceptual design: DB, MR, PM, JW; programming: DB, FN, HW, MS; data analysis: MR, AD, DB, JW, MS; data visualization: MR, JW, DB; writing manuscript: MR, DB, JW, PM; funding acquisition: PM, JW. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Jana Wäldchen.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.



Cite this article

Boho, D., Rzanny, M., Wäldchen, J. et al. Flora Capture: a citizen science application for collecting structured plant observations. BMC Bioinformatics 21, 576 (2020). https://doi.org/10.1186/s12859-020-03920-9
