Focus and Scope

Biodiversity Data Journal (BDJ) is a community peer-reviewed, open-access, comprehensive online platform, designed to accelerate publishing, dissemination and sharing of biodiversity-related data of any kind. All structural elements of the articles – text, morphological descriptions, occurrences, data tables, etc. – will be treated and stored as DATA, in accordance with the Data Publishing Policies and Guidelines of Pensoft Publishers.

The journal will publish papers in biodiversity science containing taxonomic, floristic/faunistic, morphological, genomic, phylogenetic, ecological or environmental data on any taxon of any geological age from any part of the world with no lower or upper limit to manuscript size. For example:

  • single taxon treatments and nomenclatural acts (e.g., new taxa, new taxon names, new synonyms, changes in taxonomic status, re-descriptions, etc.);
  • data papers describing biodiversity-related databases, including ecological and environmental data;
  • sampling reports, local observations or occasional inventories, if these contain novel data;
  • local or regional checklists and inventories;
  • habitat-based checklists and inventories;
  • ecological and biological observations of species and communities;
  • identification keys of any kind, from conventional dichotomous keys to multi-access interactive online keys;
  • descriptions of biodiversity-related software tools.

For more information, you may look at the Editorial Beyond dead trees: integrating the scientific process in the Biodiversity Data Journal and press release The Biodiversity Data Journal: Readable by humans and machines.

ISSN 1314-2828 (online)

Archived in PubMedCentral and CLOCKSS.
Indexed by DOAJ and Google Scholar.

Globally Unique Innovations

The Biodiversity Data Journal (BDJ) and the associated Pensoft Writing Tool (PWT), launched within the FP7 project ViBRANT, introduced several globally unique innovations:

  1. The first workflow ever to support the full life cycle of a manuscript, from writing through submission, community peer-review, publication and dissemination, within a single online collaborative platform.
  2. The online, collaborative, article-authoring platform Pensoft Writing Tool (PWT) provides a large set of pre-defined, but flexible, Biological Codes and Darwin Core compliant, article templates.
  3. Authors may work collaboratively on a manuscript and invite external contributors, such as mentors, potential reviewers, linguistic and copy editors, or colleagues, who may watch and comment on the text before submission. These comments can be submitted along with the manuscript for the editor's consideration.
  4. Import/export conversion between text and structured data, and vice versa, such as checklists, catalogues and occurrence data in Darwin Core format, at the click of a button.
  5. Automated import of data-structured manuscripts generated in various platforms (Scratchpads, GBIF Integrated Publishing Toolkit (IPT), authors’ databases).
  6. A novel community-based pre-publication peer-review and possibilities to comment after publication (post-publication peer-review). Authors may also opt for an entirely public peer-review process. Reviewers may opt to be anonymous or to disclose their names.


Criteria for Publication

  • Originality: Papers and associated data should be sufficiently novel and contribute to a better understanding of the topic under scrutiny. Please consider the examples below to judge if your manuscript might be suitable for BDJ:
    • Example 1: Single species occurrence records (e.g., new country or province records) ARE NOT encouraged for submission in BDJ and will most probably be rejected, unless they contain detailed studies and new information on other aspects of species' morphology, genomics, biology, ecology,  distribution, etc. (see also Examples 2 and 3 below).
    • Example 2: A single species observation must be SIGNIFICANT, either because the species is important in some regard (e.g., medically, or by being an invasive species, or by being endangered, or by being important in biocontrol or biosecurity, etc.), or because it expands the species range considerably and represents an unexpected discovery of biogeographical or other significance. An additional argument for acceptability of a manuscript could be the presence of images and multimedia (accompanied by any other new ecological/ethological data), if a species has never been illustrated, filmed or sound-recorded. Single species observations are considered novel if they bring new information based on the author(s)' personal investigations and do not just repeat or compile already published information.
    • Example 3: Multiple species occurrence records are welcome; however, occurrence data are considered novel only if they significantly extend known ranges (geographical, temporal or habitat type), list new country/province records for several species, concern taxa of high natural or social importance, or feature taxa that are data-poor. Occurrence data will NOT be considered novel and suitable for publication if they only list new localities of a well-known and common (data-rich) taxon within a well-studied region.
    • Example 4: A local checklist is considered novel, if it includes new data from a locality; a local checklist is NOT considered novel if it is mostly confirmatory and repetitive and lists common species from a locality in a well-studied region.
  • Data are published: All data underpinning an article, including data tables on which graphs are produced, must be published alongside the paper, e.g. as supplementary files, or links to external repositories where data are deposited, and contain sufficient metadata to facilitate data discovery.
  • Structure: Manuscripts should be concisely written, in a good academic style, and follow a logical sequence. Results should be clearly and concisely described and supported by the data published with the article, or data published elsewhere but linked to the article.

  • Previous research: Previously published information should be considered and cited in compliance with good academic practice. References should be complete and accurate, where possible including DOIs or links to the article.

Peer Review

Text and data submitted to the Biodiversity Data Journal will be formally peer-reviewed and evaluated for technical soundness and the correct presentation of appropriate and sufficient metadata. All manuscripts undergo a pre-submission technical evaluation in the Pensoft Writing Tool (PWT) environment. The scientific quality and importance of the paper and data will be further judged by the scientific community, through a novel community-based pre-publication and post-publication peer-review.

Reviewers may opt to be anonymous or to disclose their names. The deadlines for the peer-review and editorial processes are strict and limited to a maximum of two months after submission.

The peer-review process and deadlines described below assume that contributions are technically well prepared and concisely written, so that peer-review is easy, straightforward and does not require much time from the reviewer.

What is "community peer-review"?

It is evident that the peer-review system is increasingly under strain. Our response to this situation is to decrease the load on each individual reviewer without in any way compromising the quality of the final product. The purpose of community peer-review is to distribute effort, increase transparency, engage the broader community of experts, and enhance the quality of the science we publish.

Stepwise description of the peer-review and editorial process

1. Upon submission, the manuscript is assigned to the Subject Editor responsible for the topic by the in-house Assistant Editor. The Subject Editor is alerted by email.

2. The Subject Editor reads the manuscript and decides if it complies with the journal's scope and should be processed for peer-review.

3. The Subject Editor sends review requests to two or three "nominated" reviewers and several other "panel" reviewers. 

Note-1: How do editors invite reviewers? The journal's database provides a list of potential reviewers, and if necessary the editor can add further names to the list. Review requests are emailed via a ‘single-click’ option.

Note-2: "Nominated" and "Panel" reviewers. The difference between "Nominated" and "Panel" reviewers is that "Nominated" reviewers are expected to provide a formal review by the deadline; "Panel" reviewers are invited but not required to evaluate the manuscript. Both "Nominated" and "Panel" reviewers can propose changes and corrections, and make comments in the manuscript online and submit a concise reviewer's form.

Note-3: "Community" and "public" peer-review. "Community" peer-review means that during the peer-review process the manuscript is visible only to the editor, the reviewers and the authors. We plan soon to introduce an entirely public review process, in which authors may opt to make their manuscript available for comment by all registered journal users. Reviewers may opt to stay anonymous or disclose their names in either case.

4. The Subject Editor receives a notification email when a nominated reviewer agrees or declines to review the manuscript. In the latter case the editor can appoint alternative reviewers.

5. Reviews are expected within 10 days; this deadline can be extended on request. The Subject Editor will then decide to accept, reject, or request revision of the manuscript.

Note-4: Provision of reviews. Reviewers will be prompted by an automated email notification sent one day before the deadline. In case of delay, the review request can be cancelled automatically.

6. The authors must provide a revised version of their manuscript within one week, but can ask for an extension if there is a demonstrable need.

7. After submission of the revised version, the Subject Editor compares it against the reviews through an easy-to-use online tool and decides to accept or reject the manuscript. The authors may be asked to make additional revisions, OR in case of substantial changes, the reviewing procedure will be started again.

8. The manuscript will be formatted, proof-read, copy-edited and published within two weeks after acceptance.

Guidelines for reviewers and editors

Peer-reviewers and editors of the Biodiversity Data Journal are expected to evaluate the completeness and quality of the manuscript text, related dataset(s) and their description (metadata), as well as the publication value of data. This may include the appropriateness and validity of the methods used, compliance with applicable standards during collection, management and curation of data, and compliance with appropriate metadata standards in the description of the data resources.

The following aspects of evaluation will be considered:

  • Quality of the manuscript
    • Is the study sufficiently novel, and does it contribute to a better understanding of the topic, or is the work rather confirmatory and repetitive?
    • Do the title, abstract and keywords accurately reflect the contents and data?
    • Is the manuscript consistent, suitably organised and written in grammatically correct English?
    • Are the relevant non-textual data and media (data sets, audio and video files) also available as supplementary files to the manuscript or as links to external repositories?
    • Have abbreviations and symbols been properly defined?
    • Does the manuscript put the data resource being described properly into the context of prior research, citing pertinent articles and datasets?
    • Are conflicts of interest, relevant permissions and other ethical issues addressed in an appropriate manner?
  • Quality of the data
    • Are the data completely and consistently recorded within the dataset(s)?
    • Does the data resource cover scientifically important and sufficiently large region(s), time period(s) and/or group(s) of taxa to be worthy of publication?
    • Are the data consistent internally and described using applicable standards (e.g. in terms of file formats, file names, file size, units and metadata)?
    • Are the methods used to process and analyse the raw data, thereby creating processed data or analytical results, sufficiently well documented that they could be repeated by third parties?
    • Are the data correct, given the protocols? Authors are encouraged to report any tests undertaken to address this point.
    • Is the repository to which the data are submitted appropriate for the nature of the data?
  • Consistency between manuscript and data
    • Does the manuscript provide an accurate description of the data?
    • Does the manuscript properly describe how to access the data?
    • Are the methods used to generate the data (including calibration, code and suitable controls) described in sufficient detail?
    • Is the dataset sufficiently novel to merit publication?
    • Have possible sources of error been appropriately addressed in the protocols and/or the paper?
    • Is anything missing in the manuscript or the data resource itself that would prevent replication of the measurements, or reproduction of the figures or other representations of the data?
    • Are all claims made in the manuscript substantiated by the underlying data?

Pensoft journals support the open science approach in the peer-review and publication process. We encourage our reviewers to disclose their identity to the authors and to consider supporting peer-review oaths: short declarations that reviewers make at the start of their written comments, typically setting out the terms by which they will conduct their reviews (see Aleksic et al. 2015, doi: 10.12688/f1000research.5686.2 for more details):

Principles of the open peer-review oath

  • Principle 1: I will sign my name to my review
  • Principle 2: I will review with integrity
  • Principle 3: I will treat the review as a discourse with you; in particular, I will provide constructive criticism
  • Principle 4: I will be an ambassador for the practice of open science

For Authors

There are NO author guidelines in BDJ with regard to text formatting. The ARPHA Writing Tool (AWT) will guide you through the authoring and submission process. There are only two simple rules to follow, so please read carefully the half page of text below before you start your manuscript!

The submission process in BDJ starts with writing a manuscript in the ARPHA Writing Tool (AWT), using predefined but flexible article templates selected on the AWT homepage (after clicking the "Write new manuscript" button). The article template cannot be changed once the writing process has started, so please consider the following:

1. How can I decide which article type to choose if I want to publish:

  • Free text paper (e.g., editorial, correspondence, opinion paper, etc.) -> Select Editorial / Correspondence.
  • Taxonomic or nomenclatural acts (taxon treatments) -> Select Taxonomic paper, then open a treatment in the Taxon treatments section and define its type (either New taxon or Re-description). This article type should contain at least EITHER one taxon treatment, OR checklist OR identification key, otherwise it cannot be submitted.
  • Systematic list of taxa with notes -> Select Taxonomic paper, then open a Checklist. This article type should contain at least EITHER one taxon treatment, OR checklist OR identification key, otherwise it cannot be submitted. The checklist itself is not a treatment.
  • Species observations (new records, biology, ecology, conservation, etc.) -> Select Taxonomic paper, then open a treatment in the Taxon treatments section of the paper and define its type (Description, Re-description or Species observation). This article type should contain at least EITHER one taxon treatment, OR checklist OR identification key, otherwise it cannot be submitted.
  • Dichotomous identification keys -> Select Taxonomic paper, then open Identification key(s). This article type should contain at least EITHER one taxon treatment, OR checklist OR identification key, otherwise it cannot be submitted. The identification key itself is not a treatment.
  • Online interactive identification keys -> Select Interactive key, then follow the format. This article type should contain a link to the described online key, which should be available in open access.
  • Data paper (description of large data sets) -> Select Data paper, then follow the format. This article type should contain a link to the described data, which should be available in open access. Alternatively, data sets can be uploaded and published as supplementary files. See our Data publication guidelines.
  • Software and online platforms -> Select Software description, then follow the format. This article type should contain a link to the described software or platform, which should be available in open access.

2. How can I cite references, figures and tables?

  • Do not write in-text citations of references, figures or tables manually! Citations are inserted automatically at the cursor position through the Cite a figure, Cite a table or Cite a reference commands: place the cursor where you want the citation, then click the desired reference, table or figure in the respective list.
  • Before citing a reference, figure or table, you have to upload these, so that they become visible in the respective list of figures, tables or references.
  • Do not number captions of figures or tables – they will be numbered automatically and can be re-ordered, if needed.
  • All uploaded figures, tables and references must be cited in the text and vice versa.
  • Authors are encouraged to cite in the References list the publications of the original descriptions of the taxa treated in their manuscript.

3. Materials and Methods

In line with responsible and reproducible research, as well as FAIR data principles, we highly recommend that authors describe in detail and deposit their science methods and laboratory protocols in the open access repository protocols.io.

Once deposited on protocols.io, protocols and methods are issued a unique digital object identifier (DOI), which can then be used to link a manuscript to the relevant deposited protocol. By doing this, authors allow editors and peers to access the protocol when reviewing the submission, which can significantly expedite the process.

Furthermore, an author could open up his/her protocol to the public at the click of a button as soon as their article is published.

Stepwise instructions:

1. Prepare a detailed protocol via protocols.io.

2. Click Get DOI to assign a persistent identifier to your protocol.

3. Add the DOI link to the Methods section of your manuscript prior to submitting it for peer review.

4. Click Publish to make your protocol openly accessible as soon as your article is published (optional).

5. Update your protocols anytime.

Data Publishing Guidelines

By submitting to this journal, authors agree to make the data that underpin or are described in their articles publicly available. Authors must include a separate "Data resources" section in their articles, listing datasets and where they are deposited (including accession numbers, DOIs or other persistent URL identifiers).

Please remember that publication of data associated with your article in machine-readable form (databases, data sets, data tables) in this journal is mandatory!

We provide various modes of data publishing:

  • Import of data files in the text (e.g., Darwin Core occurrence data, checklists, data tables, literature references).
  • Supplementary data files (up to 20 MB each) that support graphs, hypotheses, results, etc. published with the article.
  • Deposition of large data sets in established international repositories (e.g. GBIF IPT, Dryad, NCBI GenBank, Pangaea, TreeBASE, Morphbank, and others).
  • Marked up text published as XML to ensure machine harvesting.

For biodiversity and biodiversity-related data the reader may consult the Strategies and guidelines for scholarly publishing of biodiversity data (Penev et al. 2017, Research Ideas and Outcomes 3: e12431. https://doi.org/10.3897/rio.3.e12431). For the reader's convenience, we list here the hyperlinked table of contents of these extensive guidelines:

A best practice rule is that datasets and software should be deposited and permanently archived in appropriate, trusted, general or domain-specific repositories (please consult Data Deposition in Open Repositories, or BioSharing, and/or software repositories such as GitHub, GitLab, Bioinformatics.org, or equivalent). The associated persistent identifiers (e.g. DOI, or others) of the dataset(s) must be included in the data or software resources section of the article. Reference(s) to datasets and software should also be included in the reference list of the article with DOIs (where available). Where no domain-specific data repository exists, authors should deposit their datasets in a general repository such as Zenodo, Dryad, Dataverse, or others.

  • Large primary biodiversity data sets (e.g., institutional collections of species-occurrence records) should be published with the GBIF Integrated Publishing Toolkit (IPT); small data sets of this kind can be imported into the article text through an Excel template, available in the ARPHA Writing Tool, or via direct online import from GBIF, BOLD, iDigBio and PlutoF (see Import of Darwin Core Specimen Records into Manuscripts).

  • Gene sequence and genomic data should be deposited with INSDC (GenBank/EMBL/DDBJ), either directly or via a partnering repository, e.g. Barcode of Life Data Systems (BOLD). Transcriptomics data should be deposited in Gene Expression Omnibus (GEO) or ArrayExpress (see sections Gene Sequence and Genomics for detail).

  • Phylogenetic data should be deposited at TreeBASE.

  • Biodiversity-related geoscience and environmental data should be deposited in PANGAEA.

  • Morphological images other than those presented in the article should be deposited at Morphbank. Images of a specific kind should be deposited in appropriate repositories if these exist (e.g., Morphosource for MicroCT data).

  • Videos should be uploaded to video sharing sites like YouTube, Vimeo or SciVee and linked back to the article text. Similarly, audio files should go to platforms like FreeSound or SoundCloud, and presentations to Slideshare. In addition, multimedia files can also be uploaded as supplementary files on the journal’s website. 3D and other interactive models can be embedded in the article’s HTML and PDF.

  • Any other large data sets (e.g., ecological observations, environmental data, morphological and other data types) should be deposited in the Dryad Data Repository, Zenodo, or Dataverse, either prior to or upon acceptance of the manuscript. Other specialised data repositories can be used if these offer unique identifiers and long-term preservation.

All external data used in a journal paper must be cited in the reference list, and links to these data (as deposited in external repositories) must be included in a separate data resources section of the article (see How to Cite Data).

For more information, please see our detailed Strategies and guidelines for scholarly publishing of biodiversity data.

Dryad Repository Submissions

This journal is integrated with the Dryad Digital Repository to make data publication simple for authors. There is a $120 USD Data Publishing Charge for Dryad submissions, payable via the Dryad website. For more information, please see their FAQ.

Data Quality Checklist and Recommendations


An empowering aspect of digital data is that they can be merged, reformatted and reused for new, imaginative uses that are more than the sum of their parts. However, this is only possible if data are well curated. To help authors avoid some common mistakes we have created this document to highlight those aspects of data that should be checked before publication.

By "mistakes" we do not mean errors of fact, although these should also be avoided! It is possible to have entirely correct digital data that are low-quality because they are badly structured or formatted, and, therefore, hard or impossible to move from one digital application to another. The next reader of your digital data is likely to be a computer program, not a human. It is essential that your data are structured and formatted so that they are easily processed by that program, and by other programs in the pipeline between you and the next human user of your data.

The following list of recommendations will help you maximise the re-usability of your digital data. Each represents a test carried out by Pensoft when auditing a digital dataset at the request of an author. Following the list, we provide explanations and examples of each recommendation.

Authors are encouraged to perform these checks themselves prior to data publication. For text data, a good text editor (https://en.wikipedia.org/wiki/List_of_text_editors) can be used to find and correct most problems. Spreadsheets usually offer some text-checking functions, e.g. the "TRIM" function that removes unneeded whitespace from a data item. The most powerful text-checking tools are on the command line, and the website "A Data Cleaner's Cookbook" (https://www.polydesmida.info/cookbook/) is recommended for authors who can use a BASH shell.

When auditing datasets for authors, Pensoft does not check taxonomic or bibliographic details for correctness, but we will do basic geochecks upon request, e.g. test whether the stated locality is actually at or near the stated latitude/longitude. We also recommend checking that fields do not show "domain schizophrenia", i.e. fields misused to contain data of more than one type.

Proofreading data takes at least as much time and skill as proofreading text. Just as with text, mistakes easily creep into data files unless the files are carefully checked. To avoid the embarrassment of publishing data with such mistakes, we strongly recommend that you take the time to run these basic tests on your data.





- The dataset is UTF-8 encoded

- The only characters used that are not numbers, letters or standard punctuation are tabs and whitespaces

- Each character has only one encoding in the dataset

- No line breaks within data items

- No field-separating character within data items (tab-separated data preferred)

- No "?" or replacement characters in place of valid characters

- No Windows carriage returns

- No leading, trailing, duplicated or unnecessary whitespaces in individual data items



- No broken records, i.e. records with too few or too many fields

- No blank records

- No duplicate records (as defined by context)



- No empty fields

- No evident truncation of data items

- No unmatched braces within data items

- No data items with values that are evidently invalid or inappropriate for the given field

- Repeated data items are consistently formatted

- Standard data items such as dates and latitude/longitude are consistently formatted

- No evident disagreement between fields

- No unexpectedly missing data





  • The dataset is UTF-8 encoded

Computer programs do not "read" characters like "A" and "4". Instead, they read strings of 0's and 1's and interpret these strings as characters according to an encoding scheme. The most universal encoding scheme is called UTF-8 and is based on the character set called Unicode. Text data should always be shared with UTF-8 encoding, as errors can be generated when non-UTF-8 encodings (such as Windows-1252) are read by a program expecting UTF-8, and vice-versa. (See also below, on replacement characters). 
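As a minimal illustration (a Python sketch; BDJ does not prescribe any particular tool, and the function name is ours), a file's raw bytes can be tested for valid UTF-8 simply by attempting to decode them:

```python
def is_utf8(raw: bytes) -> bool:
    """Return True if the byte string decodes cleanly as UTF-8."""
    try:
        raw.decode("utf-8")
        return True
    except UnicodeDecodeError:
        return False

# "Duméril" encoded as Windows-1252 is NOT valid UTF-8:
print(is_utf8("Duméril".encode("utf-8")))          # True
print(is_utf8("Duméril".encode("windows-1252")))   # False
```

Running this over a whole file (`is_utf8(open(path, "rb").read())`) gives a quick pass/fail answer before submission.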

  • The only characters used that are not numbers, letters or standard punctuation are tabs and whitespaces

Unusual characters sometimes appear in datasets, especially when databases have been merged. These "control" or "gremlin" characters are sometimes invisible when data are viewed within a particular application (such as a spreadsheet or a database browser) but can usually be revealed when the data are displayed in a text editor. Examples include vertical tab, soft hyphen, non-breaking space and various ASCII control characters (https://en.wikipedia.org/wiki/Control_character).
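One way to locate such characters is sketched below (Python; the function name is ours). The standard `unicodedata` module classifies every character, and anything in the control (Cc) or format (Cf) categories other than an ordinary tab or linefeed is suspect:

```python
import unicodedata

def find_gremlins(text: str):
    """List (position, codepoint) for control/format characters other
    than plain tabs and linefeeds; this also flags carriage returns."""
    hits = []
    for i, ch in enumerate(text):
        if ch in ("\t", "\n"):
            continue
        if unicodedata.category(ch) in ("Cc", "Cf"):
            hits.append((i, "U+%04X" % ord(ch)))
    return hits

# A soft hyphen (U+00AD) hiding invisibly in a species name:
print(find_gremlins("Aus\u00adbus"))  # [(3, 'U+00AD')]
```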

  • Each character has only one encoding in the dataset

We have seen individual datasets in which the degree symbol (°) is represented in three different ways, and in which a single quotation mark (') is also represented as a prime symbol, a right single quotation mark and a grave accent. Always use one form of each character, and preferably the simplest form, e.g. plain quotes rather than curly quotes.
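A simple normalisation pass can enforce one form per character. The sketch below is Python; the mapping contains example variants only and must be adapted to whatever variants are actually present in your dataset:

```python
# Example variant-to-canonical mapping (dataset-specific; adapt as needed).
CANONICAL = {
    "\u2019": "'",       # right single quotation mark -> plain quote
    "\u2032": "'",       # prime -> plain quote
    "`": "'",            # grave accent misused as a quote
    "\u00ba": "\u00b0",  # masculine ordinal indicator misused as degree sign
}

def normalise(text: str) -> str:
    """Replace each known variant character with its canonical form."""
    return "".join(CANONICAL.get(ch, ch) for ch in text)

print(normalise("45\u00ba 12\u2032 S"))  # 45° 12' S
```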

  • No line breaks within data items

Spreadsheet and database programs often allow users to have more than one line of text within a data item, separated by linebreaks or carriage returns. When these records are processed, many computer programs understand the embedded linebreak as the end of a record, so that the record is processed as several incomplete records:

itemA           itemB1[linebreak]itemB2         itemC

becomes two incomplete records:

itemA           itemB1

itemB2          itemC

  • No field-separating character within data items (tab-separated data preferred)

Data are most often compiled in table form, with a particular character used to separate one field ("column") from the next. Depending on the computer program used, the field-separating character might be a comma (CSV files), a tab (TSV files), a semicolon, a pipe (|) etc.

Well-structured data keeps the field-separating character out of data items, to avoid confusion in processing. Because commas are commonly present within data items, and because not all programs understand how to process CSVs, we recommend using tabs as field-separating characters (and avoiding tabs within data items!): https://en.wikipedia.org/wiki/Tab-separated_values.

  • No "?" or replacement characters in place of valid characters

When text data are moved between different character encodings, certain characters can be lost because the receiving program does not understand what the sending program is referring to. In most cases, the lost character is then represented by a question mark, as in "Duméril" becoming "Dum?ril", or by a replacement character, usually a dark polygon with a white question mark inside.

It is important to check for these replacements before publishing data, especially if you converted your data to UTF-8 encoding from another encoding.
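A simple screen for both kinds of damage might look like the following Python sketch (the heuristic for a suspicious question mark, one embedded inside a word, is our own assumption):

```python
import re

def suspect_replacements(text: str):
    """Count Unicode replacement characters (U+FFFD) and question marks
    embedded inside words, where a genuine '?' is rarely legitimate."""
    fffd = text.count("\ufffd")
    inner_q = len(re.findall(r"\w\?\w", text))
    return fffd, inner_q

print(suspect_replacements("Dum?ril and Dum\ufffdril"))  # (1, 1)
```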

  • No Windows carriage returns

On UNIX, Linux and Mac computers, a linebreak is built with just one character, the UNIX linefeed '\n' ('LF'). On Windows computers, a linebreak is created using two characters, one after the other: '\r\n' ('CRLF'), where '\r' is called a 'carriage return' ('CR'). Carriage returns are not necessary in digital data and can cause problems in data processing on non-Windows computers. Check the documentation of the program in which you are compiling data to learn how to remove Windows carriage returns.
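If your own tooling offers no such option, the conversion is a one-line byte replacement, sketched here in Python:

```python
def to_unix_newlines(raw: bytes) -> bytes:
    """Replace Windows CRLF (and any stray bare CR) with UNIX LF."""
    return raw.replace(b"\r\n", b"\n").replace(b"\r", b"\n")

print(to_unix_newlines(b"Aus bus\r\nAus cus\r\n"))  # b'Aus bus\nAus cus\n'
```

Working on raw bytes avoids Python's own newline translation when files are opened in text mode.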

  • No leading, trailing, duplicated or unnecessary whitespaces in individual data items

Like "control" and "gremlin" characters, whitespaces are invisible and we pay little attention to them when reading a line of text. Computer programs, however, see whitespaces as characters with the same importance as "A" and "4". For this reason, the following four lines are different and should be edited to make them the same:

Aus bus (Smith, 1900)

   Aus bus (Smith, 1900)

Aus bus (Smith,   1900)

Aus  bus   (Smith, 1900  )
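The spreadsheet "TRIM" operation mentioned earlier can be approximated as follows (a Python sketch; the extra handling of spaces inside parentheses is our addition for the example above):

```python
def tidy_whitespace(item: str) -> str:
    """Strip leading/trailing whitespace and collapse internal runs to
    single spaces (roughly what spreadsheet TRIM does)."""
    item = " ".join(item.split())
    # Also drop spaces hugging a parenthesis, as in "(Smith, 1900 )":
    return item.replace("( ", "(").replace(" )", ")")

variants = [
    "Aus bus (Smith, 1900)",
    "   Aus bus (Smith, 1900)",
    "Aus bus (Smith,   1900)",
    "Aus  bus   (Smith, 1900  )",
]
# All four variants collapse to the same string:
print({tidy_whitespace(v) for v in variants})
```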



  • No broken records, i.e. records with too few or too many fields

If a data table contains records with, for example, 25 fields, then every record in the table should have exactly 25 data items, even if those items are empty. Records with too few fields are often the result of a linebreak or field separator within a data item (see above). Records with too many fields also sometimes appear when part of a record has been moved in a spreadsheet past the end of the table.
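Counting the fields in each record is a quick way to find such breakage; a minimal sketch for a tab-separated file:

```python
def broken_records(lines, expected_fields):
    """Map line number -> actual field count, for lines of the wrong width."""
    bad = {}
    for lineno, line in enumerate(lines, start=1):
        n = len(line.rstrip("\n").split("\t"))
        if n != expected_fields:
            bad[lineno] = n
    return bad

# Illustrative lines: line 2 is short, line 3 is long.
lines = ["a\tb\tc\n", "d\te\n", "f\tg\th\ti\n", "j\tk\tl\n"]
```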

  • No blank records

Blank records contribute nothing to a data table because they contain no information, and a tidy data table has no blank lines. Note, however, that a computer program looking for blank lines may not find what looks to a human like a blank line, because the "blank" line actually contains invisible tabs or whitespaces.
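A check for blank-looking lines therefore has to strip all whitespace before testing; str.strip() with no arguments removes tabs and spaces alike:

```python
def blank_lines(lines):
    """Return 1-based line numbers of truly blank and blank-looking lines."""
    return [lineno for lineno, line in enumerate(lines, start=1)
            if line.strip() == ""]

# Line 2 is empty; line 3 "looks" blank but contains a space and a tab.
lines = ["a\tb\n", "\n", " \t \n", "c\td\n"]
```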

  • No duplicate records (as defined by context)

It can be difficult to find duplicate records in some datasets, but our experience is that they are not uncommon. One cause of duplicates is database software assigning a unique ID number to the same line of data more than once. Context will determine whether one record is a duplicate of another, and data compilers are best qualified to look for them.
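For the ID-assignment case just described, one approach is to compare records with the ID column excluded; which column holds the ID is dataset-specific, and column 0 is only an assumption here:

```python
from collections import defaultdict

def duplicates_ignoring_id(records, id_column=0):
    """Group records identical apart from their ID; return the duplicated groups."""
    seen = defaultdict(list)
    for rec in records:
        key = tuple(v for i, v in enumerate(rec) if i != id_column)
        seen[key].append(rec[id_column])
    return {key: ids for key, ids in seen.items() if len(ids) > 1}

# Illustrative records: IDs 1 and 3 carry the same data.
records = [("1", "Aus bus", "1900"),
           ("2", "Aus cus", "1901"),
           ("3", "Aus bus", "1900")]
```

Whether two such records are genuine duplicates is still a judgement call for the data compiler; the script only shortlists candidates.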



  • No empty fields

Fields containing no data items add nothing to the information content of a dataset and should be omitted.

  • No evident truncation of data items

The end of a data item is sometimes cut off, for example when a data item with 55 characters is entered into a database field with a 50-character maximum limit. Truncated data items should be repaired when found, e.g.

Smith & Jones in Smith, Jones and Bro

repaired to:

Smith & Jones in Smith, Jones and Brown, 1974

  • No unmatched brackets (parentheses, square brackets or braces) within data items

These are surprisingly common in datasets and are either data entry errors or truncations, e.g.

Smith, A. (1900 A new species of Aus. Zool. Anz. 23: 660-667.

5 km W of Traralgon (Vic
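Balanced brackets can be checked mechanically with a small stack, as in this sketch covering parentheses, square brackets and braces:

```python
def brackets_balanced(item):
    """True when every opening bracket in the string has a matching close."""
    pairs = {")": "(", "]": "[", "}": "{"}
    stack = []
    for ch in item:
        if ch in "([{":
            stack.append(ch)
        elif ch in pairs:
            if not stack or stack.pop() != pairs[ch]:
                return False
    return not stack  # anything left on the stack is unmatched
```

Both example items above fail this check, while a complete citation passes it.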

  • No data items with values that are evidently invalid or inappropriate for the given field

For example, a field labelled "Year" and containing years should not contain the data item "3 males".

  • Repeated data items are consistently formatted

The same data item should not vary in format within a single dataset, e.g.

Smith, A. (1900) A new species of Aus. Zool. Anz. 23: 660-667.

Smith, A. 1900. A new species of Aus. Zoologischer Anzeiger 23: 660-667.

Smith, A. (1900) A new species of Aus. Zool. Anz. 23, 660-667, pl. ix.

  • Standard data items such as dates and latitude/longitude are consistently formatted

Data compilers have a number of choices when formatting standard data items, but whichever format is chosen, it should be used consistently. A single date field should not, for example, have dates represented as 2005-05-17, May 19, 2005 and 23.v.2005.
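Mixed formats in a date field can be detected by classifying each value against the expected patterns; a tidy field should match exactly one. The three patterns below mirror only the formats mentioned above and are deliberately simplified:

```python
import re

PATTERNS = {
    "ISO":     re.compile(r"^\d{4}-\d{2}-\d{2}$"),       # 2005-05-17
    "US_TEXT": re.compile(r"^[A-Z][a-z]+ \d{1,2}, \d{4}$"),  # May 19, 2005
    "ROMAN":   re.compile(r"^\d{1,2}\.[ivx]+\.\d{4}$"),  # 23.v.2005
}

def date_styles(dates):
    """Return the set of pattern names matched anywhere in the field."""
    styles = set()
    for d in dates:
        styles.update(name for name, pat in PATTERNS.items() if pat.match(d))
    return styles

dates = ["2005-05-17", "May 19, 2005", "23.v.2005"]
```

More than one style in the result signals an inconsistently formatted field.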

  • No evident disagreement between fields

If fields contain linked information, they should be checked against each other to ensure that they do not conflict. For example, the year of an observation cannot be later than the year in which it was published.


Year            Citation

1968            Smith, A. (1966) Polychaete anatomy. Academic Press, New York; 396 pp.


Genus           Subgenus

Aus             Bus (Aus)
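The Year/Citation case can be checked automatically; in this sketch, the regular expression that pulls the year out of the citation string assumes the year appears in parentheses, which is a simplification:

```python
import re

def year_conflicts(rows):
    """Return (year, citation) pairs where the Year field postdates the citation."""
    conflicts = []
    for year, citation in rows:
        m = re.search(r"\((\d{4})\)", citation)
        if m and int(year) > int(m.group(1)):
            conflicts.append((year, citation))
    return conflicts

rows = [("1968", "Smith, A. (1966) Polychaete anatomy. Academic Press, New York; 396 pp."),
        ("1960", "Smith, A. (1966) Polychaete anatomy. Academic Press, New York; 396 pp.")]
```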

  • No unexpectedly missing data

This is a rare issue in datasets that have been audited, but occasionally occurs. An example is the Darwin Core "verbatimLocality" field for a record containing a full latitude and longitude, but with the "decimalLatitude" and "decimalLongitude" fields blank.

  • Spelling of Darwin Core terms

Darwin Core terms are usually considered case-sensitive; therefore, use their correct spelling (http://rs.tdwg.org/dwc/).
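Capitalisation errors can be caught by comparing headers case-insensitively against the official term list; the set below is a tiny sample of genuine Darwin Core terms, not the full standard:

```python
# A small sample of real Darwin Core terms (see http://rs.tdwg.org/dwc/).
DWC_TERMS = {"decimalLatitude", "decimalLongitude", "verbatimLocality",
             "basisOfRecord", "scientificName"}
LOOKUP = {t.lower(): t for t in DWC_TERMS}

def misspelled_case(headers):
    """Map each wrongly capitalised header to its correct Darwin Core spelling."""
    return {h: LOOKUP[h.lower()] for h in headers
            if h not in DWC_TERMS and h.lower() in LOOKUP}

headers = ["DecimalLatitude", "decimalLongitude", "ScientificName"]
```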


We thank Dr. Robert Mesibov for preparing the Data Quality Checklist draft and Dr. Quentin Groom for reviewing it.


General Statement

The journal policies and guidelines are mandatory. Exceptions to elements of the policies may be granted in specific cases, but will require justification that will be made public together with the article.

License and Copyright Agreement

In submitting the manuscript to the journal, the authors certify that:

  • They are authorized by their co-authors to enter into these arrangements.
  • The work described has not been formally published before (except in the form of an abstract or as part of a published lecture, review, thesis, or overlay journal); it is not under consideration for publication elsewhere; and its publication has been approved by all the author(s) and by the responsible authorities – tacitly or explicitly – of the institutes where the work has been carried out.
  • They secure the right to reproduce any material that has already been published or copyrighted elsewhere.
  • They agree to the following license and copyright agreement:


Licensing for Data Publication

Pensoft’s journals use a variety of waivers and licenses that are specifically designed for, and appropriate to, the treatment of data.

Other data publishing licenses may be allowed as exceptions (subject to approval by the editor on a case-by-case basis) and should be justified with a written statement from the author, which will be published with the article.

Open Data and Software Publishing and Sharing

The journal strives to maximize the replicability of the research published in it. Authors are thus required to share all data, code or protocols underlying the research reported in their articles. Exceptions are permitted, but have to be justified in a written public statement accompanying the article.

Datasets and software should be deposited and permanently archived in appropriate, trusted, general or domain-specific repositories (please consult http://service.re3data.org and/or software repositories such as GitHub, GitLab, Bioinformatics.org, or equivalent). The associated persistent identifiers (e.g., DOIs) of the dataset(s) must be included in the data or software resources section of the article. References to datasets and software should also be included in the reference list of the article, with DOIs where available. Where no domain-specific data repository exists, authors should deposit their datasets in a general repository such as Zenodo, Dryad, Dataverse, or others.

Small datasets may also be published as data files or packages supplementary to a research article; however, deposition in a data repository is preferred in all cases.

Privacy Statement

The names and email addresses present on the journal’s website will be used exclusively for the purposes of the journal.

Author Policies

It is the responsibility of the corresponding author to ensure that all named authors have agreed to the submission of the manuscript.

The Corresponding Author’s Role and Responsibilities are to:

  1. Inform all co-authors of the submission of the manuscript to the journal (note: each co-author will receive a confirmation email upon submission and will need to confirm their authorship).
  2. Manage all correspondence between the journal and all co-authors, keeping the full co-author group apprised of the manuscript progress.
  3. Designate a substitute correspondent for times of unavailability.
  4. Ensure payment of the publication charges at the point of Editorial Acceptance, or before that in case some specific services have been purchased (e.g., conversion to ARPHA or linguistic editing).
  5. Ensure that the manuscript is in full adherence with all the journal policies (including publication ethics, data deposition, materials deposition, etc.).
  6. Post Publication: Respond to all queries pertaining to the published manuscript, provide data and materials as requested.
  7. Ensure that the submission is created (and completed) by one of the co-authors, not by an agency or by some other individual who is not a co-author.

Commenting Policies

All public comments must follow the normal standards of professional discourse. All commenters are named, and their comments are associated with their journal profile. The journal does not allow anonymous or pseudonymous commenting or user profiles.

The journal does not tolerate language that is insulting, inflammatory, obscene or libelous. The journal reserves the right to remove all or parts of Comments to bring them in line with these policies. The journal is the final arbiter as to the suitability of any comments.

Conflicts of Interest

The journal requires that all parties involved in a publication (i.e. the authors, reviewers and academic editors) should transparently declare any potential Conflicts of Interest (also known as Competing Interests). The disclosure of a Conflict of Interest does not necessarily mean that there is an issue to be addressed; it simply ensures that all parties are appropriately informed of any relevant considerations while they work on the submission.

Potential Conflicts of Interest should be declared even if the individual in question feels that these interests do not represent an actual conflict. Examples of Conflicts of Interest include, but are not limited to: possible financial benefits if the manuscript is published; patent activity on the results; consultancy activity around the results; personal material or financial gain (such as free travel, gifts, etc.) relating to the work, and so on.

While possible financial benefits should appear here, actual funding sources (institutional, corporate, grants, etc.) should be detailed in the funding disclosure statement.

Funding Disclosure

The journal requires that authors declare the funding which made their work possible, including funding programmes, projects, or calls for grant proposals (when applicable).

Publication Ethics and Malpractice Statement


Manuscripts submitted to the journal must be original work and must not be under consideration for publication by another journal.

The publishing ethics and malpractice policies of the journal follow the relevant COPE guidelines (http://publicationethics.org/resources/guidelines); if malpractice is suspected, the journal Editors will act in accordance with them.


Research misconduct may include: (a) manipulating research materials, equipment, or processes, (b) changing or omitting data or results such that the research is not accurately represented in the article.

A special case of misconduct is plagiarism, which is the appropriation of another person's ideas, processes, results, or words without giving appropriate credit.

Research misconduct does not include honest error or differences of opinion.


Should a comment on potential misconduct be submitted by the Reviewers or Editors, an explanation will be sought from the Authors. If this is satisfactory, and a mistake or misunderstanding has taken place, the matter can be resolved. If not, the manuscript will be rejected, and the Editors will impose a ban on that individual's publication in the journal for a period of three years.

In cases of published plagiarism or dual publication, an announcement will be made in the journal, explaining the situation.

Appeals and Open Debate

We encourage academic debate and constructive criticism. Authors do not have the right to disregard unfavourable comments about their work or to choose not to respond to criticisms.

No Reviewer’s comment or published correspondence may contain a personal attack on any of the Authors. Criticism of the work is encouraged, and Editors should edit (or reject) personal or offensive statements.

The Author(s) should submit their appeal on editorial decisions to the Editorial Office.

The journal encourages publication of open opinions, forum papers, corrigenda, critical comments on a published paper and Author’s response to criticism.


The journal reserves the right to retract articles that are found to be fraudulent or in breach of the journal’s policies.

Terms of Use

This document describes the Terms of Use of the services provided by the Biodiversity Data Journal, hereinafter referred to as BDJ. All Users agree to these Terms of Use when signing up to BDJ. Signed-up BDJ Users will hereinafter be referred to as "User" or "Users".

BDJ is provided by Pensoft Publishers Ltd., "Geo Milev 13A Str., 1111 Sofia, Bulgaria". We as providers will be hereinafter referred to as "the Provider".

The Provider reserves the right to update the Terms of Use occasionally. Users will be notified via posting on the site and by email. If using the services of BDJ after such notice, the User will be deemed to have accepted the proposed modifications. If the User disagrees with the modifications, they must stop using BDJ services. Users are advised to periodically check the Terms of Use for updates or revisions. Violation of any of the terms will result in the termination of the User's account. The Provider is not responsible for any content posted by the User in BDJ.

Account Terms

After an account is created for BDJ, the User is automatically signed up to the ARPHA Platform. Read more about the ARPHA Terms of Use and Account Terms here.

Services and Prices

The Provider reserves the right to modify or discontinue, temporarily or permanently, the services provided by BDJ. Plans and prices are subject to change upon 30 days' notice from the Provider. Such notice may be provided at any time by posting the changes to the relevant service website.


The User retains full ownership of content uploaded to BDJ. We claim no intellectual property rights over the material provided by the User in BDJ. However, by setting pages to be viewed publicly (Open Access), the User agrees to allow others to view and download the relevant content. In addition, Open Access articles, being publicly available data, may be employed by the Provider (or anyone) for data mining purposes.

The Provider reserves the right, at its sole discretion, to refuse or remove any content that is available via the Website.

Copyrighted materials

Unless stated otherwise, the BDJ website may contain copyrighted material, for example logos and other proprietary information, including, without limitation, text, software, photos, video, graphics, music and sound ("Copyrighted Material"). The User may not copy, modify, alter, publish, transmit, distribute, display, participate in the transfer or sale of, create derivative works from, or in any way exploit any of the Copyrighted Material, in whole or in part, without written permission from the copyright owner. Users will be solely liable for any damage resulting from any infringement of copyrights, proprietary rights, or any other harm resulting from such a submission.

Exceptions to this rule are e-chapters or e-articles published under Open Access (see below), which are normally published under a Creative Commons Attribution 3.0 (CC-BY) or Creative Commons Attribution 4.0 (CC-BY) license.

Open access materials

BDJ is a supporter of Open Science. Open access to content is clearly marked, with text and/or the open access logo, on all materials published under this model. Unless otherwise stated, open access content is published in accordance with the Creative Commons Attribution 4.0 license (CC-BY). This license allows anyone to copy, display and distribute the content at no charge, provided that the author and source are credited.

Privacy Statement

BDJ (and the ARPHA Platform of which the journal is part) collects personal information from Users (i.e., name, postal and email addresses) only for the purposes of its services and to improve them. All personal data will be used exclusively for the stated purposes of the website and will not be made available for any other purpose or to third parties.

Disclaimer of Warranty and Limitation of Liability

Neither Pensoft and its affiliates nor any of their respective employees, agents, third party content providers or licensors warrant that the BDJ service will be uninterrupted or error free; nor do they give any warranty as to the results that may be obtained from use of the journal, or as to the accuracy or reliability of any information, service or merchandise provided through BDJ.

Legal, medical, and health-related information located, identified or obtained through the use of the Service, is provided for informational purposes only and is not a substitute for qualified advice from a professional.

In no event will the Provider, or any person or entity involved in creating, producing or distributing BDJ or the contents included therein, be liable in contract, in tort (including for its own negligence) or under any other legal theory (including strict liability) for any damages, including, but without limitation to, direct, indirect, incidental, special, punitive, consequential or similar damages, including, but without limitation to, lost profits or revenues, loss of use or similar economic loss, arising from the use of or inability to use the journal platform. The User hereby acknowledges that the provisions of this section will apply to all use of the content on BDJ. Applicable law may not allow the limitation or exclusion of liability or incidental or consequential damages, so the above limitation or exclusion may not apply to the User. In no event will Pensoft’s total liability to the User for all damages, losses or causes of action, whether in contract, tort (including own negligence) or under any other legal theory (including strict liability), exceed the amount paid by the User, if any, for accessing BDJ.

Third Party Content

The Provider is solely a distributor (and not a publisher) of SOME of the content supplied by third parties and Users of BDJ. Any opinions, advice, statements, services, offers, or other information or content expressed or made available by third parties, including information providers and Users, are those of the respective author(s) or distributor(s) and not of the Provider.

How It Works

Manuscripts for the Biodiversity Data Journal (and, in the future, for other journals) can only be submitted via the online, collaborative, article-authoring Pensoft Writing Tool (PWT), which provides a large set of pre-defined, but flexible, article templates.

To facilitate the writing process, the PWT also provides an automated search and import function from external databases including: electronic registries; catalogues; occurrence data in Darwin Core format; and reference bibliographies.

In the PWT environment, the authors may invite external contributors, such as mentors, potential reviewers, linguistic and copy editors, colleagues, etc., who are not authors, but may watch and comment on the text during the preparation of the manuscript.

Please consider the following steps, illustrated in the figure below:

  1. Start a manuscript in the Pensoft Writing Tool (PWT)
  2. When the manuscript is ready, validate it and submit it to the Biodiversity Data Journal (in the future, also to other journals)
  3. Make corrections and respond to reviewers and editors completely online
  4. Submit the revised version at the click of a button
  5. Work with our copyeditors on final improvements
  6. Enjoy publication within 3 days after final acceptance

For more information, you may also look at the Editorial Beyond dead trees: integrating the scientific process in the Biodiversity Data Journal and press release The Biodiversity Data Journal: Readable by humans and machines.


Frequently Asked Questions (FAQ)

1. Does Biodiversity Data Journal publish only data?

NO! The journal focuses on data, but one can also publish analyses and discussions within an article, as in any other journal.

2. Does Biodiversity Data Journal require all data underlying an article to be published as well?

YES! All small datasets that underpin an article should be imported into the text (e.g., Darwin Core occurrence data, checklists, data tables, literature references) or uploaded as supplementary files (e.g., a data table used to create a graph). Large and complex datasets should be deposited in an internationally recognized repository (see the Data publication section for details).

3. What kind of data does Biodiversity Data Journal publish?

Any kind of data related to biodiversity, for example: species occurrence data, local or regional checklists, inventories, genomic data, morphological descriptions, ecological observations, environmental data, etc.

4. What is the minimum "publishable" manuscript that can be submitted to the journal?

Any manuscript that brings novel information on any organism from any part of the world. Manuscripts are expected to demonstrate novelty, so it is unlikely that, for instance, a single observation would be sufficient. Please carefully consider our Criteria for publication before you decide to submit a manuscript to BDJ.

5. Why do you define fixed templates for articles?

Templates include some mandatory elements, but they are not fixed: authors can add sections or subsections to a manuscript. Using the templates is necessary because the journal's online peer-review and editorial systems are designed to automate parts of the publication process to deliver both fast turnaround and low cost. These systems do not accept manuscripts written in text processors (e.g., MS Word or ODT) because such files cannot be processed automatically in this way.

6. What? Does Biodiversity Data Journal really NOT accept manuscripts written in MS Word?

NO, it does not! To keep costs low and affordable for all, manuscripts submitted to the journal must either be written within a specially designed tool (Pensoft Writing Tool, or PWT) or submitted from integrated external platforms, such as Scratchpads or the GBIF Integrated Publishing Toolkit (IPT).

7. What do "public" and "community" peer-review mean?

"Community" peer review means that during the peer-review process the manuscript is visible only to the editor, the reviewers and the authors; this is the traditional method in academic publishing and is the default option. Authors may opt, however, to make their manuscript available for comments from all registered journal users ("public" peer review). Reviewers may opt to stay anonymous or disclose their names in either case.


8. What is a "Data Paper"?

A "Data Paper" is a scholarly journal publication whose primary purpose is to describe a dataset or a group of datasets, rather than to report a research investigation. As such, it contains facts about data rather than hypotheses and arguments drawn from analysis of the data, as found in a conventional research article. Its purposes are three-fold:

• to provide a citable journal publication that brings scholarly credit to data publishers;
• to describe the data in a structured human-readable form;
• to bring the existence of the data to the attention of the scholarly community.

If you are interested in learning more, have a look at our Data Publishing page or the Data Paper Poster.

Article Processing Charges

Core services included in our Article Processing Charges:

  • Manuscript authoring in the ARPHA Writing Tool
  • Online collaboration with your co-authors and peers during authoring
  • Online import of occurrence records into manuscripts from GBIF, BOLD, iDigBio and PlutoF
  • Automated creation of data paper manuscripts from GBIF IPT and DataONE EML files
  • Online search and import of cited references and data from CrossRef, DataCite, PubMed, RefBank, GNUB, and Mendeley
  • Automated technical check for consistency of the manuscript
  • Pre-submission technical and editorial checks by the editorial office
  • Pre-submission peer-review, organized by the author (optional)
  • Pre-publication peer-review
  • Community-driven post-publication peer-review (optional)
  • Automated registration of peer reviews at Publons
  • Registration of new taxa in ZooBank and IPNI 
  • Export of occurrence records in Darwin Core Archive to GBIF 
  • Export of taxon treatments to EOL, Plazi and Species-ID
  • Export of taxon treatments in Darwin Core Archive 
  • Publication in semantically enhanced HTML, PDF and JATS XML formats
  • Machine-readable, harvestable content via JATS XML and Web services
  • Archiving in trusted international repositories
  • Active dissemination via social networks and email alerts

Each publication type is subject to the applicable peer-review process (author organised pre-submission peer-review, journal organized pre-publication peer-review, and/or post-publication peer-review) and to size limits* (figures or tables). Article processing charges by publication type:

Publication type             Article processing charges

                             € 0
                             € 0
Single Taxon Treatment       € 150
Data Paper                   € 300
Software Description         € 300
                             € 300
Interactive Key              € 300
Forum Paper                  € 450
                             € 450
Research Article             € 450
Taxonomic Paper              € 450

* Manuscripts that exceed the indicated size limit will incur an additional charge equal to the standard APC for the publication type. For manuscripts that exceed the indicated limit by more than two times, please contact us.

Special Issues

Special issues enable conference organizers or project coordinators to publish a number of articles under a common theme and editorship. Depending on the number of articles to be included, Pensoft offers discounts on APCs as described in the table below.





Number of articles           < 10         10 – 20          21 +

Discount on APCs

PR campaign                               By agreement     By agreement

Institutional branding                    By agreement     By agreement


We are happy to discuss alternative arrangements if there is a better way to suit your needs for a special issue. Please do not hesitate to contact us!

Additional Services (Optional)

Linguistic services: € 15 per 1800 characters. For texts that require additional editing by a native English speaker.

Tailored PR campaign: € 300*. Press release, dedicated media and social networks promotion.

Tailored PR campaign + video interview: € 450. Video interview organized by the Editorial Office.

Paper reprints: at cost, on demand.

Auditing of the Darwin Core data associated with a manuscript**: € 75 for datasets of up to 10,000 records, on demand. For larger datasets, please contact Dr. Bob Mesibov for pricing.

Cleaning of the Darwin Core data associated with a manuscript**: € 225 for datasets of up to 10,000 records, on demand. For larger datasets, please contact Dr. Bob Mesibov for pricing.

*This service can be discounted or waived for articles of outstanding importance for the science and society
**Pensoft reviewers do not usually have time to check through large data files included with manuscripts. If you would like us to have your data files checked, we offer the services of Pensoft editor Dr Bob Mesibov, who is also a data auditor.
Suitable data files for checking would be large tables of occurrence records or of genetic data. These can be checked for duplicate and broken records, misuse of fields, disagreements between fields, character encoding problems and incorrect or inconsistent formatting. Georeferencing can also be checked, on request. Please note that this service does not apply to taxonomic, nomenclatural or bibliographic details in data files.

Discounts and Waivers

  • A discount of 10% is offered to:
    • Scientists working privately
    • Graduate and PhD students, if they are first authors
    • Scientists living and working in lower middle-income countries (http://data.worldbank.org/income-level/lower-middle-income) if they are sole authors of a manuscript.
    • Discounts are also offered to our editors and reviewers; for more information, see here.
  • Waivers (once per year per (co-) author for manuscripts no larger than 10 printed pages, or for the first 10 pages of a larger manuscript) are offered to:

Discounts and waivers do not accumulate.

Institutional and Other Membership Plans

Our plans provide additional flexibility and affordability for institutions, research groups, consortia, conference organizers and other larger research teams and organizations. Affiliated authors can publish in any Pensoft journal by using a streamlined payment interface. Pensoft’s plans are a great way to support Open Access publishing while also simplifying budgeting, invoicing, and author reimbursement procedures. We offer three plans to choose from; however, if they do not quite suit your needs, we would be happy to discuss alternative arrangements with you. Please do not hesitate to contact us for a preliminary conversation about our plans!

Key benefits

Annual membership

  • Flat rate - publish all you can
  • Cost based on the size and publishing pattern of your organization
  • Beginning of year budgeting
  • One invoice / no billing during the year

Pre-paid plans

  • Discount on APCs
  • Deposit funds up-front and spend without a time limit
  • Add funds to your account at any time
  • Choose whether to cover full (discounted) cost of publishing or split costs with authors

Direct billing

  • No up-front payments
  • One monthly invoice for all publications by affiliated authors
  • Regular reports to track publication pattern and expenses

Additional services we can provide upon request

  • PR campaigns for specific publications or sets of publications, including press releases and video interviews
  • Institutional branding – including institutional logos on published papers, dedicated webpages, institutional online collections of articles
  • Research output reporting, detailing number and types of publications, expenses, views, and downloads

Please find more details about each individual plan below. If you would like to recommend Pensoft’s plans to your institution you can fill out this simple form or contact us at info@pensoft.net and we will forward your recommendation with some additional information.

Annual Memberships

Annual memberships allow institutions to plan their publishing expenses at the beginning of the fiscal year by providing unlimited publishing in all Pensoft journals in exchange for a flat annual payment. The cost of membership depends on the total publishing output capacity of the institution and its historical publishing pattern in Pensoft journals. We will adjust the cost of your membership annually.

Pre-Paid Plans

Pre-paid plans allow institutions and/or research groups to deposit a certain amount of funds with Pensoft and make them available to affiliated researchers for covering Article Processing Charges in any Pensoft journal. Member institutions decide whether to cover APCs in full or to share the expense with the authors. Depending on the amount members are prepared to commit, Pensoft offers a discount on APCs per the table below. Additional funds can be added to an account at any time within the calendar year of purchasing the plan, while left-over funds are preserved until spent.





Minimum deposit              € 1,000 – 3,000     € 3,000 – 5,000     € 5,000 +

Discount on APCs




Direct Billing

The direct billing plan allows institutions to reduce the complexity of billing and reimbursements. It consolidates all Pensoft invoices for articles authored by researchers affiliated with an institution into a single monthly bill that is sent directly to the institution.

Writing a Press Release

Pensoft’s experienced PR team puts a lot of effort into the wide dissemination of the works we publish through press releases, news aggregators, blogs, social network communication and the mass media.

It goes without saying that press releases and news stories can have a major effect on the impact and popularity of research findings. Moreover, they benefit all parties involved: the authors, their institutions, funding agencies, publishers and society in general. Thanks to a well-established dissemination network, Pensoft press releases regularly provide the basis for print, online, radio and TV news stories in renowned international media outlets, including National Geographic, BBC, Sky News, CNN, New York Times, The Guardian, Deutsche Welle, Der Standard, DR, etc.

Examples of Pensoft's press releases posted on EurekAlert have enjoyed high popularity and thousands of views within the first days following their publication.

Our PR team invites you to prepare (or request) a short press release on your accepted paper whenever you find your research of public interest. We have provided a template and instructions to guide you through the specific text format.

The press release itself must be in English; however, if you find it suitable for the promotion of your study, you are also welcome to submit a translation of the press release in any of the following languages: French, German, Spanish, Portuguese, Japanese and Chinese. Please note that all translations must be based on the final English version of the press release as approved by our press officers.

We are always happy to promote your research by preparing a press release for you and coordinating our dedicated PR campaigns with the PR offices of our partnering institutions. You are welcome to approach us with your press release drafts or any queries regarding our PR campaigns via email at either pressoffice@pensoft.net or dissemination@pensoft.net.

To keep up with the latest news, subscribe to our blog and follow us on Twitter, Facebook and Google+. Also, keep an eye on EurekAlert! AAAS for our top breaking stories!

Benefits for Editors and Reviewers

Pensoft editors and reviewers are entitled to a set of benefits in appreciation for their contribution to the quality of the works we publish.

For Editors

  • 15% unconditional discount on APCs and reprints for the journal in which you are an editor
  • 10% unconditional discount on:
    • APCs in all other Pensoft journals
    • all books published by Pensoft
    • article reprints for all other Pensoft journals
    • dedicated PR campaigns
  • Special conditions for publication of large works or articles that need customized technical solutions

For Reviewers

  • 15% discount on APCs for the journal in which the review was provided
    • Valid for one manuscript per review, submitted within 6 months of the review, where the reviewer is the lead author
  • Automated registration of reviews at Publons after confirmation by the reviewer
  • Open reviews are provided with DOIs and citation details

* When an individual qualifies for multiple discounts, Pensoft will apply the largest discount that applies.

  Apply to become an editor via Editor Application Form