Summary
Many researchers hope to find a single, definitive “list of predatory, pay-to-publish journals” that will reliably separate safe outlets from dangerous ones. Unfortunately, such a list does not—and probably cannot—exist. Publication fees alone do not make a journal predatory; many reputable open-access journals charge article processing charges (APCs) to cover real editorial and production costs. Predatory publishers, by contrast, accept papers and collect fees while providing little or no genuine peer review, editing, or long-term archiving.
This article explains why no blacklist or whitelist can be completely reliable, and how to use existing resources, such as community-maintained blacklists and curated whitelists, as screening tools rather than final verdicts. It introduces key blacklist and evaluation resources with links, including community watchlists, commercial assessment services, and university library guides, without naming any specific journals. The article then sets out a practical framework for assessing journals yourself, based on editorial-board credibility, peer-review transparency, article quality, fee disclosure, indexing claims, and colleagues’ experience.
By combining cautious use of blacklists and whitelists with critical evaluation of each journal, you can distinguish legitimate APC-funded venues from predatory “pay-to-publish” operations. This protects your time, your research funding, and your reputation, and helps ensure your work appears in outlets that genuinely support and disseminate high-quality scholarship.
Is There a Reliable List of Predatory Pay-To-Publish Journals?
1. Why “Pay-To-Publish” Is Not the Whole Story
When you search online for “pay-to-publish journals” or “list of predatory journals,” it is easy to conclude that any journal charging a fee must be suspicious. Lists, blogs, and social-media threads often reinforce this impression, warning authors away from “pay-to-publish” outlets as if all of them were inherently predatory.
In reality, the situation is more nuanced. Many legitimate journals—especially in open-access models—charge article processing charges (APCs), submission fees, or page charges. These fees support genuine costs: managing editorial workflows, coordinating expert peer review, copyediting and typesetting, online hosting, and long-term preservation. Without subscription income, such fees can be the only way to sustain high-quality publication.
Predatory journals, by contrast, treat fees as a price for publication. They may:
- promise rapid or guaranteed acceptance;
- perform only superficial or no peer review;
- provide minimal editorial improvement or quality control;
- do little to ensure long-term discoverability or preservation.
The key issue is therefore not whether a fee exists, but whether a journal provides the editorial and scholarly services that authors, readers, and funders expect in return.
2. Why No Single List Can Be Fully Reliable
In response to predatory publishing, scholars and organisations have created two broad types of lists:
- Blacklists – lists of journals or publishers flagged as predatory or of serious concern.
- Whitelists – lists of journals that meet defined quality or ethical criteria.
These efforts have raised awareness and helped many researchers avoid harmful outlets, but neither approach can be perfect.
2.1 Limitations of blacklists
Blacklists try to identify journals and publishers that repeatedly display problematic behaviour, such as:
- fake editorial boards or invented staff biographies;
- fabricated metrics and misleading “impact factors”;
- spam emails inviting submissions in all disciplines at once;
- unrealistic promises of review and publication within days.
However:
- They cannot capture all problematic outlets. New titles appear, split, merge, or rebrand frequently.
- Assessments can be subjective. Distinguishing a weak but improving journal from a deliberately predatory one often involves judgement.
- False positives are possible. New or small journals still building processes and reputation may be flagged prematurely.
Blacklists are therefore best used as warning tools. If a journal appears in more than one blacklist, you should exercise particular caution, but absence from a blacklist does not mean a journal is safe.
2.2 Limitations of whitelists
Whitelists compile journals or publishers that meet stated criteria, such as transparent peer-review policies, editorial oversight, and ethical standards. They are valuable, but they also have constraints:
- Coverage may be partial. Many sound journals may not appear simply because they have not applied or been evaluated yet.
- Criteria vary in strictness. One whitelist may impose more demanding standards than another.
- Quality can drift. A journal that once met the criteria may decline if ownership or editorial leadership changes.
Being whitelisted is generally a positive sign, but not an absolute guarantee of quality or integrity.
2.3 Key caveats
Several general points follow from these limitations:
- No list is exhaustive; being absent from a list does not prove a journal is safe.
- A journal’s appearance on a list (especially a blacklist) does not automatically mean that every article it publishes is worthless, but it does signal higher risk.
- Lists age quickly; a judgement made several years ago may not reflect current practice.
Lists are therefore useful starting points, not final answers. To make good decisions, you need both external resources and your own critical evaluation.
3. Examples of Blacklists, Whitelists, and Evaluation Resources
While no resource is perfect, several widely discussed tools can help you screen journals before you invest time and money. The links below are provided as examples of resources, not endorsements, and no specific journal names are discussed.
3.1 Community-maintained blacklists
Two well-known community watchlists are:
- Archived “Beall’s List” mirror – https://beallslist.net/
  A community-maintained mirror of the now-defunct list once compiled by librarian Jeffrey Beall. It lists publishers and standalone journals that have been flagged as potentially predatory. The criteria and maintenance are informal, so its information should be treated as a starting point for deeper investigation.
- Predatory Journals resource – https://predatoryjournals.org/
  A site that collects and organises community reports of suspect journals and publishers, often with links to supporting evidence. It is useful for seeing whether a title has attracted repeated criticism, but, again, evaluations are not infallible.
3.2 Commercial assessment services
Some commercial services specialise in evaluating journals against systematic criteria. One of the best-known examples is:
- Cabells Predatory Reports – https://www2.cabells.com/about-predatory
  A subscription-based database that classifies journals according to documented violations of good practice. Many institutions subscribe and provide access to their researchers as part of research-integrity support. Because it is curated and regularly updated, it can be a powerful screening tool, provided you use it alongside your own judgement.
3.3 Whitelists and positive-selection directories
Resources that highlight journals meeting minimum quality criteria include:
- Directory of Open Access Journals (DOAJ) – https://doaj.org/
  A curated directory of open-access journals; titles must apply and satisfy criteria relating to peer review, transparency, and licensing. Inclusion is a positive sign, though you should still review individual journals yourself.
- COPE membership list – https://publicationethics.org/members
  The Committee on Publication Ethics (COPE) lists journals and publishers that are members and commit to following its ethical guidelines. Membership does not guarantee perfection, but it shows engagement with recognised standards.
- OASPA member list – https://oaspa.org/membership/members/
  The Open Access Scholarly Publishing Association (OASPA) lists open-access publishers that have passed its membership review process.
3.4 University and national research-integrity guidance
Many universities host library or research-integrity guides that explain how to recognise predatory journals and conferences. In addition, national bodies such as the UK Research Integrity Office provide checklists and principles to support journal evaluation:
- UK Research Integrity Office guidance – https://ukrio.org/
  Offers advice and resources to help institutions and researchers maintain integrity in publication decisions.
Your own institution’s library website is often one of the best starting points, as it will tailor advice to your discipline and local context.
4. How to Evaluate a Journal Yourself (Without Naming Any Journal)
Because no external list can replace personal judgement, you need a robust, repeatable way to evaluate journals. The following steps will help, regardless of discipline.
4.1 Examine the editorial board and governance
Check the journal’s “About” or “Editorial Board” page:
- Are editor-in-chief and board members clearly named, with institutional affiliations?
- Do their research profiles, where you can find them, align with the journal’s stated scope?
- Are contact details professional (e.g., institutional email addresses) rather than only generic free email accounts?
Lack of identifiable, verifiable editorial leadership is a serious warning sign.
4.2 Read the description of peer review
Look for a concise explanation of the review process and realistic decision times. Be cautious if the journal:
- promises decisions in a few days for all submissions;
- claims to guarantee acceptance once a fee is paid;
- provides no information at all on how manuscripts are evaluated.
Genuine review takes time. While fast decisions are possible for some papers, blanket guarantees are almost always incompatible with serious peer review.
4.3 Check the scope and subject focus
Read the aims and scope critically:
- Does the journal have a sensible, coherent subject area?
- Does the description reflect an understanding of current debates, methods, and terminology?
- Does the journal avoid presenting itself as equally expert in every discipline at once?
Extremely broad or vague scopes can indicate a focus on maximising volume rather than maintaining a coherent scholarly community.
4.4 Inspect the website and author instructions
Pay attention to both content and language:
- Are policies on ethics, conflicts of interest, corrections, and retractions stated?
- Are instructions for authors detailed, including referencing style and reporting expectations?
- Is the writing generally clear, with only occasional minor errors rather than numerous mistakes?
While perfect language is not required, persistent errors and inconsistencies suggest limited editorial care.
4.5 Read sample articles
Perhaps the strongest test is to examine what the journal has already published:
- Are methods and data described clearly enough for critical evaluation?
- Do statistical analyses seem appropriate, and are limitations acknowledged?
- Are references relevant, reasonably up to date, and accurately cited?
- Is the formatting professional, with consistent headings, tables, and figures?
If multiple recent articles show poor methods, unsupported claims, or numerous basic errors, it is unlikely that the journal provides meaningful peer review.
4.6 Evaluate transparency about fees
Where fees apply, transparency is critical:
- Are APCs or other charges clearly stated on the website, not hidden until acceptance?
- Is it clear when payment is due and what services it covers?
- Does the journal explain fee waivers or discounts for authors without funding?
Hidden or vaguely described fees, especially combined with guaranteed acceptance, are typical of predatory models.
4.7 Verify indexing and metrics
Finally, check claims about indexing and impact:
- If the journal claims to be indexed in a major database, verify this by searching the database directly (see the sketch after this list for one way to automate such a check).
- Be cautious about unusual metrics or “impact factors” from obscure sources.
- Understated, accurate indexing statements are more trustworthy than inflated, unverifiable claims.
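For journals that claim to appear in the Directory of Open Access Journals, this kind of verification can even be scripted, because DOAJ offers a free public search API (major commercial indexes such as Scopus or Web of Science require subscription access, so for those the journal’s entry is best confirmed on the database’s own website). The short Python sketch below is offered only as an illustration: the endpoint path, the issn: query syntax, and the "total" field in the response are assumptions based on DOAJ’s published API documentation at https://doaj.org/api/docs, and the ISSN shown is a placeholder rather than a real journal.

```python
# Minimal sketch, assuming DOAJ's public search API as documented at
# https://doaj.org/api/docs. The endpoint path and the "total" field in the
# response are assumptions based on that documentation.
import json
import urllib.request


def doaj_lists_issn(issn: str) -> bool:
    """Return True if the DOAJ search API reports at least one journal for this ISSN."""
    # %3A is the URL-encoded colon in the "issn:" query syntax.
    url = f"https://doaj.org/api/search/journals/issn%3A{issn}"
    with urllib.request.urlopen(url, timeout=30) as response:
        data = json.load(response)
    # DOAJ search responses report a hit count alongside the "results" list.
    return data.get("total", 0) > 0


if __name__ == "__main__":
    # Placeholder ISSN for illustration only; substitute the ISSN printed on the
    # journal's website and compare the result with the claim made there.
    print(doaj_lists_issn("1234-5678"))
```

If such a check finds no match for an ISSN that the journal prominently claims is listed in DOAJ, treat the discrepancy as a prompt for closer manual checking rather than as definitive proof of wrongdoing.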
5. Using Lists and Evaluation Together
Once you have examined a journal yourself, you can combine that knowledge with information from blacklists, whitelists, and institutional guidance:
- If a journal looks suspicious based on your own checks and appears in multiple blacklists, the safest choice is to avoid it.
- If a journal passes your checklist and appears in respected whitelists or directories, that strengthens your confidence—but you should still remain alert.
- If you are unsure after your own assessment, contact your university library or research office. They often have experience evaluating journals and can advise you based on your discipline and career stage.
6. Conclusion: No Simple Lists, but Better Decisions
There is no single, reliable, and permanent list of predatory pay-to-publish journals. The publishing landscape changes too quickly, and judgements about quality and intent are too nuanced, for any static blacklist or whitelist to be fully trustworthy on its own. However, by using a combination of:
- community watchlists (such as archived “Beall-type” lists and other blacklists);
- curated whitelists and directories (such as DOAJ, COPE, and OASPA member lists);
- commercial evaluation tools (such as predatory-journal assessment services provided to institutions);
- and your own structured evaluation of each journal,
you can significantly reduce your risk of submitting to predatory outlets. This protects your time and funding, supports your long-term reputation, and helps ensure that your research appears in venues that contribute to the advancement of knowledge rather than exploiting it.
Whenever you remain uncertain, pause and seek advice—from your supervisor, senior colleagues, your institution’s library, or research-integrity office. Combined with careful human proofreading and editing, this cautious approach gives your work the best chance of being published in reputable journals that will truly benefit your academic career.
If you are preparing a manuscript for a reputable journal and want to ensure that language, formatting, and references meet high standards, working with a professional academic editor can also help. Specialist human proofreaders, such as those at Proof-Reading-Service.com, can support you in presenting your research clearly and professionally—so that, once you have chosen a trustworthy journal, your submission stands the best possible chance of success.