Summary
Clear methods and clean results are the backbone of publishable research. Your methodology section should name and justify the design, define sampling and materials, specify procedures step-by-step, and explain how you ensured validity, reliability, and ethics. Use figures and tables only when they communicate faster than prose—and build them with stand-alone legends.
Results are not a mystery novel. Present them in a logical structure that mirrors your research questions or hypotheses. Lead with the primary outcome, then secondary findings and robustness checks. Combine concise narrative with well-labelled tables/figures. For quantitative work, report effect sizes, uncertainty (CIs), exact p-values, and any multiplicity controls. For qualitative work, show credible patterns with transparent coding, thick descriptions, and carefully chosen quotations linked to themes.
Bottom line: justify the why, document the how, and report the what with precision. Think like a reviewer: could another researcher reproduce your methods and reach the same results using only what you’ve written and your appendices? If yes, you’re ready to submit.
Describing Methodology & Reporting Results in Academic Writing
After you introduce your research problem and situate it in the literature, readers need two things in fast succession: a trustworthy map of how you generated evidence (the methodology) and a clean account of what you found (the results). This article shows you how to write both sections so editors, reviewers, and future researchers can verify and reuse your work—without wading through unnecessary detail.
1) The purpose of the methodology section
Your methods must do more than list steps; they must justify why this design is appropriate for these questions with these constraints. A good methodology section answers five questions:
- Design: What overall approach did you use (experimental, quasi-experimental, observational, case study, ethnography, survey, mixed methods)?
- Sampling: Who/what was studied? How were cases selected? What was the sample size and rationale (power or saturation)?
- Materials & instruments: What tools, measures, or equipment did you use and how were they validated or calibrated?
- Procedures: What exactly happened, in what order, and under what conditions?
- Quality & ethics: How did you ensure validity/reliability or credibility/dependability, and what approvals and safeguards were in place?
2) What to include (and where)
| Component | Include in main text | Place in appendix/repository |
|---|---|---|
| Design & rationale | Yes—2–4 concise paragraphs with citations | None |
| Sampling & power/saturation | Eligibility, recruitment, n, power calc. or saturation logic | Full flow diagram; recruitment materials |
| Measures/instruments | Names, constructs, reliability/validity; calibration | Item lists; scoring rules; raw specs |
| Procedures/protocol | Sequence, timing, randomization/masking/blinding | Full protocol; pre-registration; code |
| Analysis plan | Primary/secondary outcomes; models; assumptions | Alternative specs; derivations; diagnostics code |
| Ethics & data | Approvals; consent; data/code availability statements | De-identified dataset or synthetic data; repository links |
3) Writing the design & rationale
Open with a compact paragraph that names the design and ties it to your research questions or hypotheses.
Model sentence: “We used a prospective cohort design to estimate the association between exposure X and outcome Y, chosen over a randomized trial due to ethical/logistical constraints, and mitigated confounding via propensity score weighting.”
4) Sampling and case selection
- Define the population and the frame; give inclusion/exclusion criteria.
- Explain size: for quantitative work, report power assumptions (effect size, alpha, power). For qualitative work, explain how you judged thematic saturation.
- Flow: use a diagram to show approached → eligible → consented → analyzed.
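The power logic above can be made concrete with a minimal sketch. This is an illustrative large-sample (normal-approximation) calculation for a two-group comparison of means; `n_per_group` and its arguments are names invented here, and a t-based calculation (as in most power software) adds roughly one participant per group:

```python
import math
from statistics import NormalDist

def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate n per group for a two-sample comparison of means.

    effect_size is Cohen's d (difference in means / pooled SD).
    Uses the normal approximation; exact t-based values are slightly larger.
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = z.inv_cdf(power)
    n = 2 * ((z_alpha + z_beta) / effect_size) ** 2
    return math.ceil(n)
```

For example, detecting a medium effect (d = 0.5) at α = 0.05 with 80% power needs about 63 participants per group under this approximation, which is the kind of justification reviewers expect to see spelled out.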
5) Materials, instruments, and measures
- Name each measure/tool and state what it captures (construct), how it is scored, and known reliability/validity.
- For devices/assays: report model/version, calibration schedule, and tolerance/error.
- For surveys: indicate sources (adapted items vs new), piloting, and translation/back-translation if used.
6) Procedures and controls
Describe the sequence of events precisely. If randomization was used, state unit, method (e.g., blocked, stratified), and allocation concealment. If blinding occurred, clarify who was blinded and how it was tested. For observational designs, specify handling of confounding and missing data. For lab work, document replicates, exclusion rules, and environment controls.
7) Analysis plan and assumptions
- Define primary and secondary outcomes clearly.
- State the models used (e.g., linear mixed effects, logistic regression, thematic coding framework) and assumptions checked.
- Explain multiplicity control if you test multiple hypotheses (e.g., Holm-Bonferroni, FDR).
- Pre-specify robustness/sensitivity analyses; reserve exploratory work for Discussion.
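The two multiplicity controls named above are simple enough to sketch directly; the function names are illustrative, and in practice you would cite the procedure and the software used rather than hand-rolling it:

```python
def holm_bonferroni(pvals, alpha=0.05):
    """Holm's step-down method: controls family-wise error rate (FWER).

    Returns one reject/keep decision per p-value, in the original order.
    """
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    reject = [False] * m
    for rank, i in enumerate(order):
        if pvals[i] <= alpha / (m - rank):  # thresholds alpha/m, alpha/(m-1), ...
            reject[i] = True
        else:
            break  # step-down: stop at the first non-rejection
    return reject

def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg procedure: controls the false discovery rate (FDR)."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= q * rank / m:
            k_max = rank  # largest rank whose p-value clears its threshold
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= k_max:
            reject[i] = True
    return reject
```

Note how the same four p-values can yield fewer rejections under Holm (FWER) than under Benjamini-Hochberg (FDR); stating which criterion you controlled, and why, is part of the analysis plan.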
8) Validity, reliability, and bias mitigation
Reviewers scan for these signals:
- Internal validity: randomization/blinding, balance checks, manipulation checks.
- Measurement validity/reliability: inter-rater reliability, Cronbach’s alpha, instrument calibration.
- External validity: representativeness, context limits, boundary conditions.
- Bias control: preregistration, handling of missing data, contamination checks, reflexivity (qualitative).
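As one concrete reliability signal, Cronbach's alpha follows directly from its definitional formula. This is a minimal sketch assuming listwise-complete data, not a substitute for a validated psychometrics package:

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha for internal consistency.

    items: one list of scores per item, each of length n_respondents
           (complete cases only).
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)
    """
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent sums
    item_var = sum(variance(scores) for scores in items)
    return k / (k - 1) * (1 - item_var / variance(totals))
```

Perfectly correlated items yield alpha = 1.0; reporting the observed value alongside the scale's published reliability is the usual convention.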
9) Visuals that genuinely help
Use visuals to compress complexity—never to decorate.
- Figures: schematics of apparatus; timelines; DAGs; theme maps. Legends must let a reader understand the figure without the main text.
- Tables: eligibility criteria; variable definitions; descriptive statistics; model summaries. Avoid duplication—if it’s in a table, don’t repeat numbers verbatim in prose.
Reporting Results
Results should be a factual narrative anchored by your tables/figures, not a discussion of implications (save that for the next section). The structure must mirror your research questions or hypotheses so readers never wonder why a paragraph is there.
10) Structure options for results
- By research question/hypothesis: best for confirmatory studies. Each subsection = one RQ/H, with primary outcome first.
- Chronological: useful for time-series, experiments with phases, or longitudinal designs.
- Thematic: typical in qualitative work; themes ordered by salience or conceptual logic.
- By method stream: for mixed methods, separate quantitative and qualitative results and integrate in Discussion.
11) Writing quantitative results
- Lead with effects, not tests: report effect size and uncertainty (CI) before p-values.
- Be exact: provide exact p-values (e.g., p = 0.013) unless journal policy says otherwise.
- Show distribution: report medians/IQRs when skewed; include N per group.
- State model and covariates once per analysis; avoid repeating technical details.
Model sentence: “Compared with control, the intervention increased mean test scores by 6.2 points (95% CI 3.4–9.0; n=412; adjusted for baseline, age, site); p = 0.001.”
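The effect-plus-uncertainty pattern in the model sentence can be sketched as follows. This uses a large-sample normal (Welch-style) interval with unpooled variances; for small samples, a t-interval with Welch degrees of freedom is more exact, and `mean_diff_ci` is an illustrative name:

```python
import math
from statistics import NormalDist, mean, variance

def mean_diff_ci(a, b, conf=0.95):
    """Difference in means (a - b) with a large-sample confidence interval.

    Uses unpooled (Welch-style) standard errors and a normal critical value.
    """
    diff = mean(a) - mean(b)
    se = math.sqrt(variance(a) / len(a) + variance(b) / len(b))
    z = NormalDist().inv_cdf(0.5 + conf / 2)  # e.g. 1.96 for 95%
    return diff, (diff - z * se, diff + z * se)
```

Reporting the point estimate and interval first, as here, keeps the emphasis on effect size rather than on the test statistic.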
12) Writing qualitative results
- Name themes clearly and link them to your questions; provide a brief analytic definition for each.
- Evidence with quotations or field notes: choose vivid, typical excerpts; attribute anonymized speaker characteristics where relevant.
- Show the pattern: indicate prevalence/variation without turning qualitative work into pseudo-quantification.
- Audit trail: briefly state coding approach, inter-coder checks, and reflexive notes; full codebook in appendix.
13) Tables and figures: micro-conventions that impress reviewers
- Refer to each visual in text (“see Fig. 2”) and tell readers what to see (“Fig. 2 shows the sharp post-policy break”).
- Use consistent units, axis scales, and abbreviations across figures.
- Avoid over-precision (e.g., two decimals unless measurement warrants more).
- Footnote model notes, variable definitions, and multiplicity adjustments within the table.
14) Robustness, sensitivity, and negative results
Credibility rises when you proactively test fragility.
- Robustness: alternative specifications, bandwidths, clustering levels, priors, or exclusion rules.
- Sensitivity: influence diagnostics, missing data methods, alternative outcome definitions.
- Negative/null results: state them plainly; emphasize precision (CIs) and power rather than apologizing.
15) Common pitfalls (and fixes)
- Methods drift: results include new methods not described earlier. Fix: move method detail to Methods and cross-reference.
- Duplication: repeating every table cell in prose. Fix: summarize the pattern; direct readers to the table.
- Over-interpretation: implying causality from descriptive or weakly identified designs. Fix: qualify claims; move mechanism speculation to Discussion.
- P-hacking optics: many tests without multiplicity control. Fix: pre-specify and control FWER/FDR; mark exploratory analyses.
- Opaque figures: unlabeled axes, tiny fonts, ambiguous colors. Fix: redesign with reader-first ergonomics.
16) Mini-templates you can adapt
Methods—design & sample:
“We conducted a cluster-randomized trial across 24 schools (12 intervention, 12 control). Eligibility required [criteria]. We randomized with block size 4 stratified by district; allocation was concealed via [method]. Power analysis indicated n=… to detect Δ=… at α=0.05 (80% power).”
Methods—analysis plan:
“Primary outcome was [measure]. We estimated intent-to-treat effects using linear mixed models with random intercepts for school and fixed effects for baseline score, grade, and district. We assessed assumptions via residual diagnostics and controlled FDR at 5% for secondary outcomes.”
Results—primary outcome:
“Students in intervention schools scored higher than controls by 6.2 points (95% CI 3.4–9.0; n=412; p = 0.001). Effects were consistent across grades (interaction p = 0.41). See Table 2 for model coefficients and Fig. 1 for adjusted means.”
Results—qualitative theme:
“Theme A: Resource Friction. Participants described chronic shortages that constrained adoption (‘We share one device among four’—Teacher, rural). Accounts linked friction to scheduling bottlenecks rather than attitudes, aligning with the quantitative association between device access and uptake (Table 3).”
17) Mixed-methods integration
If you used both quantitative and qualitative strands, report each cleanly, then integrate explicitly.
- Use a weaving paragraph in Discussion: show convergence, complementarity, or divergence.
- Cross-reference: “Quantitative effect on uptake (Table 2) is explained by reported scheduling barriers (Theme A).”
18) Reproducibility and transparency
- Availability statements: tell readers where to find data and code (or why access is restricted) and under what license.
- Versioning: cite software and package versions; include a sessionInfo() or environment file in your repository.
- Readme: provide a step-by-step reproduction script (e.g., 00_clean → 01_analyze → 02_tables_figures).
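A reproduction entry point can be as small as a runner that executes the steps in order and stops at the first failure. The `.py` file names below are hypothetical stand-ins for the 00_clean → 01_analyze → 02_tables_figures convention; rename them to match your repository:

```python
import subprocess
import sys

# Hypothetical script names following the numbered-step convention.
STEPS = ["00_clean.py", "01_analyze.py", "02_tables_figures.py"]

def reproduce(steps=STEPS, dry_run=False):
    """Run each pipeline step in order; check=True aborts on the first failure.

    With dry_run=True, return the commands without executing anything.
    """
    commands = [f"{sys.executable} {step}" for step in steps]
    if not dry_run:
        for step in steps:
            subprocess.run([sys.executable, step], check=True)
    return commands
```

A one-command entry point like this, documented in the README, is what makes "re-run the study from text plus appendices" realistic for a reviewer.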
19) A concise checklist before submission
- Design named and justified; methods reproducible from text + appendix.
- Sampling, eligibility, and n reported; power/saturation addressed.
- All instruments defined with reliability/validity; devices calibrated.
- Randomization/masking and allocation concealment (if applicable) described.
- Primary/secondary outcomes declared; analysis plan and assumptions stated.
- Ethics approvals and consent included; data/code availability stated.
- Tables/figures drafted first; legends are stand-alone; no duplication in prose.
- Quant results include effect sizes, CIs, exact p-values; multiplicity handled.
- Qual results include named themes, quotations, and coding transparency.
- Robustness/sensitivity and negative results reported without apology.
Conclusion
A persuasive paper makes it easy to trust what you did and to see what you found. Keep methods lean but complete, with justifications alongside steps. Let results follow the logic of your questions, expressed through succinct narrative and honest, well-built visuals. If a peer could re-run your study from your text and appendices, and if your results read like a clear answer rather than a cliffhanger, you have achieved the professional standard editors look for.