Writing Style and Voice Principles

technical-writing
style
voice
readability
grammar
Comprehensive guide to writing style, voice, and tone in technical documentation, comparing Microsoft, Google, and Wikipedia approaches with practical before/after examples
Author

Dario Airoldi

Published

January 14, 2026

Writing Style and Voice Principles

Master active voice, sentence structure, readability formulas, and tone to create clear, accessible technical documentation that resonates with your audience

🎯 Introduction

Voice and style form the personality of your documentation. While content accuracy ensures correctness, voice determines whether readers can understand, trust, and act on that information.

This article explores:

  • Active vs. passive voice - When to use each and why Microsoft, Google, and Wikipedia differ
  • Readability formulas - Understanding Flesch-Kincaid scores and this repository’s 50-70 target
  • Sentence structure - Why 15-25 words per sentence improves comprehension
  • Comparative analysis - How major style guides approach voice differently
  • Practical examples - Before/after transformations showing principles in action

Prerequisites: Understanding of foundational documentation principles is helpful but not required.

✍️ Active vs. passive voice

Voice refers to the relationship between the subject and verb in a sentence. This seemingly simple grammatical choice profoundly affects clarity, tone, and reader engagement.

Definitions and examples

Active voice: Subject performs the action

  • Structure: [Subject] [Verb] [Object]
  • Example: “The developer writes the code.”
  • Emphasis: Who/what is doing something

Passive voice: Subject receives the action

  • Structure: [Object] [is/was] [Verb-ed] [by Subject]
  • Example: “The code is written by the developer.”
  • Emphasis: What is being done

When to use active voice

Clarity and directness - Active voice makes actors explicit

Passive (unclear): > “The API should be initialized before any requests are made.”

Active (clear): > “Initialize the API before you make any requests.”

Responsibility and agency - Active voice assigns accountability

Passive (vague): > “An error was encountered during deployment.”

Active (specific): > “The deployment script encountered an error.” > Or: “You encountered an error during deployment.”

Brevity - Active constructions are typically shorter

Passive (wordy - 9 words): > “The configuration file must be updated by the administrator.”

Active (concise - 7 words): > “The administrator must update the configuration file.” > Or: “Update the configuration file.”

When passive voice is acceptable

Actor is unknown or irrelevant

Appropriate passive: > “The server was compromised at 3:00 AM.” > (Who compromised it is unknown)

Emphasis on the object, not the actor

Appropriate passive: > “Python was released in 1991.” > (Focus on Python’s history, not Guido van Rossum’s action)

Scientific or encyclopedic tone (Wikipedia preference)

Appropriate passive: > “The data was collected over six months.” > (Scientific detachment, methodology focus)

Avoiding accusatory tone

Appropriate passive: > “The file was deleted accidentally.” > (Less accusatory than “You deleted the file accidentally”)

Style guide comparison: voice usage

| Guide | Active Voice Preference | Passive Voice Guidance | Rationale |
|---|---|---|---|
| Microsoft | Strong preference | Allow when appropriate | “Active voice makes writing more direct and vigorous” |
| Google | Very strong preference | Use sparingly | “Active voice is typically more direct and vigorous” |
| Apple | Strong preference | Acceptable in specific contexts | “Active voice creates more engaging content” |
| Wikipedia | No strong preference | Both widely used | “Passive appropriate for encyclopedic tone, avoiding first/second person” |

Before/after examples

Example 1: Procedural instruction

Passive and wordy: > “After the installation has been completed by the user, the configuration wizard will be launched automatically and the initial settings should be reviewed carefully before the application is started.”

Active and clear: > “After you install the software, the configuration wizard launches automatically. Review the initial settings before you start the application.”

Example 2: Error message

Passive and vague: > “An invalid parameter was provided.”

Active and specific: > “You provided an invalid parameter. Check the API documentation for valid values.”

Example 3: Technical reference (passive acceptable)

Passive (appropriate for reference material): > “The function is called when the event fires. Three arguments are passed: event type, timestamp, and payload.”

Example 4: Tutorial (active preferred)

Active (appropriate for learning): > “Call the function when the event fires. Pass three arguments: event type, timestamp, and payload.”

Detecting passive voice

Linguistic test: Can you insert “by zombies” after the verb?

  • “The code was written [by zombies]” → Passive
  • “The developer writes the code [by zombies]” → Doesn’t work, so active

Automated detection: This repository’s validation system flags excessive passive voice through grammar analysis.
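The zombie test can be approximated in code. The sketch below is a rough heuristic, not a real grammar parser: it flags a form of “to be” followed by a likely past participle, so it will miss irregular participles and produce occasional false positives.

```python
import re

# Rough passive-voice heuristic: a form of "to be" followed, optionally
# after one intervening word (e.g. an adverb), by a likely past participle.
# Real grammar checkers use full parsing; this only approximates the test.
BE_FORMS = r"\b(?:am|is|are|was|were|be|been|being)\b"
PARTICIPLE = r"\w+(?:ed|en|wn|ught)\b"
PASSIVE_RE = re.compile(BE_FORMS + r"\s+(?:\w+\s+)?" + PARTICIPLE, re.IGNORECASE)

def looks_passive(sentence: str) -> bool:
    """Return True when the sentence matches the passive-voice pattern."""
    return bool(PASSIVE_RE.search(sentence))
```

With this heuristic, “The code was written by the developer.” is flagged as passive, while “The developer writes the code.” is not.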

📊 Readability formulas explained

Readability formulas quantify text complexity, helping ensure documentation matches audience capabilities. This section surveys seven widely used formulas with practical targets and score interpretation for technical writers.

On deliberate overlap with Article 09: This article provides a practical survey—what each formula measures, how to interpret scores, and what targets to use. Article 09: Measuring Readability and Comprehension provides analytical depth—full mathematical treatment, statistical validation context, and how formulas connect to comprehension testing, information scent theory, mental model alignment, and usability measurement. Both perspectives are intentional; see Article 08 for the series redundancy policy.

Flesch Reading Ease score

Formula: 206.835 - (1.015 × ASL) - (84.6 × ASW)

  • ASL = Average Sentence Length (words per sentence)
  • ASW = Average Syllables per Word

Score interpretation:

| Score Range | Difficulty | Typical Reader | Example Context |
|---|---|---|---|
| 90-100 | Very Easy | 11-year-old | Children’s books |
| 80-89 | Easy | 13-year-old | Young adult fiction |
| 70-79 | Fairly Easy | 15-year-old | Magazines |
| 60-69 | Standard | 17-18 year-old | General documentation |
| 50-59 | Fairly Difficult | College level | Technical documentation |
| 30-49 | Difficult | College graduate | Academic papers |
| 0-29 | Very Difficult | Professional | Legal, scientific journals |

This repository’s target: 50-70

  • Rationale: Balances technical precision with accessibility
  • Lower bound (50): Allows necessary technical complexity
  • Upper bound (70): Ensures general technical audience comprehension
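As a sketch of how the score is computed, the following function implements the Flesch formula with a naive vowel-group syllable counter. Real tools use dictionaries or stronger heuristics, so expect scores to differ by a few points.

```python
import re

def count_syllables(word: str) -> int:
    """Naive estimate: count vowel groups, with a floor of one syllable."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    """206.835 - (1.015 x ASL) - (84.6 x ASW), per the formula above."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    asl = len(words) / len(sentences)                          # words per sentence
    asw = sum(count_syllables(w) for w in words) / len(words)  # syllables per word
    return 206.835 - 1.015 * asl - 84.6 * asw
```

Short, monosyllabic sentences score far above 90, while dense polysyllabic prose can score below zero — which is why the 50-70 target leaves room on both sides.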

Why we validate Flesch scores:

From validation-criteria.md:

```yaml
readability:
  flesch_reading_ease:
    target_range: [50, 70]
    minimum_acceptable: 40
    rationale: "Technical content requires precision, but must remain accessible"
```
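A minimal check against these criteria might look like the following sketch. The function name and the pass/warn/fail bands are illustrative, not the repository’s actual implementation.

```python
def validate_readability(flesch_score: float,
                         target=(50, 70), minimum=40) -> str:
    """Map a Flesch Reading Ease score onto pass/warn/fail bands."""
    low, high = target
    if low <= flesch_score <= high:
        return "pass"           # inside the 50-70 target range
    if flesch_score >= minimum:
        return "warn"           # outside the target but above the floor
    return "fail"               # below the minimum acceptable score
```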

Flesch-Kincaid grade level

Formula: (0.39 × ASL) + (11.8 × ASW) - 15.59

Score interpretation:

  • Score = US grade level required to understand the text
  • Example: Score of 10 = 10th grade reading level

This repository’s target: 9-10

  • Equivalent to Flesch Reading Ease 50-70
  • Indicates high school to early college reading level

Gunning Fog index

Formula: 0.4 × [(ASL) + 100 × (Complex Words / Total Words)]

  • Complex Words = words with 3+ syllables

Interpretation: Years of formal education required

Typical ranges for technical documentation:

  • 8-10: Accessible technical writing
  • 11-14: Standard technical documentation
  • 15+: Dense academic/specialist writing

Coleman-Liau index

Formula: 0.0588 × L - 0.296 × S - 15.8

  • L = Average number of letters per 100 words
  • S = Average number of sentences per 100 words

Key difference: Uses character counts instead of syllable counts, making it easier to compute programmatically. This makes it well suited for automated pipelines where syllable detection adds complexity.

Score interpretation: US grade level required (like Flesch-Kincaid).

Typical technical documentation range: 10-14

SMOG index

Formula: 3 + √(number of polysyllabic words in 30 sentences)

  • Polysyllabic words = words with 3+ syllables

Key difference: Designed specifically for healthcare and public-facing materials. It’s considered more conservative than Gunning Fog because it uses a square root rather than a linear relationship with complex word count.

Score interpretation: Years of education needed for 100% comprehension (not just partial understanding).

Typical technical documentation range: 10-14

Dale-Chall readability formula

Formula: 0.1579 × (PDW × 100 / words) + 0.0496 × ASL

  • PDW = Number of “difficult” words (not on the Dale-Chall 3,000-word familiar list)
  • ASL = Average Sentence Length

Key difference: Uses a vocabulary list rather than syllable counts. A word qualifies as “difficult” only if it doesn’t appear on the Dale-Chall list of 3,000 words that most fourth-graders understand. This makes it particularly good at catching jargon—technical terms that are short but unfamiliar.

Score interpretation:

| Raw Score | Comprehension Level |
|---|---|
| 4.9 or below | Easily understood by a 4th-grade student |
| 5.0-5.9 | 5th-6th grade |
| 6.0-6.9 | 7th-8th grade |
| 7.0-7.9 | 9th-10th grade |
| 8.0-8.9 | 11th-12th grade |
| 9.0-9.9 | College level |

This repository’s sweet spot: 7.0-8.9 (high school reading level, accounting for necessary technical vocabulary)
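The computation can be sketched as follows. The real formula checks words against the full 3,000-word familiar list (and the published version adds an adjustment constant of 3.6365 when difficult words exceed 5%, omitted here to mirror the simplified formula above); the tiny FAMILIAR set is a stand-in for illustration only.

```python
# FAMILIAR is a tiny illustrative stand-in for the 3,000-word Dale-Chall list.
FAMILIAR = {"open", "the", "file", "and", "save", "a", "is", "to", "of", "in"}

def dale_chall(words: list[str], sentence_count: int) -> float:
    """0.1579 x (percent difficult words) + 0.0496 x ASL, per the formula above."""
    difficult = [w for w in words if w.lower() not in FAMILIAR]
    pdw_pct = 100 * len(difficult) / len(words)   # percent difficult words
    asl = len(words) / sentence_count             # average sentence length
    return 0.1579 * pdw_pct + 0.0496 * asl
```

Note how short jargon still scores high: “Instantiate the polymorphic serializer” is only four words, but three of them are unfamiliar.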

Automated readability index (ARI)

Formula: 4.71 × (characters / words) + 0.5 × (words / sentences) - 21.43

Key difference: Uses character counts and word counts only—no syllable counting, no vocabulary lists. This makes it the fastest formula to compute and the easiest to implement from scratch.

Score interpretation: US grade level required (same scale as Flesch-Kincaid and Coleman-Liau).

Typical technical documentation range: 10-14
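Because Flesch-Kincaid, Gunning Fog, Coleman-Liau, and ARI all derive from the same basic counts, they can be computed in one pass. This sketch reuses a naive vowel-group syllable counter, so expect small deviations from dedicated tools; SMOG is omitted because it samples 30 sentences.

```python
import re

def grade_levels(text: str) -> dict:
    """Compute four grade-level scores from one pass over the text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = [max(1, len(re.findall(r"[aeiouy]+", w.lower()))) for w in words]
    n_sent, n_words = len(sentences), len(words)
    chars = sum(len(w) for w in words)
    asl = n_words / n_sent                          # average sentence length
    asw = sum(syllables) / n_words                  # average syllables per word
    complex_pct = 100 * sum(s >= 3 for s in syllables) / n_words
    return {
        "flesch_kincaid": 0.39 * asl + 11.8 * asw - 15.59,
        "gunning_fog": 0.4 * (asl + complex_pct),
        "coleman_liau": 0.0588 * (100 * chars / n_words)
                        - 0.296 * (100 * n_sent / n_words) - 15.8,
        "ari": 4.71 * (chars / n_words) + 0.5 * asl - 21.43,
    }
```

Running all four on the same text is a quick way to spot disagreement: when the character-based scores (Coleman-Liau, ARI) diverge sharply from the syllable-based ones, vocabulary rather than sentence length is usually the culprit.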

Formula comparison table

The following table compares all seven formulas. Use it to choose which formulas best fit your validation workflow:

| Formula | Inputs | Measures | Best for | Strength | Limitation |
|---|---|---|---|---|---|
| Flesch Reading Ease | Syllables, sentence length | General readability (0-100 scale) | Quick readability check | Widely recognized; intuitive scale | Penalizes necessary multi-syllable technical terms |
| Flesch-Kincaid Grade | Syllables, sentence length | US grade level | Mapping to audience education level | Direct grade-level output | Same syllable bias as Flesch RE |
| Gunning Fog | Complex words (3+ syllables), sentence length | Years of education | Identifying overly complex prose | Catches dense writing quickly | Flags proper nouns and compounds unfairly |
| Coleman-Liau | Characters, sentence count | US grade level | Automated pipelines | No syllable counting needed | Less intuitive than syllable-based measures |
| SMOG | Polysyllabic words in 30 sentences | Education for 100% comprehension | Healthcare, public-facing docs | Conservative; aims for full comprehension | Needs exactly 30 sentences for accuracy |
| Dale-Chall | Unfamiliar words (vs. 3,000-word list), sentence length | Comprehension level | Catching jargon and unfamiliar vocabulary | Detects short but unfamiliar words | Word list is US-English-centric and dated |
| ARI | Characters, words, sentences | US grade level | Fast automated scoring | Simplest to compute; no NLP needed | Least granular; misses vocabulary difficulty |

Recommendation for this repository: Use Flesch Reading Ease as the primary score (target 50-70) with Dale-Chall as a secondary check for jargon density. Run both in your validation pipeline for complementary coverage. For a full walkthrough of tool integration, see Article 09.

Why readability matters: cognitive load

Cognitive load theory explains that working memory has limited capacity (typically 7±2 “chunks” of information).

Factors increasing cognitive load:

  • Long sentences (>30 words)
  • Complex vocabulary (multi-syllable technical terms)
  • Dense paragraph structure
  • Passive voice constructions
  • Abstract concepts without examples

Strategies reducing cognitive load:

  • Shorter sentences (15-25 words)
  • Familiar words when possible
  • Transitional phrases between ideas
  • Active voice (fewer words, clearer actors)
  • Concrete examples illustrating abstractions

Readability formula limitations

Formulas cannot measure:

  • Technical accuracy
  • Logical coherence
  • Appropriate level of detail
  • Cultural context
  • Visual design impact

They are indicators, not mandates:

Good use of readability scores: > “This section has a Flesch score of 35 (difficult). Can we simplify the language or add examples?”

Bad use of readability scores: > “This section must have a Flesch score of exactly 60. Replace all multi-syllable words.”

Repository principle: Readability scores inform validation, but technical accuracy always takes precedence. See validation workflow.

Going deeper: Article 09: Measuring Readability and Comprehension covers comprehension testing (cloze tests, recall tests, think-aloud protocols), information scent theory, mental model alignment, documentation usability testing, and quantitative benchmarks by Diátaxis content type.

📐 Sentence structure and length

Sentence length directly affects comprehension. Research shows optimal sentence length for technical documentation is 15-25 words.

Why 15-25 words?

Psychological research: Reading comprehension drops significantly beyond 25 words per sentence

  • Reader must hold more information in working memory
  • Clause relationships become harder to track
  • Readers may need to re-read for understanding

Practical testing: Analysis by major tech companies suggests 15-25 words optimizes:

  • Reading speed
  • Comprehension accuracy
  • User satisfaction with documentation
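A simple length check against the 15-25 word target (and the 30-word hard ceiling used later in this article) might look like this sketch; the labels are illustrative.

```python
import re

def sentence_length_report(text: str, target=(15, 25), maximum=30):
    """Label each sentence against the 15-25 word target and 30-word maximum."""
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    report = []
    for sentence in sentences:
        n = len(sentence.split())
        if n > maximum:
            label = "too long: consider splitting"
        elif target[0] <= n <= target[1]:
            label = "ok"
        else:
            label = "outside 15-25 word target"
        report.append((n, label))
    return report
```

A validator built on this would treat under-target sentences leniently — short sentences are often deliberate emphasis — while over-maximum sentences warrant a rewrite.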

Sentence type distribution

Effective technical writing mixes sentence types:

Simple sentences (1 independent clause)

  • Best for critical information
  • Example: “Save your work before closing the application.”
  • Use: 40-50% of sentences

Compound sentences (2+ independent clauses, coordinating conjunction)

  • Connect related ideas of equal importance
  • Example: “The function returns an object, and the object contains three properties.”
  • Use: 20-30% of sentences

Complex sentences (1 independent + 1+ dependent clause)

  • Show relationships between ideas
  • Example: “When the user clicks the button, the form validates all input fields.”
  • Use: 20-30% of sentences

Compound-complex sentences (2+ independent + 1+ dependent)

  • Use sparingly, risk comprehension
  • Example: “When you initialize the API, it loads the configuration, and then it establishes a connection.”
  • Use: 0-10% of sentences

Before/after: sentence length optimization

Example 1: Breaking up a 35-word monster

Too long (35 words, Flesch ~35): > “The authentication system provides OAuth 2.0 support which allows users to authenticate using their existing social media accounts including Facebook, Google, and Twitter, and the implementation follows the authorization code flow pattern to ensure security.”

Optimized (3 sentences, 6-9 words each, Flesch ~60): > “The authentication system supports OAuth 2.0. Users can authenticate using Facebook, Google, or Twitter accounts. The implementation follows the authorization code flow for security.”

Example 2: Combining choppy short sentences

Too choppy (2-3 words each, Flesch ~80 but immature tone): > “Open the file. Click Save. The file saves. A confirmation appears.”

Optimized (combined, 14 words total, Flesch ~65): > “Open the file and click Save. The file saves, and a confirmation message appears.”

Example 3: Balancing technical complexity

Dense single sentence (36 words, Flesch ~25): > “To configure the load balancer to distribute incoming HTTP requests across multiple backend servers while maintaining session affinity through cookie-based tracking and implementing health checks that automatically remove unresponsive servers from the rotation, follow these steps.”

Optimized (3 sentences, ~10 words each, Flesch ~55): > “This section explains how to configure the load balancer for HTTP requests. The configuration includes session affinity via cookies and automatic health checks. Health checks remove unresponsive servers from rotation.”

🔍 Voice guidelines: Microsoft vs. Google vs. Wikipedia

Different organizations prioritize different aspects of voice. Understanding these priorities helps you make informed decisions.

Microsoft Writing Style Guide approach

Core philosophy: “Write like you speak”

Voice characteristics:

  • Person: Second person (you) throughout
  • Tone: Conversational but professional
  • Contractions: Acceptable (“don’t” not “do not”)
  • Sentence length: “Short sentences are powerful”

Examples from Microsoft documentation:

Conversational Microsoft style: > “You can use Azure Functions to run code without managing servers. When an event triggers your function, it runs in a fully managed environment.”

Microsoft’s rationale:

  • Users feel directly addressed
  • Reduces formality barrier
  • Increases engagement with content
  • Mirrors how developers actually speak

Google Developer Documentation approach

Core philosophy: “Write for a global audience”

Voice characteristics:

  • Person: Second person (you)
  • Tone: Friendly but precise
  • Contractions: Avoided for international clarity
  • Sentence length: Very short, scannable

Examples from Google documentation:

Clear Google style: > “Use Cloud Functions to deploy code without managing servers. Cloud Functions runs your code when an event triggers it.”

Google’s distinctive elements:

  • No contractions (international readers)
  • Ultra-short sentences (mobile-first)
  • Product names as subjects (not “it”)
  • Explicit over implicit

Deep dive: For comprehensive coverage of writing for international audiences—including translation-friendly patterns, cultural adaptation, and formatting conventions—see 12-writing-for-global-audiences.md.

Google vs. Microsoft comparison:

| Aspect | Microsoft | Google |
|---|---|---|
| “You can’t configure…” | ✅ Acceptable | ❌ Use “You cannot…” |
| “It automatically scales” | ✅ Acceptable | ❌ Use “[Service name] automatically scales” |
| Sentence avg. | 18-22 words | 12-18 words |
| Paragraph length | 4-6 sentences | 2-4 sentences |

Wikipedia Manual of Style approach

Core philosophy: “Encyclopedic neutrality”

Voice characteristics:

  • Person: Third person only
  • Tone: Neutral, academic
  • Contractions: Never
  • Sentence length: Varies, completeness valued over brevity

Wikipedia rules:

  • ❌ “You should back up your data”
  • ✅ “Users should back up their data” or “Data should be backed up”
  • ❌ “We recommend…”
  • ✅ “Experts recommend…” or “According to [source]…”

When Wikipedia’s approach is appropriate:

For encyclopedic content: > “The Model-View-Controller (MVC) pattern separates concerns into three interconnected components. The model manages data, the view displays the interface, and the controller handles user input.”

For historical/technical background: > “REST was introduced by Roy Fielding in his 2000 doctoral dissertation. The architectural style emphasizes stateless communication and resource-based interactions.”

Not appropriate for procedural content: > “The user should open the terminal and type the command. The system will display output.” > (Awkward for instructions)

Comparison table: voice across style guides

| Element | Microsoft 📘 | Google 📘 | Wikipedia 📘 | This Repository |
|---|---|---|---|---|
| Primary person | Second (you) | Second (you) | Third person | Second (procedural), Third (concepts) |
| Contractions | Yes | No | No | Minimal |
| Sentence avg. length | 18-22 words | 12-18 words | Varies (20-30) | 15-25 words |
| Active voice % | 80-90% | 85-95% | 60-70% | 75-85% |
| Tone | Conversational | Friendly-precise | Neutral-academic | Professional-accessible |
| Technical terms | Define on first use | Link to glossary | Link to related articles | Define + link |

👤 Person usage (first, second, third)

Person choice fundamentally affects tone and reader relationship with content.

First person (I, we, us)

Appropriate contexts:

Tutorials (plural “we” as guide): > “In this tutorial, we’ll build a REST API from scratch. We’ll start with basic routing, then add authentication.”

Author’s note in explanation: > “I recommend starting with the simpler approach before attempting the optimized version.”

Open-source project documentation (community “we”): > “We welcome contributions from developers worldwide. We review pull requests within 48 hours.”

Inappropriate contexts:

Reference documentation: > “We provide three authentication methods…” (system provides, not “we”)

How-to guides: > “We need to configure the settings…” (reader configures, not collective “we”)

Second person (you, your)

Appropriate contexts:

Procedural instructions: > “You must install Node.js before you run the application.”

Conditional guidance: > “If you need real-time updates, use WebSockets instead of HTTP polling.”

Troubleshooting: > “If you see error 404, check your URL configuration.”

Inappropriate contexts:

Encyclopedic content: > “You can find REST in many modern APIs…” (encyclopedias don’t address readers)

Reference definitions: > “The function accepts three parameters you pass…” (reference describes, doesn’t instruct)

Third person (user, developer, system)

Appropriate contexts:

API reference: > “The function returns null when the user provides an invalid parameter.”

Conceptual explanation: > “Developers choose NoSQL databases when the data model requires flexibility.”

System behavior: > “The application validates input before processing the request.”

Inappropriate contexts:

Step-by-step instructions: > “The user should click the button and enter their password.” (second person clearer)

Repository guidelines

Based on documentation.instructions.md:

| Content Type | Preferred Person | Rationale |
|---|---|---|
| Tutorials | Second (you) + occasional plural first (we) | Direct instruction + guidance |
| How-to guides | Second (you) | Action-oriented clarity |
| Reference | Third person | Objective description |
| Explanation | Third person + educational first plural (we) | Neutral analysis + shared exploration |
| Validation prompts | Second (addressing the model) | Clear agent instructions |

🎭 Tone and register

Tone conveys attitude; register indicates formality level. Both should match audience expectations and content purpose.

Tone spectrum for technical documentation

Formal ← → Conversational

| Tone | Example | Appropriate Context |
|---|---|---|
| Formal | “One must ensure data persistence prior to system termination.” | Academic papers, standards documents |
| Professional | “Save your work before closing the application.” | Enterprise documentation, official guides |
| Conversational | “Don’t forget to save before you close!” | Tutorials, blog-style content |
| Casual | “Make sure you hit that save button!” | Community forums, chat documentation |

Repository standard: Professional with occasional conversational warmth in tutorials

Adjusting tone by content type

Reference material: Austere, factual > “Parameters
> - userId (string, required): Unique identifier
> - options (object, optional): Configuration object
> Returns: User object or null”

How-to guide: Professional, direct > “To authenticate users, configure the authentication provider in appsettings.json. Add your client ID and secret to the configuration object.”

Tutorial: Warmer, encouraging > “Great! You’ve configured your first authentication provider. In the next section, we’ll test it with a sample user login.”

Explanation: Neutral, analytical > “OAuth 2.0 separates authentication from authorization, allowing third-party applications to access user resources without exposing credentials.”

Avoiding problematic tones

Condescending: ❌ “Obviously, you should always validate input.” ❌ “It’s simple: just follow these twelve steps.” ❌ “Even a beginner knows to check for errors.”

Presumptuous: ❌ “You probably want to use React for this.” ❌ “Most developers prefer…” ❌ “Clearly, this is the best approach.”

Apologetic: ❌ “Sorry for the confusion, but…” ❌ “Unfortunately, you’ll have to…” ❌ “We apologize for the complicated setup.”

Overly casual: ❌ “This API is super awesome!” ❌ “Just throw your code in here and you’re good to go.” ❌ “LOL, debugging is the worst, right?”

Wikipedia’s words to watch

Wikipedia identifies problematic words that undermine neutrality:

Peacock terms (unjustified praise):

  • ❌ “elegant solution,” “groundbreaking,” “world-class”
  • ✅ Provide evidence: “The algorithm reduces time complexity from O(n²) to O(n log n)”

Weasel words (vague attribution):

  • ❌ “Some experts think,” “it is widely believed,” “many developers say”
  • ✅ Specific attribution: “According to the 2024 Stack Overflow survey, 38% of developers…”

Editorial comments:

  • ❌ “Note that,” “obviously,” “clearly,” “of course”
  • ✅ State facts directly without meta-commentary

Repository application: Our validation system flags these patterns through logic and fact-checking dimensions.

⚠️ Common voice pitfalls

Pitfall 1: mixing person

Inconsistent: > “You should configure the API key. Users can then authenticate. One must ensure the secret remains private.”

Consistent: > “Configure your API key. You can then authenticate. Ensure your secret remains private.”

Pitfall 2: nominalizations (zombie nouns)

Nominalization: Converting verbs to nouns, creating wordiness

Nominalized (passive, wordy): > “The implementation of the optimization was done to achieve a reduction in latency.”

Active verbs: > “We implemented the optimization to reduce latency.”

Common nominalizations to avoid:

| Nominalization | Better Verb |
|---|---|
| “Make a decision” | “Decide” |
| “Perform an analysis” | “Analyze” |
| “Conduct an investigation” | “Investigate” |
| “Achieve implementation” | “Implement” |
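Patterns like these can be flagged mechanically. The phrase list below is illustrative, not exhaustive, and a real validator would also handle inflections (“makes a decision”, “made the decision”).

```python
import re

# Illustrative nominalization patterns mapped to their direct verbs.
NOMINALIZATIONS = {
    r"\bmake (?:a|the) decision\b": "decide",
    r"\bperform (?:an|the) analysis\b": "analyze",
    r"\bconduct (?:an|the) investigation\b": "investigate",
    r"\breach (?:a|the) conclusion\b": "conclude",
}

def flag_nominalizations(text: str) -> list:
    """Return (pattern, suggested verb) pairs for nominalizations found in text."""
    return [(pattern, verb) for pattern, verb in NOMINALIZATIONS.items()
            if re.search(pattern, text, re.IGNORECASE)]
```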

Pitfall 3: hedging language

Excessive hedging undermines authority:

Over-hedged: > “You might possibly want to perhaps consider potentially using the cache, which could maybe improve performance somewhat.”

Confident (still accurate): > “Using the cache improves performance in most scenarios.”

Appropriate hedging (when uncertainty is real): ✅ “Performance may vary depending on network conditions.”

Pitfall 4: anthropomorphizing technology

Anthropomorphized: > “The API wants you to provide credentials.” > “The function is happy to accept three parameters.” > “The system thinks your input is invalid.”

Factual: > “The API requires credentials.” > “The function accepts three parameters.” > “The system rejects invalid input.”

Pitfall 5: buried verbs

Buried verb: Hidden in a noun phrase

Buried: > “We are in agreement that validation is a requirement.”

Direct: > “We agree that validation is required.”

More examples:

| Buried Verb | Direct |
|---|---|
| “Came to a realization” | “Realized” |
| “Made an assumption” | “Assumed” |
| “Reached a conclusion” | “Concluded” |
| “Give consideration to” | “Consider” |

📌 Applying style principles to this repository

Voice standards from documentation instructions

From documentation.instructions.md:

Active voice preference:

```text
Target: 75-85% active voice
Rationale: Clarity and directness while allowing passive voice when appropriate
```

Sentence length target:

```text
Range: 15-25 words per sentence
Maximum: 30 words (exceptions for complex technical descriptions)
Rationale: Optimizes comprehension without oversimplification
```

Readability scores:

```text
Flesch Reading Ease: 50-70 (fairly difficult to standard)
Flesch-Kincaid Grade: 9-10 (high school level)
Rationale: Technical precision + accessibility
```

Validation workflow

Our validation system enforces style principles through automated and human review:

Grammar validation (grammar-review.prompt.md):

  • Active vs. passive voice ratio
  • Person consistency within sections
  • Sentence length distribution

Readability validation (readability-review.prompt.md):

  • Flesch Reading Ease score
  • Flesch-Kincaid Grade Level
  • Average sentence length
  • Complex word percentage

Logic validation (logic-analysis.prompt.md):

  • Hedging language patterns
  • Nominalizations
  • Anthropomorphized technology

See 05-validation-and-quality-assurance.md for complete validation system documentation.

Style evolution

Documentation style evolves based on:

  • User feedback on comprehension
  • Readability metrics from validation
  • Comparative analysis with authoritative sources
  • Community standards in technical writing

Living document principle: Style guidelines adapt as best practices emerge, while maintaining consistency within existing content.

✅ Conclusion

Voice and style fundamentally shape documentation effectiveness. They determine whether readers can understand, trust, and act on your content.

Key takeaways

  • Active voice matters — Use it 75-85% of the time for clarity, but embrace passive voice when appropriate (scientific tone, unknown actor, emphasis on object)
  • Readability is measurable — Flesch scores 50-70 and sentence lengths 15-25 words optimize technical comprehension without oversimplification
  • Style guides differ intentionally — Microsoft favors conversational warmth, Google emphasizes global clarity, Wikipedia requires neutral encyclopedic tone
  • Person consistency matters — Second person (you) for instructions, third person for reference, first person plural (we) sparingly for tutorials
  • Tone should match purpose — Professional for how-to guides, austere for reference, warmer for tutorials, neutral for explanation
  • Common pitfalls are avoidable — Watch for person mixing, nominalizations, excessive hedging, anthropomorphizing, and buried verbs

📚 References

Official style guides

Microsoft Writing Style Guide - Top 10 Tips 📘 [Official]
Core voice and style principles including “write like you speak” and active voice preference.

Google Developer Documentation Style Guide - Voice and Tone 📘 [Official]
Guidance on conversational but professional tone for global developer audiences.

📚 Deep Dive: For comprehensive Microsoft voice principles (warm/relaxed, crisp/clear, ready to help), contractions usage, and bias-free language, see the dedicated Microsoft Voice and Tone Analysis.

Wikipedia Manual of Style - Grammar and Usage 📘 [Official]
Encyclopedic voice standards including third person preference and passive voice usage.

Wikipedia Manual of Style - Words to Watch 📘 [Official]
Comprehensive guide to problematic words including peacock terms, weasel words, and editorial language.

Readability and cognitive load

Flesch Reading Ease - Wikipedia 📘 [Official]
Technical explanation of Flesch-Kincaid readability formulas and scoring interpretation.

Plain Language Guidelines - Federal Plain Language 📘 [Official]
US government standards for clear, accessible writing including readability targets.

Cognitive Load Theory - Wikipedia 📗 [Verified Community]
Psychological foundation for understanding why sentence length and structure affect comprehension.

Writing technique resources

The Elements of Style - Strunk & White 📗 [Verified Community]
Classic writing guide emphasizing active voice, brevity, and clarity.

Chicago Manual of Style - Grammar 📘 [Official]
Authoritative grammar reference including detailed voice and mood guidance.

Repository-specific documentation

Documentation Instructions - Voice and Tone [Internal Reference]
This repository’s comprehensive voice guidelines with specific targets and rationale.

Validation Criteria - Readability [Internal Reference]
Flesch score targets (50-70), grade level (9-10), and sentence length standards (15-25 words).

Grammar Review Prompt [Internal Reference]
Validation prompt for checking active/passive voice, person consistency, and sentence structure.

Readability Review Prompt [Internal Reference]
Validation prompt for analyzing Flesch scores, grade level, and readability optimization.