Lema Editorial Policy

At Lema, we publish content for security and risk leaders responsible for third-party exposure. That includes CISOs, heads of security, heads of third-party risk, vendor risk leaders, senior GRC and cyber risk leaders, and practitioners involved in vendor reviews and remediation.

Our goal is to publish content that is accurate and useful. This policy outlines how we produce, review, and maintain content across lema.ai and related public-facing materials.

Our editorial standards

We create a mix of technical and commercial content, including product pages, blog articles, solution pages, comparison pages, guides, one-pagers, decks, webinars, and other resource content. Regardless of format, every piece should do one thing clearly: help the reader better understand a third-party risk problem, why conventional approaches miss it, and what a stronger response looks like.

Accuracy

Any technical, product, workflow, or risk claim published under the Lema name should be grounded in something real. That may include approved internal product documentation, validated platform behavior, direct input from a subject matter expert, customer-approved information, or a trusted external source. We do not publish made-up benchmarks, inflated outcomes, or unsupported claims.

Our audience is often accountable for real security and risk outcomes. We aim to be clear, specific, and practical. When appropriate, we include examples, relationship context, scope assumptions, and explanations that connect technical findings to business impact.

Expertise

Content may be drafted by marketers, writers, founders, or product teams. If a page makes claims about artifact analysis, control validation, public intelligence, blast radius, scope drift, fourth-party exposure, hidden access, or remediation workflows, it should be reviewed internally by someone with the right expertise. That review is not a formality; it’s part of the publishing process.

Marketing language

We are confident in what Lema does, but our content should explain it clearly and without overstatement. That means being precise about what the platform is designed to surface, where automation helps, where human judgment still matters, and where outcomes depend on vendor behavior, relationship scope, and available evidence.

How content is reviewed

Our content goes through two rounds of review before it is published: editorial review and subject matter review.

Editorial review focuses on clarity, structure, tone, readability, and whether the piece is genuinely useful.

Subject matter review focuses on whether the technical, product, and risk details are correct, current, and aligned with approved messaging.

Some content may also require product, legal, security, privacy, or executive review, especially if it references customer examples, named vendors, public allegations, security incidents, regulatory obligations, or competitive claims.

Technical claims and product references

When we describe Lema, we want those descriptions to be accurate, current, and aligned with approved internal materials. We do not position Lema as a generic GRC platform, a compliance automation tool, a questionnaire-first TPRM workflow, a security ratings product, or a simple vendor inventory system. If a product capability has changed, the content should be updated. If a claim is still being discussed internally, it should stay out of public-facing copy until it is confirmed.

Risk findings, examples, and context

If we publish a guide, explainer, or comparison page, we should provide enough context for the reader to understand what risk is being discussed and why it matters. Where relevant, we include assumptions, workflow context, and practical implications.

If something is illustrative rather than a validated customer outcome, that should be stated clearly. We do not present hypothetical findings or generalized vendor scenarios as measured customer results.

Tool comparisons and listicles

When we publish rankings, “best of” lists, or comparisons involving TPRM, vendor risk, GRC, or security tooling, we aim to provide a practical framework that helps buyers make informed decisions.

We evaluate tools against criteria relevant to security and third-party risk teams, such as evidence quality, artifact analysis depth, intelligence coverage, business context, blast radius visibility, scope drift detection, remediation guidance, and workflow burden. We focus on specific analysis rather than promotional language. That includes highlighting meaningful strengths, limitations, tradeoffs, and where a tool may or may not be the right fit.

External sources and links

We sometimes link to third-party sources, documentation, research, standards, or public reporting to support claims or give readers additional context. These links are included because they are useful; they are not endorsements unless we explicitly say otherwise. We also look for opportunities to direct readers to relevant Lema resources when those links improve the experience and add useful context.

AI-assisted drafting

We may use AI tools to support parts of the content workflow, including research support, outlining, summarization, and editing. But we do not treat AI output as publish-ready by default.

Anything published under the Lema name should still be reviewed by a human editor and, where needed, by an internal subject matter expert. Responsibility for the final content remains with Lema.

Updating content

We review and refresh content periodically, especially pages that include technical claims, product details, comparison content, regulatory references, or sensitive security language.

Corrections

If we discover that something published under the Lema name is materially inaccurate, we correct it. That may involve fixing the page directly, revising technical language, removing unsupported claims, updating outdated information, or clarifying scope.

Contact us

If you notice an error, have a question, or want to contact us about this policy, please reach out to contact@lema.ai.