
What is a Risk Engineer?

By Tomer Roizman | February 6, 2026 | 5 minute read

I've spent my career as an elite security researcher hunting vulnerabilities. My job has always been to think like an attacker: find the gaps and exploit the loopholes.

When we founded Lema, I brought that same mindset to third-party risk. What I found was exactly what I expected: companies were treating their biggest attack surface with spreadsheets and self-reported questionnaires. The discipline that should be engineering risk was stuck doing compliance theater.

This post is about changing that. It's about what happens when you apply vulnerability research thinking to vendor risk.

Why TPRM is Broken

Your vendor passed the assessment. SOC 2 Type II, privacy controls available, approved.

Then you discovered your developers have been sending production secrets to an AI-powered code editor for six months because privacy mode was off by default and nobody knew to turn it on.

Or you learned from a class action lawsuit that your business communications platform has been using customer call recordings to train AI without consent, and they added a Philippines-based transcription service that's processing customer SSNs spoken on support calls.

Or your customer engagement platform quietly removed "we do not sell your data" from their privacy policy after a breach and lawsuit, and you found out months later.

The assessment didn't catch it because it asked if controls exist, not if anyone's using them.

TPRM is an audit process applied to an engineering problem. And audits can't find what vendors don't tell you.

What is Risk Engineering?

A risk engineer finds what could actually go wrong, not what vendors say about their controls.

The output isn't a score. It's: this is broken, this breaks if the vendor fails, here's how to fix it.

It's the difference between an audit and a penetration test. One asks if you're secure. The other proves it and prepares for the moment something breaks.

What Risk Engineering Looks Like in Practice

Risk engineering is required when risk isn't obvious from documentation alone. Sometimes risk emerges because usage changes. Other times, the risk exists from the start but only becomes visible once you understand how the relationship actually works.

Here's how an AI-powered code editor would be treated under TPRM vs. risk engineering.

The Vendor: Provides AI code completion and editing for developers.

Your Environment: 50 backend developers writing code with database credentials, API keys, and customer data queries.

Current TPRM

What you do: Send a security questionnaire.
Questions you ask:
  • Do you have data privacy controls?
  • Can users control what data is shared?
  • Is data encrypted?
  • Do you have a SOC 2?
What you find:
  ✅ Yes, privacy controls available
  ✅ Yes, privacy mode available
  ✅ Yes, TLS 1.3
  ✅ Yes, SOC 2 Type II
Time spent: 2-3 hours
Assessment result: Low risk. Privacy controls available. Approved.
What the vendor can honestly say: "Yes, we have privacy controls. Users can enable privacy mode."
What actually matters to you: Whether privacy controls exist.
Actions required: None. Vendor approved.

Risk Engineering

What you do: Read vendor documentation, analyze default settings, correlate with how your developers actually use the tool.
Questions you ask:
  • What does the vendor collect by default?
  • Is privacy mode on or off by default?
  • What are our developers actually doing in this tool?
  • What's in the code they're writing?
What you find:
  • Privacy mode is OFF by default
  • When OFF, the vendor collects all code written, all prompts, all edits, all files opened
  • Your developers write backend code with database queries and API integrations
  • Developers hardcode API keys during development
  • Production AWS credentials likely sent to the vendor
Time spent: 90 seconds
Assessment result: High risk. Production secrets exposed.
What the vendor can honestly say: The same thing. But that doesn't matter.
What actually matters to you: Whether anyone is actually using those controls.
Actions required:
  1. Audit all developer installations
  2. Enable privacy mode organization-wide
  3. Rotate any API keys that may have been exposed
  4. Add privacy mode to the developer onboarding checklist
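The first three remediation steps can be sketched as a quick audit script. This is a minimal, hypothetical sketch: the settings file name, the `privacy_mode` key, and the AWS key regex are illustrative assumptions, not the actual schema of any specific editor.

```python
import json
import re
from pathlib import Path

# Hypothetical config location and key name -- real editors differ.
SETTINGS_GLOB = "**/editor_settings.json"
AWS_KEY_RE = re.compile(r"AKIA[0-9A-Z]{16}")  # AWS access key ID pattern


def audit_privacy_mode(home_dirs: Path) -> list[str]:
    """Return users whose editor settings leave privacy mode disabled."""
    exposed = []
    for cfg in home_dirs.glob(SETTINGS_GLOB):
        settings = json.loads(cfg.read_text())
        # The vendor default is OFF, so a missing key means data is collected.
        if not settings.get("privacy_mode", False):
            exposed.append(cfg.parent.name)
    return sorted(exposed)


def find_hardcoded_keys(repo: Path) -> list[tuple[str, str]]:
    """Flag source files containing AWS-style access key IDs for rotation."""
    hits = []
    for src in repo.rglob("*.py"):
        for match in AWS_KEY_RE.findall(src.read_text(errors="ignore")):
            hits.append((str(src), match))
    return hits
```

The point isn't this particular script; it's that "audit installations" and "rotate exposed keys" are mechanical checks once you know which default to look for.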

Why Current TPRM Failed

The vendor answered every question honestly:

  • "Do you have privacy controls?" → Yes
  • "Can users control data sharing?" → Yes
  • "Is data encrypted?" → Yes

All true. All compliant. All useless.

Because the real question isn't "do privacy controls exist?"

The real question is "are your developers sending production secrets to a third party right now?"

Current TPRM can't answer that. Risk engineering can.

The Proof

During a live demo, a CISO asked us to analyze a vendor his team was evaluating, one Lema had never seen before. In 90 seconds, Lema identified that code access meant secrets access, a risk their internal review had completely missed.

His response:

"Your tool already spotted things we did not contemplate in the cursory review. No one really thought about the fact that we're giving it code access and therefore it has secrets access."

This is what risk engineering looks like. Not asking what controls exist. Finding what's actually broken.

What Lema Enables

Current TPRM tools are built for auditors. Lema is built for risk engineers. It gives you three ways to find what vendors don't tell you:

Third-Party Artifacts — Analyzes SOC 2 reports, penetration tests, security policies. Detected that a vendor's privacy policy requires opt-out via email for AI training (you didn't know to send that email).

Public Intelligence — Monitors breaches, lawsuits, policy changes, subprocessor additions. Found the Philippines transcription service. Found when "we do not sell your data" was quietly removed.

Blast Radius Monitor — Connects to Okta, Wiz, Netskope. Shows who's using each vendor and what permissions they have. Caught when someone gave the chatbot email access.

These three sources work together to find actual exposure:

Not "vendor has controls available."

But "privacy mode OFF by default + 50 developers using it + none enabled privacy mode = production secrets exposed right now."

Risk engineers can finally verify what's actually happening, understand their actual exposure, and take specific action.
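That "defaults + usage + behavior" combination can be expressed as a simple rule. The field names below are made up for illustration; they are not Lema's actual data model.

```python
from dataclasses import dataclass


@dataclass
class VendorSignals:
    # Illustrative fields only -- not Lema's actual schema.
    privacy_mode_default_on: bool   # from vendor documentation
    active_users: int               # from identity provider (e.g. Okta)
    users_with_privacy_mode: int    # from endpoint/config telemetry
    handles_secrets: bool           # does usage touch credentials?


def exposure(v: VendorSignals) -> str:
    """Combine default, usage, and behavior into an exposure verdict."""
    unprotected = v.active_users - v.users_with_privacy_mode
    if not v.privacy_mode_default_on and unprotected > 0 and v.handles_secrets:
        return f"HIGH: {unprotected} unprotected users handling secrets"
    if unprotected > 0:
        return "MEDIUM: some users without privacy mode"
    return "LOW: no unprotected usage detected"
```

Run it on the scenario above (`VendorSignals(False, 50, 0, True)`) and the verdict is HIGH, even though every questionnaire answer was a "yes."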

See It in Action

Watch Lema assess your vendors in seconds. Pick any vendor you're evaluating right now and see what risks your current process is missing.

Key Takeaways

  • Traditional TPRM catches compliance, not exposure; real risk lives in how tools are actually used.
  • AI and modern SaaS silently expand the blast radius when defaults, usage, and behavior go unchecked.
  • Risk engineering applies attacker thinking to vendor risk to find what's actually broken.

Upgrade your TPRM team into Risk Engineers

Get a Demo

FAQs

What is the core difference between Risk Engineering and traditional TPRM?

Traditional TPRM is a "check-the-box" audit process that relies on a vendor’s self-reported data (like SOC 2 reports). Risk Engineering is a proactive security discipline. It focuses on the live interface between a vendor and your organization, using forensic artifact analysis and real-time monitoring to identify actual production exposure, not just theoretical compliance.

How does Risk Engineering handle "Shadow AI" and vendor sprawl?

Static questionnaires can’t catch what your procurement team doesn’t know exists. Risk Engineering integrates with your security stack (e.g., Wiz, Netskope) to detect unsanctioned AI tools and integrations as they happen. By mapping the blast radius of these tools, it allows security teams to mitigate risks before they bypass governance.
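The core of shadow-AI detection is a set difference: domains observed in egress traffic, minus what's sanctioned. A minimal sketch, assuming a hypothetical egress log format and an illustrative watchlist (the domains and field names are examples, not what any specific integration emits):

```python
# Illustrative lists -- maintain these from your own approvals and threat intel.
APPROVED = {"approved-ai.example.com"}
AI_DOMAINS = {"chat.openai.com", "api.anthropic.com", "cursor.sh"}


def shadow_ai(egress_log: list[dict]) -> set[str]:
    """Return AI-tool domains seen in traffic but not on the approved list."""
    seen = {rec["domain"] for rec in egress_log}
    return (seen & AI_DOMAINS) - APPROVED
```

In practice the "seen" set comes from your SWG or CASB, but the governance question stays the same: which of these did anyone actually approve?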

Can a vendor be compliant but still pose a high risk?

Absolutely. Compliance is a snapshot of a vendor's past; risk is a reality of your present. A vendor can meet all SOC 2 requirements while shipping a tool with "opt-out" privacy defaults that ingest your IP into their training models. Risk Engineering identifies these configuration drifts that traditional audits miss.

What is "Forensic Artifact Analysis"?

It is the process of using Lema’s patented zero-hallucination engine to ingest and cross-reference a vendor’s legal and technical documents (SOC 2, DPA, Privacy Policies). Unlike a human auditor, Lema can scan 16,000+ artifacts in 24 hours to find hidden clauses, such as subprocessor changes or "silent" privacy policy updates, with 100% accuracy.

Does Risk Engineering replace my GRC tool?

No, it powers it. Risk Engineering replaces the manual, high-latency work inside your GRC. Lema integrates with your existing workflow to turn a static database into a live, automated defense platform that calculates real-time impact rather than just storing PDF files.

How long does a Risk Engineering assessment take?

While traditional TPRM reviews take weeks of back-and-forth communication, a Lema-powered Risk Engineering assessment takes under five minutes. We prioritize the controls that actually matter based on the vendor’s specific access to your critical assets.

About the Author
Tomer Roizman
CTO & Co-Founder, Lema.ai
Tomer Roizman is the CTO and Co-Founder of Lema.ai. With over a decade of experience in cybersecurity, Tomer has a distinguished background in the Israeli Intelligence Community, where he specialized in vulnerability research and led major security research projects. Prior to co-founding Lema, he served as the Research Lead at the API security unicorn Noname Security. Tomer holds an MBA from Tel Aviv University and is a recognized expert in building secure, scalable AI-driven architectures.