I've spent my career as an elite security researcher hunting vulnerabilities. My job has always been to think like an attacker: find the gaps and exploit the loopholes.
When we founded Lema, I brought that same mindset to third-party risk. What I found was exactly what I expected: companies were treating their biggest attack surface with spreadsheets and self-reported questionnaires. A discipline that should be risk engineering was stuck doing compliance theater.
This post is about changing that. It's about what happens when you apply vulnerability research thinking to vendor risk.
Your vendor passed the assessment. SOC 2 Type II, privacy controls available, approved.
Then you discovered your developers have been sending production secrets to an AI-powered code editor for six months because privacy mode was off by default and nobody knew to turn it on.
Or you learned from a class action lawsuit that your business communications platform had been using customer call recordings to train AI without consent, and that it had added a Philippines-based transcription service that was processing customer SSNs spoken on support calls.
Or your customer engagement platform quietly removed "we do not sell your data" from their privacy policy after a breach and lawsuit, and you found out months later.
The assessment didn't catch it because it asked if controls exist, not if anyone's using them.
TPRM is an audit process applied to an engineering problem. And audits can't find what vendors don't tell you.
A risk engineer finds what could actually go wrong, not what vendors say about their controls.
The output isn't a score. It's: this is broken, this breaks if the vendor fails, here's how to fix it.
It's the difference between an audit and a penetration test. One asks whether you're secure. The other finds out, and prepares you for the moment something breaks.
Risk engineering is required when risk isn't obvious from documentation alone. Sometimes risk emerges because usage changes. Other times, the risk exists from the start but only becomes visible once you understand how the relationship actually works.
Here's how an AI-powered code editor would be treated under TPRM vs. risk engineering.
The Vendor: Provides AI code completion and editing for developers.
Your Environment: 50 backend developers writing code with database credentials, API keys, and customer data queries.
The vendor answered every question honestly.
All true. All compliant. All useless.
Because the real question isn't "do privacy controls exist?"
The real question is "are your developers sending production secrets to a third party right now?"
Current TPRM can't answer that. Risk engineering can.
During a live demo, a CISO asked us to analyze a vendor his team was evaluating, one Lema had never seen before. In 90 seconds, Lema identified that code access meant secrets access, a risk their internal review had completely missed.
His response:
"Your tool already spotted things we did not contemplate in the cursory review. No one really thought about the fact that we're giving it code access and therefore it has secrets access."
This is what risk engineering looks like. Not asking what controls exist. Finding what's actually broken.
What Lema Enables
Current TPRM tools are built for auditors. Lema is built for risk engineers. It gives you three ways to find what vendors don't tell you:
Third-Party Artifacts: Analyzes SOC 2 reports, penetration tests, and security policies. It detected, for example, that a vendor's privacy policy required an email opt-out for AI training (an email you didn't know to send).
Public Intelligence: Monitors breaches, lawsuits, policy changes, and subprocessor additions. This is how it found the Philippines transcription service, and how it caught "we do not sell your data" being quietly removed.
Blast Radius Monitor: Connects to Okta, Wiz, and Netskope to show who's using each vendor and what permissions they have. It caught the moment someone gave the chatbot email access.
These three sources work together to find actual exposure:
Not "vendor has controls available."
But "privacy mode OFF by default + 50 developers using it + none enabled privacy mode = production secrets exposed right now."
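That combination logic can be sketched as a simple rule that joins a vendor's documented defaults with observed usage. This is an illustrative sketch only; the data classes and field names are hypothetical, not Lema's actual schema or API:

```python
from dataclasses import dataclass

@dataclass
class VendorSignal:
    # From artifact analysis: what the vendor's documentation says the defaults are.
    privacy_mode_default_on: bool

@dataclass
class UsageSignal:
    # From identity/SaaS telemetry: who is actually using the tool, and how.
    active_users: int
    users_with_privacy_mode_on: int

def live_exposure(vendor: VendorSignal, usage: UsageSignal) -> str:
    """Combine documented defaults with observed usage into a single finding."""
    unprotected = usage.active_users - usage.users_with_privacy_mode_on
    if not vendor.privacy_mode_default_on and unprotected > 0:
        return (f"{unprotected} users are sending data with privacy mode off; "
                "production secrets may be exposed right now")
    return "no live exposure detected from these signals"

# The scenario from this post: privacy mode off by default, 50 developers, none opted in.
print(live_exposure(VendorSignal(privacy_mode_default_on=False),
                    UsageSignal(active_users=50, users_with_privacy_mode_on=0)))
```

The point of the sketch is the join itself: neither signal alone is a finding. A compliant default plus zero users is fine; a risky default plus heavy unconfigured usage is an incident in progress.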
Risk engineers can finally verify what's actually happening, understand their actual exposure, and take specific action.
Watch Lema assess your vendors in seconds. Pick any vendor you're evaluating right now and see what risks your current process is missing.
Traditional TPRM is a "check-the-box" audit process that relies on a vendor’s self-reported data (like SOC 2 reports). Risk Engineering is a proactive security discipline. It focuses on the live interface between a vendor and your organization, using forensic artifact analysis and real-time monitoring to identify actual production exposure, not just theoretical compliance.
Static questionnaires can’t catch what your procurement team doesn’t know exists. Risk Engineering integrates with your security stack (e.g., Wiz, Netskope) to detect unsanctioned AI tools and integrations as they happen. By mapping the blast radius of these tools, it allows security teams to mitigate risks before they bypass governance.
Absolutely. Compliance is a snapshot of a vendor's past; risk is a reality of your present. A vendor can meet all SOC 2 requirements while shipping a tool with "opt-out" privacy defaults that ingest your IP into their training models. Risk Engineering identifies these configuration drifts that traditional audits miss.
It is the process of using Lema’s patented zero-hallucination engine to ingest and cross-reference a vendor’s legal and technical documents (SOC 2, DPA, Privacy Policies). Unlike a human auditor, Lema can scan 16,000+ artifacts in 24 hours to find hidden clauses, such as subprocessor changes or "silent" privacy policy updates, with 100% accuracy.
No, it powers it. Risk Engineering replaces the manual, high-latency work inside your GRC. Lema integrates with your existing workflow to turn a static database into a live, automated defense platform that calculates real-time impact rather than just storing PDF files.
While traditional TPRM reviews take weeks of back-and-forth communication, a Lema-powered Risk Engineering assessment takes under five minutes. We prioritize the controls that actually matter based on the vendor’s specific access to your critical assets.