KAVACH • February 28, 2026 • Updated April 30, 2026 • 5 min read

Manual TARA vs Automated TARA: Why Spreadsheets Don't Scale

By Agnile Engineering Team

Key Takeaways

TL;DR — Manual spreadsheet-based TARA takes 4–8 weeks and 75–145 engineering hours per system and suffers from inconsistent ratings, broken traceability, and incomplete threat coverage. Automated, AI-driven TARA platforms like KAVACH cut that to 7–14 hours per system — roughly a 10× improvement — while identifying 2–3× more threats and generating ISO/SAE 21434 Work Products directly from the analysis.

  1. A single manual TARA typically consumes 3–5 workshops, 4–8 weeks of elapsed time, and 75–145 engineering hours per system — and a vehicle program of 15–30 systems multiplies that burden by an order of magnitude.
  2. Spreadsheets fail at automotive scale along six axes: no structured threat knowledge, inconsistent ratings, broken traceability, version-control chaos, no cross-system reuse, and no audit-ready structured output.
  3. Automated TARA with KAVACH typically takes 7–14 engineering hours per system (2–4 for modeling, 4–8 for expert review, 1–2 for Work Product generation) — roughly a 10× reduction over the manual process.
  4. KAVACH combines three capabilities: AI + RAG-based threat identification, a curated catalog of thousands of automotive-specific threats mapped to UNECE R155 Annex 5 and STRIDE, and automatic ISO/SAE 21434 Work Product generation with full traceability.
  5. Automated platforms consistently surface 2–3× more threats than manual workshops for the same system, closing coverage gaps caused by workshop-participant limitations.

At a Glance

One-Sentence Answer
Manual TARA can work for small scopes, but architecture-aware tooling improves consistency, traceability, reuse, and reviewability for complex vehicle programs.
Who This Is For
Cybersecurity managers, TARA leads, system architects, engineering leaders, and teams evaluating KAVACH or other ISO/SAE 21434 tools.
Last Reviewed
May 2026
Primary References
ISO/SAE 21434, TARA methods, architecture-aware cybersecurity analysis, engineering workflow automation.
Practical Use
Use this guide to compare spreadsheet-based TARA with structured, engineer-reviewed cybersecurity engineering workflows.

Manual TARA using spreadsheets typically takes 4–8 weeks per system, produces inconsistent risk ratings between engineers, and struggles to maintain traceability across ISO/SAE 21434's 42 required Work Products. Automated TARA platforms like KAVACH reduce this cycle to hours by using AI-powered threat identification, structured risk scoring, and automatic Work Product generation. For any organization performing TARA at scale, the question is no longer whether to automate, but when.

| Dimension | Manual TARA | Automated TARA |
| --- | --- | --- |
| Cycle time | 4–8 weeks per system | Hours, with continuous re-runs |
| Threat coverage | Workshop participants' working memory | AI retrieval over a curated automotive corpus |
| Risk scoring | Subjective; drifts between engineers | Structured per ISO/SAE 21434 Clause 15 |
| Standard coverage | Clause 15 in isolation, often partial | Clauses 5–15, ISO/SAE 21434 lifecycle Work Products |
| Audit readiness | Reformatted at the end, often by hand | Built throughout, exportable any time |
| Repeatability | Each program restarts from blank Excel | Reusable templates, traceable lineage |

Six dimensions where manual spreadsheet workflows break under automotive-program scale, and where structured automation closes the gap.

The Manual TARA Problem

Most automotive engineering teams today perform TARA using a combination of spreadsheets (Excel or Google Sheets), word processing documents, and in-person or virtual workshops. The typical workflow looks like this:

A cybersecurity engineer creates a spreadsheet template with columns for assets, threats, impact ratings, feasibility ratings, risk levels, and treatment decisions. The team schedules a series of workshops — usually 3–5 sessions of 2–4 hours each — where subject matter experts from systems engineering, software, hardware, and cybersecurity review the system architecture and brainstorm threats. Between workshops, the cybersecurity engineer consolidates inputs, resolves conflicting assessments, and updates the spreadsheet.

After the workshops, the engineer documents the results in formal work product templates, cross-references them to the system architecture, and circulates the outputs for review. Reviewers provide comments in email or document markups, requiring further iteration. The entire process, from initial workshop scheduling to final approved work products, typically spans 4–8 weeks per system.

For a vehicle program with 15–30 systems requiring TARA, this manual approach consumes hundreds of engineering weeks and thousands of workshop hours — a staggering resource commitment that directly impacts program timelines and engineering budgets.

Why Spreadsheets Fail at Scale

Spreadsheets are general-purpose tools being forced into a highly specialized role. Their fundamental limitations become critical at automotive scale:

  • No structured threat knowledge: Spreadsheets have no built-in knowledge of automotive threats. Every threat must be manually identified by the workshop participants, meaning the quality of the TARA depends entirely on who is in the room. If the team lacks experience with a specific attack vector — say, USB-based firmware injection or Bluetooth Low Energy protocol exploitation — that threat simply will not appear in the spreadsheet.
  • Inconsistent ratings: When two different engineers assess the same threat on the same system, they frequently arrive at different impact and feasibility ratings. Studies in related domains show that subjective risk assessments between analysts can vary by two or more levels on a five-point scale. Spreadsheets provide no mechanism to enforce consistent scoring criteria.
  • Broken traceability: ISO/SAE 21434 requires full traceability from assets to threats to impacts to risks to treatment decisions to Cybersecurity Goals. In a spreadsheet, traceability is maintained through manual cross-referencing — cell references, row numbers, or naming conventions. As the spreadsheet grows (a single ECU TARA can have 200–500 rows), these references break, become inconsistent, or are simply forgotten.
  • Version control chaos: Spreadsheet-based TARAs inevitably spawn multiple versions as different stakeholders contribute. “TARA_v3_final_FINAL_reviewed_JK.xlsx” is a familiar sight in automotive engineering teams. Without proper version control, it is impossible to maintain a single source of truth.
  • No reuse across systems: Threats identified for one system (e.g., a telematics ECU) are often applicable to similar systems in the same vehicle or across vehicle programs. Spreadsheets provide no mechanism for systematic reuse of threat knowledge, leading to duplicate effort and missed threats when similar systems are analyzed independently.
  • Audit readiness: When audit time comes — whether for CSMS certification, UNECE R155 type approval, or customer review — teams spend days reformatting spreadsheet data into presentable work products. The gap between the working analysis (spreadsheet) and the formal deliverable (document) creates additional effort and opportunities for errors.

What Automated TARA Looks Like

An automated TARA platform transforms each pain point of the manual process into a structured, repeatable workflow:

Instead of blank spreadsheet columns, the platform presents a structured system modeling interface where the analyst defines the system architecture — components, interfaces, data flows, and trust boundaries. This model becomes the foundation for all subsequent analysis.
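As a rough illustration, such a system model can be captured as typed records rather than spreadsheet rows. The classes and field names below are invented for this sketch and are not KAVACH's actual schema:

```python
# Hypothetical structured system model: components with typed interfaces and
# explicit trust-boundary flags, instead of free-text spreadsheet cells.
from dataclasses import dataclass, field

@dataclass
class Interface:
    name: str                           # e.g. "cellular", "vehicle_bus"
    protocol: str                       # e.g. "LTE", "CAN", "BLE"
    crosses_trust_boundary: bool = False

@dataclass
class Component:
    name: str
    component_type: str                 # e.g. "telematics_ecu", "gateway"
    interfaces: list[Interface] = field(default_factory=list)

@dataclass
class SystemModel:
    name: str
    components: list[Component] = field(default_factory=list)

model = SystemModel(
    name="Telematics",
    components=[
        Component("TCU", "telematics_ecu", [
            Interface("cellular", "LTE", crosses_trust_boundary=True),
            Interface("vehicle_bus", "CAN"),
        ]),
    ],
)

# Because the model is structured, queries like "which interfaces cross a
# trust boundary?" are mechanical rather than a manual spreadsheet scan.
external = [i.name for c in model.components for i in c.interfaces
            if i.crosses_trust_boundary]
print(external)  # ['cellular']
```

Because every downstream artifact references these typed elements, later analysis steps can query the model instead of re-reading prose descriptions.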

Instead of brainstorming threats from scratch, the platform applies a curated threat catalog to the system model, automatically identifying applicable threats based on component types, interface protocols, and data classifications. The analyst reviews, refines, and supplements the automatically identified threats rather than building the list from nothing.
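The catalog-matching step can be pictured as a rule lookup over the modeled interfaces. The catalog entries and the protocol-based matching key below are invented for this sketch; a real catalog would match on many more attributes:

```python
# Illustrative rule-based matching of a threat catalog against a system
# model's interface protocols. Entries and IDs are hypothetical.
CATALOG = [
    {"id": "T-001", "title": "CAN message injection", "protocol": "CAN"},
    {"id": "T-002", "title": "BLE pairing downgrade", "protocol": "BLE"},
    {"id": "T-003", "title": "Cellular baseband exploitation", "protocol": "LTE"},
]

def applicable_threats(interface_protocols: set[str]) -> list[dict]:
    """Return catalog entries relevant to the protocols present in the model."""
    return [t for t in CATALOG if t["protocol"] in interface_protocols]

# A telematics ECU exposing LTE and CAN pulls in the matching threats
# automatically; a threat the workshop never thought of still surfaces
# as long as it is in the catalog.
hits = applicable_threats({"CAN", "LTE"})
print([t["id"] for t in hits])  # ['T-001', 'T-003']
```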

Instead of subjective rating discussions, the platform provides structured scoring frameworks with explicit criteria for each rating level. Guided assessment reduces inter-analyst variability and creates documented rationale for every rating decision.
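Structured scoring often reduces to a risk-matrix lookup in the spirit of ISO/SAE 21434 Clause 15, which derives a risk value from an impact rating and an attack feasibility rating. The matrix values below are purely illustrative; the standard leaves the exact mapping to each organization's risk determination method:

```python
# Risk determination as a matrix lookup. Rating scales and matrix values
# are illustrative, not normative.
IMPACT = ["negligible", "moderate", "major", "severe"]
FEASIBILITY = ["very_low", "low", "medium", "high"]

# Rows: impact rating; columns: feasibility rating; cells: risk value 1-5.
RISK_MATRIX = [
    [1, 1, 2, 2],
    [1, 2, 3, 3],
    [2, 3, 4, 4],
    [3, 4, 4, 5],
]

def risk_value(impact: str, feasibility: str) -> int:
    """Look up the risk value for an (impact, feasibility) pair."""
    return RISK_MATRIX[IMPACT.index(impact)][FEASIBILITY.index(feasibility)]

print(risk_value("severe", "high"))        # 5
print(risk_value("moderate", "very_low"))  # 1
```

Because every analyst uses the same matrix and the same rating criteria, two engineers assessing the same threat converge on the same risk value by construction.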

Instead of manual cross-referencing, the platform maintains automatic traceability from assets through threats, impacts, feasibility, risks, treatment decisions, and cybersecurity goals. Every element is linked, and changes propagate consistently.
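A minimal sketch of what automatic traceability buys: when records are linked by identifier, orphaned entries can be detected mechanically instead of by eyeballing cross-references. All IDs and field names here are hypothetical:

```python
# Mechanical traceability check: every threat must link back to a known
# asset and forward to a treatment decision.
assets = {"A-01": "Vehicle position data", "A-02": "Firmware image"}
threats = [
    {"id": "TH-01", "asset": "A-01", "treatment": "reduce"},
    {"id": "TH-02", "asset": "A-02", "treatment": None},  # decision still open
]

def trace_gaps(assets: dict, threats: list) -> list[tuple[str, str]]:
    """Return (threat_id, problem) pairs for every broken trace link."""
    gaps = []
    for t in threats:
        if t["asset"] not in assets:
            gaps.append((t["id"], "unknown asset"))
        if not t["treatment"]:
            gaps.append((t["id"], "missing treatment decision"))
    return gaps

print(trace_gaps(assets, threats))  # [('TH-02', 'missing treatment decision')]
```

In a spreadsheet this check is a manual review pass; in a linked model it is a query that can run on every change.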

Instead of reformatting for audits, the platform generates ISO/SAE 21434-compliant Work Products directly from the analysis data — ready for review, approval, and submission.

KAVACH's Approach: AI + RAG + Curated Threat Database

KAVACH is built specifically for automotive TARA automation. Its architecture combines three core capabilities:

  • AI-powered threat identification: KAVACH uses large language models augmented with a Retrieval-Augmented Generation (RAG) architecture to identify threats relevant to each specific system context. The AI understands automotive architectures, communication protocols, and attack patterns, enabling it to surface threats that even experienced engineers might miss.
  • Curated threat catalog: KAVACH's knowledge base includes thousands of curated automotive threat scenarios, mapped to UNECE R155 Annex 5 categories, STRIDE classifications, and ISO/SAE 21434 impact dimensions. This catalog is continuously updated based on new vulnerability disclosures, published research, and real-world automotive security incidents.
  • Structured Work Product generation: KAVACH automatically generates the TARA Work Products required by ISO/SAE 21434 — including asset inventories, threat lists, impact assessments, feasibility assessments, Risk Determination results, Risk Treatment decisions, and Cybersecurity Goals — all with full traceability.

Time Comparison: Manual vs Automated

The time savings from automated TARA are substantial:

Manual TARA per system: 4–8 weeks of elapsed time, including 15–25 hours of workshops, 40–80 hours of analysis and documentation, and 20–40 hours of review and iteration. Total: 75–145 engineering hours per system.

Automated TARA per system (KAVACH): 2–4 hours for system modeling and initial AI-generated analysis, 4–8 hours for expert review, refinement, and validation, and 1–2 hours for Work Product generation and final review. Total: 7–14 engineering hours per system.

This represents a roughly 10× reduction in engineering effort per system. Across a vehicle program with 20 systems, the difference is between 1,500–2,900 engineering hours (manual) and 140–280 engineering hours (automated) — a savings that directly translates to faster time-to-market and reduced compliance costs.
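The program-level arithmetic above can be reproduced directly from the per-system figures:

```python
# Program-level effort from the article's per-system hour ranges.
systems = 20
manual_hours = (75 * systems, 145 * systems)     # low/high bounds
automated_hours = (7 * systems, 14 * systems)

print(manual_hours)     # (1500, 2900)
print(automated_hours)  # (140, 280)

# Midpoint-to-midpoint reduction factor, consistent with "roughly 10x".
reduction = ((75 + 145) / 2) / ((7 + 14) / 2)
print(round(reduction, 1))  # 10.5
```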

Quality Comparison: Consistency and Completeness

Beyond time savings, automated TARA delivers measurably higher quality:

  • Threat coverage: Automated platforms with comprehensive threat catalogs consistently identify 2–3× more threats than manual workshops for the same system. The catalog ensures that known attack vectors are never overlooked due to workshop participant limitations.
  • Rating consistency: Structured scoring frameworks reduce inter-analyst variability in impact and feasibility ratings. When the same criteria and guidance are presented consistently, different analysts converge on similar ratings.
  • Traceability completeness: Automatic traceability eliminates the broken cross-references and orphaned entries that plague spreadsheet-based TARAs. Every threat traces back to an asset and forward to a Risk Treatment decision and Cybersecurity Goal.
  • Audit readiness: Work Products generated directly from the analysis data are always current, consistently formatted, and structured for audit review. There is no gap between the working analysis and the formal deliverable.

When to Switch from Manual to Automated TARA

The right time to adopt automated TARA depends on your organization's scale and maturity:

If you are performing your first TARA: Starting with an automated platform is ideal. It provides structure and guidance that helps teams new to ISO/SAE 21434 learn the process while producing compliant outputs from day one.

If you have 1–3 systems to analyze: Manual TARA is feasible but inefficient. An automated platform pays for itself in time savings even at this scale, while establishing a foundation for future programs.

If you have 5+ systems per vehicle program: Automated TARA is essential. The manual approach simply does not scale — the engineering hours, workshop coordination, and consistency challenges make it impractical for multi-system programs.

If you manage multiple vehicle programs: Automated TARA with cross-program reuse capabilities is a strategic necessity. Threats identified in one program should automatically inform analysis of similar systems in other programs, building institutional knowledge over time.

The automotive industry is moving toward mandatory cybersecurity compliance, and the volume of TARA work will only increase. Teams that adopt automated platforms now build capability, efficiency, and institutional knowledge that compounds over time. Those that wait will face an ever-growing backlog of manual TARA work that threatens program timelines and compliance deadlines.

Ready to see what automated TARA looks like? Request a KAVACH demo or explore our ISO/SAE 21434 guide for more on the TARA methodology.

Want to Review This on a Real Vehicle Architecture?

KAVACH and Agnile's cybersecurity engineering team help you connect architecture, assets, threats, attack paths, controls, and traceable cybersecurity evidence.