Stefano Marzani
Worldwide Head, Emerging Technologies
Amazon Web Services (AWS)

Dani Cherkassky
CEO
Kardome

Stas Matviyenko
Vice President, Voice AI Monetization
SoundHound AI

Andy Qiu
Senior Manager
SBD Automotive

If you weren’t able to attend the webinar live, don’t worry: we recorded it. The session explored practical insights into AI-driven revenue opportunities from Stefano Marzani (Amazon Web Services), Dani Cherkassky (Kardome), Stas Matviyenko (SoundHound AI), and Andy Qiu (SBD Automotive) as they discussed the path to profitability in automotive AI.

Download Webinar Slides

Contact SBD Automotive

Audience Q&A

Does this mean that agentic AI is a kind of "predictive" AI used to improve the personalization of the customer experience?

Not exactly — agentic AI goes beyond prediction. Predictive AI forecasts outcomes (e.g., anticipating maintenance needs). Agentic AI autonomously executes multi-step tasks on behalf of the user: booking appointments, managing charging schedules, completing purchases.

As Stefano Marzani (AWS) framed it in his slides, agents are "the abstraction layer IoT promised but never delivered" — they don't just predict, they act.

For personalization specifically, agentic AI can learn user patterns AND act on them autonomously (e.g., pre-ordering coffee on a familiar route).
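To make the distinction concrete, here is a minimal Python sketch. The class names and the coffee scenario are purely illustrative assumptions, not any vendor's actual implementation: predictive AI stops at a forecast, while an agent plans and executes the follow-on steps.

```python
from dataclasses import dataclass, field

def predictive_ai(route_history: list[str]) -> str:
    """Predictive AI stops at a forecast: it tells you what is likely."""
    # Naive "model": the most frequent past stop is the predicted next stop.
    return max(set(route_history), key=route_history.count)

@dataclass
class CoffeeAgent:
    """Agentic AI goes further: it executes a multi-step task autonomously."""
    actions_taken: list[str] = field(default_factory=list)

    def act_on(self, prediction: str) -> None:
        # Each step would call a real service (maps, payments) in production.
        self.actions_taken.append(f"check opening hours: {prediction}")
        self.actions_taken.append(f"place order at: {prediction}")
        self.actions_taken.append("schedule pickup on current route")

history = ["cafe_a", "cafe_b", "cafe_a"]
prediction = predictive_ai(history)   # forecast only
agent = CoffeeAgent()
agent.act_on(prediction)              # forecast plus autonomous execution
```

The difference in one line: the predictive model returns a value; the agent changes the world on the user's behalf.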

Right now, a lot of the hesitation around using AI comes from a lack of trust. Do you think regulating in-car AI features would build that trust? Why do you think NCAP and NHTSA are not regulating in-car AI yet? Do you foresee a roadmap?

Trust is indeed a critical barrier, and regulation could help — but not in the way most people expect.

Why NCAP/NHTSA haven't regulated in-car AI yet: Their mandates focus on safety-critical systems (ADAS, crash avoidance, occupant protection). In-car AI features like voice assistants and recommendation engines sit in a regulatory gray zone — not safety-critical enough for NCAP scoring, but they do influence user trust across the entire vehicle ecosystem.

The privacy-trust-monetization link (from our Panel Topic 3): SBD's consumer research shows that when users perceive their data is being exploited, willingness-to-pay drops across the ENTIRE AI feature portfolio — not just the offending feature. One "Grudge" experience on privacy can poison every "Hero" feature's P&L.

Regulatory roadmap outlook:
• EU AI Act (2025–2026 enforcement) is the closest framework, classifying AI by risk tier
• China has a separate, rapidly evolving regulatory approach
• The US remains fragmented with no unified in-car AI regulation on the horizon

SBD's view: OEMs should not wait for regulation to build trust. The ones treating "your data never leaves your car" as a premium value proposition — not a compliance footnote — are already pulling ahead. Edge-first architectures (as Dani Cherkassky / Kardome presented) become a trust asset, not just a cost play. As we argued in the panel: "Privacy isn't a constraint on monetization. It's a monetization variable."

In the classification, some features could have "indirect value". Predictive maintenance is a way to bring customers to the OEM workshop, am I right? In that case, is it included in the classification criteria?

Absolutely correct — and this is precisely where SBD's Triangulation Model applies.

In our Hero/Zombie framework, predictive maintenance is classified as a "Utility": high user value, but low direct willingness-to-pay. Users value it, but won't pay a separate subscription for it. The strategic recommendation: keep it free/standard, let it carry your brand.
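As a rough illustration of how such a quadrant classification works, here is a toy Python version. The two axes and the 0.5 threshold are our simplifying assumptions for this sketch; SBD's actual framework uses richer criteria than two scalar scores.

```python
def classify_feature(user_value: float, willingness_to_pay: float,
                     threshold: float = 0.5) -> str:
    """Toy two-axis quadrant: both inputs are scores in [0, 1]."""
    if user_value >= threshold and willingness_to_pay >= threshold:
        return "Hero"       # valued and monetizable: charge for it
    if user_value >= threshold:
        return "Utility"    # valued but not paid for: keep it standard
    if willingness_to_pay >= threshold:
        return "Grudge"     # paid for but resented: erodes trust
    return "Zombie"         # neither: candidate for rationalization

# Predictive maintenance: high user value, low willingness-to-pay.
print(classify_feature(user_value=0.9, willingness_to_pay=0.2))  # Utility
```

Under these assumptions, predictive maintenance lands squarely in the "Utility" quadrant, matching the recommendation above.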

However, the indirect value is captured in our classification — specifically in the Revenue leg of the Triangulation Model. As stated in Andy's Slide 3 script: "Not just subscription dollars. Indirect value, too — what does this feature do to residual value, to trim up-sell, to churn?"

For predictive maintenance specifically, the indirect revenue streams include:
1. Workshop traffic — driving customers to OEM-authorized service centers (exactly as you suggest)
2. Service revenue uplift — proactive repairs vs. reactive breakdowns
3. Improved residual value — a well-maintained vehicle retains higher resale value
4. Reduced warranty costs — early detection prevents expensive failures
5. Brand loyalty & retention — users who trust the car's intelligence are more likely to repurchase

The strategic implication: Utilities like predictive maintenance should be kept free/standard, but their indirect P&L contribution must be quantified to justify the ongoing inference cost. That quantification is exactly what "Feature Lifecycle Margin" is designed to capture.
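To show what that quantification might look like, here is a back-of-the-envelope sketch in Python. The per-vehicle figures and stream names are invented for illustration; only the structure — direct plus indirect value, minus inference and infrastructure cost — reflects the framework described above.

```python
def feature_lifecycle_margin(direct_revenue: float,
                             indirect_revenue: dict[str, float],
                             inference_cost: float,
                             infra_cost: float) -> float:
    """Per-vehicle margin over the feature's life: all value in, all cost out."""
    return (direct_revenue + sum(indirect_revenue.values())
            - inference_cost - infra_cost)

# Predictive maintenance: free to the user (direct revenue = 0), but the
# indirect streams can still justify the ongoing compute spend.
margin = feature_lifecycle_margin(
    direct_revenue=0.0,
    indirect_revenue={
        "workshop_traffic": 120.0,       # service visits routed to OEM centers
        "warranty_savings": 45.0,        # early detection avoids big failures
        "residual_value_uplift": 80.0,   # better-maintained cars resell higher
        "retention_effect": 60.0,        # loyalty and repurchase contribution
    },
    inference_cost=30.0,                 # compute over the ownership period
    infra_cost=25.0,                     # data pipelines, OTA, validation share
)
print(margin)  # positive despite zero subscription revenue
```

The point of the exercise is not the numbers but the discipline: a feature with zero direct revenue can still clear its inference bill, and only a per-feature calculation like this can prove it.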

1. When will we start seeing AI-based features (hardware specifically) called out as Standard Equipment or an Option/Package on a Window Sticker/Monroney Label, or will they be listed as a Connected Service? 2. It was mentioned on one of the slides that BMW tops the charts with 22 separate AI features. How does BMW compare to other OEMs in terms of Connected Services, feature sets, and the costs associated with their subscription bundles? We're trying to understand their AI feature push and whether they are leading the charge to stand out from the competition, or pursuing other reasons/KPI targets.

1. AI features on the Monroney Label / Window Sticker:
We don't expect to see AI-specific line items on the Monroney Label in the near term — and the reason is more fundamental than labeling conventions. Today's in-car AI features are still maturing rapidly across multiple dimensions: privacy safeguards, reliability, user interaction design, and edge-cloud orchestration. They are, in many respects, still in their early innings.

Before AI can appear as a standard equipment callout, two things need to happen. First, individual AI capabilities need to stabilize to the point where OEMs and consumers alike recognize them as proven, expected functionality — not experimental add-ons. Second, AI-driven features require deep software-hardware integration (dedicated compute, sensor fusion, on-device models) to become a true platform capability rather than a cloud-dependent service layer.

That maturation process — from experimental feature to industry-standard configuration — will take considerable time. Realistically, we see this as a post-2030 development at the earliest. In the interim, most AI capabilities will continue to live under Connected Services subscriptions or bundled software packages.

2. BMW's 22 AI features — context and strategic reading:
This data comes from SBD's Automotive AI research, and it tells a compelling story about BMW's positioning:

R&D investment and readiness: BMW is one of the earliest and most committed OEMs in automotive AI. They entered this space ahead of most competitors, which means they have a significant head start in technology validation, application testing, and — critically — talent and organizational capability.

Breadth of experimentation: The 22 AI features we tracked reflect a deliberate strategy of broad exploration. BMW is systematically testing where AI adds value across a wide range of in-cabin scenarios — from voice interaction to personalization to predictive functions. This breadth is itself a signal of strategic commitment.

Strategic conviction: According to BMW's own public communications, they have identified over 600 distinct AI use cases internally. It's important to note that a significant portion of these relate to internal operations — R&D processes, manufacturing optimization, supply chain management — rather than customer-facing in-vehicle features. But the scale of investment signals that BMW views AI as a core capability, not an incremental feature set.

In short, BMW's high feature count reflects early-mover advantage and deliberate experimentation rather than a race to pad the spec sheet. Whether those 22 features are all "Heroes" in our framework — or whether some are "Zombies" awaiting rationalization — is exactly the kind of question our Feature Lifecycle Margin analysis is designed to answer.

For a more detailed competitive comparison of BMW's AI portfolio against other OEMs — including subscription structures, feature engagement, and cost positioning — we'd be happy to share further from our research. Please don't hesitate to reach out.

OEMs are accelerating SDV and AI integration under commercial pressure, while regulation remains process-driven rather than implementation-complete, all because of AI-driven development. In a centralised, service-oriented architecture, AI-driven L7 expansion increases interface density and dependency chains without resolving the underlying architectural constraints, shifting risk to integration correctness, trust-boundary enforcement, and update orchestration, where failures can propagate across domains. This also introduces non-trivial cost overhead (compute hardware, data pipelines, validation, and OTA infrastructure) and high cost-of-failure exposure (recalls, downtime, reliability and cybersecurity incidents, e.g. the Jeep Cherokee hack), which can translate into margin compression and profit volatility if system robustness doesn't scale with complexity that grows with every turn of the technology. My argument is not against the tech; it's a concern about the pace of development without foresight on the probable cost attached to it.

This is an exceptionally well-articulated concern, and it aligns directly with our core thesis. You are describing — in precise architectural language — the exact P&L problem we presented: complexity scaling faster than robustness, with cost overhead (compute, data pipelines, validation, OTA infrastructure) that compounds with every new AI feature added.

Our response addresses each dimension:

1. The cost visibility gap (from Andy's presentation):
The "foresight on probable cost" you call for is exactly what the industry lacks. Most OEMs today see only an aggregate cloud bill — they have no mechanism to attribute spend back to individual AI features or to measure cost-per-user at the feature level. The leading OEMs are just now building their first internal dashboards to map cloud cost against feature-level usage. Until that measurement infrastructure exists, the cost risks described here — margin compression, profit volatility — remain invisible until they're catastrophic. As we stated: "You cannot manage what you cannot see."
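A minimal sketch of what such a dashboard computes, assuming usage can be metered per feature. The feature names, usage units, and dollar figures below are all invented for illustration:

```python
def attribute_costs(total_cloud_bill: float,
                    usage_units: dict[str, float],
                    active_users: dict[str, int]) -> dict[str, float]:
    """Pro-rate one aggregate bill across features by metered usage,
    then divide by active users to get cost-per-active-user."""
    total_units = sum(usage_units.values())
    per_user = {}
    for feature, units in usage_units.items():
        feature_cost = total_cloud_bill * units / total_units
        per_user[feature] = feature_cost / active_users[feature]
    return per_user

costs = attribute_costs(
    total_cloud_bill=100_000.0,  # one opaque monthly invoice
    usage_units={"voice_assistant": 700.0, "route_ai": 300.0},  # e.g. GPU-hours
    active_users={"voice_assistant": 50_000, "route_ai": 2_000},
)
print(costs)
# route_ai carries 30% of the bill over only 2,000 users — a Zombie signal
# that an aggregate invoice would never surface.
```

Even this crude pro-rating exposes the asymmetry the answer describes: one feature quietly consuming a disproportionate share of spend per user it actually serves.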

2. The architectural answer (from Dani Cherkassky / Kardome):
Dani's System 1 / System 2 architecture directly addresses the "interface density and dependency chain" concern. By running fast, always-on tasks at the edge (System 1) and reserving cloud for complex reasoning (System 2), you reduce both the dependency chains and the attack surface. The Jeep Cherokee hack reference is apt — cybersecurity exposure is a particularly severe form of what we call "hidden liabilities" in the Zombie framework.
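The routing idea can be sketched in a few lines of Python. The task names and the simple set-membership rule are assumptions for illustration, not Kardome's actual implementation:

```python
# System 1: fast, always-on, latency-sensitive tasks that stay on-device.
EDGE_TASKS = {"wake_word", "voice_isolation", "command_parsing"}

def route(task: str) -> str:
    """Route a task to the edge (System 1) or the cloud (System 2)."""
    if task in EDGE_TASKS:
        return "edge"   # low latency, no network dependency, smaller attack surface
    return "cloud"      # larger models, more context, heavier reasoning

print(route("wake_word"))      # stays on-device
print(route("trip_planning"))  # escalates to the cloud
```

Every task kept on the left branch is one fewer network dependency chain and one fewer remotely reachable interface — which is exactly why the edge-first split doubles as a security and cost argument.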

3. The commercial structure answer (from Andy's Panel Topic 2):
Outcome-based pricing — where suppliers share the downside risk tied to actual usage and engagement — is how you prevent runaway cost from invisible Zombie features. When a supplier only gets paid if the feature is actually used and valued, both parties have an incentive to kill what doesn't work.
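A toy version of that incentive structure, with an invented payout formula and engagement floor (real contracts would be far more nuanced):

```python
def supplier_payout(flat_fee: float, engagement_rate: float,
                    floor: float = 0.2) -> float:
    """Scale the supplier's fee by measured engagement in [0, 1].
    Below the floor, the feature is effectively unused and earns nothing."""
    if engagement_rate < floor:
        return 0.0                      # a Zombie feature pays zero
    return flat_fee * engagement_rate   # both parties share the downside

print(supplier_payout(10.0, 0.9))  # well-used feature: near-full payout
print(supplier_payout(10.0, 0.1))  # unused feature: supplier absorbs the risk
```

The mechanism matters more than the formula: once payment tracks usage, neither party has a reason to keep an unused feature alive and burning inference cost.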

Your instinct is correct. The biggest risk isn't the technology — it's deploying it without the measurement infrastructure to know what it's actually costing you per feature, per user, per vehicle. That's exactly the gap our Feature Lifecycle Margin framework is designed to close.

For SoundHound: is the voice commerce system live in vehicles, and if so, which ones?

We currently have pilots underway with two German OEMs (across the U.S. and EU markets) as well as a leading TV manufacturer in the U.S. A broader commercial rollout across vehicle fleets is planned for later this year, so we’re not able to share partner names just yet.