FAQ
Quick answers on product, data and deployment
Quick navigation:
NOA · Compliance · Testing & Validation · Competitors & Versions · Experience & Marketing · Insurance · Product & Download · Methodology & Scope · Glossary
NOA
Is NOA any good? How should it be evaluated?
- Conclusion: trust long‑window multi‑city data, not one‑offs
- How: ≥1,000 km/model, ≥3 cities, day/night/peak under one standard
- Delivery: method notes, sample disclosure, replayable evidence
How to compare urban/highway coverage?
- Conclusion: align samples first
- How: slice by city/road/weather; unify windows and mileage (see the sketch below)
- View: trends/heatmaps with sample sizes and windows
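A minimal slicing sketch in Python, assuming hypothetical per-trip records; field names (city, road, weather, noa_km, total_km) and values are illustrative only:

```python
from collections import defaultdict

# Hypothetical per-trip records; field names are illustrative only.
trips = [
    {"city": "Beijing", "road": "urban", "weather": "clear", "noa_km": 18.2, "total_km": 25.0},
    {"city": "Beijing", "road": "highway", "weather": "rain", "noa_km": 41.0, "total_km": 43.5},
    {"city": "Shanghai", "road": "urban", "weather": "clear", "noa_km": 12.4, "total_km": 22.1},
]

def coverage_by_slice(trips, keys=("city", "road", "weather")):
    """NOA mileage coverage per slice = NOA km / total km, reported with sample size."""
    agg = defaultdict(lambda: {"noa_km": 0.0, "total_km": 0.0, "n": 0})
    for t in trips:
        slot = agg[tuple(t[k] for k in keys)]
        slot["noa_km"] += t["noa_km"]
        slot["total_km"] += t["total_km"]
        slot["n"] += 1
    return {k: {"coverage": v["noa_km"] / v["total_km"], "trips": v["n"]} for k, v in agg.items()}

print(coverage_by_slice(trips))
```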
How many overrides are acceptable?
- Conclusion: use hazardous km/override, not raw counts
- How: set scene-specific thresholds (congestion/ramps); filter out prompt-only overrides
- Reference: disclose distributions and percentiles (sketch below)
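A minimal sketch of the metric with illustrative numbers; a real pipeline would compute the gap distribution from tagged events:

```python
import statistics

# Hypothetical inputs: total mileage and km driven between consecutive
# hazardous (safety-relevant) overrides; numbers are illustrative only.
total_km = 1850.0
km_between_hazardous = [310.0, 95.0, 420.0, 180.0, 260.0]

hazardous_km_per_override = total_km / len(km_between_hazardous)

# Percentiles of the gap distribution (n=4 -> quartiles) for disclosure.
q1, median, q3 = statistics.quantiles(km_between_hazardous, n=4)
print(f"{hazardous_km_per_override:.0f} km/override, "
      f"P25={q1:.0f} P50={median:.0f} P75={q3:.0f}")
```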
How to count hazardous overrides?
- Conclusion: automated + human; fully replayable
- How: tag hard-brake/hard-steer/forced takeovers; align with video and trajectory
- Review: human check of edge cases; frame-level replay/export (tagging sketch below)
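A minimal tagging sketch, assuming hypothetical event fields and illustrative thresholds; actual thresholds and the human-review step depend on the vehicle and scene library:

```python
from dataclasses import dataclass

@dataclass
class Override:
    t: float              # event timestamp (s)
    decel: float          # peak longitudinal deceleration (m/s^2), positive = braking
    steer_rate: float     # peak steering-wheel rate (deg/s)
    forced: bool          # system-forced takeover flag
    prompt_only: bool     # driver acted only on a prompt, no safety relevance

# Illustrative thresholds; real values would be tuned per vehicle and scene.
HARD_BRAKE = 3.0      # m/s^2
HARD_STEER = 90.0     # deg/s

def is_hazardous(ev: Override) -> bool:
    """Automated first pass; edge cases still go to human review with replay."""
    if ev.prompt_only:
        return False
    return ev.forced or ev.decel >= HARD_BRAKE or ev.steer_rate >= HARD_STEER

events = [Override(12.4, 3.6, 20.0, False, False), Override(58.1, 0.8, 10.0, False, True)]
print([is_hazardous(e) for e in events])   # [True, False]
```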
Compliance
How to test 2025 L2 compliance?
- Conclusion: checklist, item by item
- How: signal‑level DMS/ADAS + behavior‑level video
- Output: per-item pass rates + non-compliant clips (sketch below)
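A minimal sketch of per-item pass-rate aggregation; checklist item names are hypothetical:

```python
from collections import Counter

# Hypothetical checklist results: (checklist_item, passed) per test run.
results = [
    ("hands-off warning cascade", True),
    ("hands-off warning cascade", False),
    ("driver-gaze monitoring", True),
    ("driver-gaze monitoring", True),
]

passes, totals = Counter(), Counter()
for item, ok in results:
    totals[item] += 1
    passes[item] += ok          # True counts as 1

for item in totals:
    rate = passes[item] / totals[item]
    print(f"{item}: {rate:.0%} ({passes[item]}/{totals[item]})")
```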
How to validate DMS effectiveness?
- Align eye/fatigue features with scene video
- Quantify TPR/FPR by cohort/time/environment (sketch below)
- Provide thresholds and tuning guidance
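A minimal sketch of cohort-level TPR/FPR, assuming events already labelled against video ground truth; field names and values are illustrative:

```python
from collections import defaultdict

# Hypothetical labelled events: cohort, ground-truth distraction (from video
# annotation), and the DMS alert decision for the same window.
events = [
    {"cohort": "night", "truth": True,  "alert": True},
    {"cohort": "night", "truth": True,  "alert": False},
    {"cohort": "night", "truth": False, "alert": False},
    {"cohort": "day",   "truth": False, "alert": True},
    {"cohort": "day",   "truth": True,  "alert": True},
]

counts = defaultdict(lambda: {"tp": 0, "fn": 0, "fp": 0, "tn": 0})
for e in events:
    c = counts[e["cohort"]]
    key = ("tp" if e["alert"] else "fn") if e["truth"] else ("fp" if e["alert"] else "tn")
    c[key] += 1

for cohort, c in counts.items():
    tpr = c["tp"] / (c["tp"] + c["fn"]) if c["tp"] + c["fn"] else float("nan")
    fpr = c["fp"] / (c["fp"] + c["tn"]) if c["fp"] + c["tn"] else float("nan")
    print(f"{cohort}: TPR={tpr:.2f} FPR={fpr:.2f}")
```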
Testing & Validation
How to validate ADAS quickly?
- Conclusion: templated, auto stats, closed loop
- How: standard templates + detectors; auto pass/fail + causes
- Landing: integrate ticketing (perception/planning/control)
Is long-mileage testing too costly?
- Conclusion: crowdsourcing + automation lowers cost
- How: signed drivers; plan by sample×city×time (quota sketch below)
- Scope: km‑scale samples for stable estimates
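A minimal quota-plan sketch; the cities, time slots, and per-cell mileage are illustrative assumptions:

```python
from itertools import product

# Hypothetical collection plan: mileage quota per (city, time-of-day) cell,
# sized so each cell alone supports a stable estimate.
cities = ["Beijing", "Shanghai", "Shenzhen"]
slots = ["peak", "off-peak", "night"]
km_per_cell = 400          # illustrative quota

plan = {(city, slot): km_per_cell for city, slot in product(cities, slots)}
print(f"cells={len(plan)}, total planned mileage={sum(plan.values())} km")
```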
Is public‑road crowdsourcing compliant?
- Conclusion: yes, with controls
- How: signed drivers, RBAC, full audit trail
- Before disclosure: anonymize/minimize
How to review corner cases?
- Conclusion: find common failures; make a replay set
- How: scene library + video slicing alignment
- Focus: missed/duplicate detections, rule conflicts
Can we replay issues?
- Conclusion: fully replayable
- How: sync interior/exterior video, sensors, event timeline
- Landing: frame-level RCA and exportable packages (alignment sketch below)
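A minimal timestamp-alignment sketch, assuming all streams share one clock; frame times and the event time are illustrative:

```python
import bisect

# Hypothetical synchronized streams: video frame timestamps (s, increasing,
# on a shared clock) and an event logged on the same clock.
frame_ts = [0.000, 0.033, 0.066, 0.100, 0.133]
event_ts = 0.095

def nearest_frame(frames, t):
    """Index of the frame closest to the event time (for frame-level replay)."""
    i = bisect.bisect_left(frames, t)
    candidates = [j for j in (i - 1, i) if 0 <= j < len(frames)]
    return min(candidates, key=lambda j: abs(frames[j] - t))

print(nearest_frame(frame_ts, event_ts))   # -> 3 (the frame at 0.100 s)
```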
Competitors & Versions
Can we compare competitor NOA directly?
- Conclusion: yes; highlights and lowlights
- How: unify by brand/model/version; radar/compare + clips
- Scope: disclose sample sizes and windows
How to run acceptance tests on vendor stacks?
- Conclusion: checklist acceptance with grading
- How: multi‑vehicle sync capture; KPI checklist rating
- Output: remediation list and re‑test plan
How to compare versions?
- Conclusion: align samples, then compare
- How: per model/version sample alignment; compare NOA/ADAS KPIs
- Output: regression checklist and priorities (comparison sketch below)
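A minimal version-comparison sketch with hypothetical KPI names and illustrative numbers:

```python
# Hypothetical aligned KPI tables for two software versions over the same
# sample (same routes, windows, mileage); names and numbers are illustrative.
v1 = {"mpi_km": 210.0, "coverage": 0.82, "merge_success": 0.91}
v2 = {"mpi_km": 245.0, "coverage": 0.80, "merge_success": 0.93}

HIGHER_IS_BETTER = {"mpi_km", "coverage", "merge_success"}

regressions = []
for kpi in v1:
    delta = v2[kpi] - v1[kpi]
    if kpi in HIGHER_IS_BETTER and delta < 0:
        regressions.append((kpi, delta))
    print(f"{kpi}: {v1[kpi]} -> {v2[kpi]} ({delta:+.2f})")

print("regression checklist:", regressions)   # e.g., coverage dropped
```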
Experience & Marketing
How to extract launch selling points?
- Conclusion: auto‑generate a selling‑point kit
- How: representative KPI boards + visuals; one‑click exports
- Scope: attach methods and sample disclosure
Can results be used in marketing?
- Conclusion: yes, directly
- How: one‑click chart/video exports for long/short content
- Scope: attach sample/time/region; keep wording consistent
Why do reviews differ from KPIs?
- Conclusion: comfort drives perception gaps
- How: explain via acceleration curves and following-speed variance (sketch below)
- Landing: parameter/strategy tuning guidance
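A minimal sketch of the comfort signals involved, with illustrative samples; real analysis would use full trip logs:

```python
import statistics

# Hypothetical samples from one trip segment: longitudinal acceleration
# (m/s^2, negative = braking) and ego speed while following a lead vehicle (m/s).
long_accel = [0.1, -0.4, -1.8, 0.3, 0.0, -2.6, 0.5, -0.2]
follow_speed = [16.8, 17.4, 15.9, 16.2, 17.0, 15.5]

decels = sorted(-a for a in long_accel if a < 0)        # braking magnitudes
p95_decel = decels[int(0.95 * (len(decels) - 1))]       # simple empirical P95
speed_var = statistics.variance(follow_speed)

# Harsh decelerations and jittery following speed correlate with "uncomfortable"
# subjective reviews even when safety KPIs look good.
print(f"P95 deceleration ~ {p95_decel:.2f} m/s^2, following-speed variance = {speed_var:.2f}")
```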
Insurance
How does data support UBI pricing?
- Conclusion: risk features go straight into models
- How: hazardous km/override; hard-brake/hard-turn features (feature sketch below)
- Landing: segment by scene to increase discrimination
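A minimal feature-extraction sketch with hypothetical per-driver aggregates; the actual inputs depend on the insurer's pricing pipeline:

```python
# Hypothetical per-driver aggregates over a pricing window; a minimal feature
# vector that could feed an existing UBI model. Names and units are illustrative.
driver_log = {
    "total_km": 2400.0,
    "hazardous_overrides": 6,
    "hard_brakes": 31,
    "hard_turns": 12,
    "night_km": 510.0,
    "congested_km": 880.0,
}

features = {
    "km_per_hazardous_override": driver_log["total_km"] / max(driver_log["hazardous_overrides"], 1),
    "hard_brakes_per_100km": 100 * driver_log["hard_brakes"] / driver_log["total_km"],
    "hard_turns_per_100km": 100 * driver_log["hard_turns"] / driver_log["total_km"],
    "night_share": driver_log["night_km"] / driver_log["total_km"],
    "congestion_share": driver_log["congested_km"] / driver_log["total_km"],
}
print(features)
```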
How do insurers identify low‑risk drivers?
- Conclusion: use personas to find low‑risk groups
- How: build scene/behavior‑based risk personas
- Landing: incentives (points/cashback) reduce loss ratios
Product & Download
How to download and install the CHEK app?
- On the Products page, tap Android or scan the QR code
- iOS and IVI versions will be announced upon release
How is my data privacy protected?
- Data minimization; anonymization before disclosure
- RBAC and full audit trail; itemized compliance acceptance
How to book a platform demo?
- Use ‘Book a demo’ on the Products page
- Or call the phone number in the footer; we will arrange a walkthrough
How to become a partner / join the Open Alliance?
- Click ‘Join now’ in the Open Alliance section to submit an application
- Explore the Open APIs links for data/integration options
How to contact support?
- Phone: (86) 18410870301 (see footer)
- Or leave a note via demo/partner forms and we will reach out
Methodology & Scope
- Window: rolling 90 days by default; major releases split before/after (sketch after this list).
- Representativeness: slice by time (peak/off/night) and weather; cross‑city quotas.
- Hazardous override: safety‑relevant overrides with hard brake/steer signature; prompts excluded.
- De‑noising: remove device anomalies/missing footage; keep audit logs.
- Auditability: every conclusion links to time window, route and clip IDs; third‑party witness supported.
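A minimal sketch of the default window and release split, with illustrative dates:

```python
from datetime import date, timedelta

# Hypothetical trip records on a shared calendar; a 90-day rolling window
# split before/after a major release date, per the methodology above.
trips = [
    {"date": date(2025, 3, 15), "km": 120.0},
    {"date": date(2025, 4, 20), "km": 95.0},
    {"date": date(2025, 5, 30), "km": 60.0},
]
today = date(2025, 6, 1)
release = date(2025, 4, 15)

window = [t for t in trips if today - t["date"] <= timedelta(days=90)]
before = [t for t in window if t["date"] < release]
after = [t for t in window if t["date"] >= release]
print(len(before), len(after))   # the two groups are compared separately
```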
Glossary
- MPI (Miles per Intervention): mileage per override; higher is better (worked example after this list).
- Hazardous override mileage: average km per safety‑relevant override (km/override).
- Coverage/Availability: share of time/mileage where NOA is operable under given conditions.
- Success rate: proportion of scene‑defined goals (e.g., U‑turn, merge) successfully completed.
- Comfort metrics: longitudinal/lateral acceleration distribution, following speed variance.
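A worked example tying the glossary metrics together, using illustrative figures only:

```python
# How the glossary metrics relate numerically; all figures are illustrative.
miles_driven = 1250.0
interventions = 5
noa_active_miles = 1000.0
merges_attempted, merges_completed = 40, 37

mpi = miles_driven / interventions                  # Miles per Intervention
coverage = noa_active_miles / miles_driven          # share of mileage with NOA operable
merge_success_rate = merges_completed / merges_attempted

print(f"MPI={mpi:.0f} mi, coverage={coverage:.0%}, merge success={merge_success_rate:.0%}")
```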