Tools
We use a range of software tools, from simple productivity enhancers to comprehensive project solutions, to cater to various project needs.

Figma
Figma remains our standard design tool with full organizational adoption. The recent addition of Figma AI features for auto-layout suggestions and asset generation has further accelerated design workflows. Our Figma-to-code pipeline now covers web, iOS, and Android platforms with consistent design token application.
How We Use Figma
Figma serves as the single source of truth for all visual design decisions:
- Component library — our Porsche Design System is maintained as a Figma library with 120+ components, each mirroring the code component API (props → Figma properties)
- Variables — all design tokens (colors, spacing, typography) are defined as Figma Variables, exported via the Variables API to feed our Design Tokens pipeline
- Prototyping — interactive prototypes replace static specs for user flows; developers reference prototypes directly during implementation
- Dev Mode — designers publish components with annotations, and developers inspect spacing, colors, and assets without leaving Figma
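The Variables export mentioned above can be sketched as a small script against Figma's REST API. The `/v1/files/{file_key}/variables/local` endpoint and the `X-Figma-Token` header are Figma's documented authentication scheme; the helper names and the response handling below are our illustrative assumptions, not the real pipeline code:

```ts
// Build the request for Figma's local-variables endpoint.
// The file key and token are placeholders, not real credentials.
function variablesRequest(fileKey: string, token: string) {
  return {
    url: `https://api.figma.com/v1/files/${fileKey}/variables/local`,
    headers: { "X-Figma-Token": token },
  };
}

// Usage sketch: fetch the file's local variables and return their names.
async function fetchVariableNames(fileKey: string, token: string) {
  const { url, headers } = variablesRequest(fileKey, token);
  const res = await fetch(url, { headers });
  if (!res.ok) throw new Error(`Figma API error: ${res.status}`);
  const body = await res.json();
  // body.meta.variables maps variable IDs to records with name,
  // resolvedType, and per-mode values.
  return Object.values(body.meta.variables).map((v: any) => v.name);
}
```

Note that the Variables REST API requires an Enterprise plan and a token with the appropriate scope.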
Design-to-Code Pipeline
The pipeline connects Figma to our development workflow:
1. Designer updates a component or token in Figma
2. Figma Variables API webhook triggers a CI job
3. Token Transformer converts Figma variables to platform-specific formats
4. Published packages update automatically in Storybook and application codebases
5. Chromatic visual regression tests flag any unintended visual changes
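The Token Transformer step can be sketched as a pure mapping from variable records to CSS custom properties. The input shape and function names below are illustrative assumptions, not the real transformer:

```ts
// Minimal token record, loosely modeled on a resolved Figma variable.
interface TokenRecord {
  name: string;  // e.g. "color/primary" or "spacing/md"
  value: string; // already resolved, e.g. "#d5001c" or "16px"
}

// Convert "color/primary" → "--color-primary: #d5001c;" inside a :root block.
function toCssCustomProperties(tokens: TokenRecord[]): string {
  const lines = tokens.map(
    (t) => `  --${t.name.replace(/\//g, "-")}: ${t.value};`
  );
  return `:root {\n${lines.join("\n")}\n}`;
}
```

A real transformer would also emit platform-specific formats for iOS and Android; tools such as Style Dictionary are commonly used for that fan-out.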
This automation eliminated the manual handoff that previously caused 2-3 day delays between design updates and code implementation.
Team Workflow
All design reviews happen in Figma. Designers share prototypes with "can comment" access to the full engineering team, and feedback threads are resolved before implementation begins. This front-loads design decisions and reduces rework during code review.
Each team maintains a dedicated Figma project, but all teams share the central PDS component library. Custom components built for one team are evaluated for promotion to the shared library during monthly design system reviews.

GitHub Copilot
GitHub Copilot has reached adopt status organization-wide. All developers have access, and measured productivity gains average 20% for implementation tasks. Our internal guidelines have matured to cover prompt engineering best practices, mandatory code review for AI-generated code, and exclusion patterns for sensitive repositories.
How We Measure Impact
We track Copilot's impact through three metrics:
- Acceptance rate — 35% of suggestions are accepted as-is, 20% accepted with modifications
- Time-to-PR — average time from branch creation to PR submission dropped 18% after rollout
- Code review feedback — AI-generated code receives the same review rigor; its rejection rate is comparable to that of human-written code
These numbers come from quarterly developer surveys and GitHub's built-in Copilot metrics dashboard.
Internal Guidelines
Our Copilot usage policy covers:
- Always review — Copilot-generated code must pass the same review standards as hand-written code. "Copilot wrote it" is not a justification for skipping review.
- Security-sensitive repos excluded — repositories handling PII, payment processing, and authentication have Copilot disabled via organization policy.
- Test generation — Copilot excels at generating test boilerplate. Developers are encouraged to use it for test scaffolding but must verify that assertions are meaningful.
- No blind acceptance — developers must understand every line they accept. Our onboarding includes a "Copilot literacy" module covering common failure modes.
Integration with Pair Programming
With Copilot handling routine implementation, our pair programming sessions have shifted focus toward architecture discussions and complex debugging rather than typing speed. Teams report that pairing with Copilot active feels like "pair programming with a fast junior" — useful for boilerplate, but requiring experienced judgment for design decisions.

Storybook
Storybook remains adopted. The 911 team has consolidated their component development directly into the shared design system workspace, no longer maintaining a separate Storybook instance. The Taycan and Cayenne teams continue to use Storybook as their primary React component development and documentation tool, with Design Tokens applied via a shared preset and designs sourced from Figma.
How We Use Storybook
Storybook serves three distinct purposes in our workflow:
- Component development — isolated development environment where developers build and iterate on components without running the full application
- Living documentation — auto-generated docs pages with prop tables, usage examples, and design guidelines that stay in sync with the code
- Visual regression testing — Chromatic integration catches unintended visual changes before they reach production
Configuration
All teams extend a shared Storybook preset published as @porsche-digital/storybook-preset:
```ts
// .storybook/main.ts
import type { StorybookConfig } from "@storybook/react-vite";

const config: StorybookConfig = {
  stories: ["../src/**/*.stories.@(ts|tsx)"],
  addons: [
    "@storybook/addon-essentials",
    "@storybook/addon-a11y",
    "@porsche-digital/storybook-preset",
  ],
  framework: "@storybook/react-vite",
};

export default config;
```
The shared preset configures:
- PDS Provider decorator — wraps every story in PorscheDesignSystemProvider with a theme toggle
- Viewport presets — mobile, tablet, and desktop, matching our breakpoint system
- Dark mode toggle — switches between light and dark PDS themes
- Design token documentation — auto-renders available tokens per component
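A minimal sketch of what the preset's PDS Provider decorator might look like in a `.storybook/preview.tsx`. The `PorscheDesignSystemProvider` import and its `theme` prop come from the public Porsche Design System React package; the `theme` global and the fallback handling are assumptions about the preset's internals:

```tsx
// .storybook/preview.tsx — sketch, not the actual preset source
import React from "react";
import type { Preview } from "@storybook/react";
import { PorscheDesignSystemProvider } from "@porsche-design-system/components-react";

const preview: Preview = {
  globalTypes: {
    theme: {
      description: "PDS theme",
      toolbar: { items: ["light", "dark"] },
    },
  },
  decorators: [
    // Every story renders inside the PDS provider with the selected theme.
    (Story, context) => (
      <PorscheDesignSystemProvider theme={context.globals.theme ?? "light"}>
        <Story />
      </PorscheDesignSystemProvider>
    ),
  ],
};

export default preview;
```

Shipping the decorator inside the preset means no team can forget it, which keeps Chromatic snapshots comparable across projects.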
Story Patterns
We enforce a consistent story structure across teams:
```tsx
// Button.stories.tsx
import { Button } from "./Button";
import type { Meta, StoryObj } from "@storybook/react";

const meta: Meta<typeof Button> = {
  component: Button,
  tags: ["autodocs"],
  argTypes: {
    variant: {
      control: "select",
      options: ["primary", "secondary", "ghost"],
    },
  },
};

export default meta;
type Story = StoryObj<typeof Button>;

export const Primary: Story = {
  args: { variant: "primary", children: "Click me" },
};

export const AllVariants: Story = {
  render: () => (
    <div style={{ display: "flex", gap: "1rem" }}>
      <Button variant="primary">Primary</Button>
      <Button variant="secondary">Secondary</Button>
      <Button variant="ghost">Ghost</Button>
    </div>
  ),
};
```
Every component must have:
- A default story showing the most common usage
- An all-variants story showing every visual permutation
- Edge cases — long text, empty state, loading state, error state
Visual Regression Testing
We use Chromatic (by the Storybook team) for visual regression:
- Every PR triggers a Chromatic build that screenshots all stories
- Changed stories require explicit approval from a designer or frontend lead
- Baseline updates happen automatically on merge to main
- Average build: ~200 stories, completing in 2-3 minutes
The Chromatic integration has caught 30+ unintended visual regressions in the past 6 months that would have shipped to production otherwise.
Accessibility Testing
The @storybook/addon-a11y plugin runs axe-core checks on every story. Our CI pipeline fails if any story has accessibility violations at the "critical" or "serious" level. This catches:
- Missing alt text on images
- Insufficient color contrast
- Missing form labels
- Incorrect ARIA attributes
- Keyboard navigation issues
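The severity gate in CI can be sketched as a small filter over axe-core results. The `impact` field and its values (`minor`, `moderate`, `serious`, `critical`) are part of axe-core's result format; the function name and CI wiring are our assumptions:

```ts
// Subset of an axe-core violation record, as returned by axe.run().
interface AxeViolation {
  id: string;
  impact: "minor" | "moderate" | "serious" | "critical";
}

// Severity levels that should fail the pipeline.
const BLOCKING = new Set(["critical", "serious"]);

// Returns only the violations that block a merge.
function blockingViolations(violations: AxeViolation[]): AxeViolation[] {
  return violations.filter((v) => BLOCKING.has(v.impact));
}
```

In CI, a non-empty result from `blockingViolations(results.violations)` for any story would fail the job, while `moderate` and `minor` findings are reported but do not block.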
Deployment
Storybook is deployed as a static site to our internal CDN on every merge to main:
- URL: https://storybook.internal.porsche.digital
- Versioned: each release tag publishes to /v{version}/
- Search: Algolia DocSearch indexes all story documentation
Product managers and designers use the deployed Storybook as a reference when writing specs, ensuring they reference components that actually exist with their real prop APIs.
What's Next
We are evaluating Storybook 9's tag-based filtering for better organization of our growing story library, and exploring Storybook Test for component-level integration tests that replace some of our Playwright component tests.

Vitest
Vitest is now adopted by all teams. The Macan team uses it for testing their data visualization components. Browser mode has enabled us to consolidate component integration tests that previously required Cypress, simplifying our testing infrastructure alongside Storybook's visual regression testing. All tests are written in TypeScript. Average CI test time across all projects is under 90 seconds.
Why Vitest Over Jest
We migrated from Jest to Vitest across all projects in Q2 2024. The primary drivers:
- Vite-native — our frontend projects already use Vite for bundling; Vitest reuses the same config and transform pipeline, eliminating the dual-config problem
- Speed — Vitest's worker-based architecture and smart file watching reduced local test runs by ~60% compared to Jest
- ESM-first — no more moduleNameMapper hacks for ESM packages; Vitest handles modern module formats natively
- Compatible API — migration was mostly mechanical; describe/it/expect patterns carried over directly
Testing Strategy
Our testing pyramid across all projects:
| Layer | Tool | Scope | Count (avg per project) |
| --- | --- | --- | --- |
| Unit | Vitest | Functions, utilities, hooks | ~200 |
| Component | Vitest + React Testing Library | Isolated component behavior | ~100 |
| Integration | Vitest Browser Mode | Multi-component flows | ~30 |
| Visual | Chromatic | Screenshot comparison | ~200 stories |
| E2E | Playwright | Critical user journeys | ~15 |
Browser mode (powered by Playwright under the hood) runs component tests in a real browser, catching DOM-specific issues that jsdom misses — particularly around CSS media queries, intersection observers, and Web Component rendering.
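Enabling browser mode is a small config change. A sketch of the relevant `vitest.config.ts` excerpt — note that the exact shape varies by Vitest version (`instances` is the Vitest 3 form; earlier releases used a single `name` field instead):

```ts
// vitest.config.ts — sketch; verify options against your Vitest version
import { defineConfig } from "vitest/config";

export default defineConfig({
  test: {
    browser: {
      enabled: true,
      provider: "playwright", // run tests in real Chromium instead of jsdom
      instances: [{ browser: "chromium" }],
    },
  },
});
```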
CI Integration
Every PR triggers the full test suite via GitHub Actions. Our reusable workflow:
1. Runs Vitest with --reporter=junit for the GitHub Actions test summary
2. Collects coverage with @vitest/coverage-v8
3. Posts the coverage diff as a PR comment (minimum 80% for new code)
4. Fails on any test.skip or test.todo — these must be resolved before merge
Nx
Nx has moved to trial. The Taycan team has migrated from Turborepo and the 911 team has started adoption. Remote caching via Nx Cloud has reduced average CI build times by 65%. The module boundary enforcement feature helps maintain clean architecture in our growing monorepo, preventing unintended cross-package dependencies.
Why We Switched From Turborepo
The Taycan team initially adopted Turborepo for monorepo orchestration. After 6 months, limitations drove the switch to Nx:
- Task graph intelligence — Nx understands project dependencies at the code level, not just the package.json level. This means it can identify affected projects from a single file change, something Turborepo's hash-based approach missed.
- Module boundary enforcement — Nx's @nx/enforce-module-boundaries lint rule prevents architectural violations at PR time. Our tag system (scope:shared, scope:team-taycan, type:feature, type:util) ensures teams don't accidentally import each other's internal code.
- Code generation — nx generate scaffolds new packages, components, and services with consistent structure, reducing boilerplate and enforcing conventions.
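The tag constraints described above are expressed via the `@nx/enforce-module-boundaries` ESLint rule in the workspace lint config. A sketch using the tags from the text, in `.eslintrc.json` form (the exact constraint set and per-project tags are illustrative):

```json
{
  "rules": {
    "@nx/enforce-module-boundaries": [
      "error",
      {
        "depConstraints": [
          {
            "sourceTag": "scope:team-taycan",
            "onlyDependOnLibsWithTags": ["scope:shared", "scope:team-taycan"]
          },
          {
            "sourceTag": "scope:shared",
            "onlyDependOnLibsWithTags": ["scope:shared"]
          }
        ]
      }
    ]
  }
}
```

Each project declares its tags in its project configuration, and the lint rule fails any PR whose imports cross a boundary the constraints forbid.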
Remote Caching Results
Nx Cloud caches task outputs (build, test, lint) across the entire team and CI:
| Metric | Before (Turborepo) | After (Nx Cloud) |
| --- | --- | --- |
| Avg CI build time | 12 min | 4.2 min |
| Cache hit rate | ~40% | ~78% |
| Local dev rebuild | 45s | 8s |
| Flaky test reruns | Manual | Automatic (Nx Agents) |
The higher cache hit rate comes from Nx's finer-grained task hashing — it only invalidates caches when inputs that actually affect the output change, ignoring irrelevant file modifications like README updates.
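That "ignore irrelevant files" behavior comes from Nx's `namedInputs`. A sketch of an `nx.json` excerpt that excludes Markdown and test files from the production input set, so a README edit does not invalidate build caches (the exact glob list is illustrative):

```json
{
  "namedInputs": {
    "default": ["{projectRoot}/**/*"],
    "production": [
      "default",
      "!{projectRoot}/**/*.md",
      "!{projectRoot}/**/*.spec.ts"
    ]
  },
  "targetDefaults": {
    "build": {
      "cache": true,
      "inputs": ["production", "^production"]
    }
  }
}
```

The `^production` entry hashes the production inputs of upstream dependencies too, so a change in a shared library still invalidates downstream builds.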
Monorepo Structure
Our Nx workspace organizes ~40 packages:
```
apps/
  configurator/      (Taycan team — Next.js)
  dealer-portal/     (911 team — Next.js)
  admin-dashboard/   (911 team — Vite + React)
libs/
  shared/ui/         (Porsche Design System wrappers)
  shared/utils/      (Common utilities)
  shared/types/      (Cross-project TypeScript types)
  team-taycan/       (Taycan-scoped libraries)
  team-911/          (911-scoped libraries)
```
Module boundaries ensure apps/configurator can import from libs/shared/* and libs/team-taycan/* but never from libs/team-911/*.