# Why Testing Is Your Ticket Into a Software House
## The Question That Changed My Perspective
In every software house interview I've studied, one question keeps appearing: "How do you test your code?"
Not "do you write tests?" — they assume you do. The question is how. What's your strategy? When do you write them? How do you decide what to test? The answer reveals whether you're a solo developer who ships and hopes, or a team player who ships with confidence.
Testing isn't just a technical skill. It's a signal that you understand how professional software teams operate.
## Why Software Houses Care About Testing
A software house is fundamentally different from a startup or a solo project. You're working on multiple client projects, often rotating between teams. The code you write today will be maintained by someone else tomorrow.
In this environment, tests serve three purposes:
- **Documentation.** A well-written test suite tells new developers what the code is supposed to do — faster than any README.
- **Confidence.** When you refactor a module or upgrade a dependency, tests tell you immediately if you broke something.
- **Speed.** Paradoxically, writing tests makes you faster. Without them, every change requires manual verification across every affected feature. With them, you run a command and get a definitive answer.
Software houses bill clients for working software, not debugging time. A developer who ships tested code saves the company money on every project.
## The Testing Pyramid
Not all tests are equal. The testing pyramid is a framework for deciding what to test and how:
```
          /   E2E   \          Few, slow, expensive
         /-----------\
        / Integration \        Some, moderate speed
       /---------------\
      /   Unit Tests    \      Many, fast, cheap
     /___________________\
```
### Unit Tests: The Foundation
Unit tests verify individual functions and components in isolation. They're fast (milliseconds), cheap to write, and should cover the majority of your test suite.
For a Next.js project, I'd use Vitest — it's fast, TypeScript-native, and compatible with the Jest API most developers already know:
```ts
// contact-schema.test.ts
import { describe, it, expect } from "vitest";
import { contactSchema } from "@/lib/contact-schema";

describe("contactSchema", () => {
  it("accepts valid contact form data", () => {
    const result = contactSchema.safeParse({
      name: "Jane Smith",
      email: "jane@example.com",
      subject: "Job Opportunity",
      message: "I'd like to discuss a role on our team.",
    });
    expect(result.success).toBe(true);
  });

  it("rejects empty name", () => {
    const result = contactSchema.safeParse({
      name: "",
      email: "jane@example.com",
      subject: "Job Opportunity",
      message: "A valid message here.",
    });
    expect(result.success).toBe(false);
  });

  it("rejects invalid email format", () => {
    const result = contactSchema.safeParse({
      name: "Jane",
      email: "not-an-email",
      subject: "Collaboration",
      message: "Let's work together on something.",
    });
    expect(result.success).toBe(false);
  });

  it("rejects messages over 2000 characters", () => {
    const result = contactSchema.safeParse({
      name: "Jane",
      email: "jane@example.com",
      subject: "General Inquiry",
      message: "a".repeat(2001),
    });
    expect(result.success).toBe(false);
  });
});
```
Notice the pattern: each test verifies one behavior. The test names describe what should happen, not how the code works internally. This makes the test suite readable as documentation.
### Integration Tests: The Middle Layer
Integration tests verify that multiple pieces work together — a component renders with its dependencies, an API route handles a full request-response cycle, or a database query returns expected results.
For React components, React Testing Library encourages testing from the user's perspective:
```tsx
// contact-form.test.tsx
import { describe, it, expect } from "vitest";
import { render, screen } from "@testing-library/react";
import userEvent from "@testing-library/user-event";
import { ContactForm } from "@/components/contact/contact-form";

describe("ContactForm", () => {
  it("shows validation errors for empty submission", async () => {
    render(<ContactForm />);
    const submitButton = screen.getByRole("button", { name: /send/i });
    await userEvent.click(submitButton);
    expect(
      await screen.findByText(/name must be at least/i)
    ).toBeInTheDocument();
  });

  it("disables submit button while sending", async () => {
    render(<ContactForm />);
    // Fill valid data...
    const submitButton = screen.getByRole("button", { name: /send/i });
    await userEvent.click(submitButton);
    expect(submitButton).toBeDisabled();
  });
});
```
The key principle: test what the user sees and does, not internal component state. If a user can't tell the difference, the test shouldn't either.
### E2E Tests: The Safety Net
End-to-end tests verify complete user journeys through your application. They're slow and expensive to maintain, so use them sparingly — only for critical paths.
Playwright is the current gold standard for E2E testing in the Next.js ecosystem:
```ts
// contact-flow.spec.ts
import { test, expect } from "@playwright/test";

test("user can submit the contact form", async ({ page }) => {
  await page.goto("/contact");
  await page.fill('[name="name"]', "Test User");
  await page.fill('[name="email"]', "test@example.com");
  await page.selectOption('[name="subject"]', "General Inquiry");
  await page.fill('[name="message"]', "This is a test message for the contact form.");
  await page.click('button[type="submit"]');
  await expect(
    page.getByText(/message sent/i)
  ).toBeVisible({ timeout: 10000 });
});
```
For my projects, I reserve E2E tests for flows that cross multiple system boundaries. In the Personal AI Employee project, for example, the critical path involves Gmail ingestion → agent processing → Git commit. A unit test can't verify that chain — only an E2E test can.
## Real-World Testing Decisions
Let me share how I think about testing in two of my projects:
### Personal AI Employee
This system runs 24/7 on Oracle Cloud, processing tasks autonomously. Testing is critical because failures happen when nobody is watching.
- Circuit breaker logic gets thorough unit tests. If the circuit breaker fails, the agent might retry a broken operation endlessly, burning API credits and filling logs.
- Email parsing gets integration tests with real email fixtures. Edge cases in email formatting are endless — HTML emails, plain text, forwarded chains, attachments.
- The full agent loop gets a lightweight E2E test that processes a known-good task and verifies the output.
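To make the circuit breaker point concrete, here's the kind of unit test target that logic presents. This is a minimal sketch, not the actual Personal AI Employee implementation — the class name, API, and threshold behavior are all assumptions:

```typescript
// A minimal circuit breaker sketch -- illustrative only. After
// `threshold` consecutive failures the breaker opens and blocks
// further calls; a success resets it.
type BreakerState = "closed" | "open";

class CircuitBreaker {
  private failures = 0;
  private state: BreakerState = "closed";

  constructor(private readonly threshold: number) {}

  allowRequest(): boolean {
    return this.state === "closed";
  }

  recordFailure(): void {
    this.failures += 1;
    if (this.failures >= this.threshold) this.state = "open";
  }

  recordSuccess(): void {
    this.failures = 0;
    this.state = "closed";
  }
}

// The exact behaviors a unit test should pin down:
const breaker = new CircuitBreaker(3);
breaker.recordFailure();
breaker.recordFailure();
console.log(breaker.allowRequest()); // → true: two failures, still closed
breaker.recordFailure();
console.log(breaker.allowRequest()); // → false: third failure opens it
```

Because this is pure state with no I/O, tests like these run in microseconds — exactly why such logic belongs at the base of the pyramid.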
### Flow (AI-Powered Todo App)
Flow has a rich UI with real-time updates. The testing strategy focuses on user interactions:
- API routes get unit tests for validation and error handling — the same Zod pattern from my contact form, applied to task CRUD operations.
- React components get integration tests for interactive behaviors — creating tasks, toggling completion, filtering views.
- The AI chatbot gets E2E tests for critical conversations — "create a task called X" should actually create the task.
## How to Start Testing an Existing Project
If you have a project with zero tests, don't try to reach 100% coverage overnight. Here's a practical approach:
### Step 1: Set Up the Toolchain
For a Next.js project, install Vitest and React Testing Library:
```bash
pnpm add -D vitest @testing-library/react @testing-library/user-event @testing-library/jest-dom @vitejs/plugin-react jsdom
```
Add a minimal Vitest config:
```ts
// vitest.config.ts
import { defineConfig } from "vitest/config";
import react from "@vitejs/plugin-react";
import path from "path";

export default defineConfig({
  plugins: [react()],
  test: {
    environment: "jsdom",
    setupFiles: ["./vitest.setup.ts"],
  },
  resolve: {
    alias: {
      "@": path.resolve(__dirname, "./src"),
    },
  },
});
```
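The `setupFiles` entry points at a file not shown here. A minimal `vitest.setup.ts` just registers jest-dom's matchers (`toBeInTheDocument`, `toBeDisabled`, and friends) with Vitest's `expect` — a sketch, assuming `@testing-library/jest-dom` v6+ is installed:

```ts
// vitest.setup.ts
// Registers jest-dom matchers on Vitest's expect.
import "@testing-library/jest-dom/vitest";
```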
### Step 2: Test the Boundaries First
Start with the code that sits at system boundaries — validation schemas, API routes, utility functions. These are the easiest to test and provide the highest return on investment.
Your Zod schemas are a perfect starting point. They're pure functions with no side effects, no dependencies, and clear expected behavior.
### Step 3: Add Tests When You Fix Bugs
Every bug is a test case waiting to happen. When you fix a bug, write a test that would have caught it. This gradually builds coverage around the areas that actually break.
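To illustrate the pattern with a hypothetical bug (the `slugify` helper and its whitespace issue are invented for this example, not from a real project):

```typescript
// Hypothetical regression: slugify("  Hello  World ") once produced
// "hello--world" because each space became its own dash. The fixed
// implementation collapses whitespace runs before replacing:
function slugify(title: string): string {
  return title.trim().toLowerCase().replace(/\s+/g, "-");
}

// The test that would have caught the bug -- and now guards the fix:
if (slugify("  Hello  World ") !== "hello-world") {
  throw new Error("regression: whitespace runs must collapse to one dash");
}
console.log(slugify("  Hello  World ")); // → "hello-world"
```

The test name (or error message) should describe the bug, so the next person who breaks it understands what they broke.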
### Step 4: Test New Features as You Build Them
Going forward, write tests alongside new code. You don't need test-driven development (TDD) — writing tests immediately after the implementation works fine. The habit matters more than the order.
## Testing as a Career Differentiator
Here's what I've observed: most junior developers applying to software houses have similar portfolios. They've built todo apps, e-commerce sites, and dashboards. The technical skills are comparable.
What separates candidates is how they think about quality:
- Can you explain your testing strategy for a project?
- Can you write a test for a bug you just fixed?
- Do you understand the trade-offs between unit, integration, and E2E tests?
- Can you set up a testing pipeline from scratch?
These skills signal that you're ready for team-based development. You understand that code isn't done when it works on your machine — it's done when it works reliably in production, and you can prove it.
## Your Testing Action Plan
If you're preparing to join a software house, here's what I'd recommend:
- Pick one project and add Vitest + React Testing Library to it.
- Write 5 unit tests for your validation logic or utility functions.
- Write 2 integration tests for your most important React components.
- Add a test script to your `package.json` and run it in CI.
- Mention testing in interviews. Talk about what you tested, why, and what you learned.
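For the CI step, the scripts entry is a one-liner — `vitest run` executes the suite once and exits, rather than starting watch mode:

```json
{
  "scripts": {
    "test": "vitest run"
  }
}
```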
You don't need 90% coverage. You need to demonstrate that you understand why testing matters and that you can do it effectively. In a software house environment, that understanding is worth more than any framework certification.
Testing isn't a checkbox on a job application. It's a mindset that makes you a more reliable engineer — and that's exactly what software houses are looking for.