# testme
**Repository Path**: mirrors_embedthis/testme
## Basic Information
- **Project Name**: testme
- **Description**: Multi-language Test Runner
- **Primary Language**: Unknown
- **License**: Not specified
- **Default Branch**: main
- **Homepage**: None
- **GVP Project**: No
## Statistics
- **Stars**: 0
- **Forks**: 0
- **Created**: 2025-09-29
- **Last Updated**: 2025-10-18
## Categories & Tags
**Categories**: Uncategorized
**Tags**: None
## README
# TestMe - Test Runner for System Projects
TestMe is a specialized test runner designed for **embedded systems**, **C/C++/Rust**, and **core infrastructure projects** that use Make or CMake build systems. It discovers, compiles, and executes tests across multiple programming languages with configurable patterns and parallel execution -- ideal for low-level and performance-critical codebases.
## Ideal Use Cases
TestMe is purpose-built for:
- **Embedded systems** - Cross-platform firmware and IoT device testing
- **C/C++/Rust projects** - Native compilation with GCC/Clang/MSVC, direct binary execution
- **Make/CMake-based projects** - Seamless integration with traditional build systems
- **Core infrastructure** - System-level components, libraries, and low-level tools
- **Multi-language tests** - Write tests in C, C++, shell scripts, Python, Go, or JavaScript/TypeScript
## When to Consider Alternatives
For cloud-native or application-level projects written in JavaScript/TypeScript or Python, **[Jest](https://jestjs.io/)** or **[Vitest](https://vitest.dev/)** may be better choices. They provide:
- Superior TypeScript transpilation and module resolution
- Rich ecosystem of plugins, matchers, and integrations
- Advanced watch mode with hot module reload
- Integrated code coverage, snapshot testing, and mocking
- First-class framework support (React, Vue, Angular, Svelte)
**TestMe focuses on simplicity and direct execution** for system-level projects.
---
TestMe is under very active development at this time and may be a little unstable. Please report any issues you find and we will try to fix them quickly.
## Features
- **Multi-language Support**: Shell (`.tst.sh`), PowerShell (`.tst.ps1`), Batch (`.tst.bat`, `.tst.cmd`), C (`.tst.c`), JavaScript (`.tst.js`), TypeScript (`.tst.ts`), Python (`.tst.py`), and Go (`.tst.go`).
- **Jest/Vitest-Compatible API**: Use familiar `expect()` syntax alongside traditional test functions for JavaScript/TypeScript tests
- **Automatic Compilation**: C programs are compiled automatically with platform-appropriate compilers (GCC/Clang/MSVC)
- **Cross-platform**: Full support for Windows, macOS, and Linux with native test types for each platform
- **Recursive Discovery**: Finds test files at any depth in directory trees
- **Pattern Matching**: Filter tests using glob patterns, file names, or directory names
- **Parallel Execution**: Run tests concurrently for better performance
- **Artifact Management**: Organized build artifacts in `.testme` directories
- **Hierarchical Configuration**: `testme.json5` files with tree traversal lookup
- **Environment Variables**: Dynamic environment setup with glob expansion support
- **Test Control**: Skip scripts, depth requirements, and enable/disable flags
- **Multiple Output Formats**: Simple, detailed, and JSON reporting
- **Integrated Debugging**: Multi-language debug support with platform-specific debuggers
- C: GDB, LLDB, Xcode, Visual Studio, VS Code, Cursor
- JavaScript/TypeScript: Bun inspector, VS Code, Cursor
- Python: pdb, VS Code, Cursor
- Go: Delve, VS Code, Cursor
## Table of Contents
- [Installation](#installation)
- [Quick Start](#quick-start)
- [API Reference](#api-reference)
- [Test File Types](#test-file-types)
- [Usage](#usage)
- [Configuration](#configuration)
- [Common Use Cases](#common-use-cases)
- [Artifact Management](#artifact-management)
- [Debugging Tests](#debugging-tests)
- [Output Formats](#output-formats)
- [Development](#development)
- [Tips and Best Practices](#tips-and-best-practices)
- [Publishing](#publishing)
## Installation
### Prerequisites
TestMe requires **[Bun](https://bun.sh)**, a fast JavaScript runtime with built-in TypeScript support. Visit [bun.sh](https://bun.sh) for installation instructions.
Ensure the Bun bin directory is in your PATH.
For Unix/Linux/macOS:
```sh
export "PATH=~/.bun/bin:$PATH"
```
On Windows with PowerShell:
```powershell
setx PATH "$($env:PATH);$env:USERPROFILE\.bun\bin"
```
### Quick TestMe Install
You can install TestMe from the npm registry using the following command:
**Important:** When installing with Bun, TestMe requires the `--trust` flag so that its postinstall scripts can run.
```bash
# Using Bun (recommended)
bun install -g --trust @embedthis/testme
# Using npm
npm install -g @embedthis/testme
```
**Verify installation:**
```bash
tm --version
```
**Troubleshooting:** If the `tm` command is not found after installation with Bun:
- You may have forgotten the `--trust` flag
- Run the check script: `node node_modules/@embedthis/testme/bin/check-install.mjs`
- Or manually install: `cd node_modules/@embedthis/testme && bun bin/install.mjs`
### Manual Installation from GitHub
#### Unix/Linux/macOS
1. Clone or download the TestMe project
2. Install dependencies:
```bash
bun install
```
3. Build the project:
```bash
bun run build
```
4. Install the project (macOS/Linux only):
```bash
sudo bun run install
```
#### Windows
##### 1. Build and Install
```powershell
# Install dependencies
bun install
# Build
bun run build
```
## Quick Start
### Initialize a New Project
```bash
# Create a testme.json5 configuration file
tm --init
# Create test files from templates
tm --new math.c # Creates math.tst.c
tm --new api.js # Creates api.tst.js
tm --new test.sh # Creates test.tst.sh
tm --new types.ts # Creates types.tst.ts
```
### Manual Setup
1. **Create test files** with the appropriate extensions:
```bash
# C test
echo '#include "testme.h"
int main() { teq(2+2, 4, "math works"); return 0; }' > math.tst.c
# JavaScript test
echo 'console.log("✓ Test passed"); process.exit(0);' > test.tst.js
```
2. **Run tests**:
```bash
# Run all tests
tm
# Run specific tests
tm "*.tst.c"
# Run tests in a directory
tm integration
# List available tests
tm --list
```
3. **Clean up build artifacts**:
```bash
tm --clean
```
## API Reference
TestMe provides comprehensive testing APIs for C, JavaScript, and TypeScript:
### Testing API Documentation
- **[README-TESTS.md](README-TESTS.md)** - General test requirements, exit codes, output streams, environment variables
- **[README-C.md](README-C.md)** - Complete C testing API reference (`testme.h` functions)
- **[README-JS.md](README-JS.md)** - Complete JavaScript/TypeScript testing API reference
- **[doc/JEST_API.md](doc/JEST_API.md)** - Jest/Vitest-compatible API examples and migration guide
### Quick API Overview
**C Tests:**
```c
#include "testme.h"
teqi(2 + 2, 4, "Addition test"); // Assert equality
ttrue(value > 0, "Value should be positive"); // Assert condition
tinfo("Test progress..."); // Print info message
```
**JavaScript/TypeScript Tests:**
```javascript
import {expect, describe, test} from 'testme'
// Jest/Vitest-compatible API
await describe('Math operations', () => {
    test('addition', () => {
        expect(2 + 2).toBe(4)
    })
})
// Traditional API
teqi(2 + 2, 4, 'Addition test')
ttrue(value > 0, 'Value should be positive')
```
For complete API documentation including all functions, matchers, and behaviors, see the API reference documents above.
## Test File Types
TestMe supports multiple test file types across platforms. All tests should exit with code 0 for success, non-zero for failure.
### Shell Tests (`.tst.sh`)
Shell script tests that are executed directly. Exit code 0 indicates success.
```bash
#!/bin/bash
# test_example.tst.sh
echo "Running shell test..."
result=$((2 + 2))
if [ $result -eq 4 ]; then
    echo "✓ Math test passed"
    exit 0
else
    echo "✗ Math test failed"
    exit 1
fi
```
### PowerShell Tests (`.tst.ps1`) - Windows
PowerShell script tests for Windows environments.
```powershell
# test_example.tst.ps1
Write-Host "Running PowerShell test..."
$result = 2 + 2
if ($result -eq 4) {
    Write-Host "✓ Math test passed"
    exit 0
} else {
    Write-Host "✗ Math test failed"
    exit 1
}
```
### Batch Tests (`.tst.bat`, `.tst.cmd`) - Windows
Windows batch script tests.
```batch
@echo off
REM test_example.tst.bat
echo Running batch test...
set /a result=2+2
if %result% == 4 (
    echo Test passed
    exit /b 0
) else (
    echo Test failed
    exit /b 1
)
```
### C Tests (`.tst.c`)
C programs that are compiled automatically before execution. Include `testme.h` for built-in testing utilities, or use standard assertions and exit codes.
```c
// test_math.tst.c
#include "testme.h"
int add(int a, int b) {
    return a + b;
}

int main() {
    tinfo("Running C math tests...\n");

    // Test basic arithmetic
    teq(add(2, 3), 5, "Addition test");
    tneq(add(2, 3), 6, "Addition inequality test");
    ttrue(add(5, 0) == 5, "Identity test");

    // Test environment variable access
    const char *binPath = tget("BIN", "/default/bin");
    ttrue(binPath != NULL, "BIN environment variable available");

    // Check if running in verbose mode
    if (thas("TESTME_VERBOSE")) {
        tinfo("Verbose mode enabled\n");
    }
    return 0;
}
```
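If a test does not need the `testme.h` helpers, it can rely on standard assertions and its exit code alone, as noted above. A minimal sketch (the file name and values are illustrative):

```c
// exitcode.tst.c - plain C test that reports its result via the exit code
#include <stdio.h>

int main(void) {
    int result = 2 + 2;
    if (result != 4) {
        fprintf(stderr, "Math test failed: expected 4, got %d\n", result);
        return 1;   // Non-zero exit code marks the test as failed
    }
    printf("Math test passed\n");
    return 0;       // Zero exit code marks the test as passed
}
```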
#### C Testing Functions (testme.h)
**Equality Tests:**
- `teqi(a, b, msg)` - Assert two int values are equal
- `teql(a, b, msg)` - Assert two long values are equal
- `teqll(a, b, msg)` - Assert two long long values are equal
- `teqz(a, b, msg)` - Assert two size_t/ssize values are equal
- `tequ(a, b, msg)` - Assert two unsigned int values are equal
- `teqp(a, b, msg)` - Assert two pointer values are equal
- `tmatch(str, pattern, msg)` - Assert string matches exactly
**Inequality Tests:**
- `tneqi(a, b, msg)` - Assert two int values are not equal
- `tneql(a, b, msg)` - Assert two long values are not equal
- `tneqll(a, b, msg)` - Assert two long long values are not equal
- `tneqz(a, b, msg)` - Assert two size_t/ssize values are not equal
- `tnequ(a, b, msg)` - Assert two unsigned int values are not equal
- `tneqp(a, b, msg)` - Assert two pointer values are not equal
**Comparison Tests (Greater Than):**
- `tgti(a, b, msg)` - Assert a > b (int)
- `tgtl(a, b, msg)` - Assert a > b (long)
- `tgtll(a, b, msg)` - Assert a > b (long long)
- `tgtz(a, b, msg)` - Assert a > b (size_t/ssize)
- `tgtei(a, b, msg)` - Assert a >= b (int)
- `tgtel(a, b, msg)` - Assert a >= b (long)
- `tgtell(a, b, msg)` - Assert a >= b (long long)
- `tgtez(a, b, msg)` - Assert a >= b (size_t/ssize)
**Comparison Tests (Less Than):**
- `tlti(a, b, msg)` - Assert a < b (int)
- `tltl(a, b, msg)` - Assert a < b (long)
- `tltll(a, b, msg)` - Assert a < b (long long)
- `tltz(a, b, msg)` - Assert a < b (size_t/ssize)
- `tltei(a, b, msg)` - Assert a <= b (int)
- `tltel(a, b, msg)` - Assert a <= b (long)
- `tltell(a, b, msg)` - Assert a <= b (long long)
- `tltez(a, b, msg)` - Assert a <= b (size_t/ssize)
**Boolean and String Tests:**
- `ttrue(expr, msg)` - Assert expression is true
- `tfalse(expr, msg)` - Assert expression is false
- `tcontains(str, substr, msg)` - Assert string contains substring
- `tnull(ptr, msg)` - Assert pointer is NULL
- `tnotnull(ptr, msg)` - Assert pointer is not NULL
**Control Functions:**
- `tfail(msg)` - Unconditionally fail test with message
**Environment Functions:**
- `tget(key, default)` - Get environment variable with default
- `tgeti(key, default)` - Get environment variable as integer
- `thas(key)` - Check if environment variable exists
- `tdepth()` - Get current test depth
**Output Functions:**
- `tinfo(fmt, ...)` - Print informational message (with auto-newline)
- `tdebug(fmt, ...)` - Print debug message (with auto-newline)
- `tskip(fmt, ...)` - Print skip message (with auto-newline)
- `twrite(fmt, ...)` - Print output message (with auto-newline)
**Legacy Functions (deprecated):**
- `teq(a, b, msg)` - Use `teqi()` instead
- `tneq(a, b, msg)` - Use `tneqi()` instead
- `tassert(expr, msg)` - Use `ttrue()` instead
All test macros support optional printf-style format strings and arguments for custom messages.
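As a quick illustration, the sketch below exercises several of the macros listed above, including an optional printf-style message; the file name and values are illustrative:

```c
// macros.tst.c - illustrative use of the testme.h assertion macros
#include "testme.h"
#include <string.h>

int main(void) {
    int         n = 2 + 2;
    const char *greeting = "hello, world";

    teqi(n, 4, "2 + 2 should be 4 (got %d)", n);      // printf-style message
    tgti(n, 3, "result should exceed 3");
    tltei(n, 4, "result should not exceed 4");
    tcontains(greeting, "world", "greeting should mention world");
    tnotnull(greeting, "greeting pointer should be set");
    if (strlen(greeting) == 0) {
        tfail("greeting should never be empty");      // unconditional failure
    }
    tinfo("macro demo finished");
    return 0;
}
```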
### JavaScript Tests (`.tst.js`)
JavaScript tests are executed with the Bun runtime. Import the `testme` module for built-in testing utilities, or use standard assertions.
**Note**: TestMe automatically installs and links the testme module when running JS tests if not already linked.
```javascript
// test_array.tst.js
import {teq, tneq, ttrue, tinfo, tget, thas} from 'testme'
tinfo('Running JavaScript tests...')
const arr = [1, 2, 3]
const sum = arr.reduce((a, b) => a + b, 0)
// Test using testme utilities
teq(sum, 6, 'Array sum test')
tneq(sum, 0, 'Array sum is not zero')
ttrue(arr.length === 3, 'Array has correct length')
// Test environment variable access
const binPath = tget('BIN', '/default/bin')
ttrue(binPath !== null, 'BIN environment variable available')
// Check if running in verbose mode
if (thas('TESTME_VERBOSE')) {
    tinfo('Verbose mode enabled')
}
```
#### JavaScript Testing Functions (testme module)
**Traditional API:**
- `teq(received, expected, msg)` - Assert two values are equal
- `tneq(received, expected, msg)` - Assert two values are not equal
- `ttrue(expr, msg)` - Assert expression is true
- `tfalse(expr, msg)` - Assert expression is false
- `tmatch(str, pattern, msg)` - Assert string matches regex pattern
- `tcontains(str, substr, msg)` - Assert string contains substring
- `tfail(msg)` - Fail test with message
- `tget(key, default)` - Get environment variable with default
- `thas(key)` - Check if environment variable exists (as number)
- `tdepth()` - Get current test depth
- `tverbose()` - Check if verbose mode is enabled
- `tinfo(...)`, `tdebug(...)` - Print informational messages
- `tassert(expr, msg)` - Alias for `ttrue`
**Jest/Vitest-Compatible API:**
TestMe supports a Jest/Vitest-compatible `expect()` API alongside the traditional `t*` functions. This allows developers familiar with modern JavaScript testing frameworks to use their preferred syntax:
```javascript
import {expect} from 'testme'
// Basic assertions
expect(1 + 1).toBe(2) // Strict equality (===)
expect({a: 1}).toEqual({a: 1}) // Deep equality
expect('hello').toContain('ell') // String/array contains
expect([1, 2, 3]).toHaveLength(3) // Length check
// Negation with .not
expect(5).not.toBe(10)
expect('test').not.toContain('xyz')
// Truthiness
expect(true).toBeTruthy()
expect(0).toBeFalsy()
expect(null).toBeNull()
expect(undefined).toBeUndefined()
expect('value').toBeDefined()
// Type checking
expect(new Date()).toBeInstanceOf(Date)
expect('hello').toBeTypeOf('string')
// Numeric comparisons
expect(10).toBeGreaterThan(5)
expect(10).toBeGreaterThanOrEqual(10)
expect(5).toBeLessThan(10)
expect(5).toBeLessThanOrEqual(5)
expect(0.1 + 0.2).toBeCloseTo(0.3) // Floating point comparison
// String/Regex matching
expect('hello world').toMatch(/world/)
expect('test@example.com').toMatch(/^[\w.]+@[\w.]+$/)
// Object matchers
expect({name: 'Alice', age: 30}).toHaveProperty('name', 'Alice')
expect({a: 1, b: 2, c: 3}).toMatchObject({a: 1, b: 2})
expect([{id: 1}, {id: 2}]).toContainEqual({id: 1})
// Error handling
expect(() => {
    throw new Error('fail')
}).toThrow('fail')
expect(() => JSON.parse('invalid')).toThrowError(SyntaxError)
// Async/Promise support
await expect(Promise.resolve(42)).resolves.toBe(42)
await expect(Promise.reject(new Error('fail'))).rejects.toThrow()
await expect(fetchData()).resolves.toHaveProperty('status', 'ok')
```
**Available Matchers:**
- **Equality**: `toBe()`, `toEqual()`, `toStrictEqual()`
- **Truthiness**: `toBeTruthy()`, `toBeFalsy()`, `toBeNull()`, `toBeUndefined()`, `toBeDefined()`, `toBeNaN()`
- **Type Checking**: `toBeInstanceOf()`, `toBeTypeOf()`
- **Numeric**: `toBeGreaterThan()`, `toBeGreaterThanOrEqual()`, `toBeLessThan()`, `toBeLessThanOrEqual()`, `toBeCloseTo()`
- **Strings/Collections**: `toMatch()`, `toContain()`, `toContainEqual()`, `toHaveLength()`
- **Objects**: `toHaveProperty()`, `toMatchObject()`
- **Errors**: `toThrow()`, `toThrowError()`
- **Modifiers**: `.not` (negation), `.resolves` (promise resolution), `.rejects` (promise rejection)
**Choosing Between APIs:**
- **Use `expect()` API** if you're familiar with Jest/Vitest or prefer expressive, chainable assertions
- **Use `t*` functions** if you prefer traditional assertion functions or are writing C-style tests
Both APIs are fully supported and can be mixed in the same project. See [doc/JEST_API.md](doc/JEST_API.md) for complete API documentation and migration guide.
### TypeScript Tests (`.tst.ts`)
TypeScript tests are executed with the Bun runtime (includes automatic transpilation). Import the `testme` module for built-in testing utilities.
**Note**: TestMe automatically installs and links the `testme` module when running TS tests if not already linked.
**Traditional API:**
```typescript
// test_types.tst.ts
import {teq, ttrue, tinfo, tget} from 'testme'
tinfo('Running TypeScript tests...')
interface User {
    name: string
    age: number
}
const user: User = {name: 'John', age: 30}
// Test using testme utilities with TypeScript types
teq(user.name, 'John', 'User name test')
teq(user.age, 30, 'User age test')
ttrue(typeof user.name === 'string', 'Name is string type')
ttrue(typeof user.age === 'number', 'Age is number type')
// Test environment variable access with types
const binPath: string | null = tget('BIN', '/default/bin')
ttrue(binPath !== null, 'BIN environment variable available')
```
**Jest/Vitest API (with full TypeScript type inference):**
```typescript
// test_api.tst.ts
import {expect} from 'testme'
interface ApiResponse {
    status: 'ok' | 'error'
    data?: unknown
    error?: string
}
const response: ApiResponse = {
    status: 'ok',
    data: {users: [{id: 1, name: 'Alice'}]},
}
// Type-safe assertions with IntelliSense support
expect(response.status).toBe('ok')
expect(response).toHaveProperty('data')
expect(response.data).toBeDefined()
expect(response).not.toHaveProperty('error')
// Works seamlessly with async/await
async function fetchUser(id: number): Promise<{name: string; age: number}> {
    return {name: 'Alice', age: 30}
}
await expect(fetchUser(1)).resolves.toMatchObject({name: 'Alice'})
```
**Test Organization with describe() and test():**
TestMe supports organizing tests using `describe()` blocks and `test()` functions, compatible with Jest/Vitest workflows:
```typescript
// test_calculator.tst.ts
import {describe, test, it, expect, beforeEach, afterEach} from 'testme'
await describe('Calculator operations', async () => {
    let calculator

    beforeEach(() => {
        calculator = {value: 0}
    })
    afterEach(() => {
        calculator = null
    })

    test('starts with zero', () => {
        expect(calculator.value).toBe(0)
    })

    it('it() is an alias for test()', () => {
        expect(true).toBeTruthy()
    })

    test('async operations work', async () => {
        await new Promise((resolve) => setTimeout(resolve, 10))
        expect(calculator.value).toBe(0)
    })

    await describe('addition', () => {
        test('adds positive numbers', () => {
            calculator.value = 2 + 3
            expect(calculator.value).toBe(5)
        })
        test('adds negative numbers', () => {
            calculator.value = -2 + -3
            expect(calculator.value).toBe(-5)
        })
    })
})
```
**Key Features:**
- Top-level `describe()` blocks must be awaited
- Nested `describe()` blocks must be awaited within async describe functions
- `test()` functions execute sequentially within a describe block
- `beforeEach()` and `afterEach()` hooks run before/after each test in the current describe scope
- Hooks are scoped to their describe block and restored when the block exits
- When `expect()` is used inside `test()`, failures throw errors caught by the test runner
- When `expect()` is used outside `test()`, failures exit immediately (backward compatible)
**Note**: TypeScript tests support both the traditional `t*` functions and the Jest/Vitest `expect()` API with `describe()`/`test()` structure. Both run on the Bun runtime with full TypeScript type checking and IntelliSense support.
### Python Tests (`.tst.py`)
Python tests are executed with the Python runtime. Exit code 0 indicates success.
```python
#!/usr/bin/env python3
# test_example.tst.py
import sys
def test_math():
    result = 2 + 2
    assert result == 4, "Math test failed"
    print("✓ Math test passed")

if __name__ == "__main__":
    try:
        test_math()
        sys.exit(0)  # Success
    except AssertionError as e:
        print(f"✗ {e}")
        sys.exit(1)  # Failure
```
### Go Tests (`.tst.go`)
Go programs that are compiled and executed automatically. Exit code 0 indicates success.
```go
// test_math.tst.go
package main
import (
    "fmt"
    "os"
)

func add(a, b int) int {
    return a + b
}

func main() {
    // Test basic arithmetic
    if add(2, 3) != 5 {
        fmt.Println("✗ Addition test failed")
        os.Exit(1)
    }
    fmt.Println("✓ Addition test passed")

    // Test identity
    if add(5, 0) != 5 {
        fmt.Println("✗ Identity test failed")
        os.Exit(1)
    }
    fmt.Println("✓ Identity test passed")

    os.Exit(0)
}
```
## Usage
### Command Syntax
```bash
tm [OPTIONS] [PATTERNS...]
```
### Pattern Matching
Filter tests using various pattern types:
- **File patterns**: `"*.tst.c"`, `"math.tst.js"`
- **Base names**: `"math"` (matches math.tst.c, math.tst.js, etc.)
- **Directory names**: `"integration"`, `"unit/api"` (runs all tests in directory)
- **Path patterns**: `"**/math*"`, `"test/unit/*.tst.c"`
### Command Line Options
All available options sorted alphabetically:
| Option                 | Description                                                                                           |
| ---------------------- | ----------------------------------------------------------------------------------------------------- |
| `--chdir <dir>`        | Change to directory before running tests                                                              |
| `--clean`              | Remove all `.testme` artifact directories                                                              |
| `-c, --config <file>`  | Use specific configuration file                                                                        |
| `--continue`           | Continue running tests even if some fail, always exit with code 0                                      |
| `-d, --debug`          | Launch debugger (GDB on Linux, Xcode/LLDB on macOS, VS on Windows)                                     |
| `--depth <N>`          | Run tests with depth requirement ≤ N (default: 0)                                                      |
| `-h, --help`           | Show help message                                                                                      |
| `--init`               | Create `testme.json5` configuration file in current directory                                          |
| `-i, --iterations <N>` | Set iteration count (exports `TESTME_ITERATIONS` for tests to use internally, does not repeat tests)   |
| `-k, --keep`           | Keep `.testme` artifacts after running tests                                                           |
| `-l, --list`           | List discovered tests without running them                                                             |
| `--new <file>`         | Create new test file from template (e.g., `--new math.c` creates `math.tst.c`)                         |
| `-n, --no-services`    | Skip all service commands (skip, prep, setup, cleanup)                                                 |
| `-p, --profile <name>` | Set build profile (overrides config and `PROFILE` environment variable)                                |
| `-q, --quiet`          | Run silently with no output, only exit codes                                                           |
| `-s, --show`           | Display test configuration and environment variables                                                   |
| `--step`               | Run tests one at a time with prompts (forces serial mode)                                              |
| `-v, --verbose`        | Enable verbose mode with detailed output (sets `TESTME_VERBOSE=1`)                                     |
| `-V, --version`        | Show version information                                                                               |
| `-w, --workers <N>`    | Number of parallel workers (overrides config)                                                          |
### Usage Examples
```bash
# Basic usage
tm # Run all tests
tm --list # List tests without running
tm "*.tst.c" # Run only C tests
# Pattern filtering
tm integration # Run all tests in integration/ directory
tm test/unit # Run tests in test/unit/ directory
tm "math*" # Run tests starting with 'math'
tm "**/api*" # Run tests with 'api' in path
# Advanced options
tm -v integration # Verbose output for integration tests
tm --depth 2 # Run tests requiring depth ≤ 2
tm --debug math.tst.c # Debug specific C test
tm -s "*.tst.c" # Show test configuration and environment
tm --keep "*.tst.c" # Keep build artifacts
tm --no-services # Skip service commands (run services externally)
# Configuration
tm -c custom.json5 # Use custom config
tm --chdir /path/to/tests # Change directory first
```
### Working Directory Behavior
All tests execute with their working directory set to the directory containing the test file, allowing reliable access to relative files and resources.
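For example, a C test can open fixture files with paths relative to its own directory; a minimal sketch (the `fixtures/data.txt` file is a hypothetical fixture placed next to the test, not something TestMe provides):

```c
// fixture.tst.c - relies on the working directory being the test's own directory
#include "testme.h"
#include <stdio.h>

int main(void) {
    // The relative path resolves against the directory containing this test file
    FILE *fp = fopen("fixtures/data.txt", "r");
    tnotnull(fp, "fixture file should open with a relative path");
    if (fp) {
        char line[256];
        tnotnull(fgets(line, sizeof(line), fp), "fixture should not be empty");
        fclose(fp);
    }
    return 0;
}
```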
## Configuration
### Configuration File (`testme.json5`)
TestMe supports hierarchical configuration using nested `testme.json5` files throughout your project structure. Each test file gets its own configuration by walking up from the test file's directory to find the nearest configuration file.
#### Configuration Discovery Priority (highest to lowest):
1. CLI arguments
2. Test-specific `testme.json5` (nearest to test file)
3. Project `testme.json5` (walking up directory tree)
4. Built-in defaults
This enables:
- Project-wide defaults at the repository root
- Module-specific overrides in subdirectories
- Test-specific configuration closest to individual tests
- Automatic merging with CLI arguments preserved
```json5
{
    enable: true,
    depth: 0,
    compiler: {
        c: {
            // Compiler selection: 'default' (auto-detect), string (e.g., 'gcc'), or platform map
            compiler: {
                windows: 'msvc',
                macosx: 'clang',
                linux: 'gcc',
            },
            gcc: {
                flags: ['-I${../include}'],
                libraries: ['m', 'pthread'],
            },
            clang: {
                flags: ['-I${../include}'],
                libraries: ['m', 'pthread'],
            },
            msvc: {
                flags: ['/I${../include}'],
                libraries: [],
            },
        },
        es: {
            require: 'testme',
        },
    },
    execution: {
        timeout: 30000,
        parallel: true,
        workers: 4,
    },
    output: {
        verbose: false,
        format: 'simple',
        colors: true,
    },
    patterns: {
        // Base patterns for all platforms
        include: ['**/*.tst.c', '**/*.tst.js', '**/*.tst.ts'],
        // Platform-specific additions (merged with base patterns)
        windows: {
            include: ['**/*.tst.ps1', '**/*.tst.bat', '**/*.tst.cmd'],
        },
        macosx: {
            include: ['**/*.tst.sh'],
        },
        linux: {
            include: ['**/*.tst.sh'],
        },
    },
    services: {
        skip: './check-requirements.sh',
        prep: 'make build',
        setup: 'docker-compose up -d',
        cleanup: 'docker-compose down',
        skipTimeout: 30000,
        prepTimeout: 30000,
        setupTimeout: 30000,
        cleanupTimeout: 10000,
        delay: 3000,
    },
    env: {
        // Common environment variables for all platforms
        TEST_MODE: 'integration',
        // Platform-specific environment variables (merged with base)
        windows: {
            PATH: '${../build/*/bin};%PATH%',
        },
        linux: {
            LD_LIBRARY_PATH: '${../build/*/bin}:$LD_LIBRARY_PATH',
        },
        macosx: {
            DYLD_LIBRARY_PATH: '${../build/*/bin}:$DYLD_LIBRARY_PATH',
        },
    },
}
```
### Configuration Options
#### Test Control Settings
- `enable` - Enable or disable tests in this directory (default: true)
- `depth` - Minimum depth required to run tests (default: 0); a test runs only when the `--depth N` value is at least this setting (see the sketch below)
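A test can also read the depth itself and scale its own workload; a hedged sketch in C, assuming `tdepth()` returns the current depth as an `int` (the iteration counts are arbitrary):

```c
// stress.tst.c - hypothetical test that scales its workload with --depth
#include "testme.h"

int main(void) {
    int depth = tdepth();                       // 0 unless tm is run with --depth N
    int iterations = depth >= 2 ? 100000 : 100; // do more work at higher depths

    long sum = 0;
    for (int i = 0; i < iterations; i++) {
        sum += i;
    }
    ttrue(sum >= 0, "accumulated sum should be non-negative");
    tinfo("ran %d iterations at depth %d", iterations, depth);
    return 0;
}
```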
#### Compiler Settings
##### C Compiler Configuration
TestMe automatically detects and configures the appropriate C compiler for your platform:
- **Windows**: MSVC (Visual Studio), MinGW, or Clang
- **macOS**: Clang or GCC
- **Linux**: GCC or Clang
**Default Flags (automatically applied):**
- **GCC/Clang**: `-Wall -Wextra -Wno-unused-parameter -Wno-strict-prototypes -O0 -g -I. -I~/.local/include -L~/.local/lib` (plus `-I/opt/homebrew/include -L/opt/homebrew/lib` on macOS)
- **MSVC**: `/std:c11 /W4 /Od /Zi /nologo`
**Note**: No `-std=` flag is specified by default for GCC/Clang, allowing the compiler to use its default standard (typically `gnu17` or `gnu11`) which includes POSIX extensions like `strdup()`. This makes test code more permissive and easier to write. You can specify a specific standard in your `testme.json5` if needed (e.g., `-std=c99`, `-std=c11`).
**Configuration Options:**
- `compiler.c.compiler` - C compiler path (optional, use 'default' to auto-detect, or specify 'gcc', 'clang', or full path)
- `compiler.c.gcc.flags` - GCC-specific flags (merged with GCC defaults)
- `compiler.c.gcc.libraries` - GCC-specific libraries (e.g., `['m', 'pthread']`)
- `compiler.c.gcc.windows.flags` - Additional Windows-specific GCC flags
- `compiler.c.gcc.windows.libraries` - Additional Windows-specific GCC libraries
- `compiler.c.gcc.macosx.flags` - Additional macOS-specific GCC flags
- `compiler.c.gcc.macosx.libraries` - Additional macOS-specific GCC libraries
- `compiler.c.gcc.linux.flags` - Additional Linux-specific GCC flags
- `compiler.c.gcc.linux.libraries` - Additional Linux-specific GCC libraries
- `compiler.c.clang.flags` - Clang-specific flags (merged with Clang defaults)
- `compiler.c.clang.libraries` - Clang-specific libraries
- `compiler.c.clang.windows.flags` - Additional Windows-specific Clang flags
- `compiler.c.clang.windows.libraries` - Additional Windows-specific Clang libraries
- `compiler.c.clang.macosx.flags` - Additional macOS-specific Clang flags
- `compiler.c.clang.macosx.libraries` - Additional macOS-specific Clang libraries
- `compiler.c.clang.linux.flags` - Additional Linux-specific Clang flags
- `compiler.c.clang.linux.libraries` - Additional Linux-specific Clang libraries
- `compiler.c.msvc.flags` - MSVC-specific flags (merged with MSVC defaults)
- `compiler.c.msvc.libraries` - MSVC-specific libraries
- `compiler.c.msvc.windows.flags` - Additional Windows-specific MSVC flags
- `compiler.c.msvc.windows.libraries` - Additional Windows-specific MSVC libraries
**Note:** Platform-specific settings (`windows`, `macosx`, `linux`) are **additive** - they are appended to the base compiler settings, allowing you to specify common settings once and add platform-specific flags/libraries only where needed.
**Variable Expansion:**
Environment variables in compiler flags and paths support `${...}` expansion:
- `${PLATFORM}` - Current platform (e.g., macosx-arm64, linux-x64, win-x64)
- `${OS}` - Operating system (macosx, linux, windows)
- `${ARCH}` - CPU architecture (arm64, x64, x86)
- `${PROFILE}` - Build profile (debug, release, dev, prod, etc.)
- `${CC}` - Compiler name (gcc, clang, msvc)
- `${CONFIGDIR}` - Directory containing the testme.json5 file
- `${TESTDIR}` - Relative path from executable to test file directory
- `${pattern}` - Glob patterns (e.g., `${../build/*/bin}` expands to matching paths)
**Example (Basic):**
```json5
{
    compiler: {
        c: {
            // Auto-detect best compiler per platform
            compiler: {
                windows: 'msvc',
                macosx: 'clang',
                linux: 'gcc',
            },
            gcc: {
                flags: ['-I${../build/${PLATFORM}-${PROFILE}/inc}', '-L${../build/${PLATFORM}-${PROFILE}/bin}'],
                libraries: ['m', 'pthread'],
            },
            clang: {
                flags: ['-I${../build/${PLATFORM}-${PROFILE}/inc}', '-L${../build/${PLATFORM}-${PROFILE}/bin}'],
                libraries: ['m', 'pthread'],
            },
            msvc: {
                flags: ['/I${../build/${PLATFORM}-${PROFILE}/inc}', '/LIBPATH:${../build/${PLATFORM}-${PROFILE}/bin}'],
                libraries: [],
            },
        },
    },
}
```
**Example (Platform-Specific Settings):**
```json5
{
    compiler: {
        c: {
            gcc: {
                // Common flags for all platforms
                flags: ['-I..'],
                libraries: ['m', 'pthread'],
                // Additional macOS-specific settings
                macosx: {
                    flags: ['-framework', 'IOKit', '-framework', 'CoreFoundation'],
                    libraries: ['objc'],
                },
                // Additional Linux-specific settings
                linux: {
                    flags: ['-D_GNU_SOURCE'],
                    libraries: ['rt', 'dl'],
                },
            },
        },
    },
}
```
#### Execution Settings
- `execution.timeout` - Test timeout in milliseconds (default: 30000)
- `execution.parallel` - Enable parallel execution (default: true)
- `execution.workers` - Number of parallel workers (default: 4)
#### Output Settings
- `output.verbose` - Enable verbose output (default: false)
- `output.format` - Output format: "simple", "detailed", "json" (default: "simple")
- `output.colors` - Enable colored output (default: true)
#### Pattern Settings
Pattern configuration supports platform-specific patterns that are deep blended with base patterns:
- `patterns.include` - Array of include patterns applied to all platforms
- `patterns.exclude` - Array of exclude patterns applied to all platforms
- `patterns.windows.include` - Additional patterns for Windows (merged with base)
- `patterns.windows.exclude` - Additional exclude patterns for Windows
- `patterns.macosx.include` - Additional patterns for macOS (merged with base)
- `patterns.macosx.exclude` - Additional exclude patterns for macOS
- `patterns.linux.include` - Additional patterns for Linux (merged with base)
- `patterns.linux.exclude` - Additional exclude patterns for Linux
**Pattern Merging Behavior:**
Platform-specific patterns are added to base patterns, not replaced:
1. Start with base `include` and `exclude` patterns
2. On the current platform, add platform-specific patterns to the base
3. Result is the union of base patterns and platform-specific patterns
**Example:**
```json5
{
    patterns: {
        // Base patterns for all platforms
        include: ['**/*.tst.c', '**/*.tst.js', '**/*.tst.ts'],
        exclude: ['**/node_modules/**'],
        // Windows-specific additions
        windows: {
            include: ['**/*.tst.ps1', '**/*.tst.bat'], // Added on Windows only
            exclude: ['**/wsl/**'], // Excluded on Windows only
        },
        // macOS-specific additions
        macosx: {
            include: ['**/*.tst.sh'], // Shell tests on macOS
        },
        // Linux-specific additions
        linux: {
            include: ['**/*.tst.sh'], // Shell tests on Linux
        },
    },
}
```
On Windows, the effective patterns would be:
- Include: `**/*.tst.c`, `**/*.tst.js`, `**/*.tst.ts`, `**/*.tst.ps1`, `**/*.tst.bat`
- Exclude: `**/node_modules/**`, `**/wsl/**`
On macOS/Linux, the effective patterns would be:
- Include: `**/*.tst.c`, `**/*.tst.js`, `**/*.tst.ts`, `**/*.tst.sh`
- Exclude: `**/node_modules/**`
#### Service Settings
- `services.skip` - Script to check if tests should run (exit 0=run, non-zero=skip)
- `services.prep` - Command to run once before all tests begin (waits for completion)
- `services.setup` - Command to start background service during test execution
- `services.cleanup` - Command to run after all tests complete for cleanup
- `services.skipTimeout` - Skip script timeout in milliseconds (default: 30000)
- `services.prepTimeout` - Prep command timeout in milliseconds (default: 30000)
- `services.setupTimeout` - Setup command timeout in milliseconds (default: 30000)
- `services.cleanupTimeout` - Cleanup command timeout in milliseconds (default: 10000)
- `services.delay` - Delay in milliseconds after setup before running tests (default: 0)
#### Environment Variables
- `env` - Object defining environment variables to set during test execution
- Environment variable values support `${...}` expansion using glob patterns
- Paths are resolved relative to the configuration file's directory
- Supports platform-specific overrides via `windows`, `macosx`, and `linux` keys
- Platform-specific variables are merged with base variables (platform values override base)
- Useful for providing dynamic paths to build artifacts, libraries, and test data
**Special Variables Automatically Exported:**
TestMe automatically exports these special environment variables to all tests and service scripts (skip, prep, setup, cleanup):
- `TESTME_PLATFORM` - Combined OS and architecture (e.g., `macosx-arm64`, `linux-x64`, `windows-x64`)
- `TESTME_PROFILE` - Build profile from `--profile` flag, config `profile` setting, `PROFILE` env var, or default `dev`
- `TESTME_OS` - Operating system (`macosx`, `linux`, `windows`)
- `TESTME_ARCH` - Architecture (`x64`, `arm64`, `ia32`)
- `TESTME_CC` - C compiler detected or configured (`gcc`, `clang`, `msvc`)
- `TESTME_TESTDIR` - Relative path from executable directory to test file directory
- `TESTME_CONFIGDIR` - Relative path from executable directory to configuration file directory
- `TESTME_VERBOSE` - Set to `1` when `--verbose` flag is used
- `TESTME_DEPTH` - Current depth value from `--depth` flag
- `TESTME_ITERATIONS` - Iteration count from `--iterations` flag (defaults to `1`)
- **Note**: TestMe does NOT automatically repeat test execution. This variable is provided for tests to implement their own iteration logic internally if needed.
These variables are available in all test and service script environments and can be used in shell scripts (e.g., `$TESTME_PLATFORM`), C code (via `getenv("TESTME_PLATFORM")`), or JavaScript/TypeScript (via `process.env.TESTME_PLATFORM`).
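For instance, a C test might read these variables through `getenv()` or the `testme.h` helpers; a minimal sketch (the platform check is illustrative):

```c
// platform.tst.c - reads the TESTME_* variables exported by the runner
#include "testme.h"
#include <stdlib.h>
#include <string.h>

int main(void) {
    const char *platform = getenv("TESTME_PLATFORM");     // e.g. "macosx-arm64"
    const char *profile  = tget("TESTME_PROFILE", "dev"); // helper with a default
    tnotnull(platform, "TESTME_PLATFORM should be exported");
    tnotnull(profile, "TESTME_PROFILE should be exported or defaulted");

    if (thas("TESTME_VERBOSE")) {
        tinfo("platform=%s profile=%s depth=%d", platform, profile, tdepth());
    }
    if (platform && strncmp(platform, "windows", 7) == 0) {
        tskip("this check is illustrative and skipped on Windows");
    }
    return 0;
}
```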
**Platform Configuration Formats:**
1. **Simple string values** - Apply to all platforms
2. **Platform override sections** - Legacy format with platform-specific env sections
- Supports `windows`, `macosx`, `linux`, and `default` sections
- `default` section provides fallback values for all platforms
- Platform-specific sections override default values
3. **Default/platform pattern** - Per-variable platform overrides with default fallback
**Examples:**
```json5
{
    env: {
        // Simple string values (all platforms)
        TEST_MODE: 'integration',
        BIN: '${../build/*/bin}',
        // Default/platform pattern (per-variable platform-specific values)
        LIB_EXT: {
            default: '.so', // Used if platform not specified
            windows: '.dll',
            macosx: '.dylib',
            linux: '.so',
        },
        // Legacy platform override sections
        // Processed in order: base vars → default → platform-specific
        default: {
            PATH: '${../build/${PLATFORM}-${PROFILE}/bin}:${PATH}',
        },
        windows: {
            PATH: '${../build/${PLATFORM}-${PROFILE}/bin};%PATH%',
        },
        macosx: {
            DYLD_LIBRARY_PATH: '${../build/${PLATFORM}-${PROFILE}/bin}:$DYLD_LIBRARY_PATH',
        },
        linux: {
            LD_LIBRARY_PATH: '${../build/${PLATFORM}-${PROFILE}/bin}:$LD_LIBRARY_PATH',
        },
    },
}
```
**Priority Order (later overrides earlier):**
1. Base environment variables (simple string values and default/platform pattern)
2. `env.default` section (legacy format)
3. Platform-specific section: `env.windows`, `env.macosx`, or `env.linux`
**Accessing Environment Variables:**
- C tests: `getenv("BIN")` or use `tget("BIN", default)` from testme.h
- Shell tests: `$BIN` or `${BIN}`
- JavaScript/TypeScript: `process.env.BIN` or use `tget("BIN", default)` from testme.js
## Common Use Cases
### Testing a Multi-Platform C Project
```json5
{
    compiler: {
        c: {
            gcc: {
                flags: [
                    '-I${../include}',
                    '-L${../build/${PLATFORM}-${PROFILE}/lib}',
                    '-Wl,-rpath,@executable_path/${CONFIGDIR}/../build/${PLATFORM}-${PROFILE}/lib',
                ],
                libraries: ['mylib', 'm', 'pthread'],
            },
            msvc: {
                flags: ['/I${../include}', '/LIBPATH:${../build/${PLATFORM}-${PROFILE}/lib}'],
                libraries: ['mylib'],
            },
        },
    },
    env: {
        MY_LIB_PATH: '${../build/${PLATFORM}-${PROFILE}/lib}',
    },
}
```
### Running Tests with Docker Services
```json5
{
    services: {
        prep: 'docker-compose build',
        setup: 'docker-compose up -d',
        cleanup: 'docker-compose down',
        delay: 5000, // Wait 5 seconds for services to start
    },
    env: {
        DATABASE_URL: 'postgresql://localhost:5432/testdb',
        REDIS_URL: 'redis://localhost:6379',
    },
}
```
### Conditional Test Execution
```json5
{
    services: {
        skip: './check-requirements.sh', // Exit 0 to run, non-zero to skip
    },
}
```
check-requirements.sh:
```bash
#!/bin/bash
# Skip tests if required tools are missing
if ! command -v docker &> /dev/null; then
    echo "Docker not found - skipping integration tests"
    exit 1
fi
exit 0
```
### Organizing Tests by Depth
Root testme.json5 (quick unit tests):
```json5
{
    depth: 0,
    execution: {timeout: 5000},
}
```
integration/testme.json5 (slow integration tests):
```json5
{
    depth: 1,
    execution: {timeout: 60000},
    services: {
        setup: './start-services.sh',
        cleanup: './stop-services.sh',
    },
}
```
Run with: `tm --depth 1` to include integration tests, or just `tm` for unit tests only.
## Artifact Management
TestMe automatically creates `.testme` directories alongside test files for C compilation artifacts:
```
project/tests/
├── math.tst.c
└── .testme/
    ├── math            # Compiled binary
    └── compile.log     # Compilation output
```
Use `tm --clean` to remove all artifact directories, or `tm --keep` to preserve them after tests run.
## Debugging Tests
TestMe includes integrated debugging support for all test languages. Use the `--debug` flag to launch tests in debug mode.
### C Test Debugging
```bash
# Debug a specific C test
tm --debug math.tst.c
# Configured debuggers (auto-selected by platform):
# - macOS: xcode (default), lldb, gdb, vscode
# - Linux: gdb (default), lldb, vscode
# - Windows: vs (default), vscode
```
**Configuration:**
```json5
{
    debug: {
        c: 'xcode', // Use specific debugger
        // Or use platform map:
        c: {
            macosx: 'xcode',
            linux: 'gdb',
            windows: 'vs',
        },
    },
}
```
### JavaScript/TypeScript Debugging
```bash
# Debug JavaScript test
tm --debug api.tst.js
# Debug TypeScript test
tm --debug types.tst.ts
```
**VSCode Workflow (default):**
Prerequisites:
- Install the [Bun VSCode extension](https://marketplace.visualstudio.com/items?itemName=oven.bun-vscode)
Steps:
1. Run `tm --debug test.tst.js` (or `.tst.ts`)
2. TestMe automatically creates `.vscode/launch.json` in the test directory
3. VSCode opens with the workspace
4. Open your test file and set breakpoints
5. Press F5 or Run > Start Debugging
6. Select "Debug Bun Test" configuration
**Configuration:**
```json5
{
    debug: {
        js: 'vscode', // JavaScript debugger (default)
        // js: 'cursor', // Or use Cursor editor
        ts: 'vscode', // TypeScript debugger (default)
        // ts: 'cursor', // Or use Cursor editor
    },
}
```
**Supported Debuggers:**
- `vscode` - Visual Studio Code (default)
- `cursor` - Cursor editor (VSCode fork)
- Custom path to debugger executable
### Python Test Debugging
```bash
# Debug with pdb (default)
tm --debug test.tst.py
# Configure to use VSCode
# Edit testme.json5: debug: { py: 'vscode' }
```
**pdb commands:**
- `h` - help
- `b <line>` - set breakpoint
- `c` - continue
- `n` - next line
- `s` - step into
- `p <var>` - print variable
- `q` - quit
**Configuration:**
```json5
{
    debug: {
        py: 'pdb', // Use Python debugger (default)
        // py: 'vscode', // Or use VSCode
    },
}
```
### Go Test Debugging
```bash
# Debug with delve (default)
tm --debug calc.tst.go
# Configure to use VSCode
# Edit testme.json5: debug: { go: 'vscode' }
```
**delve commands:**
- `help` - show help
- `break <location>` - set breakpoint
- `continue` - continue execution
- `next` - step over
- `step` - step into
- `print <var>` - print variable
- `exit` - exit debugger
**Configuration:**
```json5
{
    debug: {
        go: 'delve', // Use Delve debugger (default)
        // go: 'vscode', // Or use VSCode
    },
}
```
### Custom Debugger
You can specify a custom debugger executable path:
```json5
{
    debug: {
        c: '/usr/local/bin/my-gdb',
        py: '/opt/debugger/pdb-enhanced',
    },
}
```
## Output Formats
### Simple Format (Default)
```
Test runner starting in: /path/to/project
Running tests...
✓ PASS string.tst.ts (11ms)
✓ PASS array.tst.js (10ms)
✓ PASS hello.tst.sh (4ms)
✓ PASS math.tst.c (204ms)
============================================================
TEST SUMMARY
============================================================
✓ Passed: 4
✗ Failed: 0
! Errors: 0
- Skipped: 0
Total: 4
Duration: 229ms
Result: PASSED
```
### Detailed Format
Shows full output from each test including compilation details for C tests.
### JSON Format
Machine-readable output for integration with other tools:
```json
{
    "summary": {
        "total": 4,
        "passed": 4,
        "failed": 0,
        "errors": 0,
        "skipped": 0,
        "totalDuration": 229
    },
    "tests": [
        {
            "file": "/path/to/test.tst.js",
            "type": "javascript",
            "status": "passed",
            "duration": 10,
            "exitCode": 0
        }
    ]
}
```
## Development
### Building
```bash
bun run build
```
### Running Tests
```bash
bun test
```
### Development Mode
```bash
bun --hot src/index.ts
```
### Code Style
- 4-space indentation
- TypeScript with strict mode
- ESLint and Prettier configured
- Comprehensive JSDoc comments
## Tips and Best Practices
### Writing Effective Tests
1. **Use descriptive names**: `user_authentication.tst.ts` not `test1.tst.ts`
2. **Keep tests focused**: One concept per test file
3. **Exit with proper codes**: 0 for success, non-zero for failure
4. **Document test purpose**: Add comments explaining what's being validated
### Performance Optimization
- Use parallel execution for independent tests (default behavior)
- Adjust worker count based on system resources: `tm -w 8`
- Clean artifacts regularly: `tm --clean`
### Troubleshooting
| Problem                   | Solution                                                                                                                                                     |
| ------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| **Tests not discovered**  | Check file extensions (`.tst.sh`, `.tst.c`, `.tst.js`, `.tst.ts`)<br>Use `--list` to see what's found<br>Check `enable: false` in config<br>Verify depth requirements |
| **C compilation failing** | Check GCC/Clang is in PATH<br>Review `.testme/compile.log`<br>Use `-s` to see compile commands                                                                 |
| **Permission errors**     | Make shell scripts executable: `chmod +x *.tst.sh`<br>Check directory permissions                                                                              |
| **Tests skipped**         | Check skip script exit code<br>Verify `--depth` is sufficient<br>Run with `-v` for details                                                                     |
### Windows-Specific Issues
| Problem                         | Solution                                                                                                             |
| ------------------------------- | ---------------------------------------------------------------------------------------------------------------------- |
| **`tm` not recognized**         | Add `%USERPROFILE%\.local\bin` to PATH<br>Or run from installation directory                                            |
| **PowerShell execution policy** | TestMe uses `-ExecutionPolicy Bypass` automatically<br>Or set: `Set-ExecutionPolicy RemoteSigned -Scope CurrentUser`     |
| **Compiler not found**          | Run from "Developer Command Prompt for VS 2022"<br>Or add compiler to PATH manually                                     |
| **MSVC vs GCC flags**           | Check `testme.json5` compiler config<br>MSVC: `/W4 /std:c11`<br>GCC/MinGW: `-Wall -Wextra` (no `-std` by default)        |
| **Shell tests fail on Windows** | Install Git for Windows (includes bash)<br>Or convert to `.tst.ps1` or `.tst.bat`                                       |
---
## Publishing
If you're a package maintainer or want to contribute TestMe to additional repositories:
- **[Publishing Guide](doc/PUBLISHING.md)** - Complete step-by-step instructions for publishing to all package repositories
- **[Quick Reference](doc/PUBLISHING-QUICKREF.md)** - One-page reference for package publishing
- **[Installation Packages](installs/README.md)** - Package configurations and build instructions
---
For detailed documentation, see `man tm` or the [Design Documentation](doc/DESIGN.md).