EU-Utility Test Suite - Summary
================================
This test suite covers EU-Utility v2.0 across unit, integration, UI automation, and performance tests.
## Test Structure
```
tests/
├── __init__.py                  # Test package initialization
├── conftest.py                  # Shared fixtures and configuration
├── run_tests.py                 # Test runner script
├── unit/                        # Unit tests
│   ├── test_plugin_manager.py
│   ├── test_window_manager.py
│   ├── test_api_integration.py
│   └── test_core_services.py
├── integration/                 # Integration tests
│   └── test_plugin_workflows.py
├── ui/                          # UI automation tests
│   └── test_ui_automation.py
└── performance/                 # Performance benchmarks
    └── test_benchmarks.py
```
## Test Coverage
### Unit Tests (30+ tests)
1. **Plugin Manager Tests**
   - Initialization and configuration
   - Plugin discovery and loading
   - Enable/disable functionality
   - Settings persistence
   - Dependency management
2. **Window Manager Tests**
   - Singleton pattern
   - Window detection
   - Focus tracking
   - Multi-monitor support
   - Activity bar functionality
3. **API Integration Tests**
   - Plugin API singleton
   - Service registration
   - Log reading
   - Window operations
   - OCR functionality
   - Screenshot capture
   - Nexus API
   - HTTP client
   - Audio/Notifications
   - Clipboard operations
   - Event bus
   - Data store
   - Background tasks
4. **Core Services Tests**
   - Event bus (subscribe/unsubscribe/publish)
   - Data store (CRUD operations, persistence)
   - Settings (get/set, persistence)
   - Logger
   - Hotkey manager
   - Theme manager
   - Performance optimizations
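The event bus tests above exercise the subscribe/unsubscribe/publish cycle. A minimal sketch of such a test, using a `FakeEventBus` stand-in since the real event bus API is not shown in this README:

```python
# FakeEventBus is a stand-in for the project's event bus; the real class
# and its method names may differ.

class FakeEventBus:
    def __init__(self):
        self._subscribers = {}

    def subscribe(self, topic, handler):
        self._subscribers.setdefault(topic, []).append(handler)

    def unsubscribe(self, topic, handler):
        self._subscribers.get(topic, []).remove(handler)

    def publish(self, topic, payload):
        for handler in self._subscribers.get(topic, []):
            handler(payload)


def test_event_bus_publish_delivers_to_subscribers():
    bus = FakeEventBus()
    received = []
    bus.subscribe("plugin.loaded", received.append)
    bus.publish("plugin.loaded", {"name": "demo"})
    assert received == [{"name": "demo"}]


def test_event_bus_unsubscribe_stops_delivery():
    bus = FakeEventBus()
    received = []
    bus.subscribe("plugin.loaded", received.append)
    bus.unsubscribe("plugin.loaded", received.append)
    bus.publish("plugin.loaded", {"name": "demo"})
    assert received == []
```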
### Integration Tests (20+ tests)
1. **Plugin Lifecycle Tests**
   - Full plugin lifecycle
   - Enable/disable workflow
   - Settings persistence across sessions
2. **API Workflow Tests**
   - Log reading and parsing
   - Window detection and overlay positioning
   - OCR and notification workflow
   - Nexus search and data storage
   - Event subscription and publishing
3. **UI Integration Tests**
   - Overlay show/hide workflow
   - Plugin switching
   - Dashboard widget workflow
4. **Settings Workflow Tests**
   - Save/load workflow
   - Plugin settings isolation
5. **Error Handling Tests**
   - Plugin load error handling
   - API service unavailable handling
   - Graceful degradation
### UI Automation Tests (25+ tests)
1. **Dashboard UI Tests**
   - Dashboard opens correctly
   - Widget interaction
   - Navigation tabs
2. **Overlay Window Tests**
   - Window opens correctly
   - Toggle visibility
   - Plugin navigation
3. **Activity Bar Tests**
   - Opens correctly
   - Search functionality
   - Auto-hide behavior
4. **Settings Dialog Tests**
   - Dialog opens
   - Save functionality
5. **Responsive UI Tests**
   - Window resize handling
   - Minimum size enforcement
   - Sidebar responsiveness
6. **Theme UI Tests**
   - Theme toggle
   - Stylesheet application
7. **Accessibility Tests**
   - Accessibility names
   - Keyboard navigation
8. **Tray Icon Tests**
   - Icon exists
   - Context menu
### Performance Benchmarks (15+ tests)
1. **Plugin Manager Performance**
   - Plugin discovery speed
   - Plugin load speed
2. **API Performance**
   - Log reading
   - Nexus search
   - Data store operations
3. **UI Performance**
   - Overlay creation
   - Dashboard render
   - Plugin switching
4. **Memory Performance**
   - Plugin loading memory
   - Data storage memory
5. **Startup Performance**
   - Application startup
   - Component initialization
6. **Cache Performance**
   - HTTP caching
   - Data store caching
7. **Concurrent Performance**
   - Event publishing
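A performance test in the style of the data-store benchmarks might look like the sketch below. `DictStore` is a stand-in for the real data store, and the 5-second threshold is an arbitrary, deliberately generous example, not the suite's actual limit:

```python
import time

# DictStore stands in for the project's data store; the real store's API
# and benchmark thresholds will differ.

class DictStore:
    def __init__(self):
        self._data = {}

    def set(self, key, value):
        self._data[key] = value

    def get(self, key):
        return self._data.get(key)


def test_data_store_set_get_is_fast():
    store = DictStore()
    start = time.perf_counter()
    for i in range(10_000):
        store.set(f"key-{i}", i)
        assert store.get(f"key-{i}") == i
    elapsed = time.perf_counter() - start
    # Generous threshold so the check stays stable across hardware.
    assert elapsed < 5.0
```

As the Known Limitations section notes, absolute timings vary by hardware, so thresholds like this are best kept loose.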
## Running Tests
### Run All Tests
```bash
python run_tests.py --all
```
### Run Specific Categories
```bash
# Unit tests only
python run_tests.py --unit
# Integration tests
python run_tests.py --integration
# UI tests
python run_tests.py --ui
# Performance benchmarks
python run_tests.py --performance
```
### With Coverage
```bash
python run_tests.py --all --coverage --html
```
### Using pytest directly
```bash
# All tests
python -m pytest tests/ -v
# With coverage
python -m pytest tests/ --cov=core --cov=plugins --cov-report=html
# Specific test file
python -m pytest tests/unit/test_plugin_manager.py -v
# Specific test
python -m pytest tests/unit/test_plugin_manager.py::TestPluginManager::test_plugin_manager_initialization -v
# By marker
python -m pytest tests/ -m "not slow" # Skip slow tests
python -m pytest tests/ -m integration
python -m pytest tests/ -m ui
```
## Test Markers
- `slow`: Tests that take longer to run
- `integration`: Integration tests
- `ui`: UI automation tests
- `windows_only`: Windows-specific tests
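Applied in a test file, these markers might look like the following sketch. The test bodies are illustrative placeholders, and the `skipif` guard on the Windows-only test is one common way to pair a custom marker with an actual skip condition:

```python
import os

import pytest

# Illustrative placeholders showing how the suite's markers might be applied.

@pytest.mark.slow
def test_full_plugin_discovery_scan():
    assert True  # placeholder for a long-running discovery scan


@pytest.mark.integration
def test_plugin_enable_persists_across_restart():
    assert True  # placeholder for an enable/restart round trip


@pytest.mark.windows_only
@pytest.mark.skipif(os.name != "nt", reason="requires the Windows window manager")
def test_window_focus_tracking():
    assert True  # placeholder for a Win32 focus-tracking check
```

Custom markers like these are typically registered under `markers` in `pytest.ini` (or `pyproject.toml`) so pytest does not warn about unknown marks.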
## Fixtures
### Available Fixtures
- `temp_dir`: Temporary directory for test files
- `mock_overlay`: Mock overlay window
- `mock_plugin_manager`: Mock plugin manager
- `mock_qt_app`: Mock Qt application
- `sample_config`: Sample configuration
- `mock_nexus_response`: Sample Nexus API response
- `mock_window_info`: Mock window information
- `mock_ocr_result`: Sample OCR result
- `sample_log_lines`: Sample game log lines
- `event_bus`: Fresh event bus instance
- `data_store`: Temporary data store
- `mock_http_client`: Mock HTTP client
- `test_logger`: Test logger
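A sketch of how fixtures like `temp_dir` and `sample_log_lines` might be defined in `conftest.py` and consumed by a test; the real fixture bodies and the sample log content are not shown in this README, so the details below are assumptions:

```python
import tempfile
from pathlib import Path

import pytest

# Hypothetical conftest.py-style definitions; the real fixtures may differ.

@pytest.fixture
def temp_dir():
    """Yield a scratch directory that is removed after the test."""
    with tempfile.TemporaryDirectory() as d:
        yield Path(d)


SAMPLE_LOG_LINES = [
    "[12:00:01] Player entered zone",  # illustrative content only
    "[12:00:05] Item acquired",
]


@pytest.fixture
def sample_log_lines():
    return list(SAMPLE_LOG_LINES)


# A test receives fixtures simply by naming them as parameters:
def test_log_file_roundtrip(temp_dir, sample_log_lines):
    log_file = temp_dir / "game.log"
    log_file.write_text("\n".join(sample_log_lines))
    assert log_file.read_text().splitlines() == sample_log_lines
```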
## CI/CD Integration
### GitHub Actions Example
```yaml
name: Tests
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ['3.11', '3.12']
    steps:
      - uses: actions/checkout@v3
      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install dependencies
        run: |
          pip install -r requirements.txt
          pip install -r requirements-dev.txt
      - name: Run tests
        run: python run_tests.py --unit --coverage --xml
      - name: Upload coverage
        uses: codecov/codecov-action@v3
```
## Test Maintenance
### Adding New Tests
1. Create test file in appropriate directory
2. Use descriptive test names
3. Add docstrings explaining what is tested
4. Use fixtures from conftest.py
5. Add markers if appropriate
6. Run tests to verify
### Test Naming Convention
- `test_<component>_<scenario>_<expected_result>`
- Example: `test_plugin_manager_enable_plugin_success`
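The convention can be followed as in the sketch below, where `FakePluginManager` is a stand-in for the real `PluginManager`:

```python
# Naming sketch: test_<component>_<scenario>_<expected_result>.
# FakePluginManager stands in for the project's real PluginManager.

class FakePluginManager:
    def __init__(self):
        self._enabled = set()

    def enable(self, name):
        self._enabled.add(name)

    def is_enabled(self, name):
        return name in self._enabled


def test_plugin_manager_enable_plugin_success():
    """Enabling a plugin marks it as enabled."""
    manager = FakePluginManager()
    manager.enable("log_reader")
    assert manager.is_enabled("log_reader")


def test_plugin_manager_unknown_plugin_not_enabled():
    """A plugin that was never enabled reports as disabled."""
    manager = FakePluginManager()
    assert not manager.is_enabled("missing")
```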
### Best Practices
1. **Isolation**: Each test should be independent
2. **Determinism**: Tests should produce the same results every time
3. **Speed**: Keep tests fast (use mocks)
4. **Clarity**: Tests should be easy to understand
5. **Coverage**: Aim for high code coverage
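The isolation and speed practices usually come down to mocking external services. A sketch using `unittest.mock`, where `fetch_mod_name` is a hypothetical helper rather than a real project function:

```python
from unittest.mock import Mock

# The HTTP client is mocked, so no network access happens and the test
# stays fast and deterministic. fetch_mod_name is illustrative only.

def fetch_mod_name(http_client, mod_id):
    response = http_client.get(f"/mods/{mod_id}")
    return response["name"]


def test_fetch_mod_name_uses_client_response():
    client = Mock()
    client.get.return_value = {"name": "ExampleMod"}
    assert fetch_mod_name(client, 42) == "ExampleMod"
    client.get.assert_called_once_with("/mods/42")
```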
## Coverage Goals
| Component | Target Coverage |
|-----------|-----------------|
| Core Services | 90%+ |
| Plugin Manager | 85%+ |
| API Layer | 80%+ |
| UI Components | 70%+ |
| Overall | 80%+ |
## Known Limitations
1. UI tests require display (Xvfb on headless systems)
2. Some tests are Windows-only (window manager)
3. OCR tests require an OCR backend to be installed
4. Performance benchmarks may vary by hardware
## Troubleshooting Tests
### Tests Fail to Import
```bash
# Ensure you're in project root
cd /path/to/EU-Utility
python -m pytest tests/ -v
```
### Qt Display Issues (Linux)
```bash
# Install Xvfb
sudo apt install xvfb
# Run with virtual display
xvfb-run python -m pytest tests/ui/ -v
```
### Permission Errors
```bash
chmod -R u+rw tests/
```
---
For more information, see:
- [User Guide](./docs/USER_GUIDE.md)
- [Troubleshooting Guide](./docs/TROUBLESHOOTING.md)
- [API Documentation](./docs/API_DOCUMENTATION.md)