Add comprehensive API test suite - 6 test plugins for Three-Tier API validation
Test Plugins Created: - api_comprehensive_test: 60+ tests covering all PluginAPI, WidgetAPI, ExternalAPI methods - widget_stress_test: Stress testing widget creation, layouts, and operations - external_integration_test: REST server, webhooks, auth, and IPC testing - event_bus_test: Pub/sub messaging system validation - performance_benchmark: Latency, throughput, and resource usage metrics - error_handling_test: Edge cases, exceptions, and error condition handling Features: - HTML-based visual results displays - Pass/fail tracking with detailed error reporting - Automated test execution on initialization - JSON export for CI/CD integration - Comprehensive documentation in README.md Each plugin follows BasePlugin pattern with: - Proper manifest.json with test metadata - Interactive UI widgets for results - Categorized test coverage - Performance metrics collection Refs: Three-Tier API (PluginAPI, WidgetAPI, ExternalAPI) Version: 1.0.0
This commit is contained in:
parent
b37191e606
commit
40d07f4661
|
|
@ -0,0 +1,231 @@
|
|||
# EU-Utility Test Suite
|
||||
|
||||
Comprehensive test plugins for validating the Three-Tier API (PluginAPI, WidgetAPI, ExternalAPI).
|
||||
|
||||
## Test Plugins Overview
|
||||
|
||||
### 1. API Comprehensive Test (`api_comprehensive_test/`)
|
||||
**Purpose:** Tests every method across all three API tiers
|
||||
|
||||
**Coverage:**
|
||||
- **PluginAPI (12 services, 30+ tests):**
|
||||
- Log Reader: `read_log_lines()`, `read_log_since()`
|
||||
- Window Manager: `get_eu_window()`, `is_eu_focused()`, `is_eu_visible()`, `bring_eu_to_front()`
|
||||
- OCR: `recognize_text()`, `ocr_available()`
|
||||
- Screenshot: `capture_screen()`, `screenshot_available()`
|
||||
- Nexus: `search_items()`, `get_item_details()`
|
||||
- HTTP: `http_get()`, `http_post()`
|
||||
- Audio: `play_sound()`, `beep()`
|
||||
- Notifications: `show_notification()`
|
||||
- Clipboard: `copy_to_clipboard()`, `paste_from_clipboard()`
|
||||
- EventBus: `subscribe()`, `unsubscribe()`, `publish()`
|
||||
- DataStore: `get_data()`, `set_data()`, `delete_data()`
|
||||
- Tasks: `run_task()`, `cancel_task()`
|
||||
|
||||
- **WidgetAPI (25+ tests):**
|
||||
- Creation: `create_widget()`, `create_from_preset()`
|
||||
- Access: `get_widget()`, `get_all_widgets()`, `get_visible_widgets()`, `widget_exists()`
|
||||
- Management: `show_widget()`, `hide_widget()`, `close_widget()`, `show_all_widgets()`, etc.
|
||||
- Instance methods: `show()`, `hide()`, `move()`, `resize()`, `set_opacity()`, etc.
|
||||
- Layout: `arrange_widgets()`, `snap_to_grid()`
|
||||
- Persistence: `save_all_states()`, `load_all_states()`, `save_state()`, `load_state()`
|
||||
|
||||
- **ExternalAPI (15+ tests):**
|
||||
- Server: `start_server()`, `stop_server()`, `get_status()`
|
||||
- Endpoints: `register_endpoint()`, `unregister_endpoint()`, `@endpoint` decorator
|
||||
- Webhooks: `register_webhook()`, `unregister_webhook()`, `post_webhook()`
|
||||
- Auth: `create_api_key()`, `revoke_api_key()`
|
||||
- IPC: `register_ipc_handler()`, `send_ipc()`
|
||||
|
||||
**Features:**
|
||||
- Visual HTML-based results display
|
||||
- Pass/fail tracking for each test
|
||||
- Error reporting with details
|
||||
- Results export to JSON
|
||||
|
||||
---
|
||||
|
||||
### 2. Widget Stress Test (`widget_stress_test/`)
|
||||
**Purpose:** Tests widget system under load
|
||||
|
||||
**Test Scenarios:**
|
||||
- **Bulk Creation:** Creates 10-50 widgets rapidly
|
||||
- **Layout Stress:** Tests grid, horizontal, vertical, cascade arrangements
|
||||
- **Visibility Cycles:** Rapid show/hide operations
|
||||
- **Property Modifications:** Rapid opacity, position, size changes
|
||||
- **Concurrent Operations:** Create while modifying
|
||||
|
||||
**Metrics:**
|
||||
- Operation count
|
||||
- Duration (ms)
|
||||
- Success rate
|
||||
- Error tracking
|
||||
|
||||
---
|
||||
|
||||
### 3. External Integration Test (`external_integration_test/`)
|
||||
**Purpose:** Tests third-party integration features
|
||||
|
||||
**Test Categories:**
|
||||
- **REST Server:** Start/stop lifecycle, endpoint registration, CORS
|
||||
- **Webhooks:** Incoming/outgoing, HMAC signature verification
|
||||
- **Authentication:** API key creation/revocation
|
||||
- **IPC:** Handler registration, message sending
|
||||
- **Utilities:** Status endpoint, URL generation, webhook history
|
||||
|
||||
---
|
||||
|
||||
### 4. Event Bus Test (`event_bus_test/`)
|
||||
**Purpose:** Tests pub/sub messaging system
|
||||
|
||||
**Test Coverage:**
|
||||
- Basic subscribe/publish
|
||||
- Unsubscribe functionality
|
||||
- Multiple subscribers to single event
|
||||
- Various data types (string, int, dict, list, nested)
|
||||
- Wildcard/pattern subscriptions
|
||||
- High-volume publishing (100 events)
|
||||
- Rapid subscribe/unsubscribe cycles
|
||||
- Empty/null event data
|
||||
- Large payloads
|
||||
- Special characters in event types
|
||||
|
||||
**Visualization:**
|
||||
- Real-time event log
|
||||
- Delivery statistics
|
||||
- Performance metrics
|
||||
|
||||
---
|
||||
|
||||
### 5. Performance Benchmark (`performance_benchmark/`)
|
||||
**Purpose:** Measures API performance metrics
|
||||
|
||||
**Benchmarks:**
|
||||
- **DataStore:** Read/write operations (1000+ iterations)
|
||||
- **EventBus:** Publish/subscribe throughput
|
||||
- **HTTP:** Network request latency
|
||||
- **WidgetAPI:** Creation and operation speed
|
||||
- **ExternalAPI:** Key and endpoint registration
|
||||
|
||||
**Metrics:**
|
||||
- Average latency (ms)
|
||||
- Min/max latency
|
||||
- Throughput (ops/sec)
|
||||
- Total operations
|
||||
|
||||
**Output:**
|
||||
- Interactive results table
|
||||
- Performance grades (Good/Slow)
|
||||
- Export to JSON
|
||||
|
||||
---
|
||||
|
||||
### 6. Error Handling Test (`error_handling_test/`)
|
||||
**Purpose:** Tests error conditions and exception handling
|
||||
|
||||
**Error Types Tested:**
|
||||
- Invalid input
|
||||
- Service unavailable
|
||||
- Resource not found
|
||||
- Type errors
|
||||
- Timeout
|
||||
- Boundary conditions
|
||||
|
||||
**Test Counts:**
|
||||
- PluginAPI: 15+ error scenarios
|
||||
- WidgetAPI: 10+ error scenarios
|
||||
- ExternalAPI: 10+ error scenarios
|
||||
|
||||
**Verification:**
|
||||
- Graceful error handling (no crashes)
|
||||
- Correct exception types
|
||||
- Meaningful error messages
|
||||
|
||||
---
|
||||
|
||||
## Running the Tests
|
||||
|
||||
### Automatic Execution
|
||||
All test plugins run automatically on initialization and display results in their widget.
|
||||
|
||||
### Manual Execution
|
||||
Each plugin provides control buttons to:
|
||||
- Run all tests
|
||||
- Run specific test categories
|
||||
- Clear results
|
||||
- Export data
|
||||
|
||||
### Expected Results
|
||||
- **Comprehensive Test:** 60+ tests
|
||||
- **Stress Test:** 6+ stress scenarios
|
||||
- **Integration Test:** 12+ integration tests
|
||||
- **Event Bus Test:** 12+ messaging tests
|
||||
- **Performance Benchmark:** 8+ benchmarks
|
||||
- **Error Handling Test:** 35+ error scenarios
|
||||
|
||||
---
|
||||
|
||||
## Test Metadata
|
||||
|
||||
Each plugin includes `manifest.json` with:
|
||||
```json
|
||||
{
|
||||
"test_metadata": {
|
||||
"test_type": "comprehensive|stress|integration|messaging|performance|error_handling",
|
||||
"apis_tested": ["PluginAPI", "WidgetAPI", "ExternalAPI"],
|
||||
"automated": true
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## File Structure
|
||||
```
|
||||
plugins/test_suite/
|
||||
├── api_comprehensive_test/
|
||||
│ ├── manifest.json
|
||||
│ └── plugin.py
|
||||
├── widget_stress_test/
|
||||
│ ├── manifest.json
|
||||
│ └── plugin.py
|
||||
├── external_integration_test/
|
||||
│ ├── manifest.json
|
||||
│ └── plugin.py
|
||||
├── event_bus_test/
|
||||
│ ├── manifest.json
|
||||
│ └── plugin.py
|
||||
├── performance_benchmark/
|
||||
│ ├── manifest.json
|
||||
│ └── plugin.py
|
||||
├── error_handling_test/
|
||||
│ ├── manifest.json
|
||||
│ └── plugin.py
|
||||
└── README.md (this file)
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Continuous Integration
|
||||
|
||||
These plugins can be used in CI pipelines:
|
||||
1. Load plugin
|
||||
2. Wait for initialization
|
||||
3. Parse results from exported JSON
|
||||
4. Fail build on critical test failures
|
||||
|
||||
---
|
||||
|
||||
## Maintenance
|
||||
|
||||
When adding new API features:
|
||||
1. Add corresponding tests to `api_comprehensive_test`
|
||||
2. Add performance benchmark if applicable
|
||||
3. Add error handling tests for edge cases
|
||||
4. Update this README
|
||||
|
||||
---
|
||||
|
||||
**Version:** 1.0.0
|
||||
**Last Updated:** 2026-02-15
|
||||
**Compatible with:** EU-Utility API v2.2.0+
|
||||
|
|
@ -0,0 +1,18 @@
|
|||
{
|
||||
"id": "error_handling_test",
|
||||
"name": "Error Handling Test",
|
||||
"version": "1.0.0",
|
||||
"description": "Tests error conditions, exception handling, and edge cases across all APIs",
|
||||
"author": "Test Suite",
|
||||
"entry_point": "plugin.py",
|
||||
"category": "test",
|
||||
"tags": ["test", "error", "exception", "edge-cases", "validation"],
|
||||
"min_api_version": "2.2.0",
|
||||
"permissions": ["widgets", "data", "events", "http", "external"],
|
||||
"test_metadata": {
|
||||
"test_type": "error_handling",
|
||||
"apis_tested": ["PluginAPI", "WidgetAPI", "ExternalAPI"],
|
||||
"error_types": ["invalid_input", "service_unavailable", "resource_not_found", "type_error", "timeout"],
|
||||
"automated": true
|
||||
}
|
||||
}
|
||||
|
|
@ -0,0 +1,631 @@
|
|||
"""
|
||||
Error Handling Test Plugin
|
||||
|
||||
Tests error conditions and exception handling:
|
||||
- Invalid input validation
|
||||
- Service unavailable scenarios
|
||||
- Resource not found errors
|
||||
- Type errors and malformed data
|
||||
- Timeout handling
|
||||
- Edge cases and boundary conditions
|
||||
|
||||
Verifies APIs handle errors gracefully without crashing.
|
||||
"""
|
||||
|
||||
import time
|
||||
import sys
|
||||
from datetime import datetime
|
||||
from typing import Dict, List, Any, Tuple
|
||||
from dataclasses import dataclass
|
||||
from enum import Enum
|
||||
|
||||
from core.base_plugin import BasePlugin
|
||||
from core.api.plugin_api import get_api, PluginAPIError, ServiceNotAvailableError
|
||||
from core.api.widget_api import get_widget_api, WidgetType
|
||||
from core.api.external_api import get_external_api, ExternalAPIError
|
||||
|
||||
|
||||
class ErrorType(Enum):
|
||||
"""Types of errors tested."""
|
||||
INVALID_INPUT = "invalid_input"
|
||||
SERVICE_UNAVAILABLE = "service_unavailable"
|
||||
RESOURCE_NOT_FOUND = "resource_not_found"
|
||||
TYPE_ERROR = "type_error"
|
||||
TIMEOUT = "timeout"
|
||||
BOUNDARY = "boundary"
|
||||
UNEXPECTED = "unexpected"
|
||||
|
||||
|
||||
@dataclass
|
||||
class ErrorTestResult:
|
||||
"""Result of an error handling test."""
|
||||
api: str
|
||||
test_name: str
|
||||
error_type: ErrorType
|
||||
handled_gracefully: bool
|
||||
correct_exception: bool
|
||||
error_message: str = ""
|
||||
details: Dict = None
|
||||
|
||||
|
||||
class ErrorHandlingTestPlugin(BasePlugin):
|
||||
"""
|
||||
Error handling test suite for EU-Utility APIs.
|
||||
|
||||
Tests how APIs respond to invalid inputs, missing resources,
|
||||
and exceptional conditions.
|
||||
"""
|
||||
|
||||
def __init__(self):
|
||||
super().__init__()
|
||||
self.api = None
|
||||
self.widget_api = None
|
||||
self.external_api = None
|
||||
self.results: List[ErrorTestResult] = []
|
||||
self.widget = None
|
||||
|
||||
def initialize(self):
|
||||
"""Initialize and run error handling tests."""
|
||||
self.api = get_api()
|
||||
self.widget_api = get_widget_api()
|
||||
self.external_api = get_external_api()
|
||||
|
||||
self._create_results_widget()
|
||||
self._run_all_tests()
|
||||
|
||||
def _create_results_widget(self):
|
||||
"""Create widget for results display."""
|
||||
self.widget = self.widget_api.create_widget(
|
||||
name="error_handling_test",
|
||||
title="🛡️ Error Handling Test",
|
||||
size=(900, 650),
|
||||
position=(250, 150),
|
||||
widget_type=WidgetType.CUSTOM
|
||||
)
|
||||
self._update_widget_display()
|
||||
self.widget.show()
|
||||
|
||||
def _update_widget_display(self):
|
||||
"""Update widget content."""
|
||||
try:
|
||||
from PyQt6.QtWidgets import (
|
||||
QWidget, QVBoxLayout, QHBoxLayout, QLabel,
|
||||
QPushButton, QTableWidget, QTableWidgetItem,
|
||||
QHeaderView, QGroupBox, QTextBrowser
|
||||
)
|
||||
from PyQt6.QtCore import Qt
|
||||
from PyQt6.QtGui import QColor
|
||||
|
||||
container = QWidget()
|
||||
main_layout = QVBoxLayout(container)
|
||||
|
||||
# Header
|
||||
header = QLabel("🛡️ Error Handling Test Suite")
|
||||
header.setStyleSheet("font-size: 22px; font-weight: bold; color: #ff8c42;")
|
||||
main_layout.addWidget(header)
|
||||
|
||||
# Summary stats
|
||||
if self.results:
|
||||
summary_layout = QHBoxLayout()
|
||||
|
||||
total = len(self.results)
|
||||
graceful = sum(1 for r in self.results if r.handled_gracefully)
|
||||
correct_exc = sum(1 for r in self.results if r.correct_exception)
|
||||
|
||||
stats = [
|
||||
("Tests", str(total)),
|
||||
("Graceful", f"{graceful}/{total}"),
|
||||
("Correct Exception", f"{correct_exc}/{total}")
|
||||
]
|
||||
|
||||
for title, value in stats:
|
||||
group = QGroupBox(title)
|
||||
group_layout = QVBoxLayout(group)
|
||||
lbl = QLabel(value)
|
||||
lbl.setStyleSheet("font-size: 18px; font-weight: bold;")
|
||||
lbl.setAlignment(Qt.AlignmentFlag.AlignCenter)
|
||||
|
||||
if "Graceful" in title:
|
||||
lbl.setStyleSheet(f"font-size: 18px; font-weight: bold; color: {'#4ecca3' if graceful == total else '#ffd93d'};")
|
||||
elif "Correct" in title:
|
||||
lbl.setStyleSheet(f"font-size: 18px; font-weight: bold; color: {'#4ecca3' if correct_exc == total else '#ffd93d'};")
|
||||
|
||||
group_layout.addWidget(lbl)
|
||||
summary_layout.addWidget(group)
|
||||
|
||||
main_layout.addLayout(summary_layout)
|
||||
|
||||
# Results table
|
||||
self.results_table = QTableWidget()
|
||||
self.results_table.setColumnCount(6)
|
||||
self.results_table.setHorizontalHeaderLabels([
|
||||
"API", "Test", "Error Type", "Handled", "Correct Exc", "Error Message"
|
||||
])
|
||||
self.results_table.horizontalHeader().setSectionResizeMode(QHeaderView.ResizeMode.Stretch)
|
||||
self._populate_results_table()
|
||||
main_layout.addWidget(self.results_table)
|
||||
|
||||
# Controls
|
||||
btn_layout = QHBoxLayout()
|
||||
|
||||
btn_run = QPushButton("▶ Run All Error Tests")
|
||||
btn_run.clicked.connect(self._run_all_tests)
|
||||
btn_layout.addWidget(btn_run)
|
||||
|
||||
btn_summary = QPushButton("📋 View Summary Report")
|
||||
btn_summary.clicked.connect(self._show_summary_report)
|
||||
btn_layout.addWidget(btn_summary)
|
||||
|
||||
main_layout.addLayout(btn_layout)
|
||||
|
||||
self.widget.set_content(container)
|
||||
|
||||
except ImportError as e:
|
||||
print(f"Widget error: {e}")
|
||||
|
||||
def _populate_results_table(self):
|
||||
"""Populate results table."""
|
||||
if not hasattr(self, 'results_table'):
|
||||
return
|
||||
|
||||
self.results_table.setRowCount(len(self.results))
|
||||
|
||||
for i, r in enumerate(self.results):
|
||||
self.results_table.setItem(i, 0, QTableWidgetItem(r.api))
|
||||
self.results_table.setItem(i, 1, QTableWidgetItem(r.test_name))
|
||||
self.results_table.setItem(i, 2, QTableWidgetItem(r.error_type.value))
|
||||
|
||||
handled_item = QTableWidgetItem("✅" if r.handled_gracefully else "❌")
|
||||
handled_item.setForeground(QColor("#4ecca3" if r.handled_gracefully else "#ff6b6b"))
|
||||
self.results_table.setItem(i, 3, handled_item)
|
||||
|
||||
correct_item = QTableWidgetItem("✅" if r.correct_exception else "⚠️")
|
||||
self.results_table.setItem(i, 4, correct_item)
|
||||
|
||||
msg = r.error_message[:50] + "..." if len(r.error_message) > 50 else r.error_message
|
||||
self.results_table.setItem(i, 5, QTableWidgetItem(msg))
|
||||
|
||||
def _run_test(self, api: str, test_name: str, error_type: ErrorType,
|
||||
test_func) -> ErrorTestResult:
|
||||
"""Run a single error handling test."""
|
||||
error_occurred = False
|
||||
handled_gracefully = False
|
||||
correct_exception = False
|
||||
error_message = ""
|
||||
details = {}
|
||||
|
||||
try:
|
||||
test_func()
|
||||
# If no error occurred, check if we expected one
|
||||
error_message = "No error occurred (may be expected)"
|
||||
handled_gracefully = True
|
||||
|
||||
except ServiceNotAvailableError as e:
|
||||
error_occurred = True
|
||||
handled_gracefully = True
|
||||
correct_exception = error_type == ErrorType.SERVICE_UNAVAILABLE
|
||||
error_message = str(e)
|
||||
|
||||
except PluginAPIError as e:
|
||||
error_occurred = True
|
||||
handled_gracefully = True
|
||||
correct_exception = True
|
||||
error_message = str(e)
|
||||
|
||||
except ExternalAPIError as e:
|
||||
error_occurred = True
|
||||
handled_gracefully = True
|
||||
correct_exception = True
|
||||
error_message = str(e)
|
||||
|
||||
except ValueError as e:
|
||||
error_occurred = True
|
||||
handled_gracefully = True
|
||||
correct_exception = error_type in [ErrorType.INVALID_INPUT, ErrorType.BOUNDARY]
|
||||
error_message = str(e)
|
||||
|
||||
except TypeError as e:
|
||||
error_occurred = True
|
||||
handled_gracefully = True
|
||||
correct_exception = error_type == ErrorType.TYPE_ERROR
|
||||
error_message = str(e)
|
||||
|
||||
except TimeoutError as e:
|
||||
error_occurred = True
|
||||
handled_gracefully = True
|
||||
correct_exception = error_type == ErrorType.TIMEOUT
|
||||
error_message = str(e)
|
||||
|
||||
except KeyError as e:
|
||||
error_occurred = True
|
||||
handled_gracefully = True
|
||||
correct_exception = error_type == ErrorType.RESOURCE_NOT_FOUND
|
||||
error_message = f"KeyError: {e}"
|
||||
|
||||
except Exception as e:
|
||||
error_occurred = True
|
||||
handled_gracefully = False # Unexpected exception type
|
||||
correct_exception = False
|
||||
error_message = f"{type(e).__name__}: {str(e)[:100]}"
|
||||
details["exception_type"] = type(e).__name__
|
||||
|
||||
result = ErrorTestResult(
|
||||
api=api,
|
||||
test_name=test_name,
|
||||
error_type=error_type,
|
||||
handled_gracefully=handled_gracefully,
|
||||
correct_exception=correct_exception,
|
||||
error_message=error_message,
|
||||
details=details
|
||||
)
|
||||
|
||||
self.results.append(result)
|
||||
return result
|
||||
|
||||
def _run_all_tests(self):
|
||||
"""Execute all error handling tests."""
|
||||
self.results.clear()
|
||||
|
||||
# PluginAPI Error Tests
|
||||
self._test_pluginapi_errors()
|
||||
|
||||
# WidgetAPI Error Tests
|
||||
self._test_widgetapi_errors()
|
||||
|
||||
# ExternalAPI Error Tests
|
||||
self._test_externalapi_errors()
|
||||
|
||||
self._update_widget_display()
|
||||
|
||||
def _test_pluginapi_errors(self):
|
||||
"""Test PluginAPI error handling."""
|
||||
# Test invalid log line count
|
||||
self._run_test(
|
||||
"PluginAPI",
|
||||
"Invalid log line count (negative)",
|
||||
ErrorType.INVALID_INPUT,
|
||||
lambda: self.api.read_log_lines(-1)
|
||||
)
|
||||
|
||||
# Test invalid log line count (too large)
|
||||
self._run_test(
|
||||
"PluginAPI",
|
||||
"Invalid log line count (excessive)",
|
||||
ErrorType.BOUNDARY,
|
||||
lambda: self.api.read_log_lines(10000000)
|
||||
)
|
||||
|
||||
# Test OCR with invalid region
|
||||
self._run_test(
|
||||
"PluginAPI",
|
||||
"OCR invalid region",
|
||||
ErrorType.INVALID_INPUT,
|
||||
lambda: self.api.recognize_text((-1, -1, -1, -1))
|
||||
)
|
||||
|
||||
# Test capture with invalid region
|
||||
self._run_test(
|
||||
"PluginAPI",
|
||||
"Screenshot invalid region",
|
||||
ErrorType.INVALID_INPUT,
|
||||
lambda: self.api.capture_screen((-100, -100, 0, 0))
|
||||
)
|
||||
|
||||
# Test HTTP with invalid URL
|
||||
self._run_test(
|
||||
"PluginAPI",
|
||||
"HTTP invalid URL",
|
||||
ErrorType.INVALID_INPUT,
|
||||
lambda: self.api.http_get("not_a_valid_url")
|
||||
)
|
||||
|
||||
# Test HTTP with malformed URL
|
||||
self._run_test(
|
||||
"PluginAPI",
|
||||
"HTTP malformed URL",
|
||||
ErrorType.INVALID_INPUT,
|
||||
lambda: self.api.http_get("")
|
||||
)
|
||||
|
||||
# Test play_sound with invalid path
|
||||
self._run_test(
|
||||
"PluginAPI",
|
||||
"Play sound invalid path",
|
||||
ErrorType.RESOURCE_NOT_FOUND,
|
||||
lambda: self.api.play_sound("/nonexistent/path/to/sound.wav")
|
||||
)
|
||||
|
||||
# Test notification with empty title
|
||||
self._run_test(
|
||||
"PluginAPI",
|
||||
"Notification empty title",
|
||||
ErrorType.BOUNDARY,
|
||||
lambda: self.api.show_notification("", "message")
|
||||
)
|
||||
|
||||
# Test set_data with non-serializable object
|
||||
self._run_test(
|
||||
"PluginAPI",
|
||||
"Set data non-serializable",
|
||||
ErrorType.TYPE_ERROR,
|
||||
lambda: self.api.set_data("test_key", lambda x: x)
|
||||
)
|
||||
|
||||
# Test subscribe with non-callable
|
||||
self._run_test(
|
||||
"PluginAPI",
|
||||
"Subscribe non-callable",
|
||||
ErrorType.TYPE_ERROR,
|
||||
lambda: self.api.subscribe("test", "not_a_function")
|
||||
)
|
||||
|
||||
# Test unsubscribe with invalid ID
|
||||
self._run_test(
|
||||
"PluginAPI",
|
||||
"Unsubscribe invalid ID",
|
||||
ErrorType.RESOURCE_NOT_FOUND,
|
||||
lambda: self.api.unsubscribe("invalid_subscription_id_12345")
|
||||
)
|
||||
|
||||
# Test cancel_task with invalid ID
|
||||
self._run_test(
|
||||
"PluginAPI",
|
||||
"Cancel task invalid ID",
|
||||
ErrorType.RESOURCE_NOT_FOUND,
|
||||
lambda: self.api.cancel_task("invalid_task_id_12345")
|
||||
)
|
||||
|
||||
# Test get_data with None key
|
||||
self._run_test(
|
||||
"PluginAPI",
|
||||
"Get data None key",
|
||||
ErrorType.INVALID_INPUT,
|
||||
lambda: self.api.get_data(None)
|
||||
)
|
||||
|
||||
# Test volume out of range
|
||||
self._run_test(
|
||||
"PluginAPI",
|
||||
"Play sound volume out of range",
|
||||
ErrorType.BOUNDARY,
|
||||
lambda: self.api.play_sound("test.wav", volume=5.0)
|
||||
)
|
||||
|
||||
def _test_widgetapi_errors(self):
|
||||
"""Test WidgetAPI error handling."""
|
||||
# Test duplicate widget name
|
||||
def create_duplicate():
|
||||
w1 = self.widget_api.create_widget(name="duplicate_test", title="Test")
|
||||
w2 = self.widget_api.create_widget(name="duplicate_test", title="Test 2")
|
||||
|
||||
self._run_test(
|
||||
"WidgetAPI",
|
||||
"Duplicate widget name",
|
||||
ErrorType.INVALID_INPUT,
|
||||
create_duplicate
|
||||
)
|
||||
|
||||
# Cleanup if created
|
||||
try:
|
||||
self.widget_api.close_widget("duplicate_test")
|
||||
except:
|
||||
pass
|
||||
|
||||
# Test get non-existent widget
|
||||
self._run_test(
|
||||
"WidgetAPI",
|
||||
"Get non-existent widget",
|
||||
ErrorType.RESOURCE_NOT_FOUND,
|
||||
lambda: self.widget_api.get_widget("definitely_does_not_exist_12345")
|
||||
)
|
||||
|
||||
# Test operations on non-existent widget
|
||||
self._run_test(
|
||||
"WidgetAPI",
|
||||
"Show non-existent widget",
|
||||
ErrorType.RESOURCE_NOT_FOUND,
|
||||
lambda: self.widget_api.show_widget("definitely_does_not_exist_12345")
|
||||
)
|
||||
|
||||
# Test invalid widget size
|
||||
self._run_test(
|
||||
"WidgetAPI",
|
||||
"Create widget with negative size",
|
||||
ErrorType.INVALID_INPUT,
|
||||
lambda: self.widget_api.create_widget(name="bad_size", title="Bad", size=(-100, -100))
|
||||
)
|
||||
|
||||
# Test invalid opacity
|
||||
def test_bad_opacity():
|
||||
w = self.widget_api.create_widget(name="opacity_test", title="Test", size=(100, 100))
|
||||
w.set_opacity(5.0) # Should be clamped
|
||||
|
||||
self._run_test(
|
||||
"WidgetAPI",
|
||||
"Set opacity out of range",
|
||||
ErrorType.BOUNDARY,
|
||||
test_bad_opacity
|
||||
)
|
||||
|
||||
# Cleanup
|
||||
try:
|
||||
self.widget_api.close_widget("opacity_test")
|
||||
except:
|
||||
pass
|
||||
|
||||
# Test load state with invalid data
|
||||
def load_invalid_state():
|
||||
w = self.widget_api.create_widget(name="state_test", title="Test")
|
||||
w.load_state({"invalid": "state_data"})
|
||||
|
||||
self._run_test(
|
||||
"WidgetAPI",
|
||||
"Load invalid state",
|
||||
ErrorType.INVALID_INPUT,
|
||||
load_invalid_state
|
||||
)
|
||||
|
||||
# Cleanup
|
||||
try:
|
||||
self.widget_api.close_widget("state_test")
|
||||
except:
|
||||
pass
|
||||
|
||||
# Test close already closed widget
|
||||
def close_closed():
|
||||
w = self.widget_api.create_widget(name="close_test", title="Test")
|
||||
w.close()
|
||||
w.close() # Second close
|
||||
|
||||
self._run_test(
|
||||
"WidgetAPI",
|
||||
"Close already closed widget",
|
||||
ErrorType.RESOURCE_NOT_FOUND,
|
||||
close_closed
|
||||
)
|
||||
|
||||
def _test_externalapi_errors(self):
|
||||
"""Test ExternalAPI error handling."""
|
||||
# Test start server on invalid port
|
||||
self._run_test(
|
||||
"ExternalAPI",
|
||||
"Start server invalid port",
|
||||
ErrorType.INVALID_INPUT,
|
||||
lambda: self.external_api.start_server(port=-1)
|
||||
)
|
||||
|
||||
# Test register endpoint with invalid path
|
||||
self._run_test(
|
||||
"ExternalAPI",
|
||||
"Register endpoint invalid path",
|
||||
ErrorType.INVALID_INPUT,
|
||||
lambda: self.external_api.register_endpoint("", lambda x: x)
|
||||
)
|
||||
|
||||
# Test register webhook with invalid name
|
||||
self._run_test(
|
||||
"ExternalAPI",
|
||||
"Register webhook invalid name",
|
||||
ErrorType.INVALID_INPUT,
|
||||
lambda: self.external_api.register_webhook("", lambda x: x)
|
||||
)
|
||||
|
||||
# Test unregister non-existent endpoint
|
||||
self._run_test(
|
||||
"ExternalAPI",
|
||||
"Unregister non-existent endpoint",
|
||||
ErrorType.RESOURCE_NOT_FOUND,
|
||||
lambda: self.external_api.unregister_endpoint("definitely_not_registered")
|
||||
)
|
||||
|
||||
# Test unregister non-existent webhook
|
||||
self._run_test(
|
||||
"ExternalAPI",
|
||||
"Unregister non-existent webhook",
|
||||
ErrorType.RESOURCE_NOT_FOUND,
|
||||
lambda: self.external_api.unregister_webhook("definitely_not_registered")
|
||||
)
|
||||
|
||||
# Test revoke non-existent API key
|
||||
self._run_test(
|
||||
"ExternalAPI",
|
||||
"Revoke non-existent API key",
|
||||
ErrorType.RESOURCE_NOT_FOUND,
|
||||
lambda: self.external_api.revoke_api_key("invalid_key_12345")
|
||||
)
|
||||
|
||||
# Test post_webhook with invalid URL
|
||||
self._run_test(
|
||||
"ExternalAPI",
|
||||
"Post webhook invalid URL",
|
||||
ErrorType.INVALID_INPUT,
|
||||
lambda: self.external_api.post_webhook("not_a_url", {})
|
||||
)
|
||||
|
||||
# Test post_webhook with unreachable host
|
||||
self._run_test(
|
||||
"ExternalAPI",
|
||||
"Post webhook unreachable host",
|
||||
ErrorType.TIMEOUT,
|
||||
lambda: self.external_api.post_webhook(
|
||||
"http://192.0.2.1:9999/test", # TEST-NET-1, should be unreachable
|
||||
{},
|
||||
timeout=1
|
||||
)
|
||||
)
|
||||
|
||||
# Test IPC send to non-existent channel
|
||||
self._run_test(
|
||||
"ExternalAPI",
|
||||
"IPC send non-existent channel",
|
||||
ErrorType.RESOURCE_NOT_FOUND,
|
||||
lambda: self.external_api.send_ipc("nonexistent_channel", {})
|
||||
)
|
||||
|
||||
# Test get_url when server not running
|
||||
def get_url_not_running():
|
||||
# Ensure server is stopped
|
||||
self.external_api.stop_server()
|
||||
url = self.external_api.get_url("test")
|
||||
if not url:
|
||||
raise ValueError("Empty URL when server not running")
|
||||
|
||||
self._run_test(
|
||||
"ExternalAPI",
|
||||
"Get URL server not running",
|
||||
ErrorType.SERVICE_UNAVAILABLE,
|
||||
get_url_not_running
|
||||
)
|
||||
|
||||
def _show_summary_report(self):
|
||||
"""Display summary report."""
|
||||
if not self.results:
|
||||
self.api.show_notification("No Results", "Run tests first")
|
||||
return
|
||||
|
||||
total = len(self.results)
|
||||
graceful = sum(1 for r in self.results if r.handled_gracefully)
|
||||
correct = sum(1 for r in self.results if r.correct_exception)
|
||||
|
||||
report = f"""
|
||||
Error Handling Test Summary
|
||||
===========================
|
||||
|
||||
Total Tests: {total}
|
||||
Handled Gracefully: {graceful}/{total} ({graceful/total*100:.1f}%)
|
||||
Correct Exception: {correct}/{total} ({correct/total*100:.1f}%)
|
||||
|
||||
By Error Type:
|
||||
"""
|
||||
|
||||
for error_type in ErrorType:
|
||||
type_tests = [r for r in self.results if r.error_type == error_type]
|
||||
if type_tests:
|
||||
type_graceful = sum(1 for r in type_tests if r.handled_gracefully)
|
||||
report += f" {error_type.value}: {type_graceful}/{len(type_tests)} graceful\n"
|
||||
|
||||
report += "\nBy API:\n"
|
||||
for api in ["PluginAPI", "WidgetAPI", "ExternalAPI"]:
|
||||
api_tests = [r for r in self.results if r.api == api]
|
||||
if api_tests:
|
||||
api_graceful = sum(1 for r in api_tests if r.handled_gracefully)
|
||||
report += f" {api}: {api_graceful}/{len(api_tests)} graceful\n"
|
||||
|
||||
print(report)
|
||||
self.api.show_notification("Report Generated", "See console for full report")
|
||||
|
||||
def shutdown(self):
|
||||
"""Clean up resources."""
|
||||
# Clean up any test widgets
|
||||
for name in ["duplicate_test", "opacity_test", "state_test", "close_test"]:
|
||||
try:
|
||||
self.widget_api.close_widget(name)
|
||||
except:
|
||||
pass
|
||||
|
||||
if self.widget:
|
||||
self.widget.close()
|
||||
|
||||
|
||||
# Plugin entry point
|
||||
plugin_class = ErrorHandlingTestPlugin
|
||||
|
|
@ -0,0 +1,18 @@
|
|||
{
|
||||
"id": "performance_benchmark",
|
||||
"name": "Performance Benchmark",
|
||||
"version": "1.0.0",
|
||||
"description": "Measures API performance: latency, throughput, memory usage, and scalability",
|
||||
"author": "Test Suite",
|
||||
"entry_point": "plugin.py",
|
||||
"category": "test",
|
||||
"tags": ["test", "performance", "benchmark", "metrics"],
|
||||
"min_api_version": "2.2.0",
|
||||
"permissions": ["widgets", "data", "events", "http", "tasks"],
|
||||
"test_metadata": {
|
||||
"test_type": "performance",
|
||||
"apis_tested": ["PluginAPI", "WidgetAPI", "ExternalAPI"],
|
||||
"metrics": ["latency", "throughput", "memory", "cpu"],
|
||||
"automated": true
|
||||
}
|
||||
}
|
||||
|
|
@ -0,0 +1,504 @@
|
|||
"""
|
||||
Performance Benchmark Plugin
|
||||
|
||||
Comprehensive performance testing for all APIs:
|
||||
- API call latency measurements
|
||||
- Throughput testing
|
||||
- Memory usage tracking
|
||||
- Widget rendering performance
|
||||
- HTTP request performance
|
||||
- Event bus throughput
|
||||
|
||||
Generates benchmark reports with performance metrics.
|
||||
"""
|
||||
|
||||
import time
|
||||
import gc
|
||||
import sys
|
||||
from datetime import datetime
|
||||
from typing import Dict, List, Any, Callable
|
||||
from dataclasses import dataclass, field
|
||||
from statistics import mean, median, stdev
|
||||
|
||||
from core.base_plugin import BasePlugin
|
||||
from core.api.plugin_api import get_api
|
||||
from core.api.widget_api import get_widget_api, WidgetType
|
||||
from core.api.external_api import get_external_api
|
||||
|
||||
|
||||
@dataclass
|
||||
class BenchmarkResult:
|
||||
"""Single benchmark result."""
|
||||
category: str
|
||||
operation: str
|
||||
iterations: int
|
||||
total_time_ms: float
|
||||
avg_time_ms: float
|
||||
min_time_ms: float
|
||||
max_time_ms: float
|
||||
throughput_ops_sec: float
|
||||
memory_kb: float = 0
|
||||
notes: str = ""
|
||||
|
||||
|
||||
class PerformanceBenchmarkPlugin(BasePlugin):
|
||||
"""
|
||||
Performance benchmark suite for EU-Utility APIs.
|
||||
|
||||
Measures latency, throughput, and resource usage
|
||||
to identify bottlenecks and track performance over time.
|
||||
"""
|
||||
|
||||
def __init__(self):
|
||||
super().__init__()
|
||||
self.api = None
|
||||
self.widget_api = None
|
||||
self.external_api = None
|
||||
self.results: List[BenchmarkResult] = []
|
||||
self.widget = None
|
||||
self.warmup_iterations = 10
|
||||
|
||||
def initialize(self):
|
||||
"""Initialize and run benchmarks."""
|
||||
self.api = get_api()
|
||||
self.widget_api = get_widget_api()
|
||||
self.external_api = get_external_api()
|
||||
|
||||
self._create_results_widget()
|
||||
|
||||
# Run benchmarks after a short delay to let UI settle
|
||||
import threading
|
||||
threading.Timer(0.5, self._run_all_benchmarks).start()
|
||||
|
||||
def _create_results_widget(self):
|
||||
"""Create widget to display benchmark results."""
|
||||
self.widget = self.widget_api.create_widget(
|
||||
name="performance_benchmark",
|
||||
title="⚡ Performance Benchmark",
|
||||
size=(900, 700),
|
||||
position=(300, 100),
|
||||
widget_type=WidgetType.CHART
|
||||
)
|
||||
self._update_widget_display()
|
||||
self.widget.show()
|
||||
|
||||
def _update_widget_display(self):
|
||||
"""Update widget content."""
|
||||
try:
|
||||
from PyQt6.QtWidgets import (
|
||||
QWidget, QVBoxLayout, QHBoxLayout, QLabel,
|
||||
QPushButton, QTableWidget, QTableWidgetItem,
|
||||
QHeaderView, QProgressBar, QTextBrowser, QGroupBox
|
||||
)
|
||||
from PyQt6.QtCore import Qt
|
||||
from PyQt6.QtGui import QColor
|
||||
|
||||
container = QWidget()
|
||||
main_layout = QVBoxLayout(container)
|
||||
|
||||
# Header
|
||||
header = QLabel("⚡ API Performance Benchmark")
|
||||
header.setStyleSheet("font-size: 22px; font-weight: bold; color: #ff8c42;")
|
||||
main_layout.addWidget(header)
|
||||
|
||||
# Summary section
|
||||
if self.results:
|
||||
summary_layout = QHBoxLayout()
|
||||
|
||||
total_ops = sum(r.iterations for r in self.results)
|
||||
avg_latency = mean(r.avg_time_ms for r in self.results)
|
||||
total_throughput = sum(r.throughput_ops_sec for r in self.results)
|
||||
|
||||
summaries = [
|
||||
("Total Operations", f"{total_ops:,}"),
|
||||
("Avg Latency", f"{avg_latency:.3f}ms"),
|
||||
("Combined Throughput", f"{total_throughput:,.0f}/s")
|
||||
]
|
||||
|
||||
for title, value in summaries:
|
||||
group = QGroupBox(title)
|
||||
group_layout = QVBoxLayout(group)
|
||||
lbl = QLabel(value)
|
||||
lbl.setStyleSheet("font-size: 20px; font-weight: bold; color: #4ecca3;")
|
||||
lbl.setAlignment(Qt.AlignmentFlag.AlignCenter)
|
||||
group_layout.addWidget(lbl)
|
||||
summary_layout.addWidget(group)
|
||||
|
||||
main_layout.addLayout(summary_layout)
|
||||
|
||||
# Results table
|
||||
self.results_table = QTableWidget()
|
||||
self.results_table.setColumnCount(7)
|
||||
self.results_table.setHorizontalHeaderLabels([
|
||||
"Category", "Operation", "Iterations", "Avg (ms)",
|
||||
"Min/Max (ms)", "Throughput (ops/s)", "Status"
|
||||
])
|
||||
self.results_table.horizontalHeader().setSectionResizeMode(QHeaderView.ResizeMode.Stretch)
|
||||
self._populate_results_table()
|
||||
main_layout.addWidget(self.results_table)
|
||||
|
||||
# Controls
|
||||
btn_layout = QHBoxLayout()
|
||||
|
||||
btn_run = QPushButton("▶ Run Benchmarks")
|
||||
btn_run.clicked.connect(self._run_all_benchmarks)
|
||||
btn_layout.addWidget(btn_run)
|
||||
|
||||
btn_export = QPushButton("📊 Export Results")
|
||||
btn_export.clicked.connect(self._export_results)
|
||||
btn_layout.addWidget(btn_export)
|
||||
|
||||
main_layout.addLayout(btn_layout)
|
||||
|
||||
# Detailed report
|
||||
if self.results:
|
||||
report_group = QGroupBox("Detailed Report")
|
||||
report_layout = QVBoxLayout(report_group)
|
||||
|
||||
self.report_browser = QTextBrowser()
|
||||
self.report_browser.setHtml(self._generate_detailed_report())
|
||||
self.report_browser.setMaximumHeight(200)
|
||||
report_layout.addWidget(self.report_browser)
|
||||
|
||||
main_layout.addWidget(report_group)
|
||||
|
||||
self.widget.set_content(container)
|
||||
|
||||
except ImportError as e:
|
||||
print(f"Widget error: {e}")
|
||||
|
||||
def _populate_results_table(self):
|
||||
"""Populate results table."""
|
||||
if not hasattr(self, 'results_table'):
|
||||
return
|
||||
|
||||
self.results_table.setRowCount(len(self.results))
|
||||
|
||||
for i, r in enumerate(self.results):
|
||||
self.results_table.setItem(i, 0, QTableWidgetItem(r.category))
|
||||
self.results_table.setItem(i, 1, QTableWidgetItem(r.operation))
|
||||
self.results_table.setItem(i, 2, QTableWidgetItem(f"{r.iterations:,}"))
|
||||
self.results_table.setItem(i, 3, QTableWidgetItem(f"{r.avg_time_ms:.3f}"))
|
||||
self.results_table.setItem(i, 4, QTableWidgetItem(f"{r.min_time_ms:.3f} / {r.max_time_ms:.3f}"))
|
||||
self.results_table.setItem(i, 5, QTableWidgetItem(f"{r.throughput_ops_sec:,.0f}"))
|
||||
|
||||
# Status based on performance
|
||||
status = "✅ Good"
|
||||
if r.avg_time_ms > 100:
|
||||
status = "⚠️ Slow"
|
||||
elif r.avg_time_ms > 10:
|
||||
status = "⚡ OK"
|
||||
|
||||
status_item = QTableWidgetItem(status)
|
||||
self.results_table.setItem(i, 6, status_item)
|
||||
|
||||
def _generate_detailed_report(self) -> str:
|
||||
"""Generate detailed HTML report."""
|
||||
html = """
|
||||
<style>
|
||||
body { font-family: 'Segoe UI', monospace; background: #1a1a2e; color: #eee; padding: 15px; }
|
||||
h3 { color: #ff8c42; margin-top: 15px; }
|
||||
.metric { margin: 5px 0; }
|
||||
.good { color: #4ecca3; }
|
||||
.warning { color: #ffd93d; }
|
||||
.slow { color: #ff6b6b; }
|
||||
table { width: 100%; border-collapse: collapse; margin: 10px 0; font-size: 12px; }
|
||||
th { background: #2d3748; padding: 8px; text-align: left; }
|
||||
td { padding: 6px; border-bottom: 1px solid #444; }
|
||||
</style>
|
||||
<h3>Performance Summary by Category</h3>
|
||||
<table>
|
||||
<tr><th>Category</th><th>Tests</th><th>Avg Latency</th><th>Total Throughput</th></tr>
|
||||
"""
|
||||
|
||||
categories = {}
|
||||
for r in self.results:
|
||||
if r.category not in categories:
|
||||
categories[r.category] = []
|
||||
categories[r.category].append(r)
|
||||
|
||||
for cat, results in categories.items():
|
||||
avg_lat = mean(r.avg_time_ms for r in results)
|
||||
total_tp = sum(r.throughput_ops_sec for r in results)
|
||||
html += f"""
|
||||
<tr>
|
||||
<td><strong>{cat}</strong></td>
|
||||
<td>{len(results)}</td>
|
||||
<td class="{'slow' if avg_lat > 100 else 'good'}">{avg_lat:.3f}ms</td>
|
||||
<td>{total_tp:,.0f}/s</td>
|
||||
</tr>
|
||||
"""
|
||||
|
||||
html += "</table>"
|
||||
|
||||
# Performance grades
|
||||
html += "<h3>Performance Grades</h3>"
|
||||
|
||||
all_latencies = [r.avg_time_ms for r in self.results]
|
||||
if all_latencies:
|
||||
overall_avg = mean(all_latencies)
|
||||
html += f"""
|
||||
<div class="metric">Overall Average Latency:
|
||||
<span class="{'slow' if overall_avg > 50 else 'warning' if overall_avg > 10 else 'good'}">{overall_avg:.3f}ms</span>
|
||||
</div>
|
||||
"""
|
||||
|
||||
# Slowest operations
|
||||
slowest = sorted(self.results, key=lambda r: r.avg_time_ms, reverse=True)[:3]
|
||||
html += "<p><strong>Top 3 Slowest Operations:</strong></p><ul>"
|
||||
for r in slowest:
|
||||
html += f"<li>{r.category}.{r.operation}: {r.avg_time_ms:.3f}ms</li>"
|
||||
html += "</ul>"
|
||||
|
||||
return html
|
||||
|
||||
def _benchmark(self, operation: Callable, iterations: int,
|
||||
category: str, name: str) -> BenchmarkResult:
|
||||
"""Run a benchmark and return results."""
|
||||
# Warmup
|
||||
for _ in range(self.warmup_iterations):
|
||||
try:
|
||||
operation()
|
||||
except:
|
||||
pass
|
||||
|
||||
# Actual benchmark
|
||||
times = []
|
||||
gc.collect() # Clean up before benchmark
|
||||
|
||||
for _ in range(iterations):
|
||||
start = time.perf_counter()
|
||||
try:
|
||||
operation()
|
||||
except Exception as e:
|
||||
print(f"Benchmark error: {e}")
|
||||
end = time.perf_counter()
|
||||
times.append((end - start) * 1000) # Convert to ms
|
||||
|
||||
total_time = sum(times)
|
||||
|
||||
return BenchmarkResult(
|
||||
category=category,
|
||||
operation=name,
|
||||
iterations=iterations,
|
||||
total_time_ms=total_time,
|
||||
avg_time_ms=mean(times),
|
||||
min_time_ms=min(times),
|
||||
max_time_ms=max(times),
|
||||
throughput_ops_sec=iterations / (total_time / 1000) if total_time > 0 else 0
|
||||
)
|
||||
|
||||
def _run_all_benchmarks(self):
|
||||
"""Execute all performance benchmarks."""
|
||||
self.results.clear()
|
||||
|
||||
# PluginAPI benchmarks
|
||||
self._benchmark_datastore()
|
||||
self._benchmark_eventbus()
|
||||
self._benchmark_http()
|
||||
|
||||
# WidgetAPI benchmarks
|
||||
self._benchmark_widget_creation()
|
||||
self._benchmark_widget_operations()
|
||||
|
||||
# ExternalAPI benchmarks
|
||||
self._benchmark_external_api()
|
||||
|
||||
self._update_widget_display()
|
||||
|
||||
# Show completion notification
|
||||
self.api.show_notification(
|
||||
"Benchmark Complete",
|
||||
f"Completed {len(self.results)} benchmark tests",
|
||||
duration=3000
|
||||
)
|
||||
|
||||
def _benchmark_datastore(self):
|
||||
"""Benchmark DataStore operations."""
|
||||
# Write benchmark
|
||||
def write_op():
|
||||
self.api.set_data(f"bench_key_{time.time()}", {"test": "data"})
|
||||
|
||||
self.results.append(self._benchmark(write_op, 100, "DataStore", "write"))
|
||||
|
||||
# Read benchmark
|
||||
self.api.set_data("bench_read_key", "test_value")
|
||||
|
||||
def read_op():
|
||||
self.api.get_data("bench_read_key")
|
||||
|
||||
self.results.append(self._benchmark(read_op, 1000, "DataStore", "read"))
|
||||
|
||||
# Cleanup
|
||||
self.api.delete_data("bench_read_key")
|
||||
|
||||
def _benchmark_eventbus(self):
|
||||
"""Benchmark Event Bus operations."""
|
||||
received = []
|
||||
|
||||
def handler(data):
|
||||
received.append(data)
|
||||
|
||||
sub_id = self.api.subscribe("bench.event", handler)
|
||||
|
||||
# Publish benchmark
|
||||
def publish_op():
|
||||
self.api.publish("bench.event", {"test": "data"})
|
||||
|
||||
result = self._benchmark(publish_op, 500, "EventBus", "publish")
|
||||
result.notes = f"Received: {len(received)} events"
|
||||
self.results.append(result)
|
||||
|
||||
# Subscribe/unsubscribe benchmark
|
||||
def sub_unsub_op():
|
||||
sid = self.api.subscribe("bench.temp", lambda x: None)
|
||||
self.api.unsubscribe(sid)
|
||||
|
||||
self.results.append(self._benchmark(sub_unsub_op, 100, "EventBus", "subscribe/unsubscribe"))
|
||||
|
||||
self.api.unsubscribe(sub_id)
|
||||
|
||||
def _benchmark_http(self):
|
||||
"""Benchmark HTTP operations."""
|
||||
# Note: This will make actual HTTP requests
|
||||
def http_get_op():
|
||||
self.api.http_get("https://httpbin.org/get", cache=False)
|
||||
|
||||
# Use fewer iterations for network operations
|
||||
self.results.append(self._benchmark(http_get_op, 5, "HTTP", "GET (network)"))
|
||||
|
||||
def _benchmark_widget_creation(self):
|
||||
"""Benchmark widget creation."""
|
||||
counter = [0]
|
||||
created_widgets = []
|
||||
|
||||
def create_op():
|
||||
name = f"bench_widget_{counter[0]}"
|
||||
widget = self.widget_api.create_widget(
|
||||
name=name,
|
||||
title=f"Bench {counter[0]}",
|
||||
size=(200, 150)
|
||||
)
|
||||
created_widgets.append(name)
|
||||
counter[0] += 1
|
||||
|
||||
result = self._benchmark(create_op, 20, "WidgetAPI", "create_widget")
|
||||
self.results.append(result)
|
||||
|
||||
# Cleanup created widgets
|
||||
for name in created_widgets:
|
||||
try:
|
||||
self.widget_api.close_widget(name)
|
||||
except:
|
||||
pass
|
||||
|
||||
def _benchmark_widget_operations(self):
|
||||
"""Benchmark widget operations."""
|
||||
widget = self.widget_api.create_widget(
|
||||
name="bench_op_widget",
|
||||
title="Benchmark",
|
||||
size=(300, 200)
|
||||
)
|
||||
widget.show()
|
||||
|
||||
# Move benchmark
|
||||
pos = [0]
|
||||
def move_op():
|
||||
widget.move(100 + pos[0], 100 + pos[0])
|
||||
pos[0] = (pos[0] + 10) % 200
|
||||
|
||||
self.results.append(self._benchmark(move_op, 200, "WidgetAPI", "move"))
|
||||
|
||||
# Opacity benchmark
|
||||
def opacity_op():
|
||||
widget.set_opacity(0.5 + (pos[0] % 100) / 200)
|
||||
|
||||
self.results.append(self._benchmark(opacity_op, 200, "WidgetAPI", "set_opacity"))
|
||||
|
||||
# Cleanup
|
||||
widget.close()
|
||||
|
||||
def _benchmark_external_api(self):
|
||||
"""Benchmark ExternalAPI operations."""
|
||||
# API key creation benchmark
|
||||
keys_created = []
|
||||
|
||||
def create_key_op():
|
||||
key = self.external_api.create_api_key(f"bench_key_{time.time()}")
|
||||
keys_created.append(key)
|
||||
|
||||
result = self._benchmark(create_key_op, 50, "ExternalAPI", "create_api_key")
|
||||
self.results.append(result)
|
||||
|
||||
# Cleanup keys
|
||||
for key in keys_created:
|
||||
self.external_api.revoke_api_key(key)
|
||||
|
||||
# Endpoint registration benchmark
|
||||
counter = [0]
|
||||
|
||||
def register_endpoint_op():
|
||||
self.external_api.register_endpoint(
|
||||
f"bench/endpoint/{counter[0]}",
|
||||
lambda x: x
|
||||
)
|
||||
counter[0] += 1
|
||||
|
||||
self.results.append(self._benchmark(register_endpoint_op, 30, "ExternalAPI", "register_endpoint"))
|
||||
|
||||
def _export_results(self):
|
||||
"""Export benchmark results to file."""
|
||||
try:
|
||||
import json
|
||||
|
||||
export_data = {
|
||||
"timestamp": datetime.now().isoformat(),
|
||||
"system_info": {
|
||||
"python_version": sys.version,
|
||||
"platform": sys.platform
|
||||
},
|
||||
"results": [
|
||||
{
|
||||
"category": r.category,
|
||||
"operation": r.operation,
|
||||
"iterations": r.iterations,
|
||||
"avg_time_ms": r.avg_time_ms,
|
||||
"min_time_ms": r.min_time_ms,
|
||||
"max_time_ms": r.max_time_ms,
|
||||
"throughput_ops_sec": r.throughput_ops_sec,
|
||||
"notes": r.notes
|
||||
}
|
||||
for r in self.results
|
||||
],
|
||||
"summary": {
|
||||
"total_tests": len(self.results),
|
||||
"overall_avg_latency_ms": mean(r.avg_time_ms for r in self.results) if self.results else 0,
|
||||
"total_throughput_ops_sec": sum(r.throughput_ops_sec for r in self.results)
|
||||
}
|
||||
}
|
||||
|
||||
filename = f"performance_benchmark_{datetime.now().strftime('%Y%m%d_%H%M%S')}.json"
|
||||
with open(filename, 'w') as f:
|
||||
json.dump(export_data, f, indent=2)
|
||||
|
||||
self.api.show_notification(
|
||||
"Export Complete",
|
||||
f"Results saved to {filename}",
|
||||
duration=3000
|
||||
)
|
||||
|
||||
except Exception as e:
|
||||
self.api.show_notification(
|
||||
"Export Failed",
|
||||
str(e),
|
||||
duration=3000
|
||||
)
|
||||
|
||||
def shutdown(self):
|
||||
"""Clean up resources."""
|
||||
if self.widget:
|
||||
self.widget.close()
|
||||
|
||||
|
||||
# Plugin entry point
|
||||
plugin_class = PerformanceBenchmarkPlugin
|
||||
Loading…
Reference in New Issue