Testing SmartGraph Applications

Testing is a crucial part of developing robust and reliable SmartGraph applications. This guide will walk you through various testing strategies and techniques specific to SmartGraph, helping you ensure the quality and correctness of your AI pipelines.

Unit Testing Components

Start by testing individual components in isolation. Here’s how you can unit test a custom component:

import pytest
from smartgraph import ReactiveComponent

class AddOneComponent(ReactiveComponent):
    async def process(self, input_data: int) -> int:
        return input_data + 1

@pytest.mark.asyncio
async def test_add_one_component():
    component = AddOneComponent("AddOne")
    result = await component.process(5)
    assert result == 6

Use the @pytest.mark.asyncio marker, provided by the pytest-asyncio plugin, so pytest can run these coroutine test functions.
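
If you would rather not add the marker to every test, recent versions of pytest-asyncio also offer an auto mode. A minimal pytest.ini sketch (assuming a pytest-asyncio release that supports asyncio_mode; check your installed version and its documentation):

[pytest]
asyncio_mode = auto

With this setting, every async def test function is collected and run on an event loop without the explicit marker.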

Testing Stateful Components

For components that maintain internal state, test both the state management and processing logic:

import pytest
from typing import Any
from smartgraph import ReactiveComponent

class CounterComponent(ReactiveComponent):
    def __init__(self, name: str):
        super().__init__(name)
        self.create_state("count", 0)

    async def process(self, input_data: Any) -> int:
        current_count = self.get_state("count").value
        new_count = current_count + 1
        self.update_state("count", new_count)
        return new_count

@pytest.mark.asyncio
async def test_counter_component():
    counter = CounterComponent("Counter")

    assert counter.get_state("count").value == 0

    result1 = await counter.process(None)
    assert result1 == 1
    assert counter.get_state("count").value == 1

    result2 = await counter.process(None)
    assert result2 == 2
    assert counter.get_state("count").value == 2
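
To keep stateful tests independent of one another, create a fresh component for each test. A minimal fixture sketch, assuming a new CounterComponent instance is all that is needed to reset its state:

@pytest.fixture
def counter():
    # Each test receives its own instance, so state never leaks between tests.
    return CounterComponent("Counter")

@pytest.mark.asyncio
async def test_counter_starts_at_zero(counter):
    assert counter.get_state("count").value == 0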

Testing Pipelines

Test entire pipelines to ensure components work correctly together:

import pytest
from smartgraph import ReactiveSmartGraph
from smartgraph.components import TextInputHandler, CompletionComponent

@pytest.mark.asyncio
async def test_simple_pipeline():
    graph = ReactiveSmartGraph()
    pipeline = graph.create_pipeline("TestPipeline")

    pipeline.add_component(TextInputHandler("Input"))
    pipeline.add_component(CompletionComponent("LLM", model="mock-model"))

    graph.compile()

    result = await graph.execute_and_await("TestPipeline", "Hello, world!")
    assert isinstance(result, str)
    assert len(result) > 0

Mocking External Dependencies

When testing components that rely on external services, use mocking to isolate your tests:

import aiohttp
import pytest
from unittest.mock import patch, AsyncMock
from smartgraph import ReactiveComponent

class APIComponent(ReactiveComponent):
    async def process(self, input_data: str) -> dict:
        async with aiohttp.ClientSession() as session:
            async with session.get(f"https://api.example.com/{input_data}") as response:
                return await response.json()

@pytest.mark.asyncio
async def test_api_component():
    # Patch ClientSession.get at the class level so the component never hits the network.
    with patch('aiohttp.ClientSession.get') as mock_get:
        mock_response = AsyncMock()
        mock_response.json.return_value = {"result": "mocked data"}
        # session.get(...) is used as an async context manager, so wire up __aenter__.
        mock_get.return_value.__aenter__.return_value = mock_response

        component = APIComponent("API")
        result = await component.process("test")

        assert result == {"result": "mocked data"}
        mock_get.assert_called_once_with("https://api.example.com/test")

Testing Error Handling

Ensure your components and pipelines handle errors correctly:

import pytest
from typing import Any
from smartgraph import ReactiveComponent, ReactiveSmartGraph
from smartgraph.exceptions import ExecutionError

class ErrorComponent(ReactiveComponent):
    async def process(self, input_data: Any) -> Any:
        raise ValueError("Simulated error")

@pytest.mark.asyncio
async def test_error_handling():
    graph = ReactiveSmartGraph()
    pipeline = graph.create_pipeline("ErrorPipeline")
    pipeline.add_component(ErrorComponent("ErrorComp"))
    graph.compile()

    with pytest.raises(ExecutionError):
        await graph.execute_and_await("ErrorPipeline", "test input")
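
It is also worth pinning down where a failure originates by calling the component directly, before the pipeline wraps it. A small sketch using the ErrorComponent defined above:

@pytest.mark.asyncio
async def test_error_component_raises_value_error():
    component = ErrorComponent("ErrorComp")
    # Calling process directly surfaces the original ValueError.
    with pytest.raises(ValueError, match="Simulated error"):
        await component.process("test input")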

Integration Testing

Perform integration tests to ensure different parts of your SmartGraph application work together:

import pytest
from smartgraph import ReactiveSmartGraph
from your_app.components import ComponentA, ComponentB, ComponentC

@pytest.mark.asyncio
async def test_integration():
    graph = ReactiveSmartGraph()
    pipeline = graph.create_pipeline("IntegrationPipeline")

    pipeline.add_component(ComponentA("A"))
    pipeline.add_component(ComponentB("B"))
    pipeline.add_component(ComponentC("C"))

    graph.compile()

    result = await graph.execute_and_await("IntegrationPipeline", "test input")
    # Assert the expected end-to-end behavior
    assert result == "expected output"

Testing Asynchronous Behavior

Test the asynchronous nature of SmartGraph components:

import pytest
import asyncio
from smartgraph import ReactiveComponent

class DelayedComponent(ReactiveComponent):
    async def process(self, input_data: int) -> int:
        await asyncio.sleep(0.1)
        return input_data * 2

@pytest.mark.asyncio
async def test_delayed_component():
    component = DelayedComponent("Delayed")
    start_time = asyncio.get_running_loop().time()
    result = await component.process(5)
    end_time = asyncio.get_running_loop().time()

    assert result == 10
    assert end_time - start_time >= 0.1
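
Because component processing is just a coroutine, you can also check that independent calls overlap instead of running back to back. A sketch, assuming DelayedComponent.process holds no shared state so concurrent invocations are safe:

@pytest.mark.asyncio
async def test_delayed_component_runs_concurrently():
    component = DelayedComponent("Delayed")
    start = asyncio.get_running_loop().time()
    # Five 0.1 s calls should overlap rather than take 0.5 s sequentially.
    results = await asyncio.gather(*(component.process(i) for i in range(5)))
    elapsed = asyncio.get_running_loop().time() - start

    assert results == [0, 2, 4, 6, 8]
    assert elapsed < 0.4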

Performance Testing

Test the performance of your SmartGraph pipelines:

import pytest
import time
from smartgraph import ReactiveSmartGraph
from your_app.pipelines import create_complex_pipeline

@pytest.mark.asyncio
async def test_pipeline_performance():
    graph = ReactiveSmartGraph()
    pipeline = create_complex_pipeline(graph)
    graph.compile()

    start_time = time.perf_counter()
    await graph.execute_and_await("ComplexPipeline", "test input")
    end_time = time.perf_counter()

    execution_time = end_time - start_time
    assert execution_time < 1.0  # Assuming the pipeline should complete within 1 second
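
Single-run timings can be noisy, so a common refinement is to average several runs before asserting. A sketch building on the test above (the five runs and the 1-second budget are illustrative numbers, not SmartGraph recommendations):

@pytest.mark.asyncio
async def test_pipeline_performance_averaged():
    graph = ReactiveSmartGraph()
    create_complex_pipeline(graph)
    graph.compile()

    runs = 5
    start = time.perf_counter()
    for _ in range(runs):
        await graph.execute_and_await("ComplexPipeline", "test input")
    average = (time.perf_counter() - start) / runs

    assert average < 1.0  # Illustrative budget; tune it to your pipeline.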

Best Practices for Testing SmartGraph Applications

  1. Isolate Tests: Test components in isolation before testing entire pipelines.

  2. Use Mocking: Mock external dependencies to ensure consistent and fast tests.

  3. Test State Management: For stateful components, test both the state changes and the processing logic.

  4. Error Scenarios: Include tests for error conditions and edge cases.

  5. Asynchronous Testing: Use pytest.mark.asyncio and be mindful of the asynchronous nature of SmartGraph.

  6. Integration Tests: Include tests that verify the correct interaction between components in a pipeline.

  7. Performance Benchmarks: Establish performance benchmarks for critical pipelines and monitor them over time.

  8. Test Configuration Variations: Test your components and pipelines with different configurations to ensure flexibility (see the parametrize sketch after this list).

  9. Continuous Integration: Integrate your tests into a CI/CD pipeline for automated testing on each code change.

  10. Test Coverage: Aim for high test coverage, especially for critical components and pipelines.
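
For point 8, pytest's parametrize marker is a convenient way to cover configuration variations. A minimal sketch with a hypothetical ScaleComponent (not part of SmartGraph), assuming ReactiveComponent subclasses can accept extra constructor arguments as in the stateful example above:

import pytest
from smartgraph import ReactiveComponent

class ScaleComponent(ReactiveComponent):
    # Hypothetical configurable component, used only for illustration.
    def __init__(self, name: str, factor: int):
        super().__init__(name)
        self.factor = factor

    async def process(self, input_data: int) -> int:
        return input_data * self.factor

@pytest.mark.asyncio
@pytest.mark.parametrize("factor, expected", [(1, 5), (2, 10), (10, 50)])
async def test_scale_component_configurations(factor, expected):
    component = ScaleComponent("Scale", factor=factor)
    assert await component.process(5) == expected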

Conclusion

Thorough testing is essential for building reliable and maintainable SmartGraph applications. By combining unit tests for individual components, integration tests for pipelines, and end-to-end tests for entire applications, you can ensure that your AI pipelines function correctly under various conditions.

Remember that testing reactive systems can be challenging because of their asynchronous nature; be mindful of timing and concurrency when writing and running tests, and avoid assertions that depend on exact execution timing.

Next Steps

With a solid understanding of testing SmartGraph applications, you’re well-equipped to build robust AI pipelines. Explore advanced topics and real-world use cases in the Advanced Concepts section to take your SmartGraph skills to the next level.