Run Python extensions in isolated virtual environments with seamless inter-process communication.
⚠️ Warning: This library is currently in active development and the API may change. While the core functionality is working, it should not be considered stable for production use yet.
pyisolate enables you to run Python extensions with conflicting dependencies in the same application by automatically creating isolated virtual environments for each extension. Extensions communicate with the host process through a transparent RPC system, making the isolation invisible to your code.
You can find documentation on this library here: https://comfy-org.github.io/pyisolate/
- 🔒 Dependency Isolation: Run extensions with incompatible dependencies (e.g., numpy 1.x and 2.x) in the same application
- 🚀 Zero-Copy PyTorch Tensor Sharing: Share PyTorch tensors between processes without serialization overhead
- 🔄 Transparent Communication: Call async methods across process boundaries as if they were local
- 🎯 Simple API: Clean, intuitive interface with minimal boilerplate
- ⚡ Fast: Uses `uv` for blazing-fast virtual environment creation
```bash
pip install pyisolate
```

For development:

```bash
pip install "pyisolate[dev]"
```
Create an extension that runs in an isolated environment:
```python
# extensions/my_extension/__init__.py
from pyisolate import ExtensionBase

class MyExtension(ExtensionBase):
    def on_module_loaded(self, module):
        self.module = module

    async def process_data(self, data):
        # This runs in an isolated process with its own dependencies
        import numpy as np  # This could be numpy 2.x
        return np.array(data).mean()
```
Load and use the extension from your main application:
```python
# main.py
import asyncio

import pyisolate

async def main():
    # Configure the extension manager
    config = pyisolate.ExtensionManagerConfig(
        venv_root_path="./venvs"
    )
    manager = pyisolate.ExtensionManager(pyisolate.ExtensionBase, config)

    # Load an extension with specific dependencies
    extension = await manager.load_extension(
        pyisolate.ExtensionConfig(
            name="data_processor",
            module_path="./extensions/my_extension",
            isolated=True,
            dependencies=["numpy>=2.0.0"]
        )
    )

    # Use the extension
    result = await extension.process_data([1, 2, 3, 4, 5])
    print(f"Mean: {result}")  # Mean: 3.0

    # Cleanup
    await extension.stop()

asyncio.run(main())
```
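The headline use case, loading extensions with mutually incompatible dependencies side by side, follows directly from the API above. Below is a sketch continuing inside `main()`; the extension names and paths are illustrative, and each directory is assumed to follow the same layout as `extensions/my_extension`:

```python
# Continuing inside main() above: two extensions with conflicting numpy pins.
# The names and paths here are hypothetical.
legacy = await manager.load_extension(
    pyisolate.ExtensionConfig(
        name="legacy_processor",
        module_path="./extensions/legacy_extension",
        isolated=True,
        dependencies=["numpy<2.0.0"],  # numpy 1.x
    )
)
modern = await manager.load_extension(
    pyisolate.ExtensionConfig(
        name="modern_processor",
        module_path="./extensions/modern_extension",
        isolated=True,
        dependencies=["numpy>=2.0.0"],  # numpy 2.x
    )
)

# Each call runs against that extension's own numpy, in its own venv
print(await legacy.process_data([1, 2, 3]))
print(await modern.process_data([1, 2, 3]))
```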
Share PyTorch tensors between processes without serialization:
```python
# extensions/ml_extension/__init__.py
from pyisolate import ExtensionBase
import torch

class MLExtension(ExtensionBase):
    async def process_tensor(self, tensor: torch.Tensor):
        # The tensor is shared, not copied!
        return tensor.mean()
```
```python
# main.py
extension = await manager.load_extension(
    pyisolate.ExtensionConfig(
        name="ml_processor",
        module_path="./extensions/ml_extension",
        share_torch=True  # Enable zero-copy tensor sharing
    )
)

# The large tensor is shared, not serialized
large_tensor = torch.randn(1000, 1000)
mean = await extension.process_tensor(large_tensor)
```
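Return values travel across the same channel: note that `tensor.mean()` above already returns a 0-dim tensor to the host. Here is a hedged sketch of a method that returns a full-size tensor; `normalize` is an illustrative name, and the assumption that results are shared back the same way as arguments is inferred from the zero-copy claim above rather than verified:

```python
# extensions/ml_extension/__init__.py (sketch)
class MLExtension(ExtensionBase):
    async def normalize(self, tensor: torch.Tensor) -> torch.Tensor:
        # Presumably shared back to the host rather than pickled
        # when share_torch=True (assumption, not verified)
        return (tensor - tensor.mean()) / tensor.std()
```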
Share state across all extensions using ProxiedSingleton:
```python
# shared.py
from pyisolate import ProxiedSingleton

class DatabaseAPI(ProxiedSingleton):
    def __init__(self):
        self.data = {}

    # Defined async so the awaited proxy calls below match the signature
    async def get(self, key):
        return self.data.get(key)

    async def set(self, key, value):
        self.data[key] = value
```

```python
# extensions/extension_a/__init__.py
from pyisolate import ExtensionBase
from shared import DatabaseAPI

class ExtensionA(ExtensionBase):
    async def save_result(self, result):
        db = DatabaseAPI()  # Returns a proxy to the host's instance
        await db.set("result", result)
```

```python
# extensions/extension_b/__init__.py
from pyisolate import ExtensionBase
from shared import DatabaseAPI

class ExtensionB(ExtensionBase):
    async def get_result(self):
        db = DatabaseAPI()  # Returns a proxy to the host's instance
        return await db.get("result")
```
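For an extension process to resolve `DatabaseAPI()` as a proxy, the singleton presumably has to be registered when the extension is loaded; the full application example below passes shared singletons through the `apis` field of `ExtensionConfig`. A sketch under that assumption:

```python
# main.py (sketch) -- exposing the shared singleton to an extension.
# Assumes the `apis` field shown in the full example below is how
# ProxiedSingleton classes are made available to extension processes.
from shared import DatabaseAPI

extension_a = await manager.load_extension(
    pyisolate.ExtensionConfig(
        name="extension_a",
        module_path="./extensions/extension_a",
        isolated=True,
        dependencies=[],
        apis=[DatabaseAPI],
    )
)
```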
A complete pyisolate application requires a special `main.py` entry point to handle virtual environment activation:
```python
# main.py
if __name__ == "__main__":
    # When running as the main script, import and run your host application
    from host import main
    main()
else:
    # When imported by extension processes, ensure the venv is properly activated
    import os
    import site
    import sys

    if os.name == "nt":  # Windows-specific venv activation
        venv = os.environ.get("VIRTUAL_ENV", "")
        if venv != "":
            sys.path.insert(0, os.path.join(venv, "Lib", "site-packages"))
            site.addsitedir(os.path.join(venv, "Lib", "site-packages"))
```
```python
# host.py - Your main application logic
import asyncio

import pyisolate

async def async_main():
    # Create the extension manager
    config = pyisolate.ExtensionManagerConfig(
        venv_root_path="./extension-venvs"
    )
    manager = pyisolate.ExtensionManager(pyisolate.ExtensionBase, config)

    # Load extensions (e.g., from a directory or configuration file).
    # discover_extensions() and load_dependencies() are placeholders for
    # your own discovery logic; a sketch follows below.
    extensions = []
    for extension_name, extension_path in discover_extensions():
        extension_config = pyisolate.ExtensionConfig(
            name=extension_name,
            module_path=extension_path,
            isolated=True,
            dependencies=load_dependencies(extension_path),
            apis=[SharedAPI]  # Optional shared singletons (ProxiedSingleton subclasses)
        )
        extension = await manager.load_extension(extension_config)
        extensions.append(extension)

    # Use the extensions
    for extension in extensions:
        result = await extension.process()
        print(f"Result: {result}")

    # Clean shutdown
    for extension in extensions:
        await extension.stop()

def main():
    asyncio.run(async_main())
```
This structure ensures that:
- The host application runs normally when executed directly
- Extension processes properly activate their virtual environments when spawned
- Windows-specific path handling is properly managed
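`discover_extensions()` and `load_dependencies()` above are application-defined. One plausible implementation, assuming each extension lives in its own directory with an optional `requirements.txt` (hypothetical helpers, not part of pyisolate):

```python
# host.py (sketch) -- hypothetical discovery helpers.
import os

def discover_extensions(root="./extensions"):
    """Yield (name, path) for every extension directory under root."""
    for entry in sorted(os.listdir(root)):
        path = os.path.join(root, entry)
        if os.path.isdir(path):
            yield entry, path

def load_dependencies(extension_path):
    """Read an extension's dependencies from an optional requirements.txt."""
    requirements = os.path.join(extension_path, "requirements.txt")
    if not os.path.exists(requirements):
        return []
    with open(requirements) as f:
        return [
            line.strip()
            for line in f
            if line.strip() and not line.lstrip().startswith("#")
        ]
```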
- Automatic Virtual Environment Management: Creates and manages an isolated environment per extension
- Bidirectional RPC: Extensions can call host methods and vice versa
- Async/Await Support: Full support for asynchronous programming
- Lifecycle Hooks: `before_module_loaded()`, `on_module_loaded()`, and `stop()` for setup/teardown (see the sketch after this list)
- Error Propagation: Exceptions are properly propagated across process boundaries
- Dependency Resolution: Automatically installs extension-specific dependencies
- Platform Support: Works on Windows and Linux; macOS testing is planned
- Context Tracking: Ensures callbacks happen on the same asyncio loop as the original call
- Fast Installation: Uses `uv` for 10-100x faster package installation, without every extension keeping its own copy of shared libraries
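A hedged sketch tying the lifecycle hooks and error propagation together. The hook names come from the list above; the exact hook signatures and the assumption that exception types survive the round trip are inferred from this README rather than verified against the API:

```python
# extensions/audited_extension/__init__.py (sketch)
from pyisolate import ExtensionBase

class AuditedExtension(ExtensionBase):
    def before_module_loaded(self):
        # Setup that must happen before the module is imported
        print("about to load module")

    def on_module_loaded(self, module):
        # The module is now loaded inside the isolated process
        self.module = module

    async def risky(self):
        raise ValueError("boom")
```

```python
# main.py (sketch) -- inside an async function, after loading the extension
try:
    await extension.risky()
except ValueError as e:
    # Assumes the exception type survives the process boundary,
    # per the Error Propagation feature above
    print(f"extension raised: {e}")  # -> extension raised: boom
```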
```
┌─────────────────────┐      RPC     ┌─────────────┐
│    Host Process     │◄────────────►│ Extension A │
│                     │              │  (venv A)   │
│  ┌──────────────┐   │              └─────────────┘
│  │   Shared     │   │      RPC     ┌─────────────┐
│  │  Singletons  │   │◄────────────►│ Extension B │
│  └──────────────┘   │              │  (venv B)   │
└─────────────────────┘              └─────────────┘
```
- Core isolation and RPC system
- Automatic virtual environment creation
- Bidirectional communication
- PyTorch tensor sharing
- Shared singleton pattern
- Comprehensive test suite
- Windows, Linux support
- Security features (path normalization)
- Fast installation with `uv`
- Context tracking for RPC calls
- Async/await support
- Performance benchmarking suite
- Memory usage tracking and benchmarking
- Documentation site
- macOS testing
- Wrapper for non-async calls between processes
- Network access restrictions per extension
- Filesystem access sandboxing
- CPU/Memory usage limits
- Hot-reloading of extensions
- Distributed RPC (across machines)
- Profiling and debugging tools
pyisolate is perfect for:
- Plugin Systems: When plugins may require conflicting dependencies
- ML Pipelines: Different models requiring different library versions
- Microservices in a Box: Multiple services with different dependencies in one app
- Testing: Running tests with different dependency versions in parallel
- Legacy Code Integration: Wrapping legacy code with specific dependency requirements
We welcome contributions!
```bash
# Setup development environment
uv venv && source .venv/bin/activate
uv pip install -e ".[dev,test]"
pre-commit install

# Run tests
pytest

# Run linting
ruff check pyisolate tests

# Run benchmarks
python benchmarks/simple_benchmark.py
```
pyisolate includes a comprehensive benchmarking suite to measure RPC call overhead:
```bash
# Install benchmark dependencies
uv pip install -e ".[bench]"

# Quick benchmark using existing example extensions
python benchmarks/simple_benchmark.py

# Full benchmark suite with statistical analysis
python benchmarks/benchmark.py

# Quick mode with fewer iterations for faster results
python benchmarks/benchmark.py --quick

# Skip torch benchmarks (if torch is not available)
python benchmarks/benchmark.py --no-torch

# Skip GPU benchmarks
python benchmarks/benchmark.py --no-gpu

# Run benchmarks via pytest
pytest tests/test_benchmarks.py -v -s
```
Example output:

```
============================================================
RPC BENCHMARK RESULTS
============================================================
Successful Benchmarks:
+--------------------------+-------------+----------------+------------+------------+
| Test                     |   Mean (ms) |   Std Dev (ms) |   Min (ms) |   Max (ms) |
+==========================+=============+================+============+============+
| small_int_shared         |        0.29 |           0.04 |       0.22 |       0.71 |
+--------------------------+-------------+----------------+------------+------------+
| small_string_shared      |        0.29 |           0.04 |       0.22 |       0.74 |
+--------------------------+-------------+----------------+------------+------------+
| medium_string_shared     |        0.29 |           0.04 |       0.22 |       0.74 |
+--------------------------+-------------+----------------+------------+------------+
| large_string_shared      |        0.30 |           0.04 |       0.25 |       0.73 |
+--------------------------+-------------+----------------+------------+------------+
| tiny_tensor_cpu_shared   |        0.98 |           0.10 |       0.84 |       1.88 |
+--------------------------+-------------+----------------+------------+------------+
| tiny_tensor_gpu_shared   |        1.27 |           0.29 |       0.91 |       2.83 |
+--------------------------+-------------+----------------+------------+------------+
| small_tensor_cpu_shared  |        0.89 |           0.10 |       0.76 |       2.31 |
+--------------------------+-------------+----------------+------------+------------+
| small_tensor_gpu_shared  |        1.50 |           0.38 |       1.06 |       2.99 |
+--------------------------+-------------+----------------+------------+------------+
| medium_tensor_cpu_shared |        0.88 |           0.09 |       0.76 |       1.77 |
+--------------------------+-------------+----------------+------------+------------+
| medium_tensor_gpu_shared |        1.37 |           0.28 |       1.04 |       3.52 |
+--------------------------+-------------+----------------+------------+------------+
| large_tensor_cpu_shared  |        0.88 |           0.10 |       0.74 |       1.97 |
+--------------------------+-------------+----------------+------------+------------+
| large_tensor_gpu_shared  |        1.66 |           0.65 |       1.06 |      11.44 |
+--------------------------+-------------+----------------+------------+------------+
| image_8k_cpu_shared      |        1.18 |           0.12 |       1.01 |       2.07 |
+--------------------------+-------------+----------------+------------+------------+
| image_8k_gpu_shared      |        2.93 |           0.96 |       2.04 |      26.92 |
+--------------------------+-------------+----------------+------------+------------+
| model_6gb_cpu_shared     |        0.90 |           0.10 |       0.76 |       2.04 |
+--------------------------+-------------+----------------+------------+------------+

Failed Tests:
+----------------------+------------------+
| Test                 | Error            |
+======================+==================+
| model_6gb_gpu_shared | CUDA OOM/Timeout |
+----------------------+------------------+
```
The benchmarks measure:
- Small Data RPC Overhead: roughly 0.3 ms per call for basic data types (integers, strings) in the run above
- Large Data Scaling: Performance with large arrays and tensors
- Torch Tensor Overhead: The additional cost of sharing tensors across processes
- GPU vs CPU Tensors: GPU tensors show higher overhead due to device transfers
- Array Processing: Numpy arrays show ~95% overhead vs basic data types
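To sanity-check these figures in your own environment, you can time calls directly. Below is a rough sketch reusing the Quick Start's `extension` (run inside its async `main()`); `time.perf_counter` also counts event-loop overhead, so expect slightly higher numbers than the suite reports:

```python
import time

# Crude mean round-trip time of one RPC call
iterations = 1000
start = time.perf_counter()
for _ in range(iterations):
    await extension.process_data([1, 2, 3])
elapsed_ms = (time.perf_counter() - start) * 1000 / iterations
print(f"mean RPC round trip: {elapsed_ms:.2f} ms")
```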
For detailed benchmark documentation, see `benchmarks/README.md`.
pyisolate is licensed under the MIT License. See LICENSE for details.
- Built on Python's `multiprocessing` and `asyncio`
- Uses `uv` for fast package management
- Inspired by plugin systems like Chrome Extensions and VS Code Extensions
Star this repo if you find it useful! ⭐