06. FAQ
This page addresses common questions and issues encountered when using TritonParse.
Most frequently asked questions:
- What is TritonParse?
- How do I install it?
- How do I generate traces?
- Why are my traces empty?
- Why can't I see source mappings?
- How do I compare kernels?
- How do I generate a reproducer?
Q: What is TritonParse?
A: TritonParse is a comprehensive visualization and analysis tool for Triton IR files. It helps developers analyze, debug, and understand the Triton kernel compilation process by:
- Capturing structured compilation and launch logs
- Providing interactive visualization of IR stages and launch events
- Mapping transformations between compilation stages
- Offering side-by-side IR code viewing
Q: How do I install it?
A: It depends on your use case:
- For analysis only: No installation needed! Just visit https://meta-pytorch.org/tritonparse/
- For generating traces: Install the Python package (`pip install -e .`)
- For development: You need the full development setup
Q: What are the requirements?
A:
- Python >= 3.10
- Triton >= 3.4.0 (PyPI installation is now recommended)
- GPU support (for GPU tracing):
  - NVIDIA GPUs: CUDA 11.8+ or 12.x
  - AMD GPUs: ROCm 5.0+ (MI100, MI200, MI300 series)
- Modern browser (Chrome 90+, Firefox 88+, Safari 14+, Edge 90+)
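A quick script can sanity-check the Python and Triton requirements above before you start tracing (an illustrative sketch, not part of TritonParse):

```python
import importlib.util
import sys


def check_requirements(min_python=(3, 10)):
    """Return a list of human-readable problems with the current environment."""
    problems = []
    if sys.version_info < min_python:
        problems.append(f"Python {min_python[0]}.{min_python[1]}+ is required")
    # Look for Triton without importing it (importing requires a working install)
    if importlib.util.find_spec("triton") is None:
        problems.append("Triton is not installed (pip install triton)")
    return problems


if __name__ == "__main__":
    for problem in check_requirements():
        print(problem)
```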
Q: How do I install Triton?
A: Install from PyPI (recommended):

```shell
pip install triton
```

💡 For building from source or development setup, see the Installation Guide.
Q: Why do I get "No module named 'triton'"?
A: This usually means:
- Triton isn't installed - Install it (see above)
- Wrong Python environment - Make sure you're in the right virtual environment
- Installation failed - Check for compilation errors during Triton installation
Q: Do I need a GPU to use TritonParse?
A: Yes, a GPU is required for tracing because Triton itself depends on one:
- For generating traces: GPU is required (either NVIDIA with CUDA or AMD with ROCm)
- For the web interface only: No GPU needed (you can view existing trace files from others)
Note: Triton kernels can only run on a GPU, so you need GPU hardware to generate your own traces.
Q: How do I generate traces?
A: Initialize logging before running kernels, then parse:

```python
import tritonparse.structured_logging
import tritonparse.utils

# Initialize logging before any kernels run
tritonparse.structured_logging.init("./logs/", enable_trace_launch=True)

# Your kernel code here

# Parse the raw logs into analyzable output
tritonparse.utils.unified_parse(source="./logs/", out="./parsed_output")
```

💡 See the Usage Guide for complete examples.
Q: Why are my traces empty?
A: Common causes:
- Logging not initialized - Make sure you call `tritonparse.structured_logging.init()` before kernel execution
- No kernel execution - Ensure your code actually executes Triton kernels
- Cache issues - Set the `TORCHINDUCTOR_FX_GRAPH_CACHE=0` environment variable
- Permissions - Check that the log directory is writable
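For the cache issue in particular, disabling torch.compile's FX graph cache before the run forces recompilation so the kernels are traced again (the script name below is a placeholder):

```shell
# Force Inductor to recompile so Triton kernels are captured again
export TORCHINDUCTOR_FX_GRAPH_CACHE=0
# python my_script.py   # your workload goes here (placeholder name)
echo "$TORCHINDUCTOR_FX_GRAPH_CACHE"
```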
Q: What is the difference between .ndjson and .gz files?
A:
- `.ndjson`: Raw trace logs, no source mapping, good for debugging
- `.gz`: Compressed parsed traces with full source mapping, recommended for analysis
Always use `.gz` files for full functionality in the web interface.
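Because `.ndjson` stores one JSON object per line, raw logs are easy to inspect programmatically. A minimal reader might look like this (an illustrative helper, not a TritonParse API):

```python
import json


def iter_trace_events(path):
    """Yield one event dict per non-empty line of a raw .ndjson trace log."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:  # skip blank lines between events
                yield json.loads(line)
```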
Q: Is my trace data private?
A: Yes! The web interface runs entirely in your browser:
- No data is sent to servers
- Files are processed locally
- Use the online interface safely at https://meta-pytorch.org/tritonparse/
Q: Why can't I see source mappings?
A:
- Using .ndjson files - Switch to `.gz` files from the `parsed_output` directory
- Parsing failed - Check that `unified_parse()` completed successfully
- Browser issues - Try refreshing the page or using a different browser
Q: The interface is slow. What can I do?
A: Try these solutions:
- Use smaller trace files (filter specific kernels)
- Enable browser hardware acceleration
- Use Chrome (the recommended browser)
- Close other tabs to free memory
Q: How do I share analysis results?
A: Options:
- Host trace files and share the URL: `?json_url=YOUR_FILE_URL`
- Take screenshots of findings
- Export browser bookmarks
💡 The web interface runs locally - no data is uploaded to servers.
Q: What do the different IR stages mean?
A: Here's the compilation pipeline:

| Stage | Description | When to Use |
|---|---|---|
| TTIR | Triton IR - high-level language constructs | Understanding kernel logic |
| TTGIR | Triton GPU IR - GPU-specific operations | GPU-specific optimizations |
| LLIR | LLVM IR - low-level operations | Compiler optimizations |
| PTX | NVIDIA assembly | Final code generation |
| AMDGCN | AMD assembly | AMD GPU final code |
Q: What should I look for when analyzing kernel performance?
A: Key areas:
- Memory access patterns and coalescing
- Register usage in metadata
- Vectorization in transformations
- Branch divergence in control flow
💡 See the Web Interface Guide for detailed workflows.
Q: How do I debug compilation errors?
A: Follow this order:
- Check the call stack for the error location
- Start with TTIR for syntax issues
- Check LLIR for type problems
- Verify PTX for hardware compatibility
- Enable debug logging with `TRITONPARSE_DEBUG=1`
Q: How do I compare kernels?
A: Use the File Diff tab or a URL:

```
?view=file_diff&json_url=trace1.gz&json_b_url=trace2.gz
```

Steps:
- Load two trace files (left/right)
- Select kernels to compare
- Choose the IR type and mode (single/all)
💡 See the File Diff Guide for all features and URL parameters.
Q: How do I customize the diff display?
A: The diff display is customizable:
- Ignore whitespace (default: true)
- Word- vs. line-level diff (default: word)
- Context lines (default: 3)
- Word wrap and show-only-changes
Details: Diff Options
Q: How does the File Diff view work?
A: The File Diff view lets you compare kernels across two different trace files:
- Access: Click the "File Diff" tab in the main interface or use `?view=file_diff` in the URL
- Load Left Source: Use the currently loaded trace or load via URL/file
- Load Right Source: Enter a URL in the "Right Source" field or upload a file
- Select Kernels: Choose kernels from each side's dropdown
- Choose IR and Mode: Select which IR to compare and single/all mode
- Adjust Options: Configure the diff display (whitespace, word-level, context, etc.)
Q: What URL parameters does File Diff support?
A: Complete URL parameter reference:

```
?view=file_diff
&json_url=LEFT_TRACE_URL          # Left trace file
&json_b_url=RIGHT_TRACE_URL       # Right trace file
&kernel_hash_a=LEFT_KERNEL_HASH   # Pre-select left kernel
&kernel_hash_b=RIGHT_KERNEL_HASH  # Pre-select right kernel
&mode=single                      # or 'all'
&ir=ttgir                         # IR type
&ignore_ws=true                   # Ignore whitespace
&word_level=true                  # Word-level diff
&context=3                        # Context lines
&wrap=on                          # Word wrap
&only_changed=false               # Show only changes
```
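These parameters can also be assembled programmatically. A small helper sketch (the base URL and trace file names are placeholders):

```python
from urllib.parse import urlencode


def file_diff_url(base, left_url, right_url, **options):
    """Build a File Diff deep link from view/json_url/json_b_url parameters."""
    params = {"view": "file_diff", "json_url": left_url, "json_b_url": right_url}
    params.update(options)  # e.g. ir="ttgir", mode="single", context=3
    return f"{base}?{urlencode(params)}"


url = file_diff_url(
    "https://meta-pytorch.org/tritonparse/",
    "trace1.gz",
    "trace2.gz",
    ir="ttgir",
    ignore_ws="true",
)
print(url)
```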
Q: How do I compare two trace files?
A: Two approaches:
Using URLs:

```
https://meta-pytorch.org/tritonparse/?view=file_diff&json_url=trace1.gz&json_b_url=trace2.gz
```

Using local files:
- Go to the File Diff tab
- Upload the first file on the left side
- Upload the second file on the right side
- Select kernels to compare
Q: What diff options are available?
A: Customizable diff options:
- Ignore Whitespace: Ignore spaces/indentation (default: true)
- Word-level Diff: Highlight changes at the word vs. line level (default: true)
- Context Lines: Unchanged lines shown around changes (default: 3)
- Word Wrap: Wrap long lines or scroll (default: on)
- Only Changes: Hide unchanged sections (default: false)
Q: Can I compare two different kernels?
A: Yes! You can compare any two kernels:
- Select different kernels from the left and right dropdowns
- Useful for comparing alternative implementations
- The side-by-side view helps spot algorithmic differences
Q: How do I generate a reproducer?
A: Use the CLI or the Python API:

```shell
tritonparse reproduce trace.ndjson --line 1 --out-dir repro_output
```

💡 See the Reproducer Guide for templates and advanced options.
Q: What input data does the reproducer use?
A: It depends on the tracing configuration:
- Real data: If `save_tensor_blobs=True` was enabled during tracing
- Synthetic: Generated from saved statistics (mean, std, min, max)
- Random: Fallback if no statistics are available
Details: Tensor Strategies
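The "synthetic" strategy can be pictured roughly like this (an illustrative NumPy sketch, not TritonParse's actual implementation):

```python
import numpy as np


def synthetic_tensor(shape, mean, std, min_val, max_val, seed=0):
    """Sample values from saved statistics and clamp to the observed range."""
    rng = np.random.default_rng(seed)
    data = rng.normal(loc=mean, scale=std, size=shape)
    return np.clip(data, min_val, max_val)


# Recreate an input tensor from statistics captured during tracing
t = synthetic_tensor((128, 64), mean=0.0, std=1.0, min_val=-3.0, max_val=3.0)
```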
Q: How do I use the CLI?
A: TritonParse now uses subcommands:

```shell
# New style (recommended)
tritonparse parse ./logs/ --out ./parsed_output
tritonparse reproduce trace.ndjson --line 1

# Also works with python -m
python -m tritonparse parse ./logs/
```

💡 See the CLI Reference for all parameters.
Q: What are the key environment variables?
A: Key environment variables:

| Variable | Description | Example |
|---|---|---|
| `TRITON_TRACE_FOLDER` | Trace output directory | `"./logs/"` |
| `TRITON_TRACE_LAUNCH` | Enable launch tracing | `"1"` |
| `TRITONPARSE_DEBUG` | Enable debug logging | `"1"` |
| `TRITONPARSE_KERNEL_ALLOWLIST` | Filter kernels | `"kernel1*"` |
| `TORCHINDUCTOR_FX_GRAPH_CACHE` | Disable cache | `"0"` |

Usage: Environment Variables Guide
Q: How do I run TritonParse from the command line?
A: After installing TritonParse, you can use it as a command:

```shell
# Install
pip install tritonparse

# Use as a command
tritonparse parse ./logs/ --out ./parsed_output
tritonparse reproduce trace.ndjson --line 1

# Or with python -m
python -m tritonparse parse ./logs/ --out ./parsed_output
python -m tritonparse reproduce trace.ndjson --line 1
```
Q: Can I configure TritonParse through environment variables?
A: Yes - initialize TritonParse from environment variables:

```python
import os

import tritonparse.structured_logging

# Set environment variables
os.environ["TRITON_TRACE_FOLDER"] = "./logs/"
os.environ["TRITON_TRACE_LAUNCH"] = "1"

# Initialize from the environment
tritonparse.structured_logging.init_with_env()
```

Or from the shell:

```shell
export TRITON_TRACE_FOLDER="./logs/"
export TRITON_TRACE_LAUNCH="1"
python my_script.py  # Uses init_with_env() inside
```
Q: Which environment variables does TritonParse support?
A: TritonParse supports these environment variables:

| Variable | Description | Example |
|---|---|---|
| `TRITON_TRACE_FOLDER` | Trace output directory | `"./logs/"` |
| `TRITON_TRACE_LAUNCH` | Enable launch tracing | `"1"` |
| `TRITONPARSE_DEBUG` | Enable debug logging | `"1"` |
| `TRITON_TRACE_GZIP` | Enable gzip compression | `"1"` |
| `TRITONPARSE_KERNEL_ALLOWLIST` | Filter kernels | `"kernel1*,kernel2*"` |
| `TORCHINDUCTOR_FX_GRAPH_CACHE` | Disable cache (for testing) | `"0"` |
Q: Why are no logs generated?
A: Common causes:
- Logging not initialized before kernel execution
- No Triton kernels actually ran
- Cache issues: Set `TORCHINDUCTOR_FX_GRAPH_CACHE=0`
- The log directory is not writable
Q: Why doesn't the web interface show my data?
A: Solutions:
- Use `.gz` files from `parsed_output/` (not raw `.ndjson`)
- Ensure `unified_parse()` completed successfully
- Try refreshing the browser or using Chrome
Q: Why aren't my kernels captured?
A: Check that:
- The kernel code actually executed
- `TORCHINDUCTOR_FX_GRAPH_CACHE=0` is set
- Logging was initialized before the kernels ran
- Each process has its own log directory (multi-process)
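For the multi-process case, one common pattern is to derive a per-process log directory from the `RANK` variable that launchers like torchrun set (the helper below is illustrative, not a TritonParse API):

```python
import os


def rank_log_dir(base="./logs"):
    """Per-process log directory keyed on the RANK env var (defaults to rank 0)."""
    rank = os.environ.get("RANK", "0")
    return os.path.join(base, f"rank_{rank}")


# Pass the result to tritonparse.structured_logging.init(rank_log_dir(), ...)
```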
Q: How do I trace only specific kernels?
A: Use the kernel allowlist:

```shell
export TRITONPARSE_KERNEL_ALLOWLIST="my_kernel*,important_*"
```

💡 For more issues, see Troubleshooting.
Q: Does TritonParse support multi-rank (distributed) runs?
A: Yes! Parse with rank options:

```shell
# All ranks
tritonparse parse ./logs/ --out ./parsed_output --all-ranks

# Specific rank
tritonparse parse ./logs/ --out ./parsed_output --rank 1
```
Q: How can I contribute?
A:
- Check the Contributing Guide
- Browse GitHub Issues
- Join GitHub Discussions
Q: Where can I get help?
A:
- GitHub Discussions - Community Q&A
- GitHub Issues - Bug reports
- This wiki - Comprehensive documentation
Q: How should I report bugs?
A: Include in your report:
- System info (Python version, OS, GPU)
- TritonParse version
- Minimal reproduction code
- Complete error logs
Template: GitHub Issue Template
Q: Are there examples available?
A: Yes! Check:
- Basic Examples
- Advanced Examples
- The `tests/` directory in the repository
- GitHub Discussions
Related pages:
- Home - Wiki home page
- Installation - Setup instructions
- Usage Guide - Complete usage tutorial
- Web Interface Guide - Interface walkthrough
- Developer Guide - Contributing and development
- GitHub Discussions - Community Q&A
Can't find your question?
- Search the GitHub Issues for similar problems
- Ask in GitHub Discussions
- Check the other wiki pages for more detailed information