# feat: add bloatnet marker support (#2169)

## Conversation
Make sure to mark any existing benchmarks in …
Can we change from `-m state` to `-m stateful`, and likewise the folder name from `state/` to `stateful/`? Since we have `-m state_test` already!
The logic all makes sense to me. Having two conftests was the better option after reviewing this ;)

Will re-review once #2186 is merged so we can check the logic.
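For context, a minimal sketch of what per-folder marker logic in a `conftest.py` can look like (hypothetical, assuming pytest >= 7; not the exact logic from this PR):

```python
# conftest.py: hypothetical sketch, not the exact logic from this PR.
# Auto-applies a `stateful` marker to every test collected from the
# stateful/ subfolder, so that `-m stateful` selects exactly that subset.
from pathlib import Path

import pytest

STATEFUL_DIR = Path(__file__).parent / "stateful"


def pytest_collection_modifyitems(config, items):
    for item in items:
        # `item.path` is a pathlib.Path in pytest >= 7.
        if STATEFUL_DIR in item.path.parents:
            item.add_marker(pytest.mark.stateful)
```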
We just need to move the file under …
LGTM!
Thanks a lot for tackling this @LouisTsai-Csie! The PR description is really helpful and documents the expectations very well 🙏

The benchmark paths aren't correct in the following, but it should explain my suggestion.

With the additional requirement to "filter" `stateful` tests, I'm wondering if we're adding too much complexity to our pytest setup via markers in order to run the correct subset of tests. The logic in `tests/benchmark/conftest.py` alone is quite complex, and we then overlay the logic from `tests/benchmark/stateful/conftest.py` on top of it.

I won't block this, but I think the requirement is actually quite simple, and perhaps we can quickly consider an alternative approach before merging. My proposal would be to remove all of this conftest logic and instead modify pytest's `testpaths` at the CLI logic stage, i.e., add a `mode` flag, `None` by default (= just fill regular tests), that can be set to `"benchmark"` or `"stateful"`:
```console
uv run fill --mode=benchmark
```
Then, we add (simpler) logic in `src/cli/pytest_commands` to modify fill's and execute's ini config (and others...):

- https://github.com/LouisTsai-Csie/execution-spec-tests/blob/7e4d3e214252fa878a7ac5c381fcab0d48d1c40a/src/cli/pytest_commands/pytest_ini_files/pytest-fill.ini#L5
- https://github.com/LouisTsai-Csie/execution-spec-tests/blob/7e4d3e214252fa878a7ac5c381fcab0d48d1c40a/src/cli/pytest_commands/pytest_ini_files/pytest-execute.ini#L5

to overwrite `testpaths` with the relevant path (this needs a well-organized structure underneath `./tests/benchmarks/`). For this, we don't modify or add new ini files, but rather override the value using this functionality:
```console
-o testpaths=tests/benchmark/stateful
```

From `pytest --help`:

```text
-o OVERRIDE_INI, --override-ini=OVERRIDE_INI
                      Override ini option with "option=value" style, e.g. `-o
                      xfail_strict=True -o cache_dir=cache`.
```
For normal filling mode (no benchmarking), we can append `--ignore` to exclude `./tests/benchmarks` from regular filling:

```console
$ uv run pytest --help | grep ignore
  --ignore=path         Ignore path during collection (multi-allowed)
```
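As a rough illustration of the proposal, a sketch of the CLI-stage mapping (the `build_pytest_args` helper and the mode names are hypothetical, not the actual fill/execute implementation):

```python
# Hypothetical sketch: map a --mode flag onto pytest CLI arguments
# instead of layering marker logic across conftest.py files.
MODE_TESTPATHS = {
    "benchmark": "tests/benchmark",
    "stateful": "tests/benchmark/stateful",
}


def build_pytest_args(mode: str | None) -> list[str]:
    """Return the extra pytest args implied by the given fill mode."""
    if mode is None:
        # Regular filling: keep the default testpaths, skip benchmarks.
        return ["--ignore=tests/benchmark"]
    # Benchmark/stateful filling: repoint testpaths at the relevant folder.
    return ["-o", f"testpaths={MODE_TESTPATHS[mode]}"]


print(build_pytest_args("stateful"))  # ['-o', 'testpaths=tests/benchmark/stateful']
```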
Hey @LouisTsai-Csie, tried to add a clearer formulation here. As we discussed, happy to merge this as-is if there are other priorities, and tackle the simpler solution later.
I think this is great. I agree there's a better approach, but I would like this to get merged so it's not lost in the Weld transition.

I'm rebasing and moving the files to the new folder before merging 👍
I'm not super sure this will seamlessly work when we transition to the EELS repo. I think it would, but I would defer implementing this approach until we move the code.
Thanks @LouisTsai-Csie, one of us can update the implementation to the approach in #2208 after the Weld!
> to overwrite `testpaths` with the relevant path (needs a well-organized structure underneath `./tests/benchmarks/`). For this, we don't modify or add new ini files, but rather override the value using this functionality.

> I'm not super sure if this will seamlessly work when we transition to the EELS repo, I think it would work but I would defer implementing this approach until we move the code.

It should be as simple as updating the specified `testpaths`, but yes, it's one more thing that will break / to remember. Let's get this merged!
## 🗒️ Description

### Background
Bloatnet benchmark tests were introduced in PRs #2040 and #2090. Because of the nature of bloatnet, using `execute` together with a stub address is a better fit than the `fill` and `consume` approach.

With the `fill` command, the pre-deployment state of contracts is encoded in the genesis file. However, this becomes problematic when the state configuration in bloatnet is too large. By leveraging `execute`, we can send transactions directly to the network to interact with contracts.

Instead of creating a new folder for bloatnet, the test cases are placed under the existing `benchmark/` folder, with a specific marker to distinguish them.

Following the discussion in issue #1926, the `test_bloatnet.py` file is placed under `benchmark/state/bloatnet/`. All tests that require the combination of `execute` and stub addresses are grouped under the `stateful/` folder. PR #2101 would also benefit from this structure.

Based on this design, I did not add a separate `bloatnet` marker. Instead, I extended the existing marker into a `stateful` marker, which is shared across all stateful tests.
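For reference, markers like these need to be registered with pytest; a sketch of one way to do it programmatically (the descriptions are illustrative, and the repo may declare them in its pytest ini files instead):

```python
# Sketch only: register the markers so `-m stateful` / `-m benchmark`
# filtering works without PytestUnknownMarkWarning. The repo may declare
# these in its pytest ini files instead; descriptions are illustrative.
def pytest_configure(config):
    config.addinivalue_line("markers", "benchmark: worst-case benchmark tests")
    config.addinivalue_line(
        "markers",
        "stateful: tests run via `execute` against pre-deployed (stub) state",
    )
```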
### Verification

- `-m stateful` runs only the bloatnet-related tests (excluding benchmark tests).
- `-m benchmark` runs only the benchmark tests (excluding bloatnet tests).

To verify:

1. Mark the `test_bloatnet.py` file, then run:
   - `uv run fill -v tests/benchmark/stateful/bloatnet/test_bloatnet.py -m benchmark` → should not run
   - `uv run fill -v tests/benchmark/stateful/bloatnet/test_bloatnet.py -m stateful` → should run
2. Add the `stateful` marker to another file under `benchmark/` (e.g. `test_block_full_data`), then run:
   - `uv run fill -v tests/benchmark/test_worst_blocks.py::test_block_full_data -m benchmark` → should not run
   - `uv run fill -v tests/benchmark/test_worst_blocks.py::test_block_full_data -m stateful` → should run

Documentation: the docs mention the `-m stateful` flag; for tests under `stateful/`, the command includes the `-m stateful` flag, and for benchmark tests, `-m benchmark`.
### Notes

I am open to discussion; please let me know if you want to rename the marker, or …
## 🔗 Related Issues or PRs

PRs #2040, #2090, and #2101.
## ✅ Checklist

- Ran `tox` checks to avoid unnecessary CI fails (see also Code Standards and Enabling Pre-commit Checks): `uvx --with=tox-uv tox -e lint,typecheck,spellcheck,markdownlint`
- PR title starts with `type(scope):`.
- Ran `mkdocs serve` locally and verified the auto-generated docs for new tests in the Test Case Reference are correctly formatted.
- Ported tests include the `@ported_from` marker.