[launcher] Fake launcher to produce random results #38
base: master
Conversation
Signed-off-by: James McCorrie <[email protected]>
After running this through, I found a regression introduced by #32: a variable was inadvertently renamed to the same name as another variable defined later in the function. This was missed in review, which further highlights the need for more comprehensive tests.
This is useful when generating fake run data, for example while developing report templates. It is also useful for investigating the scheduler's behaviour without having to spawn subprocesses. With further development this could be useful for testing the scheduler. Signed-off-by: James McCorrie <[email protected]>
Force-pushed 17ac52c to 5db4458
A couple of comments
"group", | ||
] | ||
|
||
deploy.cov_results_dict = {k: f"{random() * 100:.2f} %" for k in keys} |
This is fine provided that you never need to recreate a run; otherwise the RNG will need seeding.
Good point. In this case I don't think we need to reproduce the results. The randomness is just there to get some fake data for the report templates while working on those, specifically to test the colourisation with different percentages.
In the future, if there are other use cases for this, we could add more configuration options. One of those might be seeding the randomness, perhaps also setting the type of randomness; see the sketch below.
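A seeded variant could look something like this. This is a minimal sketch, not part of this PR; the `FakeResultsGenerator` name and the `seed` option are hypothetical.

```python
from random import Random


class FakeResultsGenerator:
    """Generate fake coverage percentages, optionally reproducibly.

    The class name and the `seed` option are illustrative only, not
    part of this PR. With seed=None the behaviour stays
    non-reproducible, matching the current code.
    """

    def __init__(self, seed=None):
        # A private Random instance avoids perturbing the global RNG.
        self._rng = Random(seed)

    def cov_results(self, keys):
        return {k: f"{self._rng.random() * 100:.2f} %" for k in keys}
```

Two generators created with the same seed would produce identical result dictionaries, which is what recreating a run would require.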
"""Launch jobs and return fake results.""" | ||
|
||
# Poll job's completion status every this many seconds | ||
poll_freq = 0 |
I presume that this is used elsewhere and that 0 is correctly interpreted as "Don't poll"
It's used by the scheduler as the sleep duration between polls; it rate-limits the "are you done yet" polling. This is useful where a launcher is submitting a job to a grid engine, for example, where polling an external API with no wait period would lock the system up.
This fake launcher only returns a job completion status. Setting this to 0 means the scheduler will not wait before asking the launcher about status again; it will be the next set of jobs it asks about, and there is no value in waiting between jobs. The effect is roughly the sketch below.
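For reference, the rate-limiting boils down to something like this loop. This is a simplified sketch, not DVSim's actual scheduler code; `jobs` and `job.poll()` are illustrative names, with `poll()` assumed to return True once the job has finished.

```python
import time


def poll_jobs(jobs, poll_freq):
    """Poll until every job reports completion.

    Illustrative sketch only. With poll_freq == 0 the sleep is
    skipped entirely, which is safe for the fake launcher because it
    answers status queries instantly.
    """
    while jobs:
        # Keep only the jobs that have not yet finished.
        jobs = [job for job in jobs if not job.poll()]
        if jobs and poll_freq:
            # Rate-limit the "are you done yet" queries, e.g. to
            # avoid hammering a grid engine's API.
            time.sleep(poll_freq)
```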
LGTM, thanks!
Add a fake launcher to generate random results instead of launching the actual deployment job.
This fake launcher goes one step further than the dry-run mode and allows us to run a full flow through to the end, exercising everything apart from executing the jobs themselves. This makes it easier to find issues with DVSim with a quick local run, without having to run a full regression.
Another use for this is generating fake run data, for example for developing report templates, as well as investigating the scheduler's behaviour without having to spawn long-running subprocesses. It may also prove useful for testing the scheduler's behaviours. A rough sketch of the shape of such a launcher follows.
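The sketch below is illustrative only; the class and method names are assumptions and do not mirror DVSim's actual launcher interface.

```python
from random import random


class FakeLauncher:
    """Pretend to launch a deployment job and report fake results.

    Illustrative sketch only; the real launcher interface in DVSim
    may use different names and signatures.
    """

    # No wait needed between status polls: results are instant.
    poll_freq = 0

    def launch(self, deploy):
        # Nothing is spawned; fabricate coverage results on the spot.
        # The key list here is illustrative.
        keys = ["score", "line", "cond", "toggle", "assert", "group"]
        deploy.cov_results_dict = {
            k: f"{random() * 100:.2f} %" for k in keys
        }

    def poll(self):
        # Always report completion so the full flow runs through to
        # report generation.
        return True
```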