Conversation

machshev (Collaborator) commented Oct 2, 2025

Add a fake launcher that generates random results instead of launching the actual deployment job.

This fake launcher goes one step further than the dry-run mode and allows us to run a full flow through to the end, exercising everything apart from executing the jobs themselves. This makes it easier to find issues in DVSim with a quick local run, without having to run a full regression.

Another use is generating fake run data, for example when developing report templates, or when investigating the scheduler behaviour without having to spawn long-running sub-processes. It may also prove useful for testing the scheduler.
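
To illustrate the idea, here is a minimal sketch of what such a launcher could look like (the class and method names below are hypothetical and are not the actual DVSim Launcher interface):

from random import random


class FakeLauncher:
    """Hypothetical sketch: pretend to launch a deploy job and report a random outcome."""

    def __init__(self, deploy):
        self.deploy = deploy
        self.status = None

    def launch(self):
        # Nothing is executed; the "job" completes immediately with a random
        # pass/fail status so the rest of the flow has something to exercise.
        self.status = "P" if random() < 0.8 else "F"

    def poll(self):
        # Always a terminal status, so the scheduler can run through to the end.
        return self.status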

Signed-off-by: James McCorrie <[email protected]>
machshev (Collaborator, Author) commented Oct 2, 2025

After running this through, I found a regression introduced by #32, which inadvertently renamed a variable to the same name as another variable defined later in the function. This was missed in review, which further highlights the need for more comprehensive tests.

This is useful when generating fake run data, for example when
developing report templates. It is also useful for investigating
the scheduler behaviour without having to spawn sub-processes.

With further development this could be useful for testing the scheduler.

Signed-off-by: James McCorrie <[email protected]>
mkj121 left a comment:

A couple of comments

"group",
]

deploy.cov_results_dict = {k: f"{random() * 100:.2f} %" for k in keys}

This is fine provided that you never need to recreate a run; otherwise the RNG will need seeding.

machshev (Collaborator, Author) replied:

Good point. In this case I don't think we need to reproduce the results; the randomness is just there to get some fake data for the report templates while working on those, specifically to test the colourisation with different percentages.

In the future, if there are other use cases for this, we could add more configuration options. One of those might be seeding the randomness, and perhaps also setting the type of randomness.
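
For example, a hypothetical --fake-seed option could construct a dedicated RNG so that fake runs become reproducible (the option name and most of the keys list below are illustrative only):

from random import Random

# Hypothetical: seed taken from a --fake-seed option; a fixed seed makes the
# generated coverage numbers reproducible across runs.
rng = Random(1234)

keys = ["score", "line", "cond", "toggle", "branch", "assert", "group"]
cov_results_dict = {k: f"{rng.random() * 100:.2f} %" for k in keys}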

"""Launch jobs and return fake results."""

# Poll job's completion status every this many seconds
poll_freq = 0

I presume that this is used elsewhere and that 0 is correctly interpreted as "don't poll".

machshev (Collaborator, Author) replied:

It's used by the scheduler as the sleep duration between polls, to rate-limit the "are you done yet?" polling. This matters where a launcher submits a job to a grid engine, for example, where polling an external API with no wait period would lock the system up.

This fake launcher only returns a job completion status. Setting this to 0 means the scheduler will not wait before asking the launcher about status again; the next poll will be for the next set of jobs, and there is no value in waiting between jobs.
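
Roughly, the polling side looks something like this (a simplified sketch of the scheduler's loop, not the actual code):

import time


def wait_for_completion(launchers, poll_freq):
    """Simplified sketch: poll launchers until all report a terminal status."""
    pending = list(launchers)
    while pending:
        # Keep only the jobs that have not finished yet (None = still running).
        pending = [l for l in pending if l.poll() is None]
        if pending and poll_freq:
            # A real launcher submitting to a grid engine sets a non-zero
            # poll_freq to avoid hammering an external API; the fake launcher
            # finishes immediately, so poll_freq = 0 skips the sleep.
            time.sleep(poll_freq)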

KinzaQamar (Contributor) left a comment:

LGTM, thanks!
