46 changes: 46 additions & 0 deletions .github/pull_request_template.md
@@ -35,3 +35,49 @@ Remove this section if this change applies to all flows or to the documentation
If there are no setup requirements, you can remove this section.

Thank you for your contribution. ❤️ -->

---

### Contributor Checklist ✅

- [ ] PR title and commits follow [conventional commits](https://www.conventionalcommits.org/en/v1.0.0/).
- [ ] Add `closes #ISSUE_ID` or `fixes #ISSUE_ID` to the description if the PR relates to an open issue.
- [ ] Documentation updated (plugin docs from `@Schema` for properties and outputs, `@Plugin` with examples, `README.md` file with basic usage and specifics).
- [ ] Setup instructions included if needed (API keys, accounts, etc.).
- [ ] Prefix all rendered properties with `r`, not `rendered` (e.g. `rHost`).
- [ ] Use `runContext.logger()` to log important information where needed, at the appropriate level (DEBUG, INFO, WARN, or ERROR).

⚙️ **Properties**
- [ ] Properties are declared with the `Property<T>` carrier type; do **not** use `@PluginProperty`.
- [ ] Mandatory properties must be annotated with `@NotNull` and checked during rendering.
- [ ] JSON input can be modeled with a simple `Property<Map<String, Object>>`.
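The three property rules above can be sketched with a minimal, self-contained stand-in (illustrative only — the real `Property<T>` carrier lives in Kestra core and is rendered through the `RunContext`, with expression support):

```java
import java.util.Map;
import java.util.Objects;

// Minimal stand-in for Kestra's Property<T> carrier (illustrative; the real
// type is rendered with runContext.render(...) and supports expressions).
class Property<T> {
    private final T value;

    private Property(T value) { this.value = value; }

    static <T> Property<T> of(T value) { return new Property<>(value); }

    // Rendering is the point where a mandatory (@NotNull) property is checked.
    T render(String name) {
        return Objects.requireNonNull(value, "Property '" + name + "' is required");
    }
}

class MyTask {
    // Mandatory property; in a real plugin this field would carry @NotNull.
    Property<String> host = Property.of("db.example.com");
    // A JSON object modeled as a simple Property<Map<String, Object>>.
    Property<Map<String, Object>> payload = Property.of(Map.of("key", "value"));
}
```

In a real task the rendered value is typically assigned to an `r`-prefixed local (e.g. `String rHost = runContext.render(this.host).as(String.class)...`), matching the naming rule above.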

🌐 **HTTP**
- [ ] Must use Kestra’s internal HTTP client from `io.kestra.core.http.client`.

📦 **JSON**
- [ ] If you are deserializing a response from an external API, you may need to add `@JsonIgnoreProperties(ignoreUnknown = true)` at the mapped class level so the plugin does not crash if the provider suddenly adds a new field.
- [ ] Must use the Jackson mappers provided by core (`io.kestra.core.serializers`).
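Applied to a mapped class, the advice above looks like this (fragment — assumes Jackson annotations on the classpath, as provided by core; `ProviderResponse` and its fields are hypothetical):

```java
// Tolerate fields the provider may add later instead of failing deserialization.
@JsonIgnoreProperties(ignoreUnknown = true)
public class ProviderResponse {
    public String id;
    public String status;
}
```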

✨ **New plugins / subplugins**
- [ ] Make sure your new plugin is configured as described [here](https://kestra.io/docs/plugin-developer-guide/gradle#mandatory-configuration).
- [ ] Add a `package-info.java` under each sub-package, following [this format](https://github.com/kestra-io/plugin-odoo/blob/main/src/main/java/io/kestra/plugin/odoo/package-info.java) and choosing the right category.
- [ ] Icons added in `src/main/resources/icons` in SVG format, full size rather than thumbnail:
- `plugin-icon.svg`
- One icon per package, e.g. `io.kestra.plugin.aws.svg`
- For subpackages, e.g. `io.kestra.plugin.aws.s3`, add `io.kestra.plugin.aws.s3.svg`
See example [here](https://github.com/kestra-io/plugin-elasticsearch/blob/master/src/main/java/io/kestra/plugin/elasticsearch/Search.java#L76).
- [ ] Use `"{{ secret('YOUR_SECRET') }}"` in examples for sensitive information such as an API key.
- [ ] If you are fetching data (one row, many, or too many), you must add a `Property<FetchType> fetchType` so users can choose `FETCH_ONE`, `FETCH`, or even `STORE` to put large amounts of data in the internal storage.
- [ ] Align the closing `"""` of example blocks with the flow id.
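The `fetchType` rule above can be sketched as follows (illustrative stand-ins only — the real enum is Kestra's `FetchType`, and `STORE` actually writes an `.ion` file to internal storage rather than keeping rows in memory):

```java
import java.util.List;
import java.util.Map;

// Illustrative stand-in for Kestra's FetchType enum.
enum FetchType { FETCH_ONE, FETCH, STORE, NONE }

class FetchSizing {
    // Hypothetical rows a task fetched from its provider; the reported
    // output size depends on the selected fetch mode.
    static long size(FetchType fetchType, List<Map<String, Object>> rows) {
        return switch (fetchType) {
            case FETCH_ONE -> rows.isEmpty() ? 0L : 1L; // a single row, or nothing
            case FETCH -> rows.size();                  // all rows inline in the output
            case STORE -> rows.size();                  // rows would go to internal storage
            case NONE -> 0L;
        };
    }
}
```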

🧪 **Tests**
- [ ] Unit Tests added or updated to cover the change (using the `RunContext` to actually run tasks).
- [ ] Add sanity checks where possible with a YAML flow inside `src/test/resources/flows`.
- [ ] Avoid disabling tests for CI. Instead, configure a local environment whenever possible with `.github/setup-unit.sh` (which can be executed locally and in CI), along with a new `docker-compose-ci.yml` file (do **not** edit the existing `docker-compose.yml`).
- [ ] Provide screenshots from your local QA / tests in the PR description. The goal is to run the plugin JAR directly in the Kestra UI to ensure it integrates well.

📤 **Outputs**
- [ ] Do not return as outputs the same information already present in your properties.
- [ ] If you have no output, use `VoidOutput`.
- [ ] Do not output the same information twice (e.g. a status code and an error code that say the same thing).
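The null-safe `FETCH_ONE` sizing applied in the diffs below pairs with these output rules; a minimal sketch (hypothetical `Output` record) of the behavior the new test asserts — an empty result yields a null row and size 0:

```java
import java.util.Map;

// Sketch of the null-safe FETCH_ONE contract: an empty result yields a null
// row and a size of 0, never a phantom size of 1.
class FetchOneOutput {
    record Output(Map<String, Object> row, long size) {}

    static Output of(Map<String, Object> result) {
        long size = (result == null) ? 0L : 1L;
        return new Output(result, size);
    }
}
```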
2 changes: 1 addition & 1 deletion plugin-jdbc-pinot/build.gradle
@@ -13,7 +13,7 @@ jar {
}

dependencies {
-    implementation("org.apache.pinot:pinot-jdbc-client:1.3.0") {
+    implementation("org.apache.pinot:pinot-jdbc-client:1.4.0") {
exclude group: 'org.slf4j'
exclude group: 'com.fasterxml.jackson.core'
}
@@ -98,6 +98,26 @@ void update() throws Exception {
assertThat(runOutput.getRow().get("t_varchar"), is("D"));
}

@Test
void shouldReturnSizeZeroWhenEmptyResult() throws Exception {
RunContext runContext = runContextFactory.of(ImmutableMap.of());

Query task = Query.builder()
.url(Property.ofValue(getUrl()))
.username(Property.ofValue(getUsername()))
.password(Property.ofValue(getPassword()))
.fetchType(Property.ofValue(FETCH_ONE))
.timeZoneId(Property.ofValue("Europe/Paris"))
.sql(Property.ofValue("SELECT * FROM sqlserver_types WHERE 1=0"))
.build();

AbstractJdbcQuery.Output runOutput = task.run(runContext);

assertThat(runOutput.getRow(), is(nullValue()));
assertThat(runOutput.getSize(), is(0L));
}


@Override
protected String getUrl() {
return "jdbc:sqlserver://localhost:41433;trustServerCertificate=true";
@@ -128,9 +128,10 @@ private long extractResultsFromResultSet(final Connection connection,
long size = 0L;
switch (this.renderFetchType(runContext)) {
     case FETCH_ONE -> {
-        size = 1L;
+        var result = fetchResult(rs, cellConverter, connection);
+        size = (result == null ? 0L : 1L);
         output
-            .row(fetchResult(rs, cellConverter, connection))
+            .row(result)
             .size(size);
     }
case STORE -> {
@@ -72,28 +72,26 @@ public AbstractJdbcBaseQuery.Output run(RunContext runContext) throws Exception
var result = fetchResult(rs, cellConverter, conn);
size = result == null ? 0L : 1L;
     output
-        .row(result)
-        .size(size);
+        .row(result);
}
case STORE -> {
File tempFile = runContext.workingDir().createTempFile(".ion").toFile();
try (BufferedWriter fileWriter = new BufferedWriter(new FileWriter(tempFile), FileSerde.BUFFER_SIZE)) {
size = fetchToFile(stmt, rs, fileWriter, cellConverter, conn);
}
     output
-        .uri(runContext.storage().putFile(tempFile))
-        .size(size);
+        .uri(runContext.storage().putFile(tempFile));
}
case FETCH -> {
List<Map<String, Object>> maps = new ArrayList<>();
size = fetchResults(stmt, rs, maps, cellConverter, conn);
     output
-        .rows(maps)
-        .size(size);
+        .rows(maps);
}
}
}
}
+output.size(size);
runContext.metric(Counter.of("fetch.size", size, this.tags(runContext)));
return output.build();
} finally {