Change ChunkAppend leader to use worker subplan

When running a parallel ChunkAppend query, the code would use a
subplan selection routine for the leader that would return an empty
slot unless it determined there were no workers started. While this
had the desired effect of forcing the workers to do the work of
evaluating the subnodes, returning an empty slot is not always safe.
In particular, Append nodes will interpret an empty slot as a sign
that a given subplan has completed, so a plan resulting in a
parallel MergeAppend under an Append node (very possible under a
UNION of hypertables) might fail to execute some or all of the
subplans.
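As a sketch of the failure shape described above (table names are hypothetical), a UNION ALL of two hypertables can plan a parallel ChunkAppend for each branch beneath a single top-level Append node — the case where an empty slot from the leader could be misread as subplan exhaustion:

```sql
-- Hypothetical schema: metrics_a and metrics_b are hypertables.
-- Each branch of the UNION ALL may be planned as a parallel
-- ChunkAppend, both sitting under one Append node.
SELECT time, device_id, value FROM metrics_a
UNION ALL
SELECT time, device_id, value FROM metrics_b;
```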

This change modifies ChunkAppend so that the leader uses the
same subplan selection function as the workers. This may make the
leader less responsive, as it tries to fetch a batch of results on
its own when no worker has produced any yet. However, if this isn't
the desired behavior, PostgreSQL already exposes a GUC option,
parallel_leader_participation, which will prevent the leader from
executing any subplans.
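For reference, opting out of leader participation via the GUC mentioned above can be done per session (it can also be set in postgresql.conf):

```sql
-- Prevent the parallel leader from executing subplans itself,
-- leaving all subplan evaluation to the launched workers.
SET parallel_leader_participation = off;
```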
Author: Brian Rowe 2020-05-11 13:02:41 -07:00
parent a95308e917
commit d4c655fb2f
2 changed files with 9 additions and 16 deletions

@@ -10,7 +10,8 @@ accidentally triggering the load of a previous DB version.**
 **Bugfixes**
 * #1850 Fix scheduler failure due to bad next_start_time for jobs
-* #1861 Fix qual pushodwn for compressed hypertables where quals have casts
+* #1861 Fix qual pushdown for compressed hypertables where quals have casts
+* #1864 Fix issue with subplan selection in parallel ChunkAppend
 * #1868 Add support for WHERE, HAVING clauses with real time aggregates
 * #1875 Fix hypertable detection in subqueries
@@ -18,6 +19,7 @@ accidentally triggering the load of a previous DB version.**
 * @frostwind for reporting issue with casts in where clauses on compressed hypertables
 * @fvannee for reporting an issue with hypertable detection in inlined SQL functions
 * @hgiasac for reporting missing where clause with real time aggregates
+* @airton-neto for reporting an issue with queries over UNIONs of hypertables
 ## 1.7.0 (2020-04-16)

@@ -472,20 +472,6 @@ choose_next_subplan_non_parallel(ChunkAppendState *state)
 	state->current = get_next_subplan(state, state->current);
 }
 
-static void
-choose_next_subplan_for_leader(ChunkAppendState *state)
-{
-	/*
-	 * If no workers got launched for this parallel plan
-	 * we have to let leader participate in subplan
-	 * execution.
-	 */
-	if (state->pcxt->nworkers_launched == 0)
-		choose_next_subplan_for_worker(state);
-	else
-		state->current = NO_MATCHING_SUBPLANS;
-}
-
 static void
 choose_next_subplan_for_worker(ChunkAppendState *state)
 {
@@ -648,7 +634,12 @@ chunk_append_initialize_dsm(CustomScanState *node, ParallelContext *pcxt, void *
 	state->lock = chunk_append_get_lock_pointer();
 	pstate->next_plan = INVALID_SUBPLAN_INDEX;
-	state->choose_next_subplan = choose_next_subplan_for_leader;
+	/*
+	 * Leader should use the same subplan selection as normal worker threads. If the user wishes to
+	 * disallow running plans on the leader they should do so via the parallel_leader_participation
+	 * GUC.
+	 */
+	state->choose_next_subplan = choose_next_subplan_for_worker;
 	state->current = INVALID_SUBPLAN_INDEX;
 	state->pcxt = pcxt;
 	state->pstate = pstate;