The parallel version of the ChunkAppend node uses shared memory to coordinate subplan selection for the parallel workers. If each worker performed startup exclusion individually, the workers could choose different subplans (e.g., due to a function that is declared constant but actually returns different results on each call). The workers would then disagree about the plans, which leads to hard-to-debug problems and out-of-bounds reads when pstate->next_plan is used for subplan selection.

With this patch, startup exclusion is performed only in the parallel leader. The leader stores the result in shared memory, and the parallel workers read it from there instead of performing startup exclusion themselves.
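The following is a minimal sketch of this leader-only scheme, not the actual TimescaleDB code. Apart from pstate->next_plan, all names (ParallelChunkAppendState, choose_valid_subplans, perform_startup_exclusion, filtering_done) are hypothetical and chosen for illustration only; IsParallelWorker() and Assert() are the standard PostgreSQL facilities.

    /*
     * Sketch only: leader runs startup exclusion once and publishes the
     * result; workers copy it instead of re-running the exclusion.
     */
    #include "postgres.h"
    #include "access/parallel.h"    /* IsParallelWorker() */

    typedef struct ParallelChunkAppendState
    {
        bool filtering_done;  /* leader has published its exclusion result */
        int  next_plan;       /* next subplan index to hand out */
        bool valid_subplan[FLEXIBLE_ARRAY_MEMBER]; /* leader's decisions */
    } ParallelChunkAppendState;

    static void
    choose_valid_subplans(int num_subplans, bool *valid_subplan,
                          ParallelChunkAppendState *pstate)
    {
        if (!IsParallelWorker())
        {
            /*
             * Leader: perform startup exclusion exactly once and store
             * the resulting per-subplan validity flags in shared memory.
             * perform_startup_exclusion() is a hypothetical helper.
             */
            perform_startup_exclusion(num_subplans, valid_subplan);
            memcpy(pstate->valid_subplan, valid_subplan,
                   sizeof(bool) * num_subplans);
            pstate->filtering_done = true;
        }
        else
        {
            /*
             * Worker: adopt the leader's result. Re-running startup
             * exclusion here could exclude a different set of subplans
             * (e.g., via a mislabeled "constant" function), after which
             * pstate->next_plan would index into disagreeing plan lists
             * across processes.
             */
            Assert(pstate->filtering_done);
            memcpy(valid_subplan, pstate->valid_subplan,
                   sizeof(bool) * num_subplans);
        }
    }

This ordering is safe because, in PostgreSQL's parallel executor, the leader initializes the DSM state before the workers attach, so a worker can assume the leader's exclusion result is already present when it starts.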