digraph G {
0 [labelType="html" label="<br><b>AdaptiveSparkPlan</b><br><br>"];
1 [labelType="html" label="<b>Execute InsertIntoHadoopFsRelationCommand</b><br><br>task commit time: 423 ms<br>number of written files: 1<br>job commit time: 557 ms<br>number of output rows: 677<br>number of dynamic part: 0<br>written output: 8.1 KiB"];
2 [labelType="html" label="<br><b>WriteFiles</b><br><br>"];
subgraph cluster3 {
isCluster="true";
label="WholeStageCodegen (2)\n \nduration: 91 ms";
4 [labelType="html" label="<b>Sort</b><br><br>sort time: 3 ms<br>peak memory: 64.1 MiB<br>spill size: 0.0 B"];
}
5 [labelType="html" label="<b>AQEShuffleRead</b><br><br>number of partitions: 1<br>partition data size: 47.4 KiB<br>number of coalesced partitions: 1"];
6 [labelType="html" label="<b>Exchange</b><br><br>shuffle records written: 677<br>local merged chunks fetched: 0<br>shuffle write time total (min, med, max (stageId: taskId))<br>9 ms (0 ms, 0 ms, 1 ms (stage 35.0: task 271))<br>remote merged bytes read: 0.0 B<br>local merged blocks fetched: 0<br>corrupt merged block chunks: 0<br>remote merged reqs duration: 0 ms<br>remote merged blocks fetched: 0<br>records read: 677<br>local bytes read: 18.6 KiB<br>fetch wait time: 0 ms<br>remote bytes read: 27.2 KiB<br>merged fetch fallback count: 0<br>local blocks read: 40<br>remote merged chunks fetched: 0<br>remote blocks read: 60<br>data size total (min, med, max (stageId: taskId))<br>100.5 KiB (8.6 KiB, 9.9 KiB, 11.6 KiB (stage 35.0: task 267))<br>local merged bytes read: 0.0 B<br>number of partitions: 10<br>remote reqs duration: 8 ms<br>remote bytes read to disk: 0.0 B<br>shuffle bytes written total (min, med, max (stageId: taskId))<br>45.8 KiB (4.1 KiB, 4.7 KiB, 4.9 KiB (stage 35.0: task 267))"];
subgraph cluster7 {
isCluster="true";
label="WholeStageCodegen (1)\n \nduration: total (min, med, max (stageId: taskId))\n2.3 s (101 ms, 112 ms, 163 ms (stage 34.0: task 258))";
8 [labelType="html" label="<br><b>Project</b><br><br>"];
9 [labelType="html" label="<b>Filter</b><br><br>number of output rows: 1,354"];
10 [labelType="html" label="<b>ColumnarToRow</b><br><br>number of output rows: 1,354<br>number of input batches: 20"];
}
11 [labelType="html" label="<b>Scan parquet </b><br><br>number of files read: 10<br>scan time total (min, med, max (stageId: taskId))<br>2.2 s (96 ms, 109 ms, 158 ms (stage 34.0: task 258))<br>metadata time: 0 ms<br>size of files read: 36.0 KiB<br>number of output rows: 1,354"];
1->0;
2->1;
4->2;
5->4;
6->5;
8->6;
9->8;
10->9;
11->10;
}
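
The block above is the raw Graphviz DOT source that the Spark UI's SQL tab embeds in its plan visualization: each numbered node carries its SQL metrics as an HTML label, and the two cluster subgraphs mark the whole-stage-codegen boundaries. A minimal sketch for rendering it offline, assuming the DOT text has been copied into plan.dot and Graphviz's dot binary is on the PATH (both are assumptions, not part of the dump):

import scala.sys.process._

object RenderPlan extends App {
  // Hypothetical helper: shell out to Graphviz to render the copied plan graph.
  // Assumes plan.dot holds the digraph above and `dot` is installed locally.
  val exitCode = Seq("dot", "-Tsvg", "plan.dot", "-o", "plan.svg").!
  assert(exitCode == 0, s"dot exited with code $exitCode")
}

The same execution's plan string, as shown in the UI's details pane: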
AdaptiveSparkPlan isFinalPlan=true
+- Execute InsertIntoHadoopFsRelationCommand hdlfs://7646b954-15f6-4bdc-91a5-2644c1a43a19.files.hdl.prod-eu20.hanacloud.ondemand.com:443/crp-demand-service/out/demand/10000003977/0_1_10000003977, false, Parquet, [fs.hdlfs.ssl.certfile=*********(redacted), fs.hdlfs.filecontainer=7646b954-15f6-4bdc-91a5-2644c1a43a19, fs.hdlfs.ssl.keyfile=*********(redacted), path=hdlfs://7646b954-15f6-4bdc-91a5-2644c1a43a19.files.hdl.prod-eu20.hanacloud.ondemand.com:443/crp-demand-service/out/demand/10000003977/0_1_10000003977], Overwrite, [product, plant, demandChannel, demandStream, demandTimeBuckets, demandPointInTimeStart, demandPointInTimeEnd, demandPointInTime]
   +- WriteFiles
      +- *(2) Sort [product#142 ASC NULLS FIRST, plant#143 ASC NULLS FIRST], true, 0
         +- AQEShuffleRead coalesced
            +- Exchange rangepartitioning(product#142 ASC NULLS FIRST, plant#143 ASC NULLS FIRST, 10), ENSURE_REQUIREMENTS, [plan_id=1204]
               +- *(1) Project [product#142, plant#143, null AS demandChannel#1682, null AS demandStream#1686, [] AS demandTimeBuckets#1701, null AS demandPointInTimeStart#1707, null AS demandPointInTimeEnd#1714, [] AS demandPointInTime#1731]
                  +- *(1) Filter true
                     +- *(1) ColumnarToRow
                        +- FileScan parquet [product#142,plant#143] Batched: true, DataFilters: [true], Format: Parquet, Location: InMemoryFileIndex(1 paths)[hdlfs://7646b954-15f6-4bdc-91a5-2644c1a43a19.files.hdl.prod-eu20.hanac..., PartitionFilters: [], PushedFilters: [AlwaysTrue()], ReadSchema: struct<product:string,plant:string>
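
For orientation, the plan is narrow: a two-column parquet scan (product, plant), a filter that folded to true, a projection that appends constant null and empty-array columns, a range-partitioned global sort on (product, plant), and an overwrite parquet write. Below is a hypothetical Scala reconstruction of a job that would produce this shape; the paths are shortened with "...", and the column types are guesses, since the plan string does not show them:

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object DemandWriteSketch extends App {
  val spark = SparkSession.builder().appName("demand-write-sketch").getOrCreate()

  // Paths abbreviated; the real hdlfs:// URIs appear in full in the plan above.
  val listPath = "hdlfs://.../crp-order-qty-opt-service/out/product-plant-list/10000003977/shardId=0_1_10000003977"
  val outPath  = "hdlfs://.../crp-demand-service/out/demand/10000003977/0_1_10000003977"

  spark.read.parquet(listPath)                                  // Scan parquet [product, plant]
    .filter(lit(true))                                          // Filter true / PushedFilters: [AlwaysTrue()]
    .select(
      col("product"),
      col("plant"),
      lit(null).cast("string").as("demandChannel"),             // null AS demandChannel
      lit(null).cast("string").as("demandStream"),              // null AS demandStream
      array().cast("array<string>").as("demandTimeBuckets"),    // [] AS ...; element type assumed
      lit(null).cast("timestamp").as("demandPointInTimeStart"), // timestamp type assumed
      lit(null).cast("timestamp").as("demandPointInTimeEnd"),
      array().cast("array<string>").as("demandPointInTime"))    // element type assumed
    .orderBy("product", "plant")                                // Exchange rangepartitioning + global Sort
    .write.mode("overwrite").parquet(outPath)                   // InsertIntoHadoopFsRelationCommand, Overwrite
}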
== Physical Plan ==
AdaptiveSparkPlan (17)
+- == Final Plan ==
   Execute InsertIntoHadoopFsRelationCommand (10)
   +- WriteFiles (9)
      +- * Sort (8)
         +- AQEShuffleRead (7)
            +- ShuffleQueryStage (6), Statistics(sizeInBytes=100.5 KiB, rowCount=677)
               +- Exchange (5)
                  +- * Project (4)
                     +- * Filter (3)
                        +- * ColumnarToRow (2)
                           +- Scan parquet (1)
+- == Initial Plan ==
   Execute InsertIntoHadoopFsRelationCommand (16)
   +- WriteFiles (15)
      +- Sort (14)
         +- Exchange (13)
            +- Project (12)
               +- Filter (11)
                  +- Scan parquet (1)
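
This section is standard EXPLAIN FORMATTED output: the operator tree first (with * marking whole-stage-codegen operators and the (n) ids keyed to the detail blocks that follow), then one block per operator. A sketch of how to obtain the same view, assuming df is the DataFrame from the reconstruction above; note that isFinalPlan=true only appears after the query has actually run, because AQE re-optimizes at runtime, so an explain issued before execution shows the initial plan with isFinalPlan=false:

// Spark 3.0+: print the numbered-operator format seen above.
df.explain("formatted")

// Equivalent SQL form (table name hypothetical):
spark.sql("EXPLAIN FORMATTED SELECT product, plant FROM demand_src").show(false)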
(1) Scan parquet
Output [2]: [product#142, plant#143]
Batched: true
Location: InMemoryFileIndex [hdlfs://7646b954-15f6-4bdc-91a5-2644c1a43a19.files.hdl.prod-eu20.hanacloud.ondemand.com:443/crp-order-qty-opt-service/out/product-plant-list/10000003977/shardId=0_1_10000003977]
PushedFilters: [AlwaysTrue()]
ReadSchema: struct<product:string,plant:string>

(2) ColumnarToRow [codegen id : 1]
Input [2]: [product#142, plant#143]

(3) Filter [codegen id : 1]
Input [2]: [product#142, plant#143]
Condition : true

(4) Project [codegen id : 1]
Output [8]: [product#142, plant#143, null AS demandChannel#1682, null AS demandStream#1686, [] AS demandTimeBuckets#1701, null AS demandPointInTimeStart#1707, null AS demandPointInTimeEnd#1714, [] AS demandPointInTime#1731]
Input [2]: [product#142, plant#143]

(5) Exchange
Input [8]: [product#142, plant#143, demandChannel#1682, demandStream#1686, demandTimeBuckets#1701, demandPointInTimeStart#1707, demandPointInTimeEnd#1714, demandPointInTime#1731]
Arguments: rangepartitioning(product#142 ASC NULLS FIRST, plant#143 ASC NULLS FIRST, 10), ENSURE_REQUIREMENTS, [plan_id=1204]

(6) ShuffleQueryStage
Output [8]: [product#142, plant#143, demandChannel#1682, demandStream#1686, demandTimeBuckets#1701, demandPointInTimeStart#1707, demandPointInTimeEnd#1714, demandPointInTime#1731]
Arguments: 0

(7) AQEShuffleRead
Input [8]: [product#142, plant#143, demandChannel#1682, demandStream#1686, demandTimeBuckets#1701, demandPointInTimeStart#1707, demandPointInTimeEnd#1714, demandPointInTime#1731]
Arguments: coalesced

(8) Sort [codegen id : 2]
Input [8]: [product#142, plant#143, demandChannel#1682, demandStream#1686, demandTimeBuckets#1701, demandPointInTimeStart#1707, demandPointInTimeEnd#1714, demandPointInTime#1731]
Arguments: [product#142 ASC NULLS FIRST, plant#143 ASC NULLS FIRST], true, 0

(9) WriteFiles
Input [8]: [product#142, plant#143, demandChannel#1682, demandStream#1686, demandTimeBuckets#1701, demandPointInTimeStart#1707, demandPointInTimeEnd#1714, demandPointInTime#1731]

(10) Execute InsertIntoHadoopFsRelationCommand
Input: []
Arguments: hdlfs://7646b954-15f6-4bdc-91a5-2644c1a43a19.files.hdl.prod-eu20.hanacloud.ondemand.com:443/crp-demand-service/out/demand/10000003977/0_1_10000003977, false, Parquet, [fs.hdlfs.ssl.certfile=*********(redacted), fs.hdlfs.filecontainer=7646b954-15f6-4bdc-91a5-2644c1a43a19, fs.hdlfs.ssl.keyfile=*********(redacted), path=hdlfs://7646b954-15f6-4bdc-91a5-2644c1a43a19.files.hdl.prod-eu20.hanacloud.ondemand.com:443/crp-demand-service/out/demand/10000003977/0_1_10000003977], Overwrite, [product, plant, demandChannel, demandStream, demandTimeBuckets, demandPointInTimeStart, demandPointInTimeEnd, demandPointInTime]

(11) Filter
Input [2]: [product#142, plant#143]
Condition : true

(12) Project
Output [8]: [product#142, plant#143, null AS demandChannel#1682, null AS demandStream#1686, [] AS demandTimeBuckets#1701, null AS demandPointInTimeStart#1707, null AS demandPointInTimeEnd#1714, [] AS demandPointInTime#1731]
Input [2]: [product#142, plant#143]

(13) Exchange
Input [8]: [product#142, plant#143, demandChannel#1682, demandStream#1686, demandTimeBuckets#1701, demandPointInTimeStart#1707, demandPointInTimeEnd#1714, demandPointInTime#1731]
Arguments: rangepartitioning(product#142 ASC NULLS FIRST, plant#143 ASC NULLS FIRST, 10), ENSURE_REQUIREMENTS, [plan_id=1185]

(14) Sort
Input [8]: [product#142, plant#143, demandChannel#1682, demandStream#1686, demandTimeBuckets#1701, demandPointInTimeStart#1707, demandPointInTimeEnd#1714, demandPointInTime#1731]
Arguments: [product#142 ASC NULLS FIRST, plant#143 ASC NULLS FIRST], true, 0

(15) WriteFiles
Input [8]: [product#142, plant#143, demandChannel#1682, demandStream#1686, demandTimeBuckets#1701, demandPointInTimeStart#1707, demandPointInTimeEnd#1714, demandPointInTime#1731]

(16) Execute InsertIntoHadoopFsRelationCommand
Input: []
Arguments: hdlfs://7646b954-15f6-4bdc-91a5-2644c1a43a19.files.hdl.prod-eu20.hanacloud.ondemand.com:443/crp-demand-service/out/demand/10000003977/0_1_10000003977, false, Parquet, [fs.hdlfs.ssl.certfile=*********(redacted), fs.hdlfs.filecontainer=7646b954-15f6-4bdc-91a5-2644c1a43a19, fs.hdlfs.ssl.keyfile=*********(redacted), path=hdlfs://7646b954-15f6-4bdc-91a5-2644c1a43a19.files.hdl.prod-eu20.hanacloud.ondemand.com:443/crp-demand-service/out/demand/10000003977/0_1_10000003977], Overwrite, [product, plant, demandChannel, demandStream, demandTimeBuckets, demandPointInTimeStart, demandPointInTimeEnd, demandPointInTime]

(17) AdaptiveSparkPlan
Output: []
Arguments: isFinalPlan=true
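
One detail worth calling out: the Exchange wrote 10 range partitions (100.5 KiB of shuffle data in total), and AQE's partition coalescing then collapsed them into a single AQEShuffleRead partition of 47.4 KiB, which is why exactly one output file is written. The configuration behind that behavior, shown with illustrative values (defaults vary by Spark version):

// AQE must be on for AdaptiveSparkPlan / AQEShuffleRead to appear at all
// (enabled by default since Spark 3.2).
spark.conf.set("spark.sql.adaptive.enabled", "true")

// Post-shuffle coalescing merges small shuffle partitions up to the advisory
// target size: 100.5 KiB of data against a 64 MB target yields 1 partition.
spark.conf.set("spark.sql.adaptive.coalescePartitions.enabled", "true")
spark.conf.set("spark.sql.adaptive.advisoryPartitionSizeInBytes", "64MB")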