demo@flex13:~/crail-deployment/hadoop/logs/userlogs$ cat application_1496059230928_0084/container_1496059230928_0084_01_000002/stderr
Picked up JAVA_TOOL_OPTIONS: -XX:+PreserveFramePointer
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/demo/crail-deployment/crail-1.0/jars-atr/slf4j-log4j12-1.7.12.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/mnt/tmpfs/tmp/nm-local-dir/usercache/demo/filecache/172/__spark_libs__8005724844821539349.zip/slf4j-log4j12-1.7.16.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/demo/crail-deployment/hadoop-2.7.3/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
17/06/15 15:47:56 0 INFO CoarseGrainedExecutorBackend: Started daemon with process name: 14006@flex14
17/06/15 15:47:56 3 INFO SignalUtils: Registered signal handler for TERM
17/06/15 15:47:56 3 INFO SignalUtils: Registered signal handler for HUP
17/06/15 15:47:56 3 INFO SignalUtils: Registered signal handler for INT
17/06/15 15:47:56 95 DEBUG Shell: setsid exited with exit code 0
17/06/15 15:47:56 284 DEBUG MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginSuccess with annotation @org.apache.hadoop.metrics2.annotation.Metric(sampleName=Ops, about=, always=false, type=DEFAULT, valueName=Time, value=[Rate of successful kerberos logins and latency (milliseconds)])
17/06/15 15:47:56 290 DEBUG MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginFailure with annotation @org.apache.hadoop.metrics2.annotation.Metric(sampleName=Ops, about=, always=false, type=DEFAULT, valueName=Time, value=[Rate of failed kerberos logins and latency (milliseconds)])
17/06/15 15:47:56 290 DEBUG MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.getGroups with annotation @org.apache.hadoop.metrics2.annotation.Metric(sampleName=Ops, about=, always=false, type=DEFAULT, valueName=Time, value=[GetGroups])
17/06/15 15:47:56 290 DEBUG MetricsSystemImpl: UgiMetrics, User and group related metrics
17/06/15 15:47:56 326 DEBUG KerberosName: Kerberos krb5 configuration not found, setting default realm to empty
17/06/15 15:47:56 328 DEBUG Groups: Creating new Groups object
17/06/15 15:47:56 331 DEBUG NativeCodeLoader: Trying to load the custom-built native-hadoop library...
17/06/15 15:47:56 331 DEBUG NativeCodeLoader: Failed to load native-hadoop with error: java.lang.UnsatisfiedLinkError: no hadoop in java.library.path
17/06/15 15:47:56 331 DEBUG NativeCodeLoader: java.library.path=/home/demo/crail-deployment/crail/lib/:/home/jpf/Source/3rd/dpdk/lib/:/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
17/06/15 15:47:56 332 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/06/15 15:47:56 332 DEBUG PerformanceAdvisory: Falling back to shell based
17/06/15 15:47:56 332 DEBUG JniBasedUnixGroupsMappingWithFallback: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping
17/06/15 15:47:56 382 DEBUG Groups: Group mapping impl=org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback; cacheTimeout=300000; warningDeltaMs=5000
17/06/15 15:47:56 384 DEBUG YarnSparkHadoopUtil: running as user: demo
17/06/15 15:47:56 388 DEBUG UserGroupInformation: hadoop login
17/06/15 15:47:56 388 DEBUG UserGroupInformation: hadoop login commit
17/06/15 15:47:56 392 DEBUG UserGroupInformation: using local user:UnixPrincipal: demo
17/06/15 15:47:56 392 DEBUG UserGroupInformation: Using user: "UnixPrincipal: demo" with name demo
17/06/15 15:47:56 392 DEBUG UserGroupInformation: User entry: "demo"
17/06/15 15:47:56 399 DEBUG UserGroupInformation: UGI loginUser:demo (auth:SIMPLE)
17/06/15 15:47:56 400 DEBUG UserGroupInformation: PrivilegedAction as:demo (auth:SIMPLE) from:org.apache.spark.deploy.SparkHadoopUtil.runAsSparkUser(SparkHadoopUtil.scala:66)
17/06/15 15:47:56 410 INFO SecurityManager: Changing view acls to: demo
17/06/15 15:47:56 410 INFO SecurityManager: Changing modify acls to: demo
17/06/15 15:47:56 411 INFO SecurityManager: Changing view acls groups to:
17/06/15 15:47:56 411 INFO SecurityManager: Changing modify acls groups to:
17/06/15 15:47:56 412 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(demo); groups with view permissions: Set(); users with modify permissions: Set(demo); groups with modify permissions: Set()
17/06/15 15:47:56 420 DEBUG SecurityManager: Created SSL options for fs: SSLOptions{enabled=false, keyStore=None, keyStorePassword=None, trustStore=None, trustStorePassword=None, protocol=None, enabledAlgorithms=Set()}
17/06/15 15:47:56 500 DEBUG InternalLoggerFactory: Using SLF4J as the default logging framework
17/06/15 15:47:56 504 DEBUG PlatformDependent0: java.nio.Buffer.address: available
17/06/15 15:47:56 504 DEBUG PlatformDependent0: sun.misc.Unsafe.theUnsafe: available
17/06/15 15:47:56 505 DEBUG PlatformDependent0: sun.misc.Unsafe.copyMemory: available
17/06/15 15:47:56 505 DEBUG PlatformDependent0: direct buffer constructor: available
17/06/15 15:47:56 507 DEBUG PlatformDependent0: java.nio.Bits.unaligned: available, true
17/06/15 15:47:56 516 DEBUG PlatformDependent0: java.nio.DirectByteBuffer.<init>(long, int): available
17/06/15 15:47:56 517 DEBUG Cleaner0: java.nio.ByteBuffer.cleaner(): available
17/06/15 15:47:56 518 DEBUG PlatformDependent: Java version: 8
17/06/15 15:47:56 518 DEBUG PlatformDependent: -Dio.netty.noUnsafe: false
17/06/15 15:47:56 519 DEBUG PlatformDependent: sun.misc.Unsafe: available
17/06/15 15:47:56 520 DEBUG PlatformDependent: -Dio.netty.noJavassist: false
17/06/15 15:47:56 579 DEBUG PlatformDependent: Javassist: available
17/06/15 15:47:56 579 DEBUG PlatformDependent: -Dio.netty.tmpdir: /mnt/tmpfs/tmp/nm-local-dir/usercache/demo/appcache/application_1496059230928_0084/container_1496059230928_0084_01_000002/tmp (java.io.tmpdir)
17/06/15 15:47:56 579 DEBUG PlatformDependent: -Dio.netty.bitMode: 64 (sun.arch.data.model)
17/06/15 15:47:56 580 DEBUG PlatformDependent: -Dio.netty.noPreferDirect: false
17/06/15 15:47:56 580 DEBUG PlatformDependent: io.netty.maxDirectMemory: 0 bytes
17/06/15 15:47:56 581 DEBUG JavassistTypeParameterMatcherGenerator: Generated: io.netty.util.internal.__matchers__.org.apache.spark.network.protocol.MessageMatcher
17/06/15 15:47:56 585 DEBUG JavassistTypeParameterMatcherGenerator: Generated: io.netty.util.internal.__matchers__.io.netty.buffer.ByteBufMatcher
17/06/15 15:47:56 595 DEBUG MultithreadEventLoopGroup: -Dio.netty.eventLoopThreads: 32
17/06/15 15:47:56 613 DEBUG NioEventLoop: -Dio.netty.noKeySetOptimization: false
17/06/15 15:47:56 613 DEBUG NioEventLoop: -Dio.netty.selectorAutoRebuildThreshold: 512
17/06/15 15:47:57 616 DEBUG PlatformDependent: org.jctools-core.MpscChunkedArrayQueue: available
17/06/15 15:47:57 635 DEBUG PooledByteBufAllocator: -Dio.netty.allocator.numHeapArenas: 32
17/06/15 15:47:57 635 DEBUG PooledByteBufAllocator: -Dio.netty.allocator.numDirectArenas: 32
17/06/15 15:47:57 635 DEBUG PooledByteBufAllocator: -Dio.netty.allocator.pageSize: 8192
17/06/15 15:47:57 635 DEBUG PooledByteBufAllocator: -Dio.netty.allocator.maxOrder: 11
17/06/15 15:47:57 635 DEBUG PooledByteBufAllocator: -Dio.netty.allocator.chunkSize: 16777216
17/06/15 15:47:57 635 DEBUG PooledByteBufAllocator: -Dio.netty.allocator.tinyCacheSize: 512
17/06/15 15:47:57 635 DEBUG PooledByteBufAllocator: -Dio.netty.allocator.smallCacheSize: 256
17/06/15 15:47:57 635 DEBUG PooledByteBufAllocator: -Dio.netty.allocator.normalCacheSize: 64
17/06/15 15:47:57 635 DEBUG PooledByteBufAllocator: -Dio.netty.allocator.maxCachedBufferCapacity: 32768
17/06/15 15:47:57 635 DEBUG PooledByteBufAllocator: -Dio.netty.allocator.cacheTrimInterval: 8192
17/06/15 15:47:57 677 DEBUG TransportClientFactory: Creating new connection to /10.40.0.11:51811
17/06/15 15:47:57 689 DEBUG ThreadLocalRandom: -Dio.netty.initialSeedUniquifier: 0x9ad6a8eb576f7749 (took 0 ms)
17/06/15 15:47:57 709 DEBUG ByteBufUtil: -Dio.netty.allocator.type: unpooled
17/06/15 15:47:57 709 DEBUG ByteBufUtil: -Dio.netty.threadLocalDirectBufferSize: 65536
17/06/15 15:47:57 709 DEBUG ByteBufUtil: -Dio.netty.maxThreadLocalCharBufferSize: 16384
17/06/15 15:47:57 726 DEBUG AbstractByteBuf: -Dio.netty.buffer.bytebuf.checkAccessible: true
17/06/15 15:47:57 728 DEBUG ResourceLeakDetector: -Dio.netty.leakDetection.level: simple
17/06/15 15:47:57 728 DEBUG ResourceLeakDetector: -Dio.netty.leakDetection.maxRecords: 4
17/06/15 15:47:57 729 DEBUG ResourceLeakDetectorFactory: Loaded default ResourceLeakDetector: io.netty.util.ResourceLeakDetector@4329ee01
17/06/15 15:47:57 734 DEBUG TransportClientFactory: Connection to /10.40.0.11:51811 successful, running bootstraps...
17/06/15 15:47:57 734 INFO TransportClientFactory: Successfully created connection to /10.40.0.11:51811 after 53 ms (0 ms spent in bootstraps)
17/06/15 15:47:57 738 DEBUG Recycler: -Dio.netty.recycler.maxCapacity.default: 32768
17/06/15 15:47:57 739 DEBUG Recycler: -Dio.netty.recycler.maxSharedCapacityFactor: 2
17/06/15 15:47:57 739 DEBUG Recycler: -Dio.netty.recycler.linkCapacity: 16
17/06/15 15:47:57 739 DEBUG Recycler: -Dio.netty.recycler.ratio: 8
17/06/15 15:47:57 829 INFO SecurityManager: Changing view acls to: demo
17/06/15 15:47:57 829 INFO SecurityManager: Changing modify acls to: demo
17/06/15 15:47:57 829 INFO SecurityManager: Changing view acls groups to:
17/06/15 15:47:57 829 INFO SecurityManager: Changing modify acls groups to:
17/06/15 15:47:57 829 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(demo); groups with view permissions: Set(); users with modify permissions: Set(demo); groups with modify permissions: Set()
17/06/15 15:47:57 829 DEBUG SecurityManager: Created SSL options for fs: SSLOptions{enabled=false, keyStore=None, keyStorePassword=None, trustStore=None, trustStorePassword=None, protocol=None, enabledAlgorithms=Set()}
17/06/15 15:47:57 843 DEBUG SparkEnv: Using serializer: class org.apache.spark.serializer.KryoSerializer
17/06/15 15:47:57 894 DEBUG TransportClientFactory: Creating new connection to /10.40.0.11:51811
17/06/15 15:47:57 895 DEBUG TransportClientFactory: Connection to /10.40.0.11:51811 successful, running bootstraps...
17/06/15 15:47:57 895 INFO TransportClientFactory: Successfully created connection to /10.40.0.11:51811 after 1 ms (0 ms spent in bootstraps)
17/06/15 15:47:57 916 INFO CrailShuffleManager: crail shuffle started
17/06/15 15:47:57 957 INFO DiskBlockManager: Created local directory at /mnt/tmpfs/tmp/nm-local-dir/usercache/demo/appcache/application_1496059230928_0084/blockmgr-eee721e4-da96-40a4-9d88-8deec7c915d4
17/06/15 15:47:57 958 DEBUG DiskBlockManager: Adding shutdown hook
17/06/15 15:47:57 959 DEBUG ShutdownHookManager: Adding shutdown hook
17/06/15 15:47:57 971 INFO MemoryStore: MemoryStore started with capacity 34.0 GB
17/06/15 15:47:57 1156 INFO CoarseGrainedExecutorBackend: Connecting to driver: spark://[email protected]:51811
17/06/15 15:47:57 1176 INFO CoarseGrainedExecutorBackend: Successfully registered with driver
17/06/15 15:47:57 1178 INFO Executor: Starting executor ID 1 on host flex14.zurich.ibm.com
17/06/15 15:47:57 1195 DEBUG NetUtil: Loopback interface: lo (lo, 0:0:0:0:0:0:0:1%lo)
17/06/15 15:47:57 1196 DEBUG NetUtil: /proc/sys/net/core/somaxconn: 128
17/06/15 15:47:57 1199 DEBUG TransportServer: Shuffle server started on port: 34375
17/06/15 15:47:57 1199 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 34375.
17/06/15 15:47:57 1200 INFO NettyBlockTransferService: Server created on flex14.zurich.ibm.com:34375
17/06/15 15:47:57 1201 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
17/06/15 15:47:57 1202 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(1, flex14.zurich.ibm.com, 34375, None)
17/06/15 15:47:57 1212 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(1, flex14.zurich.ibm.com, 34375, None)
17/06/15 15:47:57 1213 INFO BlockManager: Initialized BlockManager: BlockManagerId(1, flex14.zurich.ibm.com, 34375, None)
17/06/15 15:47:58 2301 INFO CoarseGrainedExecutorBackend: Got assigned task 0
17/06/15 15:47:58 2306 INFO Executor: Running task 0.0 in stage 0.0 (TID 0)
17/06/15 15:47:58 2329 INFO Executor: Fetching spark://10.40.0.11:51811/jars/sql-benchmarks-1.0.jar with timestamp 1497534467210
17/06/15 15:47:58 2350 DEBUG TransportClientFactory: Creating new connection to /10.40.0.11:51811
17/06/15 15:47:58 2352 DEBUG TransportClientFactory: Connection to /10.40.0.11:51811 successful, running bootstraps...
17/06/15 15:47:58 2352 INFO TransportClientFactory: Successfully created connection to /10.40.0.11:51811 after 1 ms (0 ms spent in bootstraps)
17/06/15 15:47:58 2352 DEBUG TransportClient: Sending stream request for /jars/sql-benchmarks-1.0.jar to /10.40.0.11:51811
17/06/15 15:47:58 2354 INFO Utils: Fetching spark://10.40.0.11:51811/jars/sql-benchmarks-1.0.jar to /mnt/tmpfs/tmp/nm-local-dir/usercache/demo/appcache/application_1496059230928_0084/spark-a5be3258-a214-42cc-aecc-d4e362b6e021/fetchFileTemp4745929699342980775.tmp
17/06/15 15:47:58 2377 INFO Utils: Copying /mnt/tmpfs/tmp/nm-local-dir/usercache/demo/appcache/application_1496059230928_0084/spark-a5be3258-a214-42cc-aecc-d4e362b6e021/-19571988961497534467210_cache to /mnt/tmpfs/tmp/nm-local-dir/usercache/demo/appcache/application_1496059230928_0084/container_1496059230928_0084_01_000002/./sql-benchmarks-1.0.jar
17/06/15 15:47:58 2385 INFO Executor: Adding file:/mnt/tmpfs/tmp/nm-local-dir/usercache/demo/appcache/application_1496059230928_0084/container_1496059230928_0084_01_000002/./sql-benchmarks-1.0.jar to class loader
17/06/15 15:47:58 2536 DEBUG Executor: Task 0's epoch is 0
17/06/15 15:47:58 2543 INFO CrailStore: CrailStore starting version 300
17/06/15 15:47:58 2543 INFO CrailStore: spark.crail.shuffle.affinity true
17/06/15 15:47:58 2543 INFO CrailStore: spark.crail.deleteonclose false
17/06/15 15:47:58 2543 INFO CrailStore: spark.crail.deleteOnStart true
17/06/15 15:47:58 2544 INFO CrailStore: spark.crail.shuffle.outstanding 16
17/06/15 15:47:58 2544 INFO CrailStore: spark.crail.preallocate 0
17/06/15 15:47:58 2544 INFO CrailStore: spark.crail.shuffleCycle 6
17/06/15 15:47:58 2544 INFO CrailStore: spark.crail.writeAhead 0
17/06/15 15:47:58 2545 INFO CrailStore: spark.crail.debug false
17/06/15 15:47:58 2545 INFO CrailStore: spark.crail.serializer org.apache.spark.serializer.CrailSparkSerializer
17/06/15 15:47:58 2548 INFO crail: creating singleton crail file system
17/06/15 15:47:58 2550 INFO crail: crail.version 2845
17/06/15 15:47:58 2550 INFO crail: crail.storage.types com.ibm.crail.storage.rdma.RdmaStorageTier
17/06/15 15:47:58 2550 INFO crail: crail.directory.depth 16
17/06/15 15:47:58 2550 INFO crail: crail.token.expiration 1
17/06/15 15:47:58 2550 INFO crail: crail.blocksize 1048576
17/06/15 15:47:58 2550 INFO crail: crail.cachelimit 32212254720
17/06/15 15:47:58 2550 INFO crail: crail.cachepath /mnt/hugetlbfs/craildata/cache
17/06/15 15:47:58 2550 INFO crail: crail.user stu
17/06/15 15:47:58 2550 INFO crail: crail.shadow.replication 1
17/06/15 15:47:58 2550 INFO crail: crail.debug false
17/06/15 15:47:58 2551 INFO crail: crail.statistics true
17/06/15 15:47:58 2551 INFO crail: crail.rpc.timeout 1000
17/06/15 15:47:58 2551 INFO crail: crail.data.timeout 1000
17/06/15 15:47:58 2551 INFO crail: crail.buffersize 1048576
17/06/15 15:47:58 2551 INFO crail: crail.slicesize 1048576
17/06/15 15:47:58 2551 INFO crail: crail.singleton true
17/06/15 15:47:58 2551 INFO crail: crail.regionsize 1073741824
17/06/15 15:47:58 2551 INFO crail: crail.directoryrecord 512
17/06/15 15:47:58 2551 INFO crail: crail.directoryrandomize true
17/06/15 15:47:58 2551 INFO crail: crail.cacheimpl com.ibm.crail.memory.MappedBufferCache
17/06/15 15:47:58 2551 INFO crail: crail.location.map
17/06/15 15:47:58 2551 INFO crail: crail.namenode.address crail://flex11-40g0:9060
17/06/15 15:47:58 2551 INFO crail: crail.namenode.blockselection roundrobin
17/06/15 15:47:58 2551 INFO crail: crail.namenode.fileblocks 16
17/06/15 15:47:58 2551 INFO crail: crail.namenode.rpc.type com.ibm.crail.namenode.rpc.darpc.DaRPCNameNode
17/06/15 15:47:58 2553 INFO crail: crail.storage.rdma.interface enp27s0f4
17/06/15 15:47:58 2553 INFO crail: crail.storage.rdma.port 50020
17/06/15 15:47:58 2553 INFO crail: crail.storage.rdma.storagelimit 53687091200
17/06/15 15:47:58 2553 INFO crail: crail.storage.rdma.allocationsize 1073741824
17/06/15 15:47:58 2553 INFO crail: crail.storage.rdma.datapath /mnt/hugetlbfs/craildata/datanode
17/06/15 15:47:58 2553 INFO crail: crail.storage.rdma.indexpath /mnt/tmpfs/index
17/06/15 15:47:58 2553 INFO crail: crail.storage.rdma.localmap true
17/06/15 15:47:58 2553 INFO crail: crail.storage.rdma.queuesize 32
17/06/15 15:47:58 2553 INFO crail: crail.storage.rdma.type passive
17/06/15 15:47:58 2554 INFO crail: adding tier to cache 0
17/06/15 15:47:58 2556 INFO crail: crail.namenode.darpc.polling false
17/06/15 15:47:58 2556 INFO crail: crail.namenode.darpc.type passive
17/06/15 15:47:58 2556 INFO crail: crail.namenode.darpc.affinity 1
17/06/15 15:47:58 2556 INFO crail: crail.namenode.darpc.maxinline 0
17/06/15 15:47:58 2556 INFO crail: crail.namenode.darpc.recvQueue 32
17/06/15 15:47:58 2556 INFO crail: crail.namenode.darpc.sendQueue 32
17/06/15 15:47:58 2556 INFO crail: crail.namenode.darpc.pollsize 32
17/06/15 15:47:58 2556 INFO crail: crail.namenode.darpc.clustersize 128
libibverbs: Warning: couldn't load driver 'flashnet': libflashnet-rdmav2.so: cannot open shared object file: No such file or directory
17/06/15 15:47:58 2578 INFO crail: rpc group started, recvQueue 32
17/06/15 15:47:58 2580 INFO crail: connecting to namenode at flex11-40g0/10.40.0.11:9060
17/06/15 15:47:58 2595 INFO crail: connected to namenode at flex11-40g0/10.40.0.11:9060
17/06/15 15:47:58 2597 INFO crail: buffer cache, allocationCount 30, bufferCount 1024
17/06/15 15:47:58 2599 INFO CrailStore: creating hostFile /spark/meta/hosts/1347095298
17/06/15 15:47:58 2610 INFO crail: passive data client
17/06/15 15:47:59 2801 INFO CrailStore: creating hostFile done /spark/meta/hosts/1347095298
17/06/15 15:47:59 2801 INFO CrailStore: buffer cache warmup
17/06/15 15:47:59 2809 INFO CrailStore: buffer cache warmup done
17/06/15 15:47:59 2809 INFO crail: CrailStatistics, tag=init
17/06/15 15:47:59 2809 INFO crail: provider=cache/endpoint [size 2]
17/06/15 15:47:59 2809 INFO crail: provider=core/input [total 0, localOps 0, remoteOps 0, localDirOps 0, remoteDirOps 0, cached 0, nonBlocking 0, blocking 0, prefetched 0, prefetchedNonBlocking 0, prefetchedBlocking 0, capacity 0, totalStreams 0, avgCapacity 0, avgOpLen 0]
17/06/15 15:47:59 2809 INFO crail: provider=cache/buffer [cacheGet 3, cachePut 3, cacheMiss 1, cacheSize 1024, cacheMax 1, mapMiss 1, mapHeap 0]
17/06/15 15:47:59 2809 INFO crail: provider=core/output [total 3, localOps 0, remoteOps 3, localDirOps 0, remoteDirOps 3, cached 3, nonBlocking 0, blocking 0, prefetched 0, prefetchedNonBlocking 0, prefetchedBlocking 0, capacity 526336, totalStreams 4, avgCapacity 131584, avgOpLen 512]
17/06/15 15:47:59 2809 INFO crail: provider=core/streams [open 4, openInput 0, openOutput 4, openInputDir 0, openOutputDir 3, close 4, closeInput 0, closeOutput 4, closeInputDir 0, closeOutputDir 3, maxInput 0, maxOutput 1]
17/06/15 15:47:59 2891 INFO ParquetFileReader: Initiating action with parallelism: 5
17/06/15 15:47:59 3036 DEBUG BlockReaderLocal: dfs.client.use.legacy.blockreader.local = false
17/06/15 15:47:59 3036 DEBUG BlockReaderLocal: dfs.client.read.shortcircuit = false
17/06/15 15:47:59 3036 DEBUG BlockReaderLocal: dfs.client.domain.socket.data.traffic = false
17/06/15 15:47:59 3036 DEBUG BlockReaderLocal: dfs.domain.socket.path =
17/06/15 15:47:59 3065 DEBUG RetryUtils: multipleLinearRandomRetry = null
17/06/15 15:47:59 3076 DEBUG Server: rpcKind=RPC_PROTOCOL_BUFFER, rpcRequestWrapperClass=class org.apache.hadoop.ipc.ProtobufRpcEngine$RpcRequestWrapper, rpcInvoker=org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker@5736e050
17/06/15 15:47:59 3079 DEBUG Client: getting client out of cache: org.apache.hadoop.ipc.Client@7370bc19
17/06/15 15:47:59 3374 DEBUG PerformanceAdvisory: Both short-circuit local reads and UNIX domain socket are disabled.
17/06/15 15:47:59 3379 DEBUG DataTransferSaslUtil: DataTransferProtocol not using SaslPropertiesResolver, no QOP found in configuration for dfs.data.transfer.protection
17/06/15 15:47:59 3398 DEBUG Client: The ping interval is 60000 ms.
17/06/15 15:47:59 3399 DEBUG Client: Connecting to flex11-40g0/10.40.0.11:9000
17/06/15 15:47:59 3402 DEBUG UserGroupInformation: PrivilegedAction as:demo (auth:SIMPLE) from:org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:725)
17/06/15 15:47:59 3448 DEBUG SaslRpcClient: Sending sasl message state: NEGOTIATE
17/06/15 15:47:59 3453 DEBUG SaslRpcClient: Received SASL message state: NEGOTIATE
auths {
method: "TOKEN"
mechanism: "DIGEST-MD5"
protocol: ""
serverId: "default"
challenge: "realm=\"default\",nonce=\"UVCDzc+1SOKa6kgcaBQI3i/2vtv+UT7jfE5bhQRq\",qop=\"auth\",charset=utf-8,algorithm=md5-sess"
}
auths {
method: "SIMPLE"
mechanism: ""
}
17/06/15 15:47:59 3453 DEBUG SaslRpcClient: Get token info proto:interface org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolPB info:@org.apache.hadoop.security.token.TokenInfo(value=class org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenSelector)
17/06/15 15:47:59 3454 DEBUG SaslRpcClient: Use SIMPLE authentication for protocol ClientNamenodeProtocolPB
17/06/15 15:47:59 3454 DEBUG SaslRpcClient: Sending sasl message state: INITIATE
auths {
method: "SIMPLE"
mechanism: ""
}
17/06/15 15:47:59 3458 DEBUG Client: IPC Client (1725272477) connection to flex11-40g0/10.40.0.11:9000 from demo: starting, having connections 1
17/06/15 15:47:59 3459 DEBUG Client: IPC Client (1725272477) connection to flex11-40g0/10.40.0.11:9000 from demo sending #0
17/06/15 15:47:59 3460 DEBUG Client: IPC Client (1725272477) connection to flex11-40g0/10.40.0.11:9000 from demo got value #0
17/06/15 15:47:59 3460 DEBUG ProtobufRpcEngine: Call: getBlockLocations took 71ms
17/06/15 15:47:59 3505 DEBUG DFSClient: newInfo = LocatedBlocks{
fileLength=14963
underConstruction=false
blocks=[LocatedBlock{BP-350330489-10.40.0.11-1492009574738:blk_1073770093_29269; getBlockSize()=14963; corrupt=false; offset=0; locs=[DatanodeInfoWithStorage[10.40.0.22:50010,DS-bb3468dd-3e03-41ac-b1b4-854b353dd7f0,DISK]]}]
lastLocatedBlock=LocatedBlock{BP-350330489-10.40.0.11-1492009574738:blk_1073770093_29269; getBlockSize()=14963; corrupt=false; offset=0; locs=[DatanodeInfoWithStorage[10.40.0.22:50010,DS-bb3468dd-3e03-41ac-b1b4-854b353dd7f0,DISK]]}
isLastBlockComplete=true}
17/06/15 15:47:59 3507 DEBUG ParquetFileReader: File length 14963
17/06/15 15:47:59 3509 DEBUG ParquetFileReader: reading footer index at 14955
17/06/15 15:47:59 3511 DEBUG DFSClient: Connecting to datanode 10.40.0.22:50010
17/06/15 15:47:59 3517 DEBUG Client: IPC Client (1725272477) connection to flex11-40g0/10.40.0.11:9000 from demo sending #1
17/06/15 15:47:59 3517 DEBUG Client: IPC Client (1725272477) connection to flex11-40g0/10.40.0.11:9000 from demo got value #1
17/06/15 15:47:59 3518 DEBUG ProtobufRpcEngine: Call: getServerDefaults took 1ms
17/06/15 15:47:59 3522 DEBUG SaslDataTransferClient: SASL client skipping handshake in unsecured configuration for addr = /10.40.0.22, datanodeId = DatanodeInfoWithStorage[10.40.0.22:50010,DS-bb3468dd-3e03-41ac-b1b4-854b353dd7f0,DISK]
17/06/15 15:47:59 3553 DEBUG ParquetFileReader: read footer length: 2507, footer index: 12448
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
17/06/15 15:47:59 3575 DEBUG DFSClient: Connecting to datanode 10.40.0.22:50010
17/06/15 15:47:59 3582 DEBUG ParquetMetadataConverter: FileMetaData(version:1, schema:[SchemaElement(name:spark_schema, num_children:2), SchemaElement(type:INT32, repetition_type:OPTIONAL, name:intKey), SchemaElement(type:BYTE_ARRAY, repetition_type:OPTIONAL, name:payload)], num_rows:10, row_groups:null, key_value_metadata:[KeyValue(key:org.apache.spark.sql.parquet.row.metadata, value:{"type":"struct","fields":[{"name":"intKey","type":"integer","nullable":true,"metadata":{}},{"name":"payload","type":"binary","nullable":true,"metadata":{}}]})], created_by:parquet-mr version 1.8.2 (build aa78e929195723e4f9bf2bbad1b39e7e0277f8ea))
17/06/15 15:48:00 3647 DEBUG ParquetMetadataConverter: {
"fileMetaData" : {
"schema" : {
"name" : "spark_schema",
"repetition" : "REPEATED",
"originalType" : null,
"id" : null,
"fields" : [ {
"name" : "intKey",
"repetition" : "OPTIONAL",
"originalType" : null,
"id" : null,
"primitive" : true,
"typeLength" : 0,
"decimalMetadata" : null,
"primitiveTypeName" : "INT32"
}, {
"name" : "payload",
"repetition" : "OPTIONAL",
"originalType" : null,
"id" : null,
"primitive" : true,
"typeLength" : 0,
"decimalMetadata" : null,
"primitiveTypeName" : "BINARY"
} ],
"paths" : [ [ "intKey" ], [ "payload" ] ],
"columns" : [ {
"path" : [ "intKey" ],
"type" : "INT32",
"typeLength" : 0,
"maxRepetitionLevel" : 0,
"maxDefinitionLevel" : 1
}, {
"path" : [ "payload" ],
"type" : "BINARY",
"typeLength" : 0,
"maxRepetitionLevel" : 0,
"maxDefinitionLevel" : 1
} ],
"fieldCount" : 2,
"primitive" : false
},
"keyValueMetaData" : {
"org.apache.spark.sql.parquet.row.metadata" : "{\"type\":\"struct\",\"fields\":[{\"name\":\"intKey\",\"type\":\"integer\",\"nullable\":true,\"metadata\":{}},{\"name\":\"payload\",\"type\":\"binary\",\"nullable\":true,\"metadata\":{}}]}"
},
"createdBy" : "parquet-mr version 1.8.2 (build aa78e929195723e4f9bf2bbad1b39e7e0277f8ea)"
},
"blocks" : [ ]
}
17/06/15 15:48:00 3759 INFO Executor: Finished task 0.0 in stage 0.0 (TID 0). 1414 bytes result sent to driver
17/06/15 15:48:01 4918 INFO CoarseGrainedExecutorBackend: Got assigned task 1
17/06/15 15:48:01 4919 INFO Executor: Running task 0.0 in stage 1.0 (TID 1)
17/06/15 15:48:01 4932 DEBUG Executor: Task 1's epoch is 0
17/06/15 15:48:01 5003 DEBUG BlockManager: Getting local block rdd_4_0
17/06/15 15:48:01 5004 DEBUG BlockManager: Block rdd_4_0 was not found
17/06/15 15:48:01 5004 DEBUG BlockManager: Getting remote block rdd_4_0
17/06/15 15:48:01 5012 DEBUG BlockManager: Block rdd_4_0 not found
17/06/15 15:48:01 5109 DEBUG CodeGenerator:
/* 001 */ public Object generate(Object[] references) {
/* 002 */ return new GeneratedIterator(references);
/* 003 */ }
/* 004 */
/* 005 */ final class GeneratedIterator extends org.apache.spark.sql.execution.BufferedRowIterator {
/* 006 */ private Object[] references;
/* 007 */ private scala.collection.Iterator[] inputs;
/* 008 */ private scala.collection.Iterator scan_input;
/* 009 */ private org.apache.spark.sql.execution.metric.SQLMetric scan_numOutputRows;
/* 010 */ private org.apache.spark.sql.execution.metric.SQLMetric scan_scanTime;
/* 011 */ private long scan_scanTime1;
/* 012 */ private org.apache.spark.sql.execution.vectorized.ColumnarBatch scan_batch;
/* 013 */ private int scan_batchIdx;
/* 014 */ private org.apache.spark.sql.execution.vectorized.ColumnVector scan_colInstance0;
/* 015 */ private org.apache.spark.sql.execution.vectorized.ColumnVector scan_colInstance1;
/* 016 */ private UnsafeRow scan_result;
/* 017 */ private org.apache.spark.sql.catalyst.expressions.codegen.BufferHolder scan_holder;
/* 018 */ private org.apache.spark.sql.catalyst.expressions.codegen.UnsafeRowWriter scan_rowWriter;
/* 019 */
/* 020 */ public GeneratedIterator(Object[] references) {
/* 021 */ this.references = references;
/* 022 */ }
/* 023 */
/* 024 */ public void init(int index, scala.collection.Iterator[] inputs) {
/* 025 */ partitionIndex = index;
/* 026 */ this.inputs = inputs;
/* 027 */ scan_input = inputs[0];
/* 028 */ this.scan_numOutputRows = (org.apache.spark.sql.execution.metric.SQLMetric) references[0];
/* 029 */ this.scan_scanTime = (org.apache.spark.sql.execution.metric.SQLMetric) references[1];
/* 030 */ scan_scanTime1 = 0;
/* 031 */ scan_batch = null;
/* 032 */ scan_batchIdx = 0;
/* 033 */ scan_colInstance0 = null;
/* 034 */ scan_colInstance1 = null;
/* 035 */ scan_result = new UnsafeRow(2);
/* 036 */ this.scan_holder = new org.apache.spark.sql.catalyst.expressions.codegen.BufferHolder(scan_result, 32);
/* 037 */ this.scan_rowWriter = new org.apache.spark.sql.catalyst.expressions.codegen.UnsafeRowWriter(scan_holder, 2);
/* 038 */
/* 039 */ }
/* 040 */
/* 041 */ private void scan_nextBatch() throws java.io.IOException {
/* 042 */ long getBatchStart = System.nanoTime();
/* 043 */ if (scan_input.hasNext()) {
/* 044 */ scan_batch = (org.apache.spark.sql.execution.vectorized.ColumnarBatch)scan_input.next();
/* 045 */ scan_numOutputRows.add(scan_batch.numRows());
/* 046 */ scan_batchIdx = 0;
/* 047 */ scan_colInstance0 = scan_batch.column(0);
/* 048 */ scan_colInstance1 = scan_batch.column(1);
/* 049 */
/* 050 */ }
/* 051 */ scan_scanTime1 += System.nanoTime() - getBatchStart;
/* 052 */ }
/* 053 */
/* 054 */ protected void processNext() throws java.io.IOException {
/* 055 */ if (scan_batch == null) {
/* 056 */ scan_nextBatch();
/* 057 */ }
/* 058 */ while (scan_batch != null) {
/* 059 */ int numRows = scan_batch.numRows();
/* 060 */ while (scan_batchIdx < numRows) {
/* 061 */ int scan_rowIdx = scan_batchIdx++;
/* 062 */ boolean scan_isNull = scan_colInstance0.isNullAt(scan_rowIdx);
/* 063 */ int scan_value = scan_isNull ? -1 : (scan_colInstance0.getInt(scan_rowIdx));
/* 064 */ boolean scan_isNull1 = scan_colInstance1.isNullAt(scan_rowIdx);
/* 065 */ byte[] scan_value1 = scan_isNull1 ? null : (scan_colInstance1.getBinary(scan_rowIdx));
/* 066 */ scan_holder.reset();
/* 067 */
/* 068 */ scan_rowWriter.zeroOutNullBytes();
/* 069 */
/* 070 */ if (scan_isNull) {
/* 071 */ scan_rowWriter.setNullAt(0);
/* 072 */ } else {
/* 073 */ scan_rowWriter.write(0, scan_value);
/* 074 */ }
/* 075 */
/* 076 */ if (scan_isNull1) {
/* 077 */ scan_rowWriter.setNullAt(1);
/* 078 */ } else {
/* 079 */ scan_rowWriter.write(1, scan_value1);
/* 080 */ }
/* 081 */ scan_result.setTotalSize(scan_holder.totalSize());
/* 082 */ append(scan_result);
/* 083 */ if (shouldStop()) return;
/* 084 */ }
/* 085 */ scan_batch = null;
/* 086 */ scan_nextBatch();
/* 087 */ }
/* 088 */ scan_scanTime.add(scan_scanTime1 / (1000 * 1000));
/* 089 */ scan_scanTime1 = 0;
/* 090 */ }
/* 091 */ }
17/06/15 15:48:01 5270 INFO CodeGenerator: Code generated in 230.509252 ms
17/06/15 15:48:01 5318 INFO FileScanRDD: Reading File path: hdfs://flex11-40g0:9000/sql/small.parquet/part-00000-c69c2cee-c413-4349-bac0-c87a6ef32a36.parquet, range: 0-14963, partition values: [empty row]
17/06/15 15:48:01 5346 DEBUG Client: IPC Client (1725272477) connection to flex11-40g0/10.40.0.11:9000 from demo sending #2
17/06/15 15:48:01 5347 DEBUG Client: IPC Client (1725272477) connection to flex11-40g0/10.40.0.11:9000 from demo got value #2
17/06/15 15:48:01 5347 DEBUG ProtobufRpcEngine: Call: getFileInfo took 1ms
17/06/15 15:48:01 5353 DEBUG Client: IPC Client (1725272477) connection to flex11-40g0/10.40.0.11:9000 from demo sending #3
17/06/15 15:48:01 5353 DEBUG Client: IPC Client (1725272477) connection to flex11-40g0/10.40.0.11:9000 from demo got value #3
17/06/15 15:48:01 5353 DEBUG ProtobufRpcEngine: Call: getBlockLocations took 1ms
17/06/15 15:48:01 5354 DEBUG DFSClient: newInfo = LocatedBlocks{
fileLength=14963
underConstruction=false
blocks=[LocatedBlock{BP-350330489-10.40.0.11-1492009574738:blk_1073770093_29269; getBlockSize()=14963; corrupt=false; offset=0; locs=[DatanodeInfoWithStorage[10.40.0.22:50010,DS-bb3468dd-3e03-41ac-b1b4-854b353dd7f0,DISK]]}]
lastLocatedBlock=LocatedBlock{BP-350330489-10.40.0.11-1492009574738:blk_1073770093_29269; getBlockSize()=14963; corrupt=false; offset=0; locs=[DatanodeInfoWithStorage[10.40.0.22:50010,DS-bb3468dd-3e03-41ac-b1b4-854b353dd7f0,DISK]]}
isLastBlockComplete=true}
17/06/15 15:48:01 5354 DEBUG ParquetFileReader: File length 14963
17/06/15 15:48:01 5354 DEBUG ParquetFileReader: reading footer index at 14955
17/06/15 15:48:01 5354 DEBUG DFSClient: Connecting to datanode 10.40.0.22:50010
17/06/15 15:48:01 5355 DEBUG ParquetFileReader: read footer length: 2507, footer index: 12448
17/06/15 15:48:01 5355 DEBUG DFSClient: Connecting to datanode 10.40.0.22:50010
17/06/15 15:48:01 5368 DEBUG ParquetMetadataConverter: FileMetaData(version:1, schema:[SchemaElement(name:spark_schema, num_children:2), SchemaElement(type:INT32, repetition_type:OPTIONAL, name:intKey), SchemaElement(type:BYTE_ARRAY, repetition_type:OPTIONAL, name:payload)], num_rows:10, row_groups:[RowGroup(columns:[ColumnChunk(file_offset:4, meta_data:ColumnMetaData(type:INT32, encodings:[PLAIN, RLE, BIT_PACKED], path_in_schema:[intKey], codec:UNCOMPRESSED, num_values:10, total_uncompressed_size:79, total_compressed_size:79, data_page_offset:4, statistics:Statistics(max:66 A9 3A 7A, min:E7 4E 85 23, null_count:0), encoding_stats:[PageEncodingStats(page_type:DATA_PAGE, encoding:PLAIN, count:1)])), ColumnChunk(file_offset:83, meta_data:ColumnMetaData(type:BYTE_ARRAY, encodings:[PLAIN, RLE, BIT_PACKED], path_in_schema:[payload], codec:UNCOMPRESSED, num_values:10, total_uncompressed_size:12365, total_compressed_size:12365, data_page_offset:83, statistics:Statistics(max:53 22 21 25 60 02 9A A7 6E 4A F7 54 69 64 72 9F 76 05 48 CF 45 C6 83 D7 9E FB 7D 6A 26 32 C6 FB 7A 66 0B C0 1D 4A 9D 9D 4E 5A 86 D4 86 15 72 04 BE EE 4B 46 7E AA 8A F9 F2 F9 0E B8 B9 1D 39 3F 6F 1C 9E 8F 70 6D F6 F2 24 F9 60 F0 47 10 76 9C 86 F4 90 D6 50 7F 7C 76 B4 86 82 18 9B 0A DE 66 F3 5F 59 B3 F6 99 4D A8 03 24 A4 D6 C5 37 0D 88 1A A8 B9 3F 1A 7A 7F 7A 90 38 6F DC 7B 9E 02 A7..., min:9F 76 75 25 16 2D 9D 15 DC AD 93 4E D1 E6 B7 B6 21 28 41 3C 9B 4D F2 A0 F2 26 EF 2A B9 CF 71 E4 7B A4 9E B7 BC 71 9E 69 2A 60 03 12 8B 7E 9A 7B C6 1E 3E 12 E6 D7 E5 73 21 C1 FD F0 7E 4F 0B 6B DB AC 90 50 E0 03 A1 B7 CC 34 1C F3 B8 A1 5B 7F 15 CB C6 47 42 F1 AC 75 E0 DC B6 CA E0 10 5E 4B 5B D8 4C 89 27 79 07 1F 74 17 A7 75 F0 33 3F C2 5C CE C4 64 21 DF 5F 5E E3 BA 66 B7 4F F5 D0 34..., null_count:0), encoding_stats:[PageEncodingStats(page_type:DATA_PAGE, encoding:PLAIN, count:1)]))], total_byte_size:12444, num_rows:10)], key_value_metadata:[KeyValue(key:org.apache.spark.sql.parquet.row.metadata, value:{"type":"struct","fields":[{"name":"intKey","type":"integer","nullable":true,"metadata":{}},{"name":"payload","type":"binary","nullable":true,"metadata":{}}]})], created_by:parquet-mr version 1.8.2 (build aa78e929195723e4f9bf2bbad1b39e7e0277f8ea))
17/06/15 15:48:01 5401 DEBUG ParquetMetadataConverter: {
"fileMetaData" : {
"schema" : {
"name" : "spark_schema",
"repetition" : "REPEATED",
"originalType" : null,
"id" : null,
"fields" : [ {
"name" : "intKey",
"repetition" : "OPTIONAL",
"originalType" : null,
"id" : null,
"primitive" : true,
"typeLength" : 0,
"decimalMetadata" : null,
"primitiveTypeName" : "INT32"
}, {
"name" : "payload",
"repetition" : "OPTIONAL",
"originalType" : null,
"id" : null,
"primitive" : true,
"typeLength" : 0,
"decimalMetadata" : null,
"primitiveTypeName" : "BINARY"
} ],
"paths" : [ [ "intKey" ], [ "payload" ] ],
"columns" : [ {
"path" : [ "intKey" ],
"type" : "INT32",
"typeLength" : 0,
"maxRepetitionLevel" : 0,
"maxDefinitionLevel" : 1
}, {
"path" : [ "payload" ],
"type" : "BINARY",
"typeLength" : 0,
"maxRepetitionLevel" : 0,
"maxDefinitionLevel" : 1
} ],
"fieldCount" : 2,
"primitive" : false
},
"keyValueMetaData" : {
"org.apache.spark.sql.parquet.row.metadata" : "{\"type\":\"struct\",\"fields\":[{\"name\":\"intKey\",\"type\":\"integer\",\"nullable\":true,\"metadata\":{}},{\"name\":\"payload\",\"type\":\"binary\",\"nullable\":true,\"metadata\":{}}]}"
},
"createdBy" : "parquet-mr version 1.8.2 (build aa78e929195723e4f9bf2bbad1b39e7e0277f8ea)"
},
"blocks" : [ {
"columns" : [ {
"encodingStats" : {
"dictionaryEncodings" : [ ],
"dataEncodings" : [ "PLAIN" ]
},
"dictionaryPageOffset" : 0,
"valueCount" : 10,
"totalSize" : 79,
"totalUncompressedSize" : 79,
"statistics" : {
"max" : 2050664806,
"min" : 595939047,
"maxBytes" : "Zqk6eg==",
"minBytes" : "506FIw==",
"numNulls" : 0,
"empty" : false
},
"firstDataPageOffset" : 4,
"codec" : "UNCOMPRESSED",
"startingPos" : 4,
"encodings" : [ "RLE", "PLAIN", "BIT_PACKED" ],
"type" : "INT32",
"path" : [ "intKey" ]
}, {
"encodingStats" : {
"dictionaryEncodings" : [ ],
"dataEncodings" : [ "PLAIN" ]
},
"dictionaryPageOffset" : 0,
"valueCount" : 10,
"totalSize" : 12365,
"totalUncompressedSize" : 12365,
"statistics" : {
"max" : null,
"min" : null,
"maxBytes" : null,
"minBytes" : null,
"numNulls" : 0,
"empty" : true
},
"firstDataPageOffset" : 83,
"codec" : "UNCOMPRESSED",
"startingPos" : 83,
"encodings" : [ "RLE", "PLAIN", "BIT_PACKED" ],
"type" : "BINARY",
"path" : [ "payload" ]
} ],
"rowCount" : 10,
"totalByteSize" : 12444,
"path" : null,
"compressedSize" : 12444,
"startingPos" : 4
} ]
}
17/06/15 15:48:01 5422 DEBUG Client: IPC Client (1725272477) connection to flex11-40g0/10.40.0.11:9000 from demo sending #4
17/06/15 15:48:01 5423 DEBUG Client: IPC Client (1725272477) connection to flex11-40g0/10.40.0.11:9000 from demo got value #4
17/06/15 15:48:01 5423 DEBUG ProtobufRpcEngine: Call: getBlockLocations took 1ms
17/06/15 15:48:01 5424 DEBUG DFSClient: newInfo = LocatedBlocks{
fileLength=14963
underConstruction=false
blocks=[LocatedBlock{BP-350330489-10.40.0.11-1492009574738:blk_1073770093_29269; getBlockSize()=14963; corrupt=false; offset=0; locs=[DatanodeInfoWithStorage[10.40.0.22:50010,DS-bb3468dd-3e03-41ac-b1b4-854b353dd7f0,DISK]]}]
lastLocatedBlock=LocatedBlock{BP-350330489-10.40.0.11-1492009574738:blk_1073770093_29269; getBlockSize()=14963; corrupt=false; offset=0; locs=[DatanodeInfoWithStorage[10.40.0.22:50010,DS-bb3468dd-3e03-41ac-b1b4-854b353dd7f0,DISK]]}
isLastBlockComplete=true}
17/06/15 15:48:01 5424 DEBUG Client: IPC Client (1725272477) connection to flex11-40g0/10.40.0.11:9000 from demo sending #5
17/06/15 15:48:01 5424 DEBUG Client: IPC Client (1725272477) connection to flex11-40g0/10.40.0.11:9000 from demo got value #5
17/06/15 15:48:01 5425 DEBUG ProtobufRpcEngine: Call: getFileInfo took 1ms
17/06/15 15:48:01 5427 DEBUG ParquetFileFormat: Appending StructType() [empty row]
17/06/15 15:48:01 5435 DEBUG DFSClient: Connecting to datanode 10.40.0.22:50010
17/06/15 15:48:01 5445 DEBUG BytesInput: BytesInput from array of 46 bytes
17/06/15 15:48:01 5447 DEBUG BytesInput: BytesInput from array of 10286 bytes
17/06/15 15:48:01 5501 DEBUG BytesInput: converted 46 to byteArray of 46 bytes
17/06/15 15:48:01 5501 DEBUG ByteBitPackingValuesReader: reading 0 bytes for 10 values of size 0 bits.
17/06/15 15:48:01 5502 DEBUG BytesInput: converted 10286 to byteArray of 10286 bytes
17/06/15 15:48:01 5502 DEBUG ByteBitPackingValuesReader: reading 0 bytes for 10 values of size 0 bits.
17/06/15 15:48:01 5526 INFO FileScanRDD: Reading File path: hdfs://flex11-40g0:9000/sql/small.parquet/part-00001-c69c2cee-c413-4349-bac0-c87a6ef32a36.parquet, range: 0-14963, partition values: [empty row]
17/06/15 15:48:01 5527 DEBUG Client: IPC Client (1725272477) connection to flex11-40g0/10.40.0.11:9000 from demo sending #6
17/06/15 15:48:01 5528 DEBUG Client: IPC Client (1725272477) connection to flex11-40g0/10.40.0.11:9000 from demo got value #6
17/06/15 15:48:01 5528 DEBUG ProtobufRpcEngine: Call: getFileInfo took 1ms
17/06/15 15:48:01 5528 DEBUG Client: IPC Client (1725272477) connection to flex11-40g0/10.40.0.11:9000 from demo sending #7
17/06/15 15:48:01 5529 DEBUG Client: IPC Client (1725272477) connection to flex11-40g0/10.40.0.11:9000 from demo got value #7
17/06/15 15:48:01 5529 DEBUG ProtobufRpcEngine: Call: getBlockLocations took 1ms
17/06/15 15:48:01 5529 DEBUG DFSClient: newInfo = LocatedBlocks{
fileLength=14963
underConstruction=false
blocks=[LocatedBlock{BP-350330489-10.40.0.11-1492009574738:blk_1073770097_29273; getBlockSize()=14963; corrupt=false; offset=0; locs=[DatanodeInfoWithStorage[10.40.0.21:50010,DS-589ba9c4-ca96-4f79-ae58-bec019b77995,DISK]]}]
lastLocatedBlock=LocatedBlock{BP-350330489-10.40.0.11-1492009574738:blk_1073770097_29273; getBlockSize()=14963; corrupt=false; offset=0; locs=[DatanodeInfoWithStorage[10.40.0.21:50010,DS-589ba9c4-ca96-4f79-ae58-bec019b77995,DISK]]}
isLastBlockComplete=true}
17/06/15 15:48:01 5529 DEBUG ParquetFileReader: File length 14963
17/06/15 15:48:01 5529 DEBUG ParquetFileReader: reading footer index at 14955
17/06/15 15:48:01 5529 DEBUG DFSClient: Connecting to datanode 10.40.0.21:50010
17/06/15 15:48:01 5530 DEBUG SaslDataTransferClient: SASL client skipping handshake in unsecured configuration for addr = /10.40.0.21, datanodeId = DatanodeInfoWithStorage[10.40.0.21:50010,DS-589ba9c4-ca96-4f79-ae58-bec019b77995,DISK]
17/06/15 15:48:01 5536 DEBUG ParquetFileReader: read footer length: 2507, footer index: 12448
17/06/15 15:48:01 5536 DEBUG DFSClient: Connecting to datanode 10.40.0.21:50010
17/06/15 15:48:01 5539 DEBUG ParquetMetadataConverter: FileMetaData(version:1, schema:[SchemaElement(name:spark_schema, num_children:2), SchemaElement(type:INT32, repetition_type:OPTIONAL, name:intKey), SchemaElement(type:BYTE_ARRAY, repetition_type:OPTIONAL, name:payload)], num_rows:10, row_groups:[RowGroup(columns:[ColumnChunk(file_offset:4, meta_data:ColumnMetaData(type:INT32, encodings:[RLE, PLAIN, BIT_PACKED], path_in_schema:[intKey], codec:UNCOMPRESSED, num_values:10, total_uncompressed_size:79, total_compressed_size:79, data_page_offset:4, statistics:Statistics(max:7B 31 41 58, min:FF 6A 05 0A, null_count:0), encoding_stats:[PageEncodingStats(page_type:DATA_PAGE, encoding:PLAIN, count:1)])), ColumnChunk(file_offset:83, meta_data:ColumnMetaData(type:BYTE_ARRAY, encodings:[RLE, PLAIN, BIT_PACKED], path_in_schema:[payload], codec:UNCOMPRESSED, num_values:10, total_uncompressed_size:12365, total_compressed_size:12365, data_page_offset:83, statistics:Statistics(max:7F FA B8 31 67 B2 DE 78 33 74 3C C1 26 B4 CF E8 A3 22 3A 51 25 4D 5C E6 06 90 20 FF AA C4 9F 8C 2F 03 E1 03 95 B5 86 2E 2E B3 17 A6 65 A6 FC 50 47 4F 09 D4 AA 4D 5B BD 1A 46 CB 7B CF 77 66 AA 36 B5 CC 57 1F C4 22 A1 6A 32 F8 95 6C B2 B7 DB 91 C2 68 1F CB 8F FB 1F B2 DD A2 FA 3D 08 43 04 9D 66 DC 4B 39 4C 40 48 7C 8C 49 C3 79 24 ED 20 ED B6 F0 E7 78 CB 1E 39 93 71 AB 06 7C 43 55 78..., min:E8 19 B7 78 8F 7A 5C BC 70 26 0C F7 78 65 C2 5E DE B5 EF EA 20 29 EB 83 B4 BB 24 EE E9 96 6E C1 B9 94 30 35 33 14 AB E0 AB 03 E0 F7 2E 2E D9 D5 C6 D0 2D DD 9E 46 A5 36 FE 5A F1 9C 71 62 A3 7E 1C 88 60 39 1C 84 6C 75 26 69 58 88 08 5F 9C 93 B8 AA 90 27 A8 66 2E E1 32 15 74 65 E7 92 6B 14 E9 06 AB 79 0E 7D 28 DD C6 82 C9 D5 56 03 4C 4D EE 37 C1 3B BE 3E 38 E1 66 5B CF F3 E3 95 1D AE..., null_count:0), encoding_stats:[PageEncodingStats(page_type:DATA_PAGE, encoding:PLAIN, count:1)]))], total_byte_size:12444, num_rows:10)], key_value_metadata:[KeyValue(key:org.apache.spark.sql.parquet.row.metadata, value:{"type":"struct","fields":[{"name":"intKey","type":"integer","nullable":true,"metadata":{}},{"name":"payload","type":"binary","nullable":true,"metadata":{}}]})], created_by:parquet-mr version 1.8.2 (build aa78e929195723e4f9bf2bbad1b39e7e0277f8ea))
17/06/15 15:48:01 5540 DEBUG ParquetMetadataConverter: {
"fileMetaData" : {
"schema" : {
"name" : "spark_schema",
"repetition" : "REPEATED",
"originalType" : null,
"id" : null,
"fields" : [ {
"name" : "intKey",
"repetition" : "OPTIONAL",
"originalType" : null,
"id" : null,
"primitive" : true,
"typeLength" : 0,
"decimalMetadata" : null,
"primitiveTypeName" : "INT32"
}, {
"name" : "payload",
"repetition" : "OPTIONAL",
"originalType" : null,
"id" : null,
"primitive" : true,
"typeLength" : 0,
"decimalMetadata" : null,
"primitiveTypeName" : "BINARY"
} ],
"paths" : [ [ "intKey" ], [ "payload" ] ],
"columns" : [ {
"path" : [ "intKey" ],
"type" : "INT32",
"typeLength" : 0,
"maxRepetitionLevel" : 0,
"maxDefinitionLevel" : 1
}, {
"path" : [ "payload" ],
"type" : "BINARY",
"typeLength" : 0,
"maxRepetitionLevel" : 0,
"maxDefinitionLevel" : 1
} ],
"fieldCount" : 2,
"primitive" : false
},
"keyValueMetaData" : {
"org.apache.spark.sql.parquet.row.metadata" : "{\"type\":\"struct\",\"fields\":[{\"name\":\"intKey\",\"type\":\"integer\",\"nullable\":true,\"metadata\":{}},{\"name\":\"payload\",\"type\":\"binary\",\"nullable\":true,\"metadata\":{}}]}"
},
"createdBy" : "parquet-mr version 1.8.2 (build aa78e929195723e4f9bf2bbad1b39e7e0277f8ea)"
},
"blocks" : [ {
"columns" : [ {
"encodingStats" : {
"dictionaryEncodings" : [ ],
"dataEncodings" : [ "PLAIN" ]
},
"dictionaryPageOffset" : 0,
"valueCount" : 10,
"totalSize" : 79,
"totalUncompressedSize" : 79,
"statistics" : {
"max" : 1480667515,
"min" : 168127231,
"maxBytes" : "ezFBWA==",
"minBytes" : "/2oFCg==",
"numNulls" : 0,
"empty" : false
},
"firstDataPageOffset" : 4,
"codec" : "UNCOMPRESSED",
"startingPos" : 4,
"encodings" : [ "RLE", "PLAIN", "BIT_PACKED" ],
"type" : "INT32",
"path" : [ "intKey" ]
}, {
"encodingStats" : {
"dictionaryEncodings" : [ ],
"dataEncodings" : [ "PLAIN" ]
},
"dictionaryPageOffset" : 0,
"valueCount" : 10,
"totalSize" : 12365,
"totalUncompressedSize" : 12365,
"statistics" : {
"max" : null,
"min" : null,
"maxBytes" : null,
"minBytes" : null,
"numNulls" : 0,
"empty" : true
},
"firstDataPageOffset" : 83,
"codec" : "UNCOMPRESSED",
"startingPos" : 83,
"encodings" : [ "RLE", "PLAIN", "BIT_PACKED" ],
"type" : "BINARY",
"path" : [ "payload" ]
} ],
"rowCount" : 10,
"totalByteSize" : 12444,
"path" : null,
"compressedSize" : 12444,
"startingPos" : 4
} ]
}
17/06/15 15:48:01 5546 DEBUG Client: IPC Client (1725272477) connection to flex11-40g0/10.40.0.11:9000 from demo sending #8
17/06/15 15:48:01 5547 DEBUG Client: IPC Client (1725272477) connection to flex11-40g0/10.40.0.11:9000 from demo got value #8
17/06/15 15:48:01 5548 DEBUG ProtobufRpcEngine: Call: getBlockLocations took 2ms
17/06/15 15:48:01 5549 DEBUG DFSClient: newInfo = LocatedBlocks{
fileLength=14963
underConstruction=false
blocks=[LocatedBlock{BP-350330489-10.40.0.11-1492009574738:blk_1073770097_29273; getBlockSize()=14963; corrupt=false; offset=0; locs=[DatanodeInfoWithStorage[10.40.0.21:50010,DS-589ba9c4-ca96-4f79-ae58-bec019b77995,DISK]]}]
lastLocatedBlock=LocatedBlock{BP-350330489-10.40.0.11-1492009574738:blk_1073770097_29273; getBlockSize()=14963; corrupt=false; offset=0; locs=[DatanodeInfoWithStorage[10.40.0.21:50010,DS-589ba9c4-ca96-4f79-ae58-bec019b77995,DISK]]}
isLastBlockComplete=true}
17/06/15 15:48:01 5550 DEBUG Client: IPC Client (1725272477) connection to flex11-40g0/10.40.0.11:9000 from demo sending #9
17/06/15 15:48:01 5550 DEBUG Client: IPC Client (1725272477) connection to flex11-40g0/10.40.0.11:9000 from demo got value #9
17/06/15 15:48:01 5551 DEBUG ProtobufRpcEngine: Call: getFileInfo took 2ms
17/06/15 15:48:01 5552 DEBUG ParquetFileFormat: Appending StructType() [empty row]
17/06/15 15:48:01 5552 DEBUG DFSClient: Connecting to datanode 10.40.0.21:50010
17/06/15 15:48:01 5553 DEBUG BytesInput: BytesInput from array of 46 bytes
17/06/15 15:48:01 5553 DEBUG BytesInput: BytesInput from array of 10286 bytes
17/06/15 15:48:01 5553 DEBUG BytesInput: converted 46 to byteArray of 46 bytes
17/06/15 15:48:01 5553 DEBUG ByteBitPackingValuesReader: reading 0 bytes for 10 values of size 0 bits.
17/06/15 15:48:01 5554 DEBUG BytesInput: converted 10286 to byteArray of 10286 bytes
17/06/15 15:48:01 5554 DEBUG ByteBitPackingValuesReader: reading 0 bytes for 10 values of size 0 bits.
17/06/15 15:48:01 5556 INFO FileScanRDD: Reading File path: hdfs://flex11-40g0:9000/sql/small.parquet/part-00002-c69c2cee-c413-4349-bac0-c87a6ef32a36.parquet, range: 0-14963, partition values: [empty row]
17/06/15 15:48:01 5557 DEBUG Client: IPC Client (1725272477) connection to flex11-40g0/10.40.0.11:9000 from demo sending #10
17/06/15 15:48:01 5557 DEBUG Client: IPC Client (1725272477) connection to flex11-40g0/10.40.0.11:9000 from demo got value #10
17/06/15 15:48:01 5558 DEBUG ProtobufRpcEngine: Call: getFileInfo took 0ms
17/06/15 15:48:01 5558 DEBUG Client: IPC Client (1725272477) connection to flex11-40g0/10.40.0.11:9000 from demo sending #11
17/06/15 15:48:01 5558 DEBUG Client: IPC Client (1725272477) connection to flex11-40g0/10.40.0.11:9000 from demo got value #11
17/06/15 15:48:01 5558 DEBUG ProtobufRpcEngine: Call: getBlockLocations took 0ms
17/06/15 15:48:01 5559 DEBUG DFSClient: newInfo = LocatedBlocks{
fileLength=14963
underConstruction=false
blocks=[LocatedBlock{BP-350330489-10.40.0.11-1492009574738:blk_1073770098_29274; getBlockSize()=14963; corrupt=false; offset=0; locs=[DatanodeInfoWithStorage[10.40.0.19:50010,DS-d3e4af75-898b-4733-a54a-18e81a7020af,DISK]]}]
lastLocatedBlock=LocatedBlock{BP-350330489-10.40.0.11-1492009574738:blk_1073770098_29274; getBlockSize()=14963; corrupt=false; offset=0; locs=[DatanodeInfoWithStorage[10.40.0.19:50010,DS-d3e4af75-898b-4733-a54a-18e81a7020af,DISK]]}
isLastBlockComplete=true}
17/06/15 15:48:01 5559 DEBUG ParquetFileReader: File length 14963
17/06/15 15:48:01 5559 DEBUG ParquetFileReader: reading footer index at 14955
17/06/15 15:48:01 5559 DEBUG DFSClient: Connecting to datanode 10.40.0.19:50010
17/06/15 15:48:01 5559 DEBUG SaslDataTransferClient: SASL client skipping handshake in unsecured configuration for addr = /10.40.0.19, datanodeId = DatanodeInfoWithStorage[10.40.0.19:50010,DS-d3e4af75-898b-4733-a54a-18e81a7020af,DISK]
17/06/15 15:48:01 5561 DEBUG ParquetFileReader: read footer length: 2507, footer index: 12448
17/06/15 15:48:01 5561 DEBUG DFSClient: Connecting to datanode 10.40.0.19:50010
17/06/15 15:48:01 5562 DEBUG ParquetMetadataConverter: FileMetaData(version:1, schema:[SchemaElement(name:spark_schema, num_children:2), SchemaElement(type:INT32, repetition_type:OPTIONAL, name:intKey), SchemaElement(type:BYTE_ARRAY, repetition_type:OPTIONAL, name:payload)], num_rows:10, row_groups:[RowGroup(columns:[ColumnChunk(file_offset:4, meta_data:ColumnMetaData(type:INT32, encodings:[BIT_PACKED, RLE, PLAIN], path_in_schema:[intKey], codec:UNCOMPRESSED, num_values:10, total_uncompressed_size:79, total_compressed_size:79, data_page_offset:4, statistics:Statistics(max:77 B1 DD 74, min:F9 CA 2E 02, null_count:0), encoding_stats:[PageEncodingStats(page_type:DATA_PAGE, encoding:PLAIN, count:1)])), ColumnChunk(file_offset:83, meta_data:ColumnMetaData(type:BYTE_ARRAY, encodings:[BIT_PACKED, RLE, PLAIN], path_in_schema:[payload], codec:UNCOMPRESSED, num_values:10, total_uncompressed_size:12365, total_compressed_size:12365, data_page_offset:83, statistics:Statistics(max:7E CA EC E3 63 5D 31 42 53 47 DD 32 F0 D0 0F 31 0B DB 9F 08 22 3B 37 BF B0 EF 84 D7 B4 0B 9A 8E BB E0 D4 64 D4 15 DD 0F 71 E2 C5 01 7C FD 31 AF 48 70 DD FD 02 6D FF 50 51 CB 12 74 B1 CD A2 8E BD BF 20 ED 6F EF A6 62 EB E0 62 2B B5 3E 27 E6 A5 BD E7 3A 68 AD E8 FA 7D CA 4E 13 57 9A D5 9B 64 F6 B0 D8 60 9C E6 CB EE 9D EF 4E 8F 71 79 C5 E4 EB 12 4C BB 3F 1C A4 11 3A A1 54 67 23 1D 46..., min:C6 D3 73 F2 C9 6D 71 31 60 86 30 E2 84 0E 5C 6F 11 3E 05 46 18 3D 13 02 DB A5 0E 26 6C 86 A7 36 97 E8 FA 81 29 FD 62 57 13 89 AD 5F 3C 8F EC 69 A1 19 C0 A9 DA 37 C9 D0 54 56 B5 D1 1C D6 D9 0E 38 FD 90 DE 63 03 C6 32 45 F2 8D 77 22 2A 7A 18 E8 58 96 5C CA A4 EE BF 4A 63 41 24 E4 A5 14 4B C5 37 22 93 9C 65 FE 18 29 87 96 23 4E 5E ED 7E B3 D2 3C B1 D9 90 32 C4 EF FE 6E 4A B6 AA 61 28..., null_count:0), encoding_stats:[PageEncodingStats(page_type:DATA_PAGE, encoding:PLAIN, count:1)]))], total_byte_size:12444, num_rows:10)], key_value_metadata:[KeyValue(key:org.apache.spark.sql.parquet.row.metadata, value:{"type":"struct","fields":[{"name":"intKey","type":"integer","nullable":true,"metadata":{}},{"name":"payload","type":"binary","nullable":true,"metadata":{}}]})], created_by:parquet-mr version 1.8.2 (build aa78e929195723e4f9bf2bbad1b39e7e0277f8ea))
17/06/15 15:48:01 5564 DEBUG ParquetMetadataConverter: {
"fileMetaData" : {
"schema" : {
"name" : "spark_schema",
"repetition" : "REPEATED",
"originalType" : null,
"id" : null,
"fields" : [ {
"name" : "intKey",
"repetition" : "OPTIONAL",
"originalType" : null,
"id" : null,
"primitive" : true,
"typeLength" : 0,
"decimalMetadata" : null,
"primitiveTypeName" : "INT32"
}, {
"name" : "payload",
"repetition" : "OPTIONAL",
"originalType" : null,
"id" : null,
"primitive" : true,
"typeLength" : 0,
"decimalMetadata" : null,
"primitiveTypeName" : "BINARY"
} ],
"paths" : [ [ "intKey" ], [ "payload" ] ],
"columns" : [ {
"path" : [ "intKey" ],
"type" : "INT32",
"typeLength" : 0,
"maxRepetitionLevel" : 0,
"maxDefinitionLevel" : 1
}, {
"path" : [ "payload" ],
"type" : "BINARY",
"typeLength" : 0,
"maxRepetitionLevel" : 0,
"maxDefinitionLevel" : 1
} ],
"fieldCount" : 2,
"primitive" : false
},
"keyValueMetaData" : {
"org.apache.spark.sql.parquet.row.metadata" : "{\"type\":\"struct\",\"fields\":[{\"name\":\"intKey\",\"type\":\"integer\",\"nullable\":true,\"metadata\":{}},{\"name\":\"payload\",\"type\":\"binary\",\"nullable\":true,\"metadata\":{}}]}"
},
"createdBy" : "parquet-mr version 1.8.2 (build aa78e929195723e4f9bf2bbad1b39e7e0277f8ea)"
},
"blocks" : [ {
"columns" : [ {
"encodingStats" : {
"dictionaryEncodings" : [ ],
"dataEncodings" : [ "PLAIN" ]
},
"dictionaryPageOffset" : 0,
"valueCount" : 10,
"totalSize" : 79,
"totalUncompressedSize" : 79,
"statistics" : {
"max" : 1960685943,
"min" : 36621049,
"maxBytes" : "d7HddA==",
"minBytes" : "+couAg==",
"numNulls" : 0,
"empty" : false
},
"firstDataPageOffset" : 4,
"codec" : "UNCOMPRESSED",
"startingPos" : 4,
"encodings" : [ "RLE", "PLAIN", "BIT_PACKED" ],
"type" : "INT32",
"path" : [ "intKey" ]
}, {
"encodingStats" : {
"dictionaryEncodings" : [ ],
"dataEncodings" : [ "PLAIN" ]
},
"dictionaryPageOffset" : 0,
"valueCount" : 10,
"totalSize" : 12365,
"totalUncompressedSize" : 12365,
"statistics" : {
"max" : null,
"min" : null,
"maxBytes" : null,
"minBytes" : null,
"numNulls" : 0,
"empty" : true
},
"firstDataPageOffset" : 83,
"codec" : "UNCOMPRESSED",
"startingPos" : 83,
"encodings" : [ "RLE", "PLAIN", "BIT_PACKED" ],
"type" : "BINARY",
"path" : [ "payload" ]
} ],
"rowCount" : 10,
"totalByteSize" : 12444,
"path" : null,
"compressedSize" : 12444,
"startingPos" : 4
} ]
}
17/06/15 15:48:01 5565 DEBUG Client: IPC Client (1725272477) connection to flex11-40g0/10.40.0.11:9000 from demo sending #12
17/06/15 15:48:01 5566 DEBUG Client: IPC Client (1725272477) connection to flex11-40g0/10.40.0.11:9000 from demo got value #12
17/06/15 15:48:01 5566 DEBUG ProtobufRpcEngine: Call: getBlockLocations took 1ms
17/06/15 15:48:01 5566 DEBUG DFSClient: newInfo = LocatedBlocks{
fileLength=14963
underConstruction=false
blocks=[LocatedBlock{BP-350330489-10.40.0.11-1492009574738:blk_1073770098_29274; getBlockSize()=14963; corrupt=false; offset=0; locs=[DatanodeInfoWithStorage[10.40.0.19:50010,DS-d3e4af75-898b-4733-a54a-18e81a7020af,DISK]]}]
lastLocatedBlock=LocatedBlock{BP-350330489-10.40.0.11-1492009574738:blk_1073770098_29274; getBlockSize()=14963; corrupt=false; offset=0; locs=[DatanodeInfoWithStorage[10.40.0.19:50010,DS-d3e4af75-898b-4733-a54a-18e81a7020af,DISK]]}
isLastBlockComplete=true}
17/06/15 15:48:01 5566 DEBUG Client: IPC Client (1725272477) connection to flex11-40g0/10.40.0.11:9000 from demo sending #13
17/06/15 15:48:01 5567 DEBUG Client: IPC Client (1725272477) connection to flex11-40g0/10.40.0.11:9000 from demo got value #13
17/06/15 15:48:01 5567 DEBUG ProtobufRpcEngine: Call: getFileInfo took 1ms
17/06/15 15:48:01 5567 DEBUG ParquetFileFormat: Appending StructType() [empty row]
17/06/15 15:48:01 5567 DEBUG DFSClient: Connecting to datanode 10.40.0.19:50010
17/06/15 15:48:01 5568 DEBUG BytesInput: BytesInput from array of 46 bytes
17/06/15 15:48:01 5568 DEBUG BytesInput: BytesInput from array of 10286 bytes
17/06/15 15:48:01 5569 DEBUG BytesInput: converted 46 to byteArray of 46 bytes
17/06/15 15:48:01 5569 DEBUG ByteBitPackingValuesReader: reading 0 bytes for 10 values of size 0 bits.
17/06/15 15:48:01 5569 DEBUG BytesInput: converted 10286 to byteArray of 10286 bytes
17/06/15 15:48:01 5569 DEBUG ByteBitPackingValuesReader: reading 0 bytes for 10 values of size 0 bits.
17/06/15 15:48:01 5569 INFO FileScanRDD: Reading File path: hdfs://flex11-40g0:9000/sql/small.parquet/part-00003-c69c2cee-c413-4349-bac0-c87a6ef32a36.parquet, range: 0-14963, partition values: [empty row]
17/06/15 15:48:01 5570 DEBUG Client: IPC Client (1725272477) connection to flex11-40g0/10.40.0.11:9000 from demo sending #14
17/06/15 15:48:01 5570 DEBUG Client: IPC Client (1725272477) connection to flex11-40g0/10.40.0.11:9000 from demo got value #14
17/06/15 15:48:01 5571 DEBUG ProtobufRpcEngine: Call: getFileInfo took 1ms
17/06/15 15:48:01 5571 DEBUG Client: IPC Client (1725272477) connection to flex11-40g0/10.40.0.11:9000 from demo sending #15
17/06/15 15:48:01 5571 DEBUG Client: IPC Client (1725272477) connection to flex11-40g0/10.40.0.11:9000 from demo got value #15
17/06/15 15:48:01 5572 DEBUG ProtobufRpcEngine: Call: getBlockLocations took 1ms
17/06/15 15:48:01 5572 DEBUG DFSClient: newInfo = LocatedBlocks{
fileLength=14963
underConstruction=false
blocks=[LocatedBlock{BP-350330489-10.40.0.11-1492009574738:blk_1073770096_29272; getBlockSize()=14963; corrupt=false; offset=0; locs=[DatanodeInfoWithStorage[10.40.0.16:50010,DS-6b452b44-11c4-4608-86d5-c778915d5d29,DISK]]}]
lastLocatedBlock=LocatedBlock{BP-350330489-10.40.0.11-1492009574738:blk_1073770096_29272; getBlockSize()=14963; corrupt=false; offset=0; locs=[DatanodeInfoWithStorage[10.40.0.16:50010,DS-6b452b44-11c4-4608-86d5-c778915d5d29,DISK]]}
isLastBlockComplete=true}
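Each getFileInfo/getBlockLocations pair above is one NameNode RPC round trip; the LocatedBlocks dump is the DFSClient-internal view of the answer. The same placement information is available through the public FileSystem API, roughly as follows (a sketch; the URI and part-file name are taken from the FileScanRDD lines in this log):

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.BlockLocation;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // Public-API equivalent of the getFileInfo + getBlockLocations calls
    // behind the "newInfo = LocatedBlocks{...}" debug output above.
    public class BlockLocationsDemo {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(
                    URI.create("hdfs://flex11-40g0:9000"), new Configuration());
            Path part = new Path(
                    "/sql/small.parquet/part-00003-c69c2cee-c413-4349-bac0-c87a6ef32a36.parquet");
            FileStatus status = fs.getFileStatus(part);                   // getFileInfo
            BlockLocation[] blocks =
                    fs.getFileBlockLocations(status, 0, status.getLen()); // getBlockLocations
            for (BlockLocation b : blocks) {
                System.out.println(b);  // offset, length, datanode hosts
            }
        }
    }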
17/06/15 15:48:01 5572 DEBUG ParquetFileReader: File length 14963
17/06/15 15:48:01 5572 DEBUG ParquetFileReader: reading footer index at 14955
17/06/15 15:48:01 5572 DEBUG DFSClient: Connecting to datanode 10.40.0.16:50010
17/06/15 15:48:01 5572 DEBUG SaslDataTransferClient: SASL client skipping handshake in unsecured configuration for addr = /10.40.0.16, datanodeId = DatanodeInfoWithStorage[10.40.0.16:50010,DS-6b452b44-11c4-4608-86d5-c778915d5d29,DISK]
17/06/15 15:48:01 5574 DEBUG ParquetFileReader: read footer length: 2507, footer index: 12448
17/06/15 15:48:01 5574 DEBUG DFSClient: Connecting to datanode 10.40.0.16:50010
17/06/15 15:48:01 5575 DEBUG ParquetMetadataConverter: FileMetaData(version:1, schema:[SchemaElement(name:spark_schema, num_children:2), SchemaElement(type:INT32, repetition_type:OPTIONAL, name:intKey), SchemaElement(type:BYTE_ARRAY, repetition_type:OPTIONAL, name:payload)], num_rows:10, row_groups:[RowGroup(columns:[ColumnChunk(file_offset:4, meta_data:ColumnMetaData(type:INT32, encodings:[PLAIN, RLE, BIT_PACKED], path_in_schema:[intKey], codec:UNCOMPRESSED, num_values:10, total_uncompressed_size:79, total_compressed_size:79, data_page_offset:4, statistics:Statistics(max:3A A6 D1 6A, min:5C FD AC 24, null_count:0), encoding_stats:[PageEncodingStats(page_type:DATA_PAGE, encoding:PLAIN, count:1)])), ColumnChunk(file_offset:83, meta_data:ColumnMetaData(type:BYTE_ARRAY, encodings:[PLAIN, RLE, BIT_PACKED], path_in_schema:[payload], codec:UNCOMPRESSED, num_values:10, total_uncompressed_size:12365, total_compressed_size:12365, data_page_offset:83, statistics:Statistics(max:37 73 D2 62 86 EF AA 27 64 79 0D CE BD 38 78 B2 D9 B2 E7 F8 ED 95 DB B1 2F 3A 33 13 B7 72 52 93 11 34 E6 42 C2 C8 09 D3 3E 05 AF 88 CC 6D 21 98 0F 99 86 BE 61 E8 8F 35 8B A0 FC E1 97 C6 7F 66 58 45 60 0A 81 A4 F7 45 FB B0 BA AC 35 2C 61 59 46 50 05 04 96 6F A4 3B F4 B3 19 3D 49 9C FD 4C B3 AA C6 BE 76 44 27 3E 32 7E 48 41 46 DB 65 A1 F1 6B A8 D7 3F A2 56 2A DB 53 29 27 EC AC B0 C6..., min:87 08 73 BD 5F 95 6B A2 BD 51 13 E7 6D B2 3D F0 C4 F0 CE 92 2D 7E 7E 07 3B C4 BE 41 52 96 CA 6C 01 EE 5F E8 E9 9C 3D D3 13 4D 6F 24 43 F9 80 DB 98 D3 7C 25 DC 2C 3E 09 FC 93 62 8C 5F 93 90 7C 19 11 28 99 33 5D C9 69 A9 90 CB 06 3D E6 96 1E A4 B4 59 DA BA C6 E6 EC 3A C7 C4 02 54 32 03 F8 7B BE 36 16 EA F9 71 29 F5 57 C5 76 9D DE F9 D7 D2 99 31 F3 25 69 EC 71 35 D6 2A AD 76 36 14 D5..., null_count:0), encoding_stats:[PageEncodingStats(page_type:DATA_PAGE, encoding:PLAIN, count:1)]))], total_byte_size:12444, num_rows:10)], key_value_metadata:[KeyValue(key:org.apache.spark.sql.parquet.row.metadata, value:{"type":"struct","fields":[{"name":"intKey","type":"integer","nullable":true,"metadata":{}},{"name":"payload","type":"binary","nullable":true,"metadata":{}}]})], created_by:parquet-mr version 1.8.2 (build aa78e929195723e4f9bf2bbad1b39e7e0277f8ea))
17/06/15 15:48:01 5576 DEBUG ParquetMetadataConverter: {
"fileMetaData" : {
"schema" : {
"name" : "spark_schema",
"repetition" : "REPEATED",
"originalType" : null,
"id" : null,
"fields" : [ {
"name" : "intKey",
"repetition" : "OPTIONAL",
"originalType" : null,
"id" : null,
"primitive" : true,
"typeLength" : 0,
"decimalMetadata" : null,
"primitiveTypeName" : "INT32"
}, {
"name" : "payload",
"repetition" : "OPTIONAL",
"originalType" : null,
"id" : null,
"primitive" : true,
"typeLength" : 0,
"decimalMetadata" : null,
"primitiveTypeName" : "BINARY"
} ],
"paths" : [ [ "intKey" ], [ "payload" ] ],
"columns" : [ {
"path" : [ "intKey" ],
"type" : "INT32",
"typeLength" : 0,
"maxRepetitionLevel" : 0,
"maxDefinitionLevel" : 1
}, {
"path" : [ "payload" ],
"type" : "BINARY",
"typeLength" : 0,
"maxRepetitionLevel" : 0,
"maxDefinitionLevel" : 1
} ],
"fieldCount" : 2,
"primitive" : false
},
"keyValueMetaData" : {
"org.apache.spark.sql.parquet.row.metadata" : "{\"type\":\"struct\",\"fields\":[{\"name\":\"intKey\",\"type\":\"integer\",\"nullable\":true,\"metadata\":{}},{\"name\":\"payload\",\"type\":\"binary\",\"nullable\":true,\"metadata\":{}}]}"
},
"createdBy" : "parquet-mr version 1.8.2 (build aa78e929195723e4f9bf2bbad1b39e7e0277f8ea)"
},
"blocks" : [ {
"columns" : [ {
"encodingStats" : {
"dictionaryEncodings" : [ ],
"dataEncodings" : [ "PLAIN" ]
},
"dictionaryPageOffset" : 0,
"valueCount" : 10,
"totalSize" : 79,
"totalUncompressedSize" : 79,
"statistics" : {
"max" : 1792124474,
"min" : 615316828,
"maxBytes" : "OqbRag==",
"minBytes" : "XP2sJA==",
"numNulls" : 0,
"empty" : false
},
"firstDataPageOffset" : 4,
"codec" : "UNCOMPRESSED",
"startingPos" : 4,
"encodings" : [ "RLE", "PLAIN", "BIT_PACKED" ],
"type" : "INT32",
"path" : [ "intKey" ]
}, {
"encodingStats" : {
"dictionaryEncodings" : [ ],
"dataEncodings" : [ "PLAIN" ]
},
"dictionaryPageOffset" : 0,
"valueCount" : 10,
"totalSize" : 12365,
"totalUncompressedSize" : 12365,
"statistics" : {
"max" : null,
"min" : null,
"maxBytes" : null,
"minBytes" : null,
"numNulls" : 0,
"empty" : true
},
"firstDataPageOffset" : 83,
"codec" : "UNCOMPRESSED",
"startingPos" : 83,
"encodings" : [ "RLE", "PLAIN", "BIT_PACKED" ],
"type" : "BINARY",
"path" : [ "payload" ]
} ],
"rowCount" : 10,
"totalByteSize" : 12444,
"path" : null,
"compressedSize" : 12444,
"startingPos" : 4
} ]
}
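In these JSON dumps, the INT32 column statistics carry the min/max both decoded and raw: "maxBytes"/"minBytes" are the little-endian INT32 values, base64-encoded. For instance "OqbRag==" above decodes to the bytes 3A A6 D1 6A shown in the Thrift-style dump, i.e. 0x6AD1A63A = 1792124474, the "max" field. A quick check:

    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;
    import java.util.Base64;

    // Decodes the base64 "maxBytes" statistic of the intKey column above.
    public class DecodeStats {
        public static void main(String[] args) {
            byte[] raw = Base64.getDecoder().decode("OqbRag==");  // 3A A6 D1 6A
            int max = ByteBuffer.wrap(raw)
                    .order(ByteOrder.LITTLE_ENDIAN).getInt();
            System.out.println(max);  // 1792124474, matching "max" in the JSON
        }
    }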
17/06/15 15:48:01 5577 DEBUG Client: IPC Client (1725272477) connection to flex11-40g0/10.40.0.11:9000 from demo sending #16
17/06/15 15:48:01 5578 DEBUG Client: IPC Client (1725272477) connection to flex11-40g0/10.40.0.11:9000 from demo got value #16
17/06/15 15:48:01 5578 DEBUG ProtobufRpcEngine: Call: getBlockLocations took 1ms
17/06/15 15:48:01 5578 DEBUG DFSClient: newInfo = LocatedBlocks{
fileLength=14963
underConstruction=false
blocks=[LocatedBlock{BP-350330489-10.40.0.11-1492009574738:blk_1073770096_29272; getBlockSize()=14963; corrupt=false; offset=0; locs=[DatanodeInfoWithStorage[10.40.0.16:50010,DS-6b452b44-11c4-4608-86d5-c778915d5d29,DISK]]}]
lastLocatedBlock=LocatedBlock{BP-350330489-10.40.0.11-1492009574738:blk_1073770096_29272; getBlockSize()=14963; corrupt=false; offset=0; locs=[DatanodeInfoWithStorage[10.40.0.16:50010,DS-6b452b44-11c4-4608-86d5-c778915d5d29,DISK]]}
isLastBlockComplete=true}
17/06/15 15:48:01 5578 DEBUG Client: IPC Client (1725272477) connection to flex11-40g0/10.40.0.11:9000 from demo sending #17
17/06/15 15:48:01 5579 DEBUG Client: IPC Client (1725272477) connection to flex11-40g0/10.40.0.11:9000 from demo got value #17
17/06/15 15:48:01 5579 DEBUG ProtobufRpcEngine: Call: getFileInfo took 1ms
17/06/15 15:48:01 5579 DEBUG ParquetFileFormat: Appending StructType() [empty row]
17/06/15 15:48:01 5579 DEBUG DFSClient: Connecting to datanode 10.40.0.16:50010
17/06/15 15:48:01 5580 DEBUG BytesInput: BytesInput from array of 46 bytes
17/06/15 15:48:01 5580 DEBUG BytesInput: BytesInput from array of 10286 bytes
17/06/15 15:48:01 5580 DEBUG BytesInput: converted 46 to byteArray of 46 bytes
17/06/15 15:48:01 5580 DEBUG ByteBitPackingValuesReader: reading 0 bytes for 10 values of size 0 bits.
17/06/15 15:48:01 5580 DEBUG BytesInput: converted 10286 to byteArray of 10286 bytes
17/06/15 15:48:01 5580 DEBUG ByteBitPackingValuesReader: reading 0 bytes for 10 values of size 0 bits.
17/06/15 15:48:01 5581 INFO FileScanRDD: Reading File path: hdfs://flex11-40g0:9000/sql/small.parquet/part-00004-c69c2cee-c413-4349-bac0-c87a6ef32a36.parquet, range: 0-14963, partition values: [empty row]
17/06/15 15:48:01 5582 DEBUG Client: IPC Client (1725272477) connection to flex11-40g0/10.40.0.11:9000 from demo sending #18
17/06/15 15:48:01 5582 DEBUG Client: IPC Client (1725272477) connection to flex11-40g0/10.40.0.11:9000 from demo got value #18
17/06/15 15:48:01 5582 DEBUG ProtobufRpcEngine: Call: getFileInfo took 0ms
17/06/15 15:48:01 5583 DEBUG Client: IPC Client (1725272477) connection to flex11-40g0/10.40.0.11:9000 from demo sending #19
17/06/15 15:48:01 5583 DEBUG Client: IPC Client (1725272477) connection to flex11-40g0/10.40.0.11:9000 from demo got value #19
17/06/15 15:48:01 5583 DEBUG ProtobufRpcEngine: Call: getBlockLocations took 1ms
17/06/15 15:48:01 5583 DEBUG DFSClient: newInfo = LocatedBlocks{
fileLength=14963
underConstruction=false
blocks=[LocatedBlock{BP-350330489-10.40.0.11-1492009574738:blk_1073770094_29270; getBlockSize()=14963; corrupt=false; offset=0; locs=[DatanodeInfoWithStorage[10.40.0.20:50010,DS-7e06b257-5427-4f5b-bb30-610c5653429c,DISK]]}]
lastLocatedBlock=LocatedBlock{BP-350330489-10.40.0.11-1492009574738:blk_1073770094_29270; getBlockSize()=14963; corrupt=false; offset=0; locs=[DatanodeInfoWithStorage[10.40.0.20:50010,DS-7e06b257-5427-4f5b-bb30-610c5653429c,DISK]]}
isLastBlockComplete=true}
17/06/15 15:48:01 5583 DEBUG ParquetFileReader: File length 14963
17/06/15 15:48:01 5583 DEBUG ParquetFileReader: reading footer index at 14955
17/06/15 15:48:01 5583 DEBUG DFSClient: Connecting to datanode 10.40.0.20:50010
17/06/15 15:48:01 5584 DEBUG SaslDataTransferClient: SASL client skipping handshake in unsecured configuration for addr = /10.40.0.20, datanodeId = DatanodeInfoWithStorage[10.40.0.20:50010,DS-7e06b257-5427-4f5b-bb30-610c5653429c,DISK]
17/06/15 15:48:01 5585 DEBUG ParquetFileReader: read footer length: 2507, footer index: 12448
17/06/15 15:48:01 5585 DEBUG DFSClient: Connecting to datanode 10.40.0.20:50010
17/06/15 15:48:01 5586 DEBUG ParquetMetadataConverter: FileMetaData(version:1, schema:[SchemaElement(name:spark_schema, num_children:2), SchemaElement(type:INT32, repetition_type:OPTIONAL, name:intKey), SchemaElement(type:BYTE_ARRAY, repetition_type:OPTIONAL, name:payload)], num_rows:10, row_groups:[RowGroup(columns:[ColumnChunk(file_offset:4, meta_data:ColumnMetaData(type:INT32, encodings:[BIT_PACKED, RLE, PLAIN], path_in_schema:[intKey], codec:UNCOMPRESSED, num_values:10, total_uncompressed_size:79, total_compressed_size:79, data_page_offset:4, statistics:Statistics(max:C2 A3 AB 56, min:C0 32 1A 0C, null_count:0), encoding_stats:[PageEncodingStats(page_type:DATA_PAGE, encoding:PLAIN, count:1)])), ColumnChunk(file_offset:83, meta_data:ColumnMetaData(type:BYTE_ARRAY, encodings:[BIT_PACKED, RLE, PLAIN], path_in_schema:[payload], codec:UNCOMPRESSED, num_values:10, total_uncompressed_size:12365, total_compressed_size:12365, data_page_offset:83, statistics:Statistics(max:4E 74 9D 11 44 90 D9 44 A2 37 2E 38 6E EB 25 12 56 D2 3A 20 7D 15 64 E2 53 9B 04 CF 87 CA CD 8A DA 8F 35 E2 DA 88 B6 F7 5A BF 96 B3 2A 0F 06 9C BE 02 05 3B 98 19 E0 DE BB 38 C5 F6 6F 0C AF 10 DB EF 17 EF BE 59 6B FE 62 C8 FA 0E 41 57 78 13 37 3B B8 46 39 FC 8E 3C 7A A8 F1 2F 1D 2B CC F2 75 E7 7C D2 48 D1 29 26 FF DA EF AC 19 D7 85 EB 8F 18 98 98 1A A5 57 E5 91 4D 61 2D 27 77 33 EA..., min:87 99 AC 01 0E 16 3E 96 65 D2 53 42 C8 B1 20 23 26 8E 1A EF B2 89 04 D1 42 D7 03 62 BB 1B 9C 05 BB D4 CA 38 35 D4 53 F9 1A 50 77 F0 7E 38 30 4F AE 78 30 53 12 26 D4 6D 19 7A DB 0D 60 37 23 7B 73 BF 74 EF 0F 82 C9 A2 87 D1 97 8C A4 6E B3 08 FB F5 13 5B 9E 6F 5D B7 5E DE 24 E4 86 6E C9 CC 6D 21 75 45 86 EF E6 DD 1B B6 AA FC 80 C0 89 94 15 87 7F 34 C2 96 AA 44 D9 46 25 63 C9 D6 AC 82..., null_count:0), encoding_stats:[PageEncodingStats(page_type:DATA_PAGE, encoding:PLAIN, count:1)]))], total_byte_size:12444, num_rows:10)], key_value_metadata:[KeyValue(key:org.apache.spark.sql.parquet.row.metadata, value:{"type":"struct","fields":[{"name":"intKey","type":"integer","nullable":true,"metadata":{}},{"name":"payload","type":"binary","nullable":true,"metadata":{}}]})], created_by:parquet-mr version 1.8.2 (build aa78e929195723e4f9bf2bbad1b39e7e0277f8ea))
17/06/15 15:48:01 5587 DEBUG ParquetMetadataConverter: {
"fileMetaData" : {
"schema" : {
"name" : "spark_schema",
"repetition" : "REPEATED",
"originalType" : null,
"id" : null,
"fields" : [ {
"name" : "intKey",
"repetition" : "OPTIONAL",
"originalType" : null,
"id" : null,
"primitive" : true,
"typeLength" : 0,
"decimalMetadata" : null,
"primitiveTypeName" : "INT32"
}, {
"name" : "payload",
"repetition" : "OPTIONAL",
"originalType" : null,
"id" : null,
"primitive" : true,
"typeLength" : 0,
"decimalMetadata" : null,
"primitiveTypeName" : "BINARY"
} ],
"paths" : [ [ "intKey" ], [ "payload" ] ],
"columns" : [ {
"path" : [ "intKey" ],
"type" : "INT32",
"typeLength" : 0,
"maxRepetitionLevel" : 0,
"maxDefinitionLevel" : 1
}, {
"path" : [ "payload" ],
"type" : "BINARY",
"typeLength" : 0,
"maxRepetitionLevel" : 0,
"maxDefinitionLevel" : 1
} ],
"fieldCount" : 2,
"primitive" : false
},
"keyValueMetaData" : {
"org.apache.spark.sql.parquet.row.metadata" : "{\"type\":\"struct\",\"fields\":[{\"name\":\"intKey\",\"type\":\"integer\",\"nullable\":true,\"metadata\":{}},{\"name\":\"payload\",\"type\":\"binary\",\"nullable\":true,\"metadata\":{}}]}"
},
"createdBy" : "parquet-mr version 1.8.2 (build aa78e929195723e4f9bf2bbad1b39e7e0277f8ea)"
},
"blocks" : [ {
"columns" : [ {
"encodingStats" : {
"dictionaryEncodings" : [ ],
"dataEncodings" : [ "PLAIN" ]
},
"dictionaryPageOffset" : 0,
"valueCount" : 10,
"totalSize" : 79,
"totalUncompressedSize" : 79,
"statistics" : {
"max" : 1454089154,
"min" : 203043520,
"maxBytes" : "wqOrVg==",
"minBytes" : "wDIaDA==",
"numNulls" : 0,
"empty" : false
},
"firstDataPageOffset" : 4,
"codec" : "UNCOMPRESSED",
"startingPos" : 4,
"encodings" : [ "RLE", "PLAIN", "BIT_PACKED" ],
"type" : "INT32",
"path" : [ "intKey" ]
}, {
"encodingStats" : {
"dictionaryEncodings" : [ ],
"dataEncodings" : [ "PLAIN" ]
},
"dictionaryPageOffset" : 0,
"valueCount" : 10,
"totalSize" : 12365,
"totalUncompressedSize" : 12365,
"statistics" : {
"max" : null,
"min" : null,
"maxBytes" : null,
"minBytes" : null,
"numNulls" : 0,
"empty" : true
},
"firstDataPageOffset" : 83,
"codec" : "UNCOMPRESSED",
"startingPos" : 83,
"encodings" : [ "RLE", "PLAIN", "BIT_PACKED" ],
"type" : "BINARY",
"path" : [ "payload" ]
} ],
"rowCount" : 10,
"totalByteSize" : 12444,
"path" : null,
"compressedSize" : 12444,
"startingPos" : 4
} ]
}
17/06/15 15:48:01 5588 DEBUG Client: IPC Client (1725272477) connection to flex11-40g0/10.40.0.11:9000 from demo sending #20
17/06/15 15:48:01 5589 DEBUG Client: IPC Client (1725272477) connection to flex11-40g0/10.40.0.11:9000 from demo got value #20
17/06/15 15:48:01 5589 DEBUG ProtobufRpcEngine: Call: getBlockLocations took 1ms
17/06/15 15:48:01 5589 DEBUG DFSClient: newInfo = LocatedBlocks{
fileLength=14963
underConstruction=false
blocks=[LocatedBlock{BP-350330489-10.40.0.11-1492009574738:blk_1073770094_29270; getBlockSize()=14963; corrupt=false; offset=0; locs=[DatanodeInfoWithStorage[10.40.0.20:50010,DS-7e06b257-5427-4f5b-bb30-610c5653429c,DISK]]}]
lastLocatedBlock=LocatedBlock{BP-350330489-10.40.0.11-1492009574738:blk_1073770094_29270; getBlockSize()=14963; corrupt=false; offset=0; locs=[DatanodeInfoWithStorage[10.40.0.20:50010,DS-7e06b257-5427-4f5b-bb30-610c5653429c,DISK]]}
isLastBlockComplete=true}
17/06/15 15:48:01 5590 DEBUG Client: IPC Client (1725272477) connection to flex11-40g0/10.40.0.11:9000 from demo sending #21
17/06/15 15:48:01 5590 DEBUG Client: IPC Client (1725272477) connection to flex11-40g0/10.40.0.11:9000 from demo got value #21
17/06/15 15:48:01 5590 DEBUG ProtobufRpcEngine: Call: getFileInfo took 0ms
17/06/15 15:48:01 5591 DEBUG ParquetFileFormat: Appending StructType() [empty row]
17/06/15 15:48:01 5591 DEBUG DFSClient: Connecting to datanode 10.40.0.20:50010
17/06/15 15:48:01 5591 DEBUG BytesInput: BytesInput from array of 46 bytes
17/06/15 15:48:01 5591 DEBUG BytesInput: BytesInput from array of 10286 bytes
17/06/15 15:48:01 5592 DEBUG BytesInput: converted 46 to byteArray of 46 bytes
17/06/15 15:48:01 5592 DEBUG ByteBitPackingValuesReader: reading 0 bytes for 10 values of size 0 bits.
17/06/15 15:48:01 5592 DEBUG BytesInput: converted 10286 to byteArray of 10286 bytes
17/06/15 15:48:01 5592 DEBUG ByteBitPackingValuesReader: reading 0 bytes for 10 values of size 0 bits.
17/06/15 15:48:01 5598 DEBUG IntColumnBuilder: Compressor for [intKey]: org.apache.spark.sql.execution.columnar.compression.PassThrough$Encoder@6eda0a22, ratio: 1.0
17/06/15 15:48:01 5601 INFO MemoryStore: Block rdd_4_0 stored as values in memory (estimated size 50.8 KB, free 34.0 GB)
17/06/15 15:48:01 5614 DEBUG BlockManagerMaster: Updated info of block rdd_4_0
17/06/15 15:48:01 5614 DEBUG BlockManager: Told master about block rdd_4_0
17/06/15 15:48:02 5615 DEBUG BlockManager: Put block rdd_4_0 locally took 600 ms
17/06/15 15:48:02 5617 DEBUG BlockManager: Putting block rdd_4_0 without replication took 602 ms
17/06/15 15:48:02 5617 DEBUG BlockManager: Getting local block rdd_4_0
17/06/15 15:48:02 5619 DEBUG BlockManager: Level for block rdd_4_0 is StorageLevel(disk, memory, deserialized, 1 replicas)
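The level printed for rdd_4_0, "StorageLevel(disk, memory, deserialized, 1 replicas)", is MEMORY_AND_DISK: deserialized values kept in memory (the MemoryStore line above) with disk as the spill fallback. In the Java API the same level reads as (a sketch):

    import org.apache.spark.api.java.StorageLevels;
    import org.apache.spark.storage.StorageLevel;

    // MEMORY_AND_DISK matches the "StorageLevel(disk, memory, deserialized,
    // 1 replicas)" string logged by the BlockManager above.
    public class LevelDemo {
        public static void main(String[] args) {
            StorageLevel level = StorageLevels.MEMORY_AND_DISK;
            System.out.println(level.description());
        }
    }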
17/06/15 15:48:02 5642 DEBUG GeneratePredicate: Generated predicate 'true':
/* 001 */ public SpecificPredicate generate(Object[] references) {
/* 002 */ return new SpecificPredicate(references);
/* 003 */ }
/* 004 */
/* 005 */ class SpecificPredicate extends org.apache.spark.sql.catalyst.expressions.codegen.Predicate {
/* 006 */ private final Object[] references;
/* 007 */
/* 008 */
/* 009 */ public SpecificPredicate(Object[] references) {
/* 010 */ this.references = references;
/* 011 */
/* 012 */ }
/* 013 */
/* 014 */ public void initialize(int partitionIndex) {
/* 015 */
/* 016 */ }
/* 017 */
/* 018 */
/* 019 */
/* 020 */ public boolean eval(InternalRow i) {
/* 021 */
/* 022 */ return !false && true;
/* 023 */ }
/* 024 */ }
17/06/15 15:48:02 5654 INFO CodeGenerator: Code generated in 12.602941 ms
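The SpecificPredicate above comes from a filter whose condition has been constant-folded: eval returns "!false && true", i.e. every row passes. Stripped of the references plumbing, a hand-written class of the same shape would be (a sketch against the Spark-internal base class named in the generated source):

    import org.apache.spark.sql.catalyst.InternalRow;

    // Hand-written equivalent of the generated SpecificPredicate: the
    // constant-folded condition "!false && true" accepts every row.
    class AlwaysTruePredicate
            extends org.apache.spark.sql.catalyst.expressions.codegen.Predicate {
        public void initialize(int partitionIndex) { }
        public boolean eval(InternalRow i) {
            return true;
        }
    }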
17/06/15 15:48:02 5664 DEBUG GenerateColumnAccessor: Generated ColumnarIterator:
/* 001 */ import java.nio.ByteBuffer;
/* 002 */ import java.nio.ByteOrder;
/* 003 */ import scala.collection.Iterator;
/* 004 */ import org.apache.spark.sql.types.DataType;
/* 005 */ import org.apache.spark.sql.catalyst.expressions.codegen.BufferHolder;
/* 006 */ import org.apache.spark.sql.catalyst.expressions.codegen.UnsafeRowWriter;
/* 007 */ import org.apache.spark.sql.execution.columnar.MutableUnsafeRow;
/* 008 */
/* 009 */ public SpecificColumnarIterator generate(Object[] references) {
/* 010 */ return new SpecificColumnarIterator();
/* 011 */ }
/* 012 */
/* 013 */ class SpecificColumnarIterator extends org.apache.spark.sql.execution.columnar.ColumnarIterator {
/* 014 */
/* 015 */ private ByteOrder nativeOrder = null;
/* 016 */ private byte[][] buffers = null;
/* 017 */ private UnsafeRow unsafeRow = new UnsafeRow(0);
/* 018 */ private BufferHolder bufferHolder = new BufferHolder(unsafeRow);
/* 019 */ private UnsafeRowWriter rowWriter = new UnsafeRowWriter(bufferHolder, 0);
/* 020 */ private MutableUnsafeRow mutableRow = null;
/* 021 */
/* 022 */ private int currentRow = 0;
/* 023 */ private int numRowsInBatch = 0;
/* 024 */
/* 025 */ private scala.collection.Iterator input = null;
/* 026 */ private DataType[] columnTypes = null;
/* 027 */ private int[] columnIndexes = null;
/* 028 */
/* 029 */
/* 030 */
/* 031 */ public SpecificColumnarIterator() {
/* 032 */ this.nativeOrder = ByteOrder.nativeOrder();
/* 033 */ this.buffers = new byte[0][];
/* 034 */ this.mutableRow = new MutableUnsafeRow(rowWriter);
/* 035 */ }
/* 036 */
/* 037 */ public void initialize(Iterator input, DataType[] columnTypes, int[] columnIndexes) {
/* 038 */ this.input = input;
/* 039 */ this.columnTypes = columnTypes;
/* 040 */ this.columnIndexes = columnIndexes;
/* 041 */ }
/* 042 */
/* 043 */
/* 044 */
/* 045 */ public boolean hasNext() {
/* 046 */ if (currentRow < numRowsInBatch) {
/* 047 */ return true;
/* 048 */ }
/* 049 */ if (!input.hasNext()) {
/* 050 */ return false;
/* 051 */ }
/* 052 */
/* 053 */ org.apache.spark.sql.execution.columnar.CachedBatch batch = (org.apache.spark.sql.execution.columnar.CachedBatch) input.next();
/* 054 */ currentRow = 0;
/* 055 */ numRowsInBatch = batch.numRows();
/* 056 */ for (int i = 0; i < columnIndexes.length; i ++) {
/* 057 */ buffers[i] = batch.buffers()[columnIndexes[i]];
/* 058 */ }
/* 059 */
/* 060 */
/* 061 */ return hasNext();
/* 062 */ }
/* 063 */
/* 064 */ public InternalRow next() {
/* 065 */ currentRow += 1;
/* 066 */ bufferHolder.reset();
/* 067 */ rowWriter.zeroOutNullBytes();
/* 068 */
/* 069 */ unsafeRow.setTotalSize(bufferHolder.totalSize());
/* 070 */ return unsafeRow;
/* 071 */ }
/* 072 */ }
17/06/15 15:48:02 5690 INFO CodeGenerator: Code generated in 25.922259 ms
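The SpecificColumnarIterator above replays in-memory CachedBatch buffers, and it is worth noting how little it does here: unsafeRow is constructed with zero fields and columnIndexes is empty, so hasNext() only tracks row counts per batch and next() returns an empty row; the aggregation downstream needs no column values at all. A job shape consistent with this log, reconstructed from the FileScanRDD paths and the count-style aggregation that follows (hypothetical; the actual driver program is not part of this log):

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;

    // Hypothetical reconstruction of the driver program behind this
    // executor log: a cached Parquet scan followed by a count.
    public class SmallParquetCount {
        public static void main(String[] args) {
            SparkSession spark = SparkSession.builder()
                    .appName("sql-small").getOrCreate();
            Dataset<Row> df = spark.read()
                    .parquet("hdfs://flex11-40g0:9000/sql/small.parquet");
            df.cache();                      // produces the rdd_4_* blocks
            System.out.println(df.count()); // drives the zero-column aggregation
            spark.stop();
        }
    }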
17/06/15 15:48:02 5694 DEBUG CodeGenerator:
/* 001 */ public Object generate(Object[] references) {
/* 002 */ return new GeneratedIterator(references);
/* 003 */ }
/* 004 */
/* 005 */ final class GeneratedIterator extends org.apache.spark.sql.execution.BufferedRowIterator {
/* 006 */ private Object[] references;
/* 007 */ private scala.collection.Iterator[] inputs;
/* 008 */ private boolean agg_initAgg;
/* 009 */ private boolean agg_bufIsNull;
/* 010 */ private long agg_bufValue;
/* 011 */ private scala.collection.Iterator inputadapter_input;
/* 012 */ private org.apache.spark.sql.execution.metric.SQLMetric agg_numOutputRows;
/* 013 */ private org.apache.spark.sql.execution.metric.SQLMetric agg_aggTime;
/* 014 */ private UnsafeRow agg_result;
/* 015 */ private org.apache.spark.sql.catalyst.expressions.codegen.BufferHolder agg_holder;
/* 016 */ private org.apache.spark.sql.catalyst.expressions.codegen.UnsafeRowWriter agg_rowWriter;
/* 017 */
/* 018 */ public GeneratedIterator(Object[] references) {
/* 019 */ this.references = references;
/* 020 */ }
/* 021 */
/* 022 */ public void init(int index, scala.collection.Iterator[] inputs) {
/* 023 */ partitionIndex = index;
/* 024 */ this.inputs = inputs;
/* 025 */ agg_initAgg = false;
/* 026 */
/* 027 */ inputadapter_input = inputs[0];
/* 028 */ this.agg_numOutputRows = (org.apache.spark.sql.execution.metric.SQLMetric) references[0];
/* 029 */ this.agg_aggTime = (org.apache.spark.sql.execution.metric.SQLMetric) references[1];
/* 030 */ agg_result = new UnsafeRow(1);
/* 031 */ this.agg_holder = new org.apache.spark.sql.catalyst.expressions.codegen.BufferHolder(agg_result, 0);
/* 032 */ this.agg_rowWriter = new org.apache.spark.sql.catalyst.expressions.codegen.UnsafeRowWriter(agg_holder, 1);
/* 033 */
/* 034 */ }
/* 035 */
/* 036 */ private void agg_doAggregateWithoutKey() throws java.io.IOException {
/* 037 */ // initialize aggregation buffer
/* 038 */ agg_bufIsNull = false;
/* 039 */ agg_bufValue = 0L;
/* 040 */
/* 041 */ while (inputadapter_input.hasNext()) {
/* 042 */ InternalRow inputadapter_row = (InternalRow) inputadapter_input.next();
/* 043 */ // do aggregate
/* 044 */ // common sub-expressions
/* 045 */
/* 046 */ // evaluate aggregate function
/* 047 */ boolean agg_isNull1 = false;
/* 048 */
/* 049 */ long agg_value1 = -1L;
/* 050 */ agg_value1 = agg_bufValue + 1L;
/* 051 */ // update aggregation buffer
/* 052 */ agg_bufIsNull = false;
/* 053 */ agg_bufValue = agg_value1;
/* 054 */ if (shouldStop()) return;
/* 055 */ }
/* 056 */
/* 057 */ }
/* 058 */
/* 059 */ protected void processNext() throws java.io.IOException {
/* 060 */ while (!agg_initAgg) {
/* 061 */ agg_initAgg = true;
/* 062 */ long agg_beforeAgg = System.nanoTime();
/* 063 */ agg_doAggregateWithoutKey();
/* 064 */ agg_aggTime.add((System.nanoTime() - agg_beforeAgg) / 1000000);
/* 065 */
/* 066 */ // output the result
/* 067 */
/* 068 */ agg_numOutputRows.add(1);
/* 069 */ agg_rowWriter.zeroOutNullBytes();
/* 070 */
/* 071 */ if (agg_bufIsNull) {
/* 072 */ agg_rowWriter.setNullAt(0);
/* 073 */ } else {
/* 074 */ agg_rowWriter.write(0, agg_bufValue);
/* 075 */ }
/* 076 */ append(agg_result);
/* 077 */ }
/* 078 */ }
/* 079 */ }
17/06/15 15:48:02 5705 INFO CodeGenerator: Code generated in 13.592832 ms
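The GeneratedIterator above is whole-stage code generation for a count aggregation without grouping keys: a single long buffer, incremented once per input row, emitted once as a one-field UnsafeRow. Without the codegen plumbing the logic reduces to (a sketch):

    import java.util.Arrays;
    import java.util.Iterator;

    // Core of agg_doAggregateWithoutKey() above, minus codegen plumbing:
    // count the input rows, then emit the single aggregate value once.
    public class CountWithoutKey {
        static long countRows(Iterator<?> input) {
            long count = 0L;          // agg_bufValue, initialized to 0L
            while (input.hasNext()) {
                input.next();
                count += 1L;          // agg_value1 = agg_bufValue + 1L
            }
            return count;             // written once via agg_rowWriter + append
        }
        public static void main(String[] args) {
            System.out.println(countRows(Arrays.asList(1, 2, 3).iterator()));  // 3
        }
    }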
17/06/15 15:48:02 5718 INFO CrailShuffleWriter: shuffler writer: initTime 2956.0, runTime 717962.0, initRatio 242.882949932341, overhead 0.4117209545909115
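The CrailShuffleWriter numbers are self-consistent: initRatio equals runTime/initTime = 717962.0/2956.0 ≈ 242.88, and overhead equals initTime/runTime × 100 ≈ 0.41, i.e. writer setup is roughly 0.4% of the write path (the unit of the raw times is not stated in the log). A quick check:

    // Reproduces the CrailShuffleWriter metric arithmetic from the line above.
    public class ShuffleWriterMetrics {
        public static void main(String[] args) {
            double initTime = 2956.0;
            double runTime = 717962.0;
            System.out.println("initRatio: " + (runTime / initTime));        // ~242.8829
            System.out.println("overhead:  " + (initTime / runTime * 100));  // ~0.4117 (%)
        }
    }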
17/06/15 15:48:02 5729 INFO Executor: Finished task 0.0 in stage 1.0 (TID 1). 2664 bytes result sent to driver
17/06/15 15:48:02 5736 INFO CoarseGrainedExecutorBackend: Got assigned task 2
17/06/15 15:48:02 5736 INFO Executor: Running task 1.0 in stage 1.0 (TID 2)
17/06/15 15:48:02 5739 DEBUG Executor: Task 2's epoch is 0
17/06/15 15:48:02 5742 DEBUG BlockManager: Getting local block rdd_4_1
17/06/15 15:48:02 5742 DEBUG BlockManager: Block rdd_4_1 was not found
17/06/15 15:48:02 5742 DEBUG BlockManager: Getting remote block rdd_4_1
17/06/15 15:48:02 5744 DEBUG BlockManager: Block rdd_4_1 not found
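The four BlockManager lines above are a cache miss for rdd_4_1: no local copy, no remote copy, so the partition is recomputed from its lineage, which is why the FileScanRDD reads for part-00005 onward follow immediately. The lookup shape, sketched generically (this is not Spark's BlockManager API):

    import java.util.Optional;
    import java.util.function.Supplier;

    // Generic shape of the get-or-compute path above: try local, then
    // remote, then fall back to recomputing the partition from lineage.
    public class GetOrCompute {
        static <T> T get(Optional<T> local, Optional<T> remote, Supplier<T> recompute) {
            return local.orElseGet(() -> remote.orElseGet(recompute));
        }
        public static void main(String[] args) {
            // rdd_4_1 path: both lookups came back empty, so recompute runs.
            String v = get(Optional.empty(), Optional.empty(), () -> "recomputed");
            System.out.println(v);
        }
    }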
17/06/15 15:48:02 5745 INFO FileScanRDD: Reading File path: hdfs://flex11-40g0:9000/sql/small.parquet/part-00005-c69c2cee-c413-4349-bac0-c87a6ef32a36.parquet, range: 0-14963, partition values: [empty row]
17/06/15 15:48:02 5746 DEBUG Client: IPC Client (1725272477) connection to flex11-40g0/10.40.0.11:9000 from demo sending #22
17/06/15 15:48:02 5747 DEBUG Client: IPC Client (1725272477) connection to flex11-40g0/10.40.0.11:9000 from demo got value #22
17/06/15 15:48:02 5747 DEBUG ProtobufRpcEngine: Call: getFileInfo took 1ms
17/06/15 15:48:02 5747 DEBUG Client: IPC Client (1725272477) connection to flex11-40g0/10.40.0.11:9000 from demo sending #23
17/06/15 15:48:02 5747 DEBUG Client: IPC Client (1725272477) connection to flex11-40g0/10.40.0.11:9000 from demo got value #23
17/06/15 15:48:02 5748 DEBUG ProtobufRpcEngine: Call: getBlockLocations took 1ms
17/06/15 15:48:02 5748 DEBUG DFSClient: newInfo = LocatedBlocks{
fileLength=14963
underConstruction=false
blocks=[LocatedBlock{BP-350330489-10.40.0.11-1492009574738:blk_1073770092_29268; getBlockSize()=14963; corrupt=false; offset=0; locs=[DatanodeInfoWithStorage[10.40.0.23:50010,DS-affd1c1e-3e1c-4abe-86f2-2918066d12ad,DISK]]}]
lastLocatedBlock=LocatedBlock{BP-350330489-10.40.0.11-1492009574738:blk_1073770092_29268; getBlockSize()=14963; corrupt=false; offset=0; locs=[DatanodeInfoWithStorage[10.40.0.23:50010,DS-affd1c1e-3e1c-4abe-86f2-2918066d12ad,DISK]]}
isLastBlockComplete=true}
17/06/15 15:48:02 5748 DEBUG ParquetFileReader: File length 14963
17/06/15 15:48:02 5748 DEBUG ParquetFileReader: reading footer index at 14955
17/06/15 15:48:02 5748 DEBUG DFSClient: Connecting to datanode 10.40.0.23:50010
17/06/15 15:48:02 5748 DEBUG SaslDataTransferClient: SASL client skipping handshake in unsecured configuration for addr = /10.40.0.23, datanodeId = DatanodeInfoWithStorage[10.40.0.23:50010,DS-affd1c1e-3e1c-4abe-86f2-2918066d12ad,DISK]
17/06/15 15:48:02 5749 DEBUG ParquetFileReader: read footer length: 2507, footer index: 12448
17/06/15 15:48:02 5750 DEBUG DFSClient: Connecting to datanode 10.40.0.23:50010
17/06/15 15:48:02 5751 DEBUG ParquetMetadataConverter: FileMetaData(version:1, schema:[SchemaElement(name:spark_schema, num_children:2), SchemaElement(type:INT32, repetition_type:OPTIONAL, name:intKey), SchemaElement(type:BYTE_ARRAY, repetition_type:OPTIONAL, name:payload)], num_rows:10, row_groups:[RowGroup(columns:[ColumnChunk(file_offset:4, meta_data:ColumnMetaData(type:INT32, encodings:[PLAIN, BIT_PACKED, RLE], path_in_schema:[intKey], codec:UNCOMPRESSED, num_values:10, total_uncompressed_size:79, total_compressed_size:79, data_page_offset:4, statistics:Statistics(max:42 19 62 68, min:7C C9 0F 00, null_count:0), encoding_stats:[PageEncodingStats(page_type:DATA_PAGE, encoding:PLAIN, count:1)])), ColumnChunk(file_offset:83, meta_data:ColumnMetaData(type:BYTE_ARRAY, encodings:[PLAIN, BIT_PACKED, RLE], path_in_schema:[payload], codec:UNCOMPRESSED, num_values:10, total_uncompressed_size:12365, total_compressed_size:12365, data_page_offset:83, statistics:Statistics(max:7F E0 91 6D A8 4F 00 4D AC 81 B5 AB D2 C2 03 52 C7 7A 02 1F 22 F3 7B DD 92 9A 74 C7 3C 8C 31 BF 14 AB 73 37 C0 F7 12 5F 4B BE EF F8 1C 01 95 F2 B3 9D 77 7C 54 12 88 83 80 AB A3 67 99 12 9A B0 BB A2 F4 02 9C 50 D8 81 AA 8B 4D DD 0A 46 8C 02 29 FC 80 6B 93 36 B6 85 D8 E7 1F BB 63 04 BB CE 4C EA 2A 20 02 DD 08 DD AE A5 CB 16 EB EC 4E 00 63 9A 8A 48 5C E5 25 37 B0 50 AD 56 32 1F 15 39..., min:9D 97 13 58 05 59 7A 03 BB EB 30 FA 1B ED 0C 4B 2B 39 46 62 06 9F 4C A7 01 F6 E1 D2 2A 06 6F 96 3F 91 1A A9 05 8A 3C C6 9F 54 F0 E7 6E 84 9B 4A 24 F0 F0 B4 B5 AA 33 E3 73 59 4B 87 02 53 2B 2D 4E 0C 1C 7B B6 ED 7A 9D 98 EE 83 DF 93 36 43 8D D2 71 20 8B F5 47 CF 49 F8 D4 AB 64 67 6A 4A D1 17 F5 36 AE 53 45 EA E5 24 B8 D1 E8 A1 89 FF 7C CD 1C 77 4A 84 5D B2 B1 67 2B 0A 3C C3 48 1C 0F..., null_count:0), encoding_stats:[PageEncodingStats(page_type:DATA_PAGE, encoding:PLAIN, count:1)]))], total_byte_size:12444, num_rows:10)], key_value_metadata:[KeyValue(key:org.apache.spark.sql.parquet.row.metadata, value:{"type":"struct","fields":[{"name":"intKey","type":"integer","nullable":true,"metadata":{}},{"name":"payload","type":"binary","nullable":true,"metadata":{}}]})], created_by:parquet-mr version 1.8.2 (build aa78e929195723e4f9bf2bbad1b39e7e0277f8ea))
17/06/15 15:48:02 5752 DEBUG ParquetMetadataConverter: {
"fileMetaData" : {
"schema" : {
"name" : "spark_schema",
"repetition" : "REPEATED",
"originalType" : null,
"id" : null,
"fields" : [ {
"name" : "intKey",
"repetition" : "OPTIONAL",
"originalType" : null,
"id" : null,
"primitive" : true,
"typeLength" : 0,
"decimalMetadata" : null,
"primitiveTypeName" : "INT32"
}, {
"name" : "payload",
"repetition" : "OPTIONAL",
"originalType" : null,
"id" : null,
"primitive" : true,
"typeLength" : 0,
"decimalMetadata" : null,
"primitiveTypeName" : "BINARY"
} ],
"paths" : [ [ "intKey" ], [ "payload" ] ],
"columns" : [ {
"path" : [ "intKey" ],
"type" : "INT32",
"typeLength" : 0,
"maxRepetitionLevel" : 0,
"maxDefinitionLevel" : 1
}, {
"path" : [ "payload" ],
"type" : "BINARY",
"typeLength" : 0,
"maxRepetitionLevel" : 0,
"maxDefinitionLevel" : 1
} ],
"fieldCount" : 2,
"primitive" : false
},
"keyValueMetaData" : {
"org.apache.spark.sql.parquet.row.metadata" : "{\"type\":\"struct\",\"fields\":[{\"name\":\"intKey\",\"type\":\"integer\",\"nullable\":true,\"metadata\":{}},{\"name\":\"payload\",\"type\":\"binary\",\"nullable\":true,\"metadata\":{}}]}"
},
"createdBy" : "parquet-mr version 1.8.2 (build aa78e929195723e4f9bf2bbad1b39e7e0277f8ea)"
},
"blocks" : [ {
"columns" : [ {
"encodingStats" : {
"dictionaryEncodings" : [ ],
"dataEncodings" : [ "PLAIN" ]
},
"dictionaryPageOffset" : 0,
"valueCount" : 10,
"totalSize" : 79,
"totalUncompressedSize" : 79,
"statistics" : {
"max" : 1751259458,
"min" : 1034620,
"maxBytes" : "QhliaA==",
"minBytes" : "fMkPAA==",
"numNulls" : 0,
"empty" : false
},
"firstDataPageOffset" : 4,
"codec" : "UNCOMPRESSED",
"startingPos" : 4,
"encodings" : [ "RLE", "PLAIN", "BIT_PACKED" ],
"type" : "INT32",
"path" : [ "intKey" ]
}, {
"encodingStats" : {
"dictionaryEncodings" : [ ],
"dataEncodings" : [ "PLAIN" ]
},
"dictionaryPageOffset" : 0,
"valueCount" : 10,
"totalSize" : 12365,
"totalUncompressedSize" : 12365,
"statistics" : {
"max" : null,
"min" : null,
"maxBytes" : null,
"minBytes" : null,
"numNulls" : 0,
"empty" : true
},
"firstDataPageOffset" : 83,
"codec" : "UNCOMPRESSED",
"startingPos" : 83,
"encodings" : [ "RLE", "PLAIN", "BIT_PACKED" ],
"type" : "BINARY",
"path" : [ "payload" ]
} ],
"rowCount" : 10,
"totalByteSize" : 12444,
"path" : null,
"compressedSize" : 12444,
"startingPos" : 4
} ]
}
17/06/15 15:48:02 5753 DEBUG Client: IPC Client (1725272477) connection to flex11-40g0/10.40.0.11:9000 from demo sending #24
17/06/15 15:48:02 5754 DEBUG Client: IPC Client (1725272477) connection to flex11-40g0/10.40.0.11:9000 from demo got value #24
17/06/15 15:48:02 5754 DEBUG ProtobufRpcEngine: Call: getBlockLocations took 1ms
17/06/15 15:48:02 5754 DEBUG DFSClient: newInfo = LocatedBlocks{
fileLength=14963
underConstruction=false
blocks=[LocatedBlock{BP-350330489-10.40.0.11-1492009574738:blk_1073770092_29268; getBlockSize()=14963; corrupt=false; offset=0; locs=[DatanodeInfoWithStorage[10.40.0.23:50010,DS-affd1c1e-3e1c-4abe-86f2-2918066d12ad,DISK]]}]
lastLocatedBlock=LocatedBlock{BP-350330489-10.40.0.11-1492009574738:blk_1073770092_29268; getBlockSize()=14963; corrupt=false; offset=0; locs=[DatanodeInfoWithStorage[10.40.0.23:50010,DS-affd1c1e-3e1c-4abe-86f2-2918066d12ad,DISK]]}
isLastBlockComplete=true}
17/06/15 15:48:02 5754 DEBUG Client: IPC Client (1725272477) connection to flex11-40g0/10.40.0.11:9000 from demo sending #25
17/06/15 15:48:02 5755 DEBUG Client: IPC Client (1725272477) connection to flex11-40g0/10.40.0.11:9000 from demo got value #25
17/06/15 15:48:02 5755 DEBUG ProtobufRpcEngine: Call: getFileInfo took 1ms
17/06/15 15:48:02 5756 DEBUG ParquetFileFormat: Appending StructType() [empty row]
17/06/15 15:48:02 5756 DEBUG DFSClient: Connecting to datanode 10.40.0.23:50010
17/06/15 15:48:02 5757 DEBUG BytesInput: BytesInput from array of 46 bytes
17/06/15 15:48:02 5757 DEBUG BytesInput: BytesInput from array of 10286 bytes
17/06/15 15:48:02 5757 DEBUG BytesInput: converted 46 to byteArray of 46 bytes
17/06/15 15:48:02 5757 DEBUG ByteBitPackingValuesReader: reading 0 bytes for 10 values of size 0 bits.
17/06/15 15:48:02 5757 DEBUG BytesInput: converted 10286 to byteArray of 10286 bytes
17/06/15 15:48:02 5757 DEBUG ByteBitPackingValuesReader: reading 0 bytes for 10 values of size 0 bits.
17/06/15 15:48:02 5758 INFO FileScanRDD: Reading File path: hdfs://flex11-40g0:9000/sql/small.parquet/part-00006-c69c2cee-c413-4349-bac0-c87a6ef32a36.parquet, range: 0-14963, partition values: [empty row]
17/06/15 15:48:02 5759 DEBUG Client: IPC Client (1725272477) connection to flex11-40g0/10.40.0.11:9000 from demo sending #26
17/06/15 15:48:02 5759 DEBUG Client: IPC Client (1725272477) connection to flex11-40g0/10.40.0.11:9000 from demo got value #26
17/06/15 15:48:02 5759 DEBUG ProtobufRpcEngine: Call: getFileInfo took 0ms
17/06/15 15:48:02 5759 DEBUG Client: IPC Client (1725272477) connection to flex11-40g0/10.40.0.11:9000 from demo sending #27
17/06/15 15:48:02 5760 DEBUG Client: IPC Client (1725272477) connection to flex11-40g0/10.40.0.11:9000 from demo got value #27
17/06/15 15:48:02 5760 DEBUG ProtobufRpcEngine: Call: getBlockLocations took 1ms
17/06/15 15:48:02 5760 DEBUG DFSClient: newInfo = LocatedBlocks{
fileLength=14963
underConstruction=false
blocks=[LocatedBlock{BP-350330489-10.40.0.11-1492009574738:blk_1073770095_29271; getBlockSize()=14963; corrupt=false; offset=0; locs=[DatanodeInfoWithStorage[10.40.0.18:50010,DS-02976b48-62ba-4785-b182-5037b8db76fb,DISK]]}]
lastLocatedBlock=LocatedBlock{BP-350330489-10.40.0.11-1492009574738:blk_1073770095_29271; getBlockSize()=14963; corrupt=false; offset=0; locs=[DatanodeInfoWithStorage[10.40.0.18:50010,DS-02976b48-62ba-4785-b182-5037b8db76fb,DISK]]}
isLastBlockComplete=true}
17/06/15 15:48:02 5760 DEBUG ParquetFileReader: File length 14963
17/06/15 15:48:02 5760 DEBUG ParquetFileReader: reading footer index at 14955
17/06/15 15:48:02 5760 DEBUG DFSClient: Connecting to datanode 10.40.0.18:50010
17/06/15 15:48:02 5760 DEBUG SaslDataTransferClient: SASL client skipping handshake in unsecured configuration for addr = /10.40.0.18, datanodeId = DatanodeInfoWithStorage[10.40.0.18:50010,DS-02976b48-62ba-4785-b182-5037b8db76fb,DISK]
17/06/15 15:48:02 5761 DEBUG ParquetFileReader: read footer length: 2507, footer index: 12448
17/06/15 15:48:02 5761 DEBUG DFSClient: Connecting to datanode 10.40.0.18:50010
17/06/15 15:48:02 5763 DEBUG ParquetMetadataConverter: FileMetaData(version:1, schema:[SchemaElement(name:spark_schema, num_children:2), SchemaElement(type:INT32, repetition_type:OPTIONAL, name:intKey), SchemaElement(type:BYTE_ARRAY, repetition_type:OPTIONAL, name:payload)], num_rows:10, row_groups:[RowGroup(columns:[ColumnChunk(file_offset:4, meta_data:ColumnMetaData(type:INT32, encodings:[BIT_PACKED, RLE, PLAIN], path_in_schema:[intKey], codec:UNCOMPRESSED, num_values:10, total_uncompressed_size:79, total_compressed_size:79, data_page_offset:4, statistics:Statistics(max:2F AE EA 67, min:D1 33 7A 06, null_count:0), encoding_stats:[PageEncodingStats(page_type:DATA_PAGE, encoding:PLAIN, count:1)])), ColumnChunk(file_offset:83, meta_data:ColumnMetaData(type:BYTE_ARRAY, encodings:[BIT_PACKED, RLE, PLAIN], path_in_schema:[payload], codec:UNCOMPRESSED, num_values:10, total_uncompressed_size:12365, total_compressed_size:12365, data_page_offset:83, statistics:Statistics(max:70 C5 AB A7 F0 48 E0 83 3B 9D C2 CE 5A C6 EB 68 CF B3 68 EC E1 DB 04 5F 6B A1 32 A4 FD D3 33 BB 3E 7E A2 D5 C1 6A D5 71 9F C0 5E C7 14 72 2E 11 EA 88 8D 1C 25 63 20 B2 06 58 49 41 93 C9 03 D2 E5 48 8D 53 DF 8E E9 BA 80 92 23 24 19 71 31 6B 15 A7 38 3A EC 6D 1F 54 B8 86 77 D8 76 33 EF 14 6F 1E 4D 60 90 CA 31 31 77 A2 8D 55 6E 3A 3B BD 65 09 F9 8C BF EB 3A AC 37 F7 6C 01 BC A1 37 9D..., min:81 54 79 5E 2D 66 8E FC 41 1E 04 89 F5 A1 40 8F 06 B1 5B E5 21 D9 B4 73 D2 2E 7C 53 72 49 F0 7C 7A 63 CF 2F 31 C0 BC 09 ED 54 FB C2 33 BC 07 9D 82 80 0A 3B 95 85 17 62 0B EE FD 3D 5C 48 7B 00 AB A2 01 39 58 27 2C BA D1 60 D8 76 FE FE D6 50 BF D7 D8 C0 17 BC 15 A0 84 BC CD C2 75 D0 12 CF 5A 8F E5 C4 08 00 3F B6 BC 5B AC 9B 10 8B 61 53 BC 5E C8 6D 3E 2E BB 82 57 7E 85 FE F7 9F 9E 1A..., null_count:0), encoding_stats:[PageEncodingStats(page_type:DATA_PAGE, encoding:PLAIN, count:1)]))], total_byte_size:12444, num_rows:10)], key_value_metadata:[KeyValue(key:org.apache.spark.sql.parquet.row.metadata, value:{"type":"struct","fields":[{"name":"intKey","type":"integer","nullable":true,"metadata":{}},{"name":"payload","type":"binary","nullable":true,"metadata":{}}]})], created_by:parquet-mr version 1.8.2 (build aa78e929195723e4f9bf2bbad1b39e7e0277f8ea))
17/06/15 15:48:02 5765 DEBUG ParquetMetadataConverter: {
"fileMetaData" : {
"schema" : {
"name" : "spark_schema",
"repetition" : "REPEATED",
"originalType" : null,
"id" : null,
"fields" : [ {
"name" : "intKey",
"repetition" : "OPTIONAL",
"originalType" : null,
"id" : null,
"primitive" : true,
"typeLength" : 0,
"decimalMetadata" : null,
"primitiveTypeName" : "INT32"
}, {
"name" : "payload",
"repetition" : "OPTIONAL",
"originalType" : null,
"id" : null,
"primitive" : true,
"typeLength" : 0,
"decimalMetadata" : null,
"primitiveTypeName" : "BINARY"
} ],
"paths" : [ [ "intKey" ], [ "payload" ] ],
"columns" : [ {
"path" : [ "intKey" ],
"type" : "INT32",
"typeLength" : 0,
"maxRepetitionLevel" : 0,
"maxDefinitionLevel" : 1
}, {
"path" : [ "payload" ],
"type" : "BINARY",
"typeLength" : 0,
"maxRepetitionLevel" : 0,
"maxDefinitionLevel" : 1
} ],
"fieldCount" : 2,
"primitive" : false
},
"keyValueMetaData" : {
"org.apache.spark.sql.parquet.row.metadata" : "{\"type\":\"struct\",\"fields\":[{\"name\":\"intKey\",\"type\":\"integer\",\"nullable\":true,\"metadata\":{}},{\"name\":\"payload\",\"type\":\"binary\",\"nullable\":true,\"metadata\":{}}]}"
},
"createdBy" : "parquet-mr version 1.8.2 (build aa78e929195723e4f9bf2bbad1b39e7e0277f8ea)"
},
"blocks" : [ {
"columns" : [ {
"encodingStats" : {
"dictionaryEncodings" : [ ],
"dataEncodings" : [ "PLAIN" ]
},
"dictionaryPageOffset" : 0,
"valueCount" : 10,
"totalSize" : 79,
"totalUncompressedSize" : 79,
"statistics" : {
"max" : 1743433263,
"min" : 108671953,
"maxBytes" : "L67qZw==",
"minBytes" : "0TN6Bg==",
"numNulls" : 0,
"empty" : false
},
"firstDataPageOffset" : 4,
"codec" : "UNCOMPRESSED",
"startingPos" : 4,
"encodings" : [ "RLE", "PLAIN", "BIT_PACKED" ],
"type" : "INT32",
"path" : [ "intKey" ]
}, {
"encodingStats" : {
"dictionaryEncodings" : [ ],
"dataEncodings" : [ "PLAIN" ]
},
"dictionaryPageOffset" : 0,
"valueCount" : 10,
"totalSize" : 12365,
"totalUncompressedSize" : 12365,
"statistics" : {
"max" : null,
"min" : null,
"maxBytes" : null,
"minBytes" : null,
"numNulls" : 0,
"empty" : true
},
"firstDataPageOffset" : 83,
"codec" : "UNCOMPRESSED",
"startingPos" : 83,
"encodings" : [ "RLE", "PLAIN", "BIT_PACKED" ],
"type" : "BINARY",
"path" : [ "payload" ]
} ],
"rowCount" : 10,
"totalByteSize" : 12444,
"path" : null,
"compressedSize" : 12444,
"startingPos" : 4
} ]
}
17/06/15 15:48:02 5766 DEBUG Client: IPC Client (1725272477) connection to flex11-40g0/10.40.0.11:9000 from demo sending #28
17/06/15 15:48:02 5766 DEBUG Client: IPC Client (1725272477) connection to flex11-40g0/10.40.0.11:9000 from demo got value #28
17/06/15 15:48:02 5766 DEBUG ProtobufRpcEngine: Call: getBlockLocations took 0ms
17/06/15 15:48:02 5767 DEBUG DFSClient: newInfo = LocatedBlocks{
fileLength=14963
underConstruction=false
blocks=[LocatedBlock{BP-350330489-10.40.0.11-1492009574738:blk_1073770095_29271; getBlockSize()=14963; corrupt=false; offset=0; locs=[DatanodeInfoWithStorage[10.40.0.18:50010,DS-02976b48-62ba-4785-b182-5037b8db76fb,DISK]]}]
lastLocatedBlock=LocatedBlock{BP-350330489-10.40.0.11-1492009574738:blk_1073770095_29271; getBlockSize()=14963; corrupt=false; offset=0; locs=[DatanodeInfoWithStorage[10.40.0.18:50010,DS-02976b48-62ba-4785-b182-5037b8db76fb,DISK]]}
isLastBlockComplete=true}
17/06/15 15:48:02 5767 DEBUG Client: IPC Client (1725272477) connection to flex11-40g0/10.40.0.11:9000 from demo sending #29
17/06/15 15:48:02 5767 DEBUG Client: IPC Client (1725272477) connection to flex11-40g0/10.40.0.11:9000 from demo got value #29
17/06/15 15:48:02 5767 DEBUG ProtobufRpcEngine: Call: getFileInfo took 0ms
17/06/15 15:48:02 5768 DEBUG ParquetFileFormat: Appending StructType() [empty row]
17/06/15 15:48:02 5768 DEBUG DFSClient: Connecting to datanode 10.40.0.18:50010
17/06/15 15:48:02 5769 DEBUG BytesInput: BytesInput from array of 46 bytes
17/06/15 15:48:02 5769 DEBUG BytesInput: BytesInput from array of 10286 bytes
17/06/15 15:48:02 5769 DEBUG BytesInput: converted 46 to byteArray of 46 bytes
17/06/15 15:48:02 5769 DEBUG ByteBitPackingValuesReader: reading 0 bytes for 10 values of size 0 bits.
17/06/15 15:48:02 5769 DEBUG BytesInput: converted 10286 to byteArray of 10286 bytes
17/06/15 15:48:02 5769 DEBUG ByteBitPackingValuesReader: reading 0 bytes for 10 values of size 0 bits.
17/06/15 15:48:02 5770 INFO FileScanRDD: Reading File path: hdfs://flex11-40g0:9000/sql/small.parquet/part-00007-c69c2cee-c413-4349-bac0-c87a6ef32a36.parquet, range: 0-14963, partition values: [empty row]
17/06/15 15:48:02 5770 DEBUG Client: IPC Client (1725272477) connection to flex11-40g0/10.40.0.11:9000 from demo sending #30
17/06/15 15:48:02 5771 DEBUG Client: IPC Client (1725272477) connection to flex11-40g0/10.40.0.11:9000 from demo got value #30
17/06/15 15:48:02 5771 DEBUG ProtobufRpcEngine: Call: getFileInfo took 1ms
17/06/15 15:48:02 5771 DEBUG Client: IPC Client (1725272477) connection to flex11-40g0/10.40.0.11:9000 from demo sending #31
17/06/15 15:48:02 5772 DEBUG Client: IPC Client (1725272477) connection to flex11-40g0/10.40.0.11:9000 from demo got value #31
17/06/15 15:48:02 5772 DEBUG ProtobufRpcEngine: Call: getBlockLocations took 1ms
17/06/15 15:48:02 5772 DEBUG DFSClient: newInfo = LocatedBlocks{
fileLength=14963
underConstruction=false
blocks=[LocatedBlock{BP-350330489-10.40.0.11-1492009574738:blk_1073770090_29266; getBlockSize()=14963; corrupt=false; offset=0; locs=[DatanodeInfoWithStorage[10.40.0.14:50010,DS-695bf534-8da7-4857-80d8-b996b18b3e49,DISK]]}]
lastLocatedBlock=LocatedBlock{BP-350330489-10.40.0.11-1492009574738:blk_1073770090_29266; getBlockSize()=14963; corrupt=false; offset=0; locs=[DatanodeInfoWithStorage[10.40.0.14:50010,DS-695bf534-8da7-4857-80d8-b996b18b3e49,DISK]]}
isLastBlockComplete=true}
17/06/15 15:48:02 5772 DEBUG ParquetFileReader: File length 14963
17/06/15 15:48:02 5772 DEBUG ParquetFileReader: reading footer index at 14955
17/06/15 15:48:02 5772 DEBUG DFSClient: Connecting to datanode 10.40.0.14:50010
17/06/15 15:48:02 5772 DEBUG SaslDataTransferClient: SASL client skipping handshake in unsecured configuration for addr = /10.40.0.14, datanodeId = DatanodeInfoWithStorage[10.40.0.14:50010,DS-695bf534-8da7-4857-80d8-b996b18b3e49,DISK]
17/06/15 15:48:02 5774 DEBUG ParquetFileReader: read footer length: 2507, footer index: 12448
17/06/15 15:48:02 5774 DEBUG DFSClient: Connecting to datanode 10.40.0.14:50010
17/06/15 15:48:02 5775 DEBUG ParquetMetadataConverter: FileMetaData(version:1, schema:[SchemaElement(name:spark_schema, num_children:2), SchemaElement(type:INT32, repetition_type:OPTIONAL, name:intKey), SchemaElement(type:BYTE_ARRAY, repetition_type:OPTIONAL, name:payload)], num_rows:10, row_groups:[RowGroup(columns:[ColumnChunk(file_offset:4, meta_data:ColumnMetaData(type:INT32, encodings:[PLAIN, RLE, BIT_PACKED], path_in_schema:[intKey], codec:UNCOMPRESSED, num_values:10, total_uncompressed_size:79, total_compressed_size:79, data_page_offset:4, statistics:Statistics(max:46 64 B0 7B, min:8A 0C 12 16, null_count:0), encoding_stats:[PageEncodingStats(page_type:DATA_PAGE, encoding:PLAIN, count:1)])), ColumnChunk(file_offset:83, meta_data:ColumnMetaData(type:BYTE_ARRAY, encodings:[PLAIN, RLE, BIT_PACKED], path_in_schema:[payload], codec:UNCOMPRESSED, num_values:10, total_uncompressed_size:12365, total_compressed_size:12365, data_page_offset:83, statistics:Statistics(max:7F 63 43 31 78 A7 2E C3 CC 24 6F F0 F9 96 A2 86 F4 BD 6C FD 8E FE 92 F8 8C 00 82 62 DB 8A 29 75 93 0B 4C 09 BC 04 0E 79 9F 8D DA D9 80 AF CB F7 F2 68 6F 84 B3 E0 5E 52 6B ED DD E2 54 5B 62 8F 77 02 99 8B 76 C1 45 05 6D CB 41 4F 97 7A FE 31 F9 9A 10 A5 30 8A 61 0A 81 FE 3E 30 51 51 0A F2 A7 58 34 A3 42 E7 1B FC 5F 7A 76 91 83 E8 70 60 29 FC 4E 9F 9C 20 99 E6 68 55 75 B6 9E 9A B2 A3..., min:9D 5C AA 59 1D 91 55 9A 3B B1 BA 93 87 0B 7B B0 BF 8B 8A 8E 65 F1 71 8D 60 09 DB 74 FC 5D 70 FC B8 7A 16 7B 20 98 29 CC B4 A6 EA 1A C6 E8 D0 79 E7 95 57 C8 97 42 61 98 4E 41 BE 2E E7 AC A8 34 C1 CA A5 21 C5 EE A0 A6 AB 42 A0 4C FD 2A 58 36 C0 BD 84 5D 3D 0C AD 97 C1 6E A7 36 C7 58 D2 06 9A EB 49 FD 4B 57 07 48 55 8A A6 B0 E7 EA 85 CE B6 29 67 2E 5A C2 FF 4E 89 92 3F AC E9 F6 EA F2..., null_count:0), encoding_stats:[PageEncodingStats(page_type:DATA_PAGE, encoding:PLAIN, count:1)]))], total_byte_size:12444, num_rows:10)], key_value_metadata:[KeyValue(key:org.apache.spark.sql.parquet.row.metadata, value:{"type":"struct","fields":[{"name":"intKey","type":"integer","nullable":true,"metadata":{}},{"name":"payload","type":"binary","nullable":true,"metadata":{}}]})], created_by:parquet-mr version 1.8.2 (build aa78e929195723e4f9bf2bbad1b39e7e0277f8ea))
17/06/15 15:48:02 5779 DEBUG ParquetMetadataConverter: {
"fileMetaData" : {
"schema" : {
"name" : "spark_schema",
"repetition" : "REPEATED",
"originalType" : null,
"id" : null,
"fields" : [ {
"name" : "intKey",
"repetition" : "OPTIONAL",
"originalType" : null,
"id" : null,
"primitive" : true,
"typeLength" : 0,
"decimalMetadata" : null,
"primitiveTypeName" : "INT32"
}, {
"name" : "payload",
"repetition" : "OPTIONAL",
"originalType" : null,
"id" : null,
"primitive" : true,
"typeLength" : 0,
"decimalMetadata" : null,
"primitiveTypeName" : "BINARY"
} ],
"paths" : [ [ "intKey" ], [ "payload" ] ],
"columns" : [ {
"path" : [ "intKey" ],
"type" : "INT32",
"typeLength" : 0,
"maxRepetitionLevel" : 0,
"maxDefinitionLevel" : 1
}, {
"path" : [ "payload" ],
"type" : "BINARY",
"typeLength" : 0,
"maxRepetitionLevel" : 0,
"maxDefinitionLevel" : 1
} ],
"fieldCount" : 2,
"primitive" : false
},
"keyValueMetaData" : {
"org.apache.spark.sql.parquet.row.metadata" : "{\"type\":\"struct\",\"fields\":[{\"name\":\"intKey\",\"type\":\"integer\",\"nullable\":true,\"metadata\":{}},{\"name\":\"payload\",\"type\":\"binary\",\"nullable\":true,\"metadata\":{}}]}"
},
"createdBy" : "parquet-mr version 1.8.2 (build aa78e929195723e4f9bf2bbad1b39e7e0277f8ea)"
},
"blocks" : [ {
"columns" : [ {
"encodingStats" : {
"dictionaryEncodings" : [ ],
"dataEncodings" : [ "PLAIN" ]
},
"dictionaryPageOffset" : 0,
"valueCount" : 10,
"totalSize" : 79,
"totalUncompressedSize" : 79,
"statistics" : {
"max" : 2075157574,
"min" : 370281610,
"maxBytes" : "RmSwew==",
"minBytes" : "igwSFg==",
"numNulls" : 0,
"empty" : false
},
"firstDataPageOffset" : 4,
"codec" : "UNCOMPRESSED",
"startingPos" : 4,
"encodings" : [ "RLE", "PLAIN", "BIT_PACKED" ],
"type" : "INT32",
"path" : [ "intKey" ]
}, {
"encodingStats" : {
"dictionaryEncodings" : [ ],
"dataEncodings" : [ "PLAIN" ]
},
"dictionaryPageOffset" : 0,
"valueCount" : 10,
"totalSize" : 12365,
"totalUncompressedSize" : 12365,
"statistics" : {
"max" : null,
"min" : null,
"maxBytes" : null,
"minBytes" : null,
"numNulls" : 0,
"empty" : true
},
"firstDataPageOffset" : 83,
"codec" : "UNCOMPRESSED",
"startingPos" : 83,
"encodings" : [ "RLE", "PLAIN", "BIT_PACKED" ],
"type" : "BINARY",
"path" : [ "payload" ]
} ],
"rowCount" : 10,
"totalByteSize" : 12444,
"path" : null,
"compressedSize" : 12444,
"startingPos" : 4
} ]
}
17/06/15 15:48:02 5780 DEBUG Client: IPC Client (1725272477) connection to flex11-40g0/10.40.0.11:9000 from demo sending #32
17/06/15 15:48:02 5781 DEBUG Client: IPC Client (1725272477) connection to flex11-40g0/10.40.0.11:9000 from demo got value #32
17/06/15 15:48:02 5781 DEBUG ProtobufRpcEngine: Call: getBlockLocations took 1ms
17/06/15 15:48:02 5781 DEBUG DFSClient: newInfo = LocatedBlocks{
fileLength=14963
underConstruction=false
blocks=[LocatedBlock{BP-350330489-10.40.0.11-1492009574738:blk_1073770090_29266; getBlockSize()=14963; corrupt=false; offset=0; locs=[DatanodeInfoWithStorage[10.40.0.14:50010,DS-695bf534-8da7-4857-80d8-b996b18b3e49,DISK]]}]
lastLocatedBlock=LocatedBlock{BP-350330489-10.40.0.11-1492009574738:blk_1073770090_29266; getBlockSize()=14963; corrupt=false; offset=0; locs=[DatanodeInfoWithStorage[10.40.0.14:50010,DS-695bf534-8da7-4857-80d8-b996b18b3e49,DISK]]}
isLastBlockComplete=true}
17/06/15 15:48:02 5782 DEBUG Client: IPC Client (1725272477) connection to flex11-40g0/10.40.0.11:9000 from demo sending #33
17/06/15 15:48:02 5782 DEBUG Client: IPC Client (1725272477) connection to flex11-40g0/10.40.0.11:9000 from demo got value #33
17/06/15 15:48:02 5782 DEBUG ProtobufRpcEngine: Call: getFileInfo took 1ms
17/06/15 15:48:02 5782 DEBUG ParquetFileFormat: Appending StructType() [empty row]
17/06/15 15:48:02 5782 DEBUG DFSClient: Connecting to datanode 10.40.0.14:50010
17/06/15 15:48:02 5783 DEBUG BytesInput: BytesInput from array of 46 bytes
17/06/15 15:48:02 5783 DEBUG BytesInput: BytesInput from array of 10286 bytes
17/06/15 15:48:02 5783 DEBUG BytesInput: converted 46 to byteArray of 46 bytes
17/06/15 15:48:02 5783 DEBUG ByteBitPackingValuesReader: reading 0 bytes for 10 values of size 0 bits.
17/06/15 15:48:02 5783 DEBUG BytesInput: converted 10286 to byteArray of 10286 bytes
17/06/15 15:48:02 5783 DEBUG ByteBitPackingValuesReader: reading 0 bytes for 10 values of size 0 bits.
17/06/15 15:48:02 5784 INFO FileScanRDD: Reading File path: hdfs://flex11-40g0:9000/sql/small.parquet/part-00008-c69c2cee-c413-4349-bac0-c87a6ef32a36.parquet, range: 0-14963, partition values: [empty row]
17/06/15 15:48:02 5785 DEBUG Client: IPC Client (1725272477) connection to flex11-40g0/10.40.0.11:9000 from demo sending #34
17/06/15 15:48:02 5785 DEBUG Client: IPC Client (1725272477) connection to flex11-40g0/10.40.0.11:9000 from demo got value #34
17/06/15 15:48:02 5785 DEBUG ProtobufRpcEngine: Call: getFileInfo took 0ms
17/06/15 15:48:02 5786 DEBUG Client: IPC Client (1725272477) connection to flex11-40g0/10.40.0.11:9000 from demo sending #35
17/06/15 15:48:02 5786 DEBUG Client: IPC Client (1725272477) connection to flex11-40g0/10.40.0.11:9000 from demo got value #35
17/06/15 15:48:02 5786 DEBUG ProtobufRpcEngine: Call: getBlockLocations took 1ms
17/06/15 15:48:02 5786 DEBUG DFSClient: newInfo = LocatedBlocks{
fileLength=14963
underConstruction=false
blocks=[LocatedBlock{BP-350330489-10.40.0.11-1492009574738:blk_1073770091_29267; getBlockSize()=14963; corrupt=false; offset=0; locs=[DatanodeInfoWithStorage[10.40.0.15:50010,DS-eea85b06-06f6-470c-925e-43616e9837b7,DISK]]}]
lastLocatedBlock=LocatedBlock{BP-350330489-10.40.0.11-1492009574738:blk_1073770091_29267; getBlockSize()=14963; corrupt=false; offset=0; locs=[DatanodeInfoWithStorage[10.40.0.15:50010,DS-eea85b06-06f6-470c-925e-43616e9837b7,DISK]]}
isLastBlockComplete=true}
17/06/15 15:48:02 5786 DEBUG ParquetFileReader: File length 14963
17/06/15 15:48:02 5786 DEBUG ParquetFileReader: reading footer index at 14955
17/06/15 15:48:02 5786 DEBUG DFSClient: Connecting to datanode 10.40.0.15:50010
17/06/15 15:48:02 5787 DEBUG SaslDataTransferClient: SASL client skipping handshake in unsecured configuration for addr = /10.40.0.15, datanodeId = DatanodeInfoWithStorage[10.40.0.15:50010,DS-eea85b06-06f6-470c-925e-43616e9837b7,DISK]
17/06/15 15:48:02 5788 DEBUG ParquetFileReader: read footer length: 2507, footer index: 12448
17/06/15 15:48:02 5788 DEBUG DFSClient: Connecting to datanode 10.40.0.15:50010
17/06/15 15:48:02 5789 DEBUG ParquetMetadataConverter: FileMetaData(version:1, schema:[SchemaElement(name:spark_schema, num_children:2), SchemaElement(type:INT32, repetition_type:OPTIONAL, name:intKey), SchemaElement(type:BYTE_ARRAY, repetition_type:OPTIONAL, name:payload)], num_rows:10, row_groups:[RowGroup(columns:[ColumnChunk(file_offset:4, meta_data:ColumnMetaData(type:INT32, encodings:[BIT_PACKED, RLE, PLAIN], path_in_schema:[intKey], codec:UNCOMPRESSED, num_values:10, total_uncompressed_size:79, total_compressed_size:79, data_page_offset:4, statistics:Statistics(max:FB 07 D3 61, min:D4 9C 7C 02, null_count:0), encoding_stats:[PageEncodingStats(page_type:DATA_PAGE, encoding:PLAIN, count:1)])), ColumnChunk(file_offset:83, meta_data:ColumnMetaData(type:BYTE_ARRAY, encodings:[BIT_PACKED, RLE, PLAIN], path_in_schema:[payload], codec:UNCOMPRESSED, num_values:10, total_uncompressed_size:12365, total_compressed_size:12365, data_page_offset:83, statistics:Statistics(max:62 C5 3A 57 E9 03 2D 88 CF 5F 41 16 66 95 AF 8D 08 DC 18 92 F7 9C 90 0C DF F8 D8 1D A0 6E 55 CB 3E 89 01 D3 24 4D FB 10 71 43 C9 C4 71 6B 33 C8 09 34 85 DF 65 A6 59 71 72 3E F0 2A BE 7E 14 B1 2F 2B 40 08 30 D4 C1 CE 57 64 73 9B F4 9D 4A A2 D4 BA 0B 49 52 2A FB 88 61 F3 39 19 F3 9D 5C EB 67 78 78 97 FE 4C EF B6 22 F6 8C 1B 49 6B 80 78 BE E8 46 82 28 46 7D D3 40 D0 90 4A A3 11 84 03..., min:83 89 3C 68 6C 4D 7A 87 D9 CA 41 CF 3B 46 AC 5E A9 C3 24 26 18 63 0B A4 CC A7 DC B4 44 BD 21 F5 69 AA 55 56 60 F3 4C 2C E3 30 E7 B0 9E 89 69 5D CA 1D 88 47 67 68 EB E3 3A FE 4A 92 AA 17 C8 E6 42 2E 8E 9D 76 72 A6 68 4C FD 02 8B 8E 0D 1F 92 9F 89 43 2B 0F 3D B4 19 73 02 79 1D 6C D8 30 D1 2A 52 1D 57 41 61 7A B1 AF 6E 6C F7 D5 58 73 BC 09 C4 BD DF 3E 29 33 CF A6 ED 76 AC 80 62 FC 63..., null_count:0), encoding_stats:[PageEncodingStats(page_type:DATA_PAGE, encoding:PLAIN, count:1)]))], total_byte_size:12444, num_rows:10)], key_value_metadata:[KeyValue(key:org.apache.spark.sql.parquet.row.metadata, value:{"type":"struct","fields":[{"name":"intKey","type":"integer","nullable":true,"metadata":{}},{"name":"payload","type":"binary","nullable":true,"metadata":{}}]})], created_by:parquet-mr version 1.8.2 (build aa78e929195723e4f9bf2bbad1b39e7e0277f8ea))
17/06/15 15:48:02 5790 DEBUG ParquetMetadataConverter: {
"fileMetaData" : {
"schema" : {
"name" : "spark_schema",
"repetition" : "REPEATED",
"originalType" : null,
"id" : null,
"fields" : [ {
"name" : "intKey",
"repetition" : "OPTIONAL",
"originalType" : null,
"id" : null,
"primitive" : true,
"typeLength" : 0,
"decimalMetadata" : null,
"primitiveTypeName" : "INT32"
}, {
"name" : "payload",
"repetition" : "OPTIONAL",
"originalType" : null,
"id" : null,
"primitive" : true,
"typeLength" : 0,
"decimalMetadata" : null,
"primitiveTypeName" : "BINARY"
} ],
"paths" : [ [ "intKey" ], [ "payload" ] ],
"columns" : [ {
"path" : [ "intKey" ],
"type" : "INT32",
"typeLength" : 0,
"maxRepetitionLevel" : 0,
"maxDefinitionLevel" : 1
}, {
"path" : [ "payload" ],
"type" : "BINARY",
"typeLength" : 0,
"maxRepetitionLevel" : 0,
"maxDefinitionLevel" : 1
} ],
"fieldCount" : 2,
"primitive" : false
},
"keyValueMetaData" : {
"org.apache.spark.sql.parquet.row.metadata" : "{\"type\":\"struct\",\"fields\":[{\"name\":\"intKey\",\"type\":\"integer\",\"nullable\":true,\"metadata\":{}},{\"name\":\"payload\",\"type\":\"binary\",\"nullable\":true,\"metadata\":{}}]}"
},
"createdBy" : "parquet-mr version 1.8.2 (build aa78e929195723e4f9bf2bbad1b39e7e0277f8ea)"
},
"blocks" : [ {
"columns" : [ {
"encodingStats" : {
"dictionaryEncodings" : [ ],
"dataEncodings" : [ "PLAIN" ]
},
"dictionaryPageOffset" : 0,
"valueCount" : 10,
"totalSize" : 79,
"totalUncompressedSize" : 79,
"statistics" : {
"max" : 1641220091,
"min" : 41721044,
"maxBytes" : "+wfTYQ==",
"minBytes" : "1Jx8Ag==",
"numNulls" : 0,
"empty" : false
},
"firstDataPageOffset" : 4,
"codec" : "UNCOMPRESSED",
"startingPos" : 4,
"encodings" : [ "RLE", "PLAIN", "BIT_PACKED" ],
"type" : "INT32",
"path" : [ "intKey" ]
}, {
"encodingStats" : {
"dictionaryEncodings" : [ ],
"dataEncodings" : [ "PLAIN" ]
},
"dictionaryPageOffset" : 0,
"valueCount" : 10,
"totalSize" : 12365,
"totalUncompressedSize" : 12365,
"statistics" : {
"max" : null,
"min" : null,
"maxBytes" : null,
"minBytes" : null,
"numNulls" : 0,
"empty" : true
},
"firstDataPageOffset" : 83,
"codec" : "UNCOMPRESSED",
"startingPos" : 83,
"encodings" : [ "RLE", "PLAIN", "BIT_PACKED" ],
"type" : "BINARY",
"path" : [ "payload" ]
} ],
"rowCount" : 10,
"totalByteSize" : 12444,
"path" : null,
"compressedSize" : 12444,
"startingPos" : 4
} ]
}
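In the metadata dump above, the intKey statistics appear in two forms: decoded integers (min 41721044, max 1641220091) and base64-encoded raw bytes (minBytes "1Jx8Ag==", maxBytes "+wfTYQ=="). The raw form is simply the plain-encoded INT32 value in little-endian byte order, so the two are interconvertible:

    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;
    import java.util.Base64;

    public class DecodeStats {
        public static void main(String[] args) {
            for (String b64 : new String[] {"1Jx8Ag==", "+wfTYQ=="}) {
                byte[] raw = Base64.getDecoder().decode(b64);
                // Parquet stores INT32 min/max as 4 little-endian bytes.
                int v = ByteBuffer.wrap(raw).order(ByteOrder.LITTLE_ENDIAN).getInt();
                System.out.println(b64 + " -> " + v);   // 41721044, 1641220091
            }
        }
    }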
17/06/15 15:48:02 5791 DEBUG Client: IPC Client (1725272477) connection to flex11-40g0/10.40.0.11:9000 from demo sending #36
17/06/15 15:48:02 5791 DEBUG Client: IPC Client (1725272477) connection to flex11-40g0/10.40.0.11:9000 from demo got value #36
17/06/15 15:48:02 5791 DEBUG ProtobufRpcEngine: Call: getBlockLocations took 0ms
17/06/15 15:48:02 5792 DEBUG DFSClient: newInfo = LocatedBlocks{
fileLength=14963
underConstruction=false
blocks=[LocatedBlock{BP-350330489-10.40.0.11-1492009574738:blk_1073770091_29267; getBlockSize()=14963; corrupt=false; offset=0; locs=[DatanodeInfoWithStorage[10.40.0.15:50010,DS-eea85b06-06f6-470c-925e-43616e9837b7,DISK]]}]
lastLocatedBlock=LocatedBlock{BP-350330489-10.40.0.11-1492009574738:blk_1073770091_29267; getBlockSize()=14963; corrupt=false; offset=0; locs=[DatanodeInfoWithStorage[10.40.0.15:50010,DS-eea85b06-06f6-470c-925e-43616e9837b7,DISK]]}
isLastBlockComplete=true}
17/06/15 15:48:02 5792 DEBUG Client: IPC Client (1725272477) connection to flex11-40g0/10.40.0.11:9000 from demo sending #37
17/06/15 15:48:02 5792 DEBUG Client: IPC Client (1725272477) connection to flex11-40g0/10.40.0.11:9000 from demo got value #37
17/06/15 15:48:02 5792 DEBUG ProtobufRpcEngine: Call: getFileInfo took 0ms
17/06/15 15:48:02 5793 DEBUG ParquetFileFormat: Appending StructType() [empty row]
17/06/15 15:48:02 5793 DEBUG DFSClient: Connecting to datanode 10.40.0.15:50010
17/06/15 15:48:02 5793 DEBUG BytesInput: BytesInput from array of 46 bytes
17/06/15 15:48:02 5794 DEBUG BytesInput: BytesInput from array of 10286 bytes
17/06/15 15:48:02 5794 DEBUG BytesInput: converted 46 to byteArray of 46 bytes
17/06/15 15:48:02 5794 DEBUG ByteBitPackingValuesReader: reading 0 bytes for 10 values of size 0 bits.
17/06/15 15:48:02 5794 DEBUG BytesInput: converted 10286 to byteArray of 10286 bytes
17/06/15 15:48:02 5794 DEBUG ByteBitPackingValuesReader: reading 0 bytes for 10 values of size 0 bits.
17/06/15 15:48:02 5794 INFO FileScanRDD: Reading File path: hdfs://flex11-40g0:9000/sql/small.parquet/part-00009-c69c2cee-c413-4349-bac0-c87a6ef32a36.parquet, range: 0-14963, partition values: [empty row]
17/06/15 15:48:02 5795 DEBUG Client: IPC Client (1725272477) connection to flex11-40g0/10.40.0.11:9000 from demo sending #38
17/06/15 15:48:02 5795 DEBUG Client: IPC Client (1725272477) connection to flex11-40g0/10.40.0.11:9000 from demo got value #38
17/06/15 15:48:02 5795 DEBUG ProtobufRpcEngine: Call: getFileInfo took 0ms
17/06/15 15:48:02 5796 DEBUG Client: IPC Client (1725272477) connection to flex11-40g0/10.40.0.11:9000 from demo sending #39
17/06/15 15:48:02 5796 DEBUG Client: IPC Client (1725272477) connection to flex11-40g0/10.40.0.11:9000 from demo got value #39
17/06/15 15:48:02 5796 DEBUG ProtobufRpcEngine: Call: getBlockLocations took 0ms
17/06/15 15:48:02 5796 DEBUG DFSClient: newInfo = LocatedBlocks{
fileLength=14963
underConstruction=false
blocks=[LocatedBlock{BP-350330489-10.40.0.11-1492009574738:blk_1073770089_29265; getBlockSize()=14963; corrupt=false; offset=0; locs=[DatanodeInfoWithStorage[10.40.0.13:50010,DS-e0c53985-588a-4578-b388-2ab8e64ea512,DISK]]}]
lastLocatedBlock=LocatedBlock{BP-350330489-10.40.0.11-1492009574738:blk_1073770089_29265; getBlockSize()=14963; corrupt=false; offset=0; locs=[DatanodeInfoWithStorage[10.40.0.13:50010,DS-e0c53985-588a-4578-b388-2ab8e64ea512,DISK]]}
isLastBlockComplete=true}
17/06/15 15:48:02 5796 DEBUG ParquetFileReader: File length 14963
17/06/15 15:48:02 5796 DEBUG ParquetFileReader: reading footer index at 14955
17/06/15 15:48:02 5796 DEBUG DFSClient: Connecting to datanode 10.40.0.13:50010
17/06/15 15:48:02 5797 DEBUG SaslDataTransferClient: SASL client skipping handshake in unsecured configuration for addr = /10.40.0.13, datanodeId = DatanodeInfoWithStorage[10.40.0.13:50010,DS-e0c53985-588a-4578-b388-2ab8e64ea512,DISK]
17/06/15 15:48:02 5799 DEBUG ParquetFileReader: read footer length: 2507, footer index: 12448
17/06/15 15:48:02 5800 DEBUG DFSClient: Connecting to datanode 10.40.0.13:50010
17/06/15 15:48:02 5801 DEBUG ParquetMetadataConverter: FileMetaData(version:1, schema:[SchemaElement(name:spark_schema, num_children:2), SchemaElement(type:INT32, repetition_type:OPTIONAL, name:intKey), SchemaElement(type:BYTE_ARRAY, repetition_type:OPTIONAL, name:payload)], num_rows:10, row_groups:[RowGroup(columns:[ColumnChunk(file_offset:4, meta_data:ColumnMetaData(type:INT32, encodings:[PLAIN, RLE, BIT_PACKED], path_in_schema:[intKey], codec:UNCOMPRESSED, num_values:10, total_uncompressed_size:79, total_compressed_size:79, data_page_offset:4, statistics:Statistics(max:65 DE 04 78, min:AC C4 CC 19, null_count:0), encoding_stats:[PageEncodingStats(page_type:DATA_PAGE, encoding:PLAIN, count:1)])), ColumnChunk(file_offset:83, meta_data:ColumnMetaData(type:BYTE_ARRAY, encodings:[PLAIN, RLE, BIT_PACKED], path_in_schema:[payload], codec:UNCOMPRESSED, num_values:10, total_uncompressed_size:12365, total_compressed_size:12365, data_page_offset:83, statistics:Statistics(max:71 62 D1 02 D2 E6 81 F5 C4 4C 46 55 28 76 CC 7E 7B 1E 17 87 90 8B BB 96 7F 51 14 45 E4 C0 E2 4C EA AF C8 28 0B B0 9A 59 F0 4A 7B B8 68 ED 42 2A 25 E6 EE B1 E1 1B 76 E2 EC BE 41 6A AA 7A EA 76 10 91 8C AD 36 ED 31 69 D0 74 35 11 FE B7 48 DD E9 7F 65 28 FB 90 5C 5A 6C 42 3E D7 03 41 53 72 54 19 36 AF 72 7B EC 11 3E 34 9C E1 C0 D3 0B B8 B3 E0 43 41 F1 64 02 BC E2 84 E3 82 09 E5 0A FD..., min:98 20 DD D3 8C 07 ED A3 17 DF 82 8A 66 FF 43 99 E1 18 32 D8 A4 9F 9F 5E 4B 96 22 8A 90 EC 7F F3 E6 97 48 43 8C 7F CF CB 44 70 57 EC 2C 81 8C 09 78 53 A0 B7 CB 52 B8 F3 21 93 3C 6F C4 EA 7F 33 FF 8E 44 64 AB BE FF C5 5A 35 2A C4 40 49 C1 AC CF D3 27 EA F8 55 CE EE 36 80 07 67 44 52 4A 2E BF 5A 12 5B 07 85 9D 41 80 A5 B3 25 D7 46 D5 19 FC FA 14 92 F9 B9 29 A8 79 1C FE 83 44 1F 42 DA..., null_count:0), encoding_stats:[PageEncodingStats(page_type:DATA_PAGE, encoding:PLAIN, count:1)]))], total_byte_size:12444, num_rows:10)], key_value_metadata:[KeyValue(key:org.apache.spark.sql.parquet.row.metadata, value:{"type":"struct","fields":[{"name":"intKey","type":"integer","nullable":true,"metadata":{}},{"name":"payload","type":"binary","nullable":true,"metadata":{}}]})], created_by:parquet-mr version 1.8.2 (build aa78e929195723e4f9bf2bbad1b39e7e0277f8ea))
17/06/15 15:48:02 5801 DEBUG ParquetMetadataConverter: {
"fileMetaData" : {
"schema" : {
"name" : "spark_schema",
"repetition" : "REPEATED",
"originalType" : null,
"id" : null,
"fields" : [ {
"name" : "intKey",
"repetition" : "OPTIONAL",
"originalType" : null,
"id" : null,
"primitive" : true,
"typeLength" : 0,
"decimalMetadata" : null,
"primitiveTypeName" : "INT32"
}, {
"name" : "payload",
"repetition" : "OPTIONAL",
"originalType" : null,
"id" : null,
"primitive" : true,
"typeLength" : 0,
"decimalMetadata" : null,
"primitiveTypeName" : "BINARY"
} ],
"paths" : [ [ "intKey" ], [ "payload" ] ],
"columns" : [ {
"path" : [ "intKey" ],
"type" : "INT32",
"typeLength" : 0,
"maxRepetitionLevel" : 0,
"maxDefinitionLevel" : 1
}, {
"path" : [ "payload" ],
"type" : "BINARY",
"typeLength" : 0,
"maxRepetitionLevel" : 0,
"maxDefinitionLevel" : 1
} ],
"fieldCount" : 2,
"primitive" : false
},
"keyValueMetaData" : {
"org.apache.spark.sql.parquet.row.metadata" : "{\"type\":\"struct\",\"fields\":[{\"name\":\"intKey\",\"type\":\"integer\",\"nullable\":true,\"metadata\":{}},{\"name\":\"payload\",\"type\":\"binary\",\"nullable\":true,\"metadata\":{}}]}"
},
"createdBy" : "parquet-mr version 1.8.2 (build aa78e929195723e4f9bf2bbad1b39e7e0277f8ea)"
},
"blocks" : [ {
"columns" : [ {
"encodingStats" : {
"dictionaryEncodings" : [ ],
"dataEncodings" : [ "PLAIN" ]
},
"dictionaryPageOffset" : 0,
"valueCount" : 10,
"totalSize" : 79,
"totalUncompressedSize" : 79,
"statistics" : {
"max" : 2013584997,
"min" : 432850092,
"maxBytes" : "Zd4EeA==",
"minBytes" : "rMTMGQ==",
"numNulls" : 0,
"empty" : false
},
"firstDataPageOffset" : 4,
"codec" : "UNCOMPRESSED",
"startingPos" : 4,
"encodings" : [ "RLE", "PLAIN", "BIT_PACKED" ],
"type" : "INT32",
"path" : [ "intKey" ]
}, {
"encodingStats" : {
"dictionaryEncodings" : [ ],
"dataEncodings" : [ "PLAIN" ]
},
"dictionaryPageOffset" : 0,
"valueCount" : 10,
"totalSize" : 12365,
"totalUncompressedSize" : 12365,
"statistics" : {
"max" : null,
"min" : null,
"maxBytes" : null,
"minBytes" : null,
"numNulls" : 0,
"empty" : true
},
"firstDataPageOffset" : 83,
"codec" : "UNCOMPRESSED",
"startingPos" : 83,
"encodings" : [ "RLE", "PLAIN", "BIT_PACKED" ],
"type" : "BINARY",
"path" : [ "payload" ]
} ],
"rowCount" : 10,
"totalByteSize" : 12444,
"path" : null,
"compressedSize" : 12444,
"startingPos" : 4
} ]
}
17/06/15 15:48:02 5802 DEBUG Client: IPC Client (1725272477) connection to flex11-40g0/10.40.0.11:9000 from demo sending #40
17/06/15 15:48:02 5803 DEBUG Client: IPC Client (1725272477) connection to flex11-40g0/10.40.0.11:9000 from demo got value #40
17/06/15 15:48:02 5803 DEBUG ProtobufRpcEngine: Call: getBlockLocations took 1ms
17/06/15 15:48:02 5803 DEBUG DFSClient: newInfo = LocatedBlocks{
fileLength=14963
underConstruction=false
blocks=[LocatedBlock{BP-350330489-10.40.0.11-1492009574738:blk_1073770089_29265; getBlockSize()=14963; corrupt=false; offset=0; locs=[DatanodeInfoWithStorage[10.40.0.13:50010,DS-e0c53985-588a-4578-b388-2ab8e64ea512,DISK]]}]
lastLocatedBlock=LocatedBlock{BP-350330489-10.40.0.11-1492009574738:blk_1073770089_29265; getBlockSize()=14963; corrupt=false; offset=0; locs=[DatanodeInfoWithStorage[10.40.0.13:50010,DS-e0c53985-588a-4578-b388-2ab8e64ea512,DISK]]}
isLastBlockComplete=true}
17/06/15 15:48:02 5803 DEBUG Client: IPC Client (1725272477) connection to flex11-40g0/10.40.0.11:9000 from demo sending #41
17/06/15 15:48:02 5804 DEBUG Client: IPC Client (1725272477) connection to flex11-40g0/10.40.0.11:9000 from demo got value #41
17/06/15 15:48:02 5804 DEBUG ProtobufRpcEngine: Call: getFileInfo took 1ms
17/06/15 15:48:02 5804 DEBUG ParquetFileFormat: Appending StructType() [empty row]
17/06/15 15:48:02 5804 DEBUG DFSClient: Connecting to datanode 10.40.0.13:50010
17/06/15 15:48:02 5805 DEBUG BytesInput: BytesInput from array of 46 bytes
17/06/15 15:48:02 5805 DEBUG BytesInput: BytesInput from array of 10286 bytes
17/06/15 15:48:02 5805 DEBUG BytesInput: converted 46 to byteArray of 46 bytes
17/06/15 15:48:02 5805 DEBUG ByteBitPackingValuesReader: reading 0 bytes for 10 values of size 0 bits.
17/06/15 15:48:02 5805 DEBUG BytesInput: converted 10286 to byteArray of 10286 bytes
17/06/15 15:48:02 5805 DEBUG ByteBitPackingValuesReader: reading 0 bytes for 10 values of size 0 bits.
17/06/15 15:48:02 5806 DEBUG IntColumnBuilder: Compressor for [intKey]: org.apache.spark.sql.execution.columnar.compression.PassThrough$Encoder@1dd8a342, ratio: 1.0
17/06/15 15:48:02 5807 INFO MemoryStore: Block rdd_4_1 stored as values in memory (estimated size 50.8 KB, free 34.0 GB)
17/06/15 15:48:02 5810 DEBUG BlockManagerMaster: Updated info of block rdd_4_1
17/06/15 15:48:02 5810 DEBUG BlockManager: Told master about block rdd_4_1
17/06/15 15:48:02 5810 DEBUG BlockManager: Put block rdd_4_1 locally took 65 ms
17/06/15 15:48:02 5810 DEBUG BlockManager: Putting block rdd_4_1 without replication took 65 ms
17/06/15 15:48:02 5810 DEBUG BlockManager: Getting local block rdd_4_1
17/06/15 15:48:02 5810 DEBUG BlockManager: Level for block rdd_4_1 is StorageLevel(disk, memory, deserialized, 1 replicas)
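A storage level of (disk, memory, deserialized, 1 replicas) is MEMORY_AND_DISK with deserialized rows, the default level for Dataset caching in Spark 2.x; block rdd_4_1 is one cached partition of the scanned table. The driver program is not visible in this executor log, but code of roughly this shape would produce these blocks:

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;
    import org.apache.spark.storage.StorageLevel;

    public class CacheExample {
        public static void main(String[] args) {
            SparkSession spark = SparkSession.builder().appName("cache").getOrCreate();
            Dataset<Row> df = spark.read().parquet("hdfs://flex11-40g0:9000/sql/small.parquet");
            // StorageLevel(disk, memory, deserialized, 1 replicas), as logged above.
            df.persist(StorageLevel.MEMORY_AND_DISK());
            df.count();  // materializes cached blocks like rdd_4_1 in the executors
            spark.stop();
        }
    }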
17/06/15 15:48:02 5811 DEBUG GeneratePredicate: Generated predicate 'true':
/* 001 */ public SpecificPredicate generate(Object[] references) {
/* 002 */ return new SpecificPredicate(references);
/* 003 */ }
/* 004 */
/* 005 */ class SpecificPredicate extends org.apache.spark.sql.catalyst.expressions.codegen.Predicate {
/* 006 */ private final Object[] references;
/* 007 */
/* 008 */
/* 009 */ public SpecificPredicate(Object[] references) {
/* 010 */ this.references = references;
/* 011 */
/* 012 */ }
/* 013 */
/* 014 */ public void initialize(int partitionIndex) {
/* 015 */
/* 016 */ }
/* 017 */
/* 018 */
/* 019 */
/* 020 */ public boolean eval(InternalRow i) {
/* 021 */
/* 022 */ return !false && true;
/* 023 */ }
/* 024 */ }
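The generated predicate is constant: the filter condition was folded to literal true, so eval() returns !false && true for every row and nothing is actually filtered out. Spark compiles such source strings in-process with the Janino compiler rather than shelling out to javac; a minimal sketch of the same runtime-compilation technique (plain Janino, not Spark's internal wrapper):

    import org.codehaus.janino.ScriptEvaluator;

    public class CompileAtRuntime {
        public static void main(String[] args) throws Exception {
            ScriptEvaluator se = new ScriptEvaluator();
            se.setReturnType(boolean.class);
            // The body Spark generated above, compiled on the fly.
            se.cook("return !false && true;");
            System.out.println(se.evaluate(new Object[0]));  // true
        }
    }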
17/06/15 15:48:02 5813 DEBUG GenerateColumnAccessor: Generated ColumnarIterator:
/* 001 */ import java.nio.ByteBuffer;
/* 002 */ import java.nio.ByteOrder;
/* 003 */ import scala.collection.Iterator;
/* 004 */ import org.apache.spark.sql.types.DataType;
/* 005 */ import org.apache.spark.sql.catalyst.expressions.codegen.BufferHolder;
/* 006 */ import org.apache.spark.sql.catalyst.expressions.codegen.UnsafeRowWriter;
/* 007 */ import org.apache.spark.sql.execution.columnar.MutableUnsafeRow;
/* 008 */
/* 009 */ public SpecificColumnarIterator generate(Object[] references) {
/* 010 */ return new SpecificColumnarIterator();
/* 011 */ }
/* 012 */
/* 013 */ class SpecificColumnarIterator extends org.apache.spark.sql.execution.columnar.ColumnarIterator {
/* 014 */
/* 015 */ private ByteOrder nativeOrder = null;
/* 016 */ private byte[][] buffers = null;
/* 017 */ private UnsafeRow unsafeRow = new UnsafeRow(0);
/* 018 */ private BufferHolder bufferHolder = new BufferHolder(unsafeRow);
/* 019 */ private UnsafeRowWriter rowWriter = new UnsafeRowWriter(bufferHolder, 0);
/* 020 */ private MutableUnsafeRow mutableRow = null;
/* 021 */
/* 022 */ private int currentRow = 0;
/* 023 */ private int numRowsInBatch = 0;
/* 024 */
/* 025 */ private scala.collection.Iterator input = null;
/* 026 */ private DataType[] columnTypes = null;
/* 027 */ private int[] columnIndexes = null;
/* 028 */
/* 029 */
/* 030 */
/* 031 */ public SpecificColumnarIterator() {
/* 032 */ this.nativeOrder = ByteOrder.nativeOrder();
/* 033 */ this.buffers = new byte[0][];
/* 034 */ this.mutableRow = new MutableUnsafeRow(rowWriter);
/* 035 */ }
/* 036 */
/* 037 */ public void initialize(Iterator input, DataType[] columnTypes, int[] columnIndexes) {
/* 038 */ this.input = input;
/* 039 */ this.columnTypes = columnTypes;
/* 040 */ this.columnIndexes = columnIndexes;
/* 041 */ }
/* 042 */
/* 043 */
/* 044 */
/* 045 */ public boolean hasNext() {
/* 046 */ if (currentRow < numRowsInBatch) {
/* 047 */ return true;
/* 048 */ }
/* 049 */ if (!input.hasNext()) {
/* 050 */ return false;
/* 051 */ }
/* 052 */
/* 053 */ org.apache.spark.sql.execution.columnar.CachedBatch batch = (org.apache.spark.sql.execution.columnar.CachedBatch) input.next();
/* 054 */ currentRow = 0;
/* 055 */ numRowsInBatch = batch.numRows();
/* 056 */ for (int i = 0; i < columnIndexes.length; i ++) {
/* 057 */ buffers[i] = batch.buffers()[columnIndexes[i]];
/* 058 */ }
/* 059 */
/* 060 */
/* 061 */ return hasNext();
/* 062 */ }
/* 063 */
/* 064 */ public InternalRow next() {
/* 065 */ currentRow += 1;
/* 066 */ bufferHolder.reset();
/* 067 */ rowWriter.zeroOutNullBytes();
/* 068 */
/* 069 */ unsafeRow.setTotalSize(bufferHolder.totalSize());
/* 070 */ return unsafeRow;
/* 071 */ }
/* 072 */ }
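The generated ColumnarIterator flattens an iterator of CachedBatch objects into an iterator of rows: hasNext() loads the next batch when the current one is exhausted and recurses, while next() advances within the batch. Note the output row is new UnsafeRow(0): zero columns are read back from the cache at this point, only the row count matters to this stage. The control flow is the ordinary nested-iterator pattern; a generic, self-contained sketch of it:

    import java.util.Arrays;
    import java.util.Iterator;
    import java.util.List;

    public class BatchIterator implements Iterator<Integer> {
        private final Iterator<int[]> batches;
        private int[] batch = new int[0];
        private int pos = 0;

        BatchIterator(Iterator<int[]> batches) { this.batches = batches; }

        public boolean hasNext() {
            if (pos < batch.length) return true;   // rows left in current batch
            if (!batches.hasNext()) return false;  // no more batches
            batch = batches.next();                // load the next batch,
            pos = 0;
            return hasNext();                      // then re-check (it may be empty)
        }

        public Integer next() { return batch[pos++]; }

        public static void main(String[] args) {
            List<int[]> data = Arrays.asList(new int[] {1, 2}, new int[0], new int[] {3});
            new BatchIterator(data.iterator()).forEachRemaining(System.out::println); // 1 2 3
        }
    }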
17/06/15 15:48:02 5814 INFO CrailShuffleWriter: shuffler writer: initTime 19.0, runTime 72051.0, initRatio 3792.157894736842, overhead 0.02637020999014587
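The CrailShuffleWriter numbers are internally consistent: initRatio is runTime divided by initTime, and overhead appears to be initTime expressed as a percentage of runTime (the time unit is not stated in the log). Checking the arithmetic:

    public class ShuffleMetrics {
        public static void main(String[] args) {
            double initTime = 19.0, runTime = 72051.0;       // from the log line above
            System.out.println(runTime / initTime);          // 3792.157..., matches initRatio
            System.out.println(100.0 * initTime / runTime);  // 0.026370..., matches overhead
        }
    }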
17/06/15 15:48:02 5818 INFO Executor: Finished task 1.0 in stage 1.0 (TID 2). 2664 bytes result sent to driver
17/06/15 15:48:02 5840 INFO CoarseGrainedExecutorBackend: Got assigned task 3
17/06/15 15:48:02 5840 INFO Executor: Running task 0.0 in stage 2.0 (TID 3)
17/06/15 15:48:02 5843 DEBUG Executor: Task 3's epoch is 1
17/06/15 15:48:02 5844 INFO MapOutputTrackerWorker: Updating epoch to 1 and clearing cache
17/06/15 15:48:02 5851 INFO CrailShuffleManager: loading shuffler sorter org.apache.spark.shuffle.CrailSparkShuffleSorter
17/06/15 15:48:02 5853 INFO CrailSparkShuffleSorter: crail shuffle spark sorter
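Task 3 is the reduce side of the shuffle, so the Crail shuffle manager loads its pluggable sorter to merge the map outputs it fetches from Crail. The plugin is selected through Spark configuration; a hedged sketch: spark.shuffle.manager is a standard Spark key, while the sorter key name below is an assumption inferred from this log line, not a confirmed setting:

    import org.apache.spark.SparkConf;

    public class CrailShuffleConf {
        public static void main(String[] args) {
            SparkConf conf = new SparkConf()
                // Standard Spark knob: swap in the Crail shuffle implementation.
                .set("spark.shuffle.manager", "org.apache.spark.shuffle.CrailShuffleManager")
                // Assumed key name; the class is the one logged above.
                .set("spark.crail.shuffle.sorter", "org.apache.spark.shuffle.CrailSparkShuffleSorter");
            System.out.println(conf.toDebugString());
        }
    }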
17/06/15 15:48:02 5859 DEBUG CodeGenerator:
/* 001 */ public Object generate(Object[] references) {
/* 002 */ return new GeneratedIterator(references);
/* 003 */ }
/* 004 */
/* 005 */ final class GeneratedIterator extends org.apache.spark.sql.execution.BufferedRowIterator {
/* 006 */ private Object[] references;
/* 007 */ private scala.collection.Iterator[] inputs;
/* 008 */ private boolean agg_initAgg;
/* 009 */ private boolean agg_bufIsNull;
/* 010 */ private long agg_bufValue;
/* 011 */ private scala.collection.Iterator inputadapter_input;
/* 012 */ private org.apache.spark.sql.execution.metric.SQLMetric agg_numOutputRows;
/* 013 */ private org.apache.spark.sql.execution.metric.SQLMetric agg_aggTime;
/* 014 */ private UnsafeRow agg_result;
/* 015 */ private org.apache.spark.sql.catalyst.expressions.codegen.BufferHolder agg_holder;
/* 016 */ private org.apache.spark.sql.catalyst.expressions.codegen.UnsafeRowWriter agg_rowWriter;
/* 017 */
/* 018 */ public GeneratedIterator(Object[] references) {
/* 019 */ this.references = references;
/* 020 */ }
/* 021 */
/* 022 */ public void init(int index, scala.collection.Iterator[] inputs) {
/* 023 */ partitionIndex = index;
/* 024 */ this.inputs = inputs;
/* 025 */ agg_initAgg = false;
/* 026 */
/* 027 */ inputadapter_input = inputs[0];
/* 028 */ this.agg_numOutputRows = (org.apache.spark.sql.execution.metric.SQLMetric) references[0];
/* 029 */ this.agg_aggTime = (org.apache.spark.sql.execution.metric.SQLMetric) references[1];
/* 030 */ agg_result = new UnsafeRow(1);
/* 031 */ this.agg_holder = new org.apache.spark.sql.catalyst.expressions.codegen.BufferHolder(agg_result, 0);
/* 032 */ this.agg_rowWriter = new org.apache.spark.sql.catalyst.expressions.codegen.UnsafeRowWriter(agg_holder, 1);
/* 033 */
/* 034 */ }
/* 035 */
/* 036 */ private void agg_doAggregateWithoutKey() throws java.io.IOException {
/* 037 */ // initialize aggregation buffer
/* 038 */ agg_bufIsNull = false;
/* 039 */ agg_bufValue = 0L;
/* 040 */
/* 041 */ while (inputadapter_input.hasNext()) {
/* 042 */ InternalRow inputadapter_row = (InternalRow) inputadapter_input.next();
/* 043 */ long inputadapter_value = inputadapter_row.getLong(0);
/* 044 */
/* 045 */ // do aggregate
/* 046 */ // common sub-expressions
/* 047 */
/* 048 */ // evaluate aggregate function
/* 049 */ boolean agg_isNull3 = false;
/* 050 */
/* 051 */ long agg_value3 = -1L;
/* 052 */ agg_value3 = agg_bufValue + inputadapter_value;
/* 053 */ // update aggregation buffer
/* 054 */ agg_bufIsNull = false;
/* 055 */ agg_bufValue = agg_value3;
/* 056 */ if (shouldStop()) return;
/* 057 */ }
/* 058 */
/* 059 */ }
/* 060 */
/* 061 */ protected void processNext() throws java.io.IOException {
/* 062 */ while (!agg_initAgg) {
/* 063 */ agg_initAgg = true;
/* 064 */ long agg_beforeAgg = System.nanoTime();
/* 065 */ agg_doAggregateWithoutKey();
/* 066 */ agg_aggTime.add((System.nanoTime() - agg_beforeAgg) / 1000000);
/* 067 */
/* 068 */ // output the result
/* 069 */
/* 070 */ agg_numOutputRows.add(1);
/* 071 */ agg_rowWriter.zeroOutNullBytes();
/* 072 */
/* 073 */ if (agg_bufIsNull) {
/* 074 */ agg_rowWriter.setNullAt(0);
/* 075 */ } else {
/* 076 */ agg_rowWriter.write(0, agg_bufValue);
/* 077 */ }
/* 078 */ append(agg_result);
/* 079 */ }
/* 080 */ }
/* 081 */ }
17/06/15 15:48:02 5872 INFO CodeGenerator: Code generated in 13.903175 ms
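The GeneratedIterator above is the whole-stage-codegen form of an ungrouped long sum: agg_doAggregateWithoutKey() folds every input row into agg_bufValue and emits a single one-column UnsafeRow. The original query is not visible in this log, but an aggregate of the following shape compiles to exactly this kind of loop (column name borrowed from the Parquet schema above):

    import static org.apache.spark.sql.functions.col;
    import static org.apache.spark.sql.functions.sum;

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;

    public class SumExample {
        public static void main(String[] args) {
            SparkSession spark = SparkSession.builder().appName("sum").getOrCreate();
            Dataset<Row> df = spark.read().parquet("hdfs://flex11-40g0:9000/sql/small.parquet");
            // Ungrouped aggregate; codegen produces a doAggregateWithoutKey loop.
            df.agg(sum(col("intKey"))).show();
            spark.stop();
        }
    }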
17/06/15 15:48:02 5895 INFO Executor: Finished task 0.0 in stage 2.0 (TID 3). 1442 bytes result sent to driver
17/06/15 15:48:02 5980 INFO CoarseGrainedExecutorBackend: Driver commanded a shutdown
17/06/15 15:48:02 5982 INFO CrailShuffleManager: shutting down crail shuffle manager
17/06/15 15:48:02 5983 INFO CrailStore: stopping CrailStore
17/06/15 15:48:02 5983 INFO crail: Closing CrailFS singleton
17/06/15 15:48:02 5983 INFO crail: mapped client cache closed
17/06/15 15:48:02 6187 INFO crail: CrailStatistics, tag=close
17/06/15 15:48:02 6187 INFO crail: provider=buffered/in [totalOps 5, blockingOps 0, nonBlockingOps 5]
17/06/15 15:48:02 6187 INFO crail: provider=buffered/out [totalOps 0, blockingOps 0, nonBlockingOps 0]
17/06/15 15:48:02 6187 INFO crail: provider=cache/endpoint [size 7]
17/06/15 15:48:02 6187 INFO crail: provider=core/input [total 6, localOps 1, remoteOps 5, localDirOps 0, remoteDirOps 0, cached 6, nonBlocking 0, blocking 0, prefetched 0, prefetchedNonBlocking 0, prefetchedBlocking 0, capacity 173780, totalStreams 6, avgCapacity 28963, avgOpLen 28963]
17/06/15 15:48:02 6187 INFO crail: provider=cache/buffer [cacheGet 24, cachePut 24, cacheMiss 0, cacheSize 0, cacheMax 17, mapMiss 0, mapHeap 0]
17/06/15 15:48:02 6187 INFO crail: provider=core/output [total 3, localOps 2, remoteOps 1, localDirOps 0, remoteDirOps 0, cached 3, nonBlocking 0, blocking 0, prefetched 0, prefetchedNonBlocking 0, prefetchedBlocking 0, capacity 572, totalStreams 3, avgCapacity 190, avgOpLen 184]
17/06/15 15:48:02 6187 INFO crail: provider=core/streams [open 9, openInput 6, openOutput 3, openInputDir 0, openOutputDir 0, close 9, closeInput 6, closeOutput 3, closeInputDir 0, closeOutputDir 0, maxInput 2, maxOutput 2]
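The closing CrailStatistics block totals the executor's Crail traffic: 6 input streams carried 173780 bytes and 3 output streams carried 572 bytes. The per-stream averages appear to be simple integer division of capacity by totalStreams:

    public class CrailAverages {
        public static void main(String[] args) {
            // core/input: capacity 173780 over totalStreams 6
            System.out.println(173780 / 6);  // 28963, the logged avgCapacity
            // core/output: capacity 572 over totalStreams 3
            System.out.println(572 / 3);     // 190, the logged avgCapacity
        }
    }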
17/06/15 15:48:02 6191 INFO MemoryStore: MemoryStore cleared
17/06/15 15:48:02 6192 INFO BlockManager: BlockManager stopped
17/06/15 15:48:02 6203 INFO ShutdownHookManager: Shutdown hook called
17/06/15 15:48:02 6204 INFO ShutdownHookManager: Deleting directory /mnt/tmpfs/tmp/nm-local-dir/usercache/demo/appcache/application_1496059230928_0084/spark-a5be3258-a214-42cc-aecc-d4e362b6e021
17/06/15 15:48:02 6207 DEBUG Client: stopping client from cache: org.apache.hadoop.ipc.Client@7370bc19
17/06/15 15:48:02 6207 DEBUG Client: removing client from cache: org.apache.hadoop.ipc.Client@7370bc19
17/06/15 15:48:02 6207 DEBUG Client: stopping actual client because no more references remain: org.apache.hadoop.ipc.Client@7370bc19
17/06/15 15:48:02 6207 DEBUG Client: Stopping client
17/06/15 15:48:02 6207 DEBUG Client: IPC Client (1725272477) connection to flex11-40g0/10.40.0.11:9000 from demo: closed
17/06/15 15:48:02 6207 DEBUG Client: IPC Client (1725272477) connection to flex11-40g0/10.40.0.11:9000 from demo: stopped, remaining connections 0
demo@flex13:~/crail-deployment/hadoop/logs/userlogs$