Unable to change dfs.blocksize & dfs.namenode.fs-limits.min-block-size?

Category: SQL Server HDInsight

Question

ZKats on Fri, 24 Oct 2014 15:09:32


Hi all,

I'm unable to adjust the -Ddfs.blocksize parameter. I'd like to set it to a value below 64 MB to induce more mappers, but the parameter remains unaltered when I inspect the conf.xml file for the Hadoop job I submit. Is it a final property? Similarly, I'm unable to change -Ddfs.namenode.fs-limits.min-block-size.

I'm submitting jobs via the REST API from Java code, and I'd like to be able to change the value programmatically and dynamically, rather than RDPing into the Hadoop cluster and editing the core/mapred/hdfs-default.xml files.
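(For reference, a submission along these lines through the WebHCat/Templeton REST endpoint looks roughly like the sketch below. The cluster name, credentials, jar, class, and paths are placeholders, and whether the dfs.blocksize override actually takes effect depends on whether the property is marked final on the cluster side, which is exactly what I'm asking about.)

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.util.Base64;

public class SubmitJob {
    public static void main(String[] args) throws Exception {
        // Placeholder cluster name and credentials -- substitute your own.
        String cluster = "mycluster";
        String user = "admin";
        String password = "password";

        URL url = new URL("https://" + cluster
                + ".azurehdinsight.net/templeton/v1/mapreduce/jar?user.name=" + user);

        // Form-encoded body: jar/class identify the job; each "define"
        // entry is the REST equivalent of -D NAME=VALUE on the command line.
        String body = "jar=" + enc("/example/jars/wordcount.jar")
                + "&class=" + enc("org.example.WordCount")
                + "&define=" + enc("dfs.blocksize=33554432") // 32 MB, below the 64 MB default
                + "&arg=" + enc("/example/input")
                + "&arg=" + enc("/example/output")
                + "&statusdir=" + enc("/example/status");

        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
        conn.setRequestProperty("Authorization", "Basic " + Base64.getEncoder()
                .encodeToString((user + ":" + password).getBytes("UTF-8")));

        try (OutputStream out = conn.getOutputStream()) {
            out.write(body.getBytes("UTF-8"));
        }
        // A successful submission returns HTTP 200 with a JSON body containing the job id.
        System.out.println("HTTP " + conn.getResponseCode());
    }

    private static String enc(String s) throws Exception {
        return URLEncoder.encode(s, "UTF-8");
    }
}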

Any thoughts or insights?

Replies

AmarpreetBassasn on Thu, 03 Sep 2015 08:18:09


Hi,

You can set the split size for a streaming job by passing -D fs.azure.block.size=<value> when you submit it.
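Since you're submitting over REST, here is a rough sketch of adding that property to a WebHCat streaming submission via its "define" parameter (the input/output paths, mapper/reducer commands, and block-size value are placeholders; fs.azure.block.size is specified in bytes):

import java.net.URLEncoder;

public class StreamingBody {
    public static void main(String[] args) throws Exception {
        // Form-encoded body for POST to
        // https://<cluster>.azurehdinsight.net/templeton/v1/mapreduce/streaming
        String body = "input=" + enc("/example/input")
                + "&output=" + enc("/example/output")
                + "&mapper=" + enc("/bin/cat")
                + "&reducer=" + enc("/usr/bin/wc")
                + "&define=" + enc("fs.azure.block.size=33554432"); // 32 MB
        System.out.println(body);
    }

    private static String enc(String s) throws Exception {
        return URLEncoder.encode(s, "UTF-8");
    }
}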

I will be marking this thread as "proposed as answer," but you can always reply to this thread for continued assistance.

Best,
Amar