@sorenmacbeth no, but I have run into strange classpath problems because CDH includes an old version of parquet-mr and I want a later build.
Replying to @avibryant
@sorenmacbeth we always have to do HADOOP_USER_CLASSPATH_FIRST=true to get around that.
Replying to @avibryant
@avibryant yep, that fixed it. thanks again. hail satan, etc etc.
Replying to @sorenmacbeth
@sorenmacbeth @avibryant I have no idea why that is not set to true by default. I almost always have to set it.
Replying to @amcclosky
@amcclosky @avibryant took me a minute to figure out I also had to set HADOOP_CLASSPATH to my uberjar as well to get it to work.
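Putting the two workarounds from this thread together, a minimal launch sketch (the jar path and main class are placeholders, not from the thread):

```shell
# Workarounds discussed above, as a launch sketch.

# Prefer jars on the user classpath over the (older) ones CDH bundles,
# e.g. to pick up a newer parquet-mr than the one CDH ships:
export HADOOP_USER_CLASSPATH_FIRST=true

# Also make the uberjar visible to the launcher JVM itself:
export HADOOP_CLASSPATH=/path/to/myjob-uberjar.jar

# Then launch as usual (commented out here; requires a Hadoop install):
# hadoop jar /path/to/myjob-uberjar.jar com.example.MyJob input output
```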
Replying to @sorenmacbeth
@sorenmacbeth @amcclosky yeah, I think the fact that you always have to set those things is someone's sick idea of job security.
Replying to @avibryant
@sorenmacbeth @amcclosky speaking of which: ever since expanding our cluster, datanodes now occasionally freak out with constant OOME (1/2)
Replying to @avibryant
@sorenmacbeth @amcclosky specifically, OutOfMemoryError: Direct buffer memory. Restarting fixes them for a while. Ever seen this? Thoughts?
Replying to @busbeytheelder
@busbeytheelder CDH 4.5. Small cluster, increased from 24 to 36 nodes. Running Impala, MR1 (not YARN).
@busbeytheelder when it gets bad, seeing thousands of OOME per hour, and unresponsive nodes (but still alive according to the namenode).
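For context on the error above (this is background, not a confirmed fix from the thread): "OutOfMemoryError: Direct buffer memory" means off-heap NIO buffers hit the JVM's direct-memory cap, set by -XX:MaxDirectMemorySize, rather than the Java heap being exhausted. A hypothetical hadoop-env.sh fragment raising that cap for the DataNode JVM might look like:

```shell
# Hypothetical hadoop-env.sh fragment, not from the thread: raise the
# off-heap (direct buffer) cap for the DataNode JVM. The 1g value is an
# illustrative assumption; the right number depends on the workload.
export HADOOP_DATANODE_OPTS="-XX:MaxDirectMemorySize=1g ${HADOOP_DATANODE_OPTS}"
```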