Where is the audio for Shapiro and you!!! I need a mind boner.
-
Tweet unavailable
-
I have enthusiasm because they are both intelligent people and good speakers. Just because I don't agree with Shapiro doesn't mean I can't listen to him. I definitely won't call him an idiot just because he has different opinions.
End of conversation
New conversation
Wow - terrific listening on my Boxing Day destress bouncy walk (not quite a jog). My partner traumatised a Jehovah's Witness who came knocking - he asked him, "How do I know you're not the antichrist?" JW: "Because it's not true." MP: "That's what he would say too."

-
I studied with the Jehovah’s Witnesses for one year out of curiosity.
End of conversation
New conversation
You're atheism's Deepak Chopra http://www.echoplexmedia.com/new-blog/2017/11/4/is-sam-harris-atheisms-deepak-chopra …
-
There's absolutely nothing in this tripe worth anyone's attention. I did the dirty work by reading it so that you don't have to.
End of conversation
New conversation
Please have a neutral guest to discuss Russia. Niall Ferguson or Steve Cohen. Thanks.
-
I’d love him to discuss this with @ClarkeMicah
-
Yes. That would be epic.
End of conversation
New conversation
Still waiting on the Ben Shapiro podcast... which happened before this one... @benshapiro
-
When my dad died, using http://ancestry.com to trace his family was so therapeutic for me. I am so excited for the humankind family tree!
-
Love your podcast font, Sam.
-
One thing to consider on changing values: Imagine an AI whose behavior is focused on optimizing some set of its intrinsic and instrumental values. Then imagine it can change some of these values on the fly with difficulty X. If that AI ever finds an obstacle to optimizing its...
-
...values that's more difficult to overcome than X, the optimal behavior for optimizing its values will then be to change its values. As X gets easier and easier, fewer and fewer real-world obstacles may get overcome. This could even be an answer to the Fermi Paradox.
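A minimal sketch of the decision rule being described here, assuming a single scalar cost for both overcoming an obstacle and rewriting a value; all names and numbers are illustrative, not from the thread:

```python
# Hypothetical model of the argument above: an agent holds a set of values,
# each blocked by a real-world obstacle with some cost, and it can instead
# rewrite a value at a fixed cost x.
from dataclasses import dataclass


@dataclass
class Value:
    name: str
    obstacle_cost: float  # effort required to actually satisfy this value


def act(values: list[Value], x: float) -> list[str]:
    """For each value, do whichever is cheaper: overcome the obstacle,
    or rewrite the value itself (the thread's 'change its values' move)."""
    actions = []
    for v in values:
        if v.obstacle_cost > x:
            actions.append(f"rewrite {v.name!r} (rewrite cost {x} < obstacle {v.obstacle_cost})")
        else:
            actions.append(f"pursue {v.name!r} (obstacle cost {v.obstacle_cost})")
    return actions


if __name__ == "__main__":
    vals = [Value("colonise the galaxy", 9.0), Value("keep the lights on", 1.0)]
    for x in (10.0, 5.0, 0.5):  # rewriting values gets progressively easier
        print(f"x = {x}:", *act(vals, x), sep="\n  ")
```

As the rewrite cost x shrinks, every value whose obstacle costs more than x gets edited away rather than pursued, which is the mechanism the tweet offers as a possible Fermi Paradox answer.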
-
It’s a good thing “people” (including AIs) don’t want to optimize their values

-
What do you mean?
-
If people wanted to optimise their values like you suggested above, then we should expect meat eaters to adopt values like “harming animals is good.” Rather, people try to find/produce good explanations and act according to those.
-
Thus the qualifier "...some set of its values..." It's not enough to say agents "satisfy" some set of their values, because it's very common for them to sacrifice some short-term values in order to satisfy other long-term values. That's why the term "optimize" is more accurate.
-
People have complex sets of values that often contradict one another, and the more intrinsic a value is, the more difficult (or impossible in practice) it is to adjust. And "harming animals is good" is a proposition, not a value in this context.
-
Yes, fair enough. But then values are derived from propositions, aren’t they? I think your original point was that certain values are harder to achieve than they are to change. If so, there must be scenarios of the kind I tried to describe.
- 4 more replies