Wednesday 30 November 2016

Should you share arguments you think are invalid?

There is an idea. There are n arguments for it and n arguments against it. When discussing the idea with someone else, I previously argued that I should reveal all the arguments on both sides, as opposed to ignoring arguments for the other side in an effort to manipulate the other person into agreeing with me. What about cases where I genuinely believe that certain arguments are invalid?
My assessment of an argument's validity isn't binary. Arguments can be more or less true/good/valid. Complicating things further, an argument's sum value is not the only thing that matters in my assessment of it. My confidence in my own abilities of assessment (based on my level of skill, the strength of my biases, etc.) also matters. Let's put the two together into an overall assessment of argumentative strength. Let's say that arguments scoring below a certain value are the ones I dislike enough to feel confident assuming are invalid garbage.
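A rough sketch of what that combined score might look like, in code (the formula, names, and threshold here are my own arbitrary illustrative choices, not a claim about the right ones):

    # Sketch of the combined assessment: perceived validity, discounted by
    # how much I trust my own judgement. Both inputs are in [0, 1].
    def argument_strength(perceived_validity: float, assessment_confidence: float) -> float:
        # Low confidence pulls the score up toward 1, so uncertainty about
        # my own assessment protects an argument from being dropped.
        return 1 - assessment_confidence * (1 - perceived_validity)

    DISCARD_THRESHOLD = 0.1  # below this, an argument feels safe to call garbage

    # A terrible-looking argument I'm very sure about: droppable.
    print(argument_strength(0.05, 0.95) < DISCARD_THRESHOLD)  # True
    # The same argument when I distrust my own assessment: keep it.
    print(argument_strength(0.05, 0.30) < DISCARD_THRESHOLD)  # False

The point of this particular shape is that low confidence in my own assessment should make me less willing to drop an argument, not more.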

Should I exclude some arguments, not because I want to persuade, but because I want to be succinct? Yes, obviously. Otherwise I would list all the arguments imaginable for every position, making communication impossible. How do I ensure that my own biases don't cause me to unduly assign low value to arguments for the side I disagree with? I don't know. The best solution I can think of is to generally be very wary of deciding arguments are low-value, and to share most of them regardless.

A good teacher.

Choosing to present only arguments and evidence which favor your beliefs is wrong.

Presenting all the arguments for both sides, and in doing so letting others come to their own conclusion, is right.

Most who live by the former do so because of their own arrogance and totalitarian tendencies.

Wednesday 16 November 2016

More on why paradoxes (as they're used today) are stupid

Imagine I say this.

1. In this world, if A > B and B > C then A > C.
2. Building A is bigger than building B. B is bigger than C. C is bigger than A.


Is this a startling problem which undermines our very notion of reality? No, it isn't. It's just me being wrong, either about 2 or about 1.
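To make that concrete, here is a brute-force check (my own sketch, in Python) showing that no assignment of sizes can satisfy all three claims at once, i.e. the "paradox" is just an inconsistent set of statements:

    from itertools import product

    # Try every assignment of sizes to buildings A, B, C. Because '>' on
    # numbers is transitive (rule 1), the claims in 2 can never hold together.
    satisfiable = any(a > b and b > c and c > a
                      for a, b, c in product(range(10), repeat=3))
    print(satisfiable)  # False: the claims are jointly unsatisfiable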

The problem with paradoxes is that the rules behind the paradox define a certain world, and the example set in that world defies those rules. This is an impossible situation which exists purely in the philosopher's mind. It is as worthless as asking why, in an imaginary world without water, there is water.


Exception 1: Paradoxes to prompt thinking
Exception 2: Paradoxes to reveal fundamentally irreconcilable beliefs
Exception 3: Paradoxes to reveal bad recursive definitions (e.g. Russell)

Tuesday 15 November 2016

Paradoxes and philosophical idiocy

From Russell's paradox to Zeno's, a significant proportion of philosophy is dedicated to paradoxes of various kinds. This is, usually, bullshit.

Paradoxes can be useful when they reveal inherent problems in our models of the world. Usually they don't do this. Instead, they use definitions to create impossible situations which have no parallel in reality.

e.g. A is a liar. Everything they say is untrue. A says that they are a liar.

The usual solutions are to ignore the actual meaning of the statement and retreat into language games (liar != untrue, truth != relation to reality) or to retreat into complexity. The real answer is far simpler. If A requires B, and B implies not A, then one of the rules must be wrong. e.g. if A always lies and says they lie, then it must be true that they do not always lie, or that they did not say that they lie.

Assuming there is a truth in the real world (either A lies or doesn't), then a lying A cannot say that they do lie, and vice versa*.
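The same point as a truth-table check (my own sketch, using the strong reading from the footnote below: not always lying means always being truthful):

    # L is the proposition "A always lies"; A's utterance is L itself.
    # A liar's utterances must be false and a truth-teller's must be true,
    # so the utterance's truth value is forced to be (not L).
    def consistent(always_lies: bool) -> bool:
        statement_truth = always_lies     # the utterance asserts "A always lies"
        forced_truth = not always_lies    # what A's type requires of the utterance
        return statement_truth == forced_truth

    print([v for v in (True, False) if consistent(v)])  # []: no consistent world

No assignment works, which is exactly the conclusion above: one of the stated rules has to go.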

* Assuming the opposite of always lying is always being truthful.

Monday 14 November 2016

A word for problems that cannot be solved due to structural features of the species

I'm looking for a concept handle.

In The Mote in God's Eye, the Moties are locked into a cycle of collapse due to the forces of evolution and the way their society is structured. Ditto for X* in Worm. Ditto for us (see next post).

Certain problems are known, solvable, and serious, and yet are not solved. In some cases this is due to specific factors existing at that time (e.g. firms in the US can donate to politicians --> no anti-pollution laws). In other cases, it is due to the nature of a certain intelligence or collective of intelligences. (Hard to tell what is unchangeable/intrinsic and what isn't; it's not implausible that an intelligence could reshape itself (not the same intelligence anymore) or drastically reshape its own institutions.)

Need a word/term for problems that are unsolvable due to the way the species operates.

* Spoiler

Sunday 13 November 2016

God in a Box --> The Simulation Problem

We recreate systems inside virtual worlds to better predict and understand them. This is simulation. The more granular/high-resolution the simulation, the more accurate* its results (usually).

If you want to understand/predict intelligent life, you simulate it. If you want to simulate it well, you do so with high granularity. The higher the granularity, the more the simulated life is actually alive. This is the traditional (ethical) simulation problem.

If you want to predict whether creating a greater intelligence would be safe, you simulate it and see how it acts / whether it escapes in the simulated world. The problem is that you likely cannot keep a god in a box. The super-intelligence may be able to realise it is simulated and escape the simulation. This is the AI simulation problem?

* Need to introduce a probability / absolute-vs-general distinction into English.