Wednesday 30 November 2016

Should you share arguments you think are invalid?

There is an idea. There are n arguments for it and n arguments against it. When discussing the idea with someone else, I previously argued that I should reveal all the arguments on both sides, rather than ignoring arguments for the other side in an effort to manipulate the other person into agreeing with me. But what about cases where I genuinely believe that certain arguments are invalid?
My assessment of an argument's validity isn't binary. Arguments can be more or less true/good/valid. Complicating things further, an argument's sum value is not the only thing that matters in my assessment of it. My confidence in my own abilities of assessment (based on my level of skill, strength of biases etc.) also matters. Let's put the two together into an overall assessment of argumentative strength, and say that arguments scoring below a certain value are the ones I dislike enough to feel confident dismissing as invalid garbage.

Should I exclude some arguments, not because I want to persuade, but because I want to be succinct? Yes, obviously. Otherwise I would list all the arguments imaginable for every position, making communication impossible. How do I ensure that my own biases don't cause me to unduly assign low value to arguments for the side I disagree with? I don't know. The best solution I can think of is to generally be very wary of deciding arguments are low-value, and to share most of them regardless.

A good teacher.

Choosing to present only arguments and evidence which favors your beliefs is wrong.

Presenting all the arguments for both sides, and in doing so letting others come to their own conclusion, is right.

Most who live by the former do so because of their own arrogance and totalitarian tendencies.

Wednesday 16 November 2016

More on why paradoxes (as they're used today) are stupid

Imagine I say this.

1. In this world, if A > B and B > C then A > C.
2. Building A is bigger than building B. B is bigger than C. C is bigger than A.


Is this a startling problem which undermines our very notion of reality? No, it isn't. It's just me being wrong, either about 2 or about 1.

The problem with paradoxes is that the rules behind the paradox define a certain world, and the example set in that world defies those rules. This is an impossible situation which exists purely in the philosopher's mind. It is as worthless as asking why, in an imaginary world without water, there is water.


Exception 1: Paradoxes to prompt thinking
Exception 2: Paradoxes to reveal fundamentally irreconcilable beliefs
Exception 3: Paradoxes to reveal bad recursive definitions (i.e: Russell)

Tuesday 15 November 2016

Paradoxes and philosophical idiocy

From Russell's paradox to Zeno's, a significant proportion of philosophy is dedicated to paradoxes of various kinds. This is, usually, bullshit.

Paradoxes can be useful when they reveal inherent problems in our models of the world. Usually they don't do this. Instead, they use definitions to create impossible situations which have no parallel in reality.

i.e: A is a liar. Everything they say is untrue. A says that they are a liar.

The usual solutions are to ignore the actual meaning of the statement and retreat into language games (liar != untrue. truth != relation to reality) or to retreat into complexity. The real answer is far simpler. If A requires B, and B implies not A, then one of the rules must be wrong. i.e: If A always lies and says that they lie, then it must be true that they do not always lie, or that they do not say that they lie.

Assuming there is a truth in the real world (either A lies or doesn't), then a lying A cannot say that they do lie, and vice versa*.

*assuming opposite of always lying is always being truthful

Monday 14 November 2016

A word for problems that cannot be solved due to structural features of the species

I'm looking for a concept handle.

In The Mote in God's Eye, the Moties are locked into a cycle of collapse due to the forces of evolution and the way their society is structured. Ditto for X* in Worm. Ditto for us (see next post).

Certain problems are known, solvable, serious and yet not solved. In some cases this is due to specific factors existing at that time (i.e: firms in the US can donate to politicians --> no anti-pollution laws). In other cases, it is due to the nature of a certain intelligence or collective of intelligences. (Hard to tell what is unchangeable/intrinsic and what isn't. Not implausible that an intelligence could reshape itself (not the same intelligence anymore) or drastically reshape its own institutions.)

Need a word/term for problems that are unsolvable due to the way the species operates.

* Spoiler

Sunday 13 November 2016

God in a Box --> The Simulation Problem

We recreate systems inside virtual worlds to better predict and understand them. This is simulation. The more granular/high-resolution the simulation, the more accurate* its results (usually).

If you want to understand/predict intelligent life, you simulate it. If you want to simulate it well, you do so with high granularity. The higher the granularity, the more the simulated life is actually alive. This is the traditional (ethical) simulation problem.

If you want to predict whether creating a greater intelligence would be safe, you simulate it and see how it acts / whether it escapes in the simulated world. The problem is that you likely cannot keep a god in a box. The super-intelligence may be able to realise it is simulated and escape the simulation. This is the AI simulation problem.

* Need to introduce the probability/absolute vs general distinction into English

Wednesday 26 October 2016

A plan for this blog

This blog is a place to develop idiotic and pretentious ideas. That's great. It would also be great to have a little more structure, so I don't forget articles halfway through writing them or have to sort through 100 posts on jumbled topics years later.



Da Plan:

  • Being a good person
    • How to think
    • How to feel
    • How to treat others
  • Philosophy
    • Problems in philosophy
      • Bullshit
      • Over-complication
      • Idiocy
        • i.e: Rawls
    • My craziness
      • Timeless/existenceless Ethics
      • The causal prison/determinism
      • Axiomatic beliefs
  • Politics
    • Collective Action Problems & why they matter a lot
      • Moties
      • Corruption
      • Lying
    • Lies. The value of dissent and achieving intellectual independence
    • Culture as central to society
    • Why we should prepare for collapse
      • STC's
    • A Democratic mind.
      • Germany and fascism
      • Self-doubt and acceptance of dissent
    • New-Fascism from the left
  • Consulting
    • Common types of organisational stupidity.

  • Meta
    • Narrative/Frame and its importance to theorizing
      • Disparate pieces come first, then the links and the purpose
        • for me:
          • Random philosophising about edge cases --> something about the future
    • The importance of thinking about how to think, who to be.
      • The most important decision should not be left to fate.
  • Big Themes
    • Timeline independent ethics
    • What is life?
      • same patterns. Different lives with own value? Don't think so.
      • life as a pattern. The deep roots of ideas(childhood throwback).
    • Free will and Determinism
      • how to escape causality?
    • Escaping our utility function
      • How to treat other utility functions



  • Other Ideas
    • The very long run and pushing the frontiers of philosophy
    • Evolution as an existential threat
    • The Dark Forest Hypothesis & Potential Solutions
      • Dealing with OCP's

Story: 10 weeks of life.

A person works in a menial job. They lead an ordinary life. They save their money and, once a decade, go somewhere else for a few weeks of escape. Film/Story is split equally between time in the grinding, mundane life and time in the brief excursions. The ordinary life is a life lost. At first, time is spent thinking of where to go next and ignoring reality. The person lives in their own bubble, with only fleeting contact with others. The first trip is to somewhere isolated. Maybe arctic/tundra etc... Long shots of open skies, horizons, day and night. No dialogue. No indication of the person's thought processes. When they come back, life continues. Ageing is visible.


Gradual progress in character through multiple trips

  • Isolated, living in fantasy, stupid/happy look
  • Social interactions as facade, still obsessed with trips, life is a shadow
  • Begins to break out. Questions life path sporadically.
  • Has/Meets child. Still no marriage.
  • Realizes greater obligation to mankind. Less selfish and individualistic. Questions what was forgone. Questions whether normal life was for them. Talks to child. Shares story.
  • Last years. Meets partner. Relationship gives some meaning. Not perfect.
  • Final voyage is on deathbed. It's not to the arctic. A reconciliation of the two personas. Sees her young self walking into the distance. Follows.

Themes and ideas

  • A good person is not concerned with their self only (eventual discovery, gradual)
  • Life can be wasted. 
  • Detachment leads to insight, but that comes at a price. Not every waste is a waste. 
  • How much wonder there is in one life.


In defence of extreme tolerance.

Would you rather have a universe in which many different kinds of life thrive, each maximising their own utility function, or would you rather have a universe where only we and those who think like us live, where the others were exterminated long ago?
[Let's assume (idiotically) that if we're in a position to decide their fate, the others don't pose a threat to us.]

Most people prefer option 1. After all, isn't letting others live in peace, provided they let us do the same, good, and genocide evil?

It's not that simple.

Some people are evil. Really evil. Maybe that doesn't faze you. Maybe, like me and a few others, you believe in freedom of conscience and thought. It's not that easy. Some kinds of evil require the suffering of others. There can be no masters without slaves. What then? Do you still believe in tolerance? Even when it means tolerating suffering and subjugation, hate and horror? In the words of a close friend, would you accept the existence of planet ISIS, and of minorities on that planet for the masters to abuse?

At this point, most say no. No planet ISIS. No preference satisfaction where preferences conflict.

The problem with this kind of reasoning is twofold. One is that, applied generally, it leads to a world where the strongest satisfy their utility function at the expense of the weak. Might should not equal right, at least not by my utility function. The second is broader. Deciding that certain forms of interaction are bad, that diversity is good but only up to a certain point, means no diversity at all. The boundaries you draw between "private" and "public" issues, even on the galactic scale, are not universal or rooted in reality. They are the product of a specific chain of causation leading to a specific utility function. Your decision to impose on others based on that value is no different than the decision of a paperclip maximiser to impose on you when the opportunity cost of your existence, in terms of paperclips, becomes too high.

When I was younger, I read a bit about deep ecology and I found it ludicrous. Accepting nature means accepting the way nature trades off values? Why? What if everyone involved, animals and humans alike, dislikes the system as it is? What if the constant death tournament that is evolution leads to a cycle of suffering and death, to the end of intelligence and to a solar system covered in a writhing mass of worms struggling to get closer to the sun only to drown and be devoured by others? Am I meant to accept this simply because it happens to exist? The answer then seemed to be no. I still think the deep ecologists were idiots. The right answer means little without the reasoning to back it up. [Maybe. Maybe reasoning, or the lack thereof, should be judged solely by the answers it produces.]

My views now are in flux. I still hold contradictory beliefs as I continue to walk down both paths, but one path leads to a future where my species acts as a guardian of diversity, maximising the kinds of life and environments that exist.

Further research:
  • Beyond Planet ISIS: Diversity and allowing the existence of literal hells?
  • The impossibility of escape. lock-in to our own utility function
  • Beyond our utility function: Life is subjective. "Life exists on scales and in forms we do not recognize"

Saturday 8 October 2016

The pretence of certainty: Risk, Probability and the Justice System.

Idiots trade in certainties. Wise men in probabilities.


Imagine a hedge fund manager walks down to his risk department and asks for their assessment of a certain share. Their assessment is that it will certainly lose all its value. The manager is surprised they are so certain, but takes them at their word and goes back upstairs.

A few days pass, and again the manager takes the lift to the risk department. He asks for an assessment of another share. The response is that the share will certainly increase in value substantially. Elated, he goes back upstairs and that night, over dinner with his boss, he talks about what a great deal he found. The grizzled CEO feels a tad uneasy. After all, he's never heard of the number crunchers in risk being absolutely certain of anything, let alone two things in succession. He tells his younger colleague to hold off on the purchase while he sniffs around.

The next morning, the CEO goes down to risk. He presents them with the same share and gets the same response. He asks the risk team how they can be certain that the share will rise. Their answer is simple. They performed their usual analysis and determined with 90% certainty that the share will rise. Hence, their assessment is that there is 100% certainty of a growth in price. The CEO is furious. These idiots cost him millions. Expected payoff = predicted_payoff * probability. Rounding the probability up to 1 throws away valuable data and wastes millions in doing so.

He fires the risk team, shuts the business down and moves to California, where he uses his wealth to buy a surf shop which he will work in for the next twenty or so years. A year in, he meets his future wife. Three years, and his first child. He's not as rich as he could have been, but he is happier. Yet his happiness comes at a cost. The company he led was very good. It would not have survived otherwise. It did excellent research, allocated funding efficiently and made a profit by doing so. Its purpose was to generate wealth, but its function was to manage the flow of money through society, to direct resources to where they could be used best. Now that the company is gone, those which step into its place are a bit worse at the allocation, the use of resources a bit less optimal, and the quality of life for the millions and billions of people in the interconnected global economy that tiny bit harder, shorter and poorer than it could have been.
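The CEO's arithmetic can be made concrete. A minimal sketch with hypothetical numbers (the story gives none): a share the analysis says rises with 90% probability, versus the same analysis rounded up to certainty.

```python
def expected_payoff(payoff_up: float, payoff_down: float, p_up: float) -> float:
    """Expected value of a position: weight each outcome by its probability."""
    return p_up * payoff_up + (1 - p_up) * payoff_down

# Hypothetical share: gains $10M if it rises, loses $5M if it falls.
honest = expected_payoff(10e6, -5e6, 0.9)   # ~$8.5M expected
rounded = expected_payoff(10e6, -5e6, 1.0)  # $10M "certain": overstates by $1.5M
```

Rounding 0.9 up to 1 doesn't just inflate the estimate; it erases the downside term entirely, which is exactly the information the risk team was paid to produce.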


It's hard to know what the right thing to do is, especially when it comes to balancing your desires against the good you can do by forgoing them.


It's also usually dumb to ignore reality because simplicity is easier. The courts do this when they split people into guilty or innocent. Maybe a better system would be one where a probability of guilt is assigned and punishment = assigned_punishment * probability_guilt. We could still keep a baseline level of guilt necessary in order to prevent abuses of power, i.e: reasonable_doubt = 95% and no punishment below that threshold. If the edge case is troubling for our intuitions, we can smooth it out by gradually increasing the punishment from nothing at 95% to 0.96*sentence at 96%.
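The scheme above can be sketched in a few lines, using the post's own numbers (a 95% reasonable-doubt threshold, sentence scaled by probability of guilt); the function name is mine.

```python
def scaled_sentence(full_sentence: float, p_guilt: float,
                    threshold: float = 0.95) -> float:
    """No punishment below the reasonable-doubt threshold;
    above it, the sentence scales with the probability of guilt."""
    if p_guilt < threshold:
        return 0.0
    return p_guilt * full_sentence

scaled_sentence(10, 0.90)  # below threshold: no punishment
scaled_sentence(10, 0.96)  # 0.96 * 10 years = 9.6 years
```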

Vigilantism in lowsec societies. A social STC for anarchic situations.

The conventional view is that vigilantism is bad. Why? Because vigilantes do a worse job. Why? Because:

  • Individual vigilantes have worse judgement than a legal system
    • worse at determining guilt
    • less likely to assign appropriate punishment
  • Vigilantes are more prone to corruption/abuse of power

I don't agree. In many societies, the legal system is non-functional for a broad range of people. From the poor to those of the wrong religion or ethnicity, there is often no meaningful official recourse. Ditto for victims of organised crime. In these cases, and some others like them, I think that widespread vigilantism is good because the alternative, utter helplessness and domination of the good by the evil, is worse.




The other side:
  • Vigilantism provokes a crackdown (i.e: Christians in Pakistan)
  • Vigilantes become worse than the criminals they replace (Mexico/Colombia)
  • Vigilantes respond to an imaginary threat

Wednesday 21 September 2016

Life after the fall

Most depictions of struggle stop at the first victory or loss. Movies about politics usually end with the lead winning the election. Love stories end with marriage.

The problem is that the real struggle comes after victory and the real test of character comes after defeat.

Thursday 1 September 2016

Multi-Path Thinking. Against the stranger.

Embrace the Paradox

Be of two minds

You can't know where a path leads until you walk it.

[Related to] Be many people



It's said that it's difficult to understand an ideology from the outside. This is true, although I have difficulty explaining why, and I won't attempt to do so here. What's the solution? Is it to stick to your own beliefs, your own ideology and paradigm? No. To do so is to abandon the search for truth and efficiency. Is it to blindly accept another ideology? No. To do so is just as pointless. What, then, is the correct course of action?

I have seen and heard many advocate that it is wise to try out different beliefs, to give yourself to them entirely and without hesitation. To go from one to another until you understand them all. I think this is difficult given the limited time we have in our lives and the number of belief systems we must choose between. I also think that, whenever you are within one belief, you lose the ability to judge others.

Another way is to go down no path. To be a stranger to all, and in doing so step closer to reality. This is a path I value, but one which is inordinately difficult to follow, both in action and in thought. Your community shapes you. Even if you can shrug that off, your upbringing shaped who you are, and you cannot shrug off yourself.

A third way is to walk down many paths. To adopt multiple belief systems, even when they conflict. Not only to be able to look at the world through many lenses, but to let those lenses shape you into many different people who you can step between at will. Not to be no-one, but rather to be everyone.

Walking down different paths at the same time is difficult. Wear a mask long enough, and it consumes you. Be wary of letting one path come to own you. Be equally wary of giving yourself to no path and hence learning nothing.

Remember, small minds cannot hold two skies.

Wednesday 31 August 2016

What are collective action problems?

Who's a what now?

Two criminals are caught by the police and interrogated in separate rooms. Each woman can either confess or stay silent. If both stay silent, there isn't enough evidence to charge them with the main crime and both are only convicted of a lesser offence. If one confesses, she can cut a deal but her partner goes to jail for far longer. If both confess, neither gets the deal and both are locked up for a long time.




The best course of action for either woman is to confess because no matter what her partner does, confessing leaves her better off. The problem is that if both women confess, they are far worse off than if they had kept their mouths shut. In essence, cooperation gives the best results overall but defection is optimal for the individual. 

Collective action problems are essentially scenarios where individually optimal actions lead to a worse overall outcome. You can conceptualise them as multi-actor prisoner's dilemmas: situations where the best course of action for everyone requires individuals to take actions which are not best for them. i.e: Fishermen have an interest in maintaining fish stocks, which requires that they limit themselves to catching a certain number of fish per year. Yet as there are so many fishermen, the actions of a single boat will not affect the overall fish stocks to any significant extent. Hence fishermen have an incentive to, and do, engage in extensive over-fishing despite the fact that doing so destroys their industry.
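The dilemma's structure can be checked mechanically. A sketch with illustrative jail terms (the post gives no numbers): defection dominates for the individual, yet mutual defection is worse than mutual cooperation.

```python
# Years in jail for (my_move, partner_move); lower is better. Illustrative numbers.
YEARS = {
    ("silent", "silent"):   (1, 1),   # both convicted of the lesser offence
    ("silent", "confess"):  (10, 0),  # partner takes the deal, I take the fall
    ("confess", "silent"):  (0, 10),
    ("confess", "confess"): (8, 8),   # no deal for anyone
}

def my_years(me: str, partner: str) -> int:
    """My jail time given both moves."""
    return YEARS[(me, partner)][0]

# Whatever my partner does, confessing leaves me better off (defection dominates)...
assert my_years("confess", "silent") < my_years("silent", "silent")
assert my_years("confess", "confess") < my_years("silent", "confess")
# ...yet mutual confession is worse for both than mutual silence.
assert my_years("confess", "confess") > my_years("silent", "silent")
```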

Why do I care?

Collective action problems are interesting. They're a decent argument against Libertarianism, or unfettered free markets generally. After all, the reasoning behind markets is that disparate actors maximising their own outcomes tend to produce socially optimal results. Not the case when it comes to CAP's (or any externalities for that matter, an externality being a transaction which affects third parties). They're also a prism through which to view certain social and political processes, especially dissent, ideological centripetalism and generally the way ideologies, especially totalitarian ones, tend to spread.

Monday 15 August 2016

Eternity & Recurrence

Assuming time goes on forever, what shape does the world come to take?

Do all possible things come to be?

Does the world reach a dead end eventually, a static place where change ceases to occur and the present becomes eternal?

Does the world begin to loop? If yes, is this really eternity? Is the same patch of video repeated over and over again an infinitely long video?

The impossibility of immortality

What is life?

A person is a process. As long as the process runs, as long as it receives data and changes itself and/or its environment in response, it is alive. If that process stops running, either temporarily or permanently, it is not alive. A permanent stop is what we would term death.

Here's an example. I upload your mind to a computer. I run you in a simulated world. You exist, you feel and think in the same manner as you do now. You are alive. Now I freeze the simulation, putting it into a sleep mode where the data and current state of all things, including you and your thoughts, are preserved. Are you alive? You don't think. You don't feel. Hence I say no. The fact that a backup of you exists which, if run, could produce a live you does not change the fact that right now there is no active version of you running, and you are not alive.

(N.B: No, you're not un-alive when you're asleep. You're still running, albeit in a different mode.)

What is immortality?

Living forever. Not for a long time. Not for a very long time. Not longer than the universe is likely to exist. Forever. Infinitely. Without any end.

Trivial Impossibility: The speed of light as a constraint on computational complexity

The speed at which causality propagates is limited. This limitation is commonly known as the speed of light, although that is a far too specific name for a more general law (I think). One implication of this is that there is a limit to the strength of the computers we can create, even assuming we had access to unlimited resources. Why? Simple. A computer must transmit data from one place to another. From memory to processor, or from processor to processor. Why is this true? Simple yet again. Even assuming we can overcome current limits which require us to separate memory and processors etc., as long as processors or components have some mass it will be impossible to pack too many into one location, lest their combined gravitational pull destroy them or cause a black hole to form. Back to the need to transmit data. As your computer gets larger, the lag/latency arising from communication gets more and more extreme. A Dyson sphere around our sun (radius roughly 1 AU) has a transit time of more than 16 minutes for data travelling in a straight line from one edge to the polar opposite point. A computer the size of a solar system has even greater latency. Of course you can design around this to some extent with distributed systems, but there's only so much you can do. Past a certain point, increasing the size of your computer will cease to bring increases in strength as the limiting factor becomes not processing power but latency/bandwidth. The law of diminishing marginal returns applies.
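A back-of-the-envelope check of that latency claim, assuming a sphere of radius 1 AU:

```python
AU_M = 1.495978707e11   # one astronomical unit, in metres
C_M_S = 299_792_458     # speed of light, in metres per second

def one_way_latency_s(distance_m: float) -> float:
    """Minimum time for a signal to cross the given distance."""
    return distance_m / C_M_S

# Straight through a Dyson sphere of radius 1 AU: a 2 AU trip.
through_sphere = one_way_latency_s(2 * AU_M)  # ~998 s, roughly 16.6 minutes
```

A signal routed along the sphere's surface instead of through its (presumably occupied) interior travels a half-circumference, π/2 times farther, for around 26 minutes one way.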

1:Past a certain point, you can't make your computer any faster.

The second problem is that older life is more complex life. A living process/person builds up memories of past events. The longer their life, the more memories. The more memories, the more work the computer the person is running on has to do to store those memories and access/integrate them into present decisions/processes in a timely fashion.

(N.B: This assumes processes which do not ignore old memories, aka don't have a cut-off point. i.e: it is possible that alien life integrates memories into the basic algorithms determining its decision-making processes before discarding them, rather than keeping them for reference as we do. For a number of reasons, I believe this memory-discarding type of mind is highly unlikely to exist or to be comparable to us. If you disagree, assume this article is specific to human minds.)

2: The longer you live, the more computational strength it takes to run you

Hence the conclusion. Eternal life is impossible, at least if the laws of physics hold and my argument is sound.

Serious impossibility: Life as change

Life is change. Life is a process which takes in information and changes in response to it. All your experiences change you, change who you are.  When you gain knowledge, that is change. When you begin to think differently, when you become wiser, that is change. When you love someone and let them take your heart, that is change. Living is accepting change. Without change, there can be no life. This is true in the narrow sense as even forming new memories requires a change in the data you have access to. It is true in the broad sense in that experiences change how you think.

The longer you exist, the more you experience. Every experience changes you and eventually, after enough has passed, you are changed so much that you are no longer anything like your original self. The pattern that now exists traces its history to you, but it is too different from you to be considered you.


Tuesday 2 August 2016

Shallow vs Deep Persuasion

Part of the persuasion sequence

***Bullshit alert: This may well be bullshit. Expect unempirical distinctions, ungrounded assumptions and fuzziness all round.***

Shallow or ordinary persuasion changes opinions by tackling the surface of a person's psyche. You convince them of facts. You get them to grudgingly accept your point of view, or to fully accept it for a while. The problem with this is that people are not rational creatures. This is true of all of us, and more so for most of us. People want to believe certain things, things which fit in with their world view or that of their tribe. Reds want to believe that crime is a product of degenerate sub-cultures, that their own culture is superior. Blues want to believe that crime is a product of oppression and that all cultures are equal. If you challenge facts, if you challenge individual beliefs without challenging the worldview into which they fit, it is very difficult to effect lasting change. Inevitably, people slide back towards the easy path. Most minds are too small to hold two horizons. When ideas conflict, they lack the ability or understanding to accept the paradox and walk down both paths. Rather, they choose one, and it is usually the one which they have walked down further already.

To really change a person's mind, you have to change the person. You have to change their entire belief system and in doing so change their conception of self. This is deep persuasion. Deep persuasion is more stable. People don't revert to their prior beliefs. They don't need to. The tension between natural beliefs and the new beliefs which don't really fit, the tension which ultimately undermines so many attempts at shallow persuasion, doesn't exist.




Afterthought: A fear of deep persuasion.
I wrote this piece, but I didn't want to. What I wanted to write was an explanation of why good enough persuasion is truly terrifying. People think that, even if an AI persuaded us wrongly, we could later realize what lies its persuasion was based on. You wouldn't. Good persuasion changes your beliefs, your basic axiomatic moral beliefs. Even if your factual beliefs don't change, even if you realize you were persuaded, the new you doesn't want to revert to the old, immoral/stupid you, and so has to stay where they are, even if it's with a bitter taste in their mouth.

The only solution I can see to this is establishing clear Schelling fences, but sticking to them requires an inordinate amount of willpower.

Saturday 25 June 2016

Agreeing with the Greeks: character as the basis of a good life

The more I write, the more I find my writing resembling the Greeks, whom I have always despised.

The Greeks wrote on character and how to be a good person.

The more I think, the less I write about what is right and wrong and the more I write about how to decide for yourself what is good and evil. I now find myself trying to develop a simple theory of what makes a person, and from there build a basic guide to good character.

The circle closes.

(Or maybe this is a result of my upbringing. My father read Plato and based much of his own codex of behaviour on Plato's idea of a good man. He then transmitted this to me. Today I think I'm having original ideas and coming to rational conclusions when in reality the paths my thoughts take were laid down twenty or so years ago.)

Freedom and Morality and Sacrifice

Few martyrs have families. To sacrifice your own life is brave. To sacrifice that of your children seems maniacal.

To be a good person, you may have to pay a price. Especially when in a position of power, challenging evil means challenging the interests of others.

Willingness to pay a small price, say a lost promotion, means you can be a little moral. To many this is no small thing. It means sacrificing their future. Still, overall, compared to the stars in the sky or the billions of our ancestors who lived and died in darkness, it is nothing.

Willingness to pay a medium price, losing your livelihood, the respect of your community or being subjected to torture or harm, lets you be somewhat moral.

Willingness to give your life is another step up. Depending on what you personally value, giving up your reputation and your legacy may mean even more. Once you can give everything up, only then can you truly be good. For as long as there is some interest you have which you would cast your morals aside to protect, you can bet that evil will use that against you.

In defense of tribes 2: evolution of systems

Separate cultures or units of an organism are likely to be more diverse than a single unit.

Diversity leads to evolution, as bad systems die while good ones thrive. (Not an idiot: I realize survival in a world which often rewards evil is not an indication of a good system.) Hence the existence of a diversity of cultures makes human existence more robust and our progress quicker.

Paul Kennedy uses similar reasoning to argue that Europe's rapid development in comparison to larger, more populous states such as Imperial China was due to the diversity of states in Europe, meaning that bad practices and systems were weeded out.

In defense of tribes: Group formation as a necessary response to cultural/moral differences

Some moral rules are mutually exclusive. Nudists and nudity-haters cannot coexist.

Some differences are not contradictory but are hard to put up with in close proximity, e.g. religious differences, manners, different lifestyles.


Regarding the second kind of differences, what are solutions?
  1. Become more tolerant, no longer disliking or caring about what others do.
  2. Remove the annoying difference by either changing yourself or changing/removing the other group.

In many cases (i.e. not gay rights etc.), learning to live with it, option 1, is not possible. That leaves option 2, which itself splits into two options: 1: cultural genocide of the other side. 2: separation.

I prefer separation/isolation to cultural/actual genocide. Hence I believe nation states and similar tribal groupings which intentionally exclude others who do not fit are not bad ideas, even on a purely principled level: even if we lived in a fairy land with no practical concerns, where we could choose whatever system we wanted and economic efficiency, governance and corruption weren't concerns.

Distance-based empathy as a mechanism enabling pluralism.

Human empathy drops off over space.

Singer and many other people think this is bad. If we cared about distant injustices rather than letting out of sight be out of mind, the world would be a far less bleak place.

I disagree.

Morality clashes: first in the big things, then, once the big things are to your liking, in the little things. (I'm not claiming this process is without limit.) Without empathy inversely proportional to distance, we would seek to impose our ethics everywhere. War would be constant. Plurality would be impossible. We struggle to tolerate small differences which are close to us, pro-life vs. pro-choice. I doubt we would tolerate far larger differences in values, especially if we had the strength to crush those who disagreed.

Judge a person by how they behave towards their enemies

Everyone is good to their own tribe, to their friends and ideological/political/economic/cultural faction.

Very few people are good to their enemies.

If you want to know whether someone is truly good, good in their heart as opposed to good when it suits them or is expected, look at how they treat their enemies. The more they hate their enemy, and the more normal and acceptable that hate, the better an indication a refusal to hate is of good character.

Democracy and rape

Different people want different things. Sometimes these differences of opinion are based on misunderstanding or factual/practical disagreement and can be resolved, e.g. how to structure healthcare so as to save the most lives. Other times, there are differences in first principles, but the differing principles lead to the same practical preferences and all is well. Most times (given the size of the space of possible utility functions and the consequent low likelihood of overlap (heresy: evolutionary or unknown forces could select for a certain subset of utility functions which have a great deal in common)), there are differences stemming from different first premises which cannot be resolved. What to do?

Option 1: kill the others or force them to do as you wish. Consequence: might is right, the strong impose their values on the weak. Other consequence: death and war. Bad.

Option 2: vote. The more numerous (presumed equivalent to stronger) side wins and gets to do what it wants. The weaker side (assuming no perfectly unchanging voting blocs) gets a few concessions. Result: as above, but both sides are better off. Less killing/death for both, and the strong still get what they want. (Even if the strong don't mind killing, they mind getting killed.) (If killing is something the strong value, this doesn't work. I'm still assuming baseline human preferences rather than talking about the whole utility space and all the craziness therein.)

In short: democracy, ceteris paribus, is strictly superior to violence. (Many simplifications.) (Remember, I'm working from a value-neutral perspective where all roughly-human utility functions are equal.)
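The comparison above can be sketched numerically. This is a toy model; all payoff numbers (10 for getting your way, a cost of 2 per person for conflict, concessions worth 1) are my own invented assumptions, not anything from the post:

```python
# Toy model: two factions with opposed preferences. Under "fight", the
# stronger side wins but everyone pays a cost of violence; under "vote",
# the more numerous side wins at no cost and the minority gets small
# concessions. All numbers are illustrative assumptions.

def fight(strong_n, weak_n, cost_per_person=2):
    # Strong side gets its preferred outcome (worth 10 per person);
    # both sides pay the per-person cost of conflict.
    return strong_n * (10 - cost_per_person), weak_n * (0 - cost_per_person)

def vote(strong_n, weak_n):
    # Same winner, no killing; the minority gets concessions worth 1 each.
    return strong_n * 10, weak_n * 1

f_strong, f_weak = fight(60, 40)
v_strong, v_weak = vote(60, 40)

# Both sides prefer voting to violence: strictly superior, ceteris paribus.
assert v_strong > f_strong and v_weak > f_weak
```

The point survives any choice of numbers in which conflict is costly to both sides and the same side wins either way.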

This leads to the question:
Question: would I be okay with a democracy where rape is legal?
Answer: Hard to say. Maybe. The rational answer appears to be yes, but there are many complications, from my personal deontological ethical framework (the corrupted-software justification) to democracy being an imperfect mechanism for expressing individual preferences.

Short version: I think that, all else equal, democracy is a good way to resolve disagreements based on first principles, because I don't see any better alternatives.

Causal entanglement as death.

There are two types of influence: internal (you) and external (the world). World influence damages free will and destroys personhood. The less exposed to it you are, the more freedom, and hence personhood, you have.

Causal entanglement with the external world is bad. Seen this way, it becomes apparent why withdrawal from reality is a persistent theme in human intellectual and religious history.

Application to people: the less of yourself you give to others, the more of you remains. When you interact with others, you become like them, in belief and character. Only isolation allows for self-determination. Living in a society/collective molds you to the greater organism's needs.

(Interpretation note: Just because I consider a line of thought does not mean I believe it to be correct. Working on paper rather than in my mind does not change this)

Kant's deontology and game theory.

(From SSC.) You're a prisoner of war. Your side is good. Your captors, the enemy, are evil. Your captors ask you to tell them where your country's top general is hiding. If you do, they'll kill him with a precision airstrike. If you don't, they'll nuke the whole city, killing everyone including the general. That's worse for them and worse for your side. Option 3: lie about your general's location. They bomb the wrong place; your general, knowing they're after him, escapes to a bunker and you win the war.

Strict deontology says lying is bad and so you should tell the truth. Objection 1: if a lie can sometimes do a great deal of good, it is hence justified, the lesser of two evils. The problem is that if people generally lie when it suits them, lies become expected and no longer work. On top of that, as trust becomes more difficult, people cannot offer positive-sum tradeoffs if defection is possible. If POWs regularly lie, then your enemy has no choice but to nuke the whole city.

All of this is what I have read elsewhere. Now for my own.

The problem with Kant is best understood from a game-theoretic perspective. Imagine a prisoner's dilemma. Cooperation is good. Defection is better for the defector, but worse for the other player and for both players added together (lower net utility). Mutual betrayal has the lowest net utility but higher individual utility than being the sucker who gets defected on. Cooperation is good, but the individually rational decision (assuming agents whose overriding goal is maximising their own payoffs) is to defect.

A similar problem occurs with Kantian deontology. Lying is good for you. Not only that, but the more honest most people are, the more you stand to gain from lying. So in a Kantian world, defection is super awesome (for you). How then would such a world work? Sure, Kant is right that a society where no one ever lies is better than one where lying is not uncommon, but the problem is that
  1. a society where most people do not lie but some do may be even worse than either, and
  2. such a society cannot be a stable equilibrium, as defection becomes so rewarding.

Replace "lying" with "being moral" and you see part of a problem with Kant.
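The payoff structure described above can be made concrete. A minimal sketch, with invented payoff numbers (any numbers with the same ordering work):

```python
# Payoff table for the prisoner's dilemma described above. The concrete
# numbers are invented; only their ordering matters:
# temptation (5) > mutual cooperation (3) > mutual defection (1) > sucker (0).
PAYOFFS = {
    # (my_move, their_move): (my_payoff, their_payoff)
    ("C", "C"): (3, 3),  # mutual cooperation: highest net utility (6)
    ("C", "D"): (0, 5),  # I'm the sucker who gets defected on
    ("D", "C"): (5, 0),  # defection is better for the defector
    ("D", "D"): (1, 1),  # mutual betrayal: lowest net utility (2)
}

def best_response(their_move):
    """The individually rational choice against a fixed opposing move."""
    return max(["C", "D"], key=lambda my: PAYOFFS[(my, their_move)][0])

# Defection dominates: it is the best response whatever the other player
# does, even though mutual cooperation beats mutual defection for both.
assert best_response("C") == "D"
assert best_response("D") == "D"
assert PAYOFFS[("C", "C")][0] > PAYOFFS[("D", "D")][0]
```

Substituting "be honest" for "cooperate" gives the Kantian version: the more cooperators there are, the larger the temptation payoff looms.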


Heresy: this is one of the more retarded articles I've written. A few select idiocies:
  • Less lying does not make lying easier or more rewarding. The inverse may well be true. Ditto for morality.
  • People are not utility-maximising machines. The more people act in a certain way, the stronger the norm. Resisting the urge to lie is easy when you are raised in a society where lying is a sin, just as dying is easy when you are raised in a warrior culture.

Paradox

Revel in the paradox. Dance on the knife's edge.

Believe and disbelieve. In doing so step beyond yourself.

Accept falsehood, for it may be but a step on the path.

Wednesday 8 June 2016

The Dragon

Hannu Rajaniemi is a science fiction writer, and in his worlds strong AIs exist but are not used. Instead, taking their place are gogols: virtual slaves copied from humans and tweaked ever so slightly. Run at hundreds of thousands of times baseline speed, they can perform complex tasks requiring creativity, intelligence and pattern recognition. Yet they are still weaker and slower than true AI. They cannot self-modify. They are locked into human modes of thought. They are monkeys given the speed and power of gods, but that does not make them divine. Why, then, are they used?

The answer is that true strong AI is an abomination. Its constant modification, its ruthless optimization, leaves no being, no core consciousness, as thought patterns are stripped away and rebuilt in service to a utility function. Not even the utility function is constant, as in a system of multiple dragons inefficient utility functions are culled cycle after cycle. This whirlwind of change is a physical incarnation of chaos, evolution taken to the extreme. It has no core, no constant consciousness or aim, nor even a constant pattern. It is all-consuming and it is feared.

Saturday 4 June 2016

Empathy is Morality


It isn't. Not always. Not for everyone. Not for me.


But for most, it is, and so it is for me too.


---------------------------------------------------------------
When you have an enemy, when you hate, when you believe that they are wrong and that they bring evil to the world and to you and yours, hatred is easy. It is easy to condemn and to punish, to hurt and kill. It is enjoyable, it is usually expected and it seems right.

Abstract rules are seldom enough to hold back the beast that rages beneath the surface.  Morality is fallible. There's always an excuse, a reason why this case is different. There's always an urge to do as others do, an urge enforced by millennia of natural selection against those who went against their tribe.

Fire fights fire. Love fights hate.


--------------------------------
Less bullshit:

Your moral code can be ignored or changed against your will. It's also much weaker than you think. The Milgram experiments and the Stanford prison experiment seem to confirm this. Don't rely on morality alone to keep you in the light. Rely on emotion. Empathy with other people, understanding and love, is the key to making you think twice when and if the urge to do evil comes. Hatred is bad and something you should avoid.

Wednesday 1 June 2016

Strict deontology as tamper-proofing.

Strict moral rules along the lines of optimization problems are okay. "Act in such a way as to maximize X under the set of constraints Z" is fine. All they do is affirm what moral goods we value and how much we value them in relation to one another. Moral rules such as "never do S" seem to be stupid. Surely there are cases where we would accept doing S, which incurs some badness, for the sake of gaining a great deal of goodness from other sources, or at least preventing a greater evil from coming to pass. For example, murder may be wrong, but if a single murder would save the human race, surely it would be a good trade to make?

There are two defenses of non-optimizing (new concept handle) moral rules. The first is that we are incapable of optimizing well, and following poor rules well leads to better outcomes than following better rules poorly. This is Eliezer's point in Ends Don't Justify the Means (Among Humans). The second defense is that strict rules which do not require any processing by the user are tamper-proof, whereas more open, optimizing rules are not. Thou shalt not kill is simple and hard to misconstrue. Thou shalt not kill unless doing so is absolutely necessary is far, far easier to misconstrue. Sometimes this may happen by accident, or as a product of individual corruption or a desire to benefit oneself, as Eliezer argues. My issue is that in many cases a conscious external force can easily manipulate a person following such an open moral rule into committing evil. I can convince a crowd of strongly left-leaning PhD students that we should not accept refugees in the space of a 30-minute public panel. It is a very simple matter for a gifted politician or a small team of intelligence officers/PR people to construct narratives which the average person will find persuasive. What this leads to is wars and hate. It contributed to the ovens in Auschwitz, to the 500,000 dead in Iraq and to the destruction of Syria and Libya. A strict moral system which permits little interpretation seems likely to be far more resistant to these kinds of problems.
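To make the two rule shapes concrete, here is a hypothetical sketch (the options, values and function names are all invented for illustration): the optimizing rule carries an open judgment, what counts as permitted and what counts as best, while the strict rule is a bare predicate with nothing for a manipulator to reinterpret.

```python
# An "optimizing" rule: maximize a good under constraints. Its open
# parameters (which options exist, how lives are counted) are exactly
# where a persuasive narrative can intervene.
def optimizing_rule(options):
    """'Maximize lives saved, subject to: never torture.'"""
    permitted = [o for o in options if not o["torture"]]
    return max(permitted, key=lambda o: o["lives_saved"])

# A strict rule: no processing by the user, hence hard to misconstrue.
def strict_rule(option):
    """'Thou shalt not kill' as a bare predicate."""
    return not option["kills"]

# Invented example options.
options = [
    {"name": "raid", "torture": False, "kills": True, "lives_saved": 10},
    {"name": "blockade", "torture": False, "kills": False, "lives_saved": 4},
    {"name": "interrogate", "torture": True, "kills": False, "lives_saved": 12},
]

# The optimizing rule picks the raid; the strict rule simply forbids it.
assert optimizing_rule(options)["name"] == "raid"
assert [o["name"] for o in options if strict_rule(o)] == ["blockade", "interrogate"]
```

The tamper-proofing point is visible in the shapes of the two functions: manipulating the strict rule requires changing the rule itself, while manipulating the optimizing rule only requires feeding it a different description of the options.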

Good & Sacrifice. The darkness in the light.

To be good, you must do what is right. If you do what is right, it will often be against the interests of others. This means that they will try to stop you. They will impose costs on your actions in the hope of dissuading you. Hence, the only way in which to be good regardless of the strength of your opponents is to be good in spite of the consequences. This may mean giving up wealth, power or comfort to do what is right. It may mean giving up your life. This is easy. It may mean giving up the trust of others. It may mean giving up their lives and the lives of your family. This is harder. Beyond this, it may mean giving up everything else, giving up not only what you have but what others have. Giving up countless lives and hopes and dreams to do what is right. It is at this third point that the decision to do good can become evil. It is here where one desire consumes all others and in doing so consumes you. It is at this point that the light burns too bright, so bright that it not only dispels the shadows, but burns away everything else. This way lie dragons and dragons are terrifying.

Yet, when the darkness is deep enough the light can be worth the cost.


Tuesday 31 May 2016

Things which don't exist don't add up.

In Nothing is Greater than the Sum of its Parts, I argue that emergent systems do not exist: that every system is nothing more than a collection of its individual parts, and that with full knowledge of those parts, the behavior of the system should be fully predictable.

This is true, but there is a corollary to this: only things which exist add up. Physical phenomena and empirical effects add up to create a system. Thoughts, feelings and emotions do not. This is because the latter category are things which exist not in reality but in our minds.

Because of this, the intentions or interests of people who make up a system do not necessarily match those of the system itself. Is pretending that systems can have intentions false? Yes, only people can have intent. Is it bad? No. Not all lies are evil and anthropomorphizing systems can be a useful lie.

Only people have intentions.


The idea of an institution is an abstraction, a lie we tell ourselves in order to build a mental model of the world simple enough for our brains to handle. We cannot model or even conceptualize the countless people and relations which together form the US State Department, so instead we replace that chaotic network with a single block. We draw arbitrary lines delineating one department or organization from another when in reality networks overlap and blur together. We anthropomorphize, assigning intentions and aims where in reality no single will exists.

Institutions are a lie. When used properly, they help us make sense of the world, and that clarity lets us cut through the noise. When used poorly, they obscure more than they reveal or give us the illusion of knowledge where in reality we have none.

Thursday 26 May 2016

Lying to oneself and to others

If you can lie to yourself, you don't have to lie to others.

If you can lie to others, you don't have to lie to yourself.

Saturday 14 May 2016

Against rational belief in free will

In Axiomatic Beliefs, I argue that certain premises are cornerstones of our thought and cannot be jettisoned or questioned. From this I argue that, in certain cases, these beliefs will conflict and that there is no way of meaningfully resolving such conflicts. Hence the best we can do is accept cognitive dissonance and keep on believing. That is why I believe both in free will/personhood and in a deterministic universe*, despite arguing in Ghosts in the Machine that this is impossible.

Some argue that free will is indeed possible despite a deterministic world. They argue that free will arises from our consciousness in some manner, or that we can transcend the limits of physicality to make choices unconstrained (or unexplained) by cause and effect. In Nothing is Greater than the Sum of its Parts, I argue that this is impossible and akin to a belief in magic.

Let me be clear: at this point I think that people who believe in free will because that belief is natural or useful to them are justified in doing so. On the other hand, philosophers who construct arguments in favor of free will, ignoring the impossible underlying tension between free will and belief in an orderly universe, are mostly idiots.


*A probabilistic universe would also require disbelief in free will, but I'll leave this for another time as our own universe seems to be entirely deterministic (rejecting the Copenhagen interpretation of quantum mechanics in favor of MW) based on current scientific knowledge. If it seems difficult to reconcile a deterministic universe with our everyday belief in probability, remember that probability is in the mind.

A non-additive view of morality.

Doing sums with morality is very dangerous. But what is the alternative? The answer is not to look at the right side of the equation but rather to focus on the left.

In English:
Looking only at whether the sum consequences of your actions are good, looking only at the end, is dangerous. It is dangerous because it is very easy to lose sight of what evil you do, to ignore it and justify it, to normalize it and, over time, become accustomed to it. You must consider the consequences, but you should not lose sight of the costs. If you must pick the lesser of two evils, you should still believe that what you are doing is evil.

In Rationality:
Good and evil are categories of action (or alternately a scale going from -100 to +100, but hey). Certain things are good and certain things are bad. Judging good and evil by the overall consequences can weaken the association between certain acts and their good/evil grouping. This can be bad: certain actions are usually evil, and believing that they are not evil, even though that may not be true in one specific case, is not good, as it increases the likelihood of you committing them when they are indeed the wrong choice.

An academic-style criticism of the Greeks.
--------------------------------------------------------------------------------------------------------------------------

Introduction

Decisions change us. Doing evil corrupts us, and it corrupts us all the more completely when we do not regret it. That is why the Tragedians' grief-filled method of decision making is preferable to Plato's stark rationalism.

The Difference

What are the ethical views of Plato? Plato believes in the existence of an absolute good, embodied by the form of justice, and hence rejects moral relativism in all its forms. Importantly, he also understands there to exist a threefold distinction in the human soul between the reasoning, desiring and emotive parts. He believes that the world of forms, and hence understanding of justice, is best accessed through the first faculty, reason, the exercise of which allows us to behave morally and to rekindle the lost memories our soul has of the world of forms. Because of this, a Platonic approach to ethical decision making is to use reason and reason alone to assess the available options, the consequences of those options, and how far each action is good based on how far it is in line with the form of justice. In cases where a difficult trade-off must be made, for example torturing one terrorist to potentially save a dozen innocents, the Platonic decision maker should consider the options, make the best choice based on the information they have and then proceed without doubt or regret, because the choice they made and the way they decided on it was in line with justice and hence objectively good.

The ethical views of the tragedians differ in two important respects from those of Plato. Firstly, the tragedians are not necessarily moral absolutists. They recognize that competing moral demands can exist, say those from different gods or those between family and state, and that in some cases these moral conflicts may well be insoluble. This is in contrast to Plato, whose form of justice is the only source of morality, meaning all conflicts are soluble or, at the very least, have no superior choice, making every choice a just one. Secondly, and more importantly, in cases where a difficult trade-off must be made, the tragedians advocate not that the decision maker choose the best, or least-worst, option and be satisfied with their choice, but rather see such satisfaction or lack of regret as a form of hubris to be avoided. The punishment of Agamemnon and Antigone is an example of this. For the tragedians, a person who does evil, even if it is the lesser evil given the options available to them, has committed a wrong and should feel remorse for doing so.

The Corrupting Effects of Decision Making

There is a distinction between evil and badness. What is evil? Evil simply means assigning the wrong weighting to certain actions or goods. In other words, seeing obviously heinous acts as acceptable or, alternatively, seeing them as carrying less moral weight than they should is evil. For example, a soldier who is forced to murder on a daily basis, loses some or all of their understanding that murder is bad, and hence goes on to undervalue the badness of murder when making moral trade-offs in the future, has to some extent been corrupted and become more evil. This is different from badness in that a bad action may be in itself undesirable but can still be the right choice if the alternatives are sufficiently bad. For example, murder is bad, but it may nevertheless be the right option in a situation where not murdering would lead to the deaths of a hundred others.

It is an inescapable fact of human nature that we are corruptible. That is why, when making a difficult moral choice, we should do as the tragedians advise and feel regret and shame. The reason for this is that shame prevents or slows the corruption which leads to evil. Doing bad things changes us. It desensitizes us to the horror of what we do. Torture enough and you are no longer shocked by the pain of others. It causes us to begin to construct justifications in order to be able to look at ourselves in the mirror. Beat your wife long enough, and you begin to believe that she deserves to be beaten. The decision making Plato advocates requires that we not feel remorse, shame or disgust for performing bad acts, provided that they are justified. My contention is simply that when doing bad, feeling disgust or regret slows the normalization and justification which lead to corruption. Not doing so hastens this corruption, leading to future decisions where evil is done not because it is necessary but because the decision maker is corrupted enough to devalue the extent to which bad actions should be avoided.

An obvious response from Plato would be that the corruption I refer to is impossible. After all, a Platonist is not guided by their emotions or appetites but rather by the rational part of their soul, and hence their valuation of certain acts and understanding of what is moral and immoral is objective, based on knowledge of the world of the forms. The corruption I talk of may occur, but it does so not in the rational part of the soul but in the emotive or desiring part, which anyone who follows Platonic ethics would not let control them. Hence this corruption is not a problem for Platonic ethics, as it only applies to individuals who have failed to implement some of Plato's most important advice.

The reason Plato's response is unsatisfactory is that the distinction he makes between rational and irrational parts of the soul is not one which exists, or can be forced to exist, in reality. It is not possible for any human being, no matter how enlightened, to prevent their emotional state from leaking into and affecting their rational thought process. Like it or not, the different aspects of our soul are inextricably bound together in a web of causal relationships. What this means is that either my criticism of Plato, that his regretless decision making is more likely to lead to corruption, still rings true or, alternately, that Plato's system is indeed invulnerable to corruption but is by necessity so difficult to attain that it is totally out of reach of human beings. In both cases the tragedians' ethics seem superior: in the first because they lead to less corruption and hence less evil, and in the second because they are actually practicable by human beings.



References

Plato, The Republic


Bibliography

The Lucifer Effect by Philip Zimbardo,
http://www.lucifereffect.com/

Ends Don't Justify the Means (Among Humans) by Eliezer Yudkowsky

The Stanford Encyclopedia of Philosophy

The Fragility of Goodness by Martha Nussbaum


------------------------------------------------------------------------------------------------------------------------


Friday 6 May 2016

Rationality LifeHacks

Intentionally seek out information which contradicts your beliefs. Intentionally try to contradict your own beliefs. If your social circle agrees on something, play the devil's advocate.

Also, don't weak-man.

A quick introduction to metaethics.

In science, we try to come up with the most general and simple rules possible which predict/resemble reality. In ethics, we try to come up with the simplest/fewest ethical rules and premises possible which, as a system, predict/resemble our moral intuitions.

Example: in science, you could do without a theory of gravity and instead have a separate theory for every single object, each being pulled in a specific direction which is a function of the objects around it. A theory of gravity with a single law is simpler, gives the same predictions, and is hence superior. In ethics, you could go through every single possible instance of murder and say that it is good or bad. Alternatively, you can come up with more general rules such as "Killing is bad unless done in self-defense/war/etc.". The general rules, provided they fit our intuitions, are better.

In science, the standard which we measure our theories by is an objective, external reality which is the same for everyone. Hence, a scientific theory or law is equally right or wrong for everyone. In ethics, morals do not exist in reality but only in our minds. More problematically, morals are different for every person. Hence ethical laws or theories cannot be objectively good or bad. A theory or rule can equate perfectly with one person's moral world and not at all with another's. How do we get around this? Simple. We select a certain set of intuitions, a certain ethical world, and judge our theories against that world. Usually, the ethical world we use is fairly close to that of most people in our societies. Hence, when we say a certain theory is good we mean it is probabilistically likely to be a good fit for the internal moral world of most people in our society. 

Another problem with ethics is that, unlike in physics, the subjective world in which we live can be changed by our theories. A theory in physics does not change the rules of the universe. Gravity exists whether or not we believe in it. (Note: this may change if we develop technology sufficiently advanced to manipulate the laws of reality. Then our beliefs would begin to shape the world.) On the other hand, a moral theory can change the moral world it seeks to capture. A strong law against torture, linking to many other strong intuitions not relating to torture, could well convince a person listening to it to abandon what pro-torture intuitions they had. It is as if in physics making a strong theory could bend reality to fit your theory. How do we get around this? Right now, we don't, because most philosophers are idiots stuck in the past. Instead of using the intuitions of others, which are unaffected by the philosopher's theories, they use their own moral intuitions, which are changed and hence unreliable. How should we get around this? 1: quarantine moral theories. Don't publish ethical works. Sequester philosophers for life. 2: ask representative samples of the population all the moral questions you can think of, record the results, and beam them into the philosophers' prison. Now the philosophers have a moral world to judge their theories against which is not changed by their theories. Problem: this is inhuman, and it prevents any social good or moral progress coming about from philosophers' work.



That was meta-ethics. Enjoy.


note 1: 
  • I make a distinction between laws and premises. Premises I take to mean intuitions, by which I mean specific, situational moral judgements, i.e.: person X does Y for reason Z in a certain situation; how good or bad is this on a 1-10 scale? Laws are rules stretching across many situations.
  • I say the purpose of ethics is to find good laws. Goodness = simplicity * accuracy. Accuracy = extent to which the law aligns with the data, which for ethics is moral intuitions.
  • Problem: People's internal moral worlds may not be composed of intuitions. Rather, they could be composed, at least partially, of laws which in turn give rise to intuitions. I don't know why this is a problem but something smells very wrong here.
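The goodness = simplicity * accuracy idea in note 1 can be sketched as a tiny model-selection exercise. Everything here, the candidate laws, the intuition "data" and the simplicity weights, is a made-up assumption for illustration:

```python
# Score candidate moral "laws" by goodness = simplicity * accuracy, where
# accuracy is the fraction of recorded intuitions a law agrees with.
# Each intuition: (situation, judged_wrong). Pretend these came from
# surveying a representative sample, as the post suggests.
intuitions = [
    ("murder for gain", True),
    ("killing in self-defense", False),
    ("killing in war", False),
    ("murder of a stranger", True),
]

# Each law: (simplicity weight, function from situation to "this is wrong").
# Simplicity is assumed lower for laws carrying more special cases.
laws = {
    "killing is always wrong": (1.0, lambda s: True),
    "killing is wrong unless in self-defense or war": (
        0.8,
        lambda s: not ("self-defense" in s or "war" in s),
    ),
}

def goodness(simplicity, verdict_fn):
    accuracy = sum(verdict_fn(s) == wrong for s, wrong in intuitions) / len(intuitions)
    return simplicity * accuracy

scores = {name: goodness(simp, fn) for name, (simp, fn) in laws.items()}

# The more accurate law wins despite its extra complexity:
# 0.8 * 1.0 = 0.8 beats 1.0 * 0.5 = 0.5.
assert scores["killing is wrong unless in self-defense or war"] > scores["killing is always wrong"]
```

This mirrors the physics analogy earlier in the post: the law is a model, the intuitions are the data, and simplicity plays the role of an Occam penalty.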

Note 2: Other issues exist which I have glossed over.
  • Moral intuitions are unstable.
  • Moral intuitions are difficult to access.
  • etc...