Nothing anyone says in a complex debate is really true.
If it were – if it included all of the qualifications, specifications, caveats and footnotes that are necessary to stop it being, to some degree, false – it would be too long to say, and nobody would get anywhere. If the topic is something concrete and specific, then all statements will leave out important details; if it’s something abstract and general, then they’ll leave out the even more important principles of application.
Nevertheless, we get by. But this means that whenever anyone says something, we don’t just hear and respond to the statement they made, but to a fairly large and elaborate projection – the statement they made, and all the details, applications, etc. that we think they probably intended.
Complex discussions, therefore, involve a certain odd sort of ‘trust’: people are saying things that you might either agree with or disagree with, but you have to decide (with whatever degree of reflective awareness) whether to take them to mean something sensible or something wrong.
This can produce at least three sorts of dysfunction:
1) If there’s too much trust, discussions can become impossible to start – whatever someone says, it’s interpreted so as to make it reasonable and correct, and so any actual disagreements remain unstated. In extreme cases this can lead to the dreaded ‘groupthink’.
2) If there’s too little trust, discussions can become impossible to finish – even if someone says exactly what you had been about to say, your abiding suspicion of their nefariousness makes you interpret it in the worst way, and jump to objection. Lots and lots of ‘yes but…’s.
These two dysfunctions are, obviously, very common in political discussions – the former within and the latter between the various ‘tribes’ or ‘sects’ into which people form themselves. But the third, I find, is more common in philosophy.
3) This is where the link becomes unclear between the sort of filling-in-of-details that you endorse or reject (and hence the people you ‘trust’ or ‘distrust’, the ‘sides’ people are on) and not just the particular things people say, but any definite statement you can come up with at all.
Except, of course, for the words that people come up with for that purpose – but which then turn out to have a dozen different definitions. I’ve found this, most recently, with the idea of ‘reductionism’, in particular part-whole reductionism, or other-sciences-to-physics reductionism.
Some people are sympathetic to reductionism (including me) and other people are unsympathetic. And consequently we argue. But neither of us can quite put into words what we’re arguing over. What statements express the meaning of ‘reductionism’, or ‘anti-reductionism’? Everyone has a different idea – and often the idea of what they’re arguing against is one so transparently weak that it becomes incomprehensible why anyone would defend it. I’m sure people can add other examples.
And yet – there is a disagreement here. My subconscious makes me inclined to disagree with everything said by someone I’ve tagged as anti-reductionist, and they tend to disagree with everything I say. Or the agreement is ‘yes, but…’
There is some philosophical opinion I hold, which others disagree with, but I don’t actually know what it is. This is disconcerting.
Moreover, it suggests to me that the way to proceed probably isn’t a painstaking logical analysis of the statements that get made, because those statements are merely a symptom. Hair-splitting, though often productive (especially for enabling the picking out of nits), won’t help much in this case.
Instead, these sorts of debates make me feel that a psychological analysis is needed – a therapy almost, an investigation of what in our emotional set-ups disposes us to identify with these viewpoints that we can’t even really formulate.