Saturday, March 18, 2017

In Appreciation of Neuro-Divergent people

One of the main reasons I started this blog was to initiate dialogue about mental health awareness. Particularly at a time like this, which sometimes feels like the "Empire Strikes Back" moment of the equity movement. A useful over-generalization of the cultural conversation we've had thus far concerning the reasonable limits of equity in North America is as follows: the black community and women won the first battles fairly decisively. But these battles were very prolonged - which should really say something about their massive complexity. I mean, consider just how much data was being used to make these cases for equity: in the case of women, thousands of years of historical data, cross-corroborating the same recurrent patterns, such as male exclusivity in leadership positions and positions of power, male-centric language (for example, feminine grammatical gender being marked), etc. These data span the domains of several disciplines. And in many cases, it isn't obvious how to formalize them in a manner that would be "scientifically" kosher. For example, anyone who has at least attempted to create a piece of art understands the difficulty of trying to explain its "essence" using explicit language - the whole point is that the art, in a sense, speaks for itself. Art is to be interpreted, not analyzed. And this interpretation includes subjective elements, not just objective ones. Subjective is almost considered a dirty word these days, what with the complete denigration of the humanities. And yet we forget that the original academic discipline - philosophy - is usually considered one of the humanities. And that its methodologies - including the subjective elements - are what led to the methodologies of all other frameworks.
Also, despite the hard sciences' fetish for formalization, the most formal discipline of them all - mathematics - thoroughly indulges in the inherently subjective notion of intuition. The relationship between mathematics and subjective intuition is so deep it even penetrates the discipline's mythos.

Consider the life of Ramanujan, an Indian mathematician so brilliant that contemporary mathematicians had to work together to interpret his findings after his death. He was known for a less orthodox approach to mathematics, which was almost certainly a result of his life in what we ought to call poverty. His parents were on the relatively low end of the economic spectrum, minimizing his opportunities for social mobility. He struggled in college as a result of his seeming inability to do anything other than math. This led him to lose his scholarship, run away from home, change schools, and eventually abandon getting a degree. He found it hard to support himself financially, relying heavily on the charity of his community. Understandably, this set of circumstances affected him pervasively. For example, his lack of finances restricted his access to the works and findings of contemporary mathematicians, which forced him to derive his own analogous concepts, essentially reinventing the wheel. Paper was too expensive for him, so he could only afford to jot down the results of his proofs (leading to a large-scale effort in the 1970s to verify these findings, since they seemed useful for work in number theory). For more information on his life and work, check out this article from Stephen Wolfram.

You may be wondering why I would bother bringing up the life of a mathematician in an article like this. Well, if you've been reading my posts, you've probably noticed I have a penchant for structuring my arguments in a non-linear order - often starting with an introduction that has nothing to do with the main point, and then drawing connections to it later. I do this largely because it is reflective of the manner in which I think. And, very probably, the manner in which many other humans think. Now, allow me to explain where I am coming from with this observation.

We are often inconsistent about which aspects of evolutionary theory we appeal to in justifying our cultural, political, and social worldviews. Some people, I would argue, are too quick to dismiss the role of evolutionary processes in the historical development of Homo sapiens some 200,000 years ago. Others, however, are too quick to ignore the dynamic psycho-social factors of these processes, and the pervasive effects they can have on groups and communities as well. It's the people who commit this latter fallacy that I am most interested in discussing here. For these are the sorts of people who are most often guilty of denigrating the humanities for their lack of "objectivity". Who most adamantly argue for the "specialist" as our model of the quintessential intellectual (as opposed to the Renaissance Man). And who argue for what my old Cognitive Science teacher called "Interdisciplinary Eclecticism": interdisciplinary communication not much better than small talk. The most valuable degree, they argue, is the one that specializes most heavily in the findings of a narrow set of deeply related disciplines, each with its own well-defined parameters, aiming to find clearly delineable solutions with immediately obvious pragmatic applications. This is often coupled with a more pragmatic notion of value - or even of truth itself. For, in this uncertain and ever-changing world, what else could we use to ground our notions of truth and meaning but those principles describing and generating the uncertainty and change in the first place? This idea of consistency as truth goes back at least as far as Plato's theory of eternal forms which, as you may remember, were defined by the fact that they were unchanging (which was necessary, he argued, for their being ideal). In a world where a failure to adapt undermines a species' survival, how can we afford to tolerate anything less than the absolute best for our species?
As you know all too well, the world doesn't give a shit about you.

We are taught that the Enlightenment stomped out the superstitious thinking of years past. That anything worth studying must be attachable to a well-defined, mathematical formalism. Anything less loses its grounding in reality, belonging to the same epistemically dubious category as pseudo-sciences like Creationism and Flat Earthism. This kind of thinking was especially clear during the behaviorist regime in psychology, in which the Gibsonian school of ecological perception, the role of embodiment, and connectionism were shafted for not being objective enough. However, as anyone with any background in Artificial Intelligence can tell you, this was a big mistake. And now that self-driving cars are on the horizon (and kinda a big deal), you should really pay attention to what their very existence entails. For these are built using the findings from the schools of thought mentioned above. The ones which, prior to the advent of the Cognitive Revolution, were not considered "sciencey" enough.

Consider the fact that the first academic discipline was philosophy. From its methodology and findings, all other disciplines were derived. Many intellectuals engaged in interdisciplinary activities, hence the concept of the "Renaissance Man" (or polymath, to use the gender-neutral term). However, in the modern period, it is often assumed that such a phenomenon is either dead or dying. For our modern academic disciplines have spent so much time asking new questions and engaging in discipline-internal conversation that it's no longer feasible to really follow the work of any other. As a result, our cultural idea of what constitutes an "intellectual" changed from the polymath to the specialist. For how could one person know more about something than someone who specializes in it? And, as time has passed, how can a brain hope to carry all the world's knowledge at the same time? Even if there were enough space to store it all, there wouldn't be enough time to learn it, would there?

However, this whole line of reasoning is riddled with assumptions we can now dismiss as false. It implicitly assumes a "container" model of memory, in which static memory objects are passively stored in static containers. But we know, via tons of controlled experiments in psychology, that memory is fundamentally reconstructive in nature. So, although the mind must certainly have a finite storage capacity, we ought to frame discussions of memory capacity not in terms of storage size, but in terms of reconstruction heuristics. Under this framework, a little can go a long way: if one's cognitive architecture is optimally preconfigured for learning (AKA they use research-verified "learning strategies"), then we would expect them to get more cognitive "bang for their buck", so to speak. Notice how this would not be specific to just memory: for memory is nothing more than attention to the past rather than the present, and attention itself is the integration of many other cognitive processes. So assuming an improvement in one implicitly assumes an improvement in all. Which probably explains why so many of the best "study habits" are so holistic in nature. Such as getting enough sleep. And diet and exercise. And following a routine, including strict timed work-break intervals. It's almost like you are an embodied entity whose physical composition determines your outcomes. And we are struggling to adapt to an environment which is foreign to the one we originated from: one in which not going out for a jog meant not having food to eat. Nowadays you can get your groceries shipped to you.

The conventional wisdom is not entirely wrong, in that specialization has undoubtedly played a role in our development as a species. After all, certain tasks simply require a tedious, detail-oriented approach. Anything related to Computer Science, Physics, Engineering, or Mathematics will likely require a lot of symbolic manipulation using formal rule applications. So with that said, it would seem unreasonable to expect anyone to be able to focus on more than a few of these subjects, at least at the same time. And times are different now than in the past. After all, the "problem space" of the world was much less populated in the past than it is today. Clearly it was easier to be a polymath back in the day simply because there was less to learn. Even factoring in the role of reconstructive memory, the fact remains that there is not enough time in even the most above-average human lifespan to read the works of every discipline. Even learning just their "essentials" seems like a herculean task. So surely there is a role for specialization. And surely the death of the Renaissance Man isn't all bad.

But what if this whole line of reasoning is committed to a false dichotomy? What if we don't have to pick between pure specialist and pure polymath? This is where the life of Ramanujan comes in. If he demonstrates anything to us, it should be the inherent value of different kinds of thinkers. Thinkers from all sorts of different backgrounds, with different beliefs, opinions, etc. Thinkers that challenge traditional assumptions about the status quo. And can back them up with rigorous, philosophical argumentation. This also ties back nicely with the whole equity theme of many of my posts. For it's easy to see how disagreements arise when we privilege certain notions of truth and knowledge. This isn't to say objective truth, in some form, doesn't exist. But we need to be careful not to confuse the truth of a proposition with the ability to formally prove it. For virtually every living organism on the face of the earth bases its survival on fundamentally non-formal reasoning. Hence why I used the word "rigorous". And I think much of the evidence for the claims of the equity movement is sufficiently rigorous to deserve attention. It may be uncomfortable evaluating claims of an "oppressive patriarchy", which seem to entail so much. And yet the evidence for male supremacy is undeniable throughout all of human history: across all of our civilizations, embedded in all of our mythologies, religions, cultural norms, etc. God is almost always depicted as a man. Men almost always possess all positions of power. Most marriage traditions disproportionately benefit men. People are surprised when they find out how radical some early feminists were, but they seldom ponder why that is. I guess we're so used to the constant reminders of how bigoted we were in the past that it takes the edge off. And it lets us ignore the systematic damage that such oppression must entail.
Effects so pervasive it shouldn't be a surprise we have yet to find a way to fully formalize them: after all, we're not even that good at constructing dynamical weather forecasting models. But surely there's value in discussing hurricanes despite being unable to perfectly capture them. There's certainly value in discussing economic models despite their notorious limitations.

And this is where I (finally) get to the topic of this blog post: that the recent denigration of equity-minded persons - especially with regard to Neuro-Atypical people - isn't doing anything but holding us back. It seems to me that the pursuit of equity is one of the most important endeavors one can engage in. And I mention the life of Ramanujan because it exemplifies many of the reasons why. For one thing, Ramanujan's poverty undoubtedly hindered his productivity and potential: he was chronically sickly and likely malnourished, dying at the age of 33. Not to mention the very high probability that he was unable to afford enough paper to fully lay out his proofs, leading to the aforementioned need for mathematicians to posthumously verify his findings. Combined with the racism he experienced, it's hard not to conclude that his potential was wasted.

As some have mentioned, these factors weren't all bad: his lack of formal training in mathematics likely gave him a fresh perspective - a unique vantage point from which to enter the problem space. So, rather than having to go over all of the work of the leading intellectuals of the time, he could focus on specializing in that which his mind was most adept at thinking about. Which, interestingly enough, is more in line with the kind of reasoning Plato used in the Republic when defending society's need for specialization. So we want thinkers that can at least sufficiently cohere with the "mainstream", as they have the highest probability of contributing something useful to the conversation. However, we don't want to attract too much of the same sort of thinker. For their work has a significantly higher probability of just agreeing with what has already been said, ultimately being redundant. So we want thinkers that are, in different senses, both the same as and different from those currently in academia. Otherwise, the academic torch runs the risk of staying lit eternally but following the same linear path - parasitically surviving on the underbelly of society while not contributing anything to it.

Both Progressive and Conservative minded persons ought to at least agree with the sentiment that you can't go wrong with more knowledge. Which, in my mind, is the best possible case one can make for something like universal college. As has been argued, it's theoretically the only investment that can always pay for itself in some way or another. Although there are some economists who staunchly believe that a perfectly free market will always lead to the best (or rather, least bad) outcomes, this "market fundamentalism" is far from infallible. And although I don't have the time or space to deal with this worldview here and now, I'll just point out that most economists are fairly open about the shortcomings of their discipline. Of course, this isn't to say there is no value in studying economics and economic thought; for example, I think it should be mandatorily taught in high school. But I'm suspicious of any argument heavily or solely relying on it. At the very least, there must be corroboration with the works of other disciplines. Anything less implicitly understates the sheer complexity of the universe we find ourselves in, and trivializes the herculean task the human mind performs - on a moment-to-moment basis - of actually managing to make sense of that world in spite of this complexity.

Okay, now to get to my main point: why are we okay with the blatant hostility we are seeing towards people who are Neuro-Atypical? Why is "triggered" now considered funny? How is this any different from saying slurs? Are we giving preferential treatment to visible minority groups (like blacks, women, etc) over those whose condition is less obviously ascertainable? This kind of thinking strikes me as odd, since we have little reason to think that the thoughts of such groups would be that foundationally different from those of the people already in power. Don't get me wrong - discrimination against persons who are visibly different is very likely an old and primitive part of our cognitive processing architecture. However, conversations on this subject often get carried away with pointless debates over whether this is caused more by "nurture" or "nature" based factors. The former of which is apparently considered more tolerable than the latter. It also doesn't help that a lot of statisticians and psychologists are either products of their time or bad at communicating precisely what they mean, providing dangerous firepower to persons who aren't quite as nuanced.

However, with all that said, I still think it's a poor idea to privilege the status of visible minorities over that of the non-visible. For it reinforces that original sin we committed during the Enlightenment - the assumption that everything needs to be completely and elegantly formalizable. Which often raises the question of which "formalization schema" we end up needing to use. Combined with the (largely) arbitrary process we use to confirm whether someone is "qualified" (which itself implicitly assumes the faulty passive-storage model of memory), it should come as a surprise to no one that disciplines end up hitting theoretical brick walls. Just like psychology did before the Cognitive Revolution. The only antidote to this poisonous feedback loop is to critically re-evaluate what it means to be an "expert" in something, and adjust society accordingly. If there is any one thing the many theories of education seem to agree on, it's that passively reading a textbook and doing a few really long multiple choice tests isn't really learning. Testing must be frequent, and its purpose is the long-term encoding of propositions. Similarly, although a textbook can be a great way to transmit certain kinds of information, it's obvious that other systems (like 3D models) are more useful in certain contexts (like biology). This paper is a good example of what I mean. The researchers found that cell-biology students using 3D models (both tactile and visual) strongly preferred their use over that of the textbook, particularly when answering questions about complex, "higher level" relationships. Although the results didn't necessarily translate into higher grades in this particular study, the possibility that it boosted the efficiency of learning for these students cannot be ruled out.

So, to go back to my original argument, we ought to celebrate the neuro-diverse, for their potential contributions to academia could be great. Which is why I'm disappointed by the appalling levels of ignorance concerning neuro-atypical folk that have been going around. Every time I hear someone josh about getting triggered, I can't help but think that person simply doesn't care enough to educate themselves on what triggering actually entails. It seems like a phenomenon deeply associated with the cognitive mechanisms commonly associated with memory and attention. It's commonly considered the result of an overactive "fight or flight" response mechanism, in which the person's brain, on some level, cannot psychologically comprehend that the trauma is truly over. Which could give us many clues about the mechanisms behind relevance realization (which is kinda an important question for Cognitive Scientists here at UofT). However, even if the condition itself weren't so theoretically useful because of what it says about the mind, I would still posit that it makes sense to protect persons with this condition; and this is because, as they say, everyone is good at something. We've long suspected that the mechanisms behind anxiety play an evolutionary role in our survival - after all, sometimes we gotta get up and run like there's no tomorrow. So having an anxiety disorder - although likely a net negative overall (hence the need for accommodations) - likely has a positive impact on certain, specific mechanisms. And it's been shown that, at the very least, PTSD is associated with a better ability to learn the "gist" of something with "negative" information, as well as "enhanced perceptual priming" for "threat-cues". In other words, they are better at "danger avoidance" based survival reasoning.
This would appear to be corroborated by the phenomenon of post-traumatic growth, in which survivors of trauma are observed to undergo positive change as a result of overcoming their affliction. Ironically, one could say that it "builds character". In fact, it's been argued that The Hobbit was essentially a byproduct of Tolkien's own post-traumatic growth.

To be clear, this isn't to undermine the struggles that persons with PTSD face on a daily basis - but to point out the potential intellectual value of people who think differently than us. A potential which can expand quite a bit more if we loosen our idea of what kinds of intelligence are considered... well... intelligent. For example, the observation that there must be a relationship between great art and madness may be more than a stereotype. An observation which, once again, coheres well with what we know from cognitive science: that some of the factors associated with creative thinking are linked to the phenomenon of insight. Which is also kinda a big question Cognitive Scientists here at UofT ask. And which is notoriously hard to quantify (after all, what *is* an insight problem?).

And now to cash in on my mentioning of Ramanujan one last time. Believe it or not, he has commonly been posthumously diagnosed with autism. A condition for which, unlike PTSD, it is really hard to determine whether the net payoff in cognitive ability is positive or negative. I'm not sure if there is a single condition more associated with the stereotypical genius than autism. An association which, although we currently lack the data to conclusively prove or disprove it, seems biologically plausible nonetheless.

Now, to be clear, this argument is not intended to be of the same logical form as the "great Beethoven fallacy". For, in that case, the argument is that it is "better safe than sorry" to preserve a life which is currently a fetus, on the basis that it may potentially overcome the great handicaps it will face when it is born in order to achieve greatness. However, my argument deals with people who are - I would argue - more alive than the fetus. Additionally, my argument also provides an account of why some equity may be advantageous - because there is an inherent value in people who think differently. By most pro-lifers' admission, this doesn't extend to a fetus at a sufficiently low developmental stage (such as a zygote). The fact of the matter is that we have these persons who can contribute something novel to society if given a chance, and I think it best to try and give it to them. Unlike a more mainstream thinker, the neurodivergent begin solving problems from different, novel places in the "problem space". They frame the parameters in potentially different ways, reflective of the different ways they even begin to think about problems. We can also optimistically speculate that cultivating neurodivergent talent could increase our chances of ensuring we don't squander the talents of the next Ramanujan; for there is a chance that the next Ramanujan necessarily must be neurodivergent - that their neurodivergence is the basis of their advanced abilities, at least with respect to solving the kinds of problems they solve (remember, Ramanujan was shit at everything but math). And even in the worst case scenario, neurodivergent people are useful as lab rats to be studied under a microscope. For, as abnormal psychology has made abundantly clear, it's hard to know anything about "normal" brains without actually studying the "abnormal" ones.
I think, with all said and done, that there is great reason to at least take seriously the supposition that neurodivergent persons ought to be afforded a certain amount of equitable protections.

Which brings me to my final point: why are we treating mental illness as a joke? I mean, we've gotten to a point where we're using the word "triggered" as a slur. Because... according to some, some college students use it too much? Is this really that high a priority? Have these critics ever met someone who actually gets triggered, in the strict psychological sense? People with PTSD are fully aware of the existence of exposure therapy. To suggest otherwise is condescending at best. And at worst, it does nothing but drag the rest of humanity down into the mud of primitive, bigoted thinking. Understand that this isn't just about PTSD and triggering. I'm including Transgender persons in my description of neurodivergent too. And people with learning disabilities. You think the cultural crusade is going to end with the trans community? Professors like Jordan Peterson have already argued against the notion of accommodations, on the basis that society shouldn't have to be bothered with improving the well-being of certain kinds of people. Even if the actions taken to do so would be minute in comparison. Apparently, if you hand in an assignment late as a result of intrusive flashbacks caused by involuntary triggering, you deserve the late-mark penalty. In the eyes of our system, your performance outcomes are all that matter.

All I can say is this: there are certain trends in beliefs among professors in academia. Which, I think, can be partially accounted for by the simple fact that disciplines use, by and large, the same kinds of standardized testing criteria and the same methods of information transference. So why should we be surprised by the probability that academia is, in some sense, self-selecting? It's hard not to notice the cognitive feedback loop. And then people wonder why there is intellectual homogeneity in certain key personal beliefs and opinions of scholars, both in academia in general and in each discipline in particular. So I'll end this article by positing this: if one thinks that the problem with academia is a prevalent Marxism, then it would seem the marketplace of ideas didn't do its job. But if it did, then I suppose the academic vetting process did its job and there's something worth looking at in Marxist doctrine after all. Which would kinda undermine Peterson's characterization of Marxist doctrine as being dangerous because it necessarily leads to mass murder due to its possession of certain essential, inalienable principles (which the Gulag Archipelago supposedly proves deductively), and apparent counter-examples of successful moderate left-leaning states like Sweden don't count because they have problems you just don't understand, since it's the "rape capital of the world". So all variants of socialism, like Social Democracy, must be icky.

I belittle this line of argumentation in my writings because, in case you haven't noticed, it's the main line of argumentation being used to force neurodivergent folks to conform to a more neurotypical way of thinking. Because that's what happens when you equate the ability to rote-memorize facts for multiple choice tests (and forget them the next day) with actually understanding something. When you conflate the process by which we confirm knowledge with that of knowing itself. This is the reasoning that leads them to, for lack of a better word, deny accommodations to legitimately disabled people. Because unless your injury is explicit and visible, apparently it doesn't deserve attention. Which just makes our lives more difficult than they need to be. It's also the kind of empty reasoning behind statements concerning the "uselessness" of the humanities. Because, as many on the alt-right love to argue, they have become "safe spaces" for "triggered" college students to hide in, so their "feelings" don't get hurt. When the triggered college students are neurodivergent folks (like Transgender persons), no one gives a shit. This criticism is bipartisan, for a lot of left-leaning thinkers hold a similar attitude: that, unlike visible minorities and women, the "condition" of neurodivergent persons is invisible to the naked eye, which apparently means it can be ignored. On a related note, since we have great reason to suppose that women, blacks, etc. are intellectually equivalent to white cis hetero males (since evolution doesn't work that quickly), we can understand why they would be deserving of "special treatment" - after all, they are equal. So many on the left consider these differences in outcomes to be unfair; a symptom of a greater disease. But when it comes to, say, learning disabilities... those are, by definition, disabilities. Disabilities involving learning. Which is kinda a big part of what it means to be intelligent.
So yeah, consider this a warning that enabling the current attitude against Transgender persons is enabling a similar attitude towards all neurodivergent people. Which could have pretty lousy consequences in the long term for everyone. And although I'm an optimist when it comes to the "nobility of intentions" that human nature allows, I am a deep pessimist about its rationality. Anyways, that's all for now.

EDIT: mild edits, added a reference to Tolkien, and made font corrections

Saturday, February 11, 2017

Trigger Warning: This post will contain explicit references to war, sexual violence...

Trigger Warning: This post will contain explicit references to war, sexual violence, overt bigotry, and all other manners of potentially traumatic things.

There, was that so bad? I wasn't required to attach a content label to this blog post. And, if I had simply blacked out the text, then you wouldn't even be spoiled. By doing this, all I did was give a heads-up to the surprisingly large number of Canadians who suffer from PTSD. Not that the quantity of sufferers should matter that much when we're talking about a condition with such an intense magnitude of effect on one's functioning. A condition which is scientifically sound and historically well understood.

Anyways, I'm going to lay out my main proposal now: the outrage against "trigger warnings" is overblown. Most people who have written on the subject do so in non-constructive terms, demanding that we completely eliminate the very concept, rather than working with activists to arrive at common-sense solutions. Consider the fact that the AAUP report against trigger warnings didn't even acknowledge that PTSD is best dealt with using controlled exposure. The "controlled" part is key. So if one only focuses on the "exposure" part then... well... they probably don't have many friends with PTSD. Or the ones they do have don't follow up with their psychiatrists. Because, after the debate is over, the PTSD sufferer gets to enjoy the trigger-warning-free zones the critics worked so hard to preserve. And now, since I put a trigger warning up beforehand, I think it would be appropriate to show them some of the "beneficiaries" of this crusade against tyranny. People like child rape survivors, war veterans, and the children from fucked up families we like to pretend don't exist. And while I'm at it, let me link to an article on what PTSD actually feels like. Of note is this passage:

"I didn't know any of what he was saying, I was still very oblivious to the words, but I knew that it triggered all the abuse I had suppressed. Afterward, I didn't sleep or leave the house for a month-and-a-half and was crying non-stop."

In case you were wondering: yes, that is a qualitative description of what it is like to be triggered. A brief description of the neurological underpinnings can be found here. Doesn't sound much like a whiny college student whose feelings were hurt, does it? Of course, we could just acknowledge that these people exist, and focus on the implementation details instead. That is, after all, why so many reasonable-seeming people are against "trigger warnings" nowadays, isn't it? Because of the unintended consequences they could have.

Or perhaps, as some people argue, the very idea of "trigger warnings" is simply incompatible with the concept of a perfectly free society. In which case, all I can say is this: isn't it a shame that we have ratings systems for TV, movies, music, and, to a lesser extent, video games? I mean, that is what Content Advisory is, isn't it? It advises you of the content of what you are going to be exposed to. It even gives you little symbols and labels to tell you how bad things are. I mean, how else would I know that cussing is bad? Or that an erect penis is worse than a flaccid one? Really, aside from a few cases, these ratings don't seem to serve any purpose aside from reinforcing "traditional family values". The industries largely "self-govern", using unclear (and sometimes even unknown) criteria which we have good reason to believe are often ineffective. Although that's hardly a surprise, given that they seem to be based on religious values. It's almost as if the entertainment industry is a corporation with political ties to certain special interests. Some of which are religious in nature. So with that said, I think it's reasonable to suggest that anyone against trigger warnings also ought to be against Content Advisory.

It could be argued that trigger warnings, in a manner analogous to Content Advisory, could be used as tools for the subjugation of our individual rights if they end up becoming more mainstream. But keep in mind that trigger warnings don't necessarily have to be issued by order of law in order to serve their function. Content Advisory, on the other hand, exists in a sort of legal grey area: although its ratings aren't legally enforceable, many people act in compliance with them, often out of fear of potential future punishment. Which, by extension, could be considered coercing people into implicitly agreeing with the values they represent. So methinks these "trigger warnings" alarmists ought to get their priorities straight. In fact, they'd have a lot of progressive allies who would love to help!

Although this might be a surprise to some, I don't agree with everything everyone who has ever endorsed trigger warnings has said or done. And I certainly don't want to see them follow in the footsteps of the Content Advisory crowd. But I think the concept of "trigger warnings" is an appropriate suggestion to be taken seriously in certain, common-sense environments. Like classrooms which teach material relating to common traumas. As the AAUP report implicitly acknowledges, trigger warnings don't have to be mandatory, don't have to water down any content, and can be effective, since some triggers can be predicted. All they do is give students a chance to psychologically prepare themselves. Remember "controlled exposure"? I mean, keep in mind it's hard to get by in today's world without a degree, right? And we know that PTSD negatively affects academic performance. So why couch the debate in non-constructive terms? If one wants a conversation about the reasonable limits of "trigger warnings", then outline requests and make concessions. That is what politics is about, isn't it? And remember - many professors issue trigger warnings of their own volition, despite the AAUP's hard stance. So why treat this as a binary? And why, on top of that, treat it as somehow anti-intellectual?

Also, on a related note, there's a trend I've observed among people against equitable things like trigger warnings: a belief that millennials are coddled. According to many of these folks, we are "resentful" that we aren't as "successful" as our parents, and this "coddling" is making us too "weak" to "survive" in the "real world", which is necessarily cold. And if we try to change anything about it, we would be going against our fundamental human nature, which would inevitably lead to communism or something like that. Oh yeah, and anyone who disagrees with this view is either ignorant or a radical cultural Marxist who is conspiring to corrupt college campuses nationwide with their silly talk of microaggressions, systemic racism, and more than 2 genders.

I'll have to do a more substantive post on that whole world-view, but for now I'll just say this: I've heard similar lines of reasoning from Republicans and right-wing conspiracy theorists. Keep in mind that, when they complain about the indoctrination of students into a "radical leftist ideology", that often includes stuff like same-sex marriage, abortion rights, the reality of climate change, the legalization of recreational drugs, etc. No matter what you do in politics, you're always going to side with people you disagree with. The question you have to ask yourself is: on which issues am I going to disagree with them? What are your priorities going to be?

Friday, February 3, 2017

The Professor & The Pronoun: Part II

Remember how I said I'd eventually do another post on the Peterson fiasco? Well, today is finally that day. Unsurprisingly, this will be a continuation of a prior blog post, which was a response to some of the sillier arguments some of his followers made (namely, about the alleged "made up"-ness of the pronouns). I picked that question since I have a background in linguistics, and it was one to which I felt I could make a useful contribution. Particularly because it implicitly assumed prescriptivism (that is, how one "should" speak) and undermined descriptivism (how people actually use their words). And while it's true that Orwellian control over language can be a powerful tool of oppression (think of the way the "War on Terror" was framed), it does not follow that, because there was a "language mutation", the corresponding "language mutator" must have Orwellian intentions. This was, by my reckoning, where my last post left off, and where this one resumes.

Before I go any further, I just want to clarify something: when I linked my last post to my Facebook wall, an unpleasant conversation ensued, culminating in a trans person being reported for using "a false name" and subsequently getting locked out of their account. Although I was initially quiet about the ordeal, I'm sharing it here as a reminder that, well, not all people who say they believe in free speech actually believe in free speech.

Getting back on point - at the heart of conversations such as these, there's often a deep ideological rift that no amount of argumentation can settle. This is most clear upon evaluating the arguments concerning whether there is a scientific basis for the notion of transgender as an identity (aka whether or not gender corresponds so strongly with sex that there is no room for any variations). Here is one such exchange I have taken the time to ask permission for and censor.

The key takeaway from them is this: there's no simplistic "transgender gene" that you can isolate and point to and say, "hey, that's the trans gene!". We have some potential accounts of the data (more on this later), but no complete theory. So, do we view this absence of evidence as evidence of absence, or as the jumping board for better, more up-to-date science? Well, if you google "scientific basis for Transgenderism", you might get the impression that the former option won out. You'll probably read that argument about the dangers of reassignment surgery. You may even forget that you're agreeing with Breitbart for the first time! But then, you'll hopefully take a step back and ask yourself what the other side is saying. And boy is it interesting! As it turns out, they argue, the history of transgenderism in psychiatry is intermingled with politics. Johns Hopkins pioneered gender reassignment surgery in 1966, but then stopped in 1979 after a study against gender reassignment surgery was published. Although to be blunt, the criteria it used to determine the success of the surgery were fairly silly (spoiler: satisfaction on the part of the person receiving the surgery wasn't one of them). Of note is that many other studies came out arriving at the opposite conclusion before, during, and after this period. Also of note is that the guy in charge at the time - Dr. Paul McHugh - was a rather strict Catholic, admitted to wanting to close the program for a while, and was open about how the procedure really grossed him out. Nevertheless, Johns Hopkins has recently announced it will resume performing the treatment and, in fact, has gone as far as to officially support the LGBTQ community. Other organizations with similar views include the WHO, the APA, the AAAS, and CAMH (this article covers them more exhaustively). For more on the history of the transgender movement, I would suggest reading this for the short form, and this for the long.

Unsurprisingly, there's much more research on potential causes as a result of this more widespread acceptance. The prior link lists a few recent studies being done. These two articles reference some older studies (one from as far back as 1999). I won't cover all the findings of these studies or whether they are "scientific" enough to compel belief (I'm not a biologist and the details are irrelevant to my case), but there are plenty of articles written for laypersons that cover some of their findings, from reputable sources such as National Geographic, New Scientist, and Scientific American. To reiterate, none of this should be taken as deductive evidence for any particular biological or psychological account of transgenderism. I know what an appeal to authority is. Rather, what it is supposed to undermine is the rhetoric that the transgender identity is "unscientific". It also puts the ball in the gender binarist's court - for why should the burden of proof fall on the side of the wider scientific community? Of course, this isn't to say that the wider scientific community can't be wrong. But you'd better have more than rhetoric if you want anyone to listen - let alone change their mind. And for anyone surprised that transgenderism could potentially have validity despite previously being questioned by "experts", all I can say is this: you haven't been paying attention to the history of science. It took until 1987 for homosexuality to stop being classified as a mental disorder by the DSM. What were you expecting?

Oh, while I'm on this topic - I also want to address a common reductio ad absurdum argument, based on hypothetical "unicorn"-gendered people and Lauren Southern's legal gender. I'll hold my breath on the unicorn people until they mobilize and start demanding equity. And as far as I'm concerned, the legal status of Lauren Southern's gender matters about as much within the context of this debate as Rachel Dolezal's race does in discussions of racial injustice. Find a few more people like her and demonstrate it's a problem, and then we'll talk.

So now we're back to the original ideological question: do we consider the absence of evidence as evidence of absence, or as the basis of new research? My view is the latter. In fact, I'd argue that, given the massive complexities of the physiology and psychology of sexuality, there must be many potential deviations from the norm. I mean, how strange would it be for patterns in our biology - on the brain and body levels - to completely, 100% cohere with each other in such a manner that every "man" is a "male", every "woman" is a "female", and nothing exists in between, both in terms of sex and gender. To be blunt, this sounds like magical thinking. Which uncomfortably reminds me of Dr. McHugh, the article, and to a lesser extent, Jordan Peterson. I wonder if these are in some way related? At any rate, this massive complexity, combined with persons claiming a myriad of gender identities, is neatly predicted by the hypothesis I put forward. And if we step out of our comfort zones to objectively evaluate some of the less intuitive claims of gender theorists, then we may find something of value. After all, science is only as objective as its era permits.

So with that said, I think we ought to consider the transgender identity legitimate. Which undermines Peterson's central argument that the government is compelling him to explicitly behave in accordance with factually incorrect information (by way of speech). Which weakens the link between the ramifications of Bill C-16 and those nefarious practices of Orwellian states - in which the compelled speech is of falsehoods. However, there is another aspect of the argument that I think is far more significant, particularly from a libertarian perspective - the compelled speech charges. That is to say, whether Bill C-16's requirement that one address (or more specifically, not repeatedly misaddress) a person by one of the 3 recognized pronouns ("he", "she", or "they") of their choice constitutes compelled speech. And whether compelled speech - no matter how minute - is in principle wrong. I think this is the exact point of divergence between those who support and oppose Peterson on this issue. And probably another instance in which derivations from the axioms of the ethical systems associated with the 2 major political identities don't match up (see this for an interesting meta-ethical framework which informed my choice of words, and this on a potential cognitive science approach to the question). However, tackling such a broad, theoretical argument is out of the scope of this post, so I will only focus on the more direct consequences of the bill.

Firstly, I want to emphasize that any conversation about the consequences of Bill C-16 must also acknowledge the pros. Note that I am not talking about the intentions - but the things the bill does right. Namely, that it provides protections for trans persons from silly things like being fired. Or being threatened with genocide. With that said, let's look at the bulk of Peterson's legal argument against Bill C-16. Or rather, skip to the part where I respond. I mean, we've all heard the arguments before, so I won't rehash them here (although these two articles, combined with any one of Peterson's debates, summarize the key points). It basically boils down to whether or not you think it's acceptable that there's a chance... however tiny... that the law will be misused. I'd argue there's always a chance a law can be misapplied. And it doesn't always lead to just a financial penalty. So keep this in mind when considering your moral outrage towards this issue as opposed to others. Anyways, the standard response to this challenge is to point out that, unlike false executions, compelled speech can lead to catastrophic systemic damage, as laid out by The Gulag Archipelago. Peterson argues that the book can predict with very high accuracy whether or not a state will fall under a totalitarian regime. And apparently, the fact that "bloody Neo-Marxists" have "taken over" college campuses and are "now out to get you" falls in line with its predictive framework. I won't comment on this yet, as I haven't read the book, and I may do a post on this view in its entirety. But for now, I'll attempt to re-orient the framing of this argument.

Anyways, under the most charitable interpretation of his view (and there are far less pretty ones when it comes to the "Cultural Marxism" comment), the underlying form of the argument amounts to this:

"A group of people who disagree with me performed the sufficient legal machinations to encode their beliefs into law, and I think this will do more harm than good."

Now, I think any rational person agrees that what is "lawful" is not necessarily what is "morally right". But with that said - I can think of many instantiations of the above sentiment that would be more worthy targets of outrage. Substitute "a group of people" with "Big Oil" and you'll have plenty to be upset about. But here's the thing: we live in a society with laws. Laws we sometimes agree with, and sometimes don't. But regardless, we're expected to follow them because, as Peterson loves to remind us, it's the best thing we've got right now and "you couldn't do any better". Even if we have to follow a system that enables the planet to be slowly but surely killed off. So it can be argued that Peterson is applying a double standard in terms of the cultural expectation that we all abide by the law. Now, I suppose his response would be that this is a special case, since the evils of compelled speech would, "by his estimation", cause damage that is not only catastrophic and systemic but also irreversible. Like Big Brother, Canada's regime could be too powerful to take down. But here's the thing - my belief that the government is far too cozy with Big Oil, and that that relation leads to... many problems... is pretty much of the exact same form. It's a belief about the distal potential side effects of currently implemented laws (or rather, several laws). But just because I think my view can be all but formally proven, it doesn't follow that I should wholesale refuse to abide by those awful laws (say, by refusing to pay taxes if I somehow knew they would go directly towards a corrupt politician). The whole point of a democracy is to put decisions to a vote. So if Bill C-16 is passed into law, a far better alternative would be to coordinate with your followers to repeal it. After all, we don't want to set a precedent for unnecessarily breaking laws.

I'll be the first to admit that I haven't watched/read/attended all of Peterson's videos/papers/courses. And I admit to greatly enjoying much of what I've seen (some of his rhetoric comes across as "appeals to authority" and invocations of the "naturalistic fallacy" - but I can see his overriding point). I'm even considering factoring some of it into my own cognitive science theories regarding the role of environment over time in human intelligence. But I've seen nothing thus far in his works that warrants the same level of certainty as my belief in Big Oil killing the planet. Sorry, I just don't. And to be honest, I see less evidence for it thus far than for the existence of a legitimate transgender identity (regardless of how it is precisely physiologically and/or psychologically realized).

As a final note, I'm not sure whether there is a principled difference between being compelled to explicitly affirm falsehoods via speech and being compelled to implicitly affirm them via behavior, on the scale at which Bill C-16 operates. That last part is key. After all, there is a wealth of information about how what we don't say can matter just as much as what we do. And I'm not referencing that damn 55% thing. I'm talking about the role of contextual and psychological information when it comes to processing information, and how that almost certainly compounds over prolonged periods of time. So, considering the minute scale at which this occurs (adding 1 new pronoun to be employed for an admittedly small segment of society), it's certainly difficult to draw a realistic analogy to, say, the rise of the Soviet Union, and the subsequent silliness that emerged as a result of that lack of freedom (like their "science"). I think the scale is really off and the slope just isn't that slippery.

Now, before I get accused of being ignorant of history, just keep in mind it's not my camp making the grandiose argument here. So I don't see why I need to justify to every Peterson supporter why I don't buy this part. As far as I'm aware, there is no historical consensus on the exact role of restrictions on free speech in the rise of totalitarian regimes (let alone the Communist ones Peterson has in mind). It undoubtedly plays a role, especially when paired with propaganda. But so does the historical ripple effect of prior political relations, many of which inform said propaganda. And which can arguably be linked to our innate bigotry and cognitive biases that predispose us towards simple explanations for complex things. Assuming that source is reliable, there seem to be a number of characteristics that could be the root cause of totalitarianism. For example, there's usually someone at the top of the totalitarian pyramid. I'm not sure who would fill that role in this context - some starving trans activist colluding with Justin Trudeau? At least Big Oil fits the role.

While there may be controversies surrounding the weight we should assign to each of these factors (in our explanations of the rise of totalitarianism), it's disingenuous to act as though one account is the be-all and end-all of the discussion. That includes Peterson's apparent commitment to a "marketplace of ideas" approach, in which free speech itself is the mechanism of human progress. How would one even go about proving that? While these accounts may be sophisticated and elegant, I'm not going to accept assertions of their truth as an argument for why Bill C-16 should be entirely scrapped. Especially considering the view isn't exactly accepted to the same degree that, say, climate change is. If climate change can't get special treatment, I don't see why this should.

I hope it's clear what the purpose of this post was. I'm not arguing each of these lemmas to death, because they have been argued elsewhere and they are irrelevant to my key point: if you're going to privilege your political views over comparable views of others (in terms of magnitude of consequence), then there ought to be a really high likelihood that they're prima facie accurate. Otherwise, you're coming across as a bigot, guided solely by a deep-seated sense of disgust at the thought of a penis behind a short skirt. And then the key concern - that freedom of speech should never be undermined - gets lost. After all, wouldn't such an argument work significantly better if there were actual, tangible instances where this law really was misapplied?

Yes, this would mean you'd have to suffer first. And no, I'm not saying this is a good thing. But it is the way most long-lasting social progress happens. MLK was assassinated. Mandela was imprisoned. Oh, and lastly, I'm probably not going to write that post on freedom of speech anytime soon. I've got courses to do. Plus, it's not the sort of topic I want to take lightly, since I agree that it's necessary for the advancement of society. I just don't think it alone is sufficient. And if you've been following the "conversations" over creationism and climate change denial, you'd understand why.

EDIT 1: Added a paragraph on why this post didn't come out earlier, and why I'm not linking it to my Facebook page.
EDIT 2: Added a comment on how many scientists refuse to debate creationism.
EDIT 3: Made grammatical corrections and reworded for clarity. Also tinkered with links.