Conscience Is Not One-Sided

I have a lot of respect for Alex Reid. His posts often help me find some balance with AI and with technology in general. His recent post, “The Conscience of AI Refusal”, raises questions that matter deeply to those of us who spend our professional lives thinking about how, what, and why we teach. Reid draws on a recent resolution passed at the Conference on College Composition and Communication affirming the right of instructors and students to refuse generative AI in the writing classroom, and he builds a thoughtful philosophical argument grounding that refusal in conscience. It’s worth a read. I have genuine respect for that argument. But I think it stops one step short of where we actually need to go, and that gap has real consequences for higher education.

I am not here to dismiss conscientious refusal, defend corporate imposition of AI tools on unwilling faculty, or pretend that AI is somehow culturally neutral or free from market pressures (which Reid rightly identifies). On all of those points, he and I agree completely. No institution should demand that faculty use AI, or any other tool for that matter, if doing so violates their conscience. Besides, there is that thing called academic freedom (though some may argue elements of that are eroding, too).

What I want to challenge is this: the implicit assumption that conscience, in this debate, belongs primarily to one side. It absolutely does not, and should not be construed as such.

Conscience Has Two Edges

Reid argues that if we accept AI as culturally and historically embedded, it becomes “unconscionable” to view it simultaneously as an efficient cognitive proxy. I understand the logic, but I’m not sure it holds here. There is no shortage of examples to consider:

  • The printing press transformed power structures, enabled propaganda and religious divides (and still does), was (and still is) deeply political and (still is) commercially driven, but also genuinely democratized literacy. No one seriously argues we should have refused it on cultural grounds alone.
  • The calculator. Ah, one of my favorite examples. Didn’t math educators have the exact same debate many decades ago, suggesting that using it erodes deep numerical reasoning? It’s culturally situated (built by corporations, shaped by market forces) and it frees cognitive load for higher-order thinking. Both things are true. (If mathematicians think AI isn’t going to affect them, think again.)
  • The textbook is a commercial artifact produced by publishers with enormous economic and ideological interests, yet we don’t conclude that using textbooks is epistemically unconscionable.
  • The internet is probably the best example here. It is culturally saturated, corporately controlled, surveilled, politically weaponized, and used for all sorts of horrific things (social media, for starters). And yes, it’s also the medium through which experts like Reid and I publish our thoughts for the community.
  • Writing itself is one you’ll see over and over in this debate. I was reminded by a friend earlier this year how Plato famously argued that writing would weaken memory and corrupt genuine knowledge! He was not wrong about writing’s cultural embeddedness. He was wrong to conclude that refusal was therefore the conscientious response.

Acknowledging the cultural situatedness of a tool does not, by itself, settle the question of whether or how to use it. More importantly, integration, i.e. the thoughtful, critical, pedagogically intentional use of AI, is itself available as a conscientious act. I’ll say it: I am deliberately bringing AI into my classroom, and I do so because I believe, in good conscience, that I have a responsibility to prepare students for a world already being reshaped by these tools. To send them out without deep experience in using AI responsibly, and in critically interrogating and challenging it, is, to me, its own form of abdication. That is my conscience speaking. It is no less grounded in shared knowledge than the conscience of refusal, right?

The Symmetry We Must Name

The framework must be applied symmetrically. If refusal can be an act of conscience, then can’t engagement be as well? The moral seriousness of the act is not a property of the direction you choose. It is a property of the care, honesty, and accountability with which you choose it. To deny that symmetry is to do exactly what Reid cautions against in his closing lines: to become a “refuser of the refusers,” generating an infinite regress of competing moral condemnations. He sees this danger clearly when it threatens the refusers. We need to see it equally clearly when it threatens those who integrate.

The Real Threat to Higher Education

Higher education is already in crisis. Trust in institutions is eroding. The political and cultural fractures running through our society run straight through our campuses. Bucknell is not unique in this challenge; the impact will be the same no matter what type of institution you are. In this environment, nothing accelerates our decline faster than faculty turning on each other over questions of pedagogical conscience. That dynamic is not hypothetical: it is already playing out in our departments. If the dominant message coming out of disciplinary organizations is that one side of this debate has conscience and the other does not, we will drive a deeper wedge into an already fractured community at precisely the moment when students need us to model something better. What we need is not consensus. Genuine disagreement is healthy; it’s good for any relationship. What we need is healthy discourse grounded in rational argument rather than raw emotion, and mutual recognition: the shared acknowledgment that reasonable, thoughtful, ethically serious educators can look at the same situation and reach different conclusions.

What Good Conscience Actually Requires

Stengers, whom Reid invokes to powerful effect, urges us to “think with” our tradition rather than transcending it through withdrawal. For Stengers, this is not a passive move, but a demand to stay inside the friction and resist the temptation of a clean exit. The AI-refuser still inhabits a world saturated with AI-generated text and AI-assisted research, and likely uses a remarkable number of tools built on AI (e.g. autocorrect, spell check, predictive text, spam filters, Google Search, navigation with Google Maps, product recommendations with Amazon or Netflix, or fraud detection for questionable credit card purchases, to name a few). More importantly, the refuser teaches students who will enter AI-shaped workplaces. And likewise, the AI-adopter and AI-integrator will need to carry the weight of what these tools displace, distort, and commodify. Both positions are forms of dwelling in the difficulty. The question is only how we dwell there, and with what level of honesty about the costs. Neither side is clean. Neither is innocent. Both face real challenges moving ahead. What neither side can afford is to treat its own conscience as the universal standard against which the other is measured and found wanting. That is not conscience! That is a mirror being held up as a window: one’s own reflection mistaken for an objective view.

Here is what I am asking of all of us: Let’s build an academic culture where we can look across the aisle at a colleague who has made a different pedagogical choice and say, “I see what you are doing. I may not make that choice myself. But I recognize that you are doing it seriously, thoughtfully, and in good faith.” And then let’s get back to the shared work of teaching students well, in a world neither of us fully controls. To me, our students need a path that lets them experience both sides of the AI divide and develop the ability to follow their own conscience.

That is the conscience higher education needs right now. And it belongs to both sides.
