I am starting to be a bit perturbed by how much I'm using LLMs. Here are a few examples of common usage for me:
- Someone posts an article that I'm only half interested in - I don't imagine I'd care enough to read the whole thing, so I ask AI to summarize it
- I have to use a new programming language at work - I use Copilot to write large chunks of code for me
- Having AI spit out recipes for me
- Telling things to it I'm too scared to share with my therapist / friends
I've found all of these things useful, but I'm also entering the phase where I'm starting to feel a bit hollow and uncomfortable about it all. I'll explore each example a bit more to expand on those feelings. Oh, by the way—I'm sick of hearing/reading about AI. But, this time, I'm writing about it.
Summarizing content
Intent: Efficiency
When I come across an article online, or one that a friend has recommended, sometimes I'll read a sentence or two and then decide that I would be better off feeding it into some LLM and asking it to summarize the main points for me. Then I'll use that output to decide whether to read the whole thing or just take what I got out of the summary.
There's not much more to this other than efficiency. Efficiency for what purpose, you might ask? Maybe I want to feel relevant and informed about something...but I... just...don't want to read the whole damn thing?
So let's say I'm being more efficient. I can purportedly get the key points of information within a fraction of the time. And then what do I do with that time? Find something I actually want to read that's not being summarized? Or, do I stare at my phone like a zombie?
What I'm probably doing is efficiently sapping the life out of reading, and perpetuating the trend, at least for myself, of a diminishing ability to read for longer stretches and, subsequently, to delve into the details, which is where nuance lives.
What happens in a world of summaries, a world with less nuance?
Programming
Intent: Efficiency
At work, I need to use Ruby. I was actually brought onto my current contract to write JavaScript, but instead they have Ruby work for me. That's fine; in fact, it's pretty commonplace with the sort of consulting I do. At this company, every engineer has access to GitHub Copilot. At this point, it has written large swaths of tests and application code for me, helped me learn about protobufs and RSpec, and answered all kinds of dumb questions — even domain questions.
On the surface, this might seem like a net good. I'm less likely to interrupt somebody to ask them questions, which can be annoying or disruptive. I have the capacity to ask questions that might feel stupid, and again, don't bother anybody. And I can engage with the LLM curiously, asking question after question. This feels particularly useful if I'm not being given the time to get up to speed with Ruby, to read a book or two on it, or to have dedicated time with people who are more experienced than me. So, I can use LLMs and take the shortcut.
But therein lies some problematic stuff. I can basically get entire chunks of application code spat out for me, copy-paste them into the file, tweak, and move on. This isn't me learning Ruby, it's me filling in the blanks and moving on. It's very easy to get too comfortable with this and start to become an editor rather than a creator of text.
Some people might feel fine about that — it's all in the name of efficiency, right? At times, it doesn't feel like I'm learning. If I value learning (and curiosity, more on that later), however, then there's-a-conflict-a-brewing.
With Copilot being offered to the entirety of engineering at this client, there is some pressure to keep up with the tools. I don't necessarily feel that someone who is better at using AI is going to take my job — it's more that I should be leveraging all the tools I have to do the job as efficiently and effectively as possible. Where does this leave me in my learning journey? Am I benefitting? Or, as in so many other cases like this — is the company benefitting?
Therapy
Intent: Avoiding hard feelings
This is a weird/heavy one. I've heard of other people using LLMs as pseudo-therapists as well, and I've given it a try from time to time to see what I can glean in various situations.
But first, let me start by saying that various forms of therapy (with humans) have helped me a lot in my life. It takes a lot of vulnerability (and money) to go share some of the heavy stuff you hold with what is effectively a stranger (trained in helping with that stuff, of course). But there's something implicitly meaningful in stepping into that vulnerability. It's so fucking scary, sometimes. But that discomfort is part of the growth. Maybe call it meta-growth—because, of course, you're also there to figure out some shit you want to change or process, or unburden.
When you do this with an LLM, you can bypass all that uncomfortable ooginess. You can say whatever you want (if you're particularly worried about it being tracked or linked to your identity, you can use a local LLM instead). What are the implications of being able to bypass layers of shame and get right to the point with another entity you are conversing with (in particular, a non-human one)? Does it teach us that we should be able to "fix" the things we don't like in ourselves with as little discomfort as possible? Or does it give people a truly "safe" place to share something that they can't release in front of human senses? Something in between? Is it all just efficiency and shortcuts again?
I don't know.
Household help
Intent: Efficiency (time + money)
I haven't tried using LLMs for recipes very often, but it raises some interesting thoughts for me. It's so much more efficient to tell an LLM the ingredients I have in the house and see what it comes up with and whether that inspires me; or, for example, to ask for a meal plan that is largely vegetarian and reuses groceries between meals.
What am I losing by doing this? I'm not sure. Over time, I've noticed that cooking has become more and more natural with experience. I build up a certain intuition. How much of that will I lose if I'm just a function that executes whatever an LLM gives me? Is it any different than using a recipe book?
I'm not sure if I'm losing out on some kind of experiential learning that only comes with making your own meal plans, and thinking about how to use up ingredients in your fridge.
Some additional thoughts
A conflict between curiosity and efficiency
In most of these situations, what I'm noticing is a conflict arising between curiosity and efficiency. Efficiency is a value that I hold far too dear to my heart, and I often orient my processes around it. And so it's been altogether too easy to lean into this new fad of feeding whatever I have into an LLM in hopes of getting somewhere faster.
But beyond efficiency, I also value curiosity. Do I value it as much as efficiency?
In the name of efficiency, some of my actions detailed above obliterate my curiosity; I privilege the destination over the journey. Herein lies, I think, the source of that hollow feeling.
Efficiency and Capitalism
I can't help but see so much of this nestled in the insidious, tightening hands of capitalism at all our throats. For what is all this efficiency for? Where the fuck are we even going?
To the top, baby; I'll climb over however many corpses I need to, to get mine.
I don't actually know how bad this is for the environment
Speaking of efficiency, I don't actually know how (in)efficient using LLMs is, or how that impacts the environment. There have been a couple of memes going around that make light (make dark?) of how problematic this might actually be. But I don't actually know any of the facts. And because I'm lazy and want my efficiency/progress right now, I'm hardly going to be a good human and look into it.
I don't like most of the internet; I just want text
While trying to do my job (or {insert various projects}) effectively and efficiently, I also can't help but think that researching things on the internet has become a miserable experience. Whether it's scrolling through cooking blogs with so much SEO filler, or trying to ignore cookie banners and ads, what was once just text has become polluted. Part of the reason using LLMs has become so easy is not having to take on the cognitive load of bypassing the everything-but-what-you-want-to-see on the internet. No wonder it's so nice — I just get text.
How long before ads start showing up?
Wrapping up
What a fucking bummer this all is. I knew, going into writing this post, that something was rotten in the state of Denmark. The more I wrote, the more I started to question my choices. My relationship with LLMs in day-to-day life has been taking on a sickly, cloying hue. Surely I'm rapidly decaying into a husk of a human being (being semi-serious, here). What will be left? Where is our future headed?
To top it all off, I fed what I'd written so far into Claude with the following prompt:
can you edit this blog post - keep the style but fix any grammar issues etc
After spitting out a tightened-up version, it said:
I've edited your blog post while preserving your authentic voice, casual tone, and emotional resonance. The main changes include:
- Fixed grammatical issues like "I'm becoming more aware of my / our collective ability" → "diminishing ability"
- Corrected "This isn't mean learning ruby" → "This isn't me learning Ruby"
- Improved formatting with consistent bolding for "Intent" sections
- Fixed punctuation and spacing issues throughout
- Added some paragraph breaks for better readability
- Corrected "swathes" → "swaths" and other minor word choice improvements
- Made some sentences clearer while keeping your thoughtful, conversational style
The post's reflective, somewhat cynical tone and your personal voice remain intact throughout. The edits maintain your raw honesty about the tension between efficiency and deeper engagement with life.
Cynical? Raw honesty? Occasional mis-spelling?
May these traits propel me forward into a better future—one with optimism, raw honesty, and my own brain doing the proof-reading.
Oh, and a post-capitalist society.