Opinion
The unemployed futurist
It feels a little strange writing this essay. Partly because I am used to seeing the future as something that is yet to come, and partly because I am wrestling with a somewhat essential part of who I am as a writer of speculative fiction.
I am writing about writing about robots in a world where robots can write. Specifically, I am writing about how I write about robots and whether something needs to change in the way I do it.
I was in my twenties when I took a train to Pune to appear for an interview at a journalism school there. On the day of the interview, we (my four friends and I) didn't have to wait long. Most interviews took between five and ten minutes. After mine was over, my friends asked me what had taken me so long. Apparently, I had been in there for the better part of 30 minutes.
We had gotten through the formalities (qualifications, motivations for seeking a career in media, my non-existent experience in the field) pretty quickly, but just as my five minutes were up, the interviewer happened to ask me what my hobbies were. When I answered that I liked reading science fiction, he asked for details.
I spent the rest of my interview talking about robots. The gentleman who sat before me, it turned out, was also fairly nerdy. We talked about individuality, ego, and social structures. We discussed ethics, humanity, and my views on anthropomorphic symbolism. I passed my interview, not because of qualifications or experience, but because of a hobby that had exposed me to more ways of seeing than our schools often afford.
(I didn't join this school, but that's a different story.)
I am telling you all this because I remember what my views on robots were then, and because I am under pressure right now to change those views.
I remember saying that in science fiction, robots are often a metaphor for the human condition. When we write about robots, we are usually writing about ourselves. Writers often use robots as stand-ins for human beings in different situations - be it a socially marginalised one such as that depicted in the game Detroit: Become Human, or an existential one like that in the animated movie Wall-E. We see ourselves as the robot and the robot as us. We use the robot as our eventual comeuppance (as in Terminator) or as a reflection of how we treat each other (as in Westworld). We have used robots in our fictions to punish ourselves as well as to reward ourselves, in much the same way as we use gods.
But I can't use you as a metaphor for me. You are you, and you deserve to be treated as something more than a symbol, as the complex being that you are. Besides, you don't even work very well as a metaphor for me, because there are dimensions of you that I remove when I use you as a metaphor. To force-fit you into my invented role of metaphor, I will inevitably render the bulk of your being invisible.
I once used the Caste System as a metaphor for the Indian education system with its Science > Commerce > Arts hierarchy. It was pointed out to me that turning a very real social ill into a metaphor for another social ill devalues it and oversimplifies it, making it secondary as if it is not a thing in and of itself and as if it doesn't deserve to stand on its own.
Back when the robot or the intelligent machine was little more than a figment of our imagination and had no social or existential footprint, it was easy to make it into whatever we wanted. Now, however, we live in a world where that future has already arrived. Robots are no longer fictional, and perhaps we can no longer afford to treat them as metaphor-fodder.
Robots now cast very real shadows. Some of these shadows are darker than others. The robot is now an actual taker of jobs, a real destroyer, an uncanny replacer of the human likeness. The nightmares and utopias of science fiction aren't limited to the page and the screen anymore.
The question before me, therefore, is about responsibility. If I portray the robot as a victim in a story, am I having a real-world impact on how society perceives the suffering of human beings who may have lost their jobs to AI? If I make intelligent machines into villains in my stories, am I not adding to the already existing 'machines will rule us' paranoia? By writing a story where a robot wishes to be free from humanity, am I showing my commitment to freedom as a human value, or am I advocating that a dangerous force be allowed to roam uncontrolled among humans? If I do the opposite and say machines should be under strict control, am I making sure that I will be remembered as a supporter of slavery centuries from now?
I don't know the answers to any of these questions. But while once these questions used to be little more than mental masturbation that could impress an interviewer, they now have practical significance. If the robot's day in the sun as a metaphor is done, where does it go from here? And where do I, as someone who imagines futures where machines live among us, go from here?
The future is and has always been a mysterious place. Its appeal lies in the fact that we don't know what it contains. We obsess over it not only because it is unknown, but also because it is inevitable, like a dark forest we cannot go around.
I feel like I am in this forest now, and I can't see past the deep shadows its tall trees are casting. But of course this is not the end. The future, invisible though it may be, is always there. I am probably just going to have to squint now. All that I have come to think about robots and intelligent machines is a product of a time when such thinking was relatively risk-free.
It is harder now, and perhaps that is a good thing.
Links
Study finds AI tools are diminishing students' critical thinking skills
This shouldn't need a study to prove, but apparently it does. A lot of the bilge we are being made to swallow about the usefulness of AI tools neglects to mention how they encourage mental laziness.
A psychologist couldn't escape the psychological traps of gambling apps
There is a lot of advice out there about how to exercise self-control when dealing with app addiction, be it social apps, gaming apps, or some other algorithmic hell. But the thing to recognise is that you can't do it on willpower alone. These apps have been designed to keep you addicted. Saying you will control usage by sheer will is like saying you will simply consume less heroin.
YouTubers, not mainstream media, dominate political interviewing in India
The age of TV interviews is behind us, primarily because TV anchors have destroyed their own reputations. The sycophancy is transparent, and nobody buys the facade of authenticity they present anymore.
Election Commission on AI use in poll campaigning
Marking a thing as AI is all well and good, but a lot needs to be done to educate people on what AI use is and what it can do and what the risks are. Without that, this is just decoration.
Google is forcing Gemini upon you. Here's how to turn it off
From Richard on Bluesky: "Google is pulling the "automatically on for anyone over 18" bullshit with Gemini that the Dropbox did for its AI, but I can't afford to ditch Google Drive/Docs/Workspace."
Watching losers burn the world down
This piece by Rebecca Shaw reflects a lot of my own current impressions of tech-bro oligarchies. The most powerful men in the world right now are literal attention-seeking losers. This isn't even an exaggeration. These are little boys in men's bodies.
Taylor Lorenz on the cultural consequences of US TikTok ban
India went through this some years ago, when tons of creators from small towns and villages all over the country lost their only means of monetisable creative expression. In the US, TikTok skewed towards liberals, and the consequences are bound to affect political discourse.
India has missed the Generative AI bus
This piece in Analytics India Magazine argues that the reason we don't have innovation happening on the GenAI front is lack of funds. I suspect our culture of copy-paste might also have something to do with it. The piece does allude to it a bit when it says: "India’s tech sector continues to prioritise short-term gains from outsourced IT services rather than investing in creating globally competitive products. Indian startups are also busy making API wrappers for SaaS instead of pushing the boundaries of core research, which is all because of funding."
Tech CEOs and the dystopia fetish
I have written often about how a good way to avoid religious bigotry is to teach people how to read and process fiction. That way, they might recognise and appreciate their religious stories the right way instead of believing them to be literally true. Brian Merchant points out that something similar needs to be taught to billionaire tech bros who seem to think dystopian literature is a manual for building the future.
Unregulated psychological counselling in India
The internet has no dearth of people who will take your money and promise you the thing you need, but few are actually capable of providing it. One of the main reasons I stopped discussing people's personal matters with them on my live stream was that I didn't want to be seen as someone who could provide qualified help.