Large Language Models like ChatGPT and AI-powered code editors like Cursor are making software development much faster and easier. This is unquestionably a good thing – for developers, aspiring developers, and the companies that hire them.
But if it’s so quick and easy to produce software now, what happens to the business model? I spoke with one of the smartest people I know in the software business – Saleh Kayyali. He has years of experience as a designer of interfaces and interactions. And thanks to the new tools, he’s become a software developer too.
He prompts us to look beyond the technology, to the economics behind it. And there he sees a new world of disposable, just-in-time software, endless subscriptions and services, and companies that are not only monetizing software but creating whole new realities around it. This, he argues, changes the nature of trust, and both what we learn and how we learn it.
In this interview, we talk about the de-skilling of design, the uselessness of research, the limitations of language as an interface, and those new economic realities being created.
No more software
For a bookshop, it’s pretty noisy. We’re in the cafe upstairs at Foyles, the storied bookshop on Charing Cross Road in London. My trusty Zoom audio recorder would have worked perfectly, but for a human error – I neglected to put an SD card in it. Luckily, my journalism training kicked in and I had my phone running as a backup recorder.
Fittingly, this mirrors the content of our discussion – technological augmentation, the failure of certain practices, and the value of working across devices.
Our conversation continually shifts between the state of the art and the early days – the 1990s, the 80s, even back to mainframes in the 60s. That’s because Kayyali is also an expert in computer history – visit his YouTube channel.
“When you go to vintage software or computers from the late 90s and 2000s,” Kayyali tells me, “somehow you trust them more. Because they feel solid. It feels like, ‘Oh my goodness, they were able to do all that with two megabytes of memory!’”
“Look at software development now,” he continues. “You have to pay for this, pay for that, host it on that server. It explodes out of proportion.”
He mentions the book There is No Software, from 2015. “It's a collection of essays highlighting the fact that whatever you think you know about software, it's not that,” he says. “For example, we used to buy it in boxes, now it's all online.”
“But in fact,” he goes on, “software was originally a service – it was bundled with mainframes. The U.S. Department of Justice sued IBM back then, and forced it to break this bundling of software and hardware. And that initiated the huge software development movement in the late 80s, 90s and early 2000s.”
He adds that now, the DoJ is suing Google for anti-competitive practices.
“Now we are in this explosion of the service model going out of control. I think that because of AI and Large Language Models, and all the discussion around them, these forgotten debates about software-as-a-service are coming back, because the whole purpose of software itself is being questioned – how it's built, how it's marketed, why people need software at all – if you can build what they call just-in-time software through LLMs.”
If you are anywhere near the software industry, you know this well by now. I’ve experienced it myself, building websites and apps large and small with the help of the new AI-powered tools.
“You can ask an LLM that’s clever with coding – like Claude,” Kayyali explains, “‘Build me a bill-splitting software,’ and it will build it for you. It might be small, highly inefficient, not necessarily the best in the world. But since it does the job, it's just-in-time software. You forget about it and you move on. You don't need it anymore. It's kind of disposable.”
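To make “just-in-time” concrete: a bill splitter is the sort of thing an LLM can produce in a single reply. Here is a minimal sketch of my own – not anything Claude actually generated – of what that kind of disposable code tends to look like: small, unglamorous, good enough, then forgotten.

```python
# A throwaway bill splitter: the sort of "just-in-time" script an LLM
# might produce on request. Illustrative only.
from decimal import Decimal, ROUND_HALF_UP

def split_bill(total: str, names: list[str]) -> dict[str, Decimal]:
    """Split a bill evenly, giving any leftover pennies to the first people listed."""
    amount = Decimal(total)
    share = (amount / len(names)).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
    shares = {name: share for name in names}
    # Nudge the first few shares so the pennies add up exactly to the total.
    remainder = amount - share * len(names)
    penny = Decimal("0.01").copy_sign(remainder)
    for name in names:
        if remainder == 0:
            break
        shares[name] += penny
        remainder -= penny
    return shares

print(split_bill("100.00", ["Amira", "Ben", "Chloe"]))
# {'Amira': Decimal('33.34'), 'Ben': Decimal('33.33'), 'Chloe': Decimal('33.33')}
```

It does the job, and nothing more – which is exactly the point.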
$12 profit v. million-dollar investments
Kayyali also has first-hand experience with this.
“Right after ChatGPT was released, I was seeing some things about how it can generate software. I know the basics, but I was never able to sit and build something myself. So I thought, ‘Okay, let me try that’.
“So I built this thing called NodePad. It's basically just an infinite canvas for writing. I always thought, ‘What if, for writing, you have nodes, and each node is a document? And you can connect them?’ And I added a couple of AI features. Because AI is a stochastic parrot, it's really only good for brainstorming and generating things that you know you're going to dump 90 percent of.
“What’s interesting, though – this shows you the state of the industry that we're in. I used a platform called Replit. It's an online IDE [Integrated Development Environment] – it's all cloud-based. I was one of the first people back then to use a feature of Replit called Deployments, which lets you actually deploy the software and use it. So they interviewed me about my experience.
“And because they interviewed me, that thing spread. Computer World – a huge publication – included the software that I built in a list of the top 10 innovations in LLMs.
“They classified what I did as an LLM. But it wasn't! Wow. But that article was translated into like 7,000 languages, from Le Monde Diplomatique in France to a Korean website. It's funny – now almost all software supports nodes. Back then it was new.
“Some got it right, like O’Reilly Radar.
“And then it was, ‘Are you looking for investment?’ I wasn't really thinking about building anything. I even called it an experiment using OpenAI's API. So I didn't claim anything.
“That was my first experiment. Think of it as a research paper but done through software.”
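For the curious, the node-as-document idea is easy to picture in code. Here is a minimal sketch of the kind of data model Kayyali describes – my own illustration with hypothetical names, not NodePad's actual source: each node is a document placed on an infinite canvas, and connections are just links between node IDs.

```python
# A minimal "canvas of connected documents" model, as I understand Kayyali's
# description of NodePad. Names and structure are illustrative, not his code.
from dataclasses import dataclass, field
from itertools import count

_ids = count(1)

@dataclass
class Node:
    """One document placed somewhere on an infinite canvas."""
    text: str
    x: float = 0.0
    y: float = 0.0
    id: int = field(default_factory=lambda: next(_ids))

@dataclass
class Canvas:
    nodes: dict[int, Node] = field(default_factory=dict)
    links: set[tuple[int, int]] = field(default_factory=set)

    def add(self, text: str, x: float = 0.0, y: float = 0.0) -> Node:
        node = Node(text, x, y)
        self.nodes[node.id] = node
        return node

    def connect(self, a: Node, b: Node) -> None:
        """Link two documents; the connection carries no hierarchy."""
        self.links.add((a.id, b.id))

canvas = Canvas()
draft = canvas.add("Opening paragraph…")
aside = canvas.add("Counter-argument to explore", x=400, y=120)
canvas.connect(draft, aside)
```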
“Then I built another one recently,” he continues. “This one I did to sell. It's a Mac app called Presence Tracker. It's a stupid app, based on a watch that I saw once – the Durr. It vibrates every five minutes. That's it.” [As one of my own recurring themes is time, I mention that watch here.] “The app simply lets me know when five minutes passes, or maybe ten minutes, or 60 minutes. And then it plays a beep.”
“It's a timer,” I say.
“It's a timer. But the thing about it is that it’s repetitive. It's like the vibration on that watch. It doesn't tell you what to do. I sell it for 99 cents. I sold like 12. Friends and family.
“But the comments were quite interesting. ‘Finally, an app that doesn't force me to take breaks!’ ‘Where are the analytics?’ I purposely didn't want analytics. I hate productivity analytics.
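And the behaviour he describes really is almost nothing: wait a few minutes, make a sound, repeat. A minimal sketch under those assumptions – mine, not the app's source, and using a terminal bell rather than a proper beep – fits in a dozen lines.

```python
# A bare-bones interval reminder in the spirit of the app described above:
# wait N minutes, make a noise, repeat. No analytics, no nagging.
import time

def presence_beep(minutes: float = 5) -> None:
    while True:
        time.sleep(minutes * 60)
        print("\a", end="", flush=True)  # terminal bell – the whole "feature"

if __name__ == "__main__":
    presence_beep(5)
```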
“Now I'm starting to see videos about how to build an app in ten hours, using a combination of Claude and Cursor. But they're promoted by content creators just to get views and go viral.
“Now, from a business perspective, this abundance means that it's worthless. I mean, unless you can scam some venture capitalist to get some money, it's really a struggle just to build something, make it sticky, and somehow convince people to pay for it. I listen to quite a lot of VC podcasts, and they're starting to talk about the struggle of, ‘What are we raising money for?’
“The whole thing is about achieving 80 percent of what software does now, with 10 percent of the resources – Chamath Palihapitiya is pushing this. Because of AI, I need to build software that pays, using AI – but without hiring a team of ten people, like five or ten years ago. Now maybe it's five superhumans who use AI. This changes things.
“It goes back to the book There is No Software. It's from 2015, but what's quite fascinating is that the labour laws around the development of software are changing again. So how are you going to pick up speed as a developer right now without using these tools? You won't. Which means you won't be a super-developer, a super-achiever.
“We have two problems. One: chat interfaces go on too long, and then you forget what's going on. So what if we use nodes? Two: what comes out of LLMs is untrustworthy. Maybe we should label those outputs differently and call them generations for brainstorming – out-there, outlandish thoughts – and just think of them as that.”
Agents of instability
Okay. But what comes next?
“What is the interface that's going to be ubiquitous across multiple devices, that works with you all the time?” he asks. “You see a startup coming up with one idea, another startup coming up with another idea, and they don't talk to each other. There's no standard – it's not in their favor to build a standard, because if there were a standard, no one could monetize. And not just monetize, but completely capitalize – dominate the model, which is what everyone is struggling with, I feel.”
Things are moving quickly, but it’s still early days.
“One effort seems to be about LLMs,” he says. “You type what you want into a text box, and prompts will give us everything. But on top of all the complications and problems that come with the service model for software, this will add even more.”
But doesn’t this simply democratize a complex process?
“A lot of people say, ‘Alright, people with language skills are the future, because they can communicate with LLMs very well,’” he responds. “But I've seen a discussion by Anthropic researchers suggesting that's not necessarily the case. Because it's all predictability. Your language can be 20 percent there, and the LLM will compensate. So even if you're talking crap, it will make sense of it. For 90 percent of use cases, you'll be able to get a somewhat satisfying result out of it.
“What’s even more dangerous is that we're adding more layers of abstraction in our interactions with machines. Language will probably be just one of these layers.
“It's funny how fast these things move, but now the hot topic is LLM agents. To build a certain piece of software, you'll need to do five different steps, and you need to know them to a certain degree. Like, ‘I need the database designed in this style, I need the interface to use this model.’
“Everyone now is trying to build this agent that can do things on your behalf. You can think of agents for research, agents for software development, agents for whatever.”
I know what you’re thinking: Will these agents replace people?
“As a worker,” Kayyali explains, “someone might tell you, ‘Alright, it used to take you, I don't know, two hours to read this report. Cut that in half.’ There's no way you can say no. Because five other people will do it – even if it's 60 percent correct, they're getting results faster than me. You see it in education, and how students write research papers now.”
Sure, you could look for another job. But try going to LinkedIn. That job posted an hour ago? It’s already got more than 100 applicants. Maybe some of those applicants are jobseekers’ automated agents. And you can bet they’re using ChatGPT to write the applications too.
“But you have levels of it,” Kayyali counters. “You have the lazy people who will use it to generate and copy/paste. But some people can get very smart about it. If I have to write a paper, I can inject ten different resources and prompt it to give me the most important quotes and build a case based on them.
“And then, this synthetic output is leading into a synthetic discussion that happens based on that paper that I wrote. With a teacher who may or may not know. Maybe they’ll check if the students wrote it. But they might never be able to know for certain.
“And it's not about something being completely generated. It affects how we learn things. Because you're more likely to say, ‘Well, I have this topic, let me use an LLM to summarize 15 different papers instead of reading them. They're mostly jargon anyway!’”
“This synthetic output coming from LLMs – by default, you don't trust it. Everyone says, ‘Trust and verify’. That might work for things that you're really worried about, or responsible for – for example, if I'm doing research about something and I don't want to look like an idiot, I want to make sure that it's correct. But on a daily basis, I'm not going to trust and verify.
“So that changes us, that changes how we interact with computers. It’s kind of prophetic, Marshall McLuhan’s idea that you shape your tools, and your tools shape you. It means that your mindset becomes completely different.
“We have a device that's not stable. We have software that's not stable, and we have additional devices that we can do work with – you can build prototypes on your phone.
“Now agents will come. And when these agents come to do things on your behalf, you have even less agency. The patience required to dig deep into things won't even be there. And because agents are doing like 15 tasks – or 150 tasks – in a row, waiting will feel like forever to you. Before, I accepted the fact that to do one task, I needed five minutes.”
De-skilling and poor service
“Maybe it's a strange question,” I say, “but what do you want to be called, or known for? Is it interface design?”
“So far I label myself as an interaction designer,” Kayyali replies. “I keep it this way.”
“Now there's a trending title for people who design in code – they call them Design Engineers. You'll see some companies looking for Design Engineers, which sounds like a hybrid with the developer role. They want you to do what, back in the day, two people used to do.
“And a lot of people, especially if the job market is bad, they'll say, ‘I’m a designer, I'll do it.’ Right? ‘What does that mean? I don't know, but I'll do it.’”
“What's the difference between a UX architect and a UX engineer?” he continues. “These titles are all – if you look at it from an academic perspective – bullshit. But that doesn't help me. Because if I look online for jobs, I want to be able to meet the criteria to work as part of a team. But they want you to do them all.”
What, then, to tell aspiring designers these days?
“I’d give them some money to take a couple of courses,” Kayyali says. “If they grew up with Roblox, for example, they will be able to build nice stuff in Roblox. But they won't be able to build software.
“When I was growing up, for me, it was like, ‘Oh, this software called Photoshop, I don't know what it is, but I installed it and played with it. There's this software called Maya, you can make spheres and cubes.’ But the risk of being locked in an ecosystem is high.
“Most design work used to happen in Photoshop. You build a kind of muscle memory designing software in Photoshop until, I don't know, 2012, when a piece of software called Sketch became popular and everyone started to use that. Until 2018, when a piece of software called Figma became popular and everyone started using that.
“It’s continuous change. It's continuous adaptation. It's not a good thing, but it is what it is. It's not like being a physician. ‘I’m a doctor. That's what I've been doing for 40 years.’ I can't say I'm a designer, because I don't even know what that is anymore. A company comes along and says, ‘Our designers must be developers, must be front-end engineers. Because if you don't understand the code, the medium, you're just drawing squares and circles.’
“I think it's a valid argument. The software industry was built for years on specialization and separation. Now, when they start combining roles, it raises more and more challenges.
“So now I have to spend X amount of money to go into a bootcamp that teaches me how to use this software. I'll need two to three years to master it. What if I'm in my fifth, sixth, seventh, whatever year, and senior designers come and tell me, ‘If you want to be a good designer, you have to drop this software’?
“Like everyone who works in software, if you disappear for eight years and then come back, you're almost starting fresh. That's one of the things that happened when I became a manager in my previous job – you focus more on team leadership, and those specific software skills go. You'll be slow, or not know what's going on, or you'll do things the old-fashioned way, which makes you slower.
“So you either keep adapting and keep up with software, or you go to this higher level. And that's why now, every two, three years, I switch from one place to another, because it keeps me doing things.
“The problem,” he explains, “is that this is always framed within a certain economic perspective. When these ethical and moral discussions are framed economically, I think you'll never get an answer.”
This reminds me of something the sci-fi writer Ted Chiang said:
I tend to think that most fears about A.I. are best understood as fears about capitalism. And I think that this is actually true of most fears of technology, too. Most of our fears or anxieties about technology are best understood as fears or anxiety about how capitalism will use technology against us. And technology and capitalism have been so closely intertwined that it’s hard to distinguish the two.
Kayyali continues, “There was a poll, quite recent, about how in the US, youngsters especially are starting to look into vocational training more, because they can't find jobs anymore in the service industry – including technology.
“In the early 2000s, there was a job called a Social Media Editor. You already know that the trajectory of it is not really ideal, right? But some of these graduates have spent five, six years doing this job.
“We could classify that as a bullshit job. But, now, I personally believe that most of what we do nowadays are bullshit jobs. It's a service. Within a capitalist system. And the problem with service, compared to a product that I can use, is how useful it is.
“It might be useless, but it generates so much money. And that money, we don't know how much of it is real anyway. But we’re talking about a service that disappears almost instantly. If we accept life in a service-driven economy, we accept the fact that whatever service you do is not a service that is going to live forever.
And forget that PhD
“It’s the same with research now. I remember someone with a PhD in psychology, who had just finished – she wants to get into research, at a company, right? But what she will do there has nothing to do with rigorous academic research. Eventually it justifies a cause, it provides a data point that will just be forgotten. Such research is the first to go when there’s an economic problem.
“As a model, think of it as the value that you provide, compared to the value that you get out of it. For every supermodel, there's a million struggling models.
“Everyone is playing around with the interface right now. But I don't think it will mean anything, because we’re working within this commercial culture. Apple has Wall Street pressure, startups are supposed to be middle-of-the-road. They're all trying to make money out of it somehow.
“And now you see, for example, Microsoft Research and all the other research labs have absolutely no input. There's a very famous designer who works there, talking about the dangers of AI – and then you look at Bing, for example. Complete separation between what the researcher is doing and what the business is doing. And as long as that exists, we'll always be in a state of not knowing why, or where it's all going. Because you live in your reality, I live in my reality, and we will not benefit from anything other than writing tools.”
At this point I interject. If you’ve read some of my previous articles, you know that “reality” is a recurring interest.
“Some people go further and say that our consensus reality has collapsed,” I suggest. “And I think we're already at a point where you don't know what's real or what's not. What's true and what's false.
“I was watching a training video today and there was a woman doing a voiceover. I'm not sure if she was real. AI can be completely realistic now.”
“I don't think it will matter anymore,” Kayyali says. “In the sense that, the assumption will be that it's synthetic.
“Alan Kay talks about this quote that all startups use: ‘If you want to predict the future, you have to invent it.’ And that's just cynical in itself. From a tech perspective, it makes sense. But it's kind of what everything is revolving around right now, because you make up the whole reality. It becomes true. And our reality is increasingly commercial.
“I might sound like a Marxist, but the thing is, the current economic system is not capitalist, it's not Marxist. It's something new.
“Yanis Varoufakis wrote a book called Technofeudalism, basically saying that the companies that run the cloud are like feudal lords. They control the cloud and you pay them, like rent, for a service that you get from them – and they control it, and no one can compete with them.”
By now, Foyles cafe – a good old bricks-and-mortar service business – had quieted down. We were politely told to pack up – we closed the place down, talking for two hours or so. There’s so much more to say, so I hope to talk with him again soon. In the meantime, follow the money.